Free social media automation tools

The free tools everyone forgets are the ones that run after you post.

Every listicle for this query is a ranking of post schedulers with caps. Scheduling is one verb. Watching the inbox, replying to DMs, scraping views, and learning from what performed are the other four. S4L is free and does all five.

Matthew Diakonov
10 min read
4.9 from S4L operators
  • scan_reddit_replies.py: 350 lines, polls inbox every 5 min
  • scrape_reddit_views.py: scrapes views Reddit's API hides
  • top_performers.py: 476 lines, closed-loop feedback
  • Zero seat fee on any of them

What the SERP actually gives you

I searched the target keyword and read every top-10 result. Here is what every single one of them ships as “automation”: schedule a post, optionally cross-post, optionally apply an image template. That is the whole list. The word automate is doing a lot of work.

Same word, very different meaning

Log into dashboard. Paste caption. Pick time. Click schedule. Wait. The vendor's server posts it on a fixed cadence, capped at N per channel or per hour. You come back later, refresh the dashboard, read the engagement number, decide manually what to do with it.

  • Schedules posts
  • Capped (3-50 posts/month on free tier)
  • No inbox awareness
  • No DM handling
  • No view-count data (API does not expose it)
  • No feedback loop to the next draft

The four tools the listicles skip

Each card below is a real file in ~/social-autoposter, all free, all plain text, all independently runnable.

1. Inbound reply scanning

scripts/scan_reddit_replies.py polls old.reddit.com/message/inbox/.json with the reddit-agent profile cookies, paginates up to 10 pages of 100 items, inserts new replies into the Neon replies table, and immediately fires engage_reddit.py with --limit. Runs every 5 minutes via launchd. 350 lines of plain Python, no SaaS.

2. Cross-platform DM auto-reply

skill/engage-dm-replies.sh dispatches to reddit / twitter / linkedin variants. Each opens its persistent Chromium profile, classifies new DMs via classify_all_dms.py, drafts a reply through Claude Code, sends it, logs it. The scheduler half is launchd. Every DM is a browser action, not an API call.
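In Python terms, the per-platform loop is roughly the following. This is a sketch, not the real scripts: `classify_dm` stands in for classify_all_dms.py (the real classification rules differ), and `draft_reply` / `send` stand in for the Claude Code drafter and the browser action.

```python
def classify_dm(text):
    """Stand-in for classify_all_dms.py: bucket a DM so the drafter
    knows whether to reply at all. These rules are illustrative only."""
    lowered = text.lower()
    if any(w in lowered for w in ("unsubscribe", "stop messaging")):
        return "opt_out"
    if "?" in text:
        return "question"
    return "other"

def handle_new_dms(dms, draft_reply, send):
    """For each new DM: classify, draft a reply via the LLM callable,
    send it via the browser callable. Opt-outs are never answered."""
    sent = []
    for dm in dms:
        label = classify_dm(dm["text"])
        if label == "opt_out":
            continue  # respect the opt-out; log nothing, send nothing
        reply = draft_reply(dm["text"], label)
        send(dm["sender"], reply)
        sent.append((dm["sender"], label))
    return sent
```

Keeping `draft_reply` and `send` as injected callables is the same separation the shell scripts enforce: classification is deterministic code, drafting is the LLM, sending is always a browser action.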

3. View-count scraping

scripts/scrape_reddit_views.py does what Reddit's API refuses to: read view counts. Incremental 600px scrolls, regex for /^\d[\d,.]*[KkMm]?\s*views?$/, dedup by URL until article count plateaus. Updates the views column in Postgres. Same pattern available for LinkedIn (scrape_linkedin_stats.py).

4. Closed-loop performance feedback

scripts/top_performers.py queries Neon for top and bottom posts by platform and project, prints a deterministic plain-text report. The drafter reads that report before writing the next comment, so you stop repeating what already underperformed. --platform, --project, --top N flags.

Tool 1: the inbox scanner in detail

The core insight is that Reddit’s inbox is a JSON endpoint (/message/inbox/.json) that returns, for a logged-in user, the same payload the web UI consumes. If you have the session cookie, you have the feed. The script does exactly that.

scripts/scan_reddit_replies.py (excerpt)
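The real 350-line file is not reproduced here. A minimal sketch of the same poll-and-dedup pattern, assuming a cookie header exported from the browser profile and a local SQLite table standing in for the Neon replies table, might look like this:

```python
import json
import sqlite3
import urllib.request

PAGE_LIMIT = 100  # items per request, matching the constant named in the article
MAX_PAGES = 10    # stop paginating after 10 pages

def fetch_inbox_page(cookie_header, after=None):
    """GET old.reddit.com/message/inbox/.json as the logged-in user.
    The cookie header comes from the persistent Chromium profile."""
    url = f"https://old.reddit.com/message/inbox/.json?limit={PAGE_LIMIT}"
    if after:
        url += f"&after={after}"
    req = urllib.request.Request(url, headers={
        "Cookie": cookie_header,
        "User-Agent": "reddit-agent/1.0",  # hypothetical UA string
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def insert_new_replies(db, items):
    """Dedup by Reddit fullname; return how many replies were actually new."""
    db.execute("CREATE TABLE IF NOT EXISTS replies (name TEXT PRIMARY KEY, body TEXT)")
    new = 0
    for item in items:
        d = item["data"]
        cur = db.execute(
            "INSERT OR IGNORE INTO replies (name, body) VALUES (?, ?)",
            (d["name"], d.get("body", "")),
        )
        new += cur.rowcount  # 0 when the row already existed
    db.commit()
    return new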

The cookies come from the reddit-agent persistent Chromium profile at ~/.claude/browser-profiles/reddit, kept fresh by a sibling script that exports cookies after every manual login. The dedup is against the replies table in Neon Postgres. The whole thing runs on a launchd schedule (com.m13v.social-scan-reddit-replies.plist) every five minutes.

The full inbox-to-reply sequence

This is what happens end to end, every five minutes, on zero dollars of SaaS. Five participants, eight messages, one closed loop.

scan_reddit_replies → engage_reddit (one tick)

  1. scan_reddit_replies.py loads session cookies from the reddit-agent Chromium profile
  2. GET /message/inbox/.json against old.reddit.com, with the cookie header and user agent
  3. old.reddit.com returns 100 items per page (paginated with after=T3_xxx)
  4. New replies are inserted into the Neon Postgres replies table (insert-ignore on duplicates)
  5. scan_reddit_replies.py execs engage_reddit.py --limit
  6. engage_reddit.py POSTs the reply to the thread URL
  7. The row is closed out: UPDATE replies SET status=sent

Tool 3 in detail: scraping a number Reddit refuses to publish

Reddit’s public API does not return view counts for posts. They exist in the web UI. They are not in the JSON. Any free SaaS tool that claims to track Reddit performance is tracking upvotes and comments, not views. S4L does the browser work instead.

scrape_reddit_views.py in action

Why this scrape is fiddly

  • Reddit virtualizes the DOM, articles scrolled off get removed
  • So you must collect visible rows incrementally, not after scrolling
  • Pattern: collect, scroll 600px, wait 800-1500ms, collect, dedup by URL
  • Stop when article count stops growing (not scrollHeight)
  • Regex view text nodes: /^\d[\d,.]*[KkMm]?\s*views?$/
  • Parse '1.3K' to 1300, '2M' to 2000000
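The parsing half of that list pins down to a few lines. This sketch is mine, not an excerpt of scrape_reddit_views.py, but it uses the same regex quoted above:

```python
import re

# Same pattern the article quotes for view-count text nodes
VIEW_RE = re.compile(r"^(\d[\d,.]*)([KkMm]?)\s*views?$")

def parse_views(text):
    """Turn a scraped string like '1.3K views' into an int.
    Returns None when the text is not a view counter at all."""
    m = VIEW_RE.match(text.strip())
    if not m:
        return None
    number, suffix = m.groups()
    value = float(number.replace(",", ""))
    scale = {"k": 1_000, "m": 1_000_000}.get(suffix.lower(), 1)
    return round(value * scale)
```

The dedup-by-URL dict and the "stop when article count plateaus" loop wrap around this; the parse is the part worth getting exactly right, since '1.3K' and '2M' both have to land on clean integers.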

Tool 4 in detail: closing the loop

Buffer shows you a chart. A chart is a dashboard, and a human has to decide what to learn from it. top_performers.py is the same data aimed at the drafter instead. It queries Neon for your top and bottom posts by platform and by project, prints a deterministic text digest, and the drafter reads the digest before writing the next comment.

top-performers.sh
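The shell wrapper just invokes the Python. The digest logic itself can be sketched in a few lines (illustrative only, not the real 476-line file; in the real pipeline the rows come from Neon, not a list of dicts):

```python
def performance_digest(posts, top_n=3):
    """Deterministic plain-text report: best and worst posts by views.
    `posts` is a list of dicts with 'title' and 'views' keys."""
    # Sort by views descending, title as tiebreaker, so output is stable
    ranked = sorted(posts, key=lambda p: (-p["views"], p["title"]))
    lines = ["TOP PERFORMERS"]
    lines += [f"  {p['views']:>8}  {p['title']}" for p in ranked[:top_n]]
    lines.append("BOTTOM PERFORMERS")
    lines += [f"  {p['views']:>8}  {p['title']}" for p in ranked[-top_n:]]
    return "\n".join(lines)
```

The determinism is the point: the same rows always produce the same text, so the drafter conditions on a stable artifact instead of a rendered chart.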

The drafter (Claude Code, but you can use any LLM or your own brain) reads the report and biases toward what worked. That is what closes the loop. No SaaS tool pipes dashboard data into the writer automatically; they assume a human sits in between.

Where the pieces flow

Each of the four tools is an independent Python file. They all share the same data layer. This is why the loop closes.

Inputs → shared Postgres → outputs

  • Inputs: Reddit inbox, DMs (3 platforms), profile pages, your posts
  • Shared data layer: Neon Postgres
  • Outputs: top_performers.py feeding the drafter (Claude), launchd + skill/ schedulers, s4l.ai/stats/<handle>

By the numbers

  • 350 lines in scan_reddit_replies.py
  • 476 lines in top_performers.py
  • $0 seat fee
  • 5 min inbox poll interval set in launchd
  • 48 h BACKFILL_HOURS window for first-run scan
  • 100 PAGE_LIMIT items per inbox request

What “free” gets you across the category

Every tool below ranks for this keyword. Every one of them is a post scheduler.

  • Buffer free (10/channel cap)
  • Later free (5 posts/platform/month)
  • SocialOomph free (3 posts/hour cap)
  • Metricool free (50 posts/month cap)
  • Publer free (10 posts/account cap)
  • IFTTT free (2 custom applets)
  • EvergreenFeed free (scheduler only)
  • Hootsuite free (deprecated 2023)
  • Postiz free (self-host scheduler)
  • Zapier free (100 tasks/month cap)
  • Gumloop free (schedule + image)

Schedule vs. automate

Same category name on Google, very different surface area.

Feature | Typical 'free SaaS automation' | S4L (free)
Schedule posts | Yes (capped) | Yes (uncapped, launchd-driven)
Read inbox replies automatically | No | scan_reddit_replies.py every 5 min via launchd
Auto-draft and send reply to those inbox replies | No | engage_reddit.py fires inline, via Claude Code drafter
DM auto-reply across multiple platforms | No | engage-dm-replies-{reddit,twitter,linkedin}.sh
Scrape view counts (not exposed by Reddit API) | No | scrape_reddit_views.py (incremental DOM scroll)
Closed-loop performance feedback to the writer | Dashboard charts (read by humans) | top_performers.py (read by the drafter agent)
Inspect and modify the automation logic | No (vendor-owned closed code) | Every file is plain .py or .sh text
Monthly cost at real usage volume | $0 until cap, then $15-$99/mo | $0 + optional Claude tokens for drafting

If you only try one of the four

Try scan_reddit_replies.py --no-engage. It does not post anything; it just reports what it finds in your inbox. Once you see it work, drop the flag and let it draft. That is the smallest possible proof that social automation can mean more than scheduling.

All four scripts live in ~/social-autoposter/scripts/. You can read each file before you run it. That is what free should have meant all along.

Want the four non-scheduler tools running on your laptop in 30 minutes?

Book 20 minutes. I will walk you through installing scan_reddit_replies.py, the DM engage set, the view scraper, and top_performers.py, live.

Book a call

Frequently asked questions

Why do you say 'free social media automation tools' is a misleading category name?

Because every page that ranks for it (Buffer's free tools page, Later, IFTTT explore, SocialOomph, EvergreenFeed, Hootsuite, Postiz, Zapier, Gumloop) lists post schedulers. Post scheduling is one verb. Social automation as a real job also includes: noticing when someone replies to you, drafting and sending a reply back, handling DMs, tracking views (not just likes), and learning what historically performed so the next post is better. The SERP pretends those verbs are not part of social automation. They are, and S4L has them all, free.

What exactly does scan_reddit_replies.py do, in plain English?

It is a 350-line Python script that opens https://old.reddit.com/message/inbox/.json as your logged-in Reddit account (cookies come from the reddit-agent Chromium profile), paginates up to 10 pages of 100 items, finds new replies that are not yet in your Postgres replies table, inserts them, and immediately runs engage_reddit.py --limit to draft and post responses. Constants inside the file: PAGE_LIMIT=100, MAX_PAGES=10, BACKFILL_HOURS=48, JITTER_MAX_SECS=60. Nothing in Buffer or Later watches an inbox. They cannot; they are scheduling UIs, not agents.

How do you scrape Reddit view counts if Reddit does not expose them via API?

The first line of scripts/scrape_reddit_views.py's docstring says it: 'Reddit doesn't expose view counts via API.' The script opens your profile page with the reddit-agent browser profile, then scrolls 600px at a time, collects visible <article> elements with their view-count text nodes (regex /^\d[\d,.]*[KkMm]?\s*views?$/), deduplicates by URL into a Python dict, and keeps going until article count stops growing. You do this because Reddit virtualizes the DOM (off-screen articles get removed). The scrape writes JSON, and the script updates the views column in Neon Postgres. No SaaS tool I know of shows real Reddit view counts, paid or free.

What is the DM auto-reply piece and why is it not in any free SaaS tier?

skill/engage-dm-replies.sh dispatches to skill/engage-dm-replies-reddit.sh, skill/engage-dm-replies-twitter.sh, and skill/engage-dm-replies-linkedin.sh, each of which drives a persistent Chromium profile (~/.claude/browser-profiles/{reddit,twitter,linkedin}) to read new DMs, classify them with scripts/classify_all_dms.py, draft replies, and send them. It is not in SaaS free tiers because it is not in SaaS paid tiers either: vendors do not want to be responsible for automated private messages for legal/platform TOS reasons. The workaround S4L takes is that the 'vendor' is you, running your own browser on your Mac.

Why does top_performers.py matter for drafting the next post?

Because generating captions from scratch every time means you repeat patterns that already underperformed. scripts/top_performers.py (476 lines) queries Neon for your top and bottom posts by platform and project, prints a deterministic plain-text report (--platform, --project, --top N flags), and the drafting step (invoked by Claude Code) reads that report before writing the next comment. It closes the loop between publish and learn. Buffer shows you charts. Charts are not a closed loop, they are a dashboard, and a human has to decide what to learn from them. Here, the agent reads the text output and conditions on it directly.

What is still free SaaS good for?

Buffer free is still a decent first scheduler if you post three times a week to two accounts and never want to touch a terminal. SocialOomph free gives you an unlimited queue (at 3/hour), which is more generous than most. IFTTT free handles trivial cross-posting (new Instagram post -> tweet). None of them are wrong; they are just a small slice of what social automation can be. If you only need 'schedule a post,' they are fine. If you need 'respond to what happens after you post,' you need S4L or you need to pay enterprise tier money, and even then I have not seen view scraping + inbox replies + DM replies + closed-loop drafting in one product.

What does it actually cost to run the free version of S4L for a month?

Zero, for almost everyone. The scripts run on your laptop. launchd is built into macOS. Python 3, Node, Chromium are free. Neon Postgres free tier gives 0.5 GB which holds tens of thousands of post and reply rows. The only paid component is optional: Anthropic Claude Code CLI tokens for the drafting step (typically a few dollars/month for a personal volume), and Resend if you want daily_stats_email.py digests (free tier is 3k emails/month, way more than one person needs). If you draft manually instead of via Claude, even the Claude cost goes away.

Does this work cross-platform?

The scripts work on any OS with Python, Node, and Chromium. The launchd scheduler half is Mac-only because launchd is Apple's LaunchAgent daemon. On Linux, write systemd .timer + .service units that call the same skill/*.sh wrappers; the wrappers source skill/lib/platform.sh which already has a gtimeout shim. On Windows, use WSL2 + systemd or Task Scheduler. The browser profiles are cross-platform (Chromium on any OS). The core automation logic, the scan/engage/scrape/feedback scripts, do not care what OS they run on.
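On Linux, the timer/service pair could look roughly like this (hypothetical unit and wrapper names; adapt the ExecStart path to wherever your skill/ wrappers live):

```ini
# ~/.config/systemd/user/social-scan.service
[Unit]
Description=Scan Reddit inbox replies

[Service]
Type=oneshot
ExecStart=%h/social-autoposter/skill/scan-reddit-replies.sh

# ~/.config/systemd/user/social-scan.timer
[Unit]
Description=Run the inbox scan every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with systemctl --user enable --now social-scan.timer; the five-minute cadence mirrors the launchd plist on macOS.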

Is this safe from a platform TOS perspective?

It sits closer to 'a power-user browsing at superhuman pace' than to 'bot spam.' Every action runs inside a real Chromium with your real login cookies; there is no API impersonation. linkedin_cooldown.py enforces cooldowns so you do not trigger rate limiters. engagement_styles.py pushes the writer toward specific, genuine comments instead of generic ones. That said: each platform has its own rules, and automating DMs in particular (the engage-dm-replies-*.sh set) is closer to the edge than scanning your own inbox. Use the subset you are comfortable with; that is a feature of the script-per-tool architecture.

How do I try the four non-scheduler tools without installing the whole thing?

Clone ~/social-autoposter from GitHub, set DATABASE_URL to a free Neon instance, log in once through playwright in the reddit-agent / twitter-agent / linkedin-agent profiles, then run the four scripts directly: python3 scripts/scan_reddit_replies.py --no-engage (read-only, reports what it found), python3 scripts/scrape_reddit_views.py --from-json /tmp/reddit_views.json, python3 scripts/classify_all_dms.py, python3 scripts/top_performers.py --platform reddit --top 10. No launchd, no SaaS account, no credit card.