Social media automation tools, free: 146 files, not one app
Every listicle for this query hands you a menu of SaaS apps with post caps. S4L hands you a folder of 146 individual scripts. Each one is a free tool on its own. Together they are a whole social stack.
Count them. The file system is the spec.
On a fresh install, ~/social-autoposter contains three folders that matter for this guide: scripts/, skill/, and launchd/. Run the three commands below and you will get the exact numbers this page claims.
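The counts come straight from the filesystem. A quick way to verify them (the `S4L` variable is just a convenience so the snippet can point at a non-default install path):

```shell
S4L="${S4L:-$HOME/social-autoposter}"

# The three counts this page claims: 62 + 43 + 41 = 146.
for pattern in 'scripts/*.py' 'skill/*.sh' 'launchd/*.plist'; do
  printf '%-18s %s\n' "$pattern" "$(ls $S4L/$pattern 2>/dev/null | wc -l)"
done
```

On a machine without the install the loop simply prints zeros, which is itself the point: the claim is checkable with `ls`, not a pricing page.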
Not a single one of them is compiled, obfuscated, or hidden behind a login. Run 'file' on any of them and it returns "Python script" or "Bourne-Again shell script." You can open them in any editor. That is what should be true of any tool you call free.
A sampling of what is in the box
The nine tool cards below are a sampling, not the full inventory. Even that partial list is longer than most "free plans."
A real single-tool run, from scratch
The demo below runs one of the 62 Python tools standalone. No LLM, no cron, no dashboard. A shell, a Python file, stdout. That is the whole interface.
The only hard dependency is scripts/db.py, and even that is soft: find_threads.py only imports the dedup helpers at call time, so running it without a DATABASE_URL prints every candidate instead of filtering against the posts table.
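That soft-dependency pattern is worth copying. A minimal sketch of it, not the real find_threads.py: the module name `db` and the helper `already_posted` stand in for the actual dedup helpers.

```python
import os

def dedup_candidates(candidates):
    """Filter candidates against the posts table only when a DB is configured.

    Sketch of the call-time-import pattern described above: with no
    DATABASE_URL set, the db import never happens and every candidate
    is returned unfiltered.
    """
    if not os.environ.get("DATABASE_URL"):
        return candidates                  # no DB: emit every candidate
    from db import already_posted          # hypothetical helper, imported lazily
    return [c for c in candidates if not already_posted(c)]
```

The lazy import is the trick: the database layer is only a dependency for users who opted into it.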
What the three folders do, visually
The launchd plists do not contain any automation logic. They call a shell wrapper. The shell wrapper calls one or more Python scripts. That is the whole system.
launchd → skill/ → scripts/ → platforms
You can enter the system at any layer. Run a Python tool directly and skip launchd and the shell wrappers. Run a shell wrapper directly and skip launchd. Let launchd run the whole cycle. Nothing in the stack forces you up.
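To make the layering concrete, here is a sketch of entering at each of the three layers. The paths and flags are the ones documented on this page; the guards just let the snippet no-op on a machine without the install.

```shell
S4L="$HOME/social-autoposter"

# Layer 3: call one Python tool directly, skipping everything above it.
if [ -f "$S4L/scripts/update_stats.py" ]; then
  python3 "$S4L/scripts/update_stats.py" --quiet
fi

# Layer 2: call one shell wrapper directly, skipping launchd.
if [ -x "$S4L/skill/stats.sh" ]; then
  "$S4L/skill/stats.sh" --platform reddit --quiet
fi

# Layer 1: hand the whole cycle to launchd (macOS only):
# launchctl bootstrap gui/$UID ~/Library/LaunchAgents/com.m13v.social-stats-reddit.plist
```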
Nine free tools you have probably never seen called a tool
Each card below is a single file in ~/social-autoposter/scripts/. Each one is runnable on its own.
find_threads.py
Hits Reddit public JSON + Moltbook API. Returns candidate threads by subreddit or topic. Also emits Twitter/LinkedIn search URLs for browser-based discovery. --include-moltbook, --include-twitter, --include-linkedin flags.
top_performers.py
Feedback report from Neon. --platform, --project, --top N. The closed-loop tool Claude reads before drafting a new comment.
top_twitter_queries.py
Returns the top-performing historical search queries by posts produced. Style inspiration for the LLM, not literal keyword reuse.
scan_reddit_replies.py
Polls /message/inbox/.json with the reddit-agent profile cookies. Inserts new rows into replies, fires engage_reddit.py end-to-end.
linkedin_cooldown.py
Reads/writes /tmp/linkedin_cooldown.json. Exit code 0 = clear to post, 1 = still in cooldown. Auto-cleans stale state.
update_stats.py
Fetches Reddit + Moltbook engagement via public APIs. Updates upvotes, comments_count, status. No browser required.
scrape_reddit_views.py
Updates view counts from profile-page scrape. Incremental-scroll pattern because Reddit virtualizes the DOM.
pick_thread_target.py
Cooldown floor enforcer. DEFAULT_OWN_FLOOR_DAYS=1, DEFAULT_EXTERNAL_FLOOR_DAYS=3. Overridable per project in config.json.
engagement_styles.py
Shared style taxonomy (critic, helper, teacher, storyteller, etc.) with per-platform guidance. Single source of truth every pipeline imports from. Lives at ~/social-autoposter/scripts/engagement_styles.py.
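The exit-code contract linkedin_cooldown.py uses is easy to replicate for any platform. A self-contained sketch of the pattern; the demo path and the durations here are illustrative, not the real script's values.

```python
import json, os, time

STATE = "/tmp/linkedin_cooldown_demo.json"   # demo path, not the real one
COOLDOWN_S = 3600                            # illustrative cooldown window
STALE_S = 24 * 3600                          # state older than this is garbage

def check(now=None):
    """Return 0 when clear to post, 1 while still in cooldown."""
    now = now or time.time()
    try:
        with open(STATE) as f:
            last = json.load(f)["last_post"]
    except (FileNotFoundError, ValueError, KeyError):
        return 0                             # no/corrupt state: clear to post
    if now - last > STALE_S:                 # auto-clean stale state
        os.remove(STATE)
        return 0
    return 1 if now - last < COOLDOWN_S else 0

def mark(now=None):
    """Record a post, starting the cooldown clock."""
    with open(STATE, "w") as f:
        json.dump({"last_post": now or time.time()}, f)
```

Because the contract is just an exit code, any shell wrapper can gate a posting step on it, e.g. `python3 scripts/linkedin_cooldown.py check && post`.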
By the numbers
146 independently runnable files in the install payload
0 vendor accounts required to run the whole stack
Free tier vs. actually-free
Same word, different meaning. Here is what 'free social media automation tools' usually means versus what it means here.
| Feature | Typical 'free' SaaS | S4L toolbox |
|---|---|---|
| What 'tool' means | A web dashboard behind a login | A file on your laptop you can open in VS Code |
| Post cap on free tier | Buffer 10/channel, SocialOomph 3/hr, Metricool 50/mo, Publer 10/account | None (SKILL.md enforces 'do not add one') |
| Accounts per free plan | Typically 2-3 connected accounts | Unlimited (one Chromium userDataDir per account you log into) |
| Ability to inspect the logic | No, vendor-owned binary | Yes, every tool is a text file (.py, .sh, .plist) |
| Ability to modify behavior | Change settings in the UI and hope | Open the script, edit the constant, save, re-run |
| Data custody | Vendor SaaS database | Your Neon Postgres (bring your own DATABASE_URL) |
| Scheduler | Proprietary vendor queue | macOS launchd, 41 plists under ~/social-autoposter/launchd/ |
| Works if vendor shuts down | No | Yes, every script still runs |
How to cherry-pick tools instead of adopting a platform
You are not supposed to use all 146. The goal is to find the two or three that replace a paid SaaS tier for you, and ignore the rest until you need them.
The pick-and-run workflow
Read the folder
'ls ~/social-autoposter/scripts/*.py' and 'ls ~/social-autoposter/skill/*.sh'. The 62+43 file names are self-describing (find_threads, post_reddit, scan_reddit_replies, update_stats, top_performers).
Pick the one that matches a verb you want
Want to discover threads? find_threads.py. Want to know what has worked historically? top_performers.py. Want to never get rate-limited by LinkedIn? linkedin_cooldown.py.
Run it with --help
Every script uses argparse. 'python3 scripts/find_threads.py --help' prints its own flags. No doc site to search, the flags live in the file.
Schedule only the ones you want
'launchctl bootstrap gui/$UID ~/Library/LaunchAgents/com.m13v.social-stats-reddit.plist' loads that one schedule. Skip the others. launchd does not require all-or-nothing.
Fork the script when the defaults do not fit
Open scripts/pick_thread_target.py in your editor. Change DEFAULT_EXTERNAL_FLOOR_DAYS from 3 to 7 if you want a longer cooldown. Save. Next cron tick picks up the new floor.
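If you would rather script the fork than open an editor, the same edit is one sed substitution. The snippet below runs against a stand-in copy so it works anywhere; on a real install, point sed at scripts/pick_thread_target.py (and note that macOS sed spells in-place editing as 'sed -i \'\'').

```shell
f=$(mktemp)
# Stand-in for the constant as this page describes it in pick_thread_target.py.
printf 'DEFAULT_EXTERNAL_FLOOR_DAYS = 3\n' > "$f"

# Raise the external-thread cooldown floor from 3 days to 7.
sed 's/^DEFAULT_EXTERNAL_FLOOR_DAYS = 3$/DEFAULT_EXTERNAL_FLOOR_DAYS = 7/' "$f"
```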
The three-tool minimum
If you want to replace a free SaaS plan today and nothing else, the smallest useful subset is three files:
- scripts/find_threads.py discovers where to comment
- scripts/post_reddit.py (or moltbook_post.py) posts
- scripts/update_stats.py measures what happened
That is a closed loop. Run it with your own cron, skip launchd entirely, and you still have more capability than most free SaaS tiers.
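A sketch of what "your own cron" can look like for that loop. The schedule times and the --topic value are placeholders; the flags are the ones documented on this page.

```crontab
# crontab -e
# discover candidate threads each morning
0 9 * * *   python3 $HOME/social-autoposter/scripts/find_threads.py --topic "your topic"
# measure engagement every six hours
0 */6 * * * python3 $HOME/social-autoposter/scripts/update_stats.py --quiet
# run the posting step by hand until you trust it, then add it here too
```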
Want help cherry-picking the right three tools for your stack?
Book 20 minutes. Bring the platforms you post on and I will point you at the exact scripts that replace whatever you are paying for.
Book a call →

Frequently asked questions
Why describe S4L as '146 tools' instead of 'one tool'?
Because the install payload literally is 146 files that each do one job. Run 'ls ~/social-autoposter/scripts/*.py | wc -l' and you get 62 Python scripts. Run 'ls ~/social-autoposter/skill/*.sh | wc -l' and you get 43 shell wrappers. Run 'ls ~/social-autoposter/launchd/*.plist | wc -l' and you get 41 launchd plists. Each Python script has its own argparse CLI. Each shell wrapper is a self-contained bash entry point you can symlink to /usr/local/bin if you want. Each plist is a separate schedule. Nothing forces you to run all of them. You can cherry-pick.
Can I actually use just one script standalone?
Yes. scripts/find_threads.py takes nothing but --subreddits or --topic, prints a JSON list of candidate threads, and exits. scripts/top_twitter_queries.py takes --limit and --window-days, reads Neon, and returns the top-performing historical search queries. scripts/linkedin_cooldown.py accepts 'check' or 'set', reads/writes /tmp/linkedin_cooldown.json, and exits with code 0 or 1. scripts/top_performers.py takes --platform and --project flags and prints a factual top/bottom report. Each one imports only the sibling modules it actually needs (db, moltbook_tools, engagement_styles). They are not a framework; they are a folder of utilities.
Which tool in the box is the one most people miss?
scripts/top_performers.py. It is the closed-loop feedback tool: it queries Neon for your top and bottom performing posts by platform and project, then prints a plain-text report. The idea is that Claude (or you) reads the report before drafting the next comment so you stop repeating what already underperformed. No SaaS I have seen exposes this; they show you charts, not an opinionated digest. The report is deterministic, stdout-only, and grep-friendly.
How is 'free' different here from Buffer free or SocialOomph free?
Buffer free caps you at 10 scheduled posts per channel. SocialOomph free caps you at 3 posts per hour. Metricool free caps you at 50 posts per month. Publer free caps you at 10 per social account. Those caps exist because the vendor pays to run your automation on their servers; the free tier is a funnel to paid. S4L runs on your Mac. Your laptop is the server. The SKILL.md file at the repo root literally says 'There is NO posting rate limit. Do not add one, do not enforce one, do not invent one.' Free here means free software running on hardware you already own.
Is there any piece I do have to pay for?
Three optional dependencies, all with free tiers that cover small-account usage: (1) Neon Postgres for the posts table and replies table, free tier includes 0.5 GB which is plenty for thousands of rows, bring your own DATABASE_URL; (2) Anthropic Claude Code CLI, pay-per-token or via your existing Claude subscription, used only for the drafting step in the comment-posting workflow, no charge for the sensor/scraper scripts; (3) Resend for the daily_stats_email.py digest, free tier gives 3k emails/month, optional. Everything else (Playwright, Node, Python, launchd, Chromium) is free and already on a Mac or one npm install away.
What is the 'skill' folder for, versus 'scripts'?
scripts/ holds single-purpose Python utilities you call directly. skill/ holds 43 bash wrappers that chain those utilities together for the five canonical workflows: engage (post a comment), run-reddit-threads (post an original thread), stats (six-hour engagement scrape), audit (full browser audit), and engage-dm-replies (reply to DMs across platforms). Each wrapper is also a standalone shell entry point; skill/stats.sh takes --platform reddit|twitter|linkedin|moltbook and --quiet. The launchd plists in launchd/ schedule the shell wrappers. You can replace any of them with your own cron or GitHub Actions workflow without touching the Python.
Is this Mac-only? I saw launchd mentioned.
The scheduler half is Mac-only because launchd is Apple's service manager and LaunchAgents are its per-user units. The actual 62 Python scripts and the Playwright browser automation are cross-platform; they run fine on Linux. Port path: delete the launchd/ folder, write equivalent systemd .timer + .service units, and point them at the same skill/*.sh wrappers. The shell wrappers source skill/lib/platform.sh, which already shims gtimeout for Linux, where GNU coreutils ships the command as plain timeout. The database layer is Postgres, which is platform-agnostic.
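A sketch of that port: a user-level systemd timer driving the same stats wrapper every six hours. The unit names are illustrative; the stats.sh flags are the ones listed on this page.

```ini
# ~/.config/systemd/user/s4l-stats.service
[Unit]
Description=S4L six-hour engagement scrape

[Service]
Type=oneshot
ExecStart=%h/social-autoposter/skill/stats.sh --platform reddit --quiet

# ~/.config/systemd/user/s4l-stats.timer
[Unit]
Description=Schedule s4l-stats every six hours

[Timer]
OnCalendar=*-*-* 00/6:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with 'systemctl --user enable --now s4l-stats.timer'; no launchd, no plists, same Python underneath.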
How do I actually install 146 tools at once?
'npx social-autoposter init'. The installer clones the repo into ~/social-autoposter, creates a blank .env, installs Python deps via pip3, generates the 41 launchd plists (they reference $HOME, so your username is baked in at install time), seeds three persistent Chromium userDataDir directories at ~/.claude/browser-profiles/{twitter,reddit,linkedin}, and symlinks SKILL.md into ~/.claude/skills/social-autoposter/ so Claude Code can discover it. After that, 'launchctl bootstrap gui/$UID ~/Library/LaunchAgents/com.m13v.social-*.plist' loads every schedule. Or skip launchd and just run 'python3 ~/social-autoposter/scripts/<whatever>.py' whenever you want.
Do I need to run all of them? What is the minimum useful subset?
The three-tool minimum is: scripts/find_threads.py (discover), scripts/post_reddit.py or scripts/moltbook_post.py (post), scripts/update_stats.py (measure). That is a fully closed loop and nothing else is required. If you only want a scheduler, run scripts/update_stats.py --quiet from your own crontab every 6 hours and drop the rest. The 146 count is the ceiling, not the floor.
Where do I see the results without running a dashboard locally?
https://s4l.ai/stats/<your_handle>. The handle comes from config.json accounts.*.handle or accounts.*.username. For example, https://s4l.ai/stats/m13v_ (Twitter, without the @) and https://s4l.ai/stats/Deep_Ad1959 (Reddit). Each platform account gets its own URL. The public stats page reads the same posts table your local tools write to, so the numbers match scripts/top_performers.py --platform <x> line-for-line.