Free social media automation tool

The free tool you actually want is a writer, not a scheduler.

Every page on the SERP for this query reviews queues. Queues are the easy half. The hard half is which style to write in on which platform, and S4L is the only free tool I know of where the choice is grounded in your own posts table. One Python file does it. 357 lines.

Matthew Diakonov
11 min read
4.9 from S4L operators
  • engagement_styles.py: 357 lines, 7 styles, 5 platforms
  • SQL avg_upvotes -> dominant/secondary/rare tiers
  • PLATFORM_POLICY ban list per platform
  • Pleaser/validator style is forbidden by name

The category name is misleading

I read every top-10 result for this query before writing this. They rank Buffer, Later, SocialOomph, Iconosquare, Gumloop, Hootsuite, Sprout, Postiz, Publer, Metricool, Zapier, Vista Social, SocialBee, EvergreenFeed. Every single one is a queue. You write a caption, paste it in, pick a time, click schedule. The vendor pushes it to an API. That is the entire automation surface.

The thing that actually separates a post that gets 4 upvotes from one that gets 400 is the writing. Which voice. Which length. Which opening. Which platform-aware register. None of those tools touch that, free tier or otherwise. The free tool you actually want is a writer model, not a queue.

Buffer (free), Later (free), SocialOomph (free), Iconosquare (free), Gumloop (free), Hootsuite (no free tier), Sprout Social (paid), Postiz (self-host), Publer (free), Metricool (free), Zapier (free), Vista Social (trial), SocialBee (trial), EvergreenFeed (free).

Every one of these is a queue. Not one of them ships a writer with named engagement styles, let alone reweights style choice from your own data.

scripts/engagement_styles.py: 357 lines of plain Python that turn your posts table into a tiered, platform-aware style picker.

The seven styles

The STYLES dict (line 15 of engagement_styles.py) defines exactly seven. Each carries a description, an example phrasing, a per-platform best-fit list, and a guardrail note. The drafter never sees abstract instructions; it sees the named style block, freshly tiered, every run.
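The article never reproduces the dict itself, so here is a minimal sketch of one entry assembled from the descriptions below; the field names ("description", "example", "best_for", "note") and the nesting are assumptions about the file's layout, not the actual source.

```python
# Illustrative sketch of one STYLES entry, reconstructed from this article's
# descriptions. The real file defines seven of these; the field names and
# nesting here are assumptions.
STYLES = {
    "critic": {
        "description": "Point out what is missing, flawed, or naive. "
                       "Reframe the problem.",
        "example": "The part that breaks down is...",
        "best_for": {
            "reddit": ["r/Entrepreneur", "r/smallbusiness", "r/startups"],
            "linkedin": ["strategy", "leadership", "operations"],
        },
        "note": "NEVER just nitpick; offer a non-obvious insight.",
    },
    # ...six more entries: storyteller, pattern_recognizer, curious_probe,
    # contrarian, data_point_drop, snarky_oneliner
}
```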

1. critic

Point out what is missing, flawed, or naive. Reframe the problem. Example phrasing: 'The part that breaks down is...'. Best in r/Entrepreneur, r/smallbusiness, r/startups; on LinkedIn under strategy/leadership/operations. Note in file: 'NEVER just nitpick; offer a non-obvious insight.'

2. storyteller

First-person narrative with specific numbers, dates, names. Lead with failure or surprise, never with success. 'we tracked this for six months and found...'. Best in r/startups, r/Meditation, r/vipassana. Note: 'NEVER pivot to a product pitch.'

3. pattern_recognizer

Name the phenomenon. Authority through pattern recognition, not credentials. 'This is called X / I have seen this play out dozens of times across Y.' Best in r/ExperiencedDevs, r/programming, r/webdev.

4. curious_probe

One follow-up question about the most interesting detail, with 'curious because...' context. Best in r/SaaS, niche subs, founder discussions. Banned on Reddit: PLATFORM_POLICY['reddit']['never'] lists it, so the ranker drops it before tiering. Note: 'ONE question only. Never multiple.'

5. contrarian

Take a clear opposing position backed by experience. 'Everyone recommends X. I have done X for Y years and it is wrong.' Best in r/Entrepreneur, r/ExperiencedDevs, hot takes on Twitter. Note: 'Must have credible evidence. Empty hot takes get destroyed.'

6. data_point_drop

Share one specific, believable metric. Let the number do the talking. Example: '$12k in a month (not a lot of money)'. Best in r/SaaS and growth/revenue topics. Note: 'No links. Numbers must be believable, not impressive.'

7. snarky_oneliner

Short, sharp, emotionally resonant. One sentence max. Validates shared frustration. LinkedIn-banned and GitHub-banned. Note: 'NEVER in small/serious subs like r/vipassana. NEVER on LinkedIn.'

The SQL the file actually runs

This is the query inside _fetch_style_stats (line 132). It is not paraphrased. It is the exact text your scheduler runs every time the drafter is about to write a comment.

scripts/engagement_styles.py (lines 132-159)
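The rendered code block did not survive on this page. The query below is the exact text quoted in the FAQ at the bottom of the article; the psycopg2 wrapper around it is a minimal sketch of the call site, not the file's actual _fetch_style_stats.

```python
# The SQL is the exact query quoted in the FAQ below. The surrounding
# function is a minimal sketch, not the file's actual code.
STYLE_STATS_SQL = """
SELECT engagement_style,
       COUNT(*) AS n,
       AVG(COALESCE(upvotes, 0))::float AS avg_up
FROM posts
WHERE status = 'active'
  AND engagement_style IS NOT NULL
  AND our_content IS NOT NULL
  AND LENGTH(our_content) >= 30
  AND upvotes IS NOT NULL
  AND platform = %s
GROUP BY engagement_style
"""

def fetch_style_stats(dsn: str, platform: str) -> dict:
    """Return {style: {'n': int, 'avg_up': float}} for one platform."""
    import psycopg2  # Postgres driver; imported lazily in this sketch
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(STYLE_STATS_SQL, (platform,))
        return {style: {"n": n, "avg_up": avg_up}
                for style, n, avg_up in cur.fetchall()}
```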

The filters matter. status='active' excludes posts that got removed or deleted. LENGTH(our_content) >= 30 drops one-word junk. upvotes IS NOT NULL excludes rows where the stats job has not run yet. The result is a clean per-style {n, avg_up} dict, ready to tier.

What happens between your post and the next one

Five participants, nine messages, one closed loop. The drafter never decides on its own which style to use; it sees the styles prompt regenerated from current numbers every time.

Per-comment style ranking pipeline

1. engage_reddit.py -> engagement_styles.py: get_styles_prompt('reddit', 'replying')
2. engagement_styles.py -> Neon Postgres: SELECT engagement_style, COUNT(*), AVG(upvotes) ... GROUP BY engagement_style
3. Neon Postgres -> engagement_styles.py: rows: 7 styles with N and avg_up
4. engagement_styles.py: split trusted into dominant/secondary/rare thirds
5. engagement_styles.py -> engage_reddit.py: multi-line styles prompt (PRIMARY/SECONDARY/RARE blocks)
6. engage_reddit.py -> Claude drafter: draft reply with styles prompt + thread context
7. Claude drafter -> engage_reddit.py: generated comment + chosen engagement_style
8. engage_reddit.py -> old.reddit.com: POST reply
9. engage_reddit.py -> Neon Postgres: INSERT INTO posts (engagement_style, our_content, ...)

The dominant / secondary / rare ranker

get_dynamic_tiers (line 162) takes the per-style {n, avg_up} dict and partitions it. The split is fixed: trusted styles (N >= MIN_SAMPLE_SIZE) get sorted by avg_up descending, then sliced into thirds. This is the spine of the whole file.

What the ranker does, in order

1. Read PLATFORM_POLICY.never for the platform. Excludes brand/tone-banned styles before any ranking: linkedin removes snarky_oneliner, github removes snarky_oneliner, reddit removes curious_probe.

2. Pull live stats from _fetch_style_stats(platform). A single SQL query against the posts table, filtered on status='active', LENGTH(our_content) >= 30, upvotes IS NOT NULL, and platform = %s.

3. Split styles into trusted vs. explore. trusted: N >= MIN_SAMPLE_SIZE (5). explore: everything below the threshold, including styles with zero samples. explore goes straight to 'secondary' so untested styles still get tried.

4. Sort trusted by avg_up DESC and slice into thirds. third = max(1, t // 3). dominant = top third, rare = bottom third, secondary = middle plus all explore styles. With t <= 2 trusted styles, all of them go to dominant and rare stays empty.

5. Cold-start fallback. If trusted is empty (no platform data yet), every non-banned style becomes 'secondary' and there is no dominant or rare tier at all: pure exploration mode.

6. Stitch tiers into the prompt. get_styles_prompt prepends '### PRIMARY styles (~60% of the time)', '### SECONDARY (~30%)', and '### RARE (~10%)' headers, then emits each style's name, description, example, best-fit list, and note.
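The six steps above can be sketched in a few lines. This is a reimplementation from the description, not the file's actual get_dynamic_tiers; the NEVER dict here is a hypothetical stand-in for PLATFORM_POLICY's ban lists.

```python
MIN_SAMPLE_SIZE = 5

# Hypothetical stand-in for PLATFORM_POLICY's 'never' lists, per the article.
NEVER = {"reddit": ["curious_probe"], "linkedin": ["snarky_oneliner"],
         "github": ["snarky_oneliner"], "twitter": [], "moltbook": []}

def dynamic_tiers(stats: dict, platform: str) -> dict:
    """stats: {style: {'n': int, 'avg_up': float}}.
    Sketch of the ranker described above, not the file's actual code."""
    banned = set(NEVER.get(platform, []))
    stats = {s: v for s, v in stats.items() if s not in banned}

    trusted = [s for s, v in stats.items() if v["n"] >= MIN_SAMPLE_SIZE]
    explore = [s for s in stats if s not in trusted]

    if not trusted:  # cold start: every non-banned style is 'secondary'
        return {"dominant": [], "secondary": sorted(stats), "rare": []}

    trusted.sort(key=lambda s: stats[s]["avg_up"], reverse=True)
    t = len(trusted)
    if t <= 2:       # too few trusted styles to slice: all dominant, no rare
        return {"dominant": trusted, "secondary": explore, "rare": []}

    third = max(1, t // 3)
    return {
        "dominant": trusted[:third],                    # top third
        "secondary": trusted[third:t - third] + explore,  # middle + explore
        "rare": trusted[t - third:],                    # bottom third
    }
```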

The data flow, one image

Every drafter call begins by stitching three sources into one prompt. The hub is engagement_styles.py. The destinations are the four pipelines that consume the styles prompt before they post.

Inputs to the styles prompt -> pipelines that consume it

  • Inputs: posts table, PLATFORM_POLICY, STYLES dict
  • Hub: engagement_styles.py
  • Consumers: engage_reddit.py, engage-twitter.sh, engage-linkedin.sh, post_reddit.py

What the prompt the drafter actually sees looks like

Below is real output from get_styles_prompt('reddit', 'replying') on a populated posts table. The PRIMARY block changes as your data changes; the SECONDARY block always carries the explore-tier styles; the last line is constant.

get_styles_prompt('reddit', 'replying') excerpt
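The captured excerpt was lost in extraction, so the block below is a mock-up of the prompt's shape based on the headers and per-style fields described in this article. The specific tier assignments are invented for illustration; only the section headers and the final line are quoted from the article.

```
### PRIMARY styles (~60% of the time)
- critic: Point out what is missing, flawed, or naive. Reframe the problem.
  Example: 'The part that breaks down is...'
  Best in: r/Entrepreneur, r/smallbusiness, r/startups
  Note: NEVER just nitpick; offer a non-obvious insight.

### SECONDARY (~30%)
- storyteller: ...
- data_point_drop: ...

### RARE (~10%)
- contrarian: ...

AVOID the pleaser/validator style. It consistently gets the lowest
engagement across all platforms.
```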

Note that curious_probe never appears in this Reddit prompt at all. It is in the per-platform never list, so the ranker drops it before tiering. On Twitter the same prompt would include it.

7 engagement styles · 357 lines in engagement_styles.py · 5 platforms covered · $0 monthly cost to run

The per-platform never list

Some style choices are bad ideas regardless of how they perform statistically. PLATFORM_POLICY at line 104 encodes those as fixed bans. They are excluded before the ranker even runs.

bans encoded directly in PLATFORM_POLICY

  • reddit: never use curious_probe (questions underperform vs. statements on Reddit).
  • twitter: never list is empty (anything goes within the platform note).
  • linkedin: never use snarky_oneliner (audience expectation is professional).
  • github: never use snarky_oneliner (technical context demands specificity).
  • moltbook: never list is empty (agent voice has its own register).
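A sketch of what that dict looks like, assuming the two-key-per-platform shape the FAQ describes ('never' plus 'note'). The 'note' strings here paraphrase the bullets above; the real file's wording may differ.

```python
# Sketch of PLATFORM_POLICY as described above. 'never' is a hard ban list
# applied before any performance ranking; 'note' is a tone hint stitched
# into the prompt. The note wording paraphrases this article, not the file.
PLATFORM_POLICY = {
    "reddit":   {"never": ["curious_probe"],
                 "note": "Questions underperform vs. statements."},
    "twitter":  {"never": [],
                 "note": "Anything goes within the platform note."},
    "linkedin": {"never": ["snarky_oneliner"],
                 "note": "Audience expectation is professional."},
    "github":   {"never": ["snarky_oneliner"],
                 "note": "Technical context demands specificity."},
    "moltbook": {"never": [],
                 "note": "Agent voice has its own register."},
}

def allowed_styles(all_styles: list, platform: str) -> list:
    """Drop hard-banned styles before any performance ranking runs."""
    banned = set(PLATFORM_POLICY[platform]["never"])
    return [s for s in all_styles if s not in banned]
```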

Free social media automation tool vs. a free SaaS scheduler

Both are free. Both will publish a post for you. One will write the post first, conditioned on what worked last week. One will not.

Feature | Buffer / Later / Hootsuite free tier | S4L (engagement_styles.py)
Writes the post for you | No (you paste a caption) | Drafter + engagement_styles.py prompt
Picks a writing style per post | No | Yes: 7 styles, ranked per platform
Reweights style choice from past performance | No (charts only; a human reads them) | SQL avg_upvotes -> dominant/secondary/rare
Per-platform ban list | No | PLATFORM_POLICY['linkedin']['never'] = ['snarky_oneliner']
Cold-start exploration policy | No | MIN_SAMPLE_SIZE=5; untested styles default to 'secondary'
Anti-pleaser/validator rule | No (LLM defaults: 'this is great', '100% agree') | Last line of every styles prompt forbids it by name
Bimodal Reddit length rule | No | <100 chars or 4-5 sentences; avoid the 2-3 sentence dead zone
Lines of code you can read | Closed source | scripts/engagement_styles.py = 357 lines
Monthly cost at real volume | $0 until cap, then $15-$99 | $0 + optional Claude tokens for drafter

The one rule the file repeats on every prompt

The very last line get_styles_prompt appends, regardless of platform, regardless of context, regardless of which tiers ended up populated:

scripts/engagement_styles.py (line 283):

'AVOID the pleaser/validator style. It consistently gets the lowest engagement across all platforms.'

That is the failure mode of every untuned LLM and the easiest reply for a bored human to type. The file forbids it on every run for the same reason a smoke detector beeps when toast burns: it is the most common silent failure, so it deserves the loudest warning.

Want a writer model that learns from your own posts?

Show me the platform you post on most and I will sketch what the SQL ranker would say for your account.

Book a call

Frequently asked questions

Why do you say a free scheduler is not actually a free social media automation tool?

Because scheduling is the cheapest part. Every free SaaS tier on the SERP (Buffer free, Later free, SocialOomph free, Iconosquare free, Gumloop free, Buffer's free-tools page) takes a caption you wrote, holds it in a queue, and pushes it to an API at a time you picked. The hard part of social automation is the writing: which style to use, on which platform, given how your past posts in that style actually performed. None of those tools touch that. S4L's scripts/engagement_styles.py does, and it is a 357-line plain Python file you can read in 10 minutes.

What are the seven engagement styles, exactly?

critic (point out what is missing or naive, reframe the problem), storyteller (first-person narrative with specific numbers and dates, lead with failure not success), pattern_recognizer (name the phenomenon, authority through pattern recognition not credentials), curious_probe (one specific follow-up question with 'curious because...' context, never multiple questions), contrarian (a clear opposing position backed by experience), data_point_drop (one believable metric, no links, numbers must be believable not impressive), snarky_oneliner (a short emotionally resonant observation that validates a shared frustration). Each style has a description, an example phrasing, a list of best-fit subreddits/topics per platform, and a 'note' guardrail. The 'recommendation' style is added on top in reply contexts only.

How does the tool decide which style to use for a given post?

scripts/engagement_styles.py runs `SELECT engagement_style, COUNT(*) AS n, AVG(COALESCE(upvotes,0))::float AS avg_up FROM posts WHERE status='active' AND engagement_style IS NOT NULL AND our_content IS NOT NULL AND LENGTH(our_content) >= 30 AND upvotes IS NOT NULL AND platform = %s GROUP BY engagement_style` against your Postgres. Styles with N >= MIN_SAMPLE_SIZE (currently 5) are sorted by avg_up descending and split into thirds: top third becomes 'dominant' (use ~60% of the time), middle is 'secondary' (~30%), bottom is 'rare' (~10%). Untested styles (N < 5) default to 'secondary' so they keep getting explored. Cold start (no data at all) puts every non-banned style into 'secondary'. Tiers, descriptions, examples, and platform-specific best-fit lists are all stitched into the drafter's system prompt before generation.

What stops the tool from using snark on LinkedIn or curious_probe on Reddit?

PLATFORM_POLICY in the same file. It is a small dict with two keys per platform: 'never' (a list of styles to exclude regardless of their measured performance) and 'note' (a per-platform tone hint stitched into the prompt). reddit['never'] = ['curious_probe']. linkedin['never'] = ['snarky_oneliner']. github['never'] = ['snarky_oneliner']. twitter['never'] = []. moltbook['never'] = []. The dynamic tier ranker excludes never-styles before splitting; even if data showed snark earning the highest LinkedIn upvotes, it still would not run.

What does 'free' actually mean for this tool, in dollars per month?

Zero. The 357-line engagement_styles.py runs locally in CPython 3. Postgres is Neon free tier (0.5 GB, fits tens of thousands of post rows). The scheduler is launchd, built into macOS. The browser is Chromium with your own logged-in profile at ~/.claude/browser-profiles/{reddit,twitter,linkedin}. The optional drafter is Anthropic Claude Code CLI, which is the only line item that can charge you anything (and only at the volume you choose). If you draft replies manually, even Claude is free; engagement_styles.py still runs, prints the ranked styles, and you pick one yourself.

Where can I see the actual file?

It lives at scripts/engagement_styles.py in the social-autoposter repo on the maintainer's machine. The full path the launchd jobs reference is /Users/matthewdi/social-autoposter/scripts/engagement_styles.py. The companion functions are at top: STYLES (line 15), VALID_STYLES (line 89), REPLY_STYLES (line 92), PLATFORM_POLICY (line 104), MIN_SAMPLE_SIZE (line 129), _fetch_style_stats (line 132), get_dynamic_tiers (line 162), get_styles_prompt (line 216), get_content_rules (line 287), get_anti_patterns (line 330). The repo is internal but the design is meant to be copied: read the file, port the schema, paste the SQL into your own pipeline.

What is the pleaser/validator style and why is it banned?

It is the auto-reply that says 'this is great', 'had similar results', '100% agree', 'that's smart'. The last line of get_styles_prompt() always appends: 'AVOID the pleaser/validator style. It consistently gets the lowest engagement across all platforms.' It is not in STYLES at all, not in any tier, not in any never list. It is the only behavior the file forbids by name in every prompt, on every platform, every run, because it is what every untuned LLM does by default and what every reader instantly recognizes as a bot.

What about the bimodal Reddit rule and the no-em-dash rule?

get_content_rules('reddit') returns the line: 'Go BIMODAL: either 1 punchy sentence (<100 chars, highest avg upvotes) or 4-5 sentences of real substance. AVOID the 2-3 sentence dead zone.' That is a measured artifact, not opinion: the 2-3 sentence band underperforms in the posts table on Reddit. The shared 'common' rule list at the bottom of get_content_rules opens with 'NO em dashes. Use commas, periods, or regular dashes (-).' That is also the rule applied to this article: not one em dash anywhere on this page.
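As one concrete reading of the bimodal rule, a hypothetical pre-post check could classify a draft before it ships. This is not part of engagement_styles.py, and the sentence-splitting heuristic is an assumption; only the '<100 chars' and '2-3 sentence dead zone' thresholds come from the article.

```python
import re

def bimodal_check(comment: str) -> str:
    """Classify a Reddit draft per the bimodal rule described above.
    A hypothetical checker, not part of engagement_styles.py."""
    # Crude sentence split on terminal punctuation; an assumption.
    sentences = [s for s in re.split(r"[.!?]+\s*", comment.strip()) if s]
    if len(comment) < 100 and len(sentences) == 1:
        return "punchy"        # 1 punchy sentence under 100 chars
    if len(sentences) >= 4:
        return "substance"     # 4-5 sentences of real substance
    return "dead_zone"         # the 2-3 sentence band to avoid
```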

What does this give me that even paid SaaS does not?

A writer the next post is conditioned on. Buffer Pro, Hootsuite Enterprise, and Sprout Premium all show you charts of past performance; a human reads them and remembers them imperfectly when writing the next caption. S4L closes the loop algorithmically: every drafter call begins with the styles prompt freshly recomputed from current avg_upvotes. There is no human-in-the-loop forgetting between learning and writing. I have not seen this pattern in any social SaaS, free or paid, and I have looked.