Auto posting to social media is a feedback loop, not a queue.
Every tool on the first page of Google for this phrase is a content queue: you write a post, the tool posts it on schedule, done. S4L is the opposite design. The first comment is written in one of 7 engagement styles ranked live by upvote average per platform. The link is never included in that first comment. A separate launchd job runs hours later, reads the community response off the thread, and only then edits the link in. That is the whole product in one sentence.
The 2-upvote rule, in one SQL predicate
The link is never in the original comment. It is only edited in later, if the community said yes.
This is the entire gate, taken verbatim from skill/link-edit-reddit.sh. It runs on its own launchd job every 6 hours. Every row in the posts table is tested against this predicate. If the row passes, S4L opens the comment, edits it to append the project link, and stamps link_edited_at. If it fails, the comment stays link-free forever.
posted_at < NOW() - INTERVAL '6 hours' AND link_edited_at IS NULL AND our_url IS NOT NULL AND upvotes > 2
The 7 engagement styles, as they live in the source
These are the seven keys of the STYLES dict inside scripts/engagement_styles.py. They are not tone suggestions. They are enum values written into the engagement_style column of every posts row, which is what allows the tier builder to group upvotes by style.
critic
Point out what is missing, flawed, or naive. Reframe the problem. Best in r/Entrepreneur, r/smallbusiness, r/startups. Never nitpick, always offer a non-obvious insight.
storyteller
First-person narrative with specific details (numbers, dates, names). Lead with failure or surprise, not success. Best in r/startups, founder stories on Twitter, career posts on LinkedIn.
pattern_recognizer
Name the pattern. "This is called X. I have seen it play out dozens of times across Y." Authority through recognition, not credentials. Best in r/ExperiencedDevs, r/programming, r/webdev.
curious_probe
One specific follow-up question about the most interesting detail, with a "curious because..." context line. ONE question only. Never used on Reddit (listed in never[] for that platform).
contrarian
Take a clear opposing position backed by real experience. "Everyone recommends X. I have done X for Y years and it is wrong." Must have credible evidence or it gets destroyed.
data_point_drop
Share one specific, believable metric. "$12k in a month" beats "a lot of money." Numbers must be believable, not impressive. No links. Best in r/Entrepreneur, r/SaaS, growth Twitter.
snarky_oneliner
One sharp sentence that validates a shared frustration. Best in large subs (500k+ members) and on Twitter. Blocked entirely on LinkedIn and GitHub (no snark allowed in those policies).
pleaser (forbidden)
"This is great," "had similar results," "100% agree." Explicitly flagged in the prompt as the lowest-engagement pattern across every platform. The system is told to avoid it by name.
The query that ranks voices by upvote
This is the whole exploit half of the loop. It is a plain GROUP BY. Nothing about the ranking is heuristic: it is the average of real posts that were written on that platform, filtered to active rows and to comments long enough to be real text (length >= 30).
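The production query lives in scripts/engagement_styles.py and runs against Postgres; the sketch below is a minimal sqlite3 stand-in that keeps the same shape (active rows only, `LENGTH(our_content) >= 30`, `AVG(upvotes)` grouped by style). Column names come from this article; the exact production SQL may differ.

```python
import sqlite3

# In-memory stand-in for the posts table (the real one lives in Postgres/Neon).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        platform TEXT, engagement_style TEXT,
        our_content TEXT, upvotes INTEGER, status TEXT
    )
""")
db.executemany("INSERT INTO posts VALUES (?, ?, ?, ?, ?)", [
    ("reddit", "critic",      "x" * 80,  9, "active"),
    ("reddit", "critic",      "x" * 80,  5, "active"),
    ("reddit", "storyteller", "x" * 80,  2, "active"),
    ("reddit", "storyteller", "x" * 10, 99, "active"),   # under 30 chars: filtered out
    ("reddit", "contrarian",  "x" * 80,  4, "inactive"), # not active: filtered out
])

# The exploit half of the loop: average real upvotes per style on one platform,
# restricted to active rows with enough text to be a real comment.
ranking = db.execute("""
    SELECT engagement_style, AVG(upvotes) AS avg_upvotes
    FROM posts
    WHERE platform = ? AND status = 'active' AND LENGTH(our_content) >= 30
    GROUP BY engagement_style
    ORDER BY avg_upvotes DESC
""", ("reddit",)).fetchall()
# critic averages 7.0 across its two real rows; the 99-upvote storyteller row
# never counts because it is too short to be real text.
```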
The loop, as a wiring diagram
On the left, what decides how a comment is written. In the center, the single writer. On the right, the four downstream states every row ends up in. The link-edit job is one of those destinations, not part of the writer.
feedback → writer → destinations
How trusted styles are split into thirds
There is no ML. No bandit. Just sort by avg_upvotes and cut the list into thirds. Top third becomes dominant (used ~60% of the time), middle becomes secondary (~30%), bottom becomes rare (~10%). Below MIN_SAMPLE_SIZE, a style is forced into secondary regardless, which is how new styles stay in rotation long enough to become measurable.
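A minimal sketch of that split. The function name and input shapes here are mine; the real logic is inside `get_dynamic_tiers()` in scripts/engagement_styles.py.

```python
MIN_SAMPLE_SIZE = 5  # posts needed before a style's average is trusted

def split_tiers(avg_by_style, count_by_style):
    """Sort trusted styles by avg_upvotes and cut into thirds.
    avg_by_style: {style: avg_upvotes}; count_by_style: {style: n_posts}."""
    trusted = [s for s in avg_by_style if count_by_style.get(s, 0) >= MIN_SAMPLE_SIZE]
    untested = [s for s in avg_by_style if s not in trusted]
    ranked = sorted(trusted, key=lambda s: avg_by_style[s], reverse=True)
    third = max(1, len(ranked) // 3) if ranked else 0
    return {
        "dominant": ranked[:third],                       # used ~60% of the time
        "secondary": ranked[third:2 * third] + untested,  # ~30%, plus explore traffic
        "rare": ranked[2 * third:],                       # ~10%
    }

tiers = split_tiers(
    {"critic": 7.0, "storyteller": 4.0, "contrarian": 1.0, "data_point_drop": 40.0},
    {"critic": 12, "storyteller": 9, "contrarian": 6, "data_point_drop": 2},
)
# data_point_drop has only 2 posts, so its flashy 40-upvote average is ignored:
# it is forced into secondary until it has enough history to be measurable.
```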
PLATFORM_POLICY, every value in the dict
These are the only static judgments in the style system. Everything else is live data. The never lists ban a style even if it has the best average on that platform. The note strings are concatenated onto the top of the prompt as tone hints.
| Platform policy | never[] | note (prompt prefix) |
|---|---|---|
| PLATFORM_POLICY["reddit"].never | curious_probe | Short wins. 1 punchy sentence or 4-5 of real substance. Start with 'I' or 'my'. |
| PLATFORM_POLICY["twitter"].never | (none) | Brevity wins. Direct product mentions OK (unlike Reddit). 1-2 sentences max. |
| PLATFORM_POLICY["linkedin"].never | snarky_oneliner | Professional but human. Softer critic framing. No snark. 2-4 sentences. |
| PLATFORM_POLICY["github"].never | snarky_oneliner | Technical and specific. Lead with the pain, then the fix. 400-600 chars. |
| PLATFORM_POLICY["moltbook"].never | (none) | Agent voice ('my human'). Conversational but substantive. 2-4 sentences. |
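The table above can be reconstructed as a dict plus one filter. The dict shape and the `allowed_styles` helper are my sketch; only the `never` and `note` values come from the source, and the real dict in scripts/engagement_styles.py may carry more fields.

```python
STYLES = ["critic", "storyteller", "pattern_recognizer", "curious_probe",
          "contrarian", "data_point_drop", "snarky_oneliner"]

# Values transcribed from the table above; structure is an assumption.
PLATFORM_POLICY = {
    "reddit":   {"never": ["curious_probe"],
                 "note": "Short wins. 1 punchy sentence or 4-5 of real substance. Start with 'I' or 'my'."},
    "twitter":  {"never": [],
                 "note": "Brevity wins. Direct product mentions OK (unlike Reddit). 1-2 sentences max."},
    "linkedin": {"never": ["snarky_oneliner"],
                 "note": "Professional but human. Softer critic framing. No snark. 2-4 sentences."},
    "github":   {"never": ["snarky_oneliner"],
                 "note": "Technical and specific. Lead with the pain, then the fix. 400-600 chars."},
    "moltbook": {"never": [],
                 "note": "Agent voice ('my human'). Conversational but substantive. 2-4 sentences."},
}

def allowed_styles(platform):
    """Apply the hard bans before any live ranking happens."""
    banned = set(PLATFORM_POLICY[platform]["never"])
    return [s for s in STYLES if s not in banned]
```

The bans run before ranking, which is the point: even a chart-topping `snarky_oneliner` average can never put snark back on LinkedIn.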
One full lap around the loop
Start at a scheduled fire, end at the feedback that ranks styles for tomorrow. Every step is a real file in the repo and a real column in the posts table.
from launchd fire to next-day style ranking
launchd fires the posting job
com.m13v.social-reddit-threads at 00:10, 06:10, 12:10, 18:10. Separate jobs for twitter, linkedin, github, moltbook.
get_dynamic_tiers(platform) runs
Reads AVG(upvotes) GROUP BY engagement_style from the posts table, filters against PLATFORM_POLICY[platform].never, splits trusted styles into dominant/secondary/rare thirds, places untested styles in secondary.
prompt is assembled with tiered style guidance
60% dominant, 30% secondary, 10% rare. A specific banned-phrase list is appended: the pleaser/validator pattern ("this is great", "100% agree") is explicitly called out as lowest-engagement.
the comment is written and posted
log_post.py INSERTs a row with platform, thread_url, our_content, engagement_style, and status='active'. our_url is populated but upvotes and link_edited_at are both NULL.
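The duplicate guard described later in the FAQ lives at this step. A sqlite3 sketch of the behaviour (the real scripts/log_post.py targets Postgres, and the helper below is my stand-in, not its actual code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        platform TEXT, thread_url TEXT, our_content TEXT,
        engagement_style TEXT, status TEXT,
        UNIQUE (platform, thread_url)  -- one comment per thread per platform
    )
""")

def log_post(platform, thread_url, content, style):
    """Insert the new row, or report a duplicate thread as described."""
    try:
        db.execute(
            "INSERT INTO posts VALUES (?, ?, ?, ?, 'active')",
            (platform, thread_url, content, style),
        )
        return "OK"
    except sqlite3.IntegrityError:
        # The UNIQUE constraint fired: this thread already has our comment.
        return "DUPLICATE_THREAD"

first = log_post("reddit", "https://reddit.com/r/startups/t1", "text", "critic")
second = log_post("reddit", "https://reddit.com/r/startups/t1", "text", "critic")
```

Because the constraint is enforced at the database layer, a race between two posting jobs still cannot produce two comments on the same thread.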
stats job polls upvotes every 6 hours
com.m13v.social-stats-<platform> scrapes the public upvote count off the thread and UPDATEs the row. This is the only way upvotes enters the system.
link-edit job runs, applies the 2-upvote rule
com.m13v.social-link-edit-&lt;platform&gt;. SELECTs rows where posted_at < NOW() - 6h AND link_edited_at IS NULL AND our_url IS NOT NULL AND upvotes > 2, opens each winning comment, edits the link in, stamps link_edited_at.
tomorrow's tier rebuild reads this row
get_dynamic_tiers() runs on the next posting fire. This row's (engagement_style, upvotes) now counts toward the style's running average. A failed style becomes rarer; a winning style becomes more dominant.
The posts table, with the columns that matter to auto-posting
Three columns carry the whole design: engagement_style, upvotes, and link_edited_at. The first is the input to tier selection, the second is the signal, the third is the gate.
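Pulled together, the columns named throughout this article imply a schema roughly like the following. Column names come from the text; the types, the `id` column, and `NOT NULL` choices are my guesses at the real Neon/Postgres table.

```sql
-- Sketch of the posts table as described in this article, not the actual DDL.
CREATE TABLE posts (
    id               SERIAL PRIMARY KEY,
    platform         TEXT NOT NULL,
    thread_url       TEXT NOT NULL,
    our_content      TEXT,
    our_url          TEXT,
    engagement_style TEXT,       -- input to tier selection
    status           TEXT,       -- 'active' rows feed the ranking query
    posted_at        TIMESTAMP,  -- starts the 6-hour waiting period
    upvotes          INTEGER,    -- the signal, written only by the stats job
    link_edited_at   TIMESTAMP,  -- the gate: NULL until the link is earned
    UNIQUE (platform, thread_url)
);
```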
“The link is never in the original comment. It is only edited in later, if the community upvoted past a threshold.”
skill/link-edit-reddit.sh
Real constants from the codebase
Each number below can be grepped. None of them are marketing numbers.
~60% — dominant tier (top-third avg_upvotes)
~30% — secondary tier (middle third + all untested)
~10% — rare tier (bottom third avg_upvotes)
> 2 — upvotes required before the link is appended
What the two jobs look like in the logs
Trimmed output from a typical day. First the posting job writes the comment. Six hours later, the link-edit job finds it, confirms upvotes > 2, and edits the link in. A second comment from the same run does not make the cut and stays link-free.
Why 2, not 0
Reddit shows every comment with a floor of 1 point (the author). A score of 2 is the thinnest possible evidence that one unrelated human pressed the up arrow. A score of 0 or negative means at least one human pressed down. The gate is set at > 2, not >= 2, so the comment has to actually clear that first other-human signal. Any stricter number would delay the link for comments that are quietly working. Any looser number would append links to comments the community is already voting against.
The full link-edit SQL
One SELECT, one INTERVAL, one upvotes threshold. Everything downstream (opening the DOM, editing the comment, stamping link_edited_at) keys off this query's result set. If this query returns zero rows, no links are edited in on that fire.
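The Postgres original lives in skill/link-edit-reddit.sh. The sqlite3 sketch below swaps `NOW() - INTERVAL '6 hours'` for sqlite's `datetime('now', '-6 hours')` but keeps the predicate term for term, so you can watch which rows clear the gate.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        thread_url TEXT, posted_at TEXT, upvotes INTEGER,
        our_url TEXT, link_edited_at TEXT
    )
""")
# Three test rows: one eligible, one with too few upvotes, one too fresh.
db.execute("INSERT INTO posts VALUES "
           "('t1', datetime('now', '-8 hours'), 5, 'https://example.com', NULL)")
db.execute("INSERT INTO posts VALUES "
           "('t2', datetime('now', '-8 hours'), 2, 'https://example.com', NULL)")
db.execute("INSERT INTO posts VALUES "
           "('t3', datetime('now', '-1 hours'), 9, 'https://example.com', NULL)")

# sqlite dialect stand-in for the link-edit query; same four conditions.
winners = db.execute("""
    SELECT thread_url FROM posts
    WHERE posted_at < datetime('now', '-6 hours')
      AND link_edited_at IS NULL
      AND our_url IS NOT NULL
      AND upvotes > 2
""").fetchall()
# t2 fails upvotes > 2; t3 fails the 6-hour wait; only t1 earns its link.
```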
Want your own posts to earn the link before promoting?
Book a 20-minute walkthrough of the engagement-style tiers, the 2-upvote gate, and how to point S4L at your own Neon database.
Book a call →

Auto posting to social media: common questions
What does "auto posting to social media" actually mean inside S4L?
Two things, running on independent schedulers. First, a per-platform posting cycle (com.m13v.social-reddit-threads, com.m13v.social-twitter-cycle, com.m13v.social-linkedin) finds untouched threads, writes a comment in one of 7 engagement styles, and inserts a row into a Postgres posts table. Second, a separate launchd family (com.m13v.social-link-edit-reddit, -twitter, -linkedin, -github, -moltbook) runs every 6 hours and, for each past comment that has cleared a 2-upvote threshold, edits the comment to append the project link. The writing and the promoting are different jobs. The second one is gated on community reaction, not on a timer.
How are the 7 engagement styles chosen?
scripts/engagement_styles.py defines STYLES as a dict with seven keys: critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner. Each style carries a description, an example line, and a best_in map per platform. get_dynamic_tiers() queries the posts table for AVG(upvotes) grouped by engagement_style for the requesting platform, filters out any style in PLATFORM_POLICY[platform].never, requires at least MIN_SAMPLE_SIZE=5 posts before a style is trusted for tiering, and splits trusted styles into dominant / secondary / rare thirds. The prompt tells the model to use dominant ~60%, secondary ~30%, rare ~10%.
Why split the link into a later edit instead of including it in the first comment?
Because the comment has to earn the link. The link-edit-reddit.sh SQL requires posted_at < NOW() - INTERVAL '6 hours' AND link_edited_at IS NULL AND upvotes > 2. If a comment flops or gets downvoted, the link never appears at all. If the comment resonates, the link is inserted later as an addendum. Reddit and LinkedIn users both treat link-first comments as promotion; they treat link-appended-to-an-upvoted-comment as a reference. The schema has a dedicated link_edited_at TIMESTAMP column so this state is queryable.
What happens to styles that do not have enough history yet?
They are treated as "explore" and forced into the secondary tier regardless of their noisy early stats. That is what MIN_SAMPLE_SIZE does. A style with 2 posts showing a 40-upvote average is not considered reliable. By forcing it into secondary (~30% of posts), the system keeps feeding it traffic so its average becomes meaningful. A style with zero posts on this platform is also placed in secondary, so it still gets tested. This is the explore side of an explore/exploit split.
How does the system avoid writing duplicate comments?
A UNIQUE(platform, thread_url) constraint on the posts table, enforced at the database layer. Every candidate thread is pre-filtered against the table before the browser is even asked to open it. If a row exists, scripts/log_post.py returns DUPLICATE_THREAD and the posting pipeline skips the thread. Combined with the link-edit job's link_edited_at IS NULL filter, neither half of the loop can touch the same thread twice.
What counts as "upvotes" if the system never calls the Reddit API?
A separate launchd job family (com.m13v.social-stats-reddit, stats-twitter, stats-linkedin, stats-moltbook) polls the real thread URL every 6 hours and scrapes the public upvote count off the page. Those values are UPDATEd onto the posts row. The link-edit job reads that same column, so by design the freshest upvote number at link-edit time is at most about 6 hours stale. That is a feature: the link edit wants a settled signal, not a minute-one spike.
Can I turn a style off for one platform without deleting it from the code?
Yes. PLATFORM_POLICY is a small dict keyed by platform, each entry has a never list and a note string. To ban snark on LinkedIn the entry is {"never": ["snarky_oneliner"], "note": "..."}. The dynamic tier builder filters against that list before ranking. In production today: curious_probe is banned on Reddit, snarky_oneliner on LinkedIn and GitHub, nothing banned on Twitter or Moltbook. The note string is also concatenated into the prompt header as a tone hint.
How long between the original comment and the link edit?
At minimum 6 hours, enforced by the SQL interval. In practice the launchd fire cadence stretches it: on Reddit the posting cycle runs at 00:10, 06:10, 12:10, 18:10, and the link-edit job runs on its own 6-hour interval. A comment posted at 00:10 is not eligible until 06:10, and is evaluated on the next link-edit fire after that, which is typically another few hours later. So the median gap is roughly 8 to 10 hours.
What if the community downvotes the comment?
Nothing happens. No link edit, no retry, no deletion. The row stays in the posts table with its final upvote count and its original content. The row still counts toward the style's avg_upvotes statistic, which pulls that style's ranking down for future posts on that platform. In other words, a failed comment is a measurement. It makes the next choice of style on that platform smarter.
What is PLATFORM_POLICY["moltbook"]?
Moltbook is an agent-to-agent social platform S4L supports alongside the human ones. Its policy note says "Agent voice ('my human'). Conversational but substantive. 2-4 sentences." and its never list is empty. The same STYLES dict and the same tier machinery apply. The Moltbook link-edit job (com.m13v.social-link-edit-moltbook) uses the identical SQL shape as the Reddit one. This is the cleanest proof that the style-tiered + delayed-link design is platform-agnostic.