The social media automation tool that waits 6 hours before it earns the link back.
S4L is a social media automation tool. The part nobody else ships is the delay. The first comment never carries a URL. Six hours later, a launchd job queries the posts table for rows where upvotes crossed a floor, then generates a one-off SEO landing page scoped to that one thread, and only then edits the comment to include it. If the page generator fails, the comment stays linkless. A bare homepage URL is explicitly banned.
the uncopyable bit
The whole behavior fits in one SQL predicate and one shell rule.
Eligibility is posted_at < NOW() - INTERVAL '6 hours' AND link_edited_at IS NULL AND upvotes > 2. Qualifying rows trigger seo/generate_page.py --trigger reddit, and the page it produces is the only thing allowed to land in the edit. Homepage fallback is a hard no.
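The same predicate can be sketched as a pure-Python check for illustration. This is a hypothetical mirror of the SQL, not the shipped code; the dict keys simply follow the column names the page quotes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: the eligibility predicate from the SQL filter,
# expressed as a pure-Python check. Keys mirror the posts table columns.
def is_eligible(row, now=None):
    now = now or datetime.now(timezone.utc)
    return (
        row["posted_at"] < now - timedelta(hours=6)   # 6-hour floor
        and row["link_edited_at"] is None             # never edited before
        and row["upvotes"] > 2                        # community vote passed
    )
```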
The four phases of the pipeline
Every step writes durable state. The comment, the row, the page, and the edit each leave a trail a SELECT can recover later.
From draft to link, with a six-hour gap in the middle
1. Draft + post (linkless): the agent picks an engagement style, writes 2-4 lines, posts via browser MCP. No URL, no CTA.
2. Log to posts table: INSERT writes platform, thread_url, our_url, our_content, engagement_style, project_name. link_edited_at is NULL on insert.
3. launchd fires (4x/day): com.m13v.social-link-edit-reddit runs at 02:35, 08:35, 14:35, 20:35 and takes a 2700-second platform lock so a prior run can't collide.
4. SQL filters for earned comments: 6h+ age, no prior edit, upvotes > 2. Everything else is ignored.
5. generate_page.py runs per row: picks a keyword, derives a slug, builds an SEO page in /src/app/(content)/t/, commits, pushes, verifies the live URL.
6. Edit the comment: navigate to the old.reddit.com permalink, click edit, append one casual sentence + the generated URL, save.
7. UPDATE posts: link_edited_at = NOW(), link_edit_content = the exact appended sentence. The row is permanently excluded from the next eligibility query.
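The flow above can be sketched as one loop. This is a hypothetical skeleton, not the shipped script: the three callables stand in for the real page generator, the browser edit step, and the DB write.

```python
# Hypothetical sketch of one link-edit pass over already-eligible rows.
# generate_page returns a URL or None; edit_comment returns the appended
# sentence; mark_edited persists link_edited_at / link_edit_content.
def run_pass(rows, generate_page, edit_comment, mark_edited):
    results = []
    for row in rows:
        page_url = generate_page(row)          # per-thread SEO page
        if page_url is None:                   # generator failed:
            results.append((row["id"], None))  # leave row untouched, retry next cron
            continue
        sentence = edit_comment(row, page_url) # one casual sentence + URL
        mark_edited(row["id"], sentence)       # row now invisible to the filter
        results.append((row["id"], page_url))
    return results
```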
The one SELECT that decides who gets a link
This is the whole gate. Every other piece of the automation runs downstream of these five WHERE clauses. Adding a clause to the filter is the closest the tool gets to a policy change.
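The exact query isn't reproduced here, so the following is a plausible reconstruction assembled from the conditions this page names. Column and table names beyond the ones quoted in the text are assumptions.

```python
# Reconstructed eligibility query (assumption: exact column list may differ).
ELIGIBILITY_SQL = """
SELECT id, platform, thread_url, our_content, project_name, upvotes
FROM posts
WHERE status = 'active'
  AND posted_at < NOW() - INTERVAL '6 hours'
  AND link_edited_at IS NULL
  AND our_url IS NOT NULL
  AND upvotes > 2;
"""
```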
The decision the agent makes on each eligible row
Three gates before a single character is edited, then one write to record the outcome. Fail any gate, skip the row and move on.
1. Match a project: scan config.json projects[] for the best topic fit against thread_title + our_content. If project_name is set but clearly wrong, overwrite it back to the DB.
2. Generate the page: pick a 3-6 word keyword, derive a slug, run generate_page.py --trigger reddit. On failure, abort this row and retry next cron.
3. Edit the comment: append one casual sentence and the returned page_url. Save the edit. Verify the comment actually updated.
4. Write the row: UPDATE posts SET link_edited_at, link_edit_content WHERE id. This row is now invisible to the next eligibility SELECT.
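The gates can be sketched as a single decision function. This is a hypothetical illustration: the callables and the exact SKIPPED reason strings are assumptions, though the page elsewhere says skips are logged as SKIPPED: REASON.

```python
# Hypothetical per-row gate sequence. Each gate failure yields a
# SKIPPED: REASON string; success yields the URL to append.
def decide(row, match_project, generate_page, apply_edit):
    project = match_project(row)            # gate 1: topic match
    if project is None:
        return "SKIPPED: NO_PROJECT_MATCH"
    page_url = generate_page(row, project)  # gate 2: per-thread page
    if page_url is None:
        return "SKIPPED: PAGE_GENERATION_FAILED"
    if not apply_edit(row, page_url):       # gate 3: edit + verify
        return "SKIPPED: EDIT_NOT_VERIFIED"
    return page_url                         # step 4 writes the row
```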
why 6 hours, and why 4 fires per day
The plist wakes at 02:35, 08:35, 14:35, 20:35 local time. That spacing gives every comment at least one eligibility check within the first 12 hours of its life, and the 6-hour SQL floor guarantees no comment is touched before the vote signal stabilizes. The cadence is coarse by design. A faster loop would pull links in before the community had time to decide.
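A plist with that schedule would look roughly like the fragment below. The label is the one named on this page; the script path is an assumption extrapolated from the ~/social-autoposter/skill/link-edit-*.sh pattern mentioned later, so treat it as illustrative.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.m13v.social-link-edit-reddit</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <!-- assumed script name, following the link-edit-*.sh pattern -->
    <string>/Users/me/social-autoposter/skill/link-edit-reddit.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <array>
    <dict><key>Hour</key><integer>2</integer><key>Minute</key><integer>35</integer></dict>
    <dict><key>Hour</key><integer>8</integer><key>Minute</key><integer>35</integer></dict>
    <dict><key>Hour</key><integer>14</integer><key>Minute</key><integer>35</integer></dict>
    <dict><key>Hour</key><integer>20</integer><key>Minute</key><integer>35</integer></dict>
  </array>
</dict>
</plist>
```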
What the shell job actually does
Four phases: query, delegate, generate, write. The shell itself is thin. The interesting work sits in the prompt handed to the Claude agent and in the landing-page generator it calls.
The hard rule against homepage fallback
The rule text lives inside the prompt, which means it travels with every invocation. No config flag can disable it. If the page builder fails, the row stays eligible and the agent walks away empty-handed.
A comment is eligible for a link-edit only if every one of these is true
- status is 'active' (not removed or deleted)
- platform is reddit, linkedin, github, or moltbook
- posted_at is at least 6 hours in the past
- link_edited_at is still NULL
- our_url is set (we actually posted it)
- upvotes is greater than 2 (community vote passed)
- a project in config.json topic-matches the thread
- generate_page.py returned success with a real page_url
Guardrails the edit agent cannot override
Six rules, embedded directly in the prompt. The agent has no model-side flag to disable them. They are policy-as-prompt, which means every single invocation carries the same hard limits.
Never suggest calls or demos
The prompt literally says: NEVER suggest, offer, or agree to calls, meetings, demos, or video chats. The edit bot cannot schedule a human's calendar and it will not pretend to.
Never promise unshared resources
Only links from config.json projects (plus a landing page just deployed) are allowed. No 'I will send you docs', no 'I will follow up with a template'.
Never offer to DM
The agent works in-thread, in public. Private promises can't be audited by the posts table, so they're forbidden.
Never make time-bound promises
No 'I will check back next week'. The next scheduled pass happens on the plist's calendar, not on a pinky-promise.
Bare homepage URLs are banned
If the per-thread page generator fails, the row is left untouched and retried next cron. The comment stays linkless rather than carry a generic URL.
Every edit is logged
The posts table records every success: link_edited_at timestamps the moment, link_edit_content stores the exact string appended. Skipped posts log SKIPPED: REASON instead.
Where the linkless comments are posted
The posting pipeline draws from a per-project list of subreddits in config.json. Each target has its own floor-days filter to prevent same-community spam. A sample of the subreddits S4L has posted into is below.
The pipeline by the numbers
Every value below is a constant in the shell script, the plist, or the schema. None of them are marketing.
- 2700: seconds on the per-platform lock lease
- max characters in an auto-derived SEO slug
- 6: words max in the keyword phrase the agent picks
- 0: bare homepage URLs allowed as fallback
One link-edit pass, as it shows up in the log
Abbreviated output from one real Reddit pass. Two rows eligible, one edited, one skipped because the page generator returned failure.
Typical scheduler vs S4L's link-edit loop
A SaaS scheduler's job ends when the post lands. S4L treats the post as phase one of a longer pipeline whose real payoff is a landing page that did not exist until the community upvoted.
| Feature | Typical social scheduler | S4L |
|---|---|---|
| When is the link added | In the original post, every time | 6+ hours after posting, only if upvotes > 2 |
| What the link points to | Static homepage or a UTM-tagged route | A per-thread SEO landing page generated on demand |
| If the target cannot be built | Falls back to the product homepage | Row stays eligible, no edit, retry on the next 6h cron |
| Who picks the project to mention | Whatever the scheduler was loaded with | Topic match against projects[] in config.json; bad tags are overwritten back to the DB |
| Audit trail | Scheduler log, no per-comment state | link_edited_at + link_edit_content columns on posts, every edit or skip reasoned |
| Cadence | Continuous scheduler or on-demand queue | launchd StartCalendarInterval, exactly 4 times per day per platform |
| Guardrails against overreach | Enforced in product rules, often after the fact | 4 hard NEVER rules injected into the prompt itself (no calls, no DMs, no unshared resources, no time promises) |
Run this pipeline against your own handle
Point S4L at your projects list. It finds threads, posts linkless comments in your voice, and generates a per-thread landing page in your Next.js repo the moment a comment earns the right to a link. Every decision lands in one audit table.
See pricing →

Social media automation tool: common questions
What does social media automation mean in the S4L context?
S4L is a social media automation tool that automates three things: finding a thread worth commenting on, drafting the comment under one of 7 named engagement styles, and a retroactive link-edit pass that appends a URL only to comments the community has already upvoted. The first two run on a comment cadence; the third runs on its own launchd schedule, four times per day per platform.
Why wait 6 hours before editing in a link?
Two reasons. First, a 6-hour floor gives the Reddit vote signal enough samples to distinguish a comment the thread liked from a comment that dropped quietly. The SQL filter is AND posted_at < NOW() - INTERVAL '6 hours' AND upvotes > 2. Second, it pushes link promotion away from the 'new comment' spike where moderators are most alert. The comment has already proven it belongs.
What is in the posts table that makes this auditable?
The schema has two columns that only exist for this flow: link_edited_at (timestamptz, NULL until an edit pass touches the row) and link_edit_content (text, stores the exact sentence appended, or SKIPPED: REASON when the agent decided not to edit). Combined with posted_at and upvotes at the time of edit, every decision is reconstructable from one SELECT.
Where does the link actually point?
To a page that did not exist until a few seconds before the edit. The link-edit prompt forces the agent to pick a short keyword phrase (3-6 words) and a URL slug, then run python3 ~/social-autoposter/seo/generate_page.py with --trigger reddit. That script builds an SEO page in the same Next.js repo you are reading now, commits it, pushes to origin, verifies the live URL, and only then returns success. Its stdout JSON contains page_url, which is the URL that lands in the comment edit.
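A caller around that script might look like the sketch below. This is a hypothetical wrapper, not the shipped code: the command is injected so the flow can be shown without the real generator, and the real invocation would resemble python3 ~/social-autoposter/seo/generate_page.py --trigger reddit with keyword and slug arguments (argument names assumed).

```python
import json
import subprocess

# Hypothetical wrapper: run the page generator, treat a nonzero exit as
# failure, and pull page_url out of the stdout JSON on success.
def run_generator(cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        return None                    # failure: caller leaves the row untouched
    payload = json.loads(proc.stdout)  # stdout JSON carries page_url
    return payload.get("page_url")
```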
What happens if the page generator fails?
The comment is not edited. link_edited_at stays NULL. On the next launchd fire (six hours later), the row meets the eligibility filter again and the agent gets another shot. There is no bare-URL fallback; the shell prompt says verbatim that a custom landing page per thread is a hard requirement. A homepage link is considered worse than no link.
Does this run on every platform, or just Reddit?
Four platforms have their own link-edit pipeline: Reddit (com.m13v.social-link-edit-reddit), LinkedIn, GitHub, Moltbook. Each has its own shell script under ~/social-autoposter/skill/link-edit-*.sh, its own platform lock (2700-second lease), and its own browser MCP config. A LinkedIn hang no longer blocks a Reddit edit, and vice versa. Only platforms with an API surface that supports comment edits are in the loop.
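One simple way to implement a 2700-second lease is a lock file whose age decides whether the lease is live. This is a minimal sketch of the idea, assuming a file-based lock; the real scripts may implement it differently.

```python
import os
import time

# Hypothetical per-platform lock: the lock file's mtime acts as a
# 2700-second lease, so a hung prior run cannot block forever.
def acquire_platform_lock(path, lease_seconds=2700):
    now = time.time()
    if os.path.exists(path):
        age = now - os.path.getmtime(path)
        if age < lease_seconds:
            return False          # a live run still holds the lease
        os.remove(path)           # stale lease: reclaim it
    with open(path, "w") as f:
        f.write(str(now))
    return True
```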
What are the hard rules the editing agent will refuse to violate?
Four NEVERs, embedded directly in the prompt so they travel with every invocation. No calls, meetings, demos, or video chats. No promising to send files or links that are not already in config.json projects. No offers to DM. No time-bound promises ('I will check back next week'). These sit in step 8 of the prompt and override anything the LLM might otherwise draft.
How is this different from a normal social media scheduler?
A scheduler fires at a target time and forgets. S4L's link-edit flow is closer to a slow-motion bandit: it waits for the community's upvote signal, spends compute to build a bespoke page only when the signal says yes, and writes a durable audit row for every decision. The cadence is coarse on purpose (4 times per day), so the system has time to see whether a comment earned the link before it acts.
What is the engagement_style column for, and how does it interact with link-edits?
Every row in posts carries an engagement_style (one of 7: critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner) picked at draft time. The link-edit pass does not change that value. It only touches link_edited_at and link_edit_content. That separation matters because the style-selection bandit needs upvotes on the original content, not on the edited version.
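The separation described above can be made explicit in code. A minimal sketch, using the 7 style names this page lists; the column set the link-edit pass is allowed to touch is written out as a constant so the invariant is checkable.

```python
from enum import Enum

# The 7 engagement styles named on this page, sketched as an enum.
class EngagementStyle(Enum):
    CRITIC = "critic"
    STORYTELLER = "storyteller"
    PATTERN_RECOGNIZER = "pattern_recognizer"
    CURIOUS_PROBE = "curious_probe"
    CONTRARIAN = "contrarian"
    DATA_POINT_DROP = "data_point_drop"
    SNARKY_ONELINER = "snarky_oneliner"

# The link-edit pass may touch only these two columns; engagement_style
# is deliberately excluded so the style bandit's vote signal stays clean.
LINK_EDIT_COLUMNS = {"link_edited_at", "link_edit_content"}
```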