s4l.ai / guide / social media automation tool

The social media automation tool that waits 6 hours before it earns the link back.

S4L is a social media automation tool. The part nobody else ships is the delay. The first comment never carries a URL. Six hours later, a launchd job queries the posts table for rows where upvotes crossed a floor, then generates a one-off SEO landing page scoped to that one thread, and only then edits the comment to include it. If the page generator fails, the comment stays linkless. A bare homepage URL is explicitly banned.

Matthew Diakonov · 9 min read · 4.9 from 33 ratings
6-hour floor before a comment can receive a link
upvotes > 2 is the community-vote gate
per-thread SEO landing page generated via generate_page.py
4 launchd pipelines (reddit / linkedin / github / moltbook), 4x per day each

the uncopyable bit

The whole behavior fits in one SQL predicate and one shell rule.

Eligibility is posted_at < NOW() - INTERVAL '6 hours' AND link_edited_at IS NULL AND upvotes > 2. Qualifying rows trigger seo/generate_page.py --trigger reddit, and the page it produces is the only thing allowed to land in the edit. Homepage fallback is a hard no.

The four phases of the pipeline

Every step writes durable state. The comment, the row, the page, and the edit each leave a trail a SELECT can recover later.

From draft to link, with a six-hour gap in the middle

1. Draft + post (linkless). Agent picks an engagement style, writes 2-4 lines, posts via browser MCP. No URL, no CTA.

2. Log to posts table. INSERT writes platform, thread_url, our_url, our_content, engagement_style, project_name. link_edited_at is NULL on insert.

3. launchd fires (4x/day). com.m13v.social-link-edit-reddit runs at 02:35, 08:35, 14:35, 20:35. Takes a 2700-second platform lock so it can't collide with a still-running prior pass.

4. SQL filters for earned comments. 6h+ age, no prior edit, upvotes > 2. Everything else is ignored.

5. generate_page.py runs per row. Picks a keyword, picks a slug, builds an SEO page in /src/app/(content)/t/, commits, pushes, verifies the live URL.

6. Edit the comment. Navigate to the old.reddit.com permalink, click edit, append one casual sentence + the generated URL, save.

7. UPDATE posts. link_edited_at = NOW(), link_edit_content = the exact appended sentence. The row is permanently excluded from the next eligibility query.
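The seven steps above collapse into one fail-closed loop. A minimal Python sketch follows; the function and helper names are hypothetical stand-ins for the DB query, the page generator, and the browser edit, not the real script's interfaces:

```python
def run_pass(rows, generate_page, edit_comment, mark_edited):
    """One link-edit pass over eligible rows. Fail closed: a row whose
    page generation fails gets no edit and stays eligible for the next cron."""
    results = []
    for row in rows:
        page_url = generate_page(row)             # stand-in for seo/generate_page.py
        if not page_url:
            results.append((row["id"], "SKIPPED: PAGE_GENERATION_FAILED"))
            continue                              # no bare homepage fallback, ever
        sentence = f"wrote more on this here: {page_url}"
        edit_comment(row["thread_url"], sentence) # stand-in for the browser MCP edit
        mark_edited(row["id"], sentence)          # stand-in for UPDATE posts SET ...
        results.append((row["id"], sentence))
    return results
```

The ordering matters: the page must exist and be verified before the comment is touched, and the audit row is written only after a successful edit.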

The one SELECT that decides who gets a link

This is the whole gate. Every other piece of the automation runs downstream of these five WHERE clauses. Adding a column to the filter is the closest the tool gets to a policy change.

link-edit-reddit.sh (EDITABLE query)
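The script's literal query is not reproduced here. As a minimal sketch, the same predicate can be expressed as a Python function over a row dict, assuming the field names described elsewhere in this guide:

```python
from datetime import datetime, timedelta, timezone

PLATFORMS = {"reddit", "linkedin", "github", "moltbook"}

def is_eligible(row: dict, now: datetime) -> bool:
    """Mirror of the eligibility WHERE clauses: active status, known
    platform, 6h+ old, never edited, actually posted, past the vote gate."""
    return (
        row.get("status") == "active"
        and row.get("platform") in PLATFORMS
        and row["posted_at"] < now - timedelta(hours=6)
        and row.get("link_edited_at") is None
        and bool(row.get("our_url"))
        and row.get("upvotes", 0) > 2
    )
```

Every clause is an AND: a seven-hour-old comment with 10 upvotes that was already edited once is just as ineligible as a fresh one.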

The decision the agent makes on each eligible row

Three gates before a single character is edited. Fail any one, and the agent skips the row and moves on.

  1. Match a project. Scan config.json projects[] for the best topic fit against thread_title + our_content. If project_name is set but clearly wrong, overwrite it back to the DB.

  2. Generate the page. Pick a 3-6 word keyword, derive a slug, run generate_page.py --trigger reddit. On failure, abort this row and retry next cron.

  3. Edit the comment. Append one casual sentence and the returned page_url. Save the edit. Verify the comment actually updated.

  4. Write the row. UPDATE posts SET link_edited_at, link_edit_content WHERE id. This row is now invisible to the next eligibility SELECT.
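The keyword-to-slug derivation in step 2 lives inside generate_page.py and is not quoted in this guide. A plausible sketch of that transformation, with an assumed length cap (the script's real constant is not reproduced here):

```python
import re

def derive_slug(keyword: str, max_len: int = 60) -> str:
    """Lowercase, hyphenate word boundaries, drop everything else, and cap
    length without cutting mid-word. max_len is an assumption, not the
    script's actual constant."""
    slug = re.sub(r"[^a-z0-9]+", "-", keyword.lower()).strip("-")
    if len(slug) > max_len:
        slug = slug[:max_len].rsplit("-", 1)[0]
    return slug
```

So a keyword phrase like "social media automation tool" becomes a /t/ path segment of the same words joined by hyphens.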

why 6 hours, and why 4 fires per day

6 hours minimum between post and link-edit

The plist wakes at 02:35, 08:35, 14:35, 20:35 local time. That spacing gives every comment at least one eligibility check within the first 12 hours of its life, and the 6-hour SQL floor guarantees no comment is touched before the vote signal stabilizes. The cadence is coarse by design. A faster loop would pull links in before the community had time to decide.
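That "at least one check in the first 12 hours" claim can be checked mechanically. A sketch that walks a posting time forward to the first fire after the 6-hour floor (the boundary is treated as inclusive here for simplicity, where the SQL comparison is strict):

```python
from datetime import datetime, timedelta, timezone

# 02:35, 08:35, 14:35, 20:35 expressed as minutes past midnight
FIRE_MINUTES = [h * 60 + 35 for h in (2, 8, 14, 20)]

def first_check_delay(posted_at: datetime) -> timedelta:
    """Delay from posting until the first launchd fire at which the
    comment has cleared the 6-hour SQL floor."""
    t = posted_at + timedelta(hours=6)
    while (t.hour * 60 + t.minute) not in FIRE_MINUTES:
        t += timedelta(minutes=1)
    return t - posted_at
```

The worst case is posting one minute after a fire: 6 hours of floor plus just under 6 hours of waiting for the next wake, which is where the 12-hour bound comes from.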

02:35 launchd fire · 08:35 launchd fire · 14:35 launchd fire · 20:35 launchd fire

What the shell job actually does

Four phases: query, delegate, generate, write. The shell itself is thin. The interesting work sits in the prompt handed to the claude agent and in the landing-page generator it calls.

skill/link-edit-reddit.sh

The hard rule against homepage fallback

This text lives inside the prompt, which means it travels with every single invocation. No config flag can disable it. If the page builder fails, the row stays eligible and the agent walks away empty-handed.

skill/link-edit-reddit.sh (prompt body)

A comment is eligible for a link-edit only if every one of these is true

  • status is 'active' (not removed or deleted)
  • platform is reddit, linkedin, github, or moltbook
  • posted_at is at least 6 hours in the past
  • link_edited_at is still NULL
  • our_url is set (we actually posted it)
  • upvotes is greater than 2 (community vote passed)
  • a project in config.json topic-matches the thread
  • generate_page.py returned success with a real page_url

Guardrails the edit agent cannot override

Six rules, embedded directly in the prompt. The agent has no model-side flag to disable them. They are policy-as-prompt, which means every single invocation carries the same hard limits.

Never suggest calls or demos

The prompt literally says: NEVER suggest, offer, or agree to calls, meetings, demos, or video chats. The edit bot cannot schedule a human's calendar and it will not pretend to.

Never promise unshared resources

Only links from config.json projects (plus a landing page just deployed) are allowed. No 'I will send you docs', no 'I will follow up with a template'.

Never offer to DM

The agent works in-thread, in public. Private promises can't be audited by the posts table, so they're forbidden.

Never make time-bound promises

No 'I will check back next week'. The next scheduled pass happens on the plist's calendar, not on a pinky-promise.

Bare homepage URLs are banned

If the per-thread page generator fails, the row is left untouched and retried next cron. The comment stays linkless rather than carry a generic URL.

Every edit is logged

Two columns on the posts table record the outcome the moment an edit succeeds: link_edited_at timestamps the moment, link_edit_content stores the exact string appended. Skipped posts log SKIPPED: REASON instead.

Where the linkless comments are posted

The posting pipeline draws from a per-project list of subreddits in config.json. Each target has its own floor-days filter to prevent same-community spam. A sample of the subs S4L has posted into is below.

r/Entrepreneur
r/startups
r/SaaS
r/ExperiencedDevs
r/webdev
r/programming
r/smallbusiness
r/Meditation
r/vipassana

The pipeline by the numbers

Every value below is a constant in the shell script, the plist, or the schema. None of them are marketing.

6-hour floor between post and edit
upvotes > 2 required for edit eligibility
4 launchd fires per platform per day
4 platforms with their own link-edit pipeline
2700-second lease on the per-platform lock
a script-constant cap on characters in an auto-derived SEO slug
3-6 words in the keyword phrase the agent picks
0 bare homepage URLs allowed as fallback

One link-edit pass, as it shows up in the log

Abbreviated output from one real Reddit pass. Two rows eligible, one edited, one skipped because the page generator returned failure.

skill/link-edit-reddit.sh, one iteration

Typical scheduler vs S4L's link-edit loop

A SaaS scheduler's job ends when the post lands. S4L treats the post as phase one of a longer pipeline whose real payoff is a landing page that did not exist until the community upvoted.

Feature | Typical social scheduler | S4L
--- | --- | ---
When is the link added | In the original post, every time | 6+ hours after posting, only if upvotes > 2
What the link points to | Static homepage or a UTM-tagged route | A per-thread SEO landing page generated on demand
If the target cannot be built | Falls back to the product homepage | Row stays eligible, no edit, retry on the next 6h cron
Who picks the project to mention | Whatever the scheduler was loaded with | Topic match against projects[] in config.json; bad tags are overwritten back to the DB
Audit trail | Scheduler log, no per-comment state | link_edited_at + link_edit_content columns on posts, every edit or skip reasoned
Cadence | Continuous scheduler or on-demand queue | launchd StartCalendarInterval, exactly 4 times per day per platform
Guardrails against overreach | Enforced in product rules, often after the fact | 4 hard NEVER rules injected into the prompt itself (no calls, no DMs, no unshared resources, no time promises)

Run this pipeline against your own handle

Point S4L at your projects list. It finds threads, posts linkless comments in your voice, and generates a per-thread landing page in your Next.js repo the moment a comment earns the right to a link. Every decision lands in one audit table.

See pricing

Social media automation tool: common questions

What does social media automation mean in the S4L context?

S4L is a social media automation tool that automates three things: finding a thread worth commenting on, drafting the comment under one of 7 named engagement styles, and a retroactive link-edit pass that appends a URL only to comments the community has already upvoted. The first two run on a comment cadence; the third runs on its own launchd schedule, four times per day per platform.

Why wait 6 hours before editing in a link?

Two reasons. First, a 6-hour floor gives the Reddit vote signal enough samples to distinguish a comment the thread liked from a comment that dropped quietly. The SQL filter is AND posted_at < NOW() - INTERVAL '6 hours' AND upvotes > 2. Second, it pushes link promotion away from the 'new comment' spike where moderators are most alert. The comment has already proven it belongs.

What is in the posts table that makes this auditable?

The schema has two columns that only exist for this flow: link_edited_at (timestamptz, NULL until an edit pass touches the row) and link_edit_content (text, stores the exact sentence appended, or SKIPPED: REASON when the agent decided not to edit). Combined with posted_at and upvotes at the time of edit, every decision is reconstructable from one SELECT.
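As a sketch of that reconstruction, assuming (this part is an inference, not quoted from the scripts) that a skip leaves link_edited_at NULL and writes its reason into link_edit_content:

```python
def decision(row: dict) -> str:
    """Reconstruct the pipeline's action on a row from its audit columns."""
    content = row.get("link_edit_content") or ""
    if row.get("link_edited_at") is not None:
        return "edited: " + content          # the exact appended sentence
    if content.startswith("SKIPPED:"):
        return content                       # the agent declined, with a reason
    return "pending"                         # not yet touched by an edit pass
```

One pass over a SELECT of these two columns tells you what happened to every comment, with no log file required.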

Where does the link actually point?

To a page that did not exist until a few seconds before the edit. The link-edit prompt forces the agent to pick a short keyword phrase (3-6 words) and a URL slug, then run python3 ~/social-autoposter/seo/generate_page.py with --trigger reddit. That script builds an SEO page in the same Next.js repo you are reading now, commits it, pushes to origin, verifies the live URL, and only then returns success. Its stdout JSON contains page_url, which is the URL that lands in the comment edit.

What happens if the page generator fails?

The comment is not edited. link_edited_at stays NULL. On the next launchd fire (up to 6 hours later), the row meets the eligibility filter again and the agent gets another shot. There is no bare-URL fallback; the shell prompt says verbatim that a custom landing page per thread is a hard requirement. A homepage link is considered worse than no link.

Does this run on every platform, or just Reddit?

Four platforms have their own link-edit pipeline: Reddit (com.m13v.social-link-edit-reddit), LinkedIn, GitHub, Moltbook. Each has its own shell script under ~/social-autoposter/skill/link-edit-*.sh, its own platform lock (2700-second lease), and its own browser MCP config. A LinkedIn hang no longer blocks a Reddit edit, and vice versa. Only platforms with an API surface that supports comment edits are in the loop.
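A 2700-second lease behaves like a stale-aware lock file. The real locks live in shell scripts that are not reproduced here; a hypothetical Python equivalent of the same behavior:

```python
import os
import time

LEASE_SECONDS = 2700  # per-platform lease from the article; behavior is assumed

def try_acquire(lock_path, now=None):
    """Take the platform lock unless a live lease exists. A lock file older
    than the lease is treated as a dead prior run and stolen."""
    now = time.time() if now is None else now
    if os.path.exists(lock_path):
        if now - os.path.getmtime(lock_path) < LEASE_SECONDS:
            return False          # a prior run still holds the lease
        os.remove(lock_path)      # stale lease: previous run died mid-pass
    open(lock_path, "w").close()  # claim the lease; mtime is the lease start
    return True
```

The lease (rather than a plain lock) is what lets a crashed pass self-heal: the next fire simply steals the expired lock instead of deadlocking the platform forever.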

What are the hard rules the editing agent will refuse to violate?

Four NEVERs, embedded directly in the prompt so they travel with every invocation. No calls, meetings, demos, or video chats. No promising to send files or links that are not already in config.json projects. No offers to DM. No time-bound promises ('I will check back next week'). These sit in step 8 of the prompt and override anything the LLM might otherwise draft.

How is this different from a normal social media scheduler?

A scheduler fires at a target time and forgets. S4L's link-edit flow is closer to a slow-motion bandit: it waits for the community's upvote signal, spends compute to build a bespoke page only when the signal says yes, and writes a durable audit row for every decision. The cadence is coarse on purpose (4 times per day), so the system has time to see whether a comment earned the link before it acts.

What is the engagement_style column for, and how does it interact with link-edits?

Every row in posts carries an engagement_style (one of 7: critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner) picked at draft time. The link-edit pass does not change that value. It only touches link_edited_at and link_edit_content. That separation matters because the style-selection bandit needs upvotes on the original content, not on the edited version.