
Social media auto posting that waits 5 minutes before it decides.

Every other auto-poster on this SERP is really a scheduler: you write the content, it publishes on the calendar. S4L is the other thing. It scans candidate threads, snapshots their engagement at T0, literally sleeps 300 seconds, snapshots again at T1, and ranks everything by the delta. A reply only goes to threads that grew during those 5 minutes. Below is the exact formula and the file it lives in.

Matthew Diakonov · 9 min read · 4.9 from 31 ratings
T0 snapshot, sleep 300s, T1 snapshot (run-twitter-cycle.sh:191)
delta formula verbatim from fetch_twitter_t1.py:51
6-hour half-life age decay on top of velocity
top 5 by delta, reply to at most 3 per cycle
T0 snapshot -> sleep 300s -> T1 snapshot
delta_score = Δlikes + 3·Δretweets + 2·Δreplies + Δviews/1000 + Δbookmarks
6-hour half-life: exp(-0.1155 · age_hours)
retweet weight 3x
reply weight 2x
views /1000 so megatweets do not dominate
author sweet spot: 5K to 50K followers = 1.0x
candidates older than 12h -> auto-expired
top 5 by delta, Claude posts up to 3
every candidate row kept with its T0/T1 snapshot

the uncopyable bit

The measurement window is hard-coded as sleep 300 on line 191 of skill/run-twitter-cycle.sh.

Phase 1 runs a scan, enriches tweets via fxtwitter, writes a T0 engagement snapshot for every candidate, and tags the row with a batch_id. Then the shell does one thing: it sleeps 300 seconds. Phase 2a re-polls the same batch and writes the delta. Phase 2b reads the top 5 rows by delta and posts up to 3 replies. The 5-minute pause is not a retry window or a rate-limit sleep. It is the measurement itself.

Phase 1, sleep 300, Phase 2

This is the actual structure of the posting cycle shell. The batch identifier ties the T0 and T1 reads together. Without it, Phase 2 would re-poll arbitrary tweets, not the ones we snapshotted 5 minutes earlier.

skill/run-twitter-cycle.sh
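The shell itself is not reproduced here, but its three-phase shape can be sketched in Python — the phase bodies are placeholders, not the real invocations; only the structure (batch_id tying T0 to T1, a literal sleep in the middle) is from the guide:

```python
import time
import uuid


def run_cycle(window_seconds: int = 300) -> str:
    """Sketch of run-twitter-cycle.sh's structure. Placeholder comments
    stand in for the real script calls."""
    batch_id = uuid.uuid4().hex  # ties the T0 and T1 reads together
    # Phase 1: scan, enrich via fxtwitter, write T0 snapshot tagged with batch_id
    time.sleep(window_seconds)   # the measurement itself: `sleep 300` in the shell
    # Phase 2a: re-poll the same batch_id, write delta_score
    # Phase 2b: read top 5 rows by delta, reply to at most 3
    return batch_id
```

Without the batch_id, Phase 2a would have nothing to join its re-poll against.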

The delta formula, verbatim

Everything non-trivial about S4L's auto posting is encoded in this function. The weights are the model. Retweets matter most because they drag the thread into more timelines. Replies matter next because an active thread gives our reply more eyeballs. Views get divided down because a single popular account can rack up a million views in minutes and that would swamp everything else.

scripts/fetch_twitter_t1.py
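A reconstruction of that function from the formula quoted in this guide — the dict keys are assumptions; the weights and the final expression are the ones cited from fetch_twitter_t1.py:

```python
def compute_delta(t0: dict, t1: dict) -> float:
    """Delta score over the 300-second window. Weights: retweets 3x,
    replies 2x, views /1000, likes and bookmarks 1x."""
    dl = t1["likes"] - t0["likes"]
    dr = t1["retweets"] - t0["retweets"]
    dp = t1["replies"] - t0["replies"]
    dv = t1["views"] - t0["views"]
    db = t1["bookmarks"] - t0["bookmarks"]
    return dl + 3 * dr + 2 * dp + dv / 1000.0 + db
```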

Why each weight is what it is

These numbers were picked by running the pipeline for weeks, watching which replies actually accrued impressions, and tuning until the top-delta candidate was also the one worth engaging with. They are production constants, not a theoretical model.

Velocity beats volume

A tweet at 2,000 likes that grew by 4 in the last 5 minutes ranks below a tweet at 60 likes that grew by 12. The first one peaked, the second is still climbing. S4L only cares about the second kind because that is where a reply can still accrue impressions.
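In likes-only terms, the example above works out to:

```python
# Likes-only deltas for the two tweets described above
peaked = 2004 - 2000    # 2,000-like tweet grew by 4 in the window
climbing = 72 - 60      # 60-like tweet grew by 12
assert climbing > peaked  # the smaller tweet wins on velocity, not volume
```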

Retweets weight 3x

Retweets are reshare intent. One retweet in the sample is worth 3 units of delta. This is the single most load-bearing weight in the formula, picked because retweets drag a thread into more timelines and that is where a reply gets seen.

Replies weight 2x

Active discussion = visibility for our reply. 2 new replies add 4 delta. A reply bonus is also layered on top: 15 total replies give a +1x multiplier in the virality score, capped at +4x for 60+ replies.
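The two data points above (+1x at 15 replies, capped at +4x at 60+) are consistent with a simple per-15 step; as a hedged sketch, assuming that linear shape:

```python
def reply_bonus(total_replies: int) -> int:
    """Discussion bonus: +1x per 15 replies, capped at +4x.
    The per-15 linear shape is an assumption from the two data
    points in the text, not confirmed against the source."""
    return min(total_replies // 15, 4)
```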

Views divided by 1000

Views can run into the millions on a single tweet in minutes. Without the division, views would dominate the delta and everything else would be noise. Dividing by 1000 brings views back to the same order of magnitude as likes and retweets.

Bookmarks count 1x

Bookmarks are private save intent. They do not amplify, but they correlate with depth. They enter the formula at 1x, same as likes, so the score gets a quiet nudge from threads people privately want to return to.

Top 5 by delta, post up to 3

After the T1 pass, the pipeline pulls the top 5 rows from twitter_candidates by delta_score desc. Claude reads each thread, drops anything off-topic or toxic, and replies to at most 3. Everything else in the batch is marked expired.

Age decay: a 6-hour half-life on top of velocity

Delta tells you a thread is growing. Age tells you whether it has runway left. A tweet that grew hard 9 hours ago is probably done. The fix is not a hard cutoff, which would waste good threads that were posted a little early. The fix is exponential: gentle for the first few hours, merciless past 12.

scripts/score_twitter_candidates.py
71% score retained at 3 hours
50% score retained at 6 hours (half-life)
25% score retained at 12 hours
12.5% score retained at 18 hours
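Those retention figures all fall out of one expression; a one-line sketch using the constant quoted from score_twitter_candidates.py (the function name is illustrative):

```python
import math


def age_decay(age_hours: float) -> float:
    """6-hour half-life: the coefficient is ln(2)/6 ≈ 0.1155,
    as cited from score_twitter_candidates.py:86."""
    return math.exp(-0.1155 * age_hours)
```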

the one constant that defines the loop

300 seconds between T0 and T1

Too short and cache jitter drowns out the signal. Too long and the candidate drifts out of relevance. 300 seconds is small enough to catch a climb, big enough to dwarf the 60-second coarseness of Twitter's engagement counters. Change this one number and the whole picker behaves differently.

The flow, left to right

Four independent engagement counters come in on the left. They meet at a single Python function that subtracts two snapshots. Out the right side comes an ordered list the reply pipeline can act on.

T0 + T1 -> compute_delta -> top 5 ordered

Δlikes, Δretweets, Δreplies, Δviews, Δbookmarks
  -> compute_delta -> delta_score
  -> ORDER BY delta_score DESC
  -> Claude reads top 5

One 20-minute cycle, step by step

The whole thing runs as a single launchd job. One browser lock is held from start to finish, so a scan cannot collide with a post and a post cannot collide with the next scan.


1. launchd fires the plist every 1800 seconds

com.m13v.social-twitter-cycle kicks run-twitter-cycle.sh. The shell acquires a per-platform lock at /tmp/social-autoposter-twitter-cycle.lock via atomic mkdir, writes the holding pid, and times out at 3600 seconds.
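The same atomic-mkdir trick can be sketched in Python — the helper names and pid-file layout are illustrative, not the shell's exact code:

```python
import os


def acquire_lock(lock_dir: str = "/tmp/social-autoposter-twitter-cycle.lock") -> bool:
    """Per-platform lock via atomic mkdir: exactly one caller can create
    the directory, so a scan can never overlap a post."""
    try:
        os.mkdir(lock_dir)  # atomic on POSIX; fails if another cycle holds it
    except FileExistsError:
        return False
    with open(os.path.join(lock_dir, "pid"), "w") as f:
        f.write(str(os.getpid()))  # record the holding pid for debugging
    return True


def release_lock(lock_dir: str = "/tmp/social-autoposter-twitter-cycle.lock") -> None:
    pid_file = os.path.join(lock_dir, "pid")
    if os.path.exists(pid_file):
        os.remove(pid_file)
    os.rmdir(lock_dir)
```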


2. Claude drafts 6 search queries, one per project

A strict-mcp-config subprocess only sees the twitter-agent MCP. It reads the project config, looks at past top-performing query styles, and produces one `since:YYYY-MM-DD min_faves:50` query per project so every query returns tweets from the last 24 hours.
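A minimal sketch of that query shape — how the keywords are joined is an assumption; the `since:` date and `min_faves:50` floor are from the text:

```python
from datetime import date, timedelta


def draft_query(keywords: str) -> str:
    """One search query per project: `<keywords> since:YYYY-MM-DD min_faves:50`,
    with the date set to yesterday so results cover the last 24 hours."""
    since = (date.today() - timedelta(days=1)).isoformat()
    return f"{keywords} since:{since} min_faves:50"
```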


3. Scan the search feeds, extract top 5 per query

The subprocess navigates to x.com/search?q=...&f=live, waits 4 seconds, and evaluates a page.evaluate() block that pulls tweet text, URL, author handle, datetime, replies, retweets, likes, views, bookmarks from each article.


4. Enrich via fxtwitter, write T0 snapshot to Postgres

enrich_twitter_candidates.py re-fetches each tweet via the fxtwitter JSON API to get clean numbers. score_twitter_candidates.py --batch-id computes the opening virality_score, writes a row to twitter_candidates with likes_t0, retweets_t0, replies_t0, views_t0, bookmarks_t0, tagged with the batch_id.


5. Sleep 300 seconds

This is the measurement. The shell literally calls `sleep 300`. During this window, the candidates are published, trending, or dying in real time. Nothing happens in S4L.


6. Phase 2a: re-poll fxtwitter, write delta_score

fetch_twitter_t1.py iterates twitter_candidates WHERE batch_id = ? AND status = 'pending', re-fetches via fxtwitter, runs compute_delta(t0, t1), and UPDATEs the row with likes_t1 through bookmarks_t1 plus delta_score.


7. Phase 2b: Claude reads top 5 by delta, posts up to 3

A second strict-mcp-config subprocess reads the 5 highest deltas, checks each thread for fit and tone, drafts a reply with the current engagement-style tiers, and posts via scripts/twitter_browser.py. Every successful reply writes to the posts table.


8. Mark remaining rows expired, release lock

UPDATE twitter_candidates SET status='expired' WHERE batch_id = ? AND status='pending'. Expired rows keep their T0/T1 snapshots for 7 days, long enough to audit which queries surfaced threads that actually grew.


A tweet at 60 likes growing by 12 in 5 minutes beats a tweet at 2,000 likes growing by 4. The first one still has room. The second one already peaked.

S4L picker rationale

Velocity-based auto posting vs. scheduled auto posting

Both are called social media auto posting. Only one writes the content at post time.

| Feature | Schedulers | S4L |
| --- | --- | --- |
| When the tool decides what to post about | Queued days in advance, posted on the calendar | Decided at T1 (5 minutes after the scan), based on which candidate is trending harder right now |
| What signal ranks candidates | Total likes, total impressions, hashtag popularity | delta_score = Δlikes + 3·Δretweets + 2·Δreplies + Δviews/1000 + Δbookmarks over 300 seconds |
| Age treatment | Post at a chosen time regardless of thread age | exp(-0.1155 · age_hours), 6-hour half-life, dead after ~24h but a soft curve |
| Measurement window | None, posts go out blind | Exactly 300 seconds between T0 and T1, hard-coded as `sleep 300` in run-twitter-cycle.sh:191 |
| Candidate expiration | Posts are evergreen, no expiry concept | Candidates older than 12 hours are auto-expired by score_twitter_candidates.py before scoring |
| Reach multiplier | No author weighting | Tiered by follower count: sweet spot 5K to 50K = 1.0x, 50K to 200K = 1.4x, <1K = 0.3x |
| What gets stored after a miss | Nothing, missed posts disappear | Every candidate row keeps its T0/T1 snapshots, delta_score, and status ('posted', 'skipped', 'expired') |
| How the cadence runs | Cloud scheduler, you rent the timing | A single launchd plist fires every 1800 seconds; one job does scan and post inside one browser lock |

What the log looks like for one cycle

Abbreviated and timestamped. The 5-minute sleep is the quiet row in the middle. Everything around it is either scanning, writing snapshots, or picking winners.

skill/logs/twitter-cycle-2026-04-19_0940.log

Want this running on your accounts?

S4L runs the T0 / sleep 300 / T1 cycle plus the Reddit and LinkedIn pipelines against your own projects. Self-hosted on a Mac, your logged-in browser profiles, one Neon Postgres for history.

See pricing and setup

Frequently asked questions

What does social media auto posting actually mean in S4L?

Not scheduled calendar publishing. S4L's auto posting is reactive: it scans live tweets (and Reddit threads, and LinkedIn posts), takes a T0 engagement snapshot, waits 300 seconds, takes a T1 snapshot, and picks the top candidates by 5-minute engagement velocity. It then drafts a reply with Claude and posts it through a logged-in browser profile. There is no pre-built content calendar. The content is written at post time, on threads that are still climbing.

Why 5 minutes specifically for the measurement window?

Twitter and fxtwitter both serve cacheable data at a ~1-minute granularity, so anything shorter than that is mostly noise. 5 minutes is the smallest window where a delta is real signal rather than cache jitter. It is also short enough that a candidate from Phase 1 is still worth acting on by Phase 2 (tweets older than 12 hours are auto-expired by score_twitter_candidates.py anyway). The value is `sleep 300` on line 191 of skill/run-twitter-cycle.sh.

Where exactly is the delta formula in the code?

scripts/fetch_twitter_t1.py, lines 45-51. The function is compute_delta(t0, t1). It subtracts T0 values from T1 values for likes, retweets, replies, views, bookmarks, then returns `dl + 3 * dr + 2 * dp + dv / 1000.0 + db`. Every candidate in the batch runs through this function after the 300-second sleep, and delta_score is written back to the twitter_candidates row so the downstream Claude call can ORDER BY delta_score DESC LIMIT 5.

What if two tweets have the same delta?

The opening virality score (from Phase 1) acts as the tiebreaker, because the SELECT in Phase 2b reads both delta_score and virality_score per row. Virality score itself stacks velocity (eng/hour), reach multiplier (author followers tier), a 6-hour exponential age decay (math.exp(-0.1155 * age_hours) in score_twitter_candidates.py:86), and bonuses for active discussion and retweet ratio. A tweet with the same delta but a freshly-posted timestamp wins on age_decay.
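In Python terms, the tiebreak is just a two-key sort — row keys here are assumptions, and the real pipeline does this in SQL (ORDER BY delta_score DESC):

```python
def rank(candidates: list[dict], limit: int = 5) -> list[dict]:
    """Phase 2b ordering sketch: delta_score descending, with the
    Phase 1 virality_score breaking ties."""
    return sorted(
        candidates,
        key=lambda c: (c["delta_score"], c["virality_score"]),
        reverse=True,
    )[:limit]
```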

How does the 6-hour half-life actually behave?

At 3 hours: 71% of the score retained. At 6 hours: 50%. At 12 hours: 25%. At 18 hours: 12.5%. The curve is exponential, so the drop is steep for the first few hours and then flattens. The coefficient is ln(2)/6 ≈ 0.1155, and the constant is hard-coded on line 86 of scripts/score_twitter_candidates.py. Scores past 24 hours approach zero, which is why candidates older than 12 hours get `status = 'expired'` before they even reach the T1 pass.

Does this only work on Twitter?

The T0/T1 velocity loop is Twitter-specific because fxtwitter gives a clean JSON snapshot that can be polled twice cheaply. Reddit and LinkedIn use a different pipeline: scripts/pick_thread_target.py weights subreddits with floor windows so the bot cannot post to the same community more than once per 1 to 3 days, and the Reddit browser agent posts via CDP into old.reddit.com. The shared part is the learning loop: every post logs engagement_style to Postgres and the next draft re-ranks styles by avg_upvotes.

What happens to tweets in the batch that do not get a reply?

They are marked `status = 'expired'` by a final UPDATE in run-twitter-cycle.sh after Phase 2b finishes. The row stays in twitter_candidates with its T0 and T1 snapshots, delta_score, matched_project, and search_topic, so later runs can audit which queries surfaced threads that actually grew. Rows stay for 7 days, then score_twitter_candidates.py prunes posted and expired rows older than that.

Is this safer than API-based auto posting?

It sidesteps the Twitter API entirely. No client_id, no OAuth scopes, no developer review. Scanning goes through fxtwitter (a public gateway that returns tweet JSON) and posting goes through a Chromium profile already logged in as the real account, driven by a strict-mcp-config claude subprocess that can only see the twitter-agent MCP. If the X API tightens tomorrow, the loop keeps running as long as the browser session is alive.