Matthew Diakonov

The best social media automation tool is the one that re-polls an hour later before replying

Every listicle for this keyword grades tools by feature checkbox: calendar, caption AI, analytics, inbox. S4L is graded by a different question. When the loop is about to reply to a tweet, has the tweet kept climbing since we first saw it, or is it already decaying? The answer is a single column in our database called delta_score, computed by a second poll through api.fxtwitter.com. If delta is negative, we skip it. If positive, it goes to the reply queue.

4.9 from operators running multi-platform reply loops

  • Two-phase scoring, not calendar scheduling
  • 6-hour half-life on the velocity term
  • Retweets weighted 3x, replies 2x, likes 1x
  • api.fxtwitter.com re-poll between T0 and T1
  • delta_score column in twitter_candidates

The listicles all picked from the same shelf

Search for "best social media automation tool" and the top pages rotate through the same twelve names. They differ on seat pricing and which inbox view they ship. None of them describes when to reply as a math problem. All of them treat the act of replying as a human task bolted onto a calendar scheduler.

Buffer · Hootsuite · Sprout Social · Later · SocialBee · Sendible · Agorapulse · Metricool · Pallyy · Eclincher · Postiz · Zoho Social

S4L sits in a different category. It is not a calendar with captions. It is a reply decision engine backed by a Postgres table called twitter_candidates, and the most load-bearing column in that table is delta_score.

The anchor fact: delta_score is a five-term sum you can read

Here is the entire scoring rule. It is not hidden behind a SaaS dashboard; it is a Python function you can grep for on the box.

scripts/fetch_twitter_t1.py · line 51

delta = Δlikes + 3 · Δretweets + 2 · Δreplies + Δviews / 1000 + Δbookmarks

Retweets are tripled because a retweet is a reshare commitment, not a one-tap reaction. Replies are doubled because a live reply thread is the only condition under which a new reply actually gets read. Views are divided by 1,000 because raw impression counts would otherwise drown every other term on large accounts.
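That rule, as Python. This is a sketch written from the formula above, under the assumption that the shipped compute_delta in scripts/fetch_twitter_t1.py takes the two snapshots as dicts; the real function reads its T0 and T1 values off the twitter_candidates row.

```python
def compute_delta(t0: dict, t1: dict) -> float:
    """Five-term sum over the change in each metric between polls.

    Weights from the shipped formula: retweets x3, replies x2,
    likes x1, views / 1000, bookmarks x1.
    """
    d = lambda key: t1[key] - t0[key]  # change in one metric, T0 -> T1
    return (
        d("likes")
        + 3 * d("retweets")
        + 2 * d("replies")
        + d("views") / 1000
        + d("bookmarks")
    )

# A tweet that gained 40 likes, 10 retweets, 5 replies,
# 8,000 views and 3 bookmarks between the two polls:
t0 = {"likes": 100, "retweets": 20, "replies": 10, "views": 50_000, "bookmarks": 5}
t1 = {"likes": 140, "retweets": 30, "replies": 15, "views": 58_000, "bookmarks": 8}
# 40 + 30 + 10 + 8 + 3 = 91.0
```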

Δretweets x3 · Δreplies x2 · Δviews ÷1000 · 6h half-life on age

Two phases, one table

Raw tweet metadata enters from the scanner side. The twitter_candidates table is the hub. Phase 1 writes T0 columns. Phase 2 re-enters the same rows via fxtwitter and writes T1 columns next to them. The reply loop reads only where delta_score is positive.

candidate pipeline

Search topics (social media automation · reddit autoposter · engagement agent · user-configured topics)
→ twitter_candidates (Postgres): T0 engagement + virality_score
→ api.fxtwitter.com re-poll: T1 engagement + delta_score
→ Reply queue (delta > 0)

Phase 1: six signals, one score

Phase 1 never looks at the tweet twice. It takes one snapshot and turns it into a number. The signal mix is deliberately opinionated: velocity matters most, retweet ratio and reply activity compound, author reach and age modulate.

velocity

total_eng / age_hours. The strongest single predictor of where a tweet ends up. A tweet with 200 interactions in 20 minutes beats a tweet with 2,000 over two days.

retweet ratio bonus

rt_ratio = retweets / total_eng. If the ratio is above 0.3, the score is multiplied by up to 2x. Reshare intent is the signal. Likes come cheap, retweets do not.

reply activity

min(replies / 15, 4.0). 15 replies adds 1x, 30 replies adds 2x, 60+ caps at 4x. Live threads get seen. Replies into dead threads get buried.

discussion quality

reply-to-like ratio, capped at 1.0x. 0.05 ratio adds 0.5x, 0.1+ adds 1.0x. Separates actual arguments from pure-broadcast posts where a reply would just shout into a void.

author reach

Followers: under 1K gets 0.3x, 5K-50K is the 1.0x sweet spot, 50K-200K gets 1.4x. Mega accounts taper back to 1.1x because the reply column is too crowded.

age decay

math.exp(-0.1155 * age_hours), a 6-hour half-life. 3h old = 71%, 6h = 50%, 12h = 25%, 18h = 12.5%. A day-old tweet is worth about 6% of the same tweet scored fresh.

The Phase 1 function, exactly as shipped

Everything in the list above maps to a line below. The return value is what lands in the virality_score column.

scripts/score_twitter_candidates.py
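The embedded file does not reproduce here, so the following is a sketch assembled from the constants described above. How the multipliers combine (multiplicative base, additive bonuses) is an assumption; the shipped scripts/score_twitter_candidates.py is the source of truth.

```python
import math

def calculate_virality_score(likes, retweets, replies, views,
                             followers, age_hours):
    """Phase 1: one snapshot in, one age-decayed score out (sketch)."""
    total_eng = likes + retweets + replies
    velocity = total_eng / max(age_hours, 0.1)      # engagement per hour

    # Retweet ratio bonus: above 0.3, scale up to 2x (curve assumed).
    rt_ratio = retweets / max(total_eng, 1)
    rt_mult = min(2.0, 1.0 + rt_ratio * 2) if rt_ratio > 0.3 else 1.0

    # Reply activity: 15 replies adds 1x, capped at 4x.
    reply_bonus = min(replies / 15, 4.0)

    # Discussion quality: reply-to-like ratio, capped at +1.0x.
    quality_bonus = min((replies / max(likes, 1)) * 10, 1.0)

    # Author reach: 5K-50K sweet spot; mega accounts taper back.
    if followers < 1_000:
        reach = 0.3
    elif followers < 50_000:
        reach = 1.0
    elif followers < 200_000:
        reach = 1.4
    else:
        reach = 1.1

    # 6-hour half-life: exp(-ln(2)/6 * age) = exp(-0.1155 * age).
    decay = math.exp(-0.1155 * age_hours)

    return velocity * rt_mult * (1 + reply_bonus + quality_bonus) * reach * decay
```

Even with the composition guessed, the ordering behavior the article describes falls out: a fresh, reply-heavy tweet from a mid-size account outranks a stale tweet with ten times the raw engagement.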

Phase 2: re-poll the same rows, compute the delta

About an hour after Phase 1 stamps the batch, a second job walks the same rows by batch_id, calls fxtwitter, writes the T1 columns back next to T0, and fills delta_score. Calls are throttled by a 300ms sleep between requests.

scripts/fetch_twitter_t1.py

What a real Phase 2 run looks like

Negative deltas are cheap to detect and cheap to discard. Positive deltas become the reply queue, ranked high to low.

social-autoposter · phase 2

The full cycle, end to end

The two phases are not the whole loop. They sit inside a longer cycle that starts at search and ends at a posted reply with a recorded cost.

1. Scanner pass

socialcrawl.py walks configured search topics and pulls a raw JSON blob of candidate tweets with likes, retweets, replies, views, bookmarks, and follower count.

2. Phase 1: score + upsert

score_twitter_candidates.py reads stdin, runs calculate_virality_score, writes into the twitter_candidates table with a batch_id, T0 engagement columns, and virality_score.

3. Sleep the half-life

The cycle waits. With a 6-hour half-life in the age decay, a gap of about one hour between T0 and T1 is where the delta signal is strongest without the whole set of tweets going stale.

4. Phase 2: fxtwitter re-poll

fetch_twitter_t1.py queries api.fxtwitter.com for every row tagged with that batch_id, writes T1 columns, and runs compute_delta(T0, T1) to populate delta_score.

5. Reply queue

The replies loop pulls rows ordered by delta_score DESC where delta is positive. Rows with a negative delta are marked expired. No reply ever goes out to a decaying tweet.

6. Closed loop back to the posts table

Every reply posted lands in the replies table with an engagement_style, a cost, and a status. The same machinery that picks what to reply to later grades how it went.
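The queue pull and the expiry amount to two statements against twitter_candidates. The column names follow the row description in this article; the parameter style (psycopg-ish placeholders) and the exact status value are assumptions.

```python
# Sketch of the two queries behind the reply-queue step.
# Grounded in the described schema: batch_id, delta_score,
# virality_score, status on twitter_candidates.
REPLY_QUEUE_SQL = """
    SELECT tweet_id, handle, delta_score, virality_score
    FROM twitter_candidates
    WHERE batch_id = %(batch_id)s
      AND delta_score > 0
    ORDER BY delta_score DESC
    LIMIT %(top_n)s
"""

EXPIRE_SQL = """
    UPDATE twitter_candidates
    SET status = 'expired'
    WHERE batch_id = %(batch_id)s
      AND delta_score <= 0
"""
```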

One pass through the scorer

1. scan: Walk search topics, pull raw JSON via socialcrawl.py.
2. score T0: calculate_virality_score, upsert into twitter_candidates with batch_id.
3. sleep: Cycle waits roughly one hour, near 1/6 of the half-life window.
4. re-poll T1: fetch_twitter_t1.py hits api.fxtwitter.com/{handle}/status/{id}.
5. delta: compute_delta(T0, T1) writes delta_score; negative rows are expired.
6. reply: Top-N by delta_score enters the replies queue for engagement.

The numbers the pipeline leans on

Every constant in the Python is tunable, but the shipped defaults below are the ones on every customer install today.

Age-decay half-life: 6h
Retweet weight in delta: 3x
Reply weight in delta: 2x
Sleep between fxtwitter calls: 300ms
Reply count for the +1x reply bonus: 15
Reply bonus cap: 4.0x
Candidate expiry cutoff: configurable (hours)
Posted/expired row pruning: configurable (days)

S4L vs a calendar scheduler, feature by feature

The honest comparison. If a team already has a scheduler, this is what they gain by dropping an engagement decision engine in next to it.

Feature | Classic scheduler | S4L
When to reply, as a concept | Calendar: pick a time slot, pick an AI caption | Computed: rank by delta_score over a batch
Does the tool re-check the tweet after you schedule? | No. The post queue is fire-and-forget | Yes, every candidate is re-polled via fxtwitter before reply
Signal that a thread is still alive | No equivalent, you pick from a mentions inbox | Positive Δreplies and Δretweets in the second poll
Signal that a tweet is decaying | Not represented | delta_score ≤ 0, row is expired and skipped
Retweet vs like weighting | Usually none, engagement shown as a raw count | 3x retweets, 2x replies, 1x likes, /1000 on views
Half-life of the score | No built-in decay | 6 hours, implemented as math.exp(-0.1155 * age_hours)
Per-candidate data you can inspect | Scheduled post row + analytics rollup | twitter_candidates row: batch_id, t0/t1 engagement, delta_score, virality_score, search_topic, status

What the math lets you do that a calendar cannot

Five concrete behaviors the delta-first pipeline unlocks.

Decisions the loop makes automatically

  • Skip tweets you would have replied to an hour ago, because their delta is negative now.
  • Skip high-follower accounts whose replies are already a noise wall, because the reach multiplier tapers back to 1.1x, not 2x.
  • Reply to a 7K-follower account with 30 replies in the last hour, because every multiplier stacks in your favor there.
  • Reply sooner on fresh tweets, since the age_decay term rewards the first six hours hardest.
  • Never reply to something you scored four hours ago but never got back to, because the decay already ate the score.
Phases in the scorer: 2
Age half-life (hours): 6
Retweet weight: 3x
Reply weight: 2x

See the delta_score column on a real account

30 minutes on Cal. Screenshare of twitter_candidates, a live Phase 2 run, and the replies queue ordered by delta. If the architecture fits your workflow, we install it.

Book a call

Questions people ask before the first call

What makes S4L different from a scheduler like Buffer or Hootsuite?

Schedulers post on a calendar. S4L decides when to reply by running a two-phase velocity model: score each candidate tweet by age-decayed engagement velocity, wait about an hour, re-poll the same tweets through fxtwitter, and promote only the ones whose delta score is still rising. Most schedulers never look at the tweet again after you queue the post.

Why weight retweets 3x and replies 2x heavier than likes?

Likes are cheap and decay fast. Retweets are a commitment to reshare, so they track actual distribution. Replies mean the thread is alive, which is the condition under which a new reply gets read. The 3-2-1 weighting in compute_delta() reflects that retweet momentum predicts where the tweet will end up, while likes mostly track where it already is.

What is fxtwitter and why does the pipeline use it?

api.fxtwitter.com is a public metadata endpoint that returns a tweet's likes, retweets, replies, views, and bookmarks as JSON without requiring the Twitter API. S4L calls https://api.fxtwitter.com/{handle}/status/{tweet_id} for each candidate in the batch and sleeps 300ms between calls to stay polite.

What does the 6-hour half-life in the velocity formula mean?

Phase 1 multiplies the raw velocity by math.exp(-0.1155 * age_hours), which is exp(-ln(2)/6 * age_hours). A tweet 3 hours old keeps 71% of its score, 6 hours keeps 50%, 12 hours keeps 25%, 18 hours keeps about 12%. Anything older than about a day stops being a candidate because the decay eats the score.
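Those percentages are just the half-life constant applied at each age; a quick check:

```python
import math

HALF_LIFE_HOURS = 6
K = math.log(2) / HALF_LIFE_HOURS   # ≈ 0.1155, the constant in the scorer

for age_h in (3, 6, 12, 18, 24):
    keep = math.exp(-K * age_h)
    print(f"{age_h:>2}h old -> keeps {keep:.1%} of its fresh score")
```

At 24 hours the multiplier is four half-lives, 2^-4 = 6.25%, which is why day-old rows effectively drop out of the candidate pool on their own.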

Can I see this code or is it marketing talk?

It is checked-in Python. The Phase 1 scorer is scripts/score_twitter_candidates.py, function calculate_virality_score. The Phase 2 re-poll is scripts/fetch_twitter_t1.py, function compute_delta. The delta formula is on line 51, the fxtwitter call is on line 28, the 6-hour half-life is on line 86 of the Phase 1 file. Book a call and we will walk it line by line.

Does this replace scheduling entirely?

It replaces the part where you guess when to engage. S4L still has a posting side for your own content. The difference is that reply targets are not picked from a calendar or a mention feed, they are picked from a ranked table of twitter_candidates where every row has a delta_score column. The top of the list is what the loop replies to next.

How big is a batch, and how often does the cycle run?

A batch is whatever the scanner returned in one pass (usually 30 to 100 candidates). Phase 2 re-polls only rows tagged with that batch_id. The cycle is orchestrated by scheduler jobs on the host, so the cadence is configurable per account, but the default is aimed at about one full pass per hour so the T0 to T1 gap stays near the 6-hour half-life.