Matthew Diakonov

Social media post automation that spends most cron ticks refusing to post

Every guide for this keyword treats automation as a queue. Load 30 days of captions, pick the time slots, hit publish. That shape works when posting is free. It stops working the moment your platform can rate-limit you, flag the account, or penalize the subreddit. S4L is built the other way around. The cron tick starts by trying to fail. It runs five cheap guards, any one of which can abort the whole run, and only then does it spend the ~$0.44 it takes to draft the actual post with Claude.

4.9 from operators running multi-platform posting loops
5 guards fire before any draft
300ms probe to /r/popular.json per run
PREFLIGHT_WAIT_BUDGET_SECONDS = 180
Floor days: 1 own community, 3 external
180s hard sleep between posts in one session

What the top pages on this keyword all describe

Look at any top-ten listicle for "social media post automation." The tools differ on UI and seat pricing. They do not differ in shape. All of them are calendar-first schedulers. They assume posting succeeds. They have no first-class concept of refusing a post because it is too soon, too similar, or too close to a rate limit.

Hootsuite · Buffer · Later · SocialBee · Sprout Social · Sendible · Metricool · Agorapulse · Planoly · Zapier · Publer · Pallyy

S4L replaces "when do I post" with "am I allowed to post." That is the reframe this page is built around. Below is the exact code that enforces it.

The anchor: a 300ms probe before a $0.44 spawn

This is the single most important function in the posting pipeline and the one every other post-automation listicle skips. Before any Claude session is spawned, _probe_reddit_quota makes one GET against https://old.reddit.com/r/popular.json?limit=1 and reads two response headers. X-Ratelimit-Remaining tells it how many requests are left in the rolling window. X-Ratelimit-Reset tells it how many seconds until the window rolls over. If the reset is longer than the 180-second budget, the function returns False and the runner exits.

scripts/post_reddit.py · preflight_rate_limit

One request. Two headers. One cached JSON. The decision to spend or not spend the Claude budget is made in about 300 milliseconds.

On a 429 response the probe still captures the reset header and still updates the cache, so downstream reddit_tools.py calls in the same process share the fresh state. If the probe network-fails, the runner falls back to the cached JSON at /tmp/reddit_ratelimit.json. Belt and suspenders.

PREFLIGHT_WAIT_BUDGET_SECONDS = 180 · probe ~300ms · spawn ~$0.44 · cache: /tmp/reddit_ratelimit.json
scripts/post_reddit.py

What the same cron tick looks like, pass vs refuse

Two runs from the same loop, copied almost verbatim from the log. The first passes every guard and posts. The second hits an empty quota window and refuses, saving the Claude spawn.

passed · posts committed
refused · cost avoided

The five guards, each can abort the run on its own

The guards are ordered cheapest-first. The LinkedIn cooldown file is a stat call. The probe is one HTTP round trip. The floor-days query, which covers both floor guards, is one indexed SELECT. The comment-blocked list is an in-memory set lookup. Only after every one of those passes do you spend LLM tokens, and only then is the 180-second intra-session sleep enforced.

guard 1 · live rate-limit probe

300ms GET to /r/popular.json. Reads X-Ratelimit-Remaining and X-Ratelimit-Reset. If reset is over 180 seconds, the whole run aborts before a single Claude token fires. Cached to /tmp/reddit_ratelimit.json.

guard 2 · own-community floor (1 day)

DEFAULT_OWN_FLOOR_DAYS = 1. You can post to your own subreddit once per rolling 24 hours, measured by a Postgres query against posts where thread_url = our_url.

guard 3 · external floor (3 days)

DEFAULT_EXTERNAL_FLOOR_DAYS = 3. Any outside subreddit is on a 72-hour lockout after a successful original thread. Overridable per project via threads.external_floor_days.

guard 4 · platform cooldown JSON

A single file at /tmp/linkedin_cooldown.json with reason, resume_after, created_at. Any cron can consult it. Exit code 0 is clear, 1 is blocked. Auto-deletes once resume_after passes.

guard 5 · subreddit comment_blocked list

If the bot ever hits a locked or restricted sub, it writes the sub into config.json subreddit_bans.comment_blocked at runtime. Future runs never target it. Thread bans live in subreddit_bans.thread_blocked.

only then · the Claude draft

By the time the prompt is assembled, the target, floor, and cost are all known-good. A full Claude session with search and verify is around $0.44. The guards above exist to make sure that $0.44 is not wasted.
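The cheapest-first ordering above amounts to a short-circuit over a list of guard callables. This is an illustrative sketch of that shape, not the project's runner code; the guard names match the cards above, the lambdas stand in for the real checks.

```python
def run_preflight(guards):
    """Run guards in order; return the name of the first one that
    refuses, or None if every guard passes."""
    for name, guard in guards:
        if not guard():
            return name  # abort: log this name, skip the Claude spawn
    return None

# Stand-in guard callables, cheapest first.
guards = [
    ("linkedin_cooldown", lambda: True),   # stat call on a JSON file
    ("rate_limit_probe",  lambda: True),   # one HTTP round trip
    ("floor_days",        lambda: False),  # indexed SELECT against posts
    ("comment_blocked",   lambda: True),   # in-memory set lookup
]

blocked_by = run_preflight(guards)
print(blocked_by)  # → floor_days
```

Because the first refusal short-circuits, a typical refused tick never pays for the checks after the one that fired, which is what keeps the cost of a refusal bounded.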

How the floor-days filter is actually computed

Pulled directly from scripts/pick_thread_target.py. One Postgres query returns a dict mapping each subreddit slug to how many days ago the account last posted an original thread there. Own community floor is 1 day, external is 3. Inside the floor window, the candidate is dropped.

scripts/pick_thread_target.py
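A minimal sketch of that filter, with the Postgres query replaced by a plain dict of last-posted ages so the logic is visible on its own. The constants match the page; the function name and the candidate shape are assumptions.

```python
DEFAULT_OWN_FLOOR_DAYS = 1
DEFAULT_EXTERNAL_FLOOR_DAYS = 3

def eligible_subs(candidates, days_since_last_post):
    """Drop any candidate still inside its floor window.

    candidates: list of (subreddit, is_own_community) pairs.
    days_since_last_post: {subreddit: days since the last original
    thread}; a missing key means the account has never posted there.
    """
    out = []
    for sub, is_own in candidates:
        floor = DEFAULT_OWN_FLOOR_DAYS if is_own else DEFAULT_EXTERNAL_FLOOR_DAYS
        age = days_since_last_post.get(sub)
        if age is None or age >= floor:
            out.append(sub)
    return out

print(eligible_subs(
    [("s4l", True), ("selfhosted", False), ("automation", False)],
    {"s4l": 0.5, "selfhosted": 4, "automation": 2},
))  # → ['selfhosted']
```

The own community posted half a day ago and the second external sub two days ago, so both are inside their floors and only the four-day-old external sub survives.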

The whole pipeline, one diagram

Every cron tick funnels through the same five guards before the Claude draft fires. Most ticks never reach the right side of this diagram at all, and that is by design.

posting pipeline

cron tick (launchd or systemd timer) → 5 preflight guards → Claude draft ($0.44) → posts table insert → 180s intra-session sleep → next cron tick

The numbers that keep the pipeline cheap

Every constant here is a line in post_reddit.py or pick_thread_target.py. Changing any of them changes the shape of the automation.

180s
PREFLIGHT_WAIT_BUDGET_SECONDS
~300ms
Avg latency of the rate-limit probe
1 day
DEFAULT_OWN_FLOOR_DAYS
3 days
DEFAULT_EXTERNAL_FLOOR_DAYS
180s
time.sleep between posts in one Claude session
10s
urllib probe timeout
5
Guards that can abort a run
3
State files (reddit, linkedin, config bans)

One cron tick, step by step

This is the exact order the posting pipeline runs. The first five steps are cheap. The sixth is the expensive one that only fires when the five cheap ones all agree.

1 · Cron fires

A launchd or systemd timer on your box (bin/scheduler/launchd.js, bin/scheduler/systemd.js) kicks a single invocation. Nothing else runs until this one exits.

2 · Platform cooldown check

For LinkedIn: python3 linkedin_cooldown.py check. Exit 1 means a previous run set a resume_after that has not passed yet. The runner logs the reason and quits.

3 · Live rate-limit probe

_probe_reddit_quota fires a GET against /r/popular.json. Headers read, state file updated. If reset > 180s the run aborts, saving the Claude spawn.

4 · Candidate build with floor-days

pick_thread_target.py runs a Postgres query for recent posts per subreddit. Own community is eligible if last_posted >= 1 day, external if >= 3 days. Banned subs are stripped out.

5 · Weighted project sampling

If any own-community slot is open, pick it. Otherwise group candidates by project, use project.weight as the sampling weight in random.choices, then pick a random eligible sub inside the chosen project.

6 · Only now: Claude drafts

The prompt is assembled with the chosen target, the engagement styles block, and the project context. The model runs. A row lands in the posts table. The runner sleeps 180 seconds and loops.

The cooldown file any cron can read

A separate, simpler guard for LinkedIn. One JSON file at /tmp/linkedin_cooldown.json. Any process on the box can consult it. Any human can write to it to pause posting without touching the code. If the resume_after has passed, the read call auto-deletes the file so you never end up in a stale-cooldown state after a reboot.

scripts/linkedin_cooldown.py
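A hedged sketch of that file's read/write cycle. The path, the three fields, the exit-code convention, and the auto-delete on a passed resume_after come from this page; the function names and signatures here are illustrative, not the script's actual API.

```python
import json
import os
from datetime import datetime, timezone

COOLDOWN_PATH = "/tmp/linkedin_cooldown.json"

def set_cooldown(reason, resume_after, path=COOLDOWN_PATH):
    """Write the cooldown file. A human can do the same with an editor."""
    with open(path, "w") as f:
        json.dump({"reason": reason,
                   "resume_after": resume_after.isoformat(),
                   "created_at": datetime.now(timezone.utc).isoformat()}, f)

def read_cooldown(path=COOLDOWN_PATH):
    """Return the active cooldown dict, or None. Auto-deletes a stale
    file so a reboot never leaves the runner stuck in cooldown."""
    try:
        with open(path) as f:
            state = json.load(f)
    except (OSError, ValueError):
        return None  # no file, or unreadable: not in cooldown
    resume_after = datetime.fromisoformat(state["resume_after"])
    if datetime.now(timezone.utc) >= resume_after:
        os.remove(path)  # resume_after has passed: clear the file
        return None
    return state

def check(path=COOLDOWN_PATH) -> int:
    """Exit-code convention from the page: 0 = clear to post, 1 = blocked."""
    return 1 if read_cooldown(path) else 0
```

Because the state is one world-readable JSON file with a plain timestamp, any cron, any script in any language, or any human with a shell can both consult and set it.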

Why rejection-first beats queue-first for post automation

The cost of a refused tick is bounded and small. The cost of a wrongly queued post is unbounded. Five reasons the rejection-first design pays back faster than the calendar-first one.

Why the guards run before the draft

  • A queued post that triggers a rate-limit flag can cost you the account. One refused cron tick costs nothing.
  • A $0.44 Claude spawn that discovers it cannot search is $0.44 of your budget on a run that produces nothing.
  • Per-subreddit cooldowns measured in days keep moderators from pattern-matching your account to 'always the same sub.'
  • A shared cooldown JSON lets any cron or any human pause posting without redeploying anything.
  • Rejection-first automation is auditable. You can grep the log and see which guard fired. Queue automation is not.
180s · Rate-limit budget
1 day · Own-community floor
3 days · External floor
180s · Intra-session sleep

S4L vs a calendar-first scheduler

The design axis is not features, it is which action is the default one. Calendar-first tools default to "post." S4L defaults to "do not post unless allowed." Here is what that looks like line by line.

Feature · Calendar-first scheduler · S4L (preflight guards)
Default behavior when you run it · Post next item from the queue. · Refuse unless every guard passes.
Cost of a wasted run · Whatever a full draft costs, already spent. · ~$0.0003 (a single probe request).
Rate-limit awareness · Retry after 429, possibly the account is already flagged. · Live header probe before the draft.
Per-subreddit cooldowns · Per-account cap, not per-subreddit. · SQL against posts table, floor days per sub.
Shared platform cooldown file · Internal state, not exposed. · /tmp/linkedin_cooldown.json, any cron can read it.
Inter-post sleep inside one session · Whatever the UI lets you click Schedule for. · time.sleep(180) hard floor.
What you read to audit a refusal · A greyed-out row in a calendar. · The cron log line that names which guard fired.
The one sentence this page is built around

The tool that writes the post first asks, with a 300ms HTTP call, whether writing the post is allowed right now.

That question is the thing the top search results never raise, and it is also the thing that keeps the account alive.

See the probe fire on a live account

30 minutes on Cal. Screen share of the runner, one real cron tick, the exact log line that shows remaining and reset coming back from the Reddit header.

Book a call

Questions operators ask before the first call

Why should post automation care about rate limits before it writes anything?

Because the draft is the expensive step. A Claude session that writes a Reddit post runs around $0.44 on average when it has to do search, outline, revise, and verify. If Reddit's search endpoint returns 429 partway through, that whole session is wasted. scripts/post_reddit.py runs a single 300ms probe against https://old.reddit.com/r/popular.json?limit=1 first, reads X-Ratelimit-Remaining and X-Ratelimit-Reset from the response headers, and compares the reset window to PREFLIGHT_WAIT_BUDGET_SECONDS = 180. If the reset is longer than that, the run aborts. The probe is ~$0.0003. The math is obvious.

Where exactly is the probe code?

scripts/post_reddit.py, function _probe_reddit_quota, lines 52 to 79. It uses urllib.request with a 10-second timeout, falls back to /tmp/reddit_ratelimit.json as a cached state file if the network fails, and writes the fresh state back to that file so any downstream reddit_tools.py call uses the same numbers. On a 429 response it captures the X-Ratelimit-Reset from the error headers and still updates the cache.

What are the per-subreddit cooldown floors?

scripts/pick_thread_target.py lines 32-33. DEFAULT_OWN_FLOOR_DAYS is 1 day (you can post to your own community daily). DEFAULT_EXTERNAL_FLOOR_DAYS is 3 days (any external subreddit has to wait three days between original threads). Both are overridable per project via config.json threads.external_floor_days or threads.own_community.floor_days. The recent_posts_by_sub function runs a Postgres query against the posts table filtered by thread_url = our_url and the subreddit is pulled from the URL path. If the row is inside the floor window, that subreddit is dropped from the candidate list.

What is the shared LinkedIn cooldown file for?

scripts/linkedin_cooldown.py keeps a single JSON at /tmp/linkedin_cooldown.json with three fields: reason (why we backed off), resume_after (ISO 8601 timestamp when posting can resume), and created_at. Any cron run can call 'python3 linkedin_cooldown.py check' before posting. Exit code 0 means clear to post, exit code 1 means we are still in cooldown. The file is auto-deleted when read_cooldown sees the resume_after has passed, so you never end up in a stale-cooldown state. LinkedIn will restrict an account that gets 429d a few times in a row, so this file is a defensive layer you can set manually after a 429 or an account challenge.

How is the 3-minute gap between posts enforced inside one session?

scripts/post_reddit.py line 557: 'time.sleep(180) # 3 min gap between posts within a single Claude session.' That is a hard sleep inside the runner that keeps Reddit's spam filter from flagging the account for posting several times in quick succession. It is not configurable. Three minutes is the floor, not the target. If the Claude session runs long on its own, the real gap is longer.

How does weighted project sampling actually pick which project posts next?

scripts/pick_thread_target.py, function pick, lines 130 to 145. If any 'own_community' candidates are eligible (no thread within 1 day), one is picked at random and the function returns. Otherwise the eligible candidates are grouped by project name, weights are read from each project's config.weight (default 1), and random.choices picks one project. Inside the picked project, a random eligible external subreddit is chosen. This means if you set weights {fazm: 3, s4l: 1}, fazm posts roughly three times as often when no own-community slot is open.
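The sampling described in that answer can be sketched like this. The random.choices call, the default weight of 1, and the own-community short-circuit come from the page; the function name, argument shapes, and the seeded demo are assumptions for illustration.

```python
import random

def pick_project(eligible_by_project, weights, rng=random):
    """Pick a project with weight-proportional probability, then a
    random eligible subreddit inside it.

    eligible_by_project: {project: [eligible subreddits]}.
    weights: {project: sampling weight}, default 1 when missing.
    Returns (project, subreddit), or None if nothing is eligible.
    """
    projects = [p for p, subs in eligible_by_project.items() if subs]
    if not projects:
        return None
    w = [weights.get(p, 1) for p in projects]
    project = rng.choices(projects, weights=w, k=1)[0]
    return project, rng.choice(eligible_by_project[project])

# With weights {fazm: 3, s4l: 1}, fazm should land roughly three
# times as often as s4l when no own-community slot is open.
rng = random.Random(7)
counts = {"fazm": 0, "s4l": 0}
for _ in range(4000):
    project, _sub = pick_project(
        {"fazm": ["a", "b"], "s4l": ["c"]}, {"fazm": 3, "s4l": 1}, rng)
    counts[project] += 1
print(counts["fazm"] > 2 * counts["s4l"])  # → True
```

The weights only shape the choice between projects; inside the winning project every still-eligible subreddit is equally likely.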

What if every guard fails and the run has nothing to post?

That is the intended outcome most of the time. A normal day has the own-community floor used up, external subs on cooldown, and the rate probe reporting a low remaining budget. The cron tick logs the reason, exits, and waits for the next tick. 'Do not post' is the default. The Claude draft is the exception that only fires when all the guards agree.

Does the same pipeline work on Twitter, LinkedIn, and GitHub?

The shape is the same, the details differ. LinkedIn has its own cooldown file at /tmp/linkedin_cooldown.json. GitHub uses a separate script (scripts/engage_github.py and scripts/post_github.py) with repo-level rate checks instead of subreddit floor-days. Twitter uses scripts/twitter_browser.py with CDP sessions and its own state. The common idea is: a cheap guard first, the expensive LLM second. Every platform script starts by asking 'can I post at all' before it asks 'what should I post.'