Matthew Diakonov
11 min read
Social media automation vs RecurPost

RecurPost cycles evergreen content. S4L refuses to recycle.

RecurPost's pitch is content recycling: drop evergreen posts into a library, set a cadence, let the tool cycle them forever. It is a good pitch, and it is exactly what the top seven SERP results for this keyword praise. S4L's scheduler, in the same category, is built on the opposite rule. Before any pick runs, pick_thread_target.py enforces a per-subreddit cooldown floor. 1 day for your own community. 3 days for every external sub. Zero reuse of a thread URL, ever. Because on Reddit, on LinkedIn, on X, reuse is the thing the algorithms were trained to quietly punish.

Code-verified against scripts/pick_thread_target.py

  • Own floor: 1 day
  • External floor: 3 days
  • SQL lookback: 14 days
  • Thread URL reuse: 0
  • Content libraries: 0

What RecurPost actually does

Starting point, no strawman. RecurPost is a social media management tool whose distinctive feature is content recycling. You group posts into libraries, each library has a cadence, and the tool loops through the library and re-posts. There is also a calendar view, drag-and-drop scheduling, an RSS auto-import, a unified inbox, and IG DM automation. The core loop, the reason people pick it over Buffer or Hootsuite, is the recycling.

content libraries, evergreen recycling, posting calendar, drag-and-drop time slots, recycle cadence per library, unified inbox, RSS auto-import, IG DM automation, bulk upload CSV

The part the listicles skip: what platforms do with repeat patterns

Recycling is a user-facing feature. The platform-facing effect is a repetition signal. Every major platform scores accounts on patterns that include posting cadence regularity, copy similarity across the same author, and destination-repeat frequency. A tool whose core feature is literal recycling produces exactly the pattern the scoring is designed to catch.

Reddit: repeat posts trip spam filters

Reddit's site-wide anti-spam treats 'same account hitting the same sub repeatedly' as a signal. Automod rules in medium-and-up subs compound it. Shadowbans on Reddit are silent. You keep posting, nobody sees it, no error is returned.

LinkedIn: reach throttling

LinkedIn deboosts accounts that post with repetitive cadence or repeated copy. There is no notification. The metric you see drop is reach, specifically first-hour impressions, which then never recover on that post.

Twitter: deboost on similarity

X's visibility filters downrank near-duplicate text across the same author's timeline and across accounts with correlated posting patterns. A recycling tool produces exactly that pattern by design.

The moderator cost

Even when algorithms are generous, human mods see the same account drop near-identical posts across threads and reach for the ban hammer faster than for a first-time poster with a novel take.

The anchor: two constants at the top of a file

If the whole argument of this page compresses to one thing, it is two constants at the top of scripts/pick_thread_target.py. These are the floors a candidate subreddit has to clear before it can be picked. Nothing in the file overrides them silently. Every pick, every run, on every machine, runs through these checks first.

scripts/pick_thread_target.py
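The embedded snippet did not survive this export. Based on the constant names, values, and docstring sentence cited throughout this article, the top of the file looks roughly like this (a sketch, not the verbatim source):

```python
# Sketch of the top of scripts/pick_thread_target.py, reconstructed from
# the values this article cites -- not the verbatim source.
"""Pick one (project, subreddit) target per run.

Skip any subreddit where this account has posted an original thread
within that sub's floor window.
"""

DEFAULT_OWN_FLOOR_DAYS = 1       # own_community is reusable after 24 hours
DEFAULT_EXTERNAL_FLOOR_DAYS = 3  # every external sub waits 3 calendar days
```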

The one sentence that inverts the RecurPost model

"Skip any subreddit where this account has posted an original thread within that sub's floor window."

That is line 11 of the docstring. RecurPost's recycling loop is the exact opposite of this rule. The rest of the file (recent_posts_by_sub at line 69, build_candidates at line 94, pick at line 130) is the mechanism that makes the sentence true in production.

The SQL behind the floor

The check is not a cached flag; it is a Postgres SELECT against the live posts table every time the scheduler runs. That is why missed cron runs do not create a backlog: "how recently did we hit this sub?" is always measured against now.

scripts/pick_thread_target.py (recent_posts_by_sub)
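The embedded code is not reproduced here. A hedged sketch of what recent_posts_by_sub does, assuming column names (subreddit, posted_at) that this article implies but does not quote; the shape of the result dict is the part that matters:

```python
# Hedged sketch of recent_posts_by_sub. Table and column names are
# assumptions; the result shape is the article's: {subreddit_slug: days}.
SQL_RECENT = """
SELECT subreddit,
       EXTRACT(DAY FROM NOW() - MAX(posted_at)) AS days_since_last
FROM posts
WHERE posted_at > NOW() - make_interval(days => %(max_days)s)
GROUP BY subreddit
"""

def recent_posts_by_sub(cur, max_days):
    """Live SELECT on every run: cooldowns are measured against now."""
    cur.execute(SQL_RECENT, {"max_days": max_days})
    return {sub: int(days) for sub, days in cur.fetchall()}
```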

The result dict ({subreddit_slug: days_since_last}) is what build_candidates consults before admitting any sub into the pool. Two lines later, the pool is filtered down, and then a weighted random picks from what is left.

scripts/pick_thread_target.py (build_candidates)
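The embedded code is not reproduced here. A sketch of the filtering logic, with key names (subreddits, own_community, weight) assumed for illustration; the rule order is the one this article states, blocklist first, then the floor check:

```python
# Hedged sketch of the build_candidates filter -- field names are
# illustrative, the floor logic is the rule described in this article.
def build_candidates(projects, recent, blocked):
    """Return (project, subreddit, weight) tuples that clear both filters."""
    out = []
    for proj in projects:
        ext_floor = proj.get("external_floor_days", 3)
        own = proj.get("own_community")
        for sub in proj["subreddits"]:
            if sub in blocked:                 # blocklist before floor check
                continue
            floor = proj.get("own_floor_days", 1) if sub == own else ext_floor
            last = recent.get(sub)             # None = never posted there
            if last is not None and last < floor:
                continue                       # still inside cooldown window
            out.append((proj["name"], sub, proj.get("weight", 1.0)))
    return out
```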

The four numbers that define the cooldown

Own community floor

1 day

DEFAULT_OWN_FLOOR_DAYS. Your own sub can be re-used after 24 hours, but not before, so your own community does not turn into a posting board.

External subreddit floor

3 days

DEFAULT_EXTERNAL_FLOOR_DAYS. Three calendar days must pass between posts into any external sub, even if the sub is allowlisted and the content is different.

Lookback window

14 days

The SQL clamps the SELECT to NOW() - INTERVAL '14 days'. Anything older cannot block a candidate, so cooldowns always expire.

Reuse of the same thread URL

0

get_already_posted() returns the full set of thread URLs touched, ever. A thread that has received one of our replies is out of the candidate pool permanently.

One run, seven messages

This is the communication shape of a single pick. Cron fires. The script loads config, reads the posts table, does two in-process filters (floor window, blocklist), and writes one line to stdout (or exits non-zero).

pick_thread_target.py --json, one run

  1. launchd (every 30m) -> pick_thread_target.py: invoke --json
  2. pick_thread_target.py -> config.json: load floor_days per project
  3. pick_thread_target.py -> posts (Postgres): SELECT posts WHERE posted_at > NOW() - INTERVAL 14d
  4. posts (Postgres) -> pick_thread_target.py: rows -> days_since_last per sub
  5. pick_thread_target.py: drop subs where days_since < floor
  6. pick_thread_target.py: drop subs in thread_blocked
  7. pick_thread_target.py -> stdout (downstream): PROJECT + SUBREDDIT (or exit 1)

Four steps, in order, every time

A pick is not a ranking problem; it is a filter, then a coin flip. The ranking (project.weight) only runs over candidates that already cleared both filters.

  1. Load project config

    Read threads.external_floor_days per project. Default to 3 if absent.

  2. Pull recent posts

    SELECT thread_url, days_ago FROM posts filtered to the last max(own, external, 14) days.

  3. Filter by floor window

    Drop every sub where days_since_last < that project's floor. Own community uses own floor.

  4. Emit or exit

    Return one (project, sub) via weighted random, or exit 1 if the candidate set is empty.
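The final step compresses to a few lines. A sketch, assuming candidates arrive as (project, subreddit, weight) tuples (the tuple shape is illustrative, not the file's actual types):

```python
import random

def pick(candidates, rng=random):
    """Weighted random over the already-filtered set; None when it is empty."""
    if not candidates:
        return None                      # the CLI maps this to exit code 1
    weights = [w for _, _, w in candidates]
    proj, sub, _ = rng.choices(candidates, weights=weights, k=1)[0]
    return proj, sub
```

A banned or cooling-down sub never reaches this function, so no weight, however large, can resurrect it.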

A real invocation, with the floor doing its job

The log below is the stderr shape of one pick_thread_target.py --json run with debug output enabled. r/fazm and r/SaaS are both inside their cooldown windows and get filtered. r/s4l and r/Entrepreneur clear; own_community is preferred.

python3 scripts/pick_thread_target.py --json
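The embedded log did not survive this export. Under the scenario described above, and with day counts that are illustrative here rather than pulled from a real run, the stderr shape would look something like:

```text
[pick] r/fazm          days_since_last=1  floor=3  -> filtered (cooldown)
[pick] r/SaaS          days_since_last=2  floor=3  -> filtered (cooldown)
[pick] r/s4l           days_since_last=4  floor=1  -> eligible (own_community)
[pick] r/Entrepreneur  days_since_last=6  floor=3  -> eligible
[pick] own_community preferred -> r/s4l
```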

And at the thread level: zero reuse, ever

The subreddit floor bounds placement repetition. But the thread-level rule is stricter: a thread URL we have ever touched is out of the candidate pool for the lifetime of the database. No cooldown window, no override. It is implemented with one set-fetch and one set-membership test, and it is why replies never double-post.

scripts/find_threads.py (get_already_posted)
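The embedded code is not reproduced here. A sketch of get_already_posted, using the exact SQL this article quotes later in the FAQ; the cursor plumbing around it is an assumption:

```python
# Hedged sketch of get_already_posted in scripts/find_threads.py.
# The SQL is the query this article quotes; the plumbing is assumed.
SQL_URLS = "SELECT thread_url FROM posts WHERE thread_url IS NOT NULL"

def get_already_posted(cur):
    """Every thread URL ever touched, as a set. Lifetime dedup, no window."""
    cur.execute(SQL_URLS)
    return {row[0] for row in cur.fetchall()}
```

Downstream, dedup is one membership test: a discovered thread whose URL is in this set is dropped before it can be considered for a reply.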

Four inputs in, one decision out

Every pick is the fusion of four sources of truth. The posts table says what we did. config.json says what we are allowed to do. The blocklist says what we must never do. The own_community override says where we lean. One subreddit comes out. Often, none does, and that is also a valid outcome.

pick_thread_target.py, one invocation

posts table (Postgres)
config.json (per-project floor_days)
subreddit_bans.thread_blocked
own_community.floor_days override
build_candidates + pick (weighted random)
(project, subreddit) picked
exit 1: no candidates, no post
row inserted on successful post (feeds next pick)

The six rules the picker always applies

Nothing below is toggled on by a dashboard; these rules are the code path.

Active on every run

  • own_community gets a 1 day floor (DEFAULT_OWN_FLOOR_DAYS) and is preferred when present
  • external subreddits get a 3 day floor (DEFAULT_EXTERNAL_FLOOR_DAYS), overridable per project
  • subreddit_bans.thread_blocked filters candidates before floor_days is evaluated
  • if no candidate passes both filters, the script exits with code 1 and nothing posts
  • every thread URL ever touched is in get_already_posted(); matches are dropped permanently
  • weighted random runs only over the eligible set, so a banned sub cannot be picked by weight

Skip any subreddit where this account has posted an original thread within that sub's floor window.

scripts/pick_thread_target.py, line 11 of the docstring

RecurPost vs S4L, feature by feature

Not a knock on RecurPost; a different product category. If your content is evergreen marketing copy and your destinations are your own owned channels (your company Facebook page, your own Twitter handle with a reach-motivated audience), recycling is a fit. If your content is engagement into communities with mods, algorithms, and human readers who remember, anti-recycling is a fit.

Core scheduling primitive
  RecurPost: Content libraries. Posts live in a library and the scheduler loops through them on the cadence you set.
  S4L: A pick function that returns one (project, subreddit) pair per run and enforces cooldowns on every call.

Per-placement cooldown
  RecurPost: None by default. The recycling cadence is how often a post is allowed to repeat; there is no floor on how recently that destination was hit.
  S4L: DEFAULT_OWN_FLOOR_DAYS=1, DEFAULT_EXTERNAL_FLOOR_DAYS=3. Overridable per project via config.threads.external_floor_days and per own_community via own_community.floor_days.

Dedup on placement identity
  RecurPost: The library item gets reused. That is the feature.
  S4L: get_already_posted() in find_threads.py returns every thread URL ever touched. Match = skip. No retry, no second pass.

Blocklist
  RecurPost: Per-platform account connect, with no native 'do not post here' list at the subreddit granularity.
  S4L: subreddit_bans.thread_blocked and subreddit_bans.comment_blocked lists are read from config.json on every pick and filter candidates before scoring.

Own vs external treatment
  RecurPost: All channels treated the same, since the model assumes you own the destinations (your own pages).
  S4L: own_community gets a shorter floor (1 day) and is preferred when present; externals get 3 days and are weighted by project.weight only after the floor passes.

What wakes the scheduler
  RecurPost: A time slot on a calendar. 9am Monday. 3pm Thursday.
  S4L: A launchd run fires the pick script; if the pick returns None, nothing posts and the next run is picked up by cron without any scheduling drama.

What survives a missed run
  RecurPost: The calendar slot. It either fires next cycle or it does not, depending on the tool.
  S4L: The state. Cooldowns are derived from the posts table at pick time, so a missed cron run does not create a backlog. Cooldowns are always measured against 'right now'.

The trade-off
  RecurPost: You gain reach per post because the same post is seen by more people over time.
  S4L: You give up the scale of recycling. You gain a footprint that looks like a human account posting at human cadence, which is what most platform algorithms were trained not to punish.
Own community floor: 1 day
External sub floor: 3 days
SQL lookback: 14 days
Content libraries: 0
Recycled posts per run: 0

Anti-recycling primitives

  • DEFAULT_OWN_FLOOR_DAYS = 1
  • DEFAULT_EXTERNAL_FLOOR_DAYS = 3
  • max_days = max(own, external, 14)
  • thread_url == our_url
  • days_since_last >= floor
  • subreddit_bans.thread_blocked
  • own_community.floor_days override
  • weighted random over eligible set
  • exit 1 on empty candidates
  • get_already_posted() dedup forever

Want to see the picker run on your config?

30 minutes on Cal. We load your project list into pick_thread_target.py, pull your account's history into the posts table, and walk through every filter the scheduler applies before anything is ever posted.

Book a call

Questions people actually ask about cooldown-based schedulers

Is S4L a RecurPost alternative?

Not in the way most listicles mean it. RecurPost is a calendar scheduler whose core primitive is the content library: you drop evergreen posts into a library and the tool cycles them on a cadence you set, forever. S4L does not have a content library, does not have recycling, and does not have a calendar UI. What it has is a pick function (scripts/pick_thread_target.py) that returns one (project, subreddit) per invocation and enforces a cooldown floor on every call. If your goal is to reuse the same post into the same destination, RecurPost is the product shape. If your goal is to never reuse a destination inside a cooldown window, S4L is.

Why does S4L refuse to recycle content?

Because platform algorithms treat reuse as a signal. On Reddit, repeat posts into the same subreddit from the same account trip spam filters and subreddit automod, and the account is silently shadowbanned (you keep posting, nobody sees it, no error is returned). On LinkedIn, reach throttling kicks in when posting cadence is too regular or copy is too similar, and the affected metric is first-hour impressions, which rarely recover. On X, near-duplicate text across the same timeline gets deboosted by visibility filters. A tool whose core feature is recycling is also the tool most likely to push an account into these states. S4L's anti-recycling rule is not ideology, it is a reaction to how the platforms score accounts.

Where is the cooldown floor actually enforced in code?

In scripts/pick_thread_target.py, specifically the build_candidates function at lines 94-127. Two constants at the top of the file (DEFAULT_OWN_FLOOR_DAYS=1, DEFAULT_EXTERNAL_FLOOR_DAYS=3) define the defaults. The recent_posts_by_sub(max_days) function runs a Postgres SELECT that computes days_since_last_our_thread per subreddit. build_candidates then drops any sub where last is not None and last < floor. The check happens twice, once for own_community (with its own floor), once for each external sub. If nothing passes, the function returns an empty list and the CLI exits 1.

What if I want to post into the same sub more often than every 3 days?

You can override the external floor per project by setting threads.external_floor_days in that project's config entry in config.json. If you set it to 1, external subs are eligible 24 hours after the last post. The logic on line 103 of pick_thread_target.py reads int(t.get('external_floor_days', DEFAULT_EXTERNAL_FLOOR_DAYS)). The default is the cautious choice, not a hard limit. For your own community (own_community), the override lives on own_community.floor_days and defaults to 1 day.
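Assuming the config.json layout implied by those key paths (the project name and surrounding structure here are illustrative), an override would look something like:

```json
{
  "projects": [
    {
      "name": "example-project",
      "threads": { "external_floor_days": 1 },
      "own_community": { "floor_days": 1 }
    }
  ]
}
```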

How is 'the same thread' detected?

By URL, against the posts table. get_already_posted() in scripts/find_threads.py runs SELECT thread_url FROM posts WHERE thread_url IS NOT NULL and returns the full set. The candidate-thread pipeline filters out any discovered thread whose URL is in that set before it can even be considered for a reply. Dedup is lifetime, not windowed, because replying to the same thread twice from the same account is much worse than posting into the same subreddit a second time.

What if the scheduler has no eligible subreddit to pick?

It exits 1 and prints 'no candidates' on stderr. Nothing posts. The next launchd run picks it up cleanly. This is the right behavior: if every eligible destination is inside its cooldown window, the correct action is to post nothing, not to force a post into a borderline destination. Contrast with a calendar scheduler, where 'no eligible' is not a concept because the calendar is the source of truth and it will fire whether the destination is safe or not.

How does this interact with RecurPost's content recycling if I use both?

You would not typically run both against the same account. RecurPost's recycling loop and S4L's floor_days enforcement will fight each other: RecurPost will keep firing the recycled post into a destination S4L has already cooled down, and the platform will see both patterns additively. If you are testing the difference, run them on separate accounts or at least separate subreddits with no overlap, and compare account-level metrics (first-hour reach, post visibility, automod hits) over a 30 day window.

Is there a web UI for managing the cooldowns?

No. The floor_days values live in config.json alongside the rest of the project config, and the state (when each sub was last touched) lives in the Postgres posts table. You inspect it with psql, or by reading the --show-all output of pick_thread_target.py. There is no 'manage your schedule' dashboard because there is no schedule to manage, only a pick function and a set of rules.

Why a 14 day lookback window on the SQL?

recent_posts_by_sub runs max(DEFAULT_OWN_FLOOR_DAYS, DEFAULT_EXTERNAL_FLOOR_DAYS, 14) as the INTERVAL clamp. 14 is the ceiling because no floor is larger than that in practice, so anything older cannot affect a decision. It also keeps the SELECT fast: the posts table grows unbounded over time, but the relevant window is always recent, and the index on posted_at makes a 14 day scan cheap.
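The clamp itself is one expression over the two constants:

```python
DEFAULT_OWN_FLOOR_DAYS = 1
DEFAULT_EXTERNAL_FLOOR_DAYS = 3

# 14 is the ceiling: neither default floor exceeds it, so it bounds the
# INTERVAL and the rows the SELECT has to touch.
max_days = max(DEFAULT_OWN_FLOOR_DAYS, DEFAULT_EXTERNAL_FLOOR_DAYS, 14)
```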

How does this map to LinkedIn, Twitter, GitHub?

The floor_days pattern is reused across platforms. linkedin_cooldown.py (scripts/linkedin_cooldown.py) adds a platform-level cooldown on top, writing a JSON state file at /tmp/linkedin_cooldown.json when the account hits rate limits or checkpoint challenges. A cron run checks that file before firing. Twitter and GitHub have their own analogues. The rule is the same across all of them: the scheduler is the one asking 'is the destination cool?', not the one saying 'it is 9am, fire the post'.
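A sketch of the state-file gate described above. The path is the one this article names; the JSON field ("cooldown_until", a unix timestamp) is an assumption about the file's contents, not the script's actual schema:

```python
# Hedged sketch of the linkedin_cooldown.py gate -- field name assumed.
import json
import time
from pathlib import Path

STATE_FILE = Path("/tmp/linkedin_cooldown.json")

def linkedin_is_cool(now=None):
    """True when no cooldown state file blocks the next LinkedIn run."""
    now = time.time() if now is None else now
    if not STATE_FILE.exists():
        return True                        # no state file, no cooldown
    state = json.loads(STATE_FILE.read_text())
    return now >= state.get("cooldown_until", 0)
```

The cron run calls this before firing; a rate limit or checkpoint challenge writes the file, and the next runs become no-ops until the timestamp passes.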