Reddit marketing tool

The seven guardrails that run before S4L drafts a comment

Every reddit marketing tool review compares discovery, monitoring, and AI reply quality. They all skip the part that decides whether your account survives: the guardrails that refuse to engage in the first place. Here is what each one does, taken from the actual source files.

Matthew Diakonov · 8 min read · 4.8 from 47 operators
  • 180-day thread age cutoff
  • Dual HTML+JSON lock verification
  • 180s rate-limit preflight abort
  • 18 comment-blocked / 31 thread-blocked subs tracked separately

Why this angle exists

If you search "reddit marketing tool," the top five results read almost the same. They rank tools by discovery quality, AI reply quality, pricing per comment, and which SaaS outsources your posting to a managed account. Those are real features, but they answer the wrong question.

The question that decides whether any of those features matter is whether your account is still active in six months. Reddit does not publish its shadowban heuristics, but operators running at scale know the patterns that draw moderation attention: commenting on archived threads, engaging while rate-limited, posting the same comment shape on the same subs, replying to threads your account already touched.

S4L's Reddit pipeline is built around refusing those patterns. Seven specific guards run before a single word of a comment gets drafted, each one traceable to a line in a source file you can read.

  • 4320h — max thread age (180 days)
  • 180s — rate-limit preflight budget
  • 18 — comment-blocked subs
  • 31 — thread-blocked subs

The guardrails, one by one

Seven checks, grouped by when in the flow they fire. Each is one or two functions in the repo. Each has a concrete failure mode it exists to prevent.

Guard 1: 180-day age cutoff

Any thread whose age_hours is over 4320 is dropped before it ever reaches the drafter. Archived threads are invisible to Reddit's comment box and commenting on them is one of the loudest signals of a scripted account.

Guard 2: dual lock check

The JSON locked flag is not enough. Seen on r/Entrepreneur: AutoMod renders a .locked-tagline element in the HTML while the JSON payload still says locked=false. S4L fetches the old.reddit page and regexes for the tagline class as a second opinion.

Guard 3: rate-limit preflight

Before Claude even spawns, a single probe hits old.reddit.com/r/popular.json?limit=1 to read live X-Ratelimit headers. If reset is more than 180 seconds away, the run aborts. No half-finished comments, no burnt quota.

Guard 4: comment-blocked vs thread-blocked

18 subs block comments, 31 block thread creation. These are stored as separate lists in config.json because a sub like r/startups allows comments but not new self-posts, and blending the lists would leak ban targets into the wrong pipeline.

Guard 5: hard dedup by thread_url

Every posts row is keyed on thread_url. A SELECT happens before any draft. If S4L already left a comment on that thread, the entry is labeled SKIP and the drafter never sees it as a candidate.
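That check can be sketched in a few lines; the schema and helper name here are assumptions for illustration, not the repo's exact code:

```python
import sqlite3

# Minimal sketch of thread_url dedup: one row per thread, checked with a
# SELECT before any draft. Table shape and names are assumed.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (thread_url TEXT PRIMARY KEY, engagement_style TEXT)")
db.execute("INSERT INTO posts VALUES (?, ?)",
           ("https://reddit.com/r/startups/abc", "contrarian"))

def already_posted(conn: sqlite3.Connection, thread_url: str) -> bool:
    """True if S4L has already left a comment on this thread."""
    row = conn.execute("SELECT 1 FROM posts WHERE thread_url = ?",
                       (thread_url,)).fetchone()
    return row is not None
```

Anything for which `already_posted` returns True is labeled SKIP and never reaches the drafter.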

Guard 6: reply-level dedup

Inbox backfill has a 48-hour cutoff so stale replies never trigger a response, and the replies table is keyed on their_comment_id so a single parent comment only ever gets one reply from the account.
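One way to enforce that invariant is to let the primary key do the work; the table shape and names below are illustrative, not lifted from the repo:

```python
import sqlite3

# Reply-level dedup sketch: keying replies on their_comment_id means a
# second reply to the same parent comment fails the INSERT outright.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE replies (their_comment_id TEXT PRIMARY KEY, our_reply TEXT)")

def record_reply(c: sqlite3.Connection, comment_id: str, text: str) -> bool:
    """Returns False if this parent comment was already answered."""
    try:
        c.execute("INSERT INTO replies VALUES (?, ?)", (comment_id, text))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate their_comment_id: already replied
```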

Guard 7: dynamic style tiers

engagement_styles.py reads avg_upvotes per style from the posts table and gates trust at n >= 5. Styles with fewer than 5 samples go to 'explore' instead of 'dominant', so the bot does not double down on a style that just got lucky twice.
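The gating logic can be sketched as follows; the tier names follow the article's description, while the function and sample data are illustrative:

```python
MIN_SAMPLE_SIZE = 5  # per the article: trust is gated at n >= 5

def assign_tiers(stats: dict) -> dict:
    """stats: {style: {"avg_upvotes": float, "n": int}} -> {style: tier}."""
    # Under-sampled styles go to 'explore' no matter how good the average looks.
    tiers = {s: "explore" for s, v in stats.items() if v["n"] < MIN_SAMPLE_SIZE}
    trusted = sorted((s for s in stats if s not in tiers),
                     key=lambda s: stats[s]["avg_upvotes"], reverse=True)
    cut = max(1, len(trusted) // 3)  # top third dominant, middle secondary, rest rare
    for i, s in enumerate(trusted):
        tiers[s] = "dominant" if i < cut else ("secondary" if i < 2 * cut else "rare")
    return tiers

stats = {
    "contrarian":  {"avg_upvotes": 9.0,  "n": 12},
    "storyteller": {"avg_upvotes": 4.0,  "n": 8},
    "analyst":     {"avg_upvotes": 1.0,  "n": 6},
    "pleaser":     {"avg_upvotes": 20.0, "n": 2},  # lucky twice, still untrusted
}
tiers = assign_tiers(stats)
```

Note how "pleaser" lands in 'explore' despite the highest average: two samples prove nothing.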

Guard 1 in the source: the 4320-hour cutoff

The age check is two lines. It is also the single most important one, because archived threads do not render a comment form at all and commenting on them (or trying to) is a reliable signal for Reddit's abuse tooling. 4320 hours is 180 days, one full archive window ahead of Reddit's default auto-archive.

reddit_tools.py
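The snippet itself is not reproduced here, but the logic is small enough to sketch. A minimal illustration of the gate, assuming a helper named passes_age_gate and an age_hours field (both names are ours, not necessarily the repo's):

```python
MAX_AGE_HOURS = 4320  # 180 days: one full archive window ahead of Reddit's default

def passes_age_gate(thread: dict) -> bool:
    """Drop any thread older than the cutoff before it reaches the drafter."""
    return thread.get("age_hours", float("inf")) <= MAX_AGE_HOURS

candidates = [
    {"thread_url": "https://reddit.com/r/startups/a", "age_hours": 12},
    {"thread_url": "https://reddit.com/r/SaaS/b", "age_hours": 5000},
]
survivors = [t for t in candidates if passes_age_gate(t)]
```

A thread with a missing age is treated as infinitely old and dropped, which is the safe default.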

Guard 2: trust the HTML, not the JSON

The Reddit JSON API is the cheap way to check lock status. It is also wrong often enough that S4L treats it as a first opinion, not a verdict. When r/Entrepreneur's AutoMod locks a thread for the self-promotion rule, the page renders a .locked-tagline element while the JSON still reports locked: false. A tool that trusts the JSON posts into the void.

reddit_tools.py

Reddit's JSON locked and archived flags sometimes miss HTML-only lock states. Concretely seen on r/Entrepreneur where AutoMod renders .locked-tagline on the thread page while the JSON payload reports locked=false.

scripts/reddit_tools.py, inline comment at line 266
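A hedged sketch of what that second opinion can look like; the tagline class names come from the source comment above, while the regex and helper name are illustrative:

```python
import re

# Second-opinion lock check on the rendered old.reddit HTML. The
# .locked-tagline / .archived-tagline classes are the signals named in
# the source comment; this helper is a sketch, not the repo's function.
LOCK_RE = re.compile(r'class="[^"]*\b(locked|archived)-tagline\b')

def html_says_postable(html: str) -> bool:
    """False if the page renders a lock or archive tagline the JSON missed."""
    return LOCK_RE.search(html) is None

locked_page = '<span class="locked-tagline">locked</span>'
open_page = '<div class="entry">comments welcome</div>'
```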

Guard 3: abort before you start

Rate-limit handling in most tools means retry-with-backoff. S4L does the opposite: if the quota is near zero and the reset is more than 180 seconds out, the run aborts. The reasoning in the source comment is that a Claude spawn with five rate-limited searches costs roughly $0.44 and produces zero comments, while a single probe request costs about 300 milliseconds and returns honest quota state.

post_reddit.py
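A minimal sketch of the abort decision, using the thresholds described above and assuming header keys have been lowercased (the function name is ours):

```python
# Abort-before-you-start: if quota is nearly gone and the reset is far
# away, refuse to run at all. X-Ratelimit-Remaining / X-Ratelimit-Reset
# are real Reddit response headers; the thresholds match the article.
ABORT_RESET_SECONDS = 180
MIN_REMAINING = 2

def should_abort(headers: dict) -> bool:
    remaining = float(headers.get("x-ratelimit-remaining", 0))
    reset = float(headers.get("x-ratelimit-reset", 0))
    return remaining <= MIN_REMAINING and reset > ABORT_RESET_SECONDS
```

The probe costs one cheap request; a run that starts rate-limited costs a full Claude spawn and produces nothing.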

A full run, narrated

The guards do not fire in isolation. Here is a real invocation showing how the preflight, the age filter, the comment-blocked list, and the HTML recheck each trim the candidate list before any drafting starts.

post_reddit.py (one full run)

The data flow, in one picture

Left side: the signals that feed the filter layer. Center: the guardrail engine that decides which threads survive. Right side: what leaves the system.

Inputs, guards, outputs

Inputs: Reddit search JSON · old.reddit HTML · posts table · config.json · engagement_styles → Guardrail engine → Outputs: safe candidates · skip list · abort signal

Comment bans vs thread bans

This is the guardrail that looks like a typo until you try to run without it. Subreddits that reject new self-posts often still accept comments, and vice versa. Tracking them together either drops valid candidates or surfaces banned ones. S4L keeps them in two independent lists loaded by two different functions.

config.json, counted at research time

comment_blocked — 18 subs that reject comments (r/SaaS, r/startups, r/indiehackers, ...)

thread_blocked — 31 subs that reject new self-posts (different set, different code path)

The comment drafter loads only comment_blocked. The thread poster loads only thread_blocked. They never see each other's list. That is why the two counts are different.
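A hypothetical config fragment and loader pair showing the separation; the config keys come from the article, the sub names here are placeholders:

```python
import json

# Two independent lists, two independent loaders. Neither pipeline ever
# sees the other's bans. The sub names below are made up for illustration.
CONFIG = json.loads("""
{
  "subreddit_bans": {
    "comment_blocked": ["subA", "subB"],
    "thread_blocked":  ["subB", "subC", "subD"]
  }
}
""")

def load_comment_blocked(cfg: dict) -> set:
    # Called only by the comment drafter.
    return set(cfg["subreddit_bans"]["comment_blocked"])

def load_thread_blocked(cfg: dict) -> set:
    # Called only by the thread poster.
    return set(cfg["subreddit_bans"]["thread_blocked"])
```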

Every other reddit marketing tool

Side-by-side with what top-ranked reddit marketing tools expose in their own docs and reviews. The question is not whether they could build these guards; it is whether they talk about them. None of the top five SERP results for "reddit marketing tool" mention the checks below.

Feature | Typical reddit marketing tool | S4L
Skips threads older than 180 days | No age check, drafts on archived threads | Drops at age_hours > 4320 in _parse_search_results
Second-opinion lock verification | Trusts the JSON locked flag | Fetches old.reddit HTML, regexes .locked-tagline
Rate-limit preflight | Blocks and retries on 429 | Aborts the whole run if reset > 180s
Comment bans vs thread bans | One global blocklist | subreddit_bans.comment_blocked and thread_blocked are distinct
Post dedup scope | Domain-level or none | Per thread_url row in posts table; per their_comment_id for replies
Style selection | Static templates | Dynamic tiers from avg_upvotes, gated at n >= 5

The chain of events in order

What happens from the moment you run the tool until a comment actually ships. Each step is a function call you can follow in the repo.

1. preflight_rate_limit() — Probes old.reddit.com/r/popular.json for live X-Ratelimit headers. If reset > 180s, the run aborts and no Claude spawn happens.

2. cmd_search() — Runs the keyword search with sort=relevance t=week. Results immediately pass through _parse_search_results, which drops archived, locked, over-4320-hour, and already_posted threads.

3. _load_comment_blocked_subs() — Loads config.json subreddit_bans.comment_blocked plus exclusions.subreddits. The comment drafter never sees these subs in the candidate list.

4. _html_postable_check() — For each surviving candidate, fetches the old.reddit HTML and regexes for .locked-tagline and .archived-tagline. Catches AutoMod-only locks that the JSON missed.

5. get_dynamic_tiers() — Ranks engagement styles by avg_upvotes from the posts table. Styles with n < 5 get pushed to 'explore' so they do not falsely dominate.

6. Claude draft — Only now does a Claude spawn read the candidate list and draft the comment. The style prompt it receives comes from the tiered list above.

7. reddit_browser.py post — Playwright persistent context with --disable-blink-features=AutomationControlled opens the thread, checks the comment box exists, and submits. If no box, the sub is added to comment_blocked at runtime.

8. INSERT into posts — thread_url and engagement_style are stored. Future searches will see the thread in already_posted; future style ranking will see the upvote count once scrape_reddit_views.py backfills it.

What you will not find in the codebase

The absence is as intentional as the presence. S4L refuses patterns that look good in a demo but trigger moderation at scale.

Deliberately absent

  • No fixed comment schedule. Posting cadence is driven by the search results and the rate-limit probe, not by a cron that fires at :00 every hour.
  • No per-sub comment template library. Every draft is generated in context from the thread's text, which means two comments from S4L on two different threads do not share any surface form.
  • No upvote-bait reply generator. The 'pleaser' style is explicitly banned in engagement_styles.py because it has the lowest avg_upvotes on this account.
  • No same-account multi-comment-per-thread. thread_url dedup is hard: one row per thread, one comment from S4L per thread, forever.
  • No product links in replies. The content rules forbid URLs and product names in Reddit comments outright, regardless of the style chosen.
Source files: reddit_tools.py, post_reddit.py, engagement_styles.py, reddit_browser.py, scan_reddit_replies.py, config.json, db.py

What this is for

S4L is the social autoposter that runs this pipeline for the Mediar product and a handful of partner accounts. The guardrails exist because the accounts running through it need to still be active in a year, not just still active this week. If you are evaluating a reddit marketing tool for a product you actually care about, the question worth asking is not "what does it write" but "what does it refuse to write, and why."

The seven checks above are the answer in this repo. You can read them. You can copy them. You can replace them with your own. What you cannot do is skip them and expect the account to survive.

Want the pipeline pointed at your account?

Fifteen-minute walkthrough: we point S4L at your subreddits, show the guardrails in action, and you decide whether to run it.

Book a call

Frequently asked questions

What does the 4320-hour cutoff actually mean?

4320 hours is 180 days. In scripts/reddit_tools.py at line 221, S4L drops any thread whose age_hours exceeds that value from the search result parser. The goal is to avoid commenting on archived or near-archived threads, which is one of the clearest behavioral signals that separates a scripted account from a human one. Reddit auto-archives most subs at six months, so the 180-day floor stays one full archive window ahead of the hard line.

Why check locked state in both JSON and HTML?

The Reddit JSON endpoint exposes a boolean locked flag, but it misses HTML-only lock states that AutoMod can apply on a per-thread basis. The concrete example documented in the source is r/Entrepreneur: AutoMod renders a .locked-tagline CSS class on the thread page while the JSON payload still reports locked=false. S4L's _html_postable_check function at reddit_tools.py:260 fetches the old.reddit.com HTML and regexes for that class as a second opinion, which catches locks the JSON alone would miss.

What triggers the 180-second rate-limit abort?

Before spawning the Claude drafter, post_reddit.py probes old.reddit.com/r/popular.json?limit=1 to read live X-Ratelimit-Remaining and X-Ratelimit-Reset headers. If remaining is at or below 2 and reset is more than 180 seconds away, the whole run is abandoned rather than stalled. The tradeoff is explicit in the source: a Claude spawn with five rate-limited searches costs about $0.44, while the probe costs roughly 300 milliseconds.

Why are comment bans and thread bans stored as separate lists?

Some subreddits accept comments but refuse new self-posts, and some do the reverse. r/SaaS for example permits engagement on existing threads but has rejected new thread creation, so blending the two lists would either silently drop valid comment targets or leak banned thread targets back into the posting pipeline. config.json has 18 entries in subreddit_bans.comment_blocked and 31 in subreddit_bans.thread_blocked, loaded by two different code paths so neither leaks into the other.

How does the style tiering protect account quality?

engagement_styles.py queries the posts table for avg_upvotes grouped by engagement_style. Any style with fewer than 5 samples (MIN_SAMPLE_SIZE = 5) is pushed into the 'explore' bucket regardless of its noisy average, so a style that got two lucky upvotes does not climb to 'dominant'. Styles with trusted sample sizes are split into thirds: top third dominant, middle secondary, bottom third rare. The drafter picks predominantly from the top tier, which is why behavior keeps drifting toward what actually worked on that account.

Does S4L post directly via the Reddit API?

No. Comment posting goes through a Playwright persistent browser context in scripts/reddit_browser.py, launched with --disable-blink-features=AutomationControlled and a desktop Chrome user agent. The Reddit API is used only for read operations: searching threads, fetching post metadata, and polling for inbox replies. Keeping the write path on a real logged-in browser profile avoids OAuth token fingerprints and keeps the account behavior indistinguishable from a normal session.

What happens when a sub silently blocks comments?

If the browser finds no comment form (locked, restricted, or banned from sub), mark_comment_blocked in post_reddit.py:110 writes the subreddit into config.json subreddit_bans.comment_blocked at runtime and sorts the list. The drafter will never target that sub again. This is a learning loop: the blocklist grows from live rejection signals rather than requiring the operator to maintain it manually.