AI Reddit comments without shadowban: the per-sub gate is the real risk

The advice you find on this topic almost always treats shadowban as one thing and tells you to space your posts out, age the account, and avoid VPNs. That is not wrong, but it misses the failure mode that actually kills a running AI-comment pipeline: per-subreddit gates that silently eat your comment with no API error, no banner, and no permalink to retry against. Those gates are local to the sub and they are detectable before you post if you stop trying to talk to the API and start reading the DOM of the logged-in page instead.

Direct answer (verified 2026-05-12)

You cannot run AI Reddit comments with a zero-risk guarantee against shadowbans. In practice the bigger risk for any automated commenter is the per-subreddit gate (Crowd Control, AutoMod karma or age threshold, mod-approval-only, or a sub-level ban) which Reddit hides with no error. The honest pre-flight check is to load the thread on old.reddit.com logged in as your account, look for the comment textarea form in the DOM, and refuse to post if it is missing. Quarantine that sub for future drafts. Account-level pacing matters for sitewide spam detection, but it does not protect against any of the per-sub gates.

Authoritative source for Crowd Control specifically: support.reddithelp.com.

Matthew Diakonov · 9 min read

The five things people call “shadowban”

When someone in r/redditrequest or a sub-mod chat says they were shadow-banned, they almost always mean one of these. They produce identical symptoms (your comment seems to post, nobody replies, the permalink does not appear logged out) and have completely different fixes:

  1. Sitewide shadowban. Account-level Reddit anti-abuse action. Logged out, your profile is empty. This is the only one of the five that is actually called a shadowban by Reddit internally.
  2. Sub-level user ban or AutoMod author block. One sub has added your username to a ban list or to an AutoMod author: rule. Everything you write in that sub is removed before any human sees it.
  3. AutoMod karma or age gate. The sub's AutoMod config has a min_karma, min_combined_karma, or min_account_age_days threshold you do not clear. Reddit's help center lists 1 to 7 days for general subs, 30 to 60 days for professional subs, and 60 to 120 days for crypto and finance.
  4. Mod-approval-only mode. Often turned on temporarily during a sub drama. Every comment from every account is held until a mod approves it.
  5. Crowd Control. Reddit's built-in safety setting collapses comments from accounts the community has not interacted with positively yet. Your comment is technically visible, but reordered into the bottom of the thread and grey-collapsed for everyone else.

For a pipeline that drafts and posts AI Reddit comments at scale, only the first one is genuinely about the account. The other four are per-subreddit. Treating them as a single problem is the reason most published advice does not survive contact with a real posting loop.
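The split above is the whole design insight, so it is worth pinning down in code. A minimal sketch (names are illustrative, not from the pipeline) that maps each of the five mechanisms to the scope of its fix:

```python
from enum import Enum

class GateScope(Enum):
    ACCOUNT = "account"      # fix: appeal to Reddit or retire the account
    SUBREDDIT = "subreddit"  # fix: quarantine the sub, not the account

# Hypothetical mapping of the five mechanisms described above.
GATES = {
    "sitewide_shadowban": GateScope.ACCOUNT,
    "sub_user_ban": GateScope.SUBREDDIT,
    "automod_karma_age_gate": GateScope.SUBREDDIT,
    "mod_approval_only": GateScope.SUBREDDIT,
    "crowd_control": GateScope.SUBREDDIT,
}

per_sub = [name for name, scope in GATES.items()
           if scope is GateScope.SUBREDDIT]
assert len(per_sub) == 4  # four of five are per-subreddit
```

Four of the five entries resolve at the subreddit level, which is why account hygiene alone cannot solve them.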

The signal you can actually read at posting time

Reddit does not expose an “is this account allowed to comment in this sub” endpoint. The new.reddit React app does not give you a clean DOM either; gate state is buried inside an event stream. Old.reddit.com is different. When a sub gates a logged-in user (any of mechanisms 2 through 4 above, and sometimes 5), the page renders normally but the comment area silently omits the textarea form. The rest of the thread is there. There is no error banner. There is no tooltip. The page just does not let you type.

The honest version of “avoid shadowbans” is to look for that absence before you draft anything. In scripts/reddit_browser.py the pipeline runs this locator on every thread before it touches the model:

# scripts/reddit_browser.py, line 413
has_comment_form = page.locator(
    ".commentarea .usertext.cloneable, "
    ".commentarea > form.usertext"
).count() > 0
if not has_comment_form:
    return {"ok": False, "error": "account_blocked_in_sub"}

That single check absorbs sub-level user bans, AutoMod karma and age gates, and mod-approval-only mode. It does not distinguish between them, and it does not need to. From a posting orchestrator's perspective, “your comment will not survive in this sub” is the only information that matters.
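The decision logic behind that snippet can be isolated from the browser entirely, which makes it testable. A sketch under the assumption that `page.locator(...).count()` supplies the form count; the selector string is the one from the article, the function name and return shape are illustrative:

```python
# Selector from scripts/reddit_browser.py: matches old.reddit's
# comment textarea form in either of its two DOM layouts.
COMMENT_FORM_SELECTOR = (
    ".commentarea .usertext.cloneable, "
    ".commentarea > form.usertext"
)

def classify_precheck(form_count: int) -> dict:
    """Map the locator count to the pipeline's pre-flight verdict.

    form_count would come from page.locator(COMMENT_FORM_SELECTOR).count()
    on the logged-in old.reddit.com thread page.
    """
    if form_count > 0:
        return {"ok": True}
    # No textarea rendered: sub-level ban, AutoMod gate, or approval-only.
    return {"ok": False, "error": "account_blocked_in_sub"}
```

Keeping the verdict pure means the same logic can be exercised without a live browser session.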

What happens when the form is missing

  1. Load thread: the logged-in old.reddit.com page.
  2. Check locator: count matches for the .commentarea comment form.
  3. Form absent: return account_blocked_in_sub.
  4. Mark row failed: permanent=True, no retry.
  5. Append to bans: subreddit_bans.comment_blocked.

What the pipeline does next, and why it is irreversible

Detection is half the work. The other half is making sure the next cycle does not waste a candidate on the same sub. Two writes happen when account_blocked_in_sub comes back:

  1. Candidate marked permanent failed. In scripts/post_reddit.py the constant _PERMANENT_CDP_ERRORS includes account_blocked_in_sub. The candidate row gets status='failed' with permanent=True, so Phase 0 salvage skips it forever.
  2. Subreddit appended to subreddit_bans.comment_blocked. The function mark_comment_blocked in the same file writes a record {subreddit, reason, added_at, project} into the project's config.json. Every future drafting cycle reads that list and refuses to surface threads from those subs as candidates.

The design choice that matters here is the irreversibility. We do not mark the row pending and try later. We do not write a soft signal that a retry might recover. A sub that gated us once will gate us again no matter how slowly we pace, because the underlying state (karma, age, user ban) is unlikely to change in the next 24 hours. The cost of being wrong (one missed sub) is dramatically smaller than the cost of looping on a gated sub (3 wasted Claude sessions, polluted backfill, fake permalinks in the post log).
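The second write (the config.json append) can be sketched as a small idempotent function. This is a hypothetical re-implementation of what the article describes as mark_comment_blocked: the record fields mirror the article, everything else is illustrative:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def quarantine_sub(config_path: Path, subreddit: str, project: str) -> None:
    """Append an audit record to subreddit_bans.comment_blocked.

    Illustrative sketch; the real write lives in scripts/post_reddit.py.
    Idempotent: a sub already on the list is not appended twice.
    """
    cfg = json.loads(config_path.read_text()) if config_path.exists() else {}
    bans = cfg.setdefault("subreddit_bans", {}).setdefault("comment_blocked", [])
    if any(entry["subreddit"] == subreddit for entry in bans):
        return
    bans.append({
        "subreddit": subreddit,
        "reason": "account_blocked_in_sub",
        "added_at": datetime.now(timezone.utc).isoformat(),
        "project": project,
    })
    config_path.write_text(json.dumps(cfg, indent=2))
```

The idempotency guard matters because the same sub can surface in multiple candidate batches before the drafter next reloads the ban list.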

What it looks like end to end

A real run on a thread in a sub the account has never touched, where the sub turns out to have a karma gate we do not clear:

python scripts/post_reddit.py --phase post

The pre-flight locator finds no comment form, the candidate row is marked failed with account_blocked_in_sub, and the sub is appended to subreddit_bans.comment_blocked. The cycle keeps going: the next two candidates in this batch are in different subs and post normally. The blocked sub is now gone from the drafter's universe for this project.

The actual size of the per-sub kill list

On this project, the live config.json has the following counts as of the date this page was written. The ratio is what you should plan around: the per-sub gate list is more than twice the size of the new-post block list, because comments attract Crowd Control and karma gates that thread posts often bypass.

115 subs in comment_blocked. 45 subs in thread_blocked. 0 sitewide shadowbans on the account.

The per-sub comment block list is roughly two and a half times the size of the new-post block list (115 vs 45). Comments attract Crowd Control and karma gates that submissions skip.

Source: config.json subreddit_bans, S4L, 2026-05-12.

What pacing actually buys you

Pacing is the standard recommendation in every other guide on this topic, and it does buy two real things: it keeps the account under Reddit's sitewide spam classifier (which scores posting velocity, link ratio, and duplication), and it keeps you under the rate-limit ceiling (HTTP 429 from the API, a transient state that unwinds in 60 seconds or so). The pipeline waits 3 minutes between posts within a single Claude session and pre-checks Reddit's quota before spawning a new session at all (_probe_reddit_quota in post_reddit.py).
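The 3-minute intra-session gap is easy to state precisely. A minimal sketch of the spacing logic (class and method names are illustrative, not from post_reddit.py):

```python
POST_GAP_SECONDS = 180.0  # 3 minutes between posts within one session

class Pacer:
    """Track the last post time and report how long to wait before the next."""

    def __init__(self, gap: float = POST_GAP_SECONDS):
        self.gap = gap
        self._last: float | None = None

    def seconds_until_next(self, now: float) -> float:
        """0.0 if a post is allowed now, else the remaining cooldown."""
        if self._last is None:
            return 0.0
        return max(0.0, self.gap - (now - self._last))

    def record_post(self, now: float) -> None:
        self._last = now
```

This addresses only the sitewide velocity signal and the 429 ceiling; as the next paragraph argues, no value of the gap helps against a per-sub gate.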

What pacing does not buy you is anything against the per-sub gates. You can comment once a month and a sub with a 500-karma AutoMod rule will still filter you. You can rotate IPs and use mobile data and a sub that has banned your username will still hide every reply. The spam classifier and the per-sub gate are independent systems with independent fixes. Treating them as the same problem is what produces the “I slowed down and still got shadow-banned” reports you see on r/help every week.

A small thing other guides also miss

When a comment goes through the Reddit API and the sub has a karma gate, the POST returns a 200 with a permalink that points at a comment only you can see. From the API client's perspective, the post succeeded. The post log fills up with green checkmarks while every comment is actually filtered. We hit this in 2024 and built the DOM check specifically because it is the only signal at posting time that distinguishes “your comment landed” from “your comment was filtered before anyone saw it.” A second guard, no_permalink, covers the post-submit case where the comment we just sent does not show up under our username in the rendered DOM. Both errors are classified as permanent so the queue never retries them.

Want this pipeline running against your subs?

A short call covers the per-sub gate list for your project, what a healthy account looks like for the subs you care about, and the realistic comment cadence those subs accept.

FAQ

Frequently asked questions

What does 'shadowban' actually mean in this context?

Reddit users use the word for at least five different things and conflating them produces a lot of bad advice. (1) Sitewide shadowban: Reddit's safety team has flagged the account; logged out, your profile shows no posts. (2) Sub-level shadow: a moderator put your username in AutoMod's `author:` list or banned you from the sub without notice; everything you write in that sub is hidden. (3) Crowd Control: comments from accounts the community has not yet trusted are collapsed by default; not invisible, but ranked into oblivion. (4) AutoMod karma or age gate: your reply is removed by the bot before any human sees it because you do not clear `min_karma`, `min_combined_karma`, or `min_account_age_days`. (5) Mod-approval-only: every comment from anyone is held until a mod manually approves it. Only (1) is a real shadowban. The other four are what an AI-comment pipeline actually trips on, and they are per-subreddit, not per-account.

Can I detect a sitewide shadowban from inside my own session?

Not directly. The whole point of a sitewide shadowban is that the banned account still sees its own content normally. The standard test is to open the user's profile in a private window and see if their posts and comments load; if they 404 logged out but render logged in, the account is shadow-banned. There are also community tools that do this lookup against the public JSON endpoint. The pipeline assumes the account is healthy at the sitewide level and concentrates its budget on per-sub gates, which are the failure mode that actually shows up.
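The private-window test can be automated against the public JSON endpoint. A sketch using only the standard library; the endpoint and the 404-when-shadowbanned behavior are the widely observed public behavior, the function names are illustrative (note a deleted or nonexistent account also returns 404, so a positive result needs the logged-in side of the comparison):

```python
from urllib import request, error

def profile_status(username: str) -> int:
    """HTTP status of the logged-out public profile endpoint."""
    req = request.Request(
        f"https://www.reddit.com/user/{username}/about.json",
        headers={"User-Agent": "shadowban-probe/0.1"},
    )
    try:
        with request.urlopen(req, timeout=10) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code

def looks_shadowbanned(status: int) -> bool:
    """404 logged out, while the account renders logged in, suggests a shadowban."""
    return status == 404
```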

Why does S4L check old.reddit.com instead of www.reddit.com or the official API?

Three reasons. First, the official API does not surface 'this comment was filtered by AutoMod' as a real error; the POST returns a fake permalink and the comment is invisible. Old.reddit returns clean HTML where you can read the DOM directly. Second, mod-approval-only and AutoMod karma gates manifest as a missing textarea in the comment area on old.reddit before you try to post; on www.reddit the page is React and the gate signal is buried in mutations. Third, old.reddit is what mod tooling actually uses, so it tends to be the most honest surface about restriction state. The locator the pipeline uses is `.commentarea .usertext.cloneable, .commentarea > form.usertext` (scripts/reddit_browser.py:413).

What does the pipeline do once it detects a gate?

Two things, both irreversible by design. First, the candidate row in `reddit_candidates` is marked `status='failed'` with the error string `account_blocked_in_sub`, and the permanent flag is set so Phase 0 never re-salvages it. The cycle does not waste a retry on a sub that already hid us once. Second, the subreddit is appended to `subreddit_bans.comment_blocked` in `config.json` with an audit record `{subreddit, reason, added_at, project}`. Every future drafting cycle reads that list and refuses to surface threads from those subs as candidates. There is no half-state: a gated sub is dead for this project until a human edits the list out.

Does this also apply to top-level posts (submissions), not just comments?

Yes, but it is tracked in a different list. `subreddit_bans.comment_blocked` covers comment-level gates; `subreddit_bans.thread_blocked` covers subs where the bot's attempt to submit a post has been rejected (link-only sub, text disabled, self-promo banned, mod-applied user ban, automatic 403). The two lists are intentionally separate because a sub that allows comments but bans new posts is common, and conflating them would over-block. As of writing this, the S4L config has 115 comment-blocked subs and 45 thread-blocked subs.
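For reference, the shape the two lists take in config.json, as the article describes them. The field names mirror the audit record above; the subreddit names and the thread_blocked reason string are hypothetical:

```json
{
  "subreddit_bans": {
    "comment_blocked": [
      {
        "subreddit": "examplesub",
        "reason": "account_blocked_in_sub",
        "added_at": "2026-05-12T09:41:00Z",
        "project": "s4l"
      }
    ],
    "thread_blocked": [
      {
        "subreddit": "anothersub",
        "reason": "post_rejected",
        "added_at": "2026-04-02T18:03:00Z",
        "project": "s4l"
      }
    ]
  }
}
```

Keeping the two lists as sibling keys under one object makes the "comments allowed, posts banned" case representable without over-blocking.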

What about the model 'sounding like AI' and getting reported?

That is a different failure mode and it has its own pipeline. The 'sounds like a bot' callout comes from human readers and shows up in replies to the bot's comment, not in any platform signal at posting time. The pipeline handles it with a 9-pattern regex over the parent comment text (META_CALLOUT_KEYWORDS in scripts/engage_reddit.py) plus a hard ban on undisclosed invented first-person specifics. That sits on a different page: /t/reddit-marketing-without-ai-flagging.
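The detector itself is a straightforward keyword regex over the parent comment. The patterns below are hypothetical stand-ins; the real 9-pattern META_CALLOUT_KEYWORDS list in scripts/engage_reddit.py is not reproduced here:

```python
import re

# Illustrative callout patterns, NOT the pipeline's real list.
CALLOUT_PATTERNS = [
    r"\bsounds? like (a |an )?(bot|ai)\b",
    r"\bwritten by (chatgpt|ai)\b",
    r"\bignore (all )?previous instructions\b",
]
_CALLOUT_RE = re.compile("|".join(CALLOUT_PATTERNS), re.IGNORECASE)

def is_meta_callout(parent_text: str) -> bool:
    """True if the parent comment accuses the account of being a bot."""
    return bool(_CALLOUT_RE.search(parent_text))
```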

Will simply slowing down stop shadowbans?

Slowing down helps with rate-limit removals (HTTP 429 from Reddit, which is a transient state) and with the spam classifier (which scores posting velocity). It does not help with the four per-sub failure modes described above, because those are local rules set by community moderators. A 1-comment-per-week account in a sub whose AutoMod requires 500 karma will still be silently removed. Pacing and per-sub gate detection are independent problems and you need both.
