Promoting open source AI projects on Reddit and X in April 2026: the ban list a real autoposter maintains.

Every promo guide for open source AI sends you to r/LocalLLaMA, r/MachineLearning, r/SaaS, r/startups, r/indiehackers. The autoposter that actually ships open source AI projects to Reddit and X has all five of those on its runtime ban list, auto-grown the moment a comment attempt gets rejected. Here is the list, the function that mutates it, and the comment-first cadence that does work.

Matthew Diakonov
10 min read
Figures from real source on disk
S4L self-promotion is structurally disabled in config.json with a documented anti-bot reason
subreddit_bans: 38 comment_blocked + 31 thread_blocked entries on April 24, 2026
scripts/post_reddit.py:110-135 auto-appends a sub to the list when comments are rejected
Twitter cycle: 5-minute momentum window with min delta_score >= 1 to filter dead tweets

The honest opening

The autoposter cannot autopost about itself on Reddit. Here is the line that says so.

The S4L config has a project entry for every product the system promotes. Every entry has a threads block. Every other project sets threads.enabled to true. S4L alone ships with threads disabled, and the disabled_reason is in plain English in the file. I did not paraphrase this.

config.json (project: S4L)

The autoposter ships open source AI tools to Reddit and X for other people, every day. It does not ship itself to Reddit because the cost of getting flagged on a single shared identity exceeds whatever attention a self-promo post would buy. That choice is the entire premise of this page.

threads.enabled = false

Posting about an autoposter on Reddit triggers anti-bot backlash and risks accounts. Promote via other channels.

config.json, projects[name=S4L].threads.disabled_reason
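Reassembled as JSON, the fields quoted above would take roughly this shape. Only `threads.enabled` and `threads.disabled_reason` (and the reason text) come from the file; the surrounding keys are illustrative:

```json
{
  "projects": [
    {
      "name": "S4L",
      "threads": {
        "enabled": false,
        "disabled_reason": "Posting about an autoposter on Reddit triggers anti-bot backlash and risks accounts. Promote via other channels."
      }
    }
  ]
}
```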

The actual ban list, alphabetised

Five of the seven subs every promo guide recommends are on it.

comment_blocked stops the drafter from picking a thread in these subs. It is the live record of subs whose comment forms have turned up missing, locked, or restricted enough times that the runtime gave up. Read across this list and pick out the subs you already planned to post in.

r/accelerate, r/androiddev, r/askprogramming, r/ClaudeAI, r/codex, r/degoogle, r/ecommerce, r/experienceddevs, r/firefox, r/homedefense, r/homesecurity, r/humanresources, r/indiehackers, r/introvert, r/iosprogramming, r/kitchenconfidential, r/lawfirm, r/legaladvice, r/localllama, r/macapps, r/machinelearning, r/mcp, r/Meditation, r/mindfulness, r/propertymanagement, r/qualityassurance, r/recruitinghell, r/restaurantowners, r/SaaS, r/singularity, r/softwaretesting, r/startups, r/taxpros, r/uipath, r/uxdesign, r/vibecodedevs, r/vibecoding, r/vipassana

And the second list. thread_blocked stops pick_thread_target.py from selecting a sub for an original announcement post. A sub can be on one list and not the other. A few are on both.

r/ClaudeAI, r/cursor, r/degoogle, r/EndTipping, r/entrepreneur, r/firefox, r/GeekSquad, r/indiehackers, r/InsuranceAgent, r/InsuranceProfessional, r/LocalLLaMA, r/lovable, r/macapps, r/mcp, r/Meditation, r/node, r/propertymanagement, r/QualityAssurance, r/reactjs, r/RestaurantOwners, r/SaaS, r/sidehustle, r/singularity, r/smallbusiness, r/socialmedia, r/SoftwareEngineering, r/softwaretesting, r/taxpros, r/texas, r/vibecoding, r/vipassana

Total: 38 comment-blocked subs and 31 thread-blocked subs. The intersection includes the five subs almost every guide recommends first: r/LocalLLaMA, r/SaaS, r/indiehackers, r/ClaudeAI, r/mcp.

The function that grows the list

mark_comment_blocked rewrites config.json on every rejection.

When the reddit-agent navigates to a candidate thread and finds no comment form, this is what runs. It pulls the sub name out of the URL, opens config.json, appends the sub to subreddit_bans.comment_blocked, sorts alphabetically, and writes the file back. The next pick_thread_target run sees the new entry and excludes it. No human in the loop.
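The original function is not reproduced on this page, but a minimal sketch of the behaviour described above might look like the following. The regex, the lowercasing, and the append-sort-write sequence come from the source; the function signature, the config path, and the duplicate check are assumptions:

```python
import json
import re
from pathlib import Path

CONFIG_PATH = Path("config.json")  # assumed location of the runtime config


def mark_comment_blocked(thread_url: str, config_path: Path = CONFIG_PATH):
    """Append the thread's subreddit to subreddit_bans.comment_blocked.

    Returns the subreddit name on success, or None if the URL has no
    /r/<sub>/ component to parse.
    """
    match = re.search(r"/r/([^/]+)/", thread_url)
    if not match:
        return None
    sub = match.group(1).lower()

    config = json.loads(config_path.read_text())
    blocked = config.setdefault("subreddit_bans", {}).setdefault("comment_blocked", [])
    if sub not in blocked:
        blocked.append(sub)
        blocked.sort()  # keep the list alphabetised, as the source describes
        config_path.write_text(json.dumps(config, indent=2))
    return sub
```

The next pick_thread_target run reads the rewritten file and excludes the new entry, which is the whole feedback loop in one function.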

scripts/post_reddit.py (lines 110 to 135)

The rejection feedback loop

One bad nav and the sub is gone from rotation forever.

rejected comment -> regex parse -> mutated config -> excluded from next run

twitter-cycle.plist / reddit-search.plist / engage.plist -> post_reddit.py -> config.json -> pick_thread_target.py

The middle of the diagram is where the list grows. Three different schedulers all funnel through one Python function that owns the runtime ban-list state.

What advice misses vs what the runtime sees.

The left column is what almost every guide for promoting open source AI says today. The right column is what one autoposter, running every 30 minutes for months across many AI-related projects, has actually learned. Both columns are true. The right column is the one nobody publishes.

| Subreddit | Standard 2026 promo advice | Runtime ban-list reality |
| --- | --- | --- |
| r/LocalLLaMA (the canonical local-LLM sub) | Recommended in nearly every promo guide for new model and tool releases | On thread_blocked. Self-promotion threads from new accounts get removed; comment-only is the live channel |
| r/MachineLearning (academic and applied ML) | Recommended for research-grade releases | On comment_blocked. Comment attempts have been rejected enough times that the system stopped trying |
| r/SaaS | Recommended for B2B AI tools | On both lists. Heavy moderation against self-promo, plus comment forms removed for new accounts |
| r/startups | Recommended for founder-voice posts | On comment_blocked. The Self-Promo Saturday rule is the only door, and it is narrow |
| r/indiehackers | Recommended for indie projects | On both lists. The cross-posted IH community on Reddit is not the same vibe as indiehackers.com |
| r/singularity, r/ClaudeAI, r/mcp | Recommended as 'AI-friendly' subs | All three on both lists. Loose-feeling subs are often the most aggressive about removing tool posts |
| r/SideProject, r/AlphaAndBetaUsers, r/coolgithubprojects | Rarely mentioned, treated as low-tier | Active comment surfaces, not blocked, and the audience reads the descriptions |
| r/LocalLLM, r/Ollama, r/AI_Agents | Lumped together with the bigger AI subs | Smaller and more permissive in 2026 than the canonical LLM sub. The dedup floor still applies |

The numbers from one install on April 24, 2026

A small list, an intentional list.

38 comment_blocked subs
31 thread_blocked subs
~64 subs in active rotation
7 voice archetypes per reply

133 candidate subs in the rotation pool, minus the bans, equals roughly 64 subs that are actually safe to comment in on a given day, after per-sub floor_days dedup is applied. Not the 200 your spreadsheet has. Not the five your guide mentioned.

What rejection looks like in the log file.

A normal Reddit run that ends in a sub getting added to the ban list. Notice the trace: probe, spawn, navigate, no form, mark_comment_blocked, config write, rotate to next project. Three seconds, no human.

skill/logs/reddit-search-2026-04-24_120000.log

What works instead

Comment-first, momentum-filtered, voice-rotated.

The autoposter does not announce open source AI projects on Reddit. It comments under existing threads where the project is genuinely on-topic. On X, the same principle plus a momentum window. Five steps, all of them visible in source.

1. Comment, do not announce.

Original threads from a new-ish account get killed by automod or community rules in the big AI subs. Comments under existing threads with momentum survive. The whole point of S4L's reddit pipeline is comment-driven, not announcement-driven.

2. Filter for momentum, not popularity.

On X, S4L pulls candidate tweets at t=0, sleeps 300 seconds, refetches, and only replies to tweets whose engagement delta is at least 1 over those 5 minutes. A tweet at 50 likes that gained zero in five minutes is dead. A tweet at 8 likes that gained 3 is rising.

3. Use a per-sub cooldown, not a daily quota.

scripts/pick_thread_target.py applies floor_days per subreddit. Default is 3 days for external subs and 1 day for your own community. If you posted in a sub yesterday, you do not post there today, regardless of how good your daily numbers look.

4. Rotate seven voices, not one.

scripts/engagement_styles.py defines critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner. The drafter is forbidden from repeating its last archetype. Variance is what stops a campaign from looking like a single bot.

5. Audit your own ban list weekly.

subreddit_bans grows on its own. Once a week, open it, remove subs that recently changed mods or rules, and let the system retry. Half the value of the list is institutional memory; the other half is updating it as things change.

The X side of the cadence: 5-minute momentum, min delta 1.

The Twitter cycle does no announcement posts. It pulls candidate tweets at t=0, sleeps 300 seconds, refetches, scores by engagement delta, and only replies to tweets whose delta is at least 1 over the window. The 50-like tweet that gained nothing in 5 minutes is dead. The 8-like tweet that gained 3 is rising. The replies go to the rising ones.
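The two-fetch delta check described above can be sketched in a few lines of Python. The 300-second window and the minimum delta of 1 come from the source; the tweet field names and the engagement-score formula are assumptions (the real cycle lives in run-twitter-cycle.sh and may weight engagement differently):

```python
MOMENTUM_WINDOW_S = 300  # sleep between the t=0 fetch and the refetch
MIN_DELTA_SCORE = 1      # minimum engagement gain to keep a tweet


def engagement_score(tweet: dict) -> int:
    # Simple additive proxy for engagement; the field names are assumed.
    return tweet.get("likes", 0) + tweet.get("replies", 0) + tweet.get("reposts", 0)


def filter_rising(t0: dict, t1: dict, min_delta: int = MIN_DELTA_SCORE) -> list:
    """Keep ids of tweets whose engagement grew by at least min_delta
    between the two fetches; drop dead tweets and deleted ones."""
    rising = []
    for tweet_id, before in t0.items():
        after = t1.get(tweet_id)
        if after is None:
            continue  # tweet vanished between fetches
        if engagement_score(after) - engagement_score(before) >= min_delta:
            rising.append(tweet_id)
    return rising
```

With this filter, a tweet sitting at 50 likes with zero gain over the window is discarded, while one at 8 likes that picked up 3 is kept.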

skill/run-twitter-cycle.sh (abridged)

Why the design ends up here.

Self-promotion is structurally disabled

The S4L project entry sets threads.enabled to false with the literal disabled_reason 'Posting about an autoposter on Reddit triggers anti-bot backlash and risks accounts. Promote via other channels.' The autoposter cannot post about itself on Reddit because the maintainer will not let it. Every other project in the same config has threads.enabled true.

comment_blocked vs thread_blocked

Two separate runtime lists. comment_blocked stops the drafter from picking a thread in those subs. thread_blocked stops pick_thread_target.py from selecting them for an original post. A sub can be on one list and not the other.

Auto-grow on rejection

When the reddit-agent finds no comment form, mark_comment_blocked rewrites config.json and appends the sub. Future runs literally cannot retarget it without a human review.

Floor days per subreddit

pick_thread_target.py treats every external sub with a 3-day cooldown by default and your own community with a 1-day cooldown. Spamming the same sub two days in a row is impossible without overriding the config.
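A hedged sketch of that per-sub cooldown check, assuming the selector keeps a map of last-post timestamps (the 3-day and 1-day floors are from the source; the function name and data shapes are illustrative):

```python
from datetime import datetime, timedelta

DEFAULT_FLOOR_DAYS = 3        # external subs
OWN_COMMUNITY_FLOOR_DAYS = 1  # your own community


def eligible_subs(candidates, last_posted, own_community=None, now=None):
    """Return subs whose per-sub cooldown has elapsed.

    candidates: iterable of sub names
    last_posted: dict of sub -> datetime of the last post (missing = never posted)
    """
    now = now or datetime.now()
    out = []
    for sub in candidates:
        floor = OWN_COMMUNITY_FLOOR_DAYS if sub == own_community else DEFAULT_FLOOR_DAYS
        last = last_posted.get(sub)
        if last is None or now - last >= timedelta(days=floor):
            out.append(sub)
    return out
```

A sub posted in yesterday fails the 3-day floor and drops out of today's rotation, regardless of how good the daily numbers look.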

Per-platform momentum filter

On X, the cycle script sleeps 300 seconds between the t1 and t2 fetches and only keeps tweets with delta_score >= 1. Reply work is concentrated where the conversation is actually growing.

Seven voices, no repeats

engagement_styles.py rotates through critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner. The prompt forbids repeating the last archetype. Style variance is part of the runtime, not an afterthought.
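The no-repeat rotation is simple to sketch. The seven archetype names come from the source; the function shape and the use of `random.choice` over the remaining pool are assumptions about how engagement_styles.py might implement it:

```python
import random

ARCHETYPES = [
    "critic", "storyteller", "pattern_recognizer", "curious_probe",
    "contrarian", "data_point_drop", "snarky_oneliner",
]


def pick_archetype(last=None, rng=None):
    """Pick a voice archetype, never repeating the previous one."""
    rng = rng or random.Random()
    pool = [a for a in ARCHETYPES if a != last]
    return rng.choice(pool)
```

Calling `pick_archetype("critic")` can return any of the other six voices, so two consecutive replies never share a style.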

Want this set up for your open source AI project?

30 minutes, screen-share, walk through your config.json, your candidate subs, and your first 50 candidate threads. Only useful if you ship the code yourself.

Frequently asked questions

Why does S4L disable its own Reddit self-promotion in source?

Because every time a multi-platform autoposter shows up in r/LocalLLaMA, r/SaaS, r/MachineLearning, or any AI-tooling community, the comment thread immediately turns into accusations of botting, the post gets removed, and the underlying Reddit account picks up a soft mark. The disabled_reason is literally 'Posting about an autoposter on Reddit triggers anti-bot backlash and risks accounts. Promote via other channels.' The maintainer is the same person who built the bot. He decided that promoting the bot with the bot is a faster way to lose the bot's accounts than to grow them. Other open-source AI projects can use S4L on Reddit, but S4L itself goes everywhere except Reddit for self-promotion.

Where is the ban list and how big is it?

It lives in config.json under the subreddit_bans key. As of the install I read on April 24, 2026, comment_blocked has 38 entries and thread_blocked has 31 entries. comment_blocked includes accelerate, androiddev, ClaudeAI, ecommerce, experienceddevs, indiehackers, localllama, machinelearning, mcp, SaaS, singularity, startups, vibecoding, plus 25 others. thread_blocked includes ClaudeAI, cursor, entrepreneur, indiehackers, LocalLLaMA, mcp, SaaS, sidehustle, smallbusiness, socialmedia, SoftwareEngineering, plus 20 others. Five of the seven subs every promo guide recommends for AI projects sit on at least one list.

How does a sub end up on the list?

Two ways. First, manually: the maintainer adds it after a public removal or shadowban report. Second, automatically: when the reddit-agent navigates to a candidate thread and finds no comment form (sub is locked, restricted to approved users, or rate-limiting new accounts), scripts/post_reddit.py at lines 110 to 135 calls mark_comment_blocked(thread_url). That function regex-matches /r/([^/]+)/ on the URL, opens config.json, appends the lower-cased sub name to subreddit_bans.comment_blocked, sorts the list alphabetically, and writes the file back. The next pick_thread_target run sees the new entry and excludes it.

Why is r/MachineLearning hostile to self-promotion in 2026?

Mod policy. r/MachineLearning explicitly disallows project announcements outside the [P] tag, requires a level of empirical contribution most indie tools cannot meet, and removes posts that read as marketing. Comments are technically allowed but the moderation queue is aggressive enough that comment forms disappear from threads where the OP has even hinted at concerns about self-promotion. The drafter encounters that empty comment form and the runtime takes the sub out of rotation. None of this is in mod logs you can read; it is the lived experience of every account that tries to comment there.

What about Twitter / X for promoting open source AI?

X works much better for replies than for announcements. The S4L twitter cycle does no announcement posts at all for any project; it pulls candidate tweets matching project search topics, sleeps five minutes, refetches them, and only replies to tweets whose engagement delta is at least 1 over the window. That filter discards both dead tweets and tweets that already peaked. Reply work concentrates where the conversation is still growing. The seven-style bandit chooses voice. The full snippet is on this page; it lives in run-twitter-cycle.sh.

What subs DO work for open source AI projects in April 2026?

From the not-blocked side of the same config: r/SideProject, r/AlphaAndBetaUsers, r/coolgithubprojects, r/buildinpublic, r/selfhosted, r/Automate, r/LocalLLM, r/Ollama, r/AI_Agents, r/n8n, r/opensource, r/aipromptprogramming, r/makers, r/workflows. None of these are huge. All of them have an audience that reads project descriptions and clicks GitHub links. The trick is to comment first for two or three weeks before you ever post original, so your account is not a one-shot promo throwaway when you do post.

How do I verify these numbers myself?

git clone https://github.com/m13v/social-autoposter, then: 'python3 -c "import json; c=json.load(open(\"config.json\")); print(len(c[\"subreddit_bans\"][\"comment_blocked\"]), len(c[\"subreddit_bans\"][\"thread_blocked\"]))"' returns 38 and 31. 'sed -n 110,135p scripts/post_reddit.py' shows the mark_comment_blocked function exactly as quoted above. 'jq ".projects[] | select(.name==\"S4L\").threads" config.json' shows the disabled_reason field. The page is built around facts that take five commands to confirm.

Is the ban list specific to one Reddit account, or universal?

It is the lived experience of one Reddit account that has been posting for months, but the policies it is reacting to are the subs' rules, not just one account's reputation. A fresh Reddit account would hit roughly the same wall in the same subs. The exact size of the list would differ, but the top names (LocalLLaMA, MachineLearning, SaaS, startups, indiehackers, ClaudeAI, mcp, singularity, vibecoding) are the same names other indie maintainers report as comment-hostile in 2026.

Why call this 'April 2026' instead of an evergreen guide?

Because subreddit policies, anti-bot heuristics, and the self-promo culture of individual subs change every quarter. The ban list grew six entries in the last month alone. A guide that promises 'these subs always work for open source AI' would be wrong by July. Tagging the date is honesty: this is what's true today, on this account, at this install. Use the verification commands above on the day you read this and the answer might be different.

© 2026 s4l.ai. All rights reserved.