s4l.ai / guide

s4l meaning: a social engagement engine whose first gate is don't get banned.

Most of the internet uses S4L as slang (Sisters For Life, Single 4 Life), or as the name of a recreational soccer league, a vehicle model, or a 2019 ML paper. In the s4l.ai context it means something else: an autoposter whose first production step is not drafting a comment; it is fetching r/{sub}/about/rules.json and classifying the rules against twelve regex patterns. The bandit, the 7 engagement styles, the 3-tier link strategy: all of it runs downstream of that one scan.

Matthew Diakonov
9 min read · 4.9 from 31
12 regex patterns in scripts/check_link_rules.py classify every subreddit
7 named ban classes (hard_no_links, text_only, ratio_rule, ...)
3-tier link policy pinned by the tag set, not the LLM
scan runs against r/{sub}/about/rules.json before a single word is drafted
hard_no_links · no_self_promo · no_referral · no_blog_links · link_requires_approval · text_only · ratio_rule (9:1 self-promo)

the uncopyable bit

The gate is LINK_BAN_PATTERNS, a 12-entry list of regexes.

Every other moving part of S4L (the engagement-style bandit, the 3-tier link policy, the per-platform never list, the dedup-against-posts query) runs downstream of this one classifier. It reads each sub's rules text, runs twelve named regexes over it, and writes a sorted tag set. Everything else reads that set.

The 12 regex patterns, verbatim

Lifted straight from scripts/check_link_rules.py lines 7 to 20. Each tuple is (pattern, ban-class-tag). The same file also fetches each sub's rules JSON and writes the union of all matching tags to /tmp/link_rules.json.

scripts/check_link_rules.py
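The actual patterns are not reproduced on this page, so here is a sketch of the shape only: a 12-entry list of (regex, ban-class tag) tuples, several patterns folding into one of the seven tags, and a helper that unions every match into a sorted set. The regexes below are illustrative stand-ins, not the repo's real ones.

```python
import re

# Illustrative stand-ins for LINK_BAN_PATTERNS (the real 12 live in
# scripts/check_link_rules.py, lines 7-20). Each tuple is (regex, tag);
# multiple patterns may fold into the same ban-class tag.
LINK_BAN_PATTERNS = [
    (r"no links?\b", "hard_no_links"),
    (r"links? (are )?not allowed", "hard_no_links"),
    (r"no self[- ]?promo", "no_self_promo"),
    (r"self[- ]?promotion is (forbidden|banned|not allowed)", "no_self_promo"),
    (r"no (referral|affiliate)", "no_referral"),
    (r"no (blog|youtube|substack|medium) links?", "no_blog_links"),
    (r"links? (must|require).{0,20}(approval|mod)", "link_requires_approval"),
    (r"text[- ]?(posts? )?only", "text_only"),
    (r"self[- ]?posts? only", "text_only"),
    (r"9\s*:\s*1", "ratio_rule"),
    (r"10\s*%\s*rule", "ratio_rule"),
    (r"promotional content.{0,30}ratio", "ratio_rule"),
]

def classify_rules(text: str) -> list[str]:
    """Run every pattern over the rules text; return the sorted union of tags."""
    lowered = text.lower()
    tags = {tag for pattern, tag in LINK_BAN_PATTERNS if re.search(pattern, lowered)}
    return sorted(tags)
```

The sorted return is what makes the tag set deterministic: the same rules text always yields the same row, regardless of match order.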

The function that turns a subreddit into a tag set

Two JSON fetches per sub (rules, about), a 1.5-second sleep between them to stay polite, and the regex pass over both. The output is a row the rest of the pipeline can read.

scripts/check_link_rules.py
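A minimal sketch of that function, assuming the endpoints and the row shape named elsewhere on this page ({sub, tags, rules_text, description}); the helper names and the abbreviated pattern list are illustrative, not the repo's actual code.

```python
import json
import re
import time
import urllib.request

# Abbreviated stand-in patterns; the real file has 12.
LINK_BAN_PATTERNS = [
    (r"no links?\b", "hard_no_links"),
    (r"no self[- ]?promo", "no_self_promo"),
    (r"text[- ]?(posts? )?only", "text_only"),
    (r"9\s*:\s*1", "ratio_rule"),
]

def _fetch_json(url: str) -> dict:
    req = urllib.request.Request(url, headers={"User-Agent": "s4l-rule-scan"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def classify(sub: str, rules_payload: dict, about_payload: dict) -> dict:
    """Union the tags matched across every rule's short_name + description."""
    rules = rules_payload.get("rules", [])
    rules_text = " ".join(
        f"{r.get('short_name', '')} {r.get('description', '')}" for r in rules
    ).lower()
    description = about_payload.get("data", {}).get("public_description", "")
    tags = {tag for pattern, tag in LINK_BAN_PATTERNS if re.search(pattern, rules_text)}
    return {"sub": sub, "tags": sorted(tags),
            "rules_text": rules_text, "description": description}

def check_sub(sub: str) -> dict:
    rules = _fetch_json(f"https://www.reddit.com/r/{sub}/about/rules.json")
    time.sleep(1.5)  # polite gap between the two fetches
    about = _fetch_json(f"https://www.reddit.com/r/{sub}/about.json")
    return classify(sub, rules, about)
```

Splitting the pure classification out of the fetching is what makes the tag set testable without touching Reddit at all.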

Where the scan sits in the pipeline

Before thread discovery writes a candidate list, before the engagement bandit picks a style, before the LLM sees a single token: the classifier has already decided which link tiers are legal on that sub.

rules.json -> classifier -> link-tier gate

[flow diagram: about/rules.json and about.json feed check_sub(); together with the project topics and posts.upvotes, the classifier resolves each sub to Tier 1 only, Tier 2 blocked, or Tier 3 allowed]

What the 3-tier link policy actually encodes

The policy lives in the prompt-building code and in the project config. The classifier decides which of the three tiers are on the table for a given sub. Most replies never leave Tier 1, and that is the point.

Tier 1: no link

Default. Pure engagement. Expand the topic, ask a follow-up, add one credible detail. The majority of replies live here. If the subreddit rules JSON came back tagged text_only or hard_no_links, the agent cannot leave this tier even if it wants to.

Tier 2: natural mention

Only when the conversation has already touched one of the project's topics. Usually 2+ replies deep. Mentions the product by name, links only if the URL genuinely helps. A sub tagged no_self_promo or ratio_rule blocks Tier 2 entirely.

Tier 3: direct ask

Someone literally asked "what do you use" or "where can I try it." Links are not only allowed, they are the polite answer. Still gated by ban tags on restrictive subs.

The tag set

hard_no_links, no_self_promo, no_referral, no_blog_links, link_requires_approval, text_only, ratio_rule. Seven ban classes derived from the 12 regexes, stored sorted on each sub's row in /tmp/link_rules.json.

Tag by tag: what breaks without the scan

The left column is what happens when an autoposter does not read the rules first. The right column is what S4L does because the tag is already known before the comment is drafted.

Naive autoposter vs S4L's rule-scan gate

Same comment, same sub, different outcomes depending on whether the ban-class tags were resolved before drafting.

hard_no_links
Naive autoposter: agent tries a URL, the comment gets removed, AutoModerator shadowbans the account.
S4L: agent enters the thread already in Tier 1; a link cannot physically be drafted.

no_self_promo
Naive autoposter: agent mentions the product naturally, mods tally 1 of 9, the account gets flagged on the second attempt.
S4L: Tier 2 and Tier 3 are stripped before the prompt is built; only Tier 1 lives.

no_referral
Naive autoposter: UTM params get stripped mid-post, the link looks suspicious, it gets removed.
S4L: tagged on ingest; the agent never tries to post a referral-flavored URL.

text_only
Naive autoposter: agent posts a text reply with an inline URL, and the entire comment gets removed as a link.
S4L: flagged before posting; the comment is forced to text-only and the product mention has no href.

ratio_rule
Naive autoposter: 9:1 violated after a few posts, a mod removes the offender's whole history.
S4L: the agent tracks its own ratio against the same posts table the bandit reads from.

link_requires_approval
Naive autoposter: the link drops, gets removed, the account gets a temporary mute.
S4L: the agent is routed to a human-approval DM path instead of posting to that sub.
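The ratio_rule row implies a running count against the posts table before every mention-carrying comment. A minimal sketch of such a guard, using sqlite3 in place of the pipeline's Neon Postgres; only the subreddit and link_tier column names come from this page, and the function name and window size are assumptions.

```python
import sqlite3

def mention_allowed(conn: sqlite3.Connection, subreddit: str, window: int = 10) -> bool:
    """Reddit's 9:1 rule: at most 1 in 10 recent posts may carry a mention (Tier 2+).

    Checks whether posting one more mention now would keep the last `window + 1`
    posts in this sub at or under one mention per ten.
    """
    rows = conn.execute(
        "SELECT link_tier FROM posts WHERE subreddit = ? "
        "ORDER BY rowid DESC LIMIT ?",
        (subreddit, window),
    ).fetchall()
    mentions = sum(1 for (tier,) in rows if tier >= 2)
    return (mentions + 1) <= max(1, (len(rows) + 1) // 10)
```

Because the guard reads the same posts table the bandit logs to, no separate bookkeeping can drift out of sync with what was actually posted.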

One real run, from fetch to tag set

Abbreviated output from a classifier pass. The tag column is what the engagement pipeline will read when it decides whether a comment on this sub can carry a URL at all.

python3 scripts/check_link_rules.py
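The console output is not reproduced here; a hypothetical row of the /tmp/link_rules.json it writes looks roughly like this. The sub name and rule text are invented for illustration; only the object keys ({sub, tags, rules_text, description}) come from the script's documented output format.

```json
[
  {
    "sub": "examplesub",
    "tags": ["no_self_promo", "ratio_rule"],
    "rules_text": "no self-promotion. follow the 9:1 rule. be civil.",
    "description": "An invented subreddit, shown only to illustrate the row shape."
  }
]
```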

How a single comment reaches its legal tier

The decision tree is shallow. If any of the ban tags are present, lower tiers are stripped before the LLM reads the prompt. The agent never "chooses" not to link; the option is simply not offered.

from candidate thread to drafted comment

1. find_threads.py returns a thread
A candidate Reddit thread, deduped against posts. The row carries the subreddit name.

2. /tmp/link_rules.json lookup
The pipeline reads the sub's tag set. If the file is stale, check_link_rules.py is re-run against the linked_subs list.

3. Tag set decides legal tiers
hard_no_links or text_only means Tier 1 only. no_self_promo or ratio_rule means Tier 1, plus Tier 3 only if a user explicitly asked. No tags means all three tiers remain.

4. Engagement-style bandit picks a style
get_dynamic_tiers fires. Styles banned on this platform are already stripped. The LLM picks one.

5. Prompt gets the allowed tier list
The prompt header names which tiers are allowed. The LLM cannot promote a comment to Tier 3 on a sub that was pre-pinned to Tier 1.

6. Comment posts, row gets logged
INSERT INTO posts(platform, subreddit, engagement_style, link_tier, ...). The next run reads the row and the bandit updates.
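The tier resolution in step 3 compresses into a few lines. A minimal sketch, assuming the tag names from /tmp/link_rules.json; the function name and signature are illustrative, not the repo's actual identifiers.

```python
def legal_tiers(tags: set[str], user_asked_for_link: bool) -> list[int]:
    """Map a sub's ban-class tags to the link tiers the prompt may name."""
    if "hard_no_links" in tags or "text_only" in tags:
        return [1]  # pinned: a URL can never be drafted on this sub
    tiers = [1]
    if not (tags & {"no_self_promo", "ratio_rule"}):
        tiers.append(2)  # natural mention only on subs without promo tags
    if user_asked_for_link:
        tiers.append(3)  # direct ask: a link is the polite answer
    return tiers
```

The point of returning a list rather than a single tier is that the prompt header names every allowed tier, and the LLM picks within that list; it cannot add to it.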

why this matters

12 regex patterns that decide if a URL is even legal on this sub

Social autoposters fail on Reddit not because the writing is weak but because they treat every subreddit as the same surface. One in three subs in a typical linked_subs list comes back with at least one ban tag. S4L's whole definition starts there: read the rules, translate them to a tag set, honor the set.

S4L by the numbers

These are constants in the code. You can grep them in the repo.

12 link-ban regex patterns in LINK_BAN_PATTERNS
7 named ban classes the patterns fold into
3 link tiers gated by those classes
5 platforms (reddit / twitter / linkedin / github / moltbook)
2 JSON fetches per sub (rules, about)
1.5s polite sleep between fetches
0h launchd cadence of engage/stats passes
0+ subreddits configurable per project

Point S4L at your subreddits

You hand S4L a list of subs and a content angle. It runs check_link_rules.py, tags every sub, pins a link tier per sub, and only then starts drafting. The ban-avoidance gate is not a setting, it is the product.

See pricing

s4l meaning: common questions

What does S4L stand for?

S4L is the branded handle for the product at s4l.ai. It is a social engagement engine, effectively 'social [autoposter] for life': an always-on pipeline that discovers relevant threads on Reddit, X/Twitter, LinkedIn, GitHub issues, and Moltbook, drafts and posts comments in a project's voice, tracks replies, runs DM outreach, and logs every action to a Neon Postgres database. The three letters and one digit exist because the full phrase is a mouthful and social-autoposter.ai was taken. The product's GitHub and package name is social-autoposter.

Why does 's4l meaning' keep showing up as a search?

Because the acronym collides with a pile of unrelated slang (Sisters For Life, Single 4 Life), a recreational soccer league called S4L, a vehicle model called Silhouette 4wd Luxury, and a 2019 machine learning paper titled S4L: Self-Supervised Semi-Supervised Learning. The s4l.ai product is one more meaning in that pile. This page exists to answer what S4L means specifically in the s4l.ai context: a social engagement engine whose first gate is ban-avoidance.

What does 'don't get banned' actually look like in the code?

It is scripts/check_link_rules.py. That file has a 12-entry list called LINK_BAN_PATTERNS, each entry a (regex, tag) tuple. For every target subreddit the script fetches https://www.reddit.com/r/{sub}/about/rules.json plus /about.json, runs each rule's short_name + description through the regex list, and unions the resulting tags onto the sub. The tags are written to /tmp/link_rules.json. Any tag in that set gates which tiers the engagement pipeline can use on that sub before a single word of the comment is drafted.

What are the seven ban classes?

hard_no_links (links of any kind forbidden), no_self_promo (any self-promotion forbidden), no_referral (referral or affiliate URLs forbidden), no_blog_links (blog, YouTube, Substack, Medium links forbidden), link_requires_approval (links need mod pre-approval), text_only (only self-posts, no link submissions), and ratio_rule (like Reddit's 9:1 self-promo rule). A sub can carry more than one tag. The tag list is stored sorted on each sub's row so agents can read it as a deterministic allowlist.

How is the 3-tier link policy tied to these tags?

Tier 1 is pure engagement with no link. Tier 2 is a natural product mention when the conversation has already hit one of the project's topics. Tier 3 is a direct answer when someone explicitly asks for a URL. A sub with hard_no_links or text_only is pinned to Tier 1 for every reply, every draft, every account. A sub with no_self_promo blocks Tier 2 entirely. A sub with ratio_rule forces the agent to track its own mention ratio before each new mention-carrying comment. The tier is resolved before the LLM sees the prompt, not inside it.

Does this mean S4L is just Reddit?

No. Reddit is the platform with the most aggressive, most publicly enumerated link rules, so the ban-avoidance code lives there. Twitter/X, LinkedIn, GitHub, and Moltbook each have their own enforcement surface, and each has its own agent config, lock, and launchd plist. The same 3-tier link strategy applies everywhere, but Reddit is the one where the regex scan is load-bearing because the platform's mods actively remove link-flavored comments.

What file do I open if I want to verify this?

scripts/check_link_rules.py. Lines 7 through 20 are the LINK_BAN_PATTERNS list. Lines 42 through 76 are the check_sub function that fetches the JSON, runs the regex classifier, and writes /tmp/link_rules.json. Run it against a list of subs in /tmp/linked_subs.txt to regenerate the classification. The output file is a flat JSON array of {sub, tags, rules_text, description} objects.

How is this different from existing S4L writeups?

The /t/s4l page describes the bandit over 7 named comment styles. The /t/site-s4l-ai page describes the engagement-style taxonomy. This page is about the gate that runs before either of those ever fires: the rule-scanning step that decides which link tier is even legal on a given subreddit. Styles decide what to say. Tiers decide whether a URL is allowed. LINK_BAN_PATTERNS decides which tiers are on the table.