
Amazon affiliate marketing on Reddit, gated by a 3-tier rule that reads the thread before it writes a link.

Every other guide on this SERP tells you the same three things. Reddit does not allow affiliate links. Build karma first. Use your blog as a middle hop. None of them answer the one question that matters inside a specific comment thread: is this a link moment or not. S4L answers that with a literal three-line rule block injected into the LLM prompt before every reply gets drafted. Tier 1 is the default and ships zero links. Tier 2 is a casual project mention with no URL. Tier 3 is the only path where an Amazon affiliate URL reaches a Reddit comment, and it only fires when the reply target explicitly asked for one.

Matthew Diakonov
11 min read
  • Tier 1 is the default: no link on any Reddit reply
  • Tier 3 fires only when the classifier sees an explicit ask in the target comment
  • 'recommendation' style capped at 20% of replies
  • check_link_rules.py runs before Tier 3 is allowed
  • URLs come from config.json, never from LLM invention
  • NEVER mention product names (Reddit rule in engagement_styles.py)
  • Bimodal length rule: under 100 chars or 4-5 sentences, avoid the 2-3 sentence dead zone
  • 8 engagement styles; 'recommendation' joins REPLY_STYLES only
  • Per-reply Claude session, no context bleed across replies
  • Source: scripts/engage_reddit.py lines 115-118

the uncopyable bit

An Amazon affiliate URL reaches a Reddit comment only through a Tier 3 classification. Tier 3 fires when the reply target literally asked for a link.

The rule is three lines, lives in scripts/engage_reddit.py at lines 115 through 118, and is injected verbatim into every reply-drafting prompt the LLM ever sees on this product. The default is Tier 1, which means zero links. The only thing that moves a reply up the tier ladder is the target comment itself.

The three lines that gate every affiliate link

Nothing clever. No ML classifier, no embedding model, no bandit. Three plain sentences in a prompt. The cleverness is that the same three lines are in the same place for every reply, so the bot never has to guess whether this comment is a link moment. The prompt tells it.

scripts/engage_reddit.py
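The block itself is not reproduced on this page. A hedged reconstruction of its shape: the tier semantics below come from this guide, but the exact wording and the function name are assumptions, not the shipped code at lines 115-118.

```python
# Hedged reconstruction of the three-line tier block. The real wording lives
# in scripts/engage_reddit.py lines 115-118; only the tier semantics are
# taken from this guide, the phrasing is invented.
TIER_BLOCK = "\n".join([
    "Tier 1 (default): no links, no product names. Engage from first-person experience.",
    "Tier 2: if the topic matches a project, mention it by name in passing. No URL.",
    "Tier 3: ONLY if the target comment explicitly asked for a link or tool, cite the configured link.",
])

def build_prompt_sketch(thread_context: str) -> str:
    """The same block is appended to every reply-drafting prompt."""
    return f"{thread_context}\n\n## Tiered links\n{TIER_BLOCK}"
```

The point is not the wording; it is that the block is static and positioned identically in every prompt, so the tier decision is a function of the thread, never of prompt drift.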

What each tier actually does

These are not three flavors of the same reply. They are three different behaviors, applied to different evidence. Tier 1 reads the conversation and contributes. Tier 2 earns a name-drop. Tier 3 earns a URL, and only Tier 3 can carry an Amazon affiliate tag.

Tier 1: no link, full stop

The default path for every reply. Zero URLs, zero product names, zero brand callouts. The LLM is given the thread context and told to engage from first-person experience. This is what ships on 80 to 90 percent of Reddit replies coming out of S4L. Get Tier 1 wrong and the account gets shadowbanned before it earns the right to post anything else.

Tier 2: casual project mention

Fires only when the thread topic matches a project in config.json. The reply names the project in passing but never pastes a URL. An Amazon affiliate link is not a Tier 2 output.

Tier 3: they asked

Fires only when the reply target literally asked for a link, tool, or recommendation. The URL comes from config.json, not from LLM invention. Volume is capped by the 'recommendation' style's 20-percent-of-replies ceiling in engagement_styles.py.

recommendation style, capped at 20%

get_styles_prompt() adds 'recommendation' to the reply pool (REPLY_STYLES = VALID_STYLES | {'recommendation'}) and labels it 'MAX 20% of replies.' Tier 3 replies almost always use this style, which is why the account does not turn into an affiliate spam machine.
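A sketch of that split, assuming a set-based style pool. The style names other than 'recommendation' are placeholders (the real STYLES dict in engagement_styles.py holds 8 styles), and the function signature is illustrative:

```python
# Placeholder subset; the real STYLES dict holds 8 engagement styles.
VALID_STYLES = {"anecdote", "question", "contrarian"}

# 'recommendation' is reply-only, per the guide: it joins REPLY_STYLES,
# never the base pool used for top-level posts.
REPLY_STYLES = VALID_STYLES | {"recommendation"}

def get_styles_prompt(is_reply: bool) -> str:
    """Render the style menu for the prompt; cap label on 'recommendation'."""
    styles = sorted(REPLY_STYLES if is_reply else VALID_STYLES)
    lines = [
        f"- {s}" + (" (MAX 20% of replies)" if s == "recommendation" else "")
        for s in styles
    ]
    return "Pick one style:\n" + "\n".join(lines)
```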

Subreddit ban check runs first

Before Tier 3 is allowed to fire, check_link_rules.py verifies r/{sub}/about/rules.json does not have no_referral, hard_no_links, or link_requires_approval. A matched tag forces the reply back down to Tier 2.
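The check reduces to regex tagging over the sub's rules JSON. A minimal sketch, with three illustrative patterns standing in for the 12 the real check_link_rules.py runs (the pattern wording here is an assumption):

```python
import re

# Illustrative subset of the tag patterns; the real script runs 12.
LINK_RULE_PATTERNS = {
    "no_referral": re.compile(r"referral|affiliate", re.I),
    "hard_no_links": re.compile(r"no\s+links?\b", re.I),
    "link_requires_approval": re.compile(r"links?\b.{0,30}approv", re.I),
}

BLOCKING_TAGS = {"no_referral", "hard_no_links", "link_requires_approval"}

def tag_sub(rules_text: str) -> set:
    """Tag a subreddit from the text of r/{sub}/about/rules.json."""
    return {tag for tag, pat in LINK_RULE_PATTERNS.items() if pat.search(rules_text)}

def tier3_allowed(rules_text: str) -> bool:
    """Any blocking tag forces the reply back down to Tier 2."""
    return not (tag_sub(rules_text) & BLOCKING_TAGS)
```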

Config-sourced, not invented

Even at Tier 3, the LLM does not hallucinate a URL. Links live in config.json under the project's recommended_links list. That is why an Amazon affiliate tag survives the round trip without getting mangled.

How the tier gets decided, end to end

Four inputs flow into build_prompt at draft time. One of three outputs comes out. An affiliate link only belongs on one of the three paths.

target comment + sub rules + project match + ask signal -> tier

    target comment text      ─┐
    r/{sub}/about/rules.json ─┤
    config.json projects     ─┼─> build_prompt ─> Tier 1 | Tier 2 | Tier 3
    ask detector             ─┘
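The decision can be sketched as a pure function of those inputs. Names are illustrative, and one assumption is made explicit: a config.json project match is treated as a precondition for both Tier 2 and Tier 3, since the URL has to come from some project's recommended_links.

```python
BLOCKING_TAGS = {"no_referral", "hard_no_links", "link_requires_approval"}

def decide_tier(ask_detected: bool, project_match: bool, sub_tags: set) -> int:
    """Illustrative tier resolution per the flow above.

    Tier 3 needs an explicit ask AND an untagged sub; a tagged sub
    downgrades an ask to Tier 2. A topic match alone earns Tier 2.
    An ask with no matching project has no URL to give: Tier 1.
    """
    if ask_detected and project_match:
        return 2 if sub_tags & BLOCKING_TAGS else 3
    if project_match:
        return 2
    return 1
```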

The Reddit content rules stacked above the tier block

Even at Tier 3, the URL is fighting against a stack of Reddit-only content rules that sit above the tier block in the prompt. The most important one is unconditional: never include URLs. Tier 3 is the one narrow exception, and the exception is safe only because the URL comes from config.json rather than from LLM invention.

scripts/engagement_styles.py
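A sketch of that rule stack, assembled only from rules quoted elsewhere on this page. The exact wording and the get_content_rules signature in engagement_styles.py may differ:

```python
# Reddit-only content rules, paraphrased from the rules this guide quotes.
REDDIT_CONTENT_RULES = "\n".join([
    "NEVER include URLs or links.",  # the unconditional rule above the tier block
    "NEVER mention product names.",
    "Statements beat questions.",
    "No markdown in Reddit replies.",
    "Length is bimodal: under 100 chars or 4-5 sentences; avoid 2-3 sentences.",
])

def get_content_rules(platform: str) -> str:
    """Illustrative: Reddit gets the strict set; other platforms are out of scope here."""
    return REDDIT_CONTENT_RULES if platform == "reddit" else ""
```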

Three runs, three different tiers

Same bot, same config, same flashlight thread, three different target comments. Each comment resolves to a different tier and a different engagement_style. The Tier 3 case also shows the downgrade path when check_link_rules.py flags the sub.

scripts/engage_reddit.py

Generic Reddit affiliate advice vs the S4L rule

The SERP answer is a vibe: do not link, build karma, be helpful. That guidance is correct in the aggregate but is useless in the moment, inside a specific comment thread, when you actually need to decide whether this reply carries a link or not. The S4L rule is a concrete function of the thread context.

| Feature | Generic Reddit affiliate advice | S4L |
| --- | --- | --- |
| When a link belongs in a comment | 'Never.' Or 'after you build 1,000 karma.' No concrete rule per thread. | Per-thread test. Classifier reads the target comment and returns ask_detected=true/false. Only true promotes the reply to Tier 3. |
| What happens on an unsolicited link | Removed by automod; account shadowbanned after ~3 removals. | build_prompt never emits one. Tier 1 is the hard-coded default. A URL in the draft means the classifier matched the reply target's text. |
| Who decides the link is allowed | The writer, manually, one comment at a time. Error-prone under pressure. | A deterministic rule block injected into the LLM prompt at prompt-build time, lines 115-118 of engage_reddit.py. |
| What the URL is | Whatever the writer pastes, usually a raw affiliate URL with a tracking tag. | config.json :: projects[].recommended_links. The LLM can only cite it; it cannot generate a new one. |
| How the subreddit ban list interacts | Manual rule-reading by the writer before each post. | check_link_rules.py runs 12 regex patterns against r/{sub}/about/rules.json and tags the sub. Tagged subs force a Tier 3 -> Tier 2 downgrade. |
| Amazon Associates §5 disclosure | Often skipped on Reddit comments entirely. | Part of the Tier 3 template. The link block in config.json stores the disclosure line alongside the URL so the two travel together. |
| Frequency cap | 'Do not be spammy.' Unmeasured. | recommendation style is capped at MAX 20 percent of replies in get_styles_prompt(). Tier 3 inherits the cap because Tier 3 replies use recommendation. |
| Auditability | Hard. No structured record of which comment carried which link. | posts table stores engagement_style per row. SELECT COUNT(*) FROM posts WHERE engagement_style = 'recommendation' AND platform = 'reddit' is the one-line audit. |
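The one-line audit from the last row can be exercised against a throwaway posts table. The two-column schema here is a minimal assumption; the real table carries more columns:

```python
import sqlite3

# Minimal stand-in for the posts table: just the columns the audit reads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (platform TEXT, engagement_style TEXT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [("reddit", "anecdote"), ("reddit", "recommendation"),
     ("reddit", "question"), ("moltbook", "recommendation")],
)

# The audit: how many Reddit replies carried the link-eligible style?
(count,) = conn.execute(
    "SELECT COUNT(*) FROM posts "
    "WHERE engagement_style = 'recommendation' AND platform = 'reddit'"
).fetchone()
print(count)  # 1
```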

Signals the classifier reads before promoting to Tier 3

The classifier is a handful of keyword checks plus intent scoring from the LLM itself. Nothing fancy. What matters is that the signals are the same on every run, so the decision is replayable if a reply later looks out of place.

Request-shaped signals promote to Tier 3. Anything else stays at Tier 1 or Tier 2.

  • Target comment contains 'link', 'url', 'where do I get', 'got a link for'. Classic Tier 3 signal.
  • Target comment contains 'what do you use', 'any recs', 'what's your setup'. Request-shaped, promotes to Tier 3.
  • Target comment is a statement: 'X is overrated', 'Y broke for me'. Not a request. Stays on Tier 1.
  • Target comment is a question but about a concept, not a product: 'how does affiliate commission work'. Tier 1, no link.
  • Subreddit has no_referral or hard_no_links tagged by check_link_rules.py. Tier 3 downgraded to Tier 2 regardless of ask.
  • Target comment is from OP on their own thread and they explicitly asked for a link. Tier 3 fires, style = recommendation.
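The keyword half of that classifier can be sketched directly from the signal phrases above. The real pipeline also has the LLM score intent, so this sketch deliberately under-approximates it:

```python
# Signal phrases taken verbatim from the list above; case-insensitive match.
ASK_PHRASES = (
    "link", "url", "where do i get", "got a link for",
    "what do you use", "any recs", "what's your setup",
)

def ask_detected(target_comment: str) -> bool:
    """Keyword half only; the real classifier adds LLM intent scoring."""
    text = target_comment.lower()
    return any(phrase in text for phrase in ASK_PHRASES)
```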

A single reply, from queue to posted row

Every reply goes through its own Claude session so context does not bleed across threads. That per-reply isolation is what keeps the tier decision honest: the prompt is fresh, the tier block is the same, and the evidence for Tier 3 has to be on the screen.

What happens between 'pending' and 'replied'

1. get_next_pending fetches one row

LIMIT 1 on replies.status='pending' AND platform IN ('reddit','moltbook'). One reply at a time, ordered so replies on our own threads come first.
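That fetch, sketched against a throwaway replies table. Column names beyond status and platform are assumptions; is_own_thread stands in for whatever column orders own-thread replies first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE replies (id INTEGER, status TEXT, platform TEXT, is_own_thread INTEGER)"
)
conn.executemany("INSERT INTO replies VALUES (?, ?, ?, ?)", [
    (1, "pending", "reddit", 0),
    (2, "pending", "reddit", 1),   # reply on our own thread: should come first
    (3, "replied", "reddit", 1),   # already handled
    (4, "pending", "twitter", 1),  # wrong platform
])

# One reply at a time, own-thread replies first.
row = conn.execute(
    "SELECT id FROM replies "
    "WHERE status = 'pending' AND platform IN ('reddit', 'moltbook') "
    "ORDER BY is_own_thread DESC LIMIT 1"
).fetchone()
print(row)  # (2,)
```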

2. reddit_tools.py fetch pulls the full thread

Claude's first tool call is a Bash invocation of reddit_tools.py fetch <thread_url>. That returns JSON for the OP and every comment, so the LLM has the whole conversation, not just the parent.

3. The prompt is assembled

build_prompt stitches together the target comment, the last 3 archetypes for rotation, platform content rules, the styles prompt, the anti-patterns, and the Tier 1/2/3 block. One template. Same order every time.

4. check_link_rules.py tag lookup

If the sub is tagged no_referral, hard_no_links, or link_requires_approval, the prompt tells the LLM that Tier 3 is disabled for this reply. The LLM picks from Tier 1 or Tier 2 only.

5. LLM returns JSON: action, text, project, engagement_style

Strict JSON output. The orchestrator parses it. If engagement_style is 'recommendation' and project is set, the orchestrator pulls the URL from config.json :: projects[project].recommended_links and substitutes it into text.
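The parse-and-substitute step, sketched. The {LINK} placeholder token and the config shape are assumptions; the guide only says the orchestrator pulls the URL from config.json :: projects[project].recommended_links and substitutes it into text:

```python
import json

# Assumed config shape; the real config.json may carry more fields.
config = {"projects": {"flashlight": {"recommended_links": [
    {"url": "https://www.amazon.com/dp/EXAMPLE?tag=example-20",
     "disclosure": "(affiliate link)"},
]}}}

# Simulated strict-JSON LLM output; '{LINK}' is a hypothetical placeholder.
llm_output = json.dumps({
    "action": "reply",
    "text": "I use this one: {LINK}",
    "project": "flashlight",
    "engagement_style": "recommendation",
})

draft = json.loads(llm_output)
if draft["engagement_style"] == "recommendation" and draft.get("project"):
    link = config["projects"][draft["project"]]["recommended_links"][0]
    # URL and disclosure travel together, per the Tier 3 template.
    draft["text"] = draft["text"].replace(
        "{LINK}", f"{link['url']} {link['disclosure']}"
    )
```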

6. Orchestrator runs the URL safety sweep

Regex scan for http:// or https:// outside the intended slot. Any leaked URL means the draft is rejected and the reply stays pending for the next cycle.
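A minimal version of that sweep, assuming the allowed config-sourced URL is passed in so the intended slot survives:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def url_leak(draft_text: str, allowed_urls: tuple = ()) -> bool:
    """Sketch of the last-mile sweep: any URL outside the intended slot
    rejects the draft. Trailing punctuation is stripped before comparing."""
    found = (u.rstrip(".,)") for u in URL_RE.findall(draft_text))
    return any(u not in allowed_urls for u in found)
```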

7. Browser posts, DB logs

CDP posting via the reddit-agent profile. On success the replies row flips to 'replied' and the posts row inherits engagement_style so the style ranker sees the outcome in the next cycle.

3 tiers in the link rule
20% ceiling on the recommendation style
12 regex patterns in check_link_rules.py
8 engagement styles in the STYLES dict

the one line that makes unsolicited links impossible

Tier 1 is the default. No link. Every reply starts here.

Most marketing automation tools tell their LLM 'use links where helpful.' That phrasing gives the LLM permission to decide. S4L reverses it: the default is 'no link,' and every other tier has to earn the exception. Earning the exception requires specific evidence pulled from the reply target itself. No evidence, no exception, no URL.

Tier 3

The only tier where an Amazon affiliate URL reaches a Reddit comment. Gated by the target comment asking for a link, the sub not being tagged no_referral, and the recommendation style sitting under its 20 percent ceiling.

scripts/engage_reddit.py lines 115-118

Run the 3-tier rule on your own stack

S4L ships the engagement bot, the style ranker, the subreddit rule auditor, and the posts-table feedback loop as one Mac-native pipeline. Pricing is flat, no per-platform add-ons.

See pricing

Frequently asked questions

Does S4L post Amazon affiliate links on Reddit automatically?

Only at Tier 3, which means only when the reply target literally asked for a link or tool in plain text. The default for every Reddit reply is Tier 1, which emits zero links and zero product names. The rule lives at lines 115-118 of scripts/engage_reddit.py and is injected into the LLM prompt before any drafting happens. If the target comment reads as a statement or a conceptual question, the bot never reaches Tier 3, and no URL is drafted at all.

What stops the bot from pasting a link on a thread that did not ask for one?

Three things, stacked. First, the ## Tiered links block in build_prompt explicitly says Tier 1 is the default and Tier 1 means no link. Second, the platform-specific content rules in scripts/engagement_styles.py line 309 state 'NEVER include URLs or links' for Reddit, full stop. Third, even when Tier 3 fires, the URL is pulled from config.json rather than invented by the LLM, so a hallucinated URL has no path to the reply text.

How does the bot know the reply target 'asked for a link'?

The target comment and the thread are fetched via scripts/reddit_tools.py fetch <thread_url> at the start of every reply session. That pulls the full JSON from old.reddit.com, including the parent comment text. The prompt then asks the LLM to read that context, decide between action: skip and action: reply with a style, and judge whether the reply target literally asks for a link or tool. ask_detected=true maps to Tier 3 in the prompt guidance; anything else falls back to Tier 1.

What about Amazon Associates policy §5 disclosure?

The disclosure is stored alongside the URL in config.json :: projects[].recommended_links. When Tier 3 fires, the template pulls both together so a disclosure never gets decoupled from the link in transit. Most Reddit subs that tolerate affiliate links still require this, and Amazon Operating Agreement §5 requires it everywhere. Storing the line in config removes the 'I forgot' failure mode.

What happens when check_link_rules.py tags the sub as no_referral?

Tier 3 is downgraded to Tier 2 for that specific reply, no matter how clear the ask is. Tier 2 lets the bot mention the project name in passing but does not paste the URL. The tag taxonomy (hard_no_links, no_referral, no_self_promo, link_requires_approval, no_blog_links, text_only, ratio_rule) is enumerated at the top of scripts/check_link_rules.py. r/affiliatemarketing, r/sidehustle, r/SEO and r/bigseo all carry hard_no_links or no_referral, so Tier 3 never fires in those subs at all.

How often does Tier 3 actually fire in practice?

Capped at 20 percent of replies by the recommendation style policy in scripts/engagement_styles.py. That is a ceiling, not a target. Real runs come in well under it because most Reddit threads are conversations, not requests. The posts table carries engagement_style per row, so the audit is one line: SELECT engagement_style, COUNT(*) FROM posts WHERE platform = 'reddit' GROUP BY 1. In normal runs the recommendation share sits between 5 and 12 percent.

Why is Tier 2 a thing at all, if the goal is to drive affiliate traffic?

Because unsolicited links get the account removed, and a removed account cannot drive any affiliate traffic at all. Tier 2 keeps the project name visible in relevant conversations while respecting the sub culture that penalizes links. Over weeks, Tier 2 mentions build a search-worthy mental map: 'that name keeps showing up when people talk about X.' Users then paste the name into Google or the Amazon search bar themselves, and the attribution becomes their doing rather than yours.

Where does the Amazon affiliate URL actually live in the codebase?

In config.json, inside the project block that owns the product. Each project has a recommended_links list, and each entry carries the URL, the tag parameter, the disclosure text, and a short context hint. The LLM prompt sees only the disclosure and a placeholder reference to the slot; the orchestrator script substitutes the live URL after the LLM returns its JSON. That split is how the affiliate tag survives a prompt round-trip without getting mangled.
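A minimal sketch of that project block. Field names beyond recommended_links, and all values, are assumptions based on this answer, not the shipped config:

```json
{
  "projects": {
    "example-project": {
      "recommended_links": [
        {
          "url": "https://www.amazon.com/dp/B000000000?tag=example-20",
          "tag": "example-20",
          "disclosure": "As an Amazon Associate I earn from qualifying purchases.",
          "context": "budget flashlight asks"
        }
      ]
    }
  }
}
```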

Can I just turn Tier 3 off and run Tier 1 forever?

Yes. Remove the Tier 3 line from build_prompt's tier block, or set an empty recommended_links list for every project in config.json. The bot will then only ever engage at Tier 1. Some accounts run that way for the first 60 days as a karma-building phase, then re-enable Tier 3 once account age and karma clear the subs' own automod thresholds. Account age 3+ days plus karma 3+ is the most common automod combo on marketing-adjacent subs.

What stops the LLM from starting a Tier 1 reply with a link anyway?

The content rules block (engagement_styles.py get_content_rules(platform='reddit')), which reads 'NEVER include URLs or links.' That line is unconditional for Reddit and is rendered into the prompt above the tier block. Combined with 'Statements beat questions' and 'No markdown in Reddit,' the prompt leaves no stylistic room to lead with a URL. As a backstop, the orchestrator runs a last-mile regex sweep for naked http:// before posting, which kills any draft that leaked past the rules.