A reddit marketing service that picks its own voice from your upvote history

Type the keyword into Google and you get agencies. Humans in a Discord, writing Reddit comments for you at 300 dollars a week. S4L is the other kind of service: a process you run, with 7 named comment styles, and a 5-post rule that decides which ones your account is allowed to lean on.

Matthew Diakonov
9 min read
4.8 from 42 reviews
7 named styles
5-post cold-start rule
Live SQL against your posts table

Two different things the phrase can mean

When people type reddit marketing service into a search bar, they usually mean one of two products:

  1. A managed service. Humans in a Slack writing comments for you. You send them a doc, they show you a Notion board, karma appears on your account.
  2. A running service you operate. You install it, point it at an account, and it drafts, schedules, and posts. You own the code, the account, and the data trail.

S4L is the second kind. This page is about one specific piece of the second kind: how the pipeline picks which of 7 voices to use on any given thread, and why the decision is recomputed from your own upvote history on every run.

The 7 styles in rotation

They are declared in a single Python dict at the top of scripts/engagement_styles.py. Six are active on Reddit. One is hard-excluded by policy. One more is reply-only.

critic · storyteller · pattern_recognizer · contrarian · data_point_drop · snarky_oneliner · recommendation (reply-only) · curious_probe (reddit: never)

critic

Point out what is missing, flawed, or naive. Reframe the problem. Never just nitpick; offer a non-obvious insight.

storyteller

First-person narrative with specific details (numbers, dates, names). Lead with failure or surprise, not success. Never pivot to a product pitch.

pattern_recognizer

Name the pattern or phenomenon. Authority through pattern recognition, not credentials.

contrarian

Take a clear opposing position backed by experience. Must have credible evidence. Empty hot takes get destroyed.

data_point_drop

Share one specific, believable metric. Let the number do the talking. No links. Numbers must be believable, not impressive.

snarky_oneliner

One short, sharp, emotionally resonant observation that validates a shared frustration. Never in small or serious subs.

curious_probe (banned on Reddit)

Single follow-up question about the most interesting detail. Listed in PLATFORM_POLICY['reddit']['never'], so the tiering engine strips it before traffic is assigned.

recommendation (reply only)

Casually recommend a project from config. Not part of the traffic tiers: it is governed by the Tier 1/2/3 link strategy in the surrounding prompt, and capped at 20% of replies.

The anchor fact, in one line of Python

Everything downstream depends on the constant below. If you remember only one thing from this page, remember the number 5 and why it is there.

scripts/engagement_styles.py
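A minimal sketch of that line, using only the constant name (MIN_SAMPLE_SIZE) and value (5) stated on this page; the surrounding code is an assumption:

```python
# scripts/engagement_styles.py (sketch; surrounding code assumed)
# Minimum successful posts a style needs before its avg_upvotes
# is trusted enough to compete for the dominant/rare tiers.
MIN_SAMPLE_SIZE = 5
```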

Below 5 successful posts, a style's average upvote number is too noisy to rank against anything else. One lucky result on a joke posted to a sub with 2 million members can warp the average for a month. The 5-sample rule is the smallest threshold that still lets a new style break in within a week of honest usage, without letting one outlier decide the tier.

7 named comment styles
5 minimum sample size before trust
60% dominant-tier traffic
3 tiers per run

What actually flows through the pipeline

The function that computes tiers is get_dynamic_tiers(platform). It runs on every pipeline invocation. It takes three inputs and produces three outputs.

Tier classifier

posts table + STYLES dict + PLATFORM_POLICY → get_dynamic_tiers() → dominant / secondary / rare

The actual SQL, not an abstraction

Competitor pages talk about machine learning and signal weighting. S4L's style classifier is one SELECT statement.

_fetch_style_stats in engagement_styles.py
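A hedged reconstruction of the query, built only from the filters, aggregates, and column names this page describes; the exact statement in the repo may differ:

```python
import sqlite3

# Hypothetical reconstruction of _fetch_style_stats; the real query
# lives in scripts/engagement_styles.py and may differ in detail.
STYLE_STATS_SQL = """
SELECT engagement_style,
       COUNT(*)     AS sample_count,
       AVG(upvotes) AS avg_upvotes
FROM posts
WHERE platform = ?
  AND status = 'active'              -- skip deleted / shadow-banned rows
  AND LENGTH(our_content) >= 30      -- drop trivial one-word replies
  AND upvotes IS NOT NULL            -- wait for Reddit to report a number
GROUP BY engagement_style
"""

def fetch_style_stats(conn, platform="reddit"):
    rows = conn.execute(STYLE_STATS_SQL, (platform,)).fetchall()
    return {style: (count, avg) for style, count, avg in rows}
```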

Three filters matter here. First, status = 'active' excludes deleted and shadow-banned rows so the average is not poisoned by comments Reddit silently removed. Second, LENGTH(our_content) >= 30 drops trivial one-word replies that happen to have a high upvote ratio because the account that got them in got lucky. Third, upvotes IS NOT NULL waits for Reddit to report a real number before the row can vote on tier placement.

Walk through a single run

1. Read your subreddit policy

The run starts by loading PLATFORM_POLICY['reddit']. For Reddit this strips curious_probe from the candidate set regardless of how it performs, because one-question replies pattern-match to low-effort solicitation.

2. Query the posts table

A single SELECT groups your past Reddit comments by engagement_style and computes COUNT(*) and AVG(upvotes). Only posts with status='active', our_content length >= 30, and a non-null upvote count participate.

3. Apply the 5-sample gate

Any style with fewer than MIN_SAMPLE_SIZE (5) successful posts gets routed to secondary. This exists so a single lucky outlier cannot push a style to the top of the rotation.

4. Sort and split trusted styles

Styles that pass the gate are sorted by avg_upvotes descending and split into thirds: top third dominant (~60% of traffic), middle secondary (~30%), bottom rare (~10%). Untrusted styles ride along inside secondary.
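The gate-then-split behaviour of steps 3 and 4 can be sketched as follows. The function and variable names here are assumptions; only MIN_SAMPLE_SIZE = 5, the descending sort by avg_upvotes, and the thirds split are stated on this page:

```python
MIN_SAMPLE_SIZE = 5  # the gate from step 3; lives in scripts/engagement_styles.py

def split_tiers(stats, banned=()):
    """Sketch of get_dynamic_tiers()'s gate-then-split step.

    stats: {style: (sample_count, avg_upvotes)} from the posts table.
    Returns (dominant, secondary, rare) style-name lists.
    """
    candidates = {s: v for s, v in stats.items() if s not in banned}
    trusted = {s: v for s, v in candidates.items() if v[0] >= MIN_SAMPLE_SIZE}
    untrusted = [s for s in candidates if s not in trusted]

    # Rank trusted styles by average upvotes, best first.
    ranked = sorted(trusted, key=lambda s: trusted[s][1], reverse=True)
    if ranked:
        third = max(len(ranked) // 3, 1)
        dominant = ranked[:third]                                  # ~60% of traffic
        rare = [s for s in ranked[-third:] if s not in dominant]   # ~10%
        secondary = [s for s in ranked if s not in dominant and s not in rare]
    else:
        dominant, secondary, rare = [], [], []

    # Untrusted (explore) styles ride along inside secondary.
    return dominant, secondary + untrusted, rare
```

With six trusted styles this yields a 2/2/2 split plus any untrusted styles appended to secondary; with an empty stats dict it returns three empty lists, which is the cold-start case described below.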

5. Compose the styles prompt

get_styles_prompt() builds a multi-section block: PRIMARY styles (use ~60% of the time), SECONDARY (30%), RARE (10%). Every style carries its own description, example, best_in subs, and note.
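A rough sketch of how those tier lists might be rendered into the prompt block. The section headers and percentages come from this page; the real get_styles_prompt() also emits each style's description, example, best_in subs, and note, which are omitted here:

```python
def styles_prompt(dominant, secondary, rare):
    # Sketch only: per-style descriptions, examples, best_in, and
    # notes from the STYLES dict are not reproduced.
    sections = [
        ("PRIMARY styles (use ~60% of the time)", dominant),
        ("SECONDARY styles (~30%)", secondary),
        ("RARE styles (~10%)", rare),
    ]
    lines = []
    for header, styles in sections:
        if not styles:  # cold start: no PRIMARY section is emitted at all
            continue
        lines.append(header)
        lines.extend(f"- {s}" for s in styles)
    return "\n".join(lines)
```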

6. Claude picks the match

For a given thread, the model reads the tiers prompt plus the thread text and decides which style fits. The prompt also carries an AVOID block for the pleaser/validator style, which consistently has the lowest engagement.

7. Post and log

On success, the new row in posts records engagement_style, our_content, platform, and (once Reddit reports it) upvotes. That row feeds the next run's tier computation.

What it looks like in the terminal

Below is a trimmed log from a three-reply run. Notice that the tiering line prints the dominant, secondary, and rare lists that the classifier produced before any Claude session started. Notice also that the curious_probe exclusion is logged, not silent. If something in PLATFORM_POLICY changes, you see it immediately.

engage_reddit.py

The value is not that it writes comments. Everyone can write a comment. The value is that it decides which voice to use on the thread, based on what has worked on this account, not on what the vendor thinks sounds good.

Design note, engagement_styles.py

Cold start vs trained account

The same pipeline run on a fresh Reddit account and on an account with 35 posts logged behaves differently. This is the difference that the MIN_SAMPLE_SIZE gate produces.

Same run, two different accounts

Cold start. No trusted performance data exists yet, so get_dynamic_tiers() returns empty dominant and empty rare lists.

  • dominant = [] (empty)
  • rare = [] (empty)
  • Every non-banned style routed to secondary
  • Prompt tells Claude to explore; no PRIMARY section emitted
  • Higher variance style mix for the first ~35 posts
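The cold-start branch in that list can be illustrated with a minimal sketch of the gate. The names here are assumptions; only the behaviour, every non-banned style parked in secondary until trusted data exists, is taken from this page:

```python
MIN_SAMPLE_SIZE = 5

def route_cold_start(stats, all_styles, banned=("curious_probe",)):
    """Sketch of the cold-start branch: no style has 5 trusted posts yet,
    so every non-banned style is parked in the explore (secondary) tier."""
    candidates = [s for s in all_styles if s not in banned]
    trusted = [s for s in candidates if stats.get(s, (0, None))[0] >= MIN_SAMPLE_SIZE]
    if not trusted:
        return [], candidates, []  # dominant, secondary, rare
    raise NotImplementedError("trained accounts go through the full thirds split")
```

Seven styles times five samples each is where the "~35 posts" figure above comes from: roughly that many logged posts before every style has had a chance to graduate.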

Against a reddit marketing agency, side by side

Both products exist because Reddit rewards a specific kind of reply. The difference is where that reply comes from and how it is picked.

Feature | Typical reddit marketing agency | S4L running service
Who writes the comment | A freelancer or pod member in a shared Slack | A Claude session spawned by the pipeline from your own configured account
How comment style is chosen | A brand voice doc or a weekly editor call | Live SQL against your posts table, tiered by avg_upvotes every run
Cold-start handling for new accounts | Not a technical concern; same playbook applied day 1 | MIN_SAMPLE_SIZE = 5 constant parks every untrusted style in secondary until proven
Style taxonomy you can audit | Informal, held in the agency's heads | 7 named styles declared in scripts/engagement_styles.py STYLES dict
What happens when a style underperforms | Bad comments stay in rotation until someone notices | Bottom third of trusted styles drops to rare (~10% of runs) automatically
Reddit tone constraints | Enforced by editorial review | PLATFORM_POLICY['reddit']['never'] hard-removes forbidden styles before tiering
Account ownership | Risk of shared logins, shared IP, suspension cascades | Pipeline runs on your infra, your account, your rate limits
Rate-limit handling | N/A (a human does not care about API limits) | preflight_rate_limit() probes quota; inline waits cap at 90 seconds or raise
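The last row references preflight_rate_limit() and a 90-second inline wait cap. A hypothetical sketch of that policy; the function name is from this page, the body is an assumption:

```python
import time

MAX_INLINE_WAIT_S = 90  # cap stated in the comparison above

class RateLimitExceeded(Exception):
    """Quota resets too far in the future to wait inline."""

def wait_or_raise(reset_in_s, sleep=time.sleep):
    # Hypothetical sketch of the inline-wait policy behind
    # preflight_rate_limit(): wait up to 90 seconds, else raise.
    if reset_in_s > MAX_INLINE_WAIT_S:
        raise RateLimitExceeded(
            f"quota resets in {reset_in_s}s, above the {MAX_INLINE_WAIT_S}s cap"
        )
    sleep(reset_in_s)
```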

Rules that live in the code, not in a brand doc

Every one of these is enforced in get_content_rules("reddit") and shipped into the prompt alongside the tier block. If you want to change any of them, you change one file.

Reddit content rules

  • Go bimodal: either 1 punchy sentence under 100 chars, or 4-5 sentences of real substance. Avoid the 2-3 sentence dead zone.
  • Start with 'I' or 'my'. First-person experience beats prescriptive 'you should do X'.
  • No markdown in Reddit comments. No headings, no bullet lists, no bold.
  • Never mention product names unless the recommendation style was picked. Never include URLs outside the Tier 1/2/3 link strategy.
  • Statements beat questions. No 'anyone else experience this?' openers.
  • Avoid the pleaser/validator style. It has the lowest average upvotes across every platform we have data for.
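The first rule, bimodal length, is mechanical enough to check in code. A hypothetical validator, not part of S4L as far as this page states; the thresholds come from the rule above:

```python
import re

def is_bimodal_ok(comment: str) -> bool:
    """Hypothetical check for the bimodal length rule: either one
    punchy sentence under 100 chars, or 4-5 sentences of substance.
    The 2-3 sentence dead zone fails."""
    sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
    n = len(sentences)
    if n <= 1:
        return len(comment) < 100
    return 4 <= n <= 5
```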

Who this is for

If you want a human to take your brand brief and post comments on Reddit, hire an agency. They exist, they are good at what they do, and this page is not for you.

If you want a running service you operate, with 7 clearly named comment styles that rebalance against 5-sample evidence from your own account, and you want to read the exact SELECT that makes the decision, this is the product. You get the tier engine, the guardrails that decide whether a thread is worth engaging with in the first place (covered in the reddit-marketing-tool guide), and a posts table that learns over time instead of a retainer that bills for time.

Want to see the tier engine decide in front of you?

30 minutes, live account, a real run. We pick three pending threads and walk through which style gets selected and why.

Book a call

Frequently asked questions

Is a reddit marketing service the same thing as a reddit marketing agency?

Not in the sense S4L uses the word. An agency is humans ghostwriting comments on your account for a monthly retainer. S4L is a running service you operate: scripts/engage_reddit.py reads pending threads, scripts/engagement_styles.py picks a style tier from your own upvote history, and a spawned Claude session drafts the comment. You keep the account, the infra, and the decision log. If the keyword 'service' in your head means 'someone else does it for me,' that is a different product.

Why is 5 the minimum sample size before a comment style is trusted?

The constant is MIN_SAMPLE_SIZE at scripts/engagement_styles.py line 129. Below 5 successful posts, the average upvote number is too noisy to rank against other styles, so the style is parked in secondary (explore) instead of dominant or rare. This specifically prevents a single outlier, like one joke that caught a large subreddit on a slow day, from pushing a style to 60% of the traffic. Once the style has 5 or more posts in the posts table with upvotes logged, it enters the trusted pool and gets sorted by avg_upvotes descending.

What are the seven comment styles and where are they declared?

They are declared in the STYLES dict at the top of scripts/engagement_styles.py: critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, and snarky_oneliner. Each entry carries a description, an example, a best_in map of recommended subreddits per platform, and a note that encodes the failure mode (for example, snarky_oneliner notes 'never in small or serious subs'). A separate recommendation style exists for reply pipelines only and is capped at 20% of replies. curious_probe is excluded from Reddit runs by PLATFORM_POLICY['reddit']['never'].

How is the 60 / 30 / 10 traffic split enforced?

It is a prompt-level split, not a hard scheduler. get_dynamic_tiers() returns three lists (dominant, secondary, rare). get_styles_prompt() then emits three headed sections that instruct Claude to use the PRIMARY styles about 60 percent of the time, SECONDARY 30 percent, and RARE 10 percent. Because Claude draws its choice from that prompt, the long-run distribution tracks those weights, but any individual thread gets the style that best matches the conversation rather than a style forced by a counter.

What happens to a freshly signed-up Reddit account with zero history?

That is the cold-start case. _fetch_style_stats returns an empty dict because the posts table has no rows for the account yet. get_dynamic_tiers detects this and returns empty dominant and rare lists with every non-banned style routed to secondary. The prompt that ships to Claude therefore has no PRIMARY section at all; the model is told to explore. After your account logs 5 successful posts for any given style, that style graduates into the trusted pool and starts competing for the dominant and rare tiers on subsequent runs.

Can I change the styles or the thresholds?

Yes, this is plain Python. The STYLES dict and the MIN_SAMPLE_SIZE constant both live in scripts/engagement_styles.py. PLATFORM_POLICY holds the per-platform never list and the tone note shown at the top of the prompt. If you want to forbid snarky_oneliner on Reddit the same way it is forbidden on LinkedIn, you add it to PLATFORM_POLICY['reddit']['never']. If you want to raise the sample threshold to 10 before trusting a style, you edit one line. The pipeline reloads these on the next run; no build step.

How is this different from the reddit-marketing-tool guide on this site?

That guide covers the pre-flight guardrails (age cutoff, lock check, rate-limit preflight) that refuse to engage in the first place. This page covers the step that runs after those guardrails pass, when the pipeline has a thread it is willing to reply to and needs to decide which of 7 voices to use. Both are parts of the same scripts/engage_reddit.py run; this one is about the style-selection layer.