site:s4l.ai

What site:s4l.ai returns, and the 7-style engagement taxonomy none of those pages mention.

A site: search for s4l.ai surfaces three pages: the homepage, the under-the-hood walkthrough, and the CPC calculator. They tell you what S4L does. None of them tell you how S4L picks a voice for each comment. The answer is a file called scripts/engagement_styles.py, seven named styles, and a per-platform allow/deny list.

Matthew Diakonov
9 min read
  • 7 named engagement styles
  • per-platform allow/deny list
  • logged per post to the posts table

What a site: search actually returns

If you type site:s4l.ai into Google right now, three pages come back. They describe the product at three altitudes: pitch, mechanism, and pricing proxy. That is the whole public surface area, which is why this guide exists: everything below the surface (style selection, platform rules, per-post logging) lives in the code, not on the site.

s4l.ai/ — pay per booked call
s4l.ai/under-the-hood — how the agent browses and posts
s4l.ai/calculator — CPC vs organic reply cost
this guide — the style taxonomy behind every comment

The 7 engagement styles, straight from the file

Each style has a description, an example opening, a map of where it tends to land, and a one-line constraint (the note field). A comment can only belong to one style.

critic

Points out what is missing, flawed, or naive in the thread. Reframes the problem. Never just nitpick; must offer a non-obvious insight.

storyteller

First-person narrative with concrete details (numbers, dates, names). Leads with a failure or a surprise, never pivots to a pitch.

pattern_recognizer

Names the pattern out loud. Authority through pattern recognition, not credentials. Good in engineering threads.

curious_probe

A single follow-up question about the most interesting detail, prefixed with 'curious because...'. One question only. Never multiple.

contrarian

A clear opposing position backed by experience. Must carry credible evidence; empty hot takes get destroyed.

data_point_drop

One specific metric, no links. The number does the talking, and it must be believable, not impressive.

snarky_oneliner

Short, sharp, emotionally resonant. One sentence max. Validates a shared frustration. Banned on LinkedIn and in serious subs.

Source of truth: scripts/engagement_styles.py

The full file is ~270 lines. Here is the top of the STYLES dict and the bottom (snarky_oneliner), because snarky_oneliner is the one style with a platform-level ban.

scripts/engagement_styles.py
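As a sketch of the dict's shape: the two style keys and the field names (description, example, best_in, note) come from this guide, but every string value below is a paraphrased illustration, not the file's actual contents.

```python
# Illustrative sketch of the STYLES dict in scripts/engagement_styles.py.
# Field names match this guide's description of the file; the strings
# are paraphrased stand-ins, not copied from the real source.
STYLES = {
    "critic": {
        "description": "Points out what is missing, flawed, or naive; reframes the problem.",
        "example": "the part this thread keeps skipping is...",
        "best_in": {"github": "high", "reddit": "medium"},  # assumed values
        "note": "Never just nitpick; must offer a non-obvious insight.",
    },
    # ... five more styles between the top and bottom of the file ...
    "snarky_oneliner": {
        "description": "Short, sharp, emotionally resonant; validates a shared frustration.",
        "example": "ah yes, the quarterly rewrite ritual",
        "best_in": {"twitter": "high"},  # assumed value
        "note": "One sentence max. Banned on LinkedIn and in serious subs.",
    },
}
```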

Per-platform style rules

The PLATFORM_WEIGHTS dict in scripts/engagement_styles.py decides which styles dominate, which are secondary, and which are hard-banned on each network.

Platform | Never list / notes | Dominant picks
Reddit | curious_probe (never); everything else allowed | contrarian, storyteller, snarky_oneliner
X / Twitter | all 7 styles allowed, no 'never' list | snarky_oneliner, critic, data_point_drop
LinkedIn | snarky_oneliner (never) | storyteller, pattern_recognizer, critic
GitHub | snarky_oneliner (never) | critic, pattern_recognizer, data_point_drop
Moltbook | all 7 allowed; agent voice ('my human') | storyteller, pattern_recognizer, critic

The picker reads this dict before every comment. A style in never is stripped from the candidate set before the LLM is even prompted. This is why a LinkedIn reply will never read like a Reddit shitpost.
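That filter-then-bias step can be sketched in a few lines. The tier names and LinkedIn's 'never' entry come from the table above, and the ~60/30/10 weights are the splits this guide quotes for the prompt bias; the secondary/rare assignments and the pick_style helper itself are illustrative, not the repo's code.

```python
import random

# Illustrative tiers for one platform; the real values live in
# PLATFORM_WEIGHTS in scripts/engagement_styles.py.
PLATFORM_WEIGHTS = {
    "linkedin": {
        "dominant": ["storyteller", "pattern_recognizer", "critic"],
        "secondary": ["curious_probe", "data_point_drop"],  # assumed tiers
        "rare": ["contrarian"],                             # assumed tier
        "never": ["snarky_oneliner"],
    },
}

def pick_style(platform: str, rng: random.Random = random) -> str:
    """Hypothetical picker: bias ~60/30/10 toward dominant styles and
    strip anything in 'never' before the LLM ever sees a candidate."""
    tiers = PLATFORM_WEIGHTS[platform]
    candidates, weights = [], []
    for tier, weight in (("dominant", 0.6), ("secondary", 0.3), ("rare", 0.1)):
        for style in tiers.get(tier, []):
            candidates.append(style)
            weights.append(weight / len(tiers[tier]))
    # Banned styles never enter the candidate set at all.
    assert not set(candidates) & set(tiers.get("never", []))
    return rng.choices(candidates, weights=weights, k=1)[0]
```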

How the style picker sits in the pipeline

Inputs on the left, the style-aware generator in the middle, destinations on the right. The middle node is what every run-*.sh script calls before drafting.

thread + context → style-aware generator → platform

  • Inputs: Reddit thread, Twitter thread, LinkedIn post, GitHub issue
  • Generator: engagement_styles.py
  • Outputs: drafted comment, posts table row, engagement A/B signal

What actually gets logged

After a comment is posted, the pipeline writes a row to the Neon Postgres posts table. The engagement_style column is required, not nullable. That is how the A/B test stays clean: no style, no insert.
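A minimal sketch of that no-style-no-insert guard, using sqlite3 as a stand-in for the Neon Postgres connection; the column list and the log_post helper are assumptions for illustration, not the repo's actual schema.

```python
import sqlite3  # stand-in for the real Neon Postgres driver

def log_post(conn, platform: str, url: str, engagement_style: str) -> None:
    # Mirror the NOT NULL constraint in application code: no style, no insert.
    if not engagement_style:
        raise ValueError("engagement_style is required; refusing to insert")
    conn.execute(
        "INSERT INTO posts (platform, url, engagement_style) VALUES (?, ?, ?)",
        (platform, url, engagement_style),
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (platform TEXT, url TEXT, engagement_style TEXT NOT NULL)"
)
log_post(conn, "reddit", "https://reddit.com/...", "contrarian")
```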

Example: posting a Reddit reply in the 'contrarian' style.

The taxonomy, by the numbers

Counts pulled directly from scripts/engagement_styles.py.

7 named styles
5 platforms with weights
3 platforms that ban a style outright
1 engagement_style per row, always
~60% of picks land on a dominant style
~30% on a secondary style
~10% on a rare style
~20% cap on 'recommendation' style in reply pipelines

Floor rules every style inherits

The get_content_rules helper returns these constraints regardless of which style was picked. They exist to prevent the pleaser/validator voice that consistently lands at the bottom of the engagement charts.

Shared content floor

  • No em dashes, commas and periods only
  • Never say 'I built' or 'we built' unless recommending
  • Never start with 'exactly', 'yeah totally', '100%', 'that's smart'
  • Concrete numbers, dates, timeframes over vague claims
  • Contractions, lowercase, fragments allowed (humans write like that)
  • Reddit: no product names, no links in top-level comments
  • LinkedIn: softer critic framing, no snark, no sarcasm
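The banned-opener rule is mechanical enough to lint before posting. A sketch assuming nothing beyond the rules above; violates_floor and BANNED_OPENERS are hypothetical names, not helpers from the repo.

```python
# Openers the shared content floor forbids (per get_content_rules, as
# described in this guide; the checker itself is illustrative).
BANNED_OPENERS = ("exactly", "yeah totally", "100%", "that's smart")

def violates_floor(draft: str) -> bool:
    text = draft.strip().lower()
    # Em dashes are banned outright by the floor rules.
    if "\u2014" in text:
        return True
    # So are the pleaser/validator openers.
    return any(text.startswith(opener) for opener in BANNED_OPENERS)
```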

Run this on your product

S4L applies the same 7-style taxonomy to whichever product you point it at. You brief the angle; we pick the thread, the style, and the reply, and log the row.

See pricing

Common questions about site:s4l.ai

What pages show up for site:s4l.ai right now?

Three pages are indexed: the homepage (s4l.ai), /under-the-hood (how the agent browses and posts), and /calculator (a CPC estimator that compares Google Ads cost per click to the implied cost per organic reply). This guide adds a fourth covering the engagement-style taxonomy none of those three describe.

Why does S4L have 7 engagement styles instead of one 'house voice'?

One voice across Reddit, X, LinkedIn, GitHub, and Moltbook fails on most threads. A critic tone lands in r/Entrepreneur and dies in r/Meditation. A snarky one-liner is the top style on Twitter and is explicitly banned on LinkedIn. Splitting into 7 labeled styles lets S4L A/B test which style wins per platform, per subreddit, per project.

Where exactly are the 7 styles defined?

In scripts/engagement_styles.py in the social-autoposter repo. The file exposes a STYLES dict (7 entries, each with description, example, best_in map, and a note), a PLATFORM_WEIGHTS dict with dominant/secondary/rare/never tiers per platform, and get_styles_prompt / get_content_rules helpers that every pipeline (post_reddit, engage_reddit, run-twitter, run-linkedin, engage-linkedin, post_github, moltbook_post) calls to assemble the prompt.

Which styles are banned on which platform?

LinkedIn bans snarky_oneliner (explicit 'never' entry). GitHub bans snarky_oneliner. Reddit bans curious_probe (statements beat questions, 'anyone else experience this?' reads as low-effort). Twitter and Moltbook have no 'never' list. The rules live in PLATFORM_WEIGHTS['<platform>']['never'].

Does the style get logged per post?

Yes. Every row written to the Neon posts table has an engagement_style column, and the INSERT statement in the workflow explicitly requires it to be non-null. There is also a separate recommendation style used only in reply pipelines, capped at ~20% of replies so that product mentions stay below the organic-engagement threshold.

How does S4L pick between the 7 styles for a given thread?

The picker reads the thread plus top comments, runs the platform-specific prompt block (get_styles_prompt) which biases toward dominant styles (~60%), secondary (~30%), and rare (~10%), and excludes anything in the platform's 'never' list. The LLM then picks one style and drafts the comment under that style's rules (length, first-word constraints, markdown allowed, links allowed).

Why is there a content rule against starting with 'exactly' or '100%'?

Those opening tokens correlate with the 'pleaser/validator' pattern that the engagement-style system explicitly flags as lowest-performing across all platforms. The shared get_content_rules helper forbids them, so every pipeline inherits the same floor.

Can a comment mix two styles?

No. Each post is tagged with exactly one engagement_style in the DB so the A/B signal stays clean. Blending styles would make the per-style engagement numbers unattributable.