s4l.ai / guide / social media automation platforms

Social media automation platforms all solve the wrong problem if you run more than one product.

Hootsuite, Buffer, Sprout, Later, SocialBee, Eclincher. Every one of them assumes a single brand's calendar and helps you spray it across accounts. S4L starts from the opposite assumption: you run three to five products through one pipeline, and the hard question is not when to post, it is which of your own projects has earned the next slot today.

Matthew Diakonov
10 min read
4.9 from 33 ratings
Deficit-weighted project selection in scripts/pick_project.py
7 named engagement styles, live-ranked per platform
One Claude session per reply, zero shared context
URL dedup on Reddit, author dedup on LinkedIn

Projects that share one pipeline right now

S4L · Mediar · Terminator · Fazm · Appmaker · Assrt · Paperback Expert · Analytics

Each project sits in one config.json with its own weight, its own topic lists per platform, and its own landing pages. The shared memory is the posts table; the scheduler's job is picking which of these projects gets to speak next.

The gap on the first page of Google

Search "social media automation platforms" and you get a stack of listicles. They cover scheduling, AI caption drafting, approval workflows, content calendars, analytics dashboards, integrations with 150+ tools. What they do not cover, at all, is the problem of fair share between multiple products inside one operator's pipeline.

If you run three products through one account (or three accounts under one operator), every post is a zero-sum choice: the slot you spend on Project A is a slot you did not spend on Project B. You need an allocation rule. No scheduler exposes one because schedulers were built for marketing teams who already decided what they were posting.

The anchor fact: deficit = target_share minus actual_share

This is the single line of code that separates S4L from the scheduling category. Every project in config.json has a weight. Every day, S4L recomputes actual share from the posts table. The project that has fallen furthest behind its weight target wins the next slot. Ties inside a 5% band are broken randomly so the same underdog doesn't monopolize the queue.

scripts/pick_project.py

The five lines that matter are 83 through 87. target_share is a ratio of weights. actual_share is a ratio of today's posts. deficit is the gap. Everything else is just sort-and-sample.
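The scoring loop can be sketched in a few lines. Names and the exact band arithmetic here are illustrative, not a copy of the real version in scripts/pick_project.py:

```python
import random

def pick_project(projects, todays_counts, tie_band=0.05):
    """Deficit-weighted pick, as described above.

    projects:      {name: weight} for projects eligible on this platform
    todays_counts: {name: posts made today}
    """
    total_weight = sum(projects.values())
    # On an empty day every actual_share is 0, so deficit == target_share.
    total_posts = sum(todays_counts.get(p, 0) for p in projects) or 1

    # deficit = target_share - actual_share
    deficits = {
        name: weight / total_weight - todays_counts.get(name, 0) / total_posts
        for name, weight in projects.items()
    }
    top = max(deficits.values())
    # Ties within the band are sampled randomly so one underdog
    # doesn't monopolize the queue.
    candidates = [p for p, d in deficits.items() if top - d <= tie_band]
    return random.choice(candidates)
```

With weights 3:1 and three posts already spent on the heavy project, the light project's deficit wins the next slot.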

7 named engagement styles
8 projects in the weighted rotation
5% tie band for deficit sampling
3 platforms with their own pipeline

One posts table, many specialist pipelines

There is no single "engine" in S4L. There is a shared posts table (the memory) and a set of specialist scripts, one per platform. Each one knows how that platform's feed works, how to dedup on it, and how to sign into it. The hub is the database.

Scheduler, drafters, posters

Scheduler: config.json + pick_project.py
Style feedback: top_performers.py
Thread discovery and dedup: find_threads.py
Shared memory: posts table
Posters: engage_reddit.py, engage_twitter.py, engage_linkedin.py, engage-dm-replies.sh

What happens when one post fires

1. Read today's actual shares from the posts table

A SELECT groups today's posts by project_name and platform. This is the 'actual_share' side of the deficit equation. Nothing is estimated.
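Against a toy schema (only the columns the deficit math reads; the real posts table has many more), that query looks roughly like:

```python
import sqlite3

# In-memory stand-in for the posts table; real storage is Postgres.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (project_name TEXT, platform TEXT, posted_at TEXT)"
)
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, date('now'))",
    [("s4l", "reddit"), ("s4l", "reddit"), ("mediar", "reddit")],
)

# Group today's posts by project for one platform.
rows = conn.execute(
    """SELECT project_name, COUNT(*) AS n
       FROM posts
       WHERE platform = ? AND date(posted_at) = date('now')
       GROUP BY project_name""",
    ("reddit",),
).fetchall()

total = sum(n for _, n in rows)
actual_share = {name: n / total for name, n in rows}
```

Nothing is estimated: actual_share is a straight count over today's rows.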

2. Compare against target shares in config.json

Each project has a weight. target_share = weight / total_weight of projects that actually have topics for this platform. Projects without topics for this platform are dropped before the ratio is computed.
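A minimal sketch of that filtering, assuming config.json keys along these lines (the field names are illustrative):

```python
def target_shares(config, platform):
    """weight / total weight, restricted to projects that actually
    have topics for this platform."""
    eligible = {
        name: proj["weight"]
        for name, proj in config["projects"].items()
        if proj.get("topics", {}).get(platform)  # drop topic-less projects
    }
    total = sum(eligible.values())
    return {name: weight / total for name, weight in eligible.items()}
```

A project with no topics for the platform is dropped before the ratio is computed, so the remaining weights always sum to a full 100% of today's slots.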

3. Sort projects by deficit, descending

deficit = target_share minus actual_share. The project that has fallen furthest behind today wins the slot. Ties within 5% of the top deficit are broken randomly to avoid always picking the same underdog.

4. Fetch the top-performers report for this platform

top_performers.py queries posts where status = 'active', engagement_style IS NOT NULL, and content length >= MIN_CONTENT_LEN. Rows with product mentions or URLs are filtered out before the averages are computed.

5. Spawn a one-shot Claude session

The engagement shell runs a fresh claude -p with the reddit-agent / linkedin-agent / twitter-agent MCP config attached. The performance table, the chosen project, and the archetype exclusion list are injected into the prompt.
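A sketch of what the shell might assemble. Only claude -p and --strict-mcp-config appear on this page; the config path and the flag layout are assumptions:

```python
def build_claude_cmd(platform, prompt):
    """One-shot invocation for a single reply. The MCP config path
    below is a hypothetical placeholder, not the real layout."""
    mcp_config = f"~/.claude/mcp/{platform}-agent.json"  # assumed path
    return [
        "claude", "-p", prompt,          # print mode: run once, exit
        "--mcp-config", mcp_config,      # platform-specific agent tools
        "--strict-mcp-config",           # nothing outside that config
    ]
```

The caller would hand the result to something like subprocess.run(cmd, check=True), with the performance table, chosen project, and archetype exclusion list already rendered into the prompt string.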

6. Record the draft, the style, the archetype, the depth

After the reply posts, a row lands in posts with engagement_style, archetype, depth, project_name, and thread_url (or thread_author, on LinkedIn). Tomorrow's deficit math reads these rows directly.
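Sketched against a toy schema (the real posts table has more columns, and the archetype value here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE posts (
        project_name TEXT, platform TEXT, engagement_style TEXT,
        archetype TEXT, depth INTEGER, thread_url TEXT, thread_author TEXT,
        posted_at TEXT DEFAULT (date('now')))"""
)

# Reddit rows carry thread_url; LinkedIn rows would carry thread_author.
conn.execute(
    """INSERT INTO posts
       (project_name, platform, engagement_style, archetype, depth, thread_url)
       VALUES (?, ?, ?, ?, ?, ?)""",
    ("s4l", "reddit", "contrarian", "hook_payoff",  # archetype is made up
     1, "https://reddit.com/r/example/comments/abc"),
)
```

Tomorrow's deficit math, the style feedback loop, and the dedup pass all read these same rows; no second store to drift out of sync.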

The style table is filtered before the model sees it

A common failure in AI-assisted social tools: the loop optimizes for whatever got upvotes, including the posts that worked because they broke the rules (dropped a link, named a product, begged for DMs). S4L strips those rows out of the training table before computing per-style averages.

scripts/top_performers.py
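The filter can be sketched like this, with a placeholder PRODUCT_NAMES list (the real one lives in scripts/top_performers.py):

```python
PRODUCT_NAMES = ["S4L", "Mediar", "Terminator"]  # illustrative subset

def has_anti_pattern(content):
    """True if the row broke the rules: mentioned a product or
    dropped a link. Such rows are excluded before averaging."""
    lowered = content.lower()
    if any(name.lower() in lowered for name in PRODUCT_NAMES):
        return True
    return any(tok in lowered for tok in ("http://", "https://", "www."))

rows = [
    ("great point, shipped the same bug last year", 40),
    ("check out https://example.com", 120),   # link -> filtered
    ("S4L solves this", 90),                  # product mention -> filtered
]
clean = [(content, upvotes) for content, upvotes in rows
         if not has_anti_pattern(content)]
```

The 120-upvote link-drop never reaches the averages, so the style table only rewards what the rules actually allow.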

Dedup is not one rule; it is one rule per platform

Reddit has stable thread URLs, so dedup is URL-keyed. LinkedIn surfaces the same post under different activity URLs depending on how the feed found it, so dedup is author-keyed. A single shared rule would silently double-post on LinkedIn or miss legitimate replies on Reddit.

scripts/find_threads.py
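The branching might look like this; the field names follow the columns named elsewhere on this page, everything else is a sketch:

```python
def dedup_key(platform, thread):
    """Reddit threads key on stable URLs; LinkedIn threads key on
    the author, because one post surfaces under many activity URLs."""
    if platform == "linkedin":
        return ("author", thread["thread_author"])
    return ("url", thread["thread_url"])

seen = set()

def is_duplicate(platform, thread):
    key = dedup_key(platform, thread)
    if key in seen:
        return True
    seen.add(key)
    return False
```

Same author, different LinkedIn URL: duplicate. Same Reddit URL, different author: duplicate. Everything else passes.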

What holds the pipeline together

One Claude session per reply

engage_reddit.py and engage-dm-replies.sh start a fresh claude -p invocation for every single comment. Context bleed between replies is impossible because there is no shared context. Slower per call, cleaner per output.

7 named engagement styles

critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner. The drafting prompt is conditioned on the style, not on generic 'write a comment'.

Per-platform browser profiles

twitter, reddit, and linkedin each have their own persistent Playwright profile under ~/.claude/browser-profiles/. Session state stays with the platform that earned it.

Archetype rotation

Before drafting, the orchestrator reads the last N reply archetypes used on that platform and excludes them. Two consecutive snarky_oneliners are blocked at the selection layer, not left to the model's taste.
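Building that exclusion list can be sketched as follows, assuming rows arrive newest-first; the default N=3 is an assumption, the page only says "last N":

```python
def archetype_exclusions(recent_rows, platform, n=3):
    """Last N archetypes used on this platform, deduped, to be
    injected into the drafting prompt as an exclusion list."""
    recent = [row["archetype"] for row in recent_rows
              if row["platform"] == platform and row["archetype"]]
    # dict.fromkeys preserves order while dropping repeats
    return list(dict.fromkeys(recent[:n]))
```

The model never has to remember its own history; the orchestrator hands it the ban list up front.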

Depth-aware reply ordering

replies.depth distinguishes top-level comments from nested replies. The engagement loop prioritizes replying to your own original posts first, then to top-level comments, then to nested ones.
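That ordering reduces to a sort key. Treating depth == 0 as top-level and the is_own_post flag are assumptions about the schema:

```python
def reply_priority(row):
    """0 = reply to one of your own original posts,
       1 = top-level comment, 2 = nested reply."""
    if row.get("is_own_post"):
        return 0
    return 1 if row["depth"] == 0 else 2

rows = [
    {"id": "nested", "depth": 2},
    {"id": "own", "depth": 0, "is_own_post": True},
    {"id": "top", "depth": 0},
]
queue = sorted(rows, key=reply_priority)
```

sorted() is stable, so within each tier the original fetch order is preserved.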

Anti-pattern filtering in the feedback loop

top_performers.py strips rows with product mentions or links before computing per-style averages. High-upvote posts that broke the rules don't get to train the next draft.

S4L vs generic social media automation platforms

Who the product is built for
Generic scheduler: One brand, many channels
S4L: One operator, many products, many channels

How the next post's topic is chosen
Generic scheduler: From a content calendar you filled in
S4L: Highest deficit between weight target and today's actual share

How tightly replies are isolated
Generic scheduler: One long-running job, shared context
S4L: One claude -p invocation per reply, no shared state

How duplicate threads are detected
Generic scheduler: One dedup rule for everything
S4L: URL-based for Reddit / Moltbook, author-based for LinkedIn

How drafting style is chosen
Generic scheduler: Manual template library
S4L: Live rolling average of upvotes per engagement_style on that platform

What the system learns from
Generic scheduler: All prior content, equally weighted
S4L: Top performers excluding posts with links or product mentions

Conversation memory
Generic scheduler: Inbox with threading
S4L: posts + replies + dm_conversations with tier and interest_level enums

Where posting decisions live
Generic scheduler: Scheduler UI
S4L: config.json weights + posts table counts, reconstructable with one SELECT

By the numbers

7 engagement styles, each with its own draft prompt

5% deficit band inside which ties are broken randomly

1 Claude session per reply (no batching, no context reuse)

Who should care about this

If you run one brand, pick Buffer. The UI is finished, the team features are real, and a deficit equation would feel like overkill. If you run a portfolio (indie founder, solo studio, multi-project operator), the equation is actually the product. You stop deciding what to post; the math decides, based on what is behind today and what has been working on that platform this week.

S4L is the engine, not a polished marketing app. The code is Python and shell. The state is a Postgres table. The UI is whatever you write on top. This page is for people who want to know exactly how the loop works before they adopt it, not people looking for a drag-and-drop calendar.

Frequently asked questions

What makes S4L different from the other social media automation platforms?

Every other platform on the top SERP page (Hootsuite, Sprout Social, SocialBee, Buffer, Later, Eclincher) is built around one brand posting to many channels. S4L is built around one operator running many products through one pipeline. The concrete consequence is scripts/pick_project.py, which picks the next project to mention by computing deficit = target_share minus actual_share across today's posts and sampling from projects within 5% of the top deficit. No scheduler in the category does this because they don't need to: they assume the calendar was already filled in by a human.

How does the deficit-based project picker actually work?

Each project in config.json has a weight. S4L computes target_share as weight divided by total weight of projects that have topics for the current platform. actual_share comes from a SELECT on the posts table grouped by project_name for today. deficit = target_share minus actual_share, sorted descending. Candidates within 5% of the top deficit are sampled randomly to break ties. You can read the full scoring loop in scripts/pick_project.py, lines 75 through 97.

Why does every reply run in its own Claude session?

Context bleed. If you run 20 replies in one long-lived process, each draft sees the previous drafts and starts to rhyme with them. The voice homogenizes, the archetypes collapse, and you end up with twenty comments that all sound like the same person had a long conversation with themselves. engage_reddit.py and engage-dm-replies.sh solve this by invoking claude -p --strict-mcp-config fresh for every reply. Slower, but the outputs stay distinct.

Why does LinkedIn use author-based dedup instead of URL-based?

LinkedIn's search endpoints return the same post under several different activity URLs depending on how the feed surfaced it. A URL check would let the system reply twice to one post under different URL fingerprints. S4L sidesteps that by keying on thread_author: if we have any active row for a given author on LinkedIn, we skip new threads from that author. Reddit doesn't have this problem because thread_url is stable, so Reddit stays URL-based. The branching lives in scripts/find_threads.py.

What are the 7 engagement styles, and what is the feedback loop?

critic, storyteller, pattern_recognizer, curious_probe, contrarian, data_point_drop, snarky_oneliner. Each post that goes out is tagged with one style. top_performers.py runs an SQL aggregation over active posts per platform and returns avg_upvotes grouped by engagement_style. That table is injected into the next drafting prompt. Styles with negative averages on a platform (curious_probe on Reddit, historically) get flagged as anti-patterns in the failure annotator. The drafting loop reads the live numbers before every reply.

How does S4L avoid learning from 'great' posts that broke the rules?

has_anti_pattern in scripts/top_performers.py (lines 54-64) filters out posts whose content mentions any PRODUCT_NAMES or any http:// / https:// / www. substring before the top-performers table is built. A comment that scored well because it dropped a link is not what we want the next draft to imitate. Removing those rows from the aggregation keeps the avg_upvotes table honest about what the rules actually allow.

What lives in the posts table besides content?

engagement_style, archetype, depth, project_name, thread_url, thread_author, platform, status, upvotes, our_url, our_content, posted_at, link_edited_at, link_edit_content. The deficit math reads project_name + platform + posted_at. The style feedback loop reads engagement_style + upvotes. The dedup logic reads thread_url or thread_author. Everything is in one table by design, because the scheduler, the drafter, and the link-edit pass all need to see the same row.

How does archetype rotation interact with the 7 engagement styles?

Style is the high-level voice (critic vs storyteller). Archetype is the specific reply shape used inside that voice (a specific hook, a specific payoff structure). Before drafting, engage_reddit.py fetches the last several archetypes used on that platform and passes them to the prompt as an exclusion list. Two consecutive snarky_oneliners of the same archetype shape are blocked at the selection layer. The model doesn't have to remember its own history, because the orchestrator does.

Can S4L replace Hootsuite or Buffer?

Only if what you need is a multi-product engagement pipeline, not a calendar. The scheduling primitive in Hootsuite / Buffer / Later is 'post this piece of content to this account at this time'. The scheduling primitive in S4L is 'the project with the highest deficit gets the next slot, styled by whatever is working on this platform right now, running in its own Claude session'. Those are different problems. If you run one brand, Buffer is simpler. If you run 3 to 5 products and need fair share across them, S4L's deficit loop is the thing.

Is the config.json weight a hard quota?

No, it is a target. If one project has weight 3 and another has weight 1, over the long run you should see roughly 75% / 25% share. On any given day the actual shares drift because deficit is computed from today's posts only. The 5% randomness band around the top deficit also smooths out hard ordering, so you don't get long runs of the same project just because it started the day at zero. The math is in scripts/pick_project.py.

Run one pipeline across every product you own

S4L is the social media automation platform for operators who post from three or more projects. The project rotation is math, not a calendar. The style selection is live performance data, not a template library.

See S4L