s4l.ai / guide / auto social media posting

Auto social media posting that never touches a platform API.

Every other tool on the SERP for this phrase is a cloud scheduler wrapped around Twitter, Reddit, and LinkedIn OAuth. S4L is not. S4L drives a real Chromium profile that is already logged into your real accounts, attaches to it over Chrome DevTools Protocol, and writes into the live DOM. No client_id, no refresh tokens, no scope-review waits. The runtime is a folder in your home directory and 37 launchd jobs on your Mac.

Matthew Diakonov
10 min read
  • Zero platform API tokens required for Reddit, Twitter, or LinkedIn
  • Persistent Chromium profile per platform under ~/.claude/browser-profiles/
  • 37 launchd plist files orchestrate the full posting schedule
  • UNIQUE(platform, thread_url) Postgres dedup stops duplicate comments before the DOM is touched

the uncopyable bit

The entire auth model is one directory on disk.

For Reddit, the session lives at ~/.claude/browser-profiles/reddit. For Twitter, ~/.claude/browser-profiles/twitter. For LinkedIn, ~/.claude/browser-profiles/linkedin. Each is a full Chromium userDataDir: cookies, service workers, IndexedDB, the works. You log in once. Every subsequent launchd run reuses the directory, the fingerprint, and the session. When the scheduler wants to post, it connects via Chrome DevTools Protocol to a Chromium process started against that userDataDir and drives the real DOM. That is the whole auth story.

One post, end to end, without an API call

In a typical Buffer-style tool the arrows run your scheduler → their cloud → the platform API. Here they run your launchd → your Python → your Chromium → the platform DOM. Nothing leaves your machine except the final HTTP request the browser itself makes.

The full write path for one Reddit comment

1

launchd fires the plist

com.m13v.social-reddit-threads at 00:10, 06:10, 12:10, 18:10. StartCalendarInterval, not a cloud scheduler.

2

skill/run-reddit.sh acquires the lock

mkdir /tmp/social-autoposter-reddit.lock. If busy, wait up to 3600s. If older than 3h, reap and retry.

3

claude -p --strict-mcp-config spawns

Child agent sees exactly one MCP: reddit-agent-mcp.json, pinned to @playwright/mcp with --user-data-dir=~/.claude/browser-profiles/reddit.

4

find_threads.py hands the agent candidates

Neon Postgres has already deduped the candidate list against existing (platform, thread_url) rows. The agent only sees untouched threads.

5

Python attaches via CDP

reddit_browser.py scans ps aux for --remote-debugging-port=<N> on a chromium that is already on old.reddit.com, then calls into it over CDP.

6

DOM write: type comment, click submit

No API call. The browser submits the form exactly like a human click does. Platform rate limits apply the same way they do to your own account.

7

log_post.py runs UNIQUE(platform, thread_url)

If the row already exists, return DUPLICATE_THREAD and skip the write. Otherwise INSERT INTO posts with status='active', posted_at=NOW().

The pipeline, at one glance

On the left, every input the scheduler has. On the right, every surface it writes to. In the middle, a single locked, strict-MCP child process per platform. Nothing else gets in.

launchd → strict claude agent → browser profile → platform DOM

launchd plist
skill/lock.sh
find_threads.py
.env config
claude -p --strict-mcp-config
Reddit DOM
Twitter DOM
LinkedIn DOM
Neon posts table

The CDP attach, in the actual source

This is the top of the reddit browser helper. Note that the lock lease is 300 seconds, matched exactly by the shell wrapper. The wait budget is 45 seconds; if the scheduled engagement job is holding the browser, the post is skipped on this fire and retried on the next.

scripts/reddit_browser.py
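The source file itself is not shown here, so the following is a minimal sketch of the scan-and-attach step it performs. The function name and the connection call in the trailing comment are illustrative assumptions, not the actual scripts/reddit_browser.py:

```python
import re
from typing import Optional

PORT_RE = re.compile(r"--remote-debugging-port=(\d+)")

def find_debug_port(ps_output: str, needle: str = "chromium") -> Optional[int]:
    # Scan `ps aux` output for a Chromium launched with a CDP port, as the
    # helper described above does before attaching. First match wins.
    for line in ps_output.splitlines():
        if needle in line.lower():
            m = PORT_RE.search(line)
            if m:
                return int(m.group(1))
    return None

# With the port in hand, the attach itself could be done with, e.g.,
# playwright's chromium.connect_over_cdp(f"http://127.0.0.1:{port}");
# the real helper's exact calls may differ.
```

If no matching Chromium is running, the helper returns nothing and the run is skipped rather than spawning a second browser against the same profile.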

One profile directory per platform. One lock per profile.

The pipeline runs five platforms at once. A hang on one cannot block another, because every piece of shared state is namespaced by the platform string.

Reddit (old.reddit.com)

Profile at ~/.claude/browser-profiles/reddit. Posts via CDP attach to the running reddit-agent MCP Chromium. Lock file ~/.claude/reddit-agent-lock.json, 300s expiry. Scan + comment cadence fires 4x/day at :10 past the hour.

Twitter / X

Profile at ~/.claude/browser-profiles/twitter. The cycle plist runs every 1800 seconds: scan + enrich, sleep 300s, then re-score + post. Two-phase so the same invocation does not both find and act on a candidate.

LinkedIn

Profile at ~/.claude/browser-profiles/linkedin. Separate MCP config, separate lock file. A stuck LinkedIn run cannot starve Reddit or Twitter because locks are namespaced per platform.

Moltbook

Optional API-driven platform. Uses MOLTBOOK_API_KEY from .env rather than a browser profile. Included to prove the pipeline can mix browser-based and API-based targets under the same scheduler and same posts table.

GitHub issues + Octolens

The engage-github and octolens-* jobs search Octolens mentions, filter to eligible threads, then open issues or post comments. Also runs through the locked, strict-MCP-config pattern, just with a different child tool set.

Neon Postgres

One posts table shared across all platforms. Every row logs platform, thread_url, our_url, our_content, posted_at, upvotes, engagement_style. Dedup is a single UNIQUE constraint on (platform, thread_url).

Dedup runs before the browser, not after

A duplicate comment on Reddit costs reputation, not just a wasted write. The helper short-circuits the entire call chain if a row exists with the same (platform, thread_url), even if that row came from a crashed prior run. The browser never opens.

scripts/log_post.py
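The helper's dedup logic can be sketched like this. sqlite3 stands in for Neon Postgres so the example is self-contained, and the function shape is illustrative rather than the real scripts/log_post.py:

```python
import sqlite3
from datetime import datetime, timezone

def make_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    # Same dedup guarantee as the Neon schema: one row per (platform, thread_url).
    conn.execute("""CREATE TABLE posts (
        platform TEXT NOT NULL, thread_url TEXT NOT NULL,
        our_content TEXT, status TEXT, posted_at TEXT,
        UNIQUE(platform, thread_url))""")
    return conn

def log_post(conn, platform: str, thread_url: str, content: str) -> dict:
    if not thread_url.startswith("http"):
        return {"error": "BAD_URL"}           # validate before touching the DB
    exists = conn.execute(
        "SELECT 1 FROM posts WHERE platform = ? AND thread_url = ?",
        (platform, thread_url)).fetchone()
    if exists:
        return {"error": "DUPLICATE_THREAD"}  # short-circuit: browser never opens
    conn.execute(
        "INSERT INTO posts VALUES (?, ?, ?, 'active', ?)",
        (platform, thread_url, content,
         datetime.now(timezone.utc).isoformat()))
    conn.commit()
    return {"status": "active"}
```

The pre-insert SELECT is what lets the pipeline refuse the write before any browser opens; the UNIQUE constraint is the backstop at the DB layer.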

A lock, step by step

Concurrency in this codebase is plain POSIX. Two-level locking: one lock for the shell pipeline, one lock for the actual CDP connection. Both are just files. Neither needs a coordinator.

How skill/lock.sh acquires a platform lock

1

Try to mkdir /tmp/social-autoposter-<platform>.lock

mkdir is atomic on POSIX. First caller wins, every other caller gets EEXIST and falls into the wait loop.

2

Examine the holding pid file

If the pid is dead (kill -0 returns nonzero), the lock is stale. rm -rf the directory and retry.

3

Check the lock age

If the directory is older than 3 hours (10800s), reap it regardless of pid state. This catches Mac-specific zombie cases.

4

Poll every 10 seconds, up to the timeout

Default timeout is 3600s. If exceeded, print 'Previous run still active' and exit 0. Do not stack runs.

5

Write $$ to <lockdir>/pid

The first thing the winner does. Now future waiters can tell whether the holder is alive.

6

Push the lockdir onto _SA_LOCK_DIRS

Global bash array of held locks. A single trap on EXIT|INT|TERM|HUP releases every lock in reverse order.

The full lock routine, in bash

No flock dependency, no external daemon. The entire concurrency story is 45 lines of portable shell.

skill/lock.sh
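The file itself is not reproduced above, but the six steps can be sketched in portable shell. Everything below is illustrative (variable names, the `demo` platform default); the real skill/lock.sh differs in detail:

```shell
#!/bin/sh
# Sketch of the mkdir-based lock described above; not the actual skill/lock.sh.
PLATFORM="${1:-demo}"
LOCKDIR="/tmp/social-autoposter-${PLATFORM}.lock"
TIMEOUT=3600    # total wait budget (seconds)
MAX_AGE=10800   # force-reap threshold: 3 hours
WAITED=0

while ! mkdir "$LOCKDIR" 2>/dev/null; do       # mkdir is atomic: first caller wins
  PID=$(cat "$LOCKDIR/pid" 2>/dev/null)
  if [ -n "$PID" ] && ! kill -0 "$PID" 2>/dev/null; then
    rm -rf "$LOCKDIR" && continue              # holder is dead: stale lock, reap
  fi
  MTIME=$(stat -f %m "$LOCKDIR" 2>/dev/null || stat -c %Y "$LOCKDIR" 2>/dev/null)
  if [ -n "$MTIME" ] && [ $(($(date +%s) - MTIME)) -ge "$MAX_AGE" ]; then
    rm -rf "$LOCKDIR" && continue              # zombie case: reap regardless of pid
  fi
  if [ "$WAITED" -ge "$TIMEOUT" ]; then
    echo "Previous run still active"           # do not stack runs
    exit 0
  fi
  sleep 10
  WAITED=$((WAITED + 10))
done

echo "$$" > "$LOCKDIR/pid"                     # winner records its pid first
trap 'rm -rf "$LOCKDIR"' EXIT INT TERM HUP     # release on any exit path
```

The stat fallback covers both macOS (`stat -f %m`) and Linux (`stat -c %Y`); the exit 0 on timeout is deliberate, so launchd does not treat a still-busy platform as a failure.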

What must be true for a comment to land

  • Chromium profile at ~/.claude/browser-profiles/<platform>/ exists and is logged in
  • ~/social-autoposter/.env has a valid DATABASE_URL pointing at your Neon instance
  • The launchd plist for this platform is loaded (~/Library/LaunchAgents)
  • Platform shell lock /tmp/social-autoposter-<platform>.lock is free or stale
  • CDP agent lock ~/.claude/<platform>-agent-lock.json is free or expired (300s)
  • A Chromium process with --remote-debugging-port is running on the platform URL
  • The candidate thread has no existing row in posts with the same (platform, thread_url)
  • --strict-mcp-config accepts the per-platform MCP JSON

The two-phase Twitter cycle

Every 1800 seconds. Phase A scans and enriches. Sleep 300s. Phase B re-scores with the latest engagement data and posts. Two separate claude child processes, same strict MCP config, same lock namespace. A split like this prevents one prompt from both finding and acting on a candidate within the same token context.

skill/run-twitter-cycle.sh

Real numbers from the codebase

Each of these is a constant you can grep for in the source tree. None of them come from marketing.

  • 37 launchd plist files under ~/social-autoposter/launchd/
  • Python helpers in scripts/ handling posts, engagement, analytics
  • Tables in schema-postgres.sql (posts, threads, replies, dms, ...)
  • 4 primary platforms driven by browser profile (reddit, twitter, linkedin, moltbook)
  • 300-second expiry on the per-platform CDP lock
  • 1800 seconds between twitter-cycle fires (30 min)
  • 10800 seconds before a stuck lock dir is force-reaped (3 h)
  • 0 platform API tokens required for reddit, twitter, linkedin

The full job list

Every label below resolves to a plist file under ~/social-autoposter/launchd/. All 37 are independent. Pause one with launchctl unload; the rest keep running.

com.m13v.social-twitter-cycle (1800s)
com.m13v.social-engage (21600s)
com.m13v.social-stats (21600s)
com.m13v.social-reddit-threads (6h)
com.m13v.social-reddit-search
com.m13v.social-linkedin
com.m13v.social-engage-linkedin
com.m13v.social-engage-twitter
com.m13v.social-moltbook
com.m13v.social-github
com.m13v.social-github-engage
com.m13v.social-octolens-reddit
com.m13v.social-octolens-twitter
com.m13v.social-octolens-linkedin
com.m13v.social-dm-outreach-twitter
com.m13v.social-dm-outreach-reddit
com.m13v.social-dm-outreach-linkedin
com.m13v.social-dm-replies-twitter
com.m13v.social-dm-replies-reddit
com.m13v.social-dm-replies-linkedin
com.m13v.social-scan-reddit-replies
com.m13v.social-scan-moltbook-replies
com.m13v.social-audit-twitter
com.m13v.social-audit-reddit
com.m13v.social-audit-linkedin
com.m13v.social-audit-moltbook
com.m13v.social-stats-reddit
com.m13v.social-stats-twitter
com.m13v.social-stats-linkedin
com.m13v.social-stats-moltbook
com.m13v.social-link-edit-reddit
com.m13v.social-link-edit-linkedin
com.m13v.social-link-edit-github
com.m13v.social-link-edit-moltbook
com.m13v.social-daily-report
com.m13v.social-serp-seo
com.m13v.social-gsc-seo

why the cadence is coarse

4 reddit-threads fires per day (00:10, 06:10, 12:10, 18:10)

Twitter runs every 30 minutes because the feed decays in minutes. Reddit runs every 6 hours because a thread's first-day vote signal is what decides whether a comment deserves a link at all. Engagement and stats roll up every 6 hours, audits on their own staggered schedule. All of this is visible in the plist XML; none of it lives in an external scheduler UI.
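The cadence lives in the plist XML itself. As an illustration, a StartCalendarInterval job in the shape described above (the label comes from the job list; the exact keys and paths in the real files may differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.m13v.social-reddit-threads</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>-c</string>
    <string>$HOME/social-autoposter/skill/run-reddit.sh</string>
  </array>
  <!-- Four exact wall-clock fires per day, as described above -->
  <key>StartCalendarInterval</key>
  <array>
    <dict><key>Hour</key><integer>0</integer><key>Minute</key><integer>10</integer></dict>
    <dict><key>Hour</key><integer>6</integer><key>Minute</key><integer>10</integer></dict>
    <dict><key>Hour</key><integer>12</integer><key>Minute</key><integer>10</integer></dict>
    <dict><key>Hour</key><integer>18</integer><key>Minute</key><integer>10</integer></dict>
  </array>
</dict>
</plist>
```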

launchd fires at 00:10 → 06:10 → 12:10 → 18:10

One Reddit pass, as it appears in the log

Abbreviated output from a real run. Two threads considered, one posted, one rejected at log_post.py because a prior run had already written the row.

skill/run-reddit.sh, one iteration

API-based schedulers vs S4L's browser-profile approach

Almost every tool on the first page of Google for 'auto social media posting' is a cloud app that holds your platform OAuth tokens. S4L is the shape-opposite design: the tokens do not exist, the scheduler lives on your machine, the writes look like your own real browser, and the scheduler state survives API deprecations because it does not depend on any API.

Feature | Typical cloud scheduler | S4L
How the post reaches the platform | Platform OAuth API (needs client_id, scopes, review) | CDP attach to a real logged-in Chromium; posts via the live DOM
Session management | Refresh tokens, re-auth flows, scope upgrades | A userDataDir on disk at ~/.claude/browser-profiles/{platform}/
Where the scheduler lives | Cloud control plane you do not own | launchd on your Mac: 37 plists, cadence per job set locally
Concurrency control | Queue in the vendor cloud | mkdir /tmp/social-autoposter-<platform>.lock, stale-lock sweep
Tool isolation per platform | Single backend holds all tokens | claude -p --strict-mcp-config, one platform's MCP per child
Duplicate-post guarantee | Scheduler remembers, sort of | UNIQUE(platform, thread_url) in Neon Postgres; returns DUPLICATE_THREAD before the DOM is touched
What happens if a run hangs | Vendor restarts, you wait | Lock dir older than 3 hours is reaped by the next launchd fire
Access revocation when a platform tightens its API | You lose automation until a new token type ships | Still logged in via the browser, still posting; you only react when the DOM changes

Auto social media posting: common questions

How does S4L do auto social media posting without platform APIs?

Each platform has a persistent Chromium userDataDir at ~/.claude/browser-profiles/{platform}/ that you log into once. @playwright/mcp spawns that exact profile on every run, so the cookies, fingerprint, and 2FA state are all already there. The Python helpers in scripts/ attach to the running browser via Chrome DevTools Protocol (CDP) and drive the real DOM: type into the comment box, click submit, read back the URL. Nothing in the loop touches the Twitter API, Reddit API, or LinkedIn OAuth.

What prevents two runs from posting at the same time into the same profile?

Two layers. First, skill/lock.sh acquires /tmp/social-autoposter-<platform>.lock with a plain mkdir (atomic on POSIX), writes the holding pid, and waits up to the timeout (default 3600 seconds) polling every 10 seconds. Second, reddit_browser.py, twitter_browser.py, and linkedin_browser.py each grab their own JSON lock file at ~/.claude/<platform>-agent-lock.json with a 300-second expiry, so even a background engagement script cannot collide with the scheduled posting cycle on the same Chromium.
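The second layer, the JSON agent lock with a 300-second lease, can be sketched as follows. The field names (pid, acquired_at) are assumptions for illustration, not the real file format:

```python
import json
import os
import time
from typing import Optional

LEASE_SECONDS = 300  # the 300-second expiry described above

def try_acquire(path: str, now: Optional[float] = None) -> bool:
    """Take the per-platform agent lock unless a live, unexpired lease exists."""
    now = time.time() if now is None else now
    try:
        with open(path) as f:
            held = json.load(f)
        if now - held["acquired_at"] < LEASE_SECONDS:
            return False  # someone else holds a live lease: skip this fire
    except (FileNotFoundError, ValueError, KeyError):
        pass              # missing or corrupt lock file counts as free
    with open(path, "w") as f:
        json.dump({"pid": os.getpid(), "acquired_at": now}, f)
    return True
```

Because the lease expires on its own, a crashed holder never needs manual cleanup; the next scheduled fire simply takes over after 300 seconds.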

Where are the posts actually tracked?

Neon Postgres, one posts table across all platforms. Every insert goes through scripts/log_post.py, which validates the URL starts with http and then runs a SELECT against (platform, thread_url) before the insert. If a row exists, the helper prints {"error": "DUPLICATE_THREAD"} and exits without calling the browser. The constraint is also enforced by the schema at the DB layer, so a crashed run cannot leave a half-written duplicate row.

What does --strict-mcp-config actually do?

The launchd shell scripts invoke claude -p with --strict-mcp-config --mcp-config $HOME/.claude/browser-agent-configs/<platform>-agent-mcp.json. strict-mcp-config tells the child Claude agent to ignore every MCP tool not listed in that one JSON file. The JSON points at @playwright/mcp pinned to one userDataDir. Net result: a Twitter run cannot accidentally call Reddit tools, cannot reach the API MCPs, cannot escape to the shell. The agent is locked to the one browser profile it was spawned for.
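Reconstructed from the flags described above, such a config might look like this. The server name, npx invocation, and the expanded home path are assumptions; this is not the actual reddit-agent-mcp.json:

```json
{
  "mcpServers": {
    "reddit-agent": {
      "command": "npx",
      "args": [
        "@playwright/mcp",
        "--user-data-dir=/Users/you/.claude/browser-profiles/reddit"
      ]
    }
  }
}
```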

How many scheduled jobs does the full pipeline run?

37 launchd plist files under ~/social-autoposter/launchd/. They split into five groups: posting cycles (twitter-cycle every 30min, reddit-threads 4x/day, linkedin), engagement passes (engage-twitter, engage-linkedin, engage-reddit every 6h), DM flows (outreach + replies per platform), audits and stats (stats-<platform> every 6h), and a link-edit family that retroactively appends a URL only after the community has upvoted the original comment.

What if the platform changes its UI and a selector breaks?

That is the real cost of driving the browser instead of an API: a UI change can break a run. The mitigation in this codebase is scripts/reddit_browser.py's CDP-attach model, which prefers old.reddit.com URLs where the markup is nearly frozen, plus a per-run terminal log and a status='active' check before every write. When the DOM shifts, the bad run logs the failure to Neon, the lock is released, and the next launchd fire tries again after the selectors are patched. The row is not written unless the comment went through.

Does S4L need API tokens for anything?

Only for opt-in side channels. MOLTBOOK_API_KEY is read from .env if you use the Moltbook platform (which does have an API). RESEND_API_KEY and NOTIFICATION_EMAIL are used for DM-escalation email. DATABASE_URL is your Neon connection string. The core social posting surface area (Reddit, Twitter, LinkedIn) needs zero platform credentials beyond your normal human login stored in the browser profile.

Can I run this on Linux, or is macOS mandatory?

The launchd plists are Mac-only, but the scripts are plain Python 3.9+. The README points at setup/SKILL.md Step 7 for equivalent cron snippets. What you lose on Linux is launchd's StartCalendarInterval (exact wall-clock fires) and StartInterval-based auto-restart. The lock directory, the browser profile, and the strict-MCP-config wrapper all work unchanged because they rely on mkdir, files on disk, and the Claude CLI.
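As an illustration (the real snippets live in setup/SKILL.md Step 7), the four daily reddit-threads fires map to a crontab line like:

```
10 0,6,12,18 * * * $HOME/social-autoposter/skill/run-reddit.sh
```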

How do I stop all auto-posting at once?

One command: launchctl unload against each plist, or use the skill/dashboard Pause All button, which does the same thing. Because every job is an independent launchd label (com.m13v.social-*), there is no orchestrator to kill. A job does not fire again until you load its plist back. Existing locks age out in 3 hours and the next run starts clean.

Why not just use Buffer or Hootsuite?

Those ship the same pattern everyone ships: a hosted control plane that holds your OAuth tokens and schedules posts against platform APIs. S4L is for the opposite case: you want the scheduling and auditing discipline of a proper automation tool, but you want the writes to look like real human activity originating from your real browser, and you do not want a third party holding tokens for your accounts. The whole runtime lives in ~/social-autoposter/ and ~/.claude/, and the data lives in your Neon DB.

Want the posts to come from your real accounts, not an API app?

Point S4L at your Neon DB, log into each platform once in the per-platform Chromium profile, load the launchd plists. The scheduler posts on its own after that. Every decision writes a row to posts; every row can be replayed or audited with one SELECT.

See pricing