The open source social autoposter that drives your real browser, not platform APIs (April 2026)

Every other open source autoposter on the SERP this month publishes through platform OAuth or APIs. S4L is the one that runs as you, on your laptop, with your existing logged-in Chromium. Per-platform Playwright MCP profiles, 40 launchd jobs, a 600 second browser lock with kill -KILL on stuck process groups, and an orphan Chrome sweep before the next pipeline can claim the lock.

Matthew Diakonov
11 min read

From real source on disk:

  • 40 launchd plists, 65 Python helpers, 8 run-*.sh scripts
  • skill/lock.sh: 600 second hold ceiling on browser locks
  • Per-platform Playwright MCP profiles under ~/.claude/browser-profiles/
  • --strict-mcp-config on every child Claude process

The actual SERP, April 2026

Every result on page one publishes through OAuth.

I ran the search for you. Here are the first ten organic results that show up today. Different licenses, different stacks, different self-host stories. They share one architectural decision: they all hold a vendor token and POST to a platform endpoint.

Postiz, Mixpost, dlvr.it, Bit Social, Auto Poster AI, social-networks-auto-poster (PHP), justaman045/Auto-Poster, WordPress Auto Poster, Hootsuite, Buffer, Sprout Social, Publer, Later, SocialBee

Postiz writes to platform APIs. Mixpost writes to platform APIs. dlvr.it writes to platform APIs. Bit Social pushes from a WordPress hook through platform APIs. Auto Poster AI is a paid SaaS over platform APIs. The open source PHP and Python repos in the GitHub results do the same. None of them describe what to do when a browser session pins a profile, because none of them ever open a browser.

600s ceiling

A run that hasn't released the lock in 10 minutes is either stuck on a hung MCP call or has orphaned its Chrome; either way the right move is to kick it out so the next pipeline can proceed.

skill/lock.sh, lines 64-66

The architecture in one diagram

launchd to Claude to Playwright MCP to your real Chrome.

Every cadence in the system is a macOS launchd plist. Every plist runs one shell script. Every shell script acquires a per-platform browser lock, then spawns a child Claude with --strict-mcp-config pointing at exactly one platform's MCP server. Each MCP server holds one persistent Chromium userDataDir.

40 launchd plists -> 8 run scripts -> 3 isolated browser profiles

  • launchd jobs (examples): twitter-cycle.plist, reddit-search.plist, linkedin.plist, engage-twitter.plist, audit-reddit.plist
  • the hub: skill/lock.sh
  • MCP servers: twitter-agent, reddit-agent, linkedin-agent

The hub is the lock script. Without it, two pipelines that both touch twitter-agent collide on the same Chromium userDataDir and one of them dies on a CDP navigation. With it, every platform has at most one live consumer.

The lock script is the part nobody else has.

Here is the branch of skill/lock.sh that runs only when the holder is alive, the lock has been held over 10 minutes, and the lock name is one of twitter-browser, reddit-browser, or linkedin-browser. This is what protects the next pipeline from a wedged predecessor.

skill/lock.sh (lines 67 to 90)
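The block itself didn't survive into this rendering, so here is a hedged reconstruction of what that branch does, pieced together from the description above and the FAQ; the function and variable names are mine, not necessarily the source's.

```shell
#!/usr/bin/env bash
# Sketch of the force-kill branch of skill/lock.sh (names are illustrative).
# Given the PID of a live holder whose lock has aged past the 600 second
# ceiling, kill its whole process tree: shell -> claude -> npx MCP -> Chromium.
force_kill_holder() {
  local pid=$1 pgid
  # Resolve the process group once; "kill -- -PGID" targets the entire group.
  pgid=$(ps -o pgid= -p "$pid" 2>/dev/null | tr -d ' ' || true)
  [ -n "$pgid" ] || return 0                   # holder already gone

  kill -TERM -- "-$pgid" 2>/dev/null || true   # polite pass for the group
  pkill -TERM -P "$pid"  2>/dev/null || true   # direct children that left the group
  kill -TERM "$pid"      2>/dev/null || true
  sleep 2                                      # grace period
  kill -KILL -- "-$pgid" 2>/dev/null || true   # hard pass for survivors
  pkill -KILL -P "$pid"  2>/dev/null || true
  kill -KILL "$pid"      2>/dev/null || true
}
```

The two-phase TERM-then-KILL shape matches the escalation the page describes; the 2 second sleep is the window in which a well-behaved MCP can still release its Chromium cleanly.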

And here is what runs immediately after the new acquirer wins the lock, before it spawns its own MCP. Any Chrome still mapped to the platform's userDataDir is, by definition, an orphan, because the previous owner just got killed. Sweep it.

skill/lock.sh (lines 116 to 122)
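Again the quoted block is missing from this rendering; the sweep it refers to amounts to a single pkill over the profile path, sketched here with an assumed helper name and $platform argument.

```shell
#!/usr/bin/env bash
# Hedged sketch of the orphan-Chrome sweep (helper name is mine). Any Chromium
# whose command line still references this platform's userDataDir belonged to
# the holder that was just killed, so it is safe to remove.
sweep_orphan_chrome() {
  local platform=$1
  # pkill -f matches the full command line; a Playwright-launched Chromium
  # carries --user-data-dir=<...>/browser-profiles/<platform> in its argv.
  pkill -f "user-data-dir=.*browser-profiles/${platform}" 2>/dev/null || true
}
```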

What it looks like when a pipeline gets stuck

The rescue sequence, end to end.

Stuck twitter-browser holder; the next pipeline acquires after 612 seconds. Participants: the next launchd job, lock.sh, the stuck holder (PID 47218), its orphan Chrome, and the fresh MCP.

  1. acquire_lock twitter-browser 3600
  2. mkdir /tmp/social-autoposter-twitter-browser.lock fails
  3. lock_age = 612s, over the 600s ceiling
  4. kill -TERM -$pgid + pkill -P + kill -TERM $pid
  5. sleep 2
  6. kill -KILL -$pgid + pkill -KILL -P + kill -KILL $pid
  7. process group gone
  8. pkill -f "user-data-dir=.*browser-profiles/twitter"
  9. orphan Chrome killed, profile released
  10. lock acquired (3 seconds after force-kill)
  11. spawn claude --strict-mcp-config twitter-agent-mcp.json

What the rescue looks like in the log file.

Each run writes to ~/social-autoposter/skill/logs/. When a run has to rescue a wedged sibling, the lock wait, the force-kill escalation, and the re-acquire are all visible in the log:

skill/logs/twitter-cycle-2026-04-21_143000.log

The numbers that fall out of this design

A small system, on purpose.

40 launchd plists in launchd/
65 Python helpers in scripts/
600s browser-lock ceiling
3 isolated browser profiles

Counts pulled from the install: ls launchd/*.plist | wc -l returns 40, ls scripts/*.py | wc -l returns 65. The ceiling is the literal [ "$lock_age" -gt 600 ] branch in skill/lock.sh. Three isolated profiles map to twitter, reddit, linkedin under ~/.claude/browser-profiles/.

The MCP wiring that keeps platforms apart.

Each platform has a pair of files: an MCP server config that points at one Playwright instance, and a browser config that points at one userDataDir. The twitter pair looks like this. The reddit and linkedin pairs are the same shape with different paths.

browser-agent-configs/{twitter-agent-mcp,twitter-agent}.json
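The JSON bodies aren't reproduced on this page. Based on how the pair is described (an MCP server entry pointing at one Playwright instance, a browser config pointing at one userDataDir), a plausible shape is sketched below; the exact keys, the @playwright/mcp flags, and the /Users/you paths are assumptions to check against your installed copy.

```json
{
  "mcpServers": {
    "twitter-agent": {
      "command": "npx",
      "args": ["@playwright/mcp@latest",
               "--config", "/Users/you/social-autoposter/browser-agent-configs/twitter-agent.json"]
    }
  }
}
```

And the browser config it references, pinning the persistent profile:

```json
{
  "browser": {
    "userDataDir": "/Users/you/.claude/browser-profiles/twitter"
  }
}
```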

And here is how a run script spawns its child Claude. The --strict-mcp-config flag is the part that matters: that child can only see the one MCP file passed to it, so it physically cannot reach into another platform's session.

skill/run-twitter-cycle.sh
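The script body is elided here, so the sketch below is hypothetical: the paths, the prompt text, and the inlined mkdir lock are my assumptions (the real script delegates lock handling to skill/lock.sh's acquire_lock), but the lock-then-spawn shape is the one the page describes.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of skill/run-twitter-cycle.sh (paths and prompt assumed;
# lock logic inlined here for self-containment).
set -u

LOCK_ROOT=${LOCK_ROOT:-/tmp}
lock_dir="$LOCK_ROOT/social-autoposter-twitter-browser.lock"

# Atomic acquire: mkdir either creates the directory or fails, so at most one
# pipeline can hold the twitter-browser lock at a time.
if mkdir "$lock_dir" 2>/dev/null; then
  echo $$ > "$lock_dir/pid"
  trap 'rm -rf "$lock_dir"' EXIT INT TERM HUP   # release on any exit path
else
  echo "twitter-browser lock is held; bailing out" >&2
  exit 1
fi

# Spawn the child Claude scoped to exactly one platform's MCP file.
# --strict-mcp-config means the child sees only this MCP server and cannot
# reach another platform's session.
if command -v claude >/dev/null 2>&1; then
  claude -p "Run the twitter posting cycle" \
    --mcp-config "$HOME/social-autoposter/browser-agent-configs/twitter-agent-mcp.json" \
    --strict-mcp-config
fi
```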

API-based vs browser-driven, side by side

The architectural choice every other tool on the SERP made the other way.

Feature | API-based autoposters | S4L (browser-driven)
Auth model | Platform OAuth (X/LinkedIn/Reddit dev approval) | Your existing browser session, persisted under userDataDir
Account approval | Required (X paid API tier, LinkedIn community manager review, Reddit script app) | Not required; you already log in once per platform
Rate limits you hit | API quotas, 401 on key revoke, 429 on tier exhaustion | The same rate limits a human would hit, no more
Where the runtime lives | Vendor server or self-hosted Docker container | Your laptop, scheduled by macOS launchd
Concurrency safety on a shared identity | Stateless per-request | skill/lock.sh per-platform locks, 600s hold ceiling, orphan Chrome sweep
Browser process isolation per platform | N/A (no browser) | Three persistent profiles in ~/.claude/browser-profiles/{twitter,reddit,linkedin}
Per-subprocess MCP scoping | N/A | --strict-mcp-config; each child Claude sees only one platform's MCP
Lines of code you can read | Closed source or large monorepos | skill/lock.sh = 124 lines, 40 launchd .plist files, 65 Python helpers

Why each piece exists, in one grid.

The lock script, the strict MCP scoping, the per-platform profiles, and the launchd cadence are all answers to a real failure mode that hit a previous version of the code. They are in the source because they had to be.

No OAuth approval, ever

Every API-based autoposter starts with a dev portal application. X has tier pricing and reviews. LinkedIn requires a Community Manager API audit. Reddit script apps still need a scope and a token to keep alive. S4L skips all of it because you already have a session cookie in Chrome from being a normal human on the internet.

Per-platform browser isolation

Each platform has its own Playwright userDataDir, its own MCP server, and its own lock. A stuck Reddit run can never wedge a Twitter post.

--strict-mcp-config

Every child Claude process is started with --strict-mcp-config and a single platform MCP file. It cannot accidentally call into the wrong agent.

10 minute hold ceiling

Browser locks force-kill the holder after 600 seconds. The next pipeline never waits more than 10 minutes for a wedged predecessor.

Orphan Chrome sweep

After acquiring a fresh lock, the script runs pkill -f "user-data-dir=.*browser-profiles/$platform" so a reparented Chrome from a dead MCP cannot pin the profile.

Schedules are wall-clock

Cadence lives in launchd plists with StartInterval (twitter-cycle = 1800s) or StartCalendarInterval. No always-on Node server, no PM2, no cron daemon to babysit.
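As an illustration, a minimal plist for the 30-minute Twitter cycle might look like the sketch below. The Label matches the one shown in the quickstart; the script path and the StandardOutPath/StandardErrorPath keys are my assumptions about a reasonable install layout, not quoted from the repo.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.m13v.social-twitter-cycle</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/you/social-autoposter/skill/run-twitter-cycle.sh</string>
  </array>
  <!-- wall-clock cadence: fire every 1800 seconds, no daemon to babysit -->
  <key>StartInterval</key>
  <integer>1800</integer>
  <key>StandardOutPath</key>
  <string>/Users/you/social-autoposter/skill/logs/twitter-cycle.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/you/social-autoposter/skill/logs/twitter-cycle.err</string>
</dict>
</plist>
```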

The 9 invariants the runtime relies on.

These are the properties skill/lock.sh and the run scripts assume in order to be safe across overlapping launchd jobs. Break any of them and a concurrent run will eventually wedge a profile.

Concurrency invariants

  • Per-platform Playwright MCP profile under ~/.claude/browser-profiles/$platform
  • skill/lock.sh acquires platform-browser locks BEFORE pipeline-specific locks (avoids deadlock)
  • Stale-lock detection by missing PID file, dead PID, or 3 hour age cap
  • 600 second force-kill ceiling for browser locks (10 min)
  • Process group kill: kill -TERM -$pgid then kill -KILL -$pgid after 2s wait
  • pkill -P $pid catches direct children that reparented away
  • After acquire, pkill -f "user-data-dir=.*browser-profiles/$platform" sweeps orphan Chrome
  • Each child Claude is spawned with --strict-mcp-config and ONE platform MCP file
  • Cleanup trap on EXIT INT TERM HUP releases all stacked locks at once
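The stale-lock invariant (missing PID file, dead PID, or 3 hour age cap) can be sketched as a single predicate. This is my reconstruction, not the source: the real skill/lock.sh reads the lock directory's mtime, while this sketch stores an acquire timestamp in a ts file for portability.

```shell
#!/usr/bin/env bash
# Sketch of stale-lock detection (names illustrative). Returns 0 (stale) if
# the PID file is missing, the holder is dead, or the lock is over 3h old.
is_stale_lock() {
  local lock_dir=$1 pid ts now
  [ -f "$lock_dir/pid" ] || return 0            # no PID file -> stale
  pid=$(cat "$lock_dir/pid")
  kill -0 "$pid" 2>/dev/null || return 0        # holder dead -> stale
  ts=$(cat "$lock_dir/ts" 2>/dev/null || echo 0)
  now=$(date +%s)
  if [ $(( now - ts )) -gt $(( 3 * 3600 )) ]; then
    return 0                                    # over the 3 hour age cap
  fi
  return 1                                      # live holder, fresh lock
}
```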

Quickstart

Install in five steps.

From npx init to the first scheduled cycle

1. npx social-autoposter init

bin/cli.js copies scripts/, skill/, browser-agent-configs/, schema-postgres.sql into ~/social-autoposter/, writes a blank .env template, installs psycopg2-binary if missing, and symlinks the skill into ~/.claude/skills/.

2. Drop in your DATABASE_URL

Bring your own Neon Postgres, apply schema-postgres.sql once. Posts, replies, candidates, and stats land in your own DB. No vendor lock-in on the data.

3. Log in once per platform

First run on each platform opens its own Playwright Chromium against ~/.claude/browser-profiles/{twitter,reddit,linkedin}. You log in once, the userDataDir keeps the cookie.

4. Symlink and load the launchd plists

ln -sf ~/social-autoposter/launchd/com.m13v.social-twitter-cycle.plist ~/Library/LaunchAgents/, then launchctl load. Forty plists are generated with your real HOME and PATH so they survive reboots.

5. Watch the dashboard at localhost:3141

Pause All / Resume All button manages every plist in one click. tail -f ~/social-autoposter/skill/logs/*.log shows the lock waits and force-kills as they happen.

And the terminal trace of the install through the first scheduled run:

install + first launchd cycle

Want help wiring this into your own machine?

Walk through the install, the lock primitive, and the launchd cadence with the maintainer. 30 minutes, free.

Book a call

Frequently asked questions

Why call this 'open source social autoposter, April 2026' instead of just 'open source social autoposter'?

Because the SERP for the unqualified phrase is dominated by tools that have been around for years (Postiz, Mixpost, dlvr.it, the politsturm PHP repo, the justaman045 Python desktop app, the wordpress.org Bit Social plugin). They are mature, but they all share one architectural choice: they publish through platform OAuth or write to APIs. As of April 2026 there is still no widely-indexed open source autoposter that drives your real browser through Playwright MCP, scoped per platform, with a battle-tested lock primitive for the case when a session pins a profile. S4L is that one. The 'April 2026' qualifier is honest: this is what's true about the space right now, and you can verify it on a clean SERP.

Why does S4L drive your real browser instead of using platform APIs?

Three reasons. First, no OAuth approval. X (Twitter) ended free API access in 2023 and the cheapest paid tier in April 2026 is still $200/month. LinkedIn's Community Manager API requires a partner application review. Reddit's script app flow is the easiest of the three but still requires a registered token, scopes, and refresh handling. Browser-side, you already have a cookie because you logged in like a human. Second, you stay inside human rate limits, not API quotas, because you are literally typing into the same text boxes a human would. Third, every platform UI change becomes a Playwright snapshot/selector update instead of an API contract you have to wait for the vendor to ship.

What is the 600 second browser-lock ceiling, exactly?

skill/lock.sh tags 'twitter-browser', 'reddit-browser', and 'linkedin-browser' as platform-browser locks via a case statement. The acquire_lock loop checks the lock directory's mtime every 10 seconds. If lock_age > 600 (ten minutes) and the holder is still 'alive', the script reads the PID from $lock_dir/pid, finds the process group via ps -o pgid=, sends kill -TERM to the negated PGID (which kills the whole tree: shell, claude, npx MCP, Chrome), runs pkill -TERM -P on direct children, sleeps 2 seconds, then escalates to kill -KILL on the same set. Then it removes the lock dir. Then, after the next acquire_lock call succeeds, it runs pkill -f "user-data-dir=.*browser-profiles/${platform}" to sweep any orphaned Chromium that reparented to PID 1 when its parent MCP died. The whole rescue takes about 3 seconds.

Why force-kill the process GROUP and not just the PID?

Because the comment in skill/lock.sh says it directly: 'Bare kill -TERM $pid only kills the shell; its Claude/MCP children get reparented to init and keep holding the MCP-hook lock for minutes, blocking the next pipeline.' A run-twitter-cycle.sh launches bash -> claude -p -> npx @playwright/mcp -> chromium. If you only kill the bash, every descendant survives. The fix is to grab the PGID once with ps -o pgid=, then kill -TERM -$pgid (the negative sign tells kill to target the entire process group). pkill -P $pid is a belt-and-suspenders pass over direct children in case any of them set their own pgid.

Why isolate browser sessions per platform with separate Playwright MCP servers?

Two reasons. One, cookie hygiene: a single Chromium profile that signs into Twitter, Reddit, and LinkedIn at once accumulates cross-site state that Twitter's anti-automation sees and dings. Three profiles under ~/.claude/browser-profiles/{twitter,reddit,linkedin} look like three normal users on three normal devices. Two, fault isolation: a wedged Reddit MCP can never block a Twitter post. Each lock is per-profile, each MCP child is spawned with --strict-mcp-config pointing at exactly one of browser-agent-configs/{twitter,reddit,linkedin}-agent-mcp.json, so the children are mechanically incapable of reaching into another platform's session.

Forty launchd plists feels like a lot. What's the breakdown?

Run jobs (twitter-cycle every 1800s, reddit-search every 1800s, reddit-threads on calendar, linkedin, moltbook, github), audit jobs (audit-twitter, audit-reddit, audit-reddit-resurrect, audit-linkedin, audit-moltbook, audit-dm-staleness), engagement jobs (engage.sh, engage-twitter, engage-linkedin, github-engage), DM outreach (dm-outreach-{twitter,reddit,linkedin}), DM reply scanning (dm-replies-{twitter,reddit,linkedin}, scan-{reddit,moltbook}-replies), stats jobs per platform (stats-{twitter,reddit,linkedin,moltbook}), link-edit jobs (link-edit-{twitter,reddit,linkedin,moltbook,github}), octolens (octolens, octolens-{twitter,reddit,linkedin}), SEO jobs (gsc-seo, serp-seo), plus precompute-stats, daily-report, deploy-status. Each gets its own .plist with its own StartInterval or StartCalendarInterval so a slow audit run never delays a 30-minute Twitter cycle.

Does the runtime work outside macOS?

The Python and JS helpers in scripts/ and the lock primitive in skill/lock.sh are POSIX bash and run anywhere with bash, mkdir, ps, pkill, kill. The launchd plists are mac-only by design (the README links to setup/SKILL.md Step 7 for the cron snippets). On Linux you replace launchd with cron or systemd timers and keep the rest verbatim. The Playwright MCP profile dirs work on Linux Chromium too. There's no macOS API in the hot path.

What happens to the database, and why Neon Postgres specifically?

Posts, replies, candidates, batches, projects, and stats all live in your DATABASE_URL. The schema is in schema-postgres.sql and you apply it once. Neon is the recommended host because the free tier is generous and the connection string is just a URL, but anything that speaks Postgres (RDS, Supabase, a docker container) works. Dedup queries (SELECT thread_url FROM posts) and feedback queries (the avg-upvotes ranker that conditions the writer) all assume Postgres syntax. There is no vendor lock-in on the data.

How do I verify the 600 second ceiling and the 40 plists for myself?

Clone the source: git clone https://github.com/m13v/social-autoposter (or npx social-autoposter init for the installed copy). Then: 'ls launchd/*.plist | wc -l' returns 40 today. 'sed -n 67,90p skill/lock.sh' shows the force-kill block exactly as quoted on this page. 'sed -n 116,122p skill/lock.sh' shows the orphan Chrome sweep. 'cat browser-agent-configs/twitter-agent.json' shows the userDataDir pointing at ~/.claude/browser-profiles/twitter. 'grep --strict-mcp-config skill/run-*.sh' shows every run script spawning a child Claude with one platform's MCP. The page is built around facts you can verify in five commands.