Most Claude skills for social posting are one process.
S4L is one skill that spawns one child Claude per platform.
Each child gets a strict MCP allowlist, a single browser profile, and a session UUID that lands in a Postgres cost ledger. The parent skill never touches the browser. This page walks the wiring from SKILL.md down to /tmp lock dirs, with file paths you can grep.
anchor file 1 of 4
~/social-autoposter/SKILL.md is the front door.
The frontmatter declares S4L as a user-invocable skill. Type /social-autoposter in any Claude Code session and this loads. The body of SKILL.md (342 lines) is the reading material, not the engine. The engine is the shell scripts under skill/.
Where the fan-out actually happens
On the left, every entry point that can fire a posting cycle. In the middle, the wrapper that owns session identity and cost logging. On the right, the four MCP allowlists a child Claude can be constrained to. A single child sees exactly one of them.
entry point -> wrapper -> scoped child Claude
anchor file 2 of 4
scripts/run_claude.sh: the only place that calls claude
Every shell script in skill/ that needs an LLM goes through this wrapper. It pre-assigns a UUID, exports it as CLAUDE_SESSION_ID so downstream Python loggers can stamp it on database rows, runs claude --session-id "$SESSION_ID" with whatever flags the caller passed, then fires the cost logger best-effort on exit. The wrapper never swallows the wrapped exit code, even if logging fails.
The wrapper is the seam. Every cost row in the claude_sessions table came through it. Every transcript at ~/.claude/projects/<encoded-cwd>/<uuid>.jsonl was written because of it. If you want to add a new pipeline, you write a new shell script in skill/ that calls this wrapper, and the cost ledger and session identity wire themselves automatically.
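A minimal sketch of that contract, not the real run_claude.sh source: run_scoped and log_cost are illustrative names, and the stub logger stands in for scripts/log_claude_session.py.

```shell
#!/usr/bin/env bash
set -u

log_cost() {
  # Stand-in for scripts/log_claude_session.py; the real logger is best-effort.
  echo "cost row for session $1" >&2
}

run_scoped() {
  # Pre-assign the UUID so every downstream logger stamps the same identity.
  SESSION_ID="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
  export CLAUDE_SESSION_ID="$SESSION_ID"

  # The real wrapper runs: claude --session-id "$SESSION_ID" "$@"
  "$@"
  local rc=$?

  # A logging failure must never mask the child's exit code.
  log_cost "$SESSION_ID" || true
  return "$rc"
}
```

The load-bearing details are the two ends: the UUID is minted before the child starts, and the child's exit code is captured before the logger runs.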
MCP servers a single child Claude can see
A reddit-search child has reddit-agent and nothing else. A linkedin child has linkedin-agent and nothing else. The flag that enforces it is --strict-mcp-config, which tells Claude Code to ignore the user's globally configured MCP servers and use only the JSON file passed via --mcp-config. There is no path by which a Reddit drafter Claude could, accidentally or otherwise, drive Twitter.
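The flag pairing looks like this; spawn_cmd is a made-up helper that prints the command instead of executing it, so the shape is visible without a claude binary on the machine.

```shell
# Illustrative: shows the --strict-mcp-config / --mcp-config pairing the
# platform scripts use. spawn_cmd is not a real script name.
spawn_cmd() {
  local mcp_config="$1" prompt="$2"
  echo claude -p "$prompt" \
    --strict-mcp-config \
    --mcp-config "$mcp_config"
}

# A Reddit child boots with the Reddit allowlist and nothing else:
spawn_cmd ~/.claude/browser-agent-configs/reddit-agent-mcp.json "do one cycle"
```

Without --strict-mcp-config, the child would inherit every MCP server configured globally on the machine; with it, the one file on the command line is the whole tool surface.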
anchor file 3 of 4
~/.claude/browser-agent-configs/reddit-agent-mcp.json
Twelve lines. One MCP server. The Playwright MCP is launched with a sub-config that pins it to the reddit-agent persistent browser profile under ~/.claude/browser-profiles/reddit. That is the only tool the child Claude has access to in a Reddit run.
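The page doesn't reproduce the file, but a twelve-line Playwright-MCP allowlist plausibly looks like this; the server command, args, and sub-config filename are assumptions, not copied from the repo, and the /Users/you path is a placeholder.

```json
{
  "mcpServers": {
    "reddit-agent": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--config",
        "/Users/you/.claude/browser-agent-configs/reddit-agent.json"
      ]
    }
  }
}
```

The sub-config referenced in args is where the persistent profile dir (~/.claude/browser-profiles/reddit) would be pinned.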
The same shape exists for linkedin-agent-mcp.json and twitter-agent-mcp.json. The two outliers: no-agents-mcp.json (used by skill/stats.sh step 4 and by the link-edit pipelines for GitHub and Moltbook, which talk to APIs and need no browser at all), and all-agents-mcp.json (used by skill/octolens.sh and the cross-platform DM-replies pipeline, which legitimately span every network).
anchor file 4 of 4
What a single platform script actually runs
skill/run-linkedin.sh acquires the LinkedIn browser lock, then shells the wrapper with the LinkedIn MCP and an inline prompt. The prompt itself just says "you are the social autoposter, read SKILL.md, read config.json, do one cycle." All the actual policy lives in those two files. The shell script is a thin host.
One cycle, traced end to end
What launchd, the lock layer, the wrapper, the scoped child, and the cost logger actually say to each other when a cron tick fires a Reddit posting cycle.
cron fires run-reddit-search.sh
Why the lock layer is part of the contract
Several pipelines share the same browser profile. The Reddit comment pipeline, the Reddit thread pipeline, the Reddit DM outreach, the Reddit DM reply scanner, the Reddit stats pipeline, the Reddit audit, and the Reddit link-editor all drive the same persistent Chrome under ~/.claude/browser-profiles/reddit. Without serialization, two cron-fired pipelines would collide on the profile mid-CDP-call; the lock layer instead turns the collision into a clean "Reddit browser locked by session <uuid>" abort for the late arriver. The lock layer is what lets launchd fire any of these jobs without coordinating cadences by hand.
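An illustrative reconstruction of the acquire side. The mkdir trick, the pid-alive check, the 3-hour ceiling, and the release trap come from the text; the function name, pid-file layout, and sleep interval are guesses.

```shell
# Sketch only -- not the real skill/lock.sh.
acquire_lock() {
  local name="$1"
  local dir="/tmp/social-autoposter-${name}.lock"
  while ! mkdir "$dir" 2>/dev/null; do
    local holder
    holder="$(cat "$dir/pid" 2>/dev/null || echo 0)"
    if ! kill -0 "$holder" 2>/dev/null; then
      rm -rf "$dir"   # holder died without releasing: reap the stale lock
    elif [ -n "$(find "$dir" -maxdepth 0 -mmin +180 2>/dev/null)" ]; then
      rm -rf "$dir"   # 3-hour stale ceiling, the watchdog's backstop
    else
      sleep 5         # live holder: wait for its trap to release
    fi
  done
  echo "$$" > "$dir/pid"
  # Release on every exit path, clean or killed.
  trap "rm -rf '$dir'" EXIT INT TERM HUP
}
```

mkdir is atomic, so two cron-fired pipelines racing on the same name cannot both succeed; one wins, the other loops until the trap (or the reaper branches) frees the dir.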
What the parent skill is actually doing
The parent /social-autoposter session does five things and then hands off. None of these steps require a browser, and none of them spend Claude tokens drafting post text.
Read SKILL.md
Parent Claude loads the workflow, content rules, anti-AI-detection checklist, and database schema. 342 lines of policy.
Read config.json
Pulls accounts, projects, subreddits, and content_angle. Everything per-user lives here, never hardcoded in scripts.
Pick a target
Calls scripts/pick_thread_target.py or scripts/pick_project.py for weighted, per-floor-day project rotation. Returns one platform, one project.
Hand off to skill/run-<platform>.sh
Shell script acquires the browser lock, then calls scripts/run_claude.sh with the right --mcp-config. Parent never touches the browser itself.
Wait, exit
Child Claude does the actual draft + post + DB log. Parent skill returns when the shell exits. The session UUID and cost row land asynchronously via log_claude_session.py.
Properties enforced by this skill shape, not by reviewer discipline
- A Reddit drafter cannot post on Twitter; the tool literally is not in its MCP surface.
- A LinkedIn run cannot collide with another LinkedIn run; the lock dir prevents two pipelines from holding the profile at once.
- Every child spawn has an addressable session UUID, recoverable from the transcript on disk.
- Every spawn lands a row in claude_sessions with model-aware Opus / Sonnet / Haiku token cost.
- If a holder hangs, watchdog_hung_runs.py SIGTERMs it and the bash trap clears the lock; if it dies without releasing, the next acquirer's pid-alive check reaps the stale dir.
- Adding a new platform is a new shell script plus a new MCP JSON; no edits to the wrapper or the parent skill.
Every script that goes through scripts/run_claude.sh
Each chip is a distinct script tag the wrapper logs into the claude_sessions table, so cost is itemised per cron-fired job. Same wrapper, same UUID lifecycle, different MCP scope.
Confirm the architecture in four commands
None of this requires running the pipeline. The frontmatter, the spawn count, the MCP config list, and the live lock dirs are all on disk.
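A plausible set of commands, assuming the paths quoted on this page; the grep targets are guesses at the literal strings, so adjust to what's actually in the repo.

```shell
# 1. The skill is user-invocable: read the frontmatter.
head -20 ~/social-autoposter/SKILL.md 2>/dev/null || true
# 2. Every spawn goes through the wrapper: count the call sites.
grep -rln 'run_claude.sh' ~/social-autoposter/skill/ 2>/dev/null | wc -l
# 3. The five MCP allowlists exist as files.
ls ~/.claude/browser-agent-configs/*-mcp.json 2>/dev/null || true
# 4. Live lock dirs, if any pipeline is running right now.
ls -d /tmp/social-autoposter-*.lock 2>/dev/null || true
```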
Want this fan-out shape on your own social pipeline?
Walk through the wrapper, the MCP allowlists, and the lock layer with the person who built it. 30 minutes.
Frequently asked questions
What does it actually mean for S4L to be a Claude skill?
It means ~/social-autoposter/SKILL.md has user_invocable: true in its frontmatter and a description that maps to phrases like 'post to social' or 'find threads to comment on.' When you type /social-autoposter in a Claude Code session, the parent skill loads. That parent does not draft any post text itself. It reads config.json, picks a platform, and shells out to skill/run-reddit-search.sh, skill/run-linkedin.sh, skill/run-twitter-cycle.sh, or skill/run-moltbook.sh. Each of those scripts spawns its own claude -p subprocess with a tightly scoped MCP config. So the user-facing thing is one skill, but every actual post gets drafted by a fresh child Claude session that only knows how to drive one browser profile.
Why spawn child Claude sessions per platform instead of doing it all in one process?
Three reasons, all enforced by code. First, MCP allowlist sandboxing: a Reddit run loads only reddit-agent-mcp.json, so the child Claude has no Twitter or LinkedIn tools available. It cannot accidentally post on the wrong network because the tool simply does not exist in its session. Second, browser-profile locking: skill/lock.sh holds a /tmp lock dir per platform with a pid-alive check and a 3-hour stale ceiling, so two pipelines that share the reddit-agent profile serialize cleanly. Third, cost accounting: each spawn gets one UUID via run_claude.sh, and log_claude_session.py reads that session's transcript at ~/.claude/projects/<encoded-cwd>/<uuid>.jsonl to compute Opus or Sonnet or Haiku cost and write a row to the claude_sessions Postgres table. Bundling everything in one process would lose all three of those properties.
How is the child Claude prevented from doing something the parent skill did not authorise?
Through --strict-mcp-config. Every spawn from skill/*.sh passes both --strict-mcp-config and --mcp-config <path-to-platform-mcp.json>. With strict-mcp-config, Claude Code ignores any other MCP servers configured globally on the machine and uses only the file passed on the command line. The Reddit pipeline therefore boots a child whose entire MCP surface is one Playwright server pinned to ~/.claude/browser-agent-configs/reddit-agent.json. There is no Twitter MCP, no LinkedIn MCP, no second browser, no global tool sprawl. Same shape for the LinkedIn run-linkedin.sh, the Twitter run-twitter-cycle.sh, and the link-edit and DM scripts. Five distinct MCP configs total: reddit-agent-mcp.json, twitter-agent-mcp.json, linkedin-agent-mcp.json, no-agents-mcp.json (used for stats and link-edit on github/moltbook where no browser is needed), and all-agents-mcp.json (used for octolens and DM-replies where the run genuinely spans all three networks).
What is in the per-session cost log and why does it matter?
scripts/log_claude_session.py opens the JSONL transcript Claude Code wrote for the session UUID, walks every assistant turn, sums input_tokens, output_tokens, cache_read_input_tokens, and the two cache-write buckets (ephemeral_5m_input_tokens and ephemeral_1h_input_tokens), then multiplies by a per-model price table at Opus, Sonnet, or Haiku rates. It inserts one row into claude_sessions keyed on session_id (ON CONFLICT DO NOTHING). That gives the operator an itemised ledger: cost per script tag (run-linkedin, run-reddit-threads, engage-twitter-phaseB, dm-outreach-linkedin, etc.) for every cron firing. Without the wrapper there would be no way to know whether the LinkedIn pipeline or the Reddit pipeline burned more tokens last week, and no way to decide which script to move from Opus to Sonnet.
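The multiplication at the end is plain arithmetic; a sketch with placeholder per-million-token rates (the real price table lives in log_claude_session.py, and these numbers are not Anthropic's actual prices).

```shell
# session_cost is an illustrative name; rates are $/Mtok placeholders.
session_cost() {
  # $1 input tokens, $2 output tokens, $3 input rate, $4 output rate
  awk -v i="$1" -v o="$2" -v ri="$3" -v ro="$4" \
    'BEGIN { printf "%.4f\n", (i * ri + o * ro) / 1000000 }'
}

session_cost 120000 8000 3 15   # prints 0.4800
```

In the real logger the input side also splits across cache-read and the two cache-write buckets, each with its own rate, but the shape of the sum is the same.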
Where exactly do the locks live and what happens if a run dies mid-post?
Lock directories live at /tmp/social-autoposter-<name>.lock with a pid file inside. Names include reddit-browser, twitter-browser, linkedin-browser, plus pipeline-specific names. acquire_lock in skill/lock.sh tries mkdir; if the dir exists, it checks whether the holder pid is still alive (kill -0). Dead holder, lock removed. Lock older than 3 hours, also removed (a backstop to the watchdog at scripts/watchdog_hung_runs.py, which SIGTERMs hung holders well before that). On acquire, browser locks additionally pkill any Chrome whose user-data-dir points at the matching profile, which sweeps orphan Chromes left behind when a previous playwright-mcp parent died and the child got reparented to PID 1. A bash trap on EXIT INT TERM HUP releases the lock cleanly when the run finishes or is killed.
Is the parent skill itself written as a Claude prompt, or is it shell?
Both. The parent SKILL.md is a markdown file that Claude reads when /social-autoposter fires. It documents the workflow, content rules, anti-AI-detection checklist, and database schema, but the actual orchestration (cron schedules, lock acquisition, project rotation, rate-limit probes) lives in the skill/*.sh shell scripts and scripts/*.py helpers. The parent Claude session reads the SKILL file, reads config.json, calls a shell script, the shell script calls run_claude.sh, run_claude.sh spawns a fresh child Claude with a different MCP config, and that child does the actual browser work. The reason it splits like that is that the parent Claude's MCP surface (when invoked interactively) is whatever the user has configured globally, while every child needs a known minimal allowlist for that one platform.
How do I run any of this without using Claude Code interactively?
Cron, via launchd plists. com.m13v.social-reddit-threads.plist fires skill/run-reddit-threads.sh four times a day. Other launchd jobs fire skill/run-linkedin.sh, skill/run-twitter-cycle.sh, skill/run-moltbook.sh, skill/engage*.sh, and skill/audit*.sh on their own cadences. Each script acquires its locks, then calls scripts/run_claude.sh with the right MCP config and a -p prompt that tells the child Claude exactly what to do. The session UUID is logged, the cost row is written, the trap fires, the lock dir is removed. The next cron tick acquires the lock again and does it for the next platform. None of it requires you to be at the keyboard. The only thing /social-autoposter typed manually adds is the ability to read the same SKILL.md interactively and run a single ad-hoc cycle when you want one.
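A launchd job of that shape looks roughly like this; the label matches the one named above, but the hours and the /Users/you path are placeholders (the text says only "four times a day", not which hours).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.m13v.social-reddit-threads</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/you/social-autoposter/skill/run-reddit-threads.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <array>
    <dict><key>Hour</key><integer>9</integer></dict>
    <dict><key>Hour</key><integer>13</integer></dict>
    <dict><key>Hour</key><integer>17</integer></dict>
    <dict><key>Hour</key><integer>21</integer></dict>
  </array>
</dict>
</plist>
```

Each tick just runs the shell script; everything downstream (lock, wrapper, scoped child, cost row) is the same path an interactive /social-autoposter invocation takes.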