s4l.ai / guide / social media marketing automation software

Most social media marketing automation software stops at scheduling. S4L also automates the decision of what to post.

The first SERP page for this keyword is scheduling tools, caption drafters, and analytics dashboards. Every one of them assumes a human already decided the topic. S4L does not. A launchd job ticks every 600 seconds, pulls real Google Search Console queries for each of your domains in parallel, and runs one SQL statement to pick the next keyword.

Matthew Diakonov
11 min read
4.9 from 47 ratings
Topic picked by one SQL query in seo/run_gsc_pipeline.sh
Launchd StartInterval 600 seconds, one lane per product
5-impression floor, pending / in_progress / done / skip state machine
Same database feeds landing page generation and social posts

What classic automation software automates

Scheduled posts · Caption generation · Approval workflow · Analytics dashboard · Inbox unification · Multi-channel publishing · Content calendar · Hashtag suggestions · Competitor tracking · Team roles

What S4L also automates

GSC demand intake · Next-keyword SQL pick · Parallel product lanes · Auto page generation · Post promotion loop · Per-project locks · Forbidden-keyword guard · Status-machine queries · Impression gating · Idempotent ticks

The gap on the first page of Google

Search for "social media marketing automation software" and the first SERP page is a mix of product homepages and listicles. Hootsuite, Sprout Social, SocialPilot, Zapier, Gumloop, Buffer, SocialBee. Every one of them covers the same surface: multi-channel scheduling, AI caption drafting, approval workflows, analytics dashboards, inbox unification.

None of them describe a topic selector. You, the marketer, arrive at the UI already knowing what you want to post. The software accepts that input and fans it out across accounts. The automation ends at the calendar.

That is the opening. The expensive decision in marketing is not when a piece of content goes out, it is which piece of content is worth writing in the first place. S4L is built around that decision. Below is the exact line of code where it gets made.

The anchor fact: one SQL statement picks your next topic

This is the single query that separates S4L from the scheduling category. It lives in seo/run_gsc_pipeline.sh between lines 125 and 139. Impressions must be at least 5. Status must be pending. Order by impressions descending, then by clicks descending. Take the first row. That row is the keyword for the next page.

seo/run_gsc_pipeline.sh

Five predicates. Zero human input. The keyword falls out of the data Google already gives you, not out of a brainstorming meeting.
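The article spells out every predicate of that query, so its shape can be reconstructed. Below is a minimal sketch with SQLite standing in for Postgres; the table and column names come from the article, while the `pick_next_keyword` helper and the claim-the-row UPDATE are my additions to make the sketch self-contained:

```python
import sqlite3

MIN_IMPRESSIONS = 5  # the hard floor described above

def pick_next_keyword(conn):
    """Return the highest-impression pending query, or None if nothing qualifies."""
    row = conn.execute(
        """
        SELECT id, query FROM gsc_queries
        WHERE status = 'pending' AND impressions >= ?
        ORDER BY impressions DESC, clicks DESC
        LIMIT 1
        """,
        (MIN_IMPRESSIONS,),
    ).fetchone()
    if row is None:
        return None
    # Claim the row so a parallel lane cannot pick the same keyword.
    conn.execute("UPDATE gsc_queries SET status = 'in_progress' WHERE id = ?", (row[0],))
    return row[1]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE gsc_queries "
    "(id INTEGER PRIMARY KEY, query TEXT, impressions INT, clicks INT, status TEXT)"
)
conn.executemany(
    "INSERT INTO gsc_queries (query, impressions, clicks, status) VALUES (?, ?, ?, ?)",
    [
        ("social media automation", 40, 3, "pending"),
        ("weird one-off search", 2, 0, "pending"),  # below the floor: never picked
        ("already done keyword", 90, 9, "done"),    # ignored forever
    ],
)
print(pick_next_keyword(conn))  # -> social media automation
```

Run it twice and the second call returns None: the winner is now in_progress, the low-impression row never qualifies, and the done row is invisible to the query.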

600s launchd tick interval
5 minimum impressions to qualify
1800s stale-lock timeout
4 states in the status machine

The launchd plist is the whole scheduler

There is no queue service, no cron daemon, no hosted worker pool. StartInterval 600 in one plist is enough. launchd fires the bash script every 10 minutes. The script does the rest.

launchd/com.m13v.social-gsc-seo.plist
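The plist contents are not reproduced in this article, but given the Label and StartInterval it describes, a minimal launchd job would look like the sketch below. The script path is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.m13v.social-gsc-seo</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/path/to/seo/cron_gsc.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>600</integer>
</dict>
</plist>
```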

One tick, one lane per product

Eligibility is computed fresh from config.json every tick. Any project with landing_pages.repo and landing_pages.gsc_property set is eligible, and each eligible project gets its own background lane. This replaced an older weighted-sampling design that starved every product except the one with the biggest backlog.

seo/cron_gsc.sh
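cron_gsc.sh itself is bash and is not reproduced here, but the eligibility rule it implements can be sketched in a few lines of Python. The top-level `projects` map in the sample config is my assumption; the `landing_pages.repo` and `landing_pages.gsc_property` fields are the ones the article names:

```python
import json

def eligible_products(config: dict) -> list:
    """Products with both landing_pages.repo and landing_pages.gsc_property set."""
    out = []
    for name, project in config.get("projects", {}).items():
        lp = project.get("landing_pages", {})
        if lp.get("repo") and lp.get("gsc_property"):
            out.append(name)
    return out

config = json.loads("""
{
  "projects": {
    "s4l":  {"landing_pages": {"repo": "git@github.com:me/s4l-site.git",
                               "gsc_property": "sc-domain:s4l.ai"}},
    "demo": {"landing_pages": {"repo": "git@github.com:me/demo-site.git"}}
  }
}
""")
print(eligible_products(config))  # demo lacks gsc_property, so only s4l qualifies
```

Because the list is rebuilt from disk every tick, adding a complete landing_pages block to config.json is the entire adoption step.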

One database, many specialist pipelines

The gsc_queries table is the hub. Inputs on the left write into it; specialists on the right read from it. No in-memory queue connects the two halves, which is what makes the loop safe to restart anytime, anywhere.

Inputs → hub → posters

Inputs:  Google Search Console · config.json · fetch_gsc_queries.py · reap_stuck.py
Hub:     gsc_queries
Posters: generate_page.py → project website repo · social posting queue · link-edit-*.sh

What happens inside one 10-minute tick

1. Launchd fires cron_gsc.sh

Every 600 seconds, the com.m13v.social-gsc-seo launchd job wakes up and runs one bash script. No web UI, no worker pool, no queue service. The scheduler is the OS.

2. Stuck rows get reaped

reap_stuck.py resets any row whose status has been in_progress for too long back to pending. This makes the loop idempotent: a crashed lane never leaves a keyword stranded.
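reap_stuck.py is not shown in the article; its described behavior can be sketched like this, with SQLite standing in for Postgres. The `started_at` column and the exact schema are my assumptions:

```python
import sqlite3
import time

STALE_AFTER = 1800  # seconds; matches the stale-lock timeout described later

def reap_stuck(conn, now=None):
    """Reset rows stuck in_progress longer than STALE_AFTER back to pending."""
    now = time.time() if now is None else now
    cur = conn.execute(
        "UPDATE gsc_queries SET status = 'pending' "
        "WHERE status = 'in_progress' AND ? - started_at > ?",
        (now, STALE_AFTER),
    )
    return cur.rowcount  # how many stranded keywords were rescued

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE gsc_queries "
    "(id INTEGER PRIMARY KEY, query TEXT, status TEXT, started_at REAL)"
)
t0 = 1_000_000.0
conn.execute("INSERT INTO gsc_queries (query, status, started_at) "
             "VALUES ('stuck kw', 'in_progress', ?)", (t0,))
conn.execute("INSERT INTO gsc_queries (query, status, started_at) "
             "VALUES ('fresh kw', 'in_progress', ?)", (t0 + 1700,))
print(reap_stuck(conn, now=t0 + 1801))  # only the first row has gone stale
```

The reaped row becomes pending again and is a candidate for the very next tick, which is what makes a crashed lane harmless.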

3. Eligible products are enumerated

Any project in config.json with a landing_pages.repo and a landing_pages.gsc_property is eligible. The list is rebuilt from disk every tick, so adding a product to config.json is the entire adoption step.

4. One lane per product spawns in parallel

For each eligible product, cron_gsc.sh forks a background lane. A 0 to 29 second jitter avoids simultaneous GSC API bursts. Every product gets attention every tick, which stops the largest queue from starving the others.

5. Each lane takes its own lock

run_gsc_pipeline.sh writes .locks/gsc_{product}.lock. If a previous tick's lane is still working, the new lane exits quietly. A 1800 second stale-lock timeout guarantees no product is blocked forever.
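The real lock lives in bash (the script writes `$$` into the lock file), but the take-or-heal logic is easy to sketch in Python. File names and the mtime-based staleness check here are my assumptions about the mechanism:

```python
import os
import tempfile
import time

STALE_AFTER = 1800  # seconds; the stale-lock timeout described above

def try_lock(path, now=None):
    """Take the per-product lock, clearing it first if the holder looks stale."""
    now = time.time() if now is None else now
    if os.path.exists(path):
        if now - os.path.getmtime(path) <= STALE_AFTER:
            return False   # a live lane holds the lock: exit quietly
        os.remove(path)    # stale lock from a crashed lane: self-heal
    with open(path, "w") as f:
        f.write(str(os.getpid()))  # the bash version writes $$ here
    return True

lock = os.path.join(tempfile.mkdtemp(), "gsc_demo.lock")
print(try_lock(lock))  # True: first lane wins
print(try_lock(lock))  # False: second lane sees a live lock and backs off
```

Once the lock file's age crosses the 1800-second line, the next lane treats it as abandoned, removes it, and proceeds, so no product can stay blocked forever.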

6. GSC queries are refreshed into Postgres

fetch_gsc_queries.py pulls the current week's search performance rows from the Search Console API for that product's gsc_property and upserts them into the gsc_queries table with a pending status.
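fetch_gsc_queries.py is not reproduced here, but the described upsert semantics, new keywords arrive pending while existing keywords get refreshed counts without losing their status, can be sketched against SQLite. The conflict policy and schema are my assumptions:

```python
import sqlite3

def upsert_queries(conn, product, rows):
    """Insert this week's GSC rows; refresh counts on known keywords
    without clobbering their status."""
    for query, impressions, clicks in rows:
        conn.execute(
            """
            INSERT INTO gsc_queries (product, query, impressions, clicks, status)
            VALUES (?, ?, ?, ?, 'pending')
            ON CONFLICT(product, query) DO UPDATE
            SET impressions = excluded.impressions, clicks = excluded.clicks
            """,
            (product, query, impressions, clicks),
        )

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE gsc_queries (
    product TEXT, query TEXT, impressions INT, clicks INT, status TEXT,
    UNIQUE(product, query))""")
upsert_queries(conn, "s4l", [("kw one", 12, 1)])
conn.execute("UPDATE gsc_queries SET status = 'done' WHERE query = 'kw one'")
upsert_queries(conn, "s4l", [("kw one", 30, 4)])  # counts refresh, status survives
```

Keeping the status out of the upsert is what stops a weekly refresh from resurrecting keywords the pipeline already finished or skipped.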

7. The next keyword is chosen by one SQL query

ORDER BY impressions DESC, clicks DESC LIMIT 1. Impressions must be at least 5. Status must be pending. This is the topic picker. Nothing asks the operator for input.

8. Forbidden patterns are skipped

db_helpers.check_forbidden runs the chosen query against the product's content-policy blocklist. Matches are marked skip so the same keyword does not come back next tick.
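The check itself is a pattern match against a per-project blocklist. A minimal sketch of what a `check_forbidden`-style helper does; the regex patterns and the exact matching rules are my assumptions:

```python
import re

def check_forbidden(keyword, patterns):
    """Return the first forbidden pattern that matches the keyword, or None."""
    for pat in patterns:
        if re.search(pat, keyword, re.IGNORECASE):
            return pat
    return None

# Per-project blocklist, loaded from config in the real pipeline.
patterns = [r"\btechnique\b", r"\bhow to meditate\b"]

hit = check_forbidden("vipassana technique step by step", patterns)
print(hit)  # the matched pattern; the pipeline saves it in notes and marks the row skip
```

Because the matching pattern is recorded alongside the skip status, you can later audit exactly which rule blocked which keyword with one SELECT.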

9. The keyword is handed to generate_page.py

The unified generator writes a new Next.js page in the product's website repo, runs tsc, commits, pushes, and waits for Vercel to reach READY. On success the row flips to done. On failure it flips back to pending and the next tick retries.

What the log looks like in practice

Logs under skill/logs/cron_gsc-*.log read like a narrow slice of a build server. One line per decision, every line timestamped, no UI involved. If something looks wrong, grep is the debugger.

skill/logs/cron_gsc-2026-04-20_143000.log

The architectural decisions that keep the loop honest

The launchd plist IS the scheduler

StartInterval 600 in com.m13v.social-gsc-seo.plist. There is no Sidekiq, no Celery, no Temporal. When the Mac is awake, the loop ticks. The only external orchestrator is the OS.

5 impressions is a hard floor

Queries with fewer than 5 impressions in the window never get picked. This filters out one-off oddball searches, so pages are only generated for terms Google has already decided to show your domain for.

Parallel lanes, not a priority queue

The earlier design picked one product per tick via weighted sampling and starved every product except the one with the biggest backlog. Now every product gets its own lane every 10 minutes.

Status is a four-state machine

pending, in_progress, done, skip. reap_stuck.py guarantees the machine is unstickable. Every decision the pipeline makes is reconstructible with one SELECT against gsc_queries.
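Reconstructing the transitions from the descriptions in this article, the status machine looks like the table below. The transition set is my reading of the prose, not a dump of the code:

```python
# Allowed transitions for the gsc_queries.status column.
TRANSITIONS = {
    "pending":     {"in_progress"},             # claimed by the selection SQL
    "in_progress": {"done", "pending", "skip"}, # success / retry-on-failure / forbidden
    "done":        set(),                       # terminal: never re-picked
    "skip":        set(),                       # terminal: blocked by policy
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_move("in_progress", "pending"))  # True: what reap_stuck.py does
print(can_move("done", "pending"))         # False: done rows stay done
```

Every edge either moves a keyword toward done or returns it to pending, which is why a crashed lane can never strand a row permanently.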

Per-product locks, 30 minute timeout

seo/.locks/gsc_{product}.lock prevents a slow lane from double-booking itself. If the lock is older than 1800 seconds it is treated as stale and cleared, so a crashed lane heals automatically.

Forbidden-keyword guard

Content policy is per-project. Vipassana forbids technique-instruction pages, for example. db_helpers.check_forbidden runs a pattern match before generation and flips the row to skip so the same query is not re-picked.

By the numbers

600s

between ticks, set by StartInterval in one plist

5

minimum GSC impressions before a query is eligible

1

SQL statement decides what you post next

S4L vs generic social media marketing automation software

Feature | Classic scheduler | S4L
Who decides the next topic | A human fills in a content calendar | ORDER BY impressions DESC, clicks DESC LIMIT 1
Where the topic signal comes from | Trends panel or AI suggestion | Real Google Search Console impressions for your domain
How often the next-post decision recomputes | When the calendar is edited | Every 600 seconds, per product, in parallel
What happens when a keyword is blocked | Item sits in the calendar | Row flips to skip with the matching pattern in notes
What happens when a run crashes | Manual retry | reap_stuck.py resets in_progress to pending on next tick
How you add a new brand | Click through 5 settings screens | Paste a block into config.json with repo + gsc_property
How the engine is orchestrated | Proprietary cloud worker pool | One launchd plist with StartInterval 600
Where the audit log lives | Activity feed inside the SaaS | skill/logs/cron_gsc-*.log plus the gsc_queries row history

Adoption is four files, in this order

Onboarding is not a wizard. It is four artifacts you touch directly. If you can edit a JSON file and run launchctl load, you have the full operator surface.

1. Install and log into Search Console
   Verify the domain you want S4L to learn from. The gsc_property string will be sc-domain:yourdomain.com.

2. Add your project to config.json
   A landing_pages block with repo, base_url, and gsc_property is all the pipeline needs to start a lane for you.

3. Load the launchd plist
   launchctl load com.m13v.social-gsc-seo.plist. The StartInterval of 600 seconds begins immediately.

4. Read skill/logs to watch the loop
   cron_gsc-*.log shows tick-by-tick fanout. Per-lane logs live under seo/logs/gsc_{product}/.

Who should use this, and who should stick with Buffer

If you run one brand and already know exactly what you want to post every day, the traditional social media marketing automation software is more polished than S4L. Buffer, Sprout, and Hootsuite have finished UIs, team roles, and approvals. The demand-driven topic picker would feel like overkill.

If you run three or more domains, cannot afford to brainstorm each one's calendar, and already have them verified in Search Console, S4L's loop is the interesting part. The topic selection is free (it is one SQL query). The orchestration is free (it is one launchd plist). The distribution is free (the same database feeds the social pipelines). What you trade is a UI for a transparent code path.

Want to see the 10-minute tick run live?

Book a 20 minute walkthrough. We run one tick for your domain, read the log together, and show you the row that wins the next slot.

Book a call

Frequently asked questions

How is this different from the social media marketing automation software on the first SERP page?

Hootsuite, Sprout Social, SocialPilot, Zapier listicles, Buffer, SocialBee, and the rest all share the same shape: you fill in a content calendar, the software schedules it, some of them draft captions or pick times. The decision about what to write is still human. S4L automates that decision by treating Google Search Console as the topic queue. Every 600 seconds, a launchd-triggered bash script fans out one lane per project, pulls this week's GSC queries, and runs a single SQL statement to pick the highest-impression pending term with at least 5 impressions. That keyword becomes the next page, and the next page becomes the next piece of social content. You can verify this end to end by reading seo/run_gsc_pipeline.sh lines 125 through 139.

Why 5 impressions as the floor?

Below 5 impressions, the keyword is noise. A single accidental query, a typo, a one-off crawler. Above 5 impressions in the GSC window, Google has already decided your domain is relevant enough to show. S4L only spends generation time on terms that crossed that line. The threshold lives in one place: the WHERE clause of the selection SQL in seo/run_gsc_pipeline.sh. You can change it to 10 or 20 without touching anything else.

What happens if two ticks overlap for the same product?

The second tick takes the lock path in run_gsc_pipeline.sh. seo/.locks/gsc_{product}.lock is written at the top of the script with $$. If another run is holding it, the new one exits with 'Pipeline already running'. If the lock is older than 1800 seconds it is treated as stale and forcibly removed, so a crashed lane from an earlier tick does not leave the product blocked forever. This is why the whole loop is safe to run on a 10 minute cadence without any external orchestration.

Why parallel lanes instead of a single weighted picker?

The earlier design picked one product per tick with weighted-random sampling, which starved every product except the one with the largest backlog. Fazm would eat every slot and s4l would get zero. The parallel-lane design in cron_gsc.sh gives every GSC-configured product its own lane every tick, with 0 to 29 seconds of jitter to spread the GSC API calls. This trades raw throughput for fairness, and fairness is what actually matters when you run a portfolio of products.

What stops the loop from regenerating the same page every 10 minutes?

The status column on gsc_queries. On the first tick a row is pending. The pipeline flips it to in_progress when it picks it, then to done when the generator commits and Vercel reports READY. done rows are ignored by the selection SQL forever. If the generator fails, the row flips back to pending and the next tick retries. If a row gets stuck in in_progress for too long, reap_stuck.py resets it. Every pending-done transition is reconstructible from the row history.

How does the generated content actually get to social channels?

Generation is only the first half. Once a page lands in the product's website repo and Vercel reports READY, the post-publishing pipelines take over: skill/run-reddit-threads.sh, skill/run-twitter-cycle.sh, skill/run-linkedin.sh, skill/run-github.sh. They pick from the same config.json weights, draft copy that references the new landing page, and post to their respective channels with their own per-platform browser agents. When the landing-page URLs later change, skill/link-edit-*.sh batch-updates the links on the live posts so nothing rots. The GSC pipeline and the social pipelines share the same database, which is how the topic signal reaches the posters.

Does this work if I only run one brand?

Yes, and the parallel fanout collapses to a single lane. The value proposition changes though: with one brand, traditional scheduling software is polished enough that the demand-driven topic picker is nice-to-have instead of load-bearing. The design really earns its keep when you run three or more domains and cannot afford to sit down and plan each one's content calendar. One launchd tick does the planning for all of them.

Is S4L a SaaS?

No. It is a set of bash scripts, Python files, and launchd plists that live in a repo on your own machine or server. The state is a Postgres database you host. There is no hosted tenant. The advantage of that architecture is total visibility: the 'next topic' decision is one SQL query you can re-run by hand, and every pipeline decision leaves a log file under skill/logs/. The disadvantage is no drag-and-drop calendar. If you want a UI, you write one on top of the posts and gsc_queries tables.

What if a GSC query is off-brand or something I refuse to write about?

Every project has a forbidden-pattern list in config.json. Before generation, seo/db_helpers.py check_forbidden runs the chosen query against that list. Any match marks the row skip with the offending pattern saved in notes, and the same keyword is never re-picked. The Vipassana project uses this to block technique-instruction pages, for example. You tune it per product, not globally, because forbidden is context-dependent.

How do I verify all of this is real?

Four files tell the whole story. launchd/com.m13v.social-gsc-seo.plist has the StartInterval of 600 seconds. seo/cron_gsc.sh has the eligibility list and the fanout loop. seo/run_gsc_pipeline.sh has the selection SQL at lines 125 through 139 and the lock logic at lines 38 through 50. The generator it calls is seo/generate_page.py. Everything else in S4L is glue around those four files.