s4l.ai / guide

Auto-posting on Instagram, taken seriously, is a 344-line Python file.

Every page that ranks for this question is selling a SaaS scheduler. Underneath, all of them are wrapping the same three calls to graph.instagram.com/v22.0: create a Reels container, poll its status until FINISHED, then publish. S4L runs that pipeline against our own account on a launchd timer, five fires a day, and the whole thing lives in one Python file you can read end to end. This guide walks through exactly what it does.

direct answer (verified 2026-05-08)

How does auto-posting on Instagram work?

Send a POST to graph.instagram.com/v22.0/{IG_USER_ID}/media with media_type=REELS, a public video URL, and a caption. Poll the returned container's status_code until FINISHED. POST the creation_id to /{IG_USER_ID}/media_publish. That is auto-posting on Instagram. Every SaaS scheduler is wrapping exactly that flow. The full Meta documentation lives at developers.facebook.com/docs/instagram-platform/content-publishing.

Matthew Diakonov
11 min read · 4.9 from 34 ratings
  • Instagram Graph API v22.0, three HTTP calls, no browser
  • 344-line Python file: mixer/post_to_ig.py in S4L's repo
  • 5 fires/day on launchd: 09, 12, 15, 18, 21 local time
  • 4 organic + 1 product per 5 posts (sliding-window picker)

the anchor fact

The Instagram side of S4L is one Python file: mixer/post_to_ig.py, 344 lines.

It does the GCS upload, the Graph API calls, the container polling, the publish, and the Postgres write-back. It runs from one shell wrapper, skill/run-instagram-daily.sh (94 lines), which is fired by one launchd plist, com.m13v.social-instagram-daily.plist, at five fixed times of day. Pre-rendered video assets live in mixer/remotion/out/. Every Reel that lands writes a sibling .posted.json so the same file is never published twice. None of this is exotic: it is urllib.request, hmac, and a JSON file. That is the entire stack.

  • graph.instagram.com/v22.0
  • POST /{IG_USER_ID}/media
  • GET /{container_id}?fields=status_code
  • POST /{IG_USER_ID}/media_publish
  • appsecret_proof = HMAC-SHA256(token, app_secret)
  • Public video URL hosted on GCS
  • Container poll deadline: 300s
  • 5 fires/day at 09, 12, 15, 18, 21
  • 4 organic + 1 product per 5 posts

The three calls every Instagram auto-poster makes

The sequence is the same whether you write it yourself or pay a SaaS to write it for you. It is not a stream upload; it is a container-then-publish handshake. The middle step (the poll) is where most first-time implementations get surprised: Reels processing on Meta's side is not synchronous and can take from 10 seconds to over a minute. We deadline at 300 seconds.

caller -> graph.instagram.com -> instagram feed

Participants: your script → Graph API v22 → Meta video worker → IG feed.

  1. your script → Graph API: POST /{IG_USER_ID}/media (REELS, video_url, caption)
  2. Meta's video worker fetches the video from the public URL and renders it; the Graph API returns { id: creation_id }
  3. your script → Graph API: GET /{creation_id}?fields=status_code, in a loop; responses go { status_code: IN_PROGRESS } ... FINISHED
  4. your script → Graph API: POST /{IG_USER_ID}/media_publish (creation_id)
  5. the Reel is published to the feed; the response carries { id: media_id, permalink: ... }

What the code actually looks like

This is the Instagram-talking part of mixer/post_to_ig.py in S4L's repo, lightly trimmed. It is intentionally boring: urllib.request for HTTP, hmac for the proof, a while loop for the poll. The whole file (344 lines) adds the GCS upload, env loading, Postgres write-back, and CLI flags around this core.

# /Users/m13v/social-autoposter/mixer/post_to_ig.py  (excerpt, lines 117-172)
#
# The full Instagram side of the pipeline. Three HTTP calls, one
# polling loop, no browser, no third-party SDK. The total file is
# 344 lines including GCS upload, Postgres write-back, and CLI flags.

# (imports hoisted from the top of the file so the excerpt stands alone)
import hashlib
import hmac
import time

GRAPH_BASE = "graph.instagram.com/v22.0"  # served over https

def appsecret_proof(token: str, secret: str) -> str:
    return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()

def ig_call(method, path, params, token, secret):
    proof = appsecret_proof(token, secret)
    params = {**params, "access_token": token, "appsecret_proof": proof}
    # ... urllib.request, returns parsed JSON

def ig_post_reel(user_id, token, secret, video_url, caption):
    # 1. create container
    c = ig_call("POST", f"/{user_id}/media", {
        "media_type": "REELS",
        "video_url": video_url,
        "caption": caption,
        "share_to_feed": "true",
    }, token, secret)
    cid = c["id"]

    # 2. poll status_code until FINISHED, deadline 300s
    deadline = time.time() + 300
    while time.time() < deadline:
        s = ig_call("GET", f"/{cid}", {"fields": "status_code,status,id"}, token, secret)
        if s.get("status_code") == "FINISHED":
            break
        if s.get("status_code") in ("ERROR", "EXPIRED"):
            raise SystemExit(f"container failed: {s}")
        time.sleep(5)
    else:
        raise SystemExit("timeout waiting for container")

    # 3. publish
    pub = ig_call("POST", f"/{user_id}/media_publish",
                  {"creation_id": cid}, token, secret)
    media_id = pub["id"]
    return ig_call("GET", f"/{media_id}",
                   {"fields": "id,permalink,media_type,timestamp"}, token, secret)
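The ig_call body is trimmed in the excerpt above. A minimal sketch of what it does, assuming the same signature (the real file may differ in details like timeouts and error handling; GRAPH_BASE here carries the https:// scheme explicitly):

```python
import hashlib
import hmac
import json
import urllib.parse
import urllib.request

GRAPH_BASE = "https://graph.instagram.com/v22.0"

def appsecret_proof(token: str, secret: str) -> str:
    # HMAC-SHA256 of the access token, keyed by the app secret.
    return hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest()

def ig_call(method, path, params, token, secret):
    # Attach credentials, then send GET params in the query string and
    # POST params as a urlencoded body. Graph errors come back as JSON
    # on a non-2xx status, which urllib surfaces as HTTPError.
    params = {**params, "access_token": token,
              "appsecret_proof": appsecret_proof(token, secret)}
    encoded = urllib.parse.urlencode(params)
    if method == "GET":
        req = urllib.request.Request(f"{GRAPH_BASE}{path}?{encoded}")
    else:
        req = urllib.request.Request(f"{GRAPH_BASE}{path}",
                                     data=encoded.encode(), method=method)
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode())
```

The proof is deterministic for a given token and secret, which is why it can be computed fresh on every call with no state.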

What goes in, what comes out

One fire of the auto-poster has five inputs on the left, gets funneled through one wrapper, and produces three concrete outputs on the right. The middle box is where every SaaS scheduler also sits, just hidden behind a UI.

inputs -> mixer/post_to_ig.py -> instagram feed

  • Inputs: video file, caption.txt, IG_USER_ID, long-lived token, app secret
  • Through: post_to_ig.py
  • Outputs: Reel on the feed, .posted.json, media_posts row

One fire, end to end

The five steps a single launchd fire performs. Anything that should be a no-op is a no-op (the lock is taken, the queue is empty, dry-run is set). Anything that fails fails loudly and non-zero so launchd's stderr log catches it.

run-instagram-daily.sh, step by step

  1. Acquire file lock

    The shared skill/lock.sh module writes /tmp/social-autoposter.instagram-poster.lock with a 30-second timeout. A second concurrent fire exits cleanly without posting; the picker is read-then-post and a race could pick the same row twice.
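The actual lock lives in the shell module, but the mechanism is worth seeing. A hypothetical Python equivalent of the same shape (flock with a bounded wait, losing fire exits 0 as a clean no-op; this is not the real lock.sh):

```python
import fcntl
import sys
import time

def acquire_lock(path: str, timeout: float = 30.0):
    # flock-based mutual exclusion: try for up to `timeout` seconds,
    # then exit 0 so a losing concurrent fire is a clean no-op.
    fh = open(path, "w")
    deadline = time.monotonic() + timeout
    while True:
        try:
            fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fh  # keep this handle open for the life of the run
        except BlockingIOError:
            if time.monotonic() >= deadline:
                sys.exit(0)
            time.sleep(1)
```

The returned handle must stay open until the run finishes; closing it releases the flock.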

  2. Pick the next video

    scripts/ig_post_type_picker.py reads the last 5 rows of media_posts, applies the 4:1 sliding window, and returns one JSON line: {post_type, video_path, post_number, reason}. If the queue is exhausted (no draft rows of either type) the picker exits 2 and the harness logs and quits cleanly.

  3. Upload the file to public storage

    post_to_ig.py uploads the .mp4 to gs://mk0r-media-temp via the storage.googleapis.com upload endpoint. The bucket has public read so Meta's video worker can fetch the URL during container processing. Auth uses matt@mediar.ai's gcloud refresh token, not user ADC, so the script does not fight whatever project your global ADC is pointing at.
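The upload itself is one POST to the GCS JSON API's simple-media endpoint. A sketch with a hypothetical upload_to_gcs helper; how the real script derives its bearer token from the gcloud refresh token is elided here:

```python
import urllib.parse
import urllib.request

def gcs_upload_url(bucket: str, object_name: str) -> str:
    # Simple-media upload endpoint of the GCS JSON API.
    name = urllib.parse.quote(object_name, safe="")
    return ("https://storage.googleapis.com/upload/storage/v1/b/"
            f"{bucket}/o?uploadType=media&name={name}")

def upload_to_gcs(bucket: str, object_name: str, data: bytes, bearer: str) -> str:
    req = urllib.request.Request(
        gcs_upload_url(bucket, object_name),
        data=data,
        method="POST",
        headers={"Authorization": f"Bearer {bearer}",
                 "Content-Type": "video/mp4"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        resp.read()
    # Public-read bucket, so Meta's worker can fetch this URL later.
    return f"https://storage.googleapis.com/{bucket}/{urllib.parse.quote(object_name)}"
```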

  4. Container, poll, publish

    Three Graph API calls. POST /media to create the container, GET /{creation_id} on a 5-second tick until status_code is FINISHED (300s deadline), POST /media_publish with the creation_id. Every call carries appsecret_proof. On success, GET the published media_id for permalink and timestamp.

  5. Write back

    Sibling .posted.json next to the source .mp4. UPDATE media_posts SET status='posted', posted_at=NOW(), posted_urls=jsonb, post_type=COALESCE(...) WHERE video_path=?. The picker on the next fire reads from this table; the two layers must agree before any video is treated as fresh.

300s container poll deadline

Reels on Meta's side are not a synchronous upload. The container can sit in IN_PROGRESS for 10 to 60 seconds. Most failed first attempts at IG auto-posting are scripts that publish before status_code is FINISHED.

S4L design note

The cadence is one launchd plist

On macOS, the timer is a launchd plist with five StartCalendarInterval entries. On a Linux box you would use a systemd timer or a plain crontab line. There is nothing magical about the times themselves; they are the windows where our IG audience is awake. What matters is that they are spaced and that the plist is the only thing that drives a fire.

<!-- /Users/<you>/Library/LaunchAgents/com.m13v.social-instagram-daily.plist

     The cadence. Five fires per day. Every fire runs run-instagram-daily.sh,
     which acquires a file lock, calls the picker, and posts one Reel. -->

<key>Label</key>
<string>com.m13v.social-instagram-daily</string>

<key>ProgramArguments</key>
<array>
    <string>/bin/bash</string>
    <string>/Users/<you>/social-autoposter/skill/run-instagram-daily.sh</string>
</array>

<key>StartCalendarInterval</key>
<array>
    <dict><key>Hour</key><integer>9</integer></dict>
    <dict><key>Hour</key><integer>12</integer></dict>
    <dict><key>Hour</key><integer>15</integer></dict>
    <dict><key>Hour</key><integer>18</integer></dict>
    <dict><key>Hour</key><integer>21</integer></dict>
</array>

What you get from a SaaS vs. what you get from doing it yourself

Both routes call the same Graph API endpoints. The difference is who owns the operational tax: token refresh, public-URL hosting, error logging, retries, and the picker logic that decides what goes out next.

The same three Graph API calls, two delivery models

You upload the file in a UI, write a caption, click 'Schedule.' Their backend handles GCS-or-equivalent hosting, calls graph.instagram.com on the schedule, refreshes long-lived tokens, retries failures, and shows you a green check or a red X.

  • Calendar UI for scheduling, no code needed
  • They host the public video URL during container processing
  • They store and refresh your IG long-lived token
  • Their picker = the calendar slot you assigned
  • Failure modes are surfaced as their UI's error toast
  • You pay $15-90/mo and you do not own the rules

Five details every SaaS hides from you

None of these are deal-breakers. All of them are the difference between a script that works the first time and a script that silently fails for two days before you notice.

The video URL must be reachable from Meta's network

This is not a placeholder URL the API archives, it is a live fetch by Meta's video worker. Localhost will not work. A signed URL that expires in 60 seconds will not work. Public read on a real bucket is the safe shape.
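A cheap preflight, not in the original file but a suggested guard, is to HEAD the URL yourself before creating the container and fail fast if it is not publicly servable:

```python
import urllib.request

def url_looks_servable(status: int, content_type: str) -> bool:
    # The shape Meta's worker needs: a 200 with a video Content-Type.
    return status == 200 and content_type.startswith("video/")

def preflight_video_url(url: str) -> None:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=30) as resp:
        ctype = resp.headers.get("Content-Type", "")
        if not url_looks_servable(resp.status, ctype):
            raise SystemExit(f"video URL not servable: {resp.status} {ctype}")
```

Failing here is far cheaper than waiting 300 seconds for the container to report ERROR.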

appsecret_proof is non-optional in production

Once your app has 'Require App Secret' enabled, every call without the proof returns OAuthException. The proof is one line of HMAC-SHA256 over your token, but tutorials skip it and beginners get 400s with a confusing message.

Container status goes through IN_PROGRESS, not just FINISHED

If you publish before FINISHED you get an error that names the container as 'not ready'. The other terminal states are ERROR and EXPIRED, both of which mean the video worker rejected your file (codec, duration, size).

Long-lived tokens still expire (60 days)

The 'long-lived' user token is a 60-day token. Refreshing it is one Graph API call, but you must run that call before the 60 days are up. SaaS schedulers do this for you and bill it as a feature; do it yourself in cron.
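That refresh call can be sketched as follows, assuming an Instagram-Login long-lived token and the documented refresh_access_token endpoint (apps using Facebook Login refresh through graph.facebook.com instead; check Meta's docs for your app type):

```python
import json
import urllib.parse
import urllib.request

def build_refresh_url(token: str) -> str:
    # Documented refresh endpoint for Instagram-Login long-lived tokens.
    qs = urllib.parse.urlencode({"grant_type": "ig_refresh_token",
                                 "access_token": token})
    return f"https://graph.instagram.com/refresh_access_token?{qs}"

def refresh_long_lived_token(token: str) -> str:
    # Returns a fresh 60-day token; run this from cron well before expiry.
    with urllib.request.urlopen(build_refresh_url(token), timeout=30) as resp:
        return json.loads(resp.read().decode())["access_token"]
```

Store the returned token wherever your poster reads it from; the old one keeps working until its own expiry, so a failed refresh is recoverable if you retry early.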

Reels and feed images are different endpoints in disguise

The container POST is the same path, but the media_type discriminator (REELS, IMAGE, VIDEO, CAROUSEL) flips Meta into a different processing branch with different supported codecs, aspect ratios, and field requirements. A script that works for IMAGE will return 'unsupported media' if you point it at a Reel without flipping the discriminator.

How the 4:1 picker actually decides

The other half of auto-posting on Instagram is which file to post next. Most schedulers leave that to a human dragging things around a calendar. We use a deterministic sliding window:

  • Read the last 5 rows of media_posts ordered by posted_at DESC.
  • Count how many are tagged post_type='organic'.
  • If that count is less than 4, the next pick is organic; otherwise product.
  • Within that type, the picker takes the oldest status='draft' row.
  • Output one JSON line: {post_type, video_path, post_number, reason, fallback}.

Over time the feed locks to exactly 4 organic + 1 product per 5 posts. The reason for the shape is human: a feed that is one in five product is still a creator feed; a feed that is one in two product is a sales feed and gets unfollowed. The whole picker is 114 lines of Python (scripts/ig_post_type_picker.py). It exits with code 2 when the queue is exhausted; the shell harness treats that as a clean no-op.
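The window rule above reduces to a few lines. A sketch over in-memory rows standing in for the media_posts query (the real picker, scripts/ig_post_type_picker.py, reads Postgres and also handles fallbacks):

```python
def next_post_type(last_five):
    # 4:1 sliding window: if fewer than 4 of the last 5 posts were
    # organic, post organic next; otherwise post product.
    organic = sum(1 for r in last_five if r.get("post_type") == "organic")
    return "organic" if organic < 4 else "product"

def pick_next(last_five, drafts):
    # Within the chosen type, take the oldest draft row. Assumes
    # `drafts` is ordered oldest-first, as ORDER BY created_at ASC
    # would give.
    want = next_post_type(last_five)
    for row in drafts:
        if row.get("post_type") == want and row.get("status") == "draft":
            return row
    return None  # queue exhausted -> the harness treats this as exit 2
```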

344 lines in mixer/post_to_ig.py
5 fires per day on launchd
3 Graph API calls per fire
300-second container poll deadline

Who should actually write this themselves

Not everyone needs to. Here is the honest split.

  • Solo dev or indie hacker shipping a product: write it yourself. The 344 lines are within the budget of an afternoon, you stop paying a per-seat scheduler fee, and the picker is whatever fits your content cadence.
  • Marketing team posting from one brand account: probably not worth it. A SaaS scheduler is calling the same endpoints; you pay them so a non-engineer can drag a clip into a calendar slot. The math favors the SaaS.
  • Agency posting on behalf of many clients: definitely not worth it. Hosted multi-account schedulers already solve the multi-token, multi-bucket, multi-locale problem. Recreating that yourself is a different job.
  • Dev tool or open-source project: write it yourself, but optimize the rest of your distribution for Reddit and X. Instagram is rarely the channel where a technical audience finds new tooling. (S4L's actual product is the Reddit and X engagement loop; the IG poster is a side rail for our own brand presence.)

Want the same shape for Reddit and X, the platforms where dev tools actually get found?

30 minutes to walk through S4L's Reddit and X engagement pipeline. Same self-hosted spirit as the IG poster on this page, applied to the platforms that matter for indie dev distribution.

Frequently asked questions

How does auto-posting on Instagram actually work under the hood?

It is three HTTP requests against graph.instagram.com/v22.0. First, POST /{IG_USER_ID}/media with media_type=REELS, a public video URL, and a caption. The response is a creation_id (a container). Second, GET /{creation_id} repeatedly until status_code is FINISHED. Reels processing usually takes 10 to 60 seconds; we time out at 300. Third, POST /{IG_USER_ID}/media_publish with the creation_id. The response carries the published media id and you can read its permalink with one more GET. Every official SaaS scheduler is wrapping this exact sequence. There is no other path.

Why does Instagram need a public video URL? Why not upload bytes directly?

Because the Graph API for Reels takes a video_url param, not a file payload. Meta's worker fetches your URL from its own infrastructure and renders it. That means you must host the file somewhere reachable from Meta's network during the few minutes between container creation and the FINISHED status, and ideally beyond it for safety. We use a Google Cloud Storage bucket with public read (gs://mk0r-media-temp in project mediar-394022). Anything works, including S3, R2, or a CDN, as long as the URL returns 200 with the right Content-Type for the container's lifetime. SaaS schedulers usually do this for you and bill you for the egress.

What is appsecret_proof and why do most posts about Instagram auto-posting skip it?

It is a SHA-256 HMAC of your access token, keyed by your app secret, sent on every request as the appsecret_proof query parameter. Meta strongly recommends it for server-side calls and rejects calls without it for apps that have it enabled in App Settings. Our wrapper is one line: hmac.new(secret.encode(), token.encode(), hashlib.sha256).hexdigest(). We attach it to every request in mixer/post_to_ig.py. Most beginner tutorials skip this entirely because their example apps do not require it. Production accounts almost always do.

How often can you safely auto-post to Instagram?

Meta's documented platform rate limit for the Graph API is 25 published media per IG account per 24 hours, rolling. Practical safe ground for an unsponsored account is much lower. We post five times a day at 09:00, 12:00, 15:00, 18:00, 21:00. That gives us comfortable headroom on the documented limit, lines up with active-window posting research, and avoids the 'posted 11 things at midnight' fingerprint. Do not chase the cap. The rest of the cap exists for retries, deletes, and republishes, not for stacking content.

Does S4L auto-post to Instagram for customers?

No. S4L is the social autoposter we built for ourselves. The Instagram pipeline runs against our own IG_USER_ID, drafted from videos rendered out of mixer/remotion/out/ on this repo. The product we sell to other indie devs and small teams is the Reddit and X engagement loop, not a hosted Instagram service. If you want what is on this page, you can read mixer/post_to_ig.py in the repo and adapt it. If you want managed engagement on Reddit and X for your dev tool, that is the call to book.

How do you avoid posting the same thing twice?

Two layers. At the file level, mixer/post_to_ig.py writes a sibling file named <video>.posted.json the moment a post lands; the picker skips any video that has one. At the database level, the same script updates the media_posts row in our Neon Postgres with status='posted', posted_at, and the IG permalink. The next picker invocation reads from media_posts and only considers rows still tagged status='draft'. Both layers must agree before a video is treated as fresh.
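The file-level layer is small enough to sketch in full, assuming the sidecar naming described above (the real write-back also updates Postgres in the same run):

```python
import json
import time
from pathlib import Path

def sidecar(video: Path) -> Path:
    # <video>.posted.json lives next to the source file.
    return video.parent / (video.name + ".posted.json")

def already_posted(video: Path) -> bool:
    return sidecar(video).exists()

def mark_posted(video: Path, media_id: str, permalink: str) -> None:
    # Written the moment a post lands; its existence is the dedupe check.
    sidecar(video).write_text(json.dumps({
        "media_id": media_id,
        "permalink": permalink,
        "posted_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }, indent=2))
```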

Why not use a browser automation tool against instagram.com instead?

Because Meta has a Graph API that does this exact thing in 344 lines of pure Python with zero browser dependency. A browser automation script for IG has to keep up with React-rendered selectors, rotating session cookies, and a fingerprint surface Meta watches closely. The Graph API path is officially supported, gives you a real status code, and fails loudly with a JSON error. There is no scenario where running Selenium on instagram.com is the right choice if your account is eligible for a Business or Creator API token, and most are.

What does the 4:1 picker actually do?

It is a deterministic sliding-window picker over the last five posts. If fewer than four of the last five rows in media_posts are tagged post_type='organic', the next pick is organic; otherwise the next pick is product. Over time this locks to exactly 4 organic + 1 product per 5 posts on the timeline. The picker is 114 lines (scripts/ig_post_type_picker.py), reads from Postgres, and emits one JSON line that the shell harness parses. The reason for this shape is human: a feed that is one in five product is still a creator feed; a feed that is one in two product is a sales feed and gets unfollowed.

Can I run this without S4L at all?

Yes. The Instagram Graph API is documented at developers.facebook.com/docs/instagram-platform/content-publishing. You need a Business or Creator IG account, a Facebook Page connected to it, an App with the instagram_basic and instagram_content_publish permissions, and a long-lived user token. The whole posting pipeline is three HTTP calls plus a polling loop. Our 344-line file is one example of writing it; Meta's docs are the canonical one. SaaS schedulers exist because most teams do not want to own the long-token refresh, the GCS bucket, the launchd plist, or the sliding-window picker, not because the API is hard.

s4l.ai · booked calls from social
© 2026 s4l.ai. All rights reserved.