
Persisting instagrapi sessions: file, Redis, and Postgres patterns


Why sessions matter

Every Client() you instantiate without a saved session generates a fresh set of Android device UUIDs, a new app version string, and an empty cookie jar. From Instagram’s perspective that is a brand-new phone holding your credentials, and brand-new phones logging into established accounts are exactly what its risk model is tuned to catch. Re-fingerprinting on every login is the single most common cause of challenge_required in the wild — far more common than IP issues, request volume, or anything else people typically blame.

The session blob is the fix. dump_settings() serialises the device fingerprint, the authenticated cookies, the request headers, and the user pk into a JSON document. load_settings() rehydrates them into a new Client() so the next run looks like the same phone resuming a session, not a different phone logging in cold. Pair that with a stable proxy IP and the challenge rate on a healthy account drops by an order of magnitude.

What follows is the same pattern at three levels of operational maturity: a single file for a script, Redis for a worker fleet, Postgres for multi-account orchestration.

File-based persistence (the basic case)

For a single script on a single machine, a JSON file on disk is enough. The contract is straightforward: try to load the session before logging in; if the file does not exist yet, log in cold and dump the resulting state. After that first cold start the file is your steady-state path.

from instagrapi import Client

cl = Client()
try:
    cl.load_settings("session.json")
    cl.login("USERNAME", "PASSWORD")
except FileNotFoundError:
    cl.login("USERNAME", "PASSWORD")
    cl.dump_settings("session.json")

Two subtleties. load_settings() must run before login(), otherwise the device IDs are regenerated and you have defeated the entire mechanism. And login() is still called on the warm path — instagrapi reuses the cookies but performs a quick session check; if Instagram has invalidated the cookies, it falls back to a full login transparently.
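The whole dance fits in a small helper. A sketch, not instagrapi API — the function name and the client-injection style are this guide's convention; the Client still does all the real work:

```python
from pathlib import Path

def login_with_session(cl, username, password, path="session.json"):
    """Load a saved session if present, otherwise log in cold and persist.

    `cl` is an instagrapi Client (or anything with the same interface).
    """
    session_file = Path(path)
    if session_file.exists():
        cl.load_settings(session_file)  # must happen BEFORE login()
        cl.login(username, password)    # cookie check; full login on failure
    else:
        cl.login(username, password)    # first cold start
        cl.dump_settings(session_file)  # persist the device fingerprint
    return cl
```

Injecting the client also makes the warm/cold branching trivially testable with a stub.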

Storing in Redis

A single file works until you have two workers. Two processes writing the same session.json produce a classic last-writer-wins race; one of them ends up with stale cookies, gets a sudden login_required, re-authenticates, and triggers a challenge because the fingerprint drifted between the two writes. Redis gives you a single source of truth with atomic GET/SET, plus a natural place to hang an expiry, a per-account lock, and observability.

The minimal pattern stores the session blob under a per-username key. On startup, fetch the blob if it exists, hydrate the client, log in, and write the (possibly refreshed) settings back.

import json, redis
from instagrapi import Client

r = redis.Redis()
KEY = "ig:session:USERNAME"

cl = Client()
blob = r.get(KEY)
if blob:
    cl.set_settings(json.loads(blob))
cl.login("USERNAME", "PASSWORD")
r.set(KEY, json.dumps(cl.get_settings()))

set_settings()/get_settings() are the in-memory equivalents of the file helpers — same data, no disk involved. Note that the blob is not encrypted at rest; if your Redis is shared, put TLS in front of it and ACL the key prefix, or encrypt the JSON before writing. Because the cookies are bearer credentials, anyone who can GET ig:session:* can authenticate as those accounts.
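If plaintext cookies in shared Redis make you nervous, encrypt the blob before SET and decrypt after GET. A sketch using Fernet from the `cryptography` package (an assumption — any authenticated symmetric cipher works); the key belongs in your secrets manager, never in Redis itself:

```python
import json
from cryptography.fernet import Fernet

def dump_encrypted(settings: dict, fernet: Fernet) -> bytes:
    """Serialise instagrapi settings and encrypt before they touch Redis."""
    return fernet.encrypt(json.dumps(settings).encode())

def load_encrypted(blob: bytes, fernet: Fernet) -> dict:
    """Decrypt a blob fetched from Redis back into a settings dict."""
    return json.loads(fernet.decrypt(blob))
```

The write then becomes `r.set(KEY, dump_encrypted(cl.get_settings(), fernet))` and the read `cl.set_settings(load_encrypted(blob, fernet))`.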

For TTL, set something on the order of weeks (r.set(KEY, blob, ex=14*86400)) so abandoned sessions self-clean. For locking, wrap requests in a redis.lock.Lock keyed on the username — see the concurrency section below.
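The lock itself is just SET NX with a TTL and a token-checked release. A minimal sketch with the Redis client injected; `acquire_lock`/`release_lock` are names invented here, and redis-py's built-in `r.lock()` — which releases atomically via a Lua script — is the production-grade version:

```python
import time
import uuid

def acquire_lock(r, username, ttl=120, wait=30):
    """Best-effort per-account lock; `r` is any redis-like client."""
    key, token = f"ig:lock:{username}", uuid.uuid4().hex
    deadline = time.monotonic() + wait
    while time.monotonic() < deadline:
        # SET NX succeeds only if nobody else holds the key
        if r.set(key, token, nx=True, ex=ttl):
            return token
        time.sleep(0.1)
    raise TimeoutError(f"could not lock {username}")

def release_lock(r, username, token):
    # only delete if we still hold it (check-then-delete is not atomic;
    # redis-py's Lock does this step in a Lua script)
    if r.get(f"ig:lock:{username}") == token.encode():
        r.delete(f"ig:lock:{username}")
```

The TTL doubles as a dead-worker failsafe: if a process crashes mid-request, the lock expires instead of wedging the account forever.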

Storing in Postgres (multi-account)

Once you are running tens or hundreds of accounts, Redis-as-truth starts to feel thin. You want joins against account metadata, you want updated_at for freshness queries, you want to pin a proxy to each account, and you want backups. Postgres with a jsonb settings column gives you all of that without changing instagrapi’s contract.

create table ig_sessions (
  username text primary key,
  settings jsonb not null,
  proxy text,
  updated_at timestamptz not null default now()
);

The Python side is a select before login, an upsert after. Keep settings as jsonb rather than text so you can index into it (settings->>'uuids') when debugging fingerprint drift across accounts.

import json
import psycopg
from instagrapi import Client

cl = Client()

with psycopg.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute("select settings from ig_sessions where username = %s", (USER,))
    row = cur.fetchone()
    if row:
        cl.set_settings(row[0])
    cl.login(USER, PWD)
    cur.execute(
        "insert into ig_sessions (username, settings) values (%s, %s) "
        "on conflict (username) do update set settings = excluded.settings, updated_at = now()",
        (USER, json.dumps(cl.get_settings())),
    )
    conn.commit()

The proxy column matters: pin one residential IP per account and read it alongside the settings, so the same account always egresses from the same address. Mixing IPs across runs is the second most common challenge trigger after fingerprint drift.
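Reading both columns and applying them in the right order is worth factoring out. `hydrate_client` is a name made up for this guide; `set_proxy` and `set_settings` are real instagrapi methods, and both must run before `login()`:

```python
def hydrate_client(cl, settings, proxy=None):
    """Apply the stored proxy and session settings before logging in."""
    if proxy:
        cl.set_proxy(proxy)        # same egress IP every run
    if settings:
        cl.set_settings(settings)  # same device fingerprint every run
    return cl
```

With the select extended to `select settings, proxy from ig_sessions where username = %s`, the call is `hydrate_client(Client(), *row)` followed by `cl.login(USER, PWD)`.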

Concurrency: don’t share a session across requests

Persisted sessions tempt you to fan out: ten workers, one shared blob, ten parallel requests. Instagram sees ten requests in flight from the same cookie jar within milliseconds and lights up its anti-bot signals — humans on a phone do not pipeline like that. The result is a sudden wave of challenge_required or please_wait_a_few_minutes across every worker, and the session is effectively burnt.

The rule: one in-flight request per session at a time. Enforce it with a Redis lock keyed on the username, or with an actor model where every account has a single dedicated worker that processes its queue serially. Do not multi-thread on a single Client instance — instagrapi is not thread-safe, and even if it were, Instagram would still flag the traffic pattern. Parallelism comes from running more accounts, not from parallelising one.
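The actor model is less code than it sounds: one thread and one queue per account. A stdlib sketch — the `AccountActor` class is invented for this guide, not an instagrapi API. `submit()` enqueues a callable and the single worker drains the queue serially, so two requests can never be in flight on the same session:

```python
import queue
import threading

class AccountActor:
    """One dedicated worker per account; jobs run strictly one at a time."""

    def __init__(self, client):
        self.client = client
        self.jobs = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, fn):
        """Queue fn(client); returns a Queue holding the eventual result."""
        out = queue.Queue(maxsize=1)
        self.jobs.put((fn, out))
        return out

    def _run(self):
        while True:
            fn, out = self.jobs.get()
            try:
                out.put(fn(self.client))
            except Exception as e:
                out.put(e)  # surface errors to the caller, keep the worker alive
```

Fan-out across accounts then means one `AccountActor` per username, each wrapping its own hydrated Client.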

Wrapping up

File for scripts, Redis for fleets, Postgres for multi-account systems — same dump_settings/load_settings underneath. Pair the persisted session with a pinned proxy and a serial-per-account execution model and the everyday challenge rate falls off a cliff. From here, the 2FA and challenge guide covers what to do when one fires anyway, and the proxy setup guide covers the IP side of the same problem.

Frequently asked

What is in the session.json file?

Device fingerprint (UUIDs, app version), cookies, headers, and the user pk. It's the minimum state needed to look like the same client to Instagram across runs.

Is session.json safe to commit to git?

No. It contains live cookies that can authenticate as your account. Treat it like a credential. Use environment variables or a secrets manager.

How long does a session last?

Weeks to months in practice, as long as you keep using the same device fingerprint, IP, and don't trigger challenges. Sessions die on password changes, manual logout, and severe abuse signals.

Can I share one session across processes?

Yes — but serialize access. Two concurrent requests with the same cookie jar will trigger Instagram's anti-bot heuristics. Use a Redis lock or an actor model, one in-flight request per session.

Skip the infra?

Managed Instagram API — same endpoints, sessions and proxies handled.
