
Run instagrapi in Docker: stateless containers, secrets, sessions

Maintained by the instagrapi contributors · Library on GitHub


You have an instagrapi-based service that ran beautifully on a developer laptop and you have decided to put it in a container. The first Dockerfile is rarely wrong on its own — FROM python:3.11, pip install instagrapi, COPY app/ /app, CMD ["python", "-m", "app.main"] — and the first docker run behaves exactly the way it did on the laptop. Two days later the container has been redeployed three times, every restart has invalidated the session.json that was sitting in the working directory, and the account that survived a month of laptop-based testing is suddenly serving challenge_required on every login attempt. The retry log has filled with please_wait_a_few_minutes. The .env file you copied in as a quick fix is now baked into a private registry image that somebody else on the team can pull.

The cause is the same in each failure: containers are stateless by design and instagrapi is stateful by design. The library expects to keep one cookie jar, one device fingerprint, one proxy URL, and one rate-limit history per account, and it expects all of that to outlive any individual process. A naive container deployment throws every one of those signals away on each restart, and Instagram’s risk model reads the throw-away as fraud. This page walks through the Docker-specific way to put instagrapi in a container without losing the trust state on every redeploy: externalize the session blob, treat credentials as runtime data rather than image data, and pin a residential proxy because every major cloud’s egress range is on Instagram’s pre-flagged list.

Setup

Use a multi-stage Dockerfile so the final image excludes the build chain. instagrapi pulls in a few dependencies that need a C toolchain to compile wheels — cffi, cryptography, Pillow on some platform combinations — and you do not want gcc, libffi-dev, and the wheel cache shipping into production. A two-stage layout cuts roughly 150 MB off the runtime image and removes the entire build toolchain from anything that ships to a registry.

FROM python:3.11-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc libffi-dev && rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY requirements.txt .
# Build into a self-contained virtualenv so the runtime stage copies one directory
RUN python -m venv /opt/venv && /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

FROM python:3.11-slim
# /opt/venv stays world-readable, so imports keep working under USER nobody
COPY --from=builder /opt/venv /opt/venv
ENV PATH=/opt/venv/bin:$PATH
WORKDIR /app
COPY app/ ./app/
USER nobody
CMD ["python", "-m", "app.main"]

The pair of python:3.11-slim stages keeps the final image close to 150 MB; an alpine base is tempting and another 70 MB lighter, but musl-based wheels for cryptography are a known source of subtle login failures, and the savings are not worth the debugging hours. Pair the Dockerfile with a compose.yml that pulls credentials from .env at runtime rather than COPY-ing them in.

services:
  app:
    build: .
    environment:
      IG_USERNAME: ${IG_USERNAME}
      IG_PASSWORD: ${IG_PASSWORD}
      IG_PROXY_URL: ${IG_PROXY_URL}
      REDIS_URL: redis://redis:6379/0
    depends_on: [redis]
  redis:
    image: redis:7-alpine
    volumes:
      - ig-redis:/data
volumes:
  ig-redis:

The .env file is referenced, never committed, never copied. The Redis volume is the only piece of persistent state in the stack; the application container itself is fully replaceable, which is the property the rest of this page builds on.
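
A minimal sketch of that .env, with placeholder values; the real file sits next to compose.yml and stays gitignored:

# .env: read by docker compose at runtime, never COPY'd into the image
IG_USERNAME=your_account_name
IG_PASSWORD=replace-me
IG_PROXY_URL=http://user:pass@proxy.example.com:8080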

Working example

The minimum container-shaped example is a worker process that loads its session from Redis on boot, runs a periodic IG fetch loop, and persists session updates after each call. The shape matters more than the call itself: load before login, dump after every successful call, and never trust the local filesystem to carry state across restarts.

# app/main.py
import json, os, time
import redis
from instagrapi import Client

r = redis.from_url(os.environ['REDIS_URL'])
SESSION_KEY = 'ig:session:main'
PROXY_URL   = os.environ.get('IG_PROXY_URL')

def load_client() -> Client:
    cl = Client()
    if PROXY_URL:
        cl.set_proxy(PROXY_URL)
    blob = r.get(SESSION_KEY)
    if blob:
        cl.set_settings(json.loads(blob))
    cl.login(username=os.environ['IG_USERNAME'],
             password=os.environ['IG_PASSWORD'])
    r.set(SESSION_KEY, json.dumps(cl.get_settings()))
    return cl

def loop():
    cl = load_client()
    while True:
        info = cl.user_info_by_username('instagram')
        # write to your DB here — idempotent upserts only
        r.set(SESSION_KEY, json.dumps(cl.get_settings()))
        time.sleep(900)

if __name__ == '__main__':
    loop()

Three details make the snippet survive a redeploy. The session blob is read from Redis before login() so the device fingerprint outlives the container; without that, every docker compose up looks to Instagram like a freshly purchased phone signing in, and the risk model reacts accordingly. The proxy URL is set before login() rather than after, because it is part of the trust signal Instagram pins to the session — switching from a residential IP at login time to a datacenter IP at fetch time looks like a credential takeover. The settings are dumped back to Redis after every IG call, not only at shutdown; container shutdowns are not always graceful, and a SIGKILL from the orchestrator would otherwise lose the post-call cookie state along with whatever rate-limit budget the call had just earned back.
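
For graceful stops (docker stop and most rolling redeploys send SIGTERM before the SIGKILL deadline), a small handler can flush the settings one final time. A minimal sketch, assuming the same r, SESSION_KEY, and Client object as the worker above:

import json
import signal
import sys

def flush_session_on_sigterm(cl, r, session_key):
    # Best-effort final dump on SIGTERM; the per-call dumps in the fetch loop
    # remain the safety net for an abrupt SIGKILL.
    def _flush(signum, frame):
        r.set(session_key, json.dumps(cl.get_settings()))
        sys.exit(0)
    signal.signal(signal.SIGTERM, _flush)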

Production caveats

Three patterns repeatedly break Docker + instagrapi deployments, and each one shows up later than you would like. The order below is roughly how often a team loses an afternoon to it.

1. Stateless containers lose session.json

The typical instagrapi application writes session.json to the working directory and trusts it to survive process restarts. In a container that assumption breaks immediately. A docker compose down && docker compose up swaps the writable layer entirely; an orchestrator-driven redeploy starts a fresh filesystem from the image; even docker restart of a long-running container can lose a session if the previous process never flushed its in-memory cookies to disk. The fix is to never trust the local filesystem. A bind mount or named volume scoped to the session directory works for a single replica; an external store (Redis, Postgres) works for any replica count and lets you migrate hosts without copying files. The Redis pattern in the working example is the cheapest version that survives every restart shape, including the orchestrator’s OOMKilled path.
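
For the single-replica volume variant, here is a sketch of the file-based shape, assuming compose mounts a named volume at /state (an illustrative path), using instagrapi's load_settings and dump_settings helpers instead of Redis:

# app/volume_session.py: single-replica alternative to the Redis pattern.
# Assumes the compose service mounts a named volume at /state.
import os
from pathlib import Path
from instagrapi import Client

SESSION_FILE = Path("/state/session.json")

cl = Client()
if SESSION_FILE.exists():
    cl.load_settings(SESSION_FILE)   # reuse the saved device fingerprint and cookies
cl.login(os.environ["IG_USERNAME"], os.environ["IG_PASSWORD"])
cl.dump_settings(SESSION_FILE)       # write the (possibly refreshed) session back to the volume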

2. Datacenter IPs trigger Instagram blocks faster

A container running on AWS, GCP, Azure, or any of the dozen smaller VPS providers Instagram has watched scrape its public surface for years is going to start every login from an IP that is already flagged. An account hardened on residential IPs can sometimes survive a single login from a datacenter range; a freshly created account almost never can. The right shape is to pin one residential proxy per account at the application layer with cl.set_proxy(), set it before login(), and keep it stable for the lifetime of that account. Rotating egress IPs trips impossible-travel signals and is worse than a single sticky address. See the proxy setup guide for the proxy-side configuration.
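
A sketch of that per-account shape; the IG_PROXY_URL_<USERNAME> naming is hypothetical and any stable config source works, but the proxy always goes on the Client before login():

import os
from instagrapi import Client

def client_for(username: str, password: str) -> Client:
    # One sticky residential proxy per account; the IG_PROXY_URL_<USERNAME>
    # env-var naming is illustrative, not an instagrapi convention.
    cl = Client()
    cl.set_proxy(os.environ[f"IG_PROXY_URL_{username.upper()}"])
    cl.login(username, password)   # the proxy is already part of the login signal
    return cl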

3. Credentials in image vs at runtime

A Dockerfile that ends with COPY .env /app/.env ships the IG password into every layer of every image pushed to the registry. The image is now a credential, and every developer with docker pull access has the password whether the team meant to share it or not. The fix is to keep credentials at runtime: docker secrets in Swarm or compose, --env-file for ad-hoc runs, a secrets-manager init pattern (Vault, AWS Secrets Manager, GCP Secret Manager) for orchestrated deployments. The .env file lives next to the compose file, is gitignored, and is read by the orchestrator — never copied into the image. The same logic applies to the session blob: a container that bakes a session.json into the image is leaking a logged-in cookie jar, which is functionally equivalent to leaking the password and is just as urgent to rotate when it is found.
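
One runtime-only pattern that covers both Swarm secrets and plain compose runs is to read from /run/secrets when it exists and fall back to the environment otherwise. A minimal sketch, with the secret names chosen for illustration:

import os
from pathlib import Path

def read_credential(secret_name: str, env_var: str) -> str:
    # Docker secrets arrive as files under /run/secrets; .env-driven runs
    # arrive as environment variables. Neither path bakes anything into the image.
    secret_file = Path("/run/secrets") / secret_name
    if secret_file.exists():
        return secret_file.read_text().strip()
    return os.environ[env_var]

IG_USERNAME = read_credential("ig_username", "IG_USERNAME")
IG_PASSWORD = read_credential("ig_password", "IG_PASSWORD")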

Fix in instagrapi

Four steps, in order — each one assumes the previous one is in place.

  1. Externalize the session to Redis (or a named volume). The session blob is a few kilobytes of JSON; round-tripping it through Redis on every IG call adds sub-millisecond overhead and removes the local-filesystem dependency entirely. A named volume mounted at the session directory is acceptable for a single-replica deployment but not for a swarm; pick the storage backend before the container count, not after. See the session persistence guide for the storage-side patterns that apply across both single-host and clustered deployments.

  2. Pass credentials at runtime, never at build time. Use --env-file, docker secrets, or a secrets-manager init container that writes credentials to a tmpfs the application reads on startup. Audit the registry for any image that has ever shipped a .env; if one exists, rotate the password before doing anything else, because the image is now in every backup of every CI runner that ever pulled it. Mount the .env file read-only and keep its permissions tight, so the application cannot rewrite it and nothing outside the application's own user can read it.

  3. Pin one residential proxy per account. Set the proxy at process startup, before login(), and keep the same egress IP for the lifetime of the account. The proxy URL belongs in IG_PROXY_URL next to the credentials, never in the image. If the container hops cloud regions on redeploy, the proxy keeps the IG-facing IP stable even though the container’s outbound address changed; without that decoupling, every deploy looks like a new device on a new continent and the account spends its first few minutes back online inside a challenge_required loop.

  4. Run the container as a non-root user. A USER nobody directive in the Dockerfile (or a numeric UID via Kubernetes securityContext; see the sketch after this list) gives you a smaller blast radius if the application or one of its dependencies is compromised. instagrapi does not need root; very few Python applications do. The non-root posture also prevents accidental writes to the working directory, which is exactly the kind of accidental write that creates a stray session.json outside the externalized store and quietly forks the account’s session state across two backends.
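
A sketch of the Kubernetes variant of step 4, using standard container-level securityContext fields; the surrounding Deployment spec is assumed:

# Kubernetes equivalent of USER nobody: container-level securityContext fields.
# Pair readOnlyRootFilesystem with an emptyDir at /tmp if the app needs scratch space.
securityContext:
  runAsNonRoot: true
  runAsUser: 65534              # uid of "nobody" on Debian-based images
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true  # also blocks stray session.json writes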

Deep dive

Container health is its own integration shape with instagrapi. The orchestrator’s default HEALTHCHECK — typically a TCP probe on a port the application binds — only proves the process is alive, not that the IG session is. A more useful health probe calls cl.account_info() once per minute and reports unhealthy on a LoginRequired or ChallengeRequired; the orchestrator can then recycle the container, which triggers the same load-from-Redis path the working example uses, and the recycle picks up a still-valid session if one exists or surfaces the auth failure to the team if not. Tune the probe interval to whatever rate-limit budget is left after the application’s own IG traffic; one extra call per minute is cheap, but eight replicas all probing once per minute consume eight calls per minute against a single account and can themselves trigger please_wait_a_few_minutes faster than the workload would on its own.
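
A minimal sketch of that probe, runnable as a compose healthcheck (test: ["CMD", "python", "-m", "app.healthcheck"]) or a Kubernetes exec probe; the module path and Redis key mirror the working example, everything else is an assumption:

# app/healthcheck.py: exit 0 while the stored session still answers account_info(),
# exit 1 on LoginRequired/ChallengeRequired so the orchestrator recycles the container.
import json, os, sys
import redis
from instagrapi import Client
from instagrapi.exceptions import ChallengeRequired, LoginRequired

def main() -> int:
    r = redis.from_url(os.environ["REDIS_URL"])
    blob = r.get("ig:session:main")
    if not blob:
        return 1                      # no session yet: let the worker log in first
    cl = Client()
    proxy = os.environ.get("IG_PROXY_URL")
    if proxy:
        cl.set_proxy(proxy)           # probe must use the same egress as the worker
    cl.set_settings(json.loads(blob))
    try:
        cl.account_info()             # one cheap authenticated call
        return 0
    except (LoginRequired, ChallengeRequired):
        return 1

if __name__ == "__main__":
    sys.exit(main())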

Frequently asked

Why does instagrapi keep hitting challenge_required in my Docker container?

Containers are stateless — every restart loses session.json, which means a fresh device fingerprint and a triggered Instagram challenge. Mount a Docker volume or persist sessions to external storage (Redis, Postgres) so the fingerprint survives container restarts.

How should I pass IG credentials into a Docker container?

Docker secrets (in Swarm or compose), environment variables from a .env file (NOT committed), or pull from a secrets manager (Vault, AWS Secrets Manager) at startup. Never bake credentials into the Docker image.

Will my Docker container's IP get blocked by Instagram?

If you run on a major cloud (AWS, GCP, Azure), yes — datacenter IPs are pre-flagged. Route through a residential proxy at the application layer (instagrapi's set_proxy()) or a system-level VPN. The proxy URL is part of session state too, so a Docker container that swaps egress IPs across restarts looks like impossible-travel to Instagram's risk model.

Can I use a multi-stage Dockerfile for instagrapi?

Yes — and you should. A multi-stage Dockerfile keeps the final image small: stage 1 installs build deps (gcc, libffi-dev for the wheels that need compiling), stage 2 copies only the installed packages. The final Docker image is around 150 MB lighter and excludes the build chain, which also tightens the container's attack surface.

Skip the infra?

Managed Instagram API — same endpoints, sessions and proxies handled.

Try HikerAPI → Full comparison