Deploy instagrapi in Django: sessions, workers, scheduling
Maintained by the instagrapi contributors · Library on GitHub
You are building a Django application that needs Instagram data — a feed of mentions for the marketing team, an inbox of DMs piped into the support tool, a nightly job that snapshots a competitor’s followers — and instagrapi is the obvious library to reach for. The first integration most developers write looks like a views.py that constructs a fresh Client(), calls cl.login() with credentials from settings, and returns the response inline. It works once on a laptop, falls over the moment it touches a production deployment, and produces a stream of challenge_required errors that have nothing to do with the account itself and everything to do with the integration shape.
The reasons it falls over are predictable. Sync instagrapi calls block; a 30-second user_followers walk inside a Django view ties up a request worker for the whole 30 seconds. Each gunicorn worker holds its own Client() with its own freshly-generated device fingerprint, so the same Instagram account looks like five new phones logging in at once. The rate-limit budget is per-account on Instagram’s side but per-process in your code, so three workers exhaust the quota three times faster than expected. This page walks through the integration pattern that fixes all three: persist the session in Redis with a refresh lock, push every Instagram call into Celery, and centralise the rate-limit budget behind a token bucket that all workers share.
Setup
Add instagrapi alongside the libraries that make session-sharing and scheduling sane in a Django app. django-redis gives you a process-shared Redis cache; django-celery-beat adds the scheduler you will need once the integration outgrows ad-hoc management commands.
pip install instagrapi django-redis django-celery-beat
Configure a small INSTAGRAM block in settings.py that holds nothing secret and points at a Redis connection. Treat credentials as environment variables — never commit the password, and never log it.
# settings.py
import os

INSTAGRAM = {
    'SESSION_BACKEND': 'redis',
    'REDIS_URL': 'redis://localhost:6379/0',
    'ACCOUNTS': {
        'main': {
            'username': os.environ['IG_USERNAME'],
            'password': os.environ['IG_PASSWORD'],
        },
    },
}
The ACCOUNTS dictionary scales naturally to multi-account setups: each account gets its own session key in Redis (ig:session:<name>) and its own rate-limit bucket. If you only have one account today, ship it as a dictionary anyway — refactoring the lookup logic later is more painful than typing two lines now. The cost of leaving credentials in settings.py even temporarily is high: a leaked Django settings file containing an Instagram password is enough to lose the account permanently to a fast-acting attacker, so the env-var read is non-negotiable from day one. Pair it with django-environ or any other twelve-factor library you already use; the important thing is that the password never appears in version control or in a Docker image.
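As a sketch of how that lookup logic might work (get_account and the derived key names are illustrative, not part of instagrapi), a small helper can resolve one entry of the ACCOUNTS dict into everything the rest of the integration needs — credentials plus the per-account Redis keys:

```python
# Hypothetical helper: resolve one account's credentials and Redis keys
# from the INSTAGRAM settings block. In a real project this would read
# django.conf.settings.INSTAGRAM rather than take the dict as an argument.

def get_account(config: dict, name: str) -> dict:
    try:
        account = config['ACCOUNTS'][name]
    except KeyError:
        raise KeyError(f'unknown Instagram account: {name!r}')
    return {
        'username': account['username'],
        'password': account['password'],
        'session_key': f'ig:session:{name}',  # per-account session blob
        'bucket_key': f'ig:bucket:{name}',    # per-account rate-limit bucket
    }
```

Every caller goes through this one function, so adding a second account later is a two-line settings change rather than a refactor.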
Working example
The minimum end-to-end Django example is a management command that loads a session from Redis, runs an instagrapi call, and writes the result to a model. Management commands are the right place to start because they avoid the request-worker question entirely — Celery is the next step, but the session-loading pattern is identical either way.
# myapp/management/commands/sync_followers.py
import json
import os

from django.core.management.base import BaseCommand
from django_redis import get_redis_connection
from instagrapi import Client

from myapp.models import Follower

class Command(BaseCommand):
    def handle(self, *args, **opts):
        r = get_redis_connection()
        cl = Client()
        blob = r.get('ig:session:main')
        if blob:
            cl.set_settings(json.loads(blob))
        cl.login(
            username=os.environ['IG_USERNAME'],
            password=os.environ['IG_PASSWORD'],
        )
        r.set('ig:session:main', json.dumps(cl.get_settings()))
        target = cl.user_id_from_username('instagram')
        for pk, info in cl.user_followers(target, amount=200).items():
            Follower.objects.update_or_create(pk=pk, defaults={'username': info.username})
Three things make this snippet production-shaped rather than demo-shaped. First, the session is loaded from Redis before login() so the device fingerprint is reused across runs — without this, every invocation looks like a brand-new device and Instagram’s risk model fires challenge_required reliably within the first few runs. Second, get_settings() is dumped back to Redis after a successful login so the post-login cookie jar replaces the pre-login one; skip this and you waste the trust signal Instagram just minted for you. Third, the writes go through update_or_create rather than a naive bulk_create — Instagram’s pagination occasionally re-yields followers across page boundaries, so the integration must be idempotent.
Production caveats
Three patterns break Django + instagrapi integrations once they leave the laptop. Each has a counterpart in the fix section; read them in order so the rationale lands before the recipe.
1. Sessions across Django worker processes
Each gunicorn or uvicorn worker is its own Python process, which means a module-level cl = Client() is private to one worker — there is no shared state across the four-or-eight processes a typical Django deployment runs. If you let each worker construct its own Client() with no shared backing store, every worker generates its own device fingerprint and Instagram sees the same account logging in from N different phones simultaneously. The risk model treats this as a credential-stuffing pattern and challenges the account aggressively.
The fix is to centralise the session in Redis (or a single-row Django model) and have every worker load it on boot. The wrinkle is concurrency: two workers booting at the same time can both find a stale session, both run login(), and both overwrite the Redis blob — at which point one of them has a cookie jar that the other already invalidated. Lock the refresh path with a Redis SETNX so only one worker does the login and the rest wait for the result.
2. Long IG calls block request workers
A user_followers(amount=1000) walk takes 30+ seconds; even a user_info_by_username lookup runs a few hundred milliseconds. Either of these inside a Django view holds a request worker for that duration. With four workers and a slow Instagram day, the entire request pool can be drained by four concurrent users hitting the same endpoint, and every other user gets a 502. The web tier should never call instagrapi directly; views push tasks to Celery and return 202 Accepted plus a task id, and the client polls a status endpoint backed by Redis.
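The enqueue-and-poll shape can be sketched framework-agnostically; submit and poll are hypothetical names, and in a real app the enqueue callable would be a Celery task's .delay() while the store would be Redis rather than a dict:

```python
import uuid

# Framework-agnostic sketch of the 202-and-poll protocol. The view-side
# handler calls submit() and serialises its return value; the status
# endpoint calls poll(). Both `enqueue` and `store` are stand-ins: in
# production, enqueue is a Celery task's .delay() and store is Redis.

def submit(enqueue, store, username: str) -> dict:
    task_id = str(uuid.uuid4())
    store[f'ig:task:{task_id}'] = 'pending'  # status the client will poll
    enqueue(task_id, username)               # hand the work to the worker tier
    return {'status': 202, 'task_id': task_id}

def poll(store, task_id: str) -> dict:
    # the Celery task overwrites the key with 'done' or 'failed' when it finishes
    state = store.get(f'ig:task:{task_id}', 'unknown')
    return {'status': 200, 'state': state}
```

The web tier touches only the store, never instagrapi, so a slow Instagram day degrades task latency instead of draining the request pool.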
3. Rate-limit budget is per-account, not per-process
Instagram budgets requests per account, but your code budgets per worker process by default — there is no built-in coordination between gunicorn workers. Three workers running at full speed against one account exhaust the quota three times faster than the budget you wrote, and your single-account integration starts seeing please_wait_a_few_minutes long before you would expect. Token-bucket the budget in Redis with the account name as the key; every Instagram call increments the bucket and waits if it is empty.
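A minimal token-bucket sketch follows; the state is kept in-process here to keep the example short, whereas the production version would store the token count and last-refill timestamp in Redis under the account's key so every worker draws from one shared budget:

```python
import time

# Token bucket: capacity equals the hourly budget, refilled continuously.
# try_acquire() returns False when the budget is spent; the caller sleeps
# and retries. In production, `tokens` and `last` live in Redis keyed by
# account name so the budget is shared across all worker processes.

class TokenBucket:
    def __init__(self, rate_per_hour: float, clock=time.monotonic):
        self.capacity = rate_per_hour
        self.tokens = rate_per_hour
        self.rate = rate_per_hour / 3600.0  # tokens regained per second
        self.clock = clock
        self.last = clock()

    def try_acquire(self) -> bool:
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # bucket empty: sleep until the next refill
```

Wrapping every instagrapi call in a `while not bucket.try_acquire(): time.sleep(...)` loop turns the per-process call rate into a per-account one.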
Fix in instagrapi
Four steps, in order — each one assumes the previous one is in place.
1. Persist sessions to Redis with locking. Wrap the session load/save in a small helper that uses SETNX to coordinate the refresh path. The pattern is: try-load, attempt-lock, login-and-save inside the lock, fall back to read after the lock expires. Use a one-minute lock TTL — long enough for login() plus the post-login dump_settings(), short enough that a crashed worker does not block the next refresh forever.

import json
import time

from django_redis import get_redis_connection
from instagrapi import Client

def get_client(account: str) -> Client:
    r = get_redis_connection()
    key = f'ig:session:{account}'
    cl = Client()
    blob = r.get(key)
    if blob:
        cl.set_settings(json.loads(blob))
        return cl
    lock_key = f'ig:session:{account}:lock'
    if r.set(lock_key, '1', nx=True, ex=60):
        cl.login(...)
        r.set(key, json.dumps(cl.get_settings()))
        r.delete(lock_key)
        return cl
    # another worker is logging in — wait briefly and reread
    time.sleep(2)
    blob = r.get(key)
    cl.set_settings(json.loads(blob))
    return cl

2. Wrap IG calls in Celery tasks. Every public-facing entry point pushes a task and returns a 202 with the task id. The view never calls instagrapi; the task does. See the Celery integration for the worker-side pattern, including retries on PleaseWaitFewMinutes.

3. Centralise the rate-limit budget in Redis. Maintain a token bucket keyed by account. Decrement before every Instagram call; sleep until the next refill if the bucket is empty. The budget is a per-account number — somewhere between 200 and 600 calls per hour for hardened residential accounts, less for fresh accounts — and you should tune it down until please_wait_a_few_minutes stops appearing.

4. Pin one residential proxy per account. Set cl.set_proxy() before login() and keep the same IP for the lifetime of the account. Datacenter IPs will not pass the first challenge on a hardened account; rotating residential IPs trip the impossible-travel signal. One stable IP per account is the configuration that works.
Deep dive
If your team is moving the Django app to async views, instagrapi is not the right library — aiograpi, the async-native fork maintained by the same author, is. Calling sync instagrapi from an async def view sounds harmless but it is not: the sync HTTP call inside instagrapi blocks the event loop for the duration of the request, which means every other coroutine the worker is servicing freezes for the same amount of time. With ten concurrent users on one async worker, a 500 ms instagrapi call inflates p99 latency by 4–5 seconds. Wrapping the call in sync_to_async does not help — it pushes the blocking work into a thread, which avoids freezing the event loop, but it still consumes a thread from the limited Django ASGI thread pool, and you pay the cost in throughput rather than latency. The fix is to run aiograpi from async views and instagrapi (via Celery) from sync paths; they share the session-file format, so a session dumped by one can be loaded by the other if you ever need to migrate gradually.
The longer-term answer for any team running Django at meaningful scale is to keep instagrapi out of the request path entirely and treat it as a workload that lives in Celery, with the web tier doing nothing but enqueueing tasks and reading status. That separation buys you two operational properties that are hard to retrofit: you can scale the worker tier independently of the web tier (Instagram throughput is bounded by Instagram, not by request volume), and you can tear down and recreate the web tier on a deploy without disturbing in-flight Instagram work. Both of those compound as the team grows, and both are nearly free to set up if you do them on day one.
Related integrations
- Deploy instagrapi in FastAPI: async, aiograpi, background tasks FastAPI with instagrapi: why sync calls block the event loop, when to switch to aiograpi, and how to wrap blocking IG actions in BackgroundTasks.
- Run instagrapi tasks in Celery: queues, retries, rate limits Wrap instagrapi calls in Celery tasks: per-task session reuse, exponential backoff for please_wait_a_few_minutes, and distributed rate-limit budgets.
- Run instagrapi in Docker: stateless containers, secrets, sessions Containerize an instagrapi service: handle stateless restarts, externalize session storage, manage IG credentials safely, and avoid datacenter IP blocks.
Related errors
- login_required: how to recover the instagrapi session in Python Fix instagrapi's login_required error: when sessions die mid-script, what causes the cookie to expire, and how to recover without re-triggering 2FA.
- please_wait_a_few_minutes: instagrapi rate limit and how to recover please_wait_a_few_minutes is Instagram's soft rate limit. instagrapi raises PleaseWaitFewMinutes — sleep, slow down, rotate proxy, persist.
Related guides
- Persisting instagrapi sessions: file, Redis, and Postgres patterns Reuse instagrapi login sessions across runs and processes: dump_settings, load_settings, and storing the session blob in Redis or Postgres.
- Instagram Private API in Python: a practical guide with instagrapi How to use Instagram's private (mobile) API from Python with instagrapi. Login, session reuse, fetching media, posting, and avoiding common errors.
Frequently asked
Should I call instagrapi from a Django view directly?
Almost never. instagrapi calls block — a Django view that calls cl.user_info_by_username() ties up a request worker for hundreds of milliseconds to seconds. Push the call to a Celery task and have the view return a 202 + task id.
Where does the Django app store the instagrapi session?
Database (single-row model) or Redis. A filesystem session.json works only for single-replica dev. Whatever the store, persist the JSON blob that get_settings() returns and feed it back through set_settings() on boot; the format is the same one dump_settings() writes to disk.
Can I run instagrapi alongside Django's async views?
Use aiograpi instead — instagrapi is sync. Django's async view is event-loop driven; calling sync instagrapi blocks the loop. Same author maintains both libraries.
How do I share an instagrapi session across multiple Django workers?
Persist to Redis with a per-account key (ig:session:{username}). Lock writes with a Redis SETNX so two workers don't both refresh the session and clobber each other's cookies.