
Get Instagram followers in Python with instagrapi


What you can fetch

user_followers returns a dict keyed by numeric pk, where each value is a UserShort pydantic model. The fields populated for every entry are pk, username, full_name, profile_pic_url, is_private, and is_verified. That is the same shape Instagram’s mobile app gets when it renders the followers list — nothing more, nothing less.

What is not in there: email, phone, follower-of-follower counts, last-active timestamp, or any engagement signal. Instagram’s private API treats those as private to the user themselves; no amount of clever scraping inside user_followers will surface them. If you need richer per-user data — biography, follower count, external URL — you make a second user_info call per follower, and that is where rate limits start to matter. A 50k-follower target with a per-follower enrichment pass is 50k+ extra requests, which is its own crawl with its own proxy budget.
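
A minimal sketch of that enrichment pass, assuming a logged-in Client cl and a followers dict like the one fetched in the next section; user_info is a real instagrapi call, while the field selection and sleep bounds are purely illustrative:

import random
import time

# Hypothetical enrichment pass: one extra user_info request per follower.
enriched = {}
for pk, short in followers.items():
    full = cl.user_info(str(pk))          # full User model: biography, counts, external_url
    enriched[pk] = {
        "username": short.username,
        "biography": full.biography,
        "follower_count": full.follower_count,
        "external_url": full.external_url,
    }
    time.sleep(random.uniform(2, 4))      # pacing is illustrative, not a safe-harbor number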

The companion endpoint is user_following, which returns the accounts a target follows rather than the accounts that follow them. The shape and pagination are identical — everything in this guide applies to both. The “following” list also tends to be smaller and less aggressively rate-limited, so it is a good place to validate your setup before pointing it at a 200k follower base.
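
Given the same cl and user_id as in the example below, the call is symmetric:

following = cl.user_following(user_id, amount=0)  # same dict-of-UserShort shape as user_followers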

Fetching by username vs by pk

The endpoint takes a numeric user_id (the account’s pk), not a username. If all you have is a handle, resolve it once with user_id_from_username and cache the result — usernames can change but the pk is stable for the life of the account.

from pathlib import Path

from instagrapi import Client

cl = Client()
if Path("session.json").exists():
    cl.load_settings("session.json")  # reuse the saved device + cookies from an earlier run
cl.login("YOUR_USERNAME", "YOUR_PASSWORD")
cl.dump_settings("session.json")  # persist the session for the next run

user_id = cl.user_id_from_username("instagram")
followers = cl.user_followers(user_id, amount=200)
for follower_id, info in followers.items():
    print(follower_id, info.username, info.full_name)

On a one-shot script this round-trip costs nothing. On a job that walks the same target every hour, persist the pk in Redis or a small SQLite table next to the session — you save one request per run, and one request is one fewer chance to trip the risk system. The same applies if you are walking a list of targets from a CSV: resolve every username to a pk once, store the mapping, and read from it on subsequent runs. Username changes are rare enough that revalidating monthly is plenty.
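
A minimal sketch of that cache with stdlib sqlite3; the table and file names are illustrative:

import sqlite3

db = sqlite3.connect("pk_cache.db")
db.execute("CREATE TABLE IF NOT EXISTS pk_cache (username TEXT PRIMARY KEY, pk TEXT)")

def resolve_pk(cl, username):
    row = db.execute("SELECT pk FROM pk_cache WHERE username = ?", (username,)).fetchone()
    if row:
        return row[0]                         # cache hit: zero requests
    pk = cl.user_id_from_username(username)   # one request, then cached
    db.execute("INSERT OR REPLACE INTO pk_cache VALUES (?, ?)", (username, str(pk)))
    db.commit()
    return str(pk)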

Walking the full list with pagination

Instagram returns followers in chunks of 50–100. instagrapi walks those chunks for you: pass amount=N to stop after roughly N entries (it will round up to a chunk boundary), or pass amount=0 to fetch the entire list. There is no second function to call.

all_followers = cl.user_followers(user_id, amount=0)
print(f"Fetched {len(all_followers)} followers")

For accounts under ~10k followers this is fine on a laptop with a logged-in session and no proxy. Above that, a single amount=0 call becomes a multi-minute crawl that holds the connection open and gives you no checkpoint — if it fails at follower 80,000 of 120,000, you start over.

For 100k+ targets, drop down to the chunked variant user_followers_v1_chunk(user_id, max_id=""), which returns one page plus a next_max_id cursor. Persist the cursor in Redis keyed by (account, target_user_id), sleep between calls, and resume on failure. That pattern is the difference between a crawl that finishes overnight and one that you babysit for three days.
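
A sketch of the resumable pattern, with a local JSON file standing in for Redis to keep the example self-contained; the max_amount page bound and sleep range are illustrative:

import json
import random
import time
from pathlib import Path

def walk_followers(cl, user_id, checkpoint="cursor.json"):
    # Resumable walk: the cursor and collected pks survive a crash.
    state = {"max_id": "", "pks": []}
    path = Path(checkpoint)
    if path.exists():
        state = json.loads(path.read_text())
    while True:
        chunk, next_max_id = cl.user_followers_v1_chunk(
            user_id, max_amount=100, max_id=state["max_id"]
        )
        state["pks"].extend(str(u.pk) for u in chunk)
        state["max_id"] = next_max_id
        path.write_text(json.dumps(state))    # checkpoint after every page
        if not next_max_id:                   # empty cursor: the list is exhausted
            return state["pks"]
        time.sleep(random.uniform(2, 4))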

One nuance worth flagging: len(all_followers) rarely equals the profile’s displayed follower count. Instagram caches that number aggressively, and the iterator returns only entries it can actually serve: recently deactivated accounts and shadowbanned profiles drop out. A 1–3% delta is normal; a 30% delta means your crawl was rate-limited mid-walk and silently truncated.
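
A cheap end-of-crawl sanity check along those lines, assuming all_followers from above; the 5% threshold is an arbitrary illustration, not an instagrapi constant:

profile = cl.user_info(user_id)
coverage = len(all_followers) / max(profile.follower_count, 1)
if coverage < 0.95:  # illustrative cutoff
    print(f"Fetched {len(all_followers)} of ~{profile.follower_count}: crawl may have been truncated")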

Exporting to CSV

The follower dict is small enough to dump straight to CSV with the stdlib; no pandas, no extra dependency. Pick the columns most analyses actually use (pk, username, full_name, is_private, is_verified) and skip profile_pic_url, which expires within hours and bloats the file.

import csv
with open("followers.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["pk", "username", "full_name", "is_private", "is_verified"])
    for pk, u in all_followers.items():
        w.writerow([pk, u.username, u.full_name, u.is_private, u.is_verified])

If you want JSON instead, every UserShort has .dict() (pydantic v1) or .model_dump() (pydantic v2). Stream those to a JSONL file the same way and you can jq over the result later without re-fetching. For long crawls, write each chunk to disk as it comes in rather than buffering the entire dict — if the crawl dies at 80%, you still have 80% of the data on disk instead of an empty file and an exception.
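
A minimal JSONL version of the same export; model_dump is preferred when present so the snippet works on either pydantic major version:

import json

with open("followers.jsonl", "w", encoding="utf-8") as f:
    for u in all_followers.values():
        record = u.model_dump() if hasattr(u, "model_dump") else u.dict()  # pydantic v2 / v1
        f.write(json.dumps(record, default=str) + "\n")  # default=str covers URL fields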

Avoiding rate-limit pain

Three mitigations, in order of how much they help:

  1. Reuse your session. Every fresh Client() is a new device fingerprint, and Instagram’s risk model weights “new device fetching followers list” heavily. Run cl.dump_settings("session.json") after a successful interactive login, then cl.load_settings(...) on every subsequent run before calling login(). This alone removes most of the noise in a low-volume script.

  2. Pin a residential proxy per account. Datacenter IPs and shared VPN exits get flagged within a few hundred follower-fetches. A residential proxy that stays sticky to one account looks like a phone on a home Wi-Fi and survives an order of magnitude more requests. Set it once with cl.set_proxy("http://user:pass@host:port") before login. The full pattern — sticky sessions, health-checking, rotation when an IP burns — is in the proxy setup guide.

  3. Sleep between calls. Even with a clean session and a residential proxy, cl.delay_range = [1, 3] (a 1–3 second jitter between requests) buys you a lot. For chunked walks of large accounts, push the lower bound to 2 or 3. All three mitigations combined look like the sketch below.
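
All three together, as a sketch; the proxy URL is a placeholder and the delay bounds are the conservative end suggested above:

from pathlib import Path
from instagrapi import Client

cl = Client()
cl.set_proxy("http://user:pass@host:port")   # placeholder: one sticky residential proxy per account
cl.delay_range = [2, 4]                      # jittered sleep between every request
if Path("session.json").exists():
    cl.load_settings("session.json")         # reuse the saved device fingerprint
cl.login("YOUR_USERNAME", "YOUR_PASSWORD")
cl.dump_settings("session.json")             # persist for the next run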

Two specific failure modes to watch for. please_wait_a_few_minutes is a soft block — back off for 30 minutes, then retry; if it returns immediately, your IP is the problem. feedback_required is harsher: Instagram has decided the account is acting unusually, and the only fix is hours of idle time plus, often, a manual login from the official app to clear the flag.
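
Both map to exceptions in instagrapi.exceptions, so the handling can be explicit; the 30-minute back-off mirrors the guidance above:

import time
from instagrapi.exceptions import FeedbackRequired, PleaseWaitFewMinutes

try:
    followers = cl.user_followers(user_id, amount=0)
except PleaseWaitFewMinutes:
    time.sleep(30 * 60)  # soft block: back off ~30 minutes, then retry once
    followers = cl.user_followers(user_id, amount=0)
except FeedbackRequired:
    raise SystemExit("feedback_required: idle the account and log in from the official app")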

Wrapping up

Followers are usually the first dataset, not the last. Once you have the list, the next step is typically pulling each follower’s recent media — see the Instagram scraper guide for the media-fetch pattern at scale. And before you point any of this at production volume, read the proxy setup guide: it is the single biggest determinant of whether your account survives the week. Start small, persist your session and cursor, and add proxies before you scale, not after.

Frequently asked

Can I fetch followers of a private account?

Only if the logged-in account follows that private account. instagrapi will return the full list it can see; you can't bypass Instagram's privacy model.

How many followers can I fetch in one run?

Instagram paginates roughly 50–100 entries per request. Without a proxy, fetching more than a few thousand followers in a short window will trigger please_wait_a_few_minutes or feedback_required.

Why does the count returned by instagrapi differ from the profile count?

The profile count comes from a cached counter; the iterator returns actual follower entries, which may differ slightly because of recent unfollows, deactivated accounts, or shadowbanned profiles.

Can I get follower email or phone?

No. The mobile API does not expose contact info for other users. Only your own account's contact info is accessible.
