Twitter

Legally Scrape Twitter Data

Download to Excel & CSV Files via API

Unlike screen scrapers, our no-code platform exports data directly from Twitter's Official API so you can download extracted data seamlessly. That means no broken code, blocked requests, overpriced proxies, or incorrect data.


Sign Up to Scrape Data from Twitter's API

How to Scrape Twitter Data

Scraping social media sites like Twitter doesn’t need to be complicated. In fact, Twitter wants you to scrape their data using the Official Twitter API, which allows you to scrape 10,000 Tweets per month on the basic tier, 1,000,000 Tweets per month on the pro tier, and even more with their enterprise plans!

How Can Twitter Data Help Your Business or Research?

Many businesses & researchers scrape data from Twitter to improve their social media presence or better understand the conversations taking place in their marketplace. Another common use case is to perform sentiment analysis on Tweet data, identifying positive or negative sentiment on current trends.

Whatever business or research you're in, scraping Twitter data is more common now than ever, with so many public conversations, users, and hashtags available to scrape on Twitter for finding new insights into our ever-changing online society.

Official Twitter API Scraping

The best way to scrape Twitter data is through the Official Twitter API, which allows you to legally scrape Tweets & users from the platform with Twitter’s blessing. The downside is that access to the API is no longer free, and the starting price is $100 USD for scraping up to 10,000 Tweets per month.

The Twitter API is Worth the Time it Will Save You

We highly recommend using the official Twitter API and feel that it's well worth the price for the time & frustration it will save you in attempting to circumvent Twitter's Terms of Service and scrape data yourself. If you end up using proxies, their cost can easily exceed the cost of the Twitter API, in addition to the hours wasted fighting blocks and bans from Twitter.

Unofficial Twitter Scrapers

Despite our guidance, we know that a lot of you are going to attempt to bypass the official Twitter API and try to scrape it yourself. To save you a few days of headache, we’ll outline below the main reasons why this is a bad idea.

Twitter Rate Limits

Since you need to be logged in to Twitter to access and scrape Twitter data, you’ll be limited to viewing a certain number of Tweets per day, typically a few hundred depending on current Twitter policies and if you’re paying for access or not.

Scrape Twitter by their Rate Limits

This means that if you’re using an unofficial Twitter scraper, it will need to collect Tweets while pretending to be logged in as a Twitter user. Even if successful, the Twitter scraper will only be able to collect a few hundred Tweets before it needs to log out and use another account.

Depending on your requirements, you’ll end up needing to create dozens or hundreds of fake Twitter accounts in order to collect enough Twitter data and circumvent their API. The issue you’ll have here is in creating new accounts - Twitter isn’t stupid and when they see you attempt to make 10 new Twitter accounts from the same IP address, they will get suspicious and ban all of them.

Will Your Twitter Scraper Get You Banned?

To get around this, you’d need to use a proxy service (a good one too, since most proxies these days are easily detectable due to high latency) and create each Twitter account with a different IP address, keeping track of all of this. Assuming you don’t get banned while web scraping Twitter, the time and effort needed to bypass these Twitter scraping rate limits will easily exceed the cost of using the official Twitter API.

Broken Twitter Scrapers

If you browse Github, you’ll find plenty of “community” supported libraries for web scraping Twitter, often from your own computer or IP address, putting you at extreme risk when automating access to Twitter.

You may also see guides showing you how to write your own Twitter scraper in Python or another language. While these methods may have worked last decade, nearly all Twitter scrapers are now broken due to constant changes from Twitter.

Coding Your own Twitter Scraper? Good Luck!

While you may be able to build your own Twitter scraper or use an existing one from Github, you will still be bound by the rate limits mentioned earlier. So it will only work for scraping a few hundred Tweets a day if it works at all.

Furthermore, any Twitter scraper tools that run on your computer will jeopardize your own IP address and reputation, resulting in blocks & bans from Twitter and from other large sites that share reputation information.

Illegal Scraping Services

Is web scraping Twitter legal? A common myth that paid scraping services & proxy providers love to perpetuate is that scraping publicly accessible data is completely legal, regardless of the circumstances! This couldn’t be further from the truth, as the legality around web scraping centers on accessing social media data in accordance with the website’s Terms of Service.

If Twitter prohibits automated web scraping of its data in its Terms of Service, then any third party (e.g. a scraping company) that helps you do this may be liable for tortious interference with contract. Twitter has already filed lawsuits against major scraping companies and will likely continue to combat illegal Twitter web scraping & data processing.

Don't Get Sued Scraping Twitter

Even if you don’t use third party scraping tools and engage in web scraping Twitter data yourself, it could still land you in trouble as Twitter has also pursued legal action against individuals for violating their terms. We can guarantee you that the cost of a lawyer and legal defense will well exceed the small price that Twitter asks for access to its official API instead of using an illegal Twitter scraper.

One final thing to consider is re-publishing Twitter data (even if collected legally through their API). You need to be very careful about publishing Tweet content or scraped data from Twitter, even if it's publicly available! Republishing raw Tweet data (especially if personally identifiable via username) can violate the original author's copyright and privacy expectations.

100% Legal HAR File Twitter Scraper

Unlike other scraping companies, we offer a way to scrape Twitter data through their official API (paying for both our service & Twitter’s API access), as well as through a clever legal loophole that allows us to scrape Twitter using your web traffic history instead of the Twitter API.

A Legal Twitter Scraper that Doesn't Use the API

However, we still highly recommend going the official Twitter API route (which we'll cover in more depth next), but do want to present you with an alternative option for scraping Twitter that's both 100% legal and doesn't require paying for the Twitter API.

The key to our unofficial Twitter scraper is that it doesn't violate Twitter's terms. Instead, you simply use the Twitter website normally, browsing through the information you want to scrape, while your browser records the traffic from Twitter as a HAR file.

We then extract data from the HAR file instead of from Twitter directly, so no violation of Twitter's Terms of Service occurs. You can see a full in-depth tutorial in this video (or at the top of this page) and use the HAR File Web Scraper to try it for yourself.
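To give a feel for what that extraction step looks like, here's a minimal Python sketch that pulls JSON response bodies out of a HAR capture. The HAR layout used is the standard log.entries structure; the twitter.com URL filter and the function name are illustrative assumptions, not part of our actual implementation:

```python
import json

def extract_twitter_responses(har: dict, url_substring: str = "twitter.com") -> list:
    """Pull JSON response bodies for matching requests out of a HAR capture."""
    bodies = []
    for entry in har.get("log", {}).get("entries", []):
        if url_substring not in entry["request"]["url"]:
            continue  # skip traffic to other sites captured in the same session
        text = entry.get("response", {}).get("content", {}).get("text")
        if not text:
            continue
        try:
            bodies.append(json.loads(text))  # keep only JSON payloads
        except json.JSONDecodeError:
            pass  # ignore HTML, images, and other non-JSON responses
    return bodies
```

In practice, the returned payloads can then be parsed with the same steps you'd use on direct API responses.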

Scraping Twitter via API

If you’re convinced that the best way to scrape Twitter data is through their official API instead of an unofficial Twitter scraper, we’ll give a broad overview here covering how to create your Twitter developer account and how many Tweets you can scrape with each pricing tier.

Twitter Developer Account

Before scraping a single Tweet, you'll want to create a Twitter developer account so you can access your API key and, of course, pay Twitter for that sweet, reliable Twitter API access. Creating an account is easy: just use your existing Twitter account and link it to your developer account to get started.

Getting Your Twitter API Key

The first step to using the Twitter API is to obtain your Twitter API key from the Twitter Developer Portal. We’ve written an article detailing how to get your Twitter API Key in 5 Minutes with a full video tutorial!


Twitter API Pricing Tiers

There are currently two pricing plans available for the Twitter API for scraping data: Basic ($100 USD per month) and Pro ($5,000 USD per month). Basic will allow you to scrape up to 10,000 Tweets per month from the past 7 days, whereas Pro will let you scrape 1,000,000 Tweets from the entire historical archive.

Which Twitter API Pricing Plan is Right For You?

Both plans also advertise being able to scrape follower lists from any public account with Twitter API rate limits of 1 request (or 1,000 followers) per minute, per the Official Followers Endpoint Documentation.
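To put that rate limit in perspective, here's a quick back-of-the-envelope calculation, assuming the documented 1 request (1,000 followers) per minute holds throughout:

```python
# Follower scraping throughput under the documented rate limit
followers_needed = 1_000_000
followers_per_request = 1_000   # max followers returned per request
requests_per_minute = 1         # documented per-minute rate limit

total_requests = followers_needed // followers_per_request
total_minutes = total_requests / requests_per_minute
print(f"{total_requests} requests, about {total_minutes / 60:.1f} hours")
# 1000 requests, about 16.7 hours
```

So scraping a million-follower account is mostly waiting on the rate limit, which is worth factoring into project timelines.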

So if you absolutely must scrape historical data beyond 7 days ago, you will need to budget at least $5,000 USD for your project. However, we find that most people are fine using the Basic plan and limiting their analysis to the past 7 days of latest Tweets, which can provide more than enough Tweets for popular hashtags.

Getting Started with Twitter’s API

Twitter has a great step by step guide to getting started you may want to follow. We suggest starting there and working your way to the cURL command where you scrape Tweets from search results.

curl --request GET 'https://api.twitter.com/2/tweets/search/recent?query=from:twitterdev' --header 'Authorization: Bearer $BEARER_TOKEN'

Simply replace $BEARER_TOKEN with your own Twitter token and you'll get back data that looks like this, showing only each Tweet's ID and text by default.
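If you'd rather build that request in code than on the command line, here's a minimal Python sketch using only the standard library. The token value is a placeholder and no error handling is included:

```python
import urllib.parse
import urllib.request

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder - substitute your real token

query = urllib.parse.urlencode({"query": "from:twitterdev"})
request = urllib.request.Request(
    "https://api.twitter.com/2/tweets/search/recent?" + query,
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
)
# urllib.request.urlopen(request) would then return the JSON response
```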

{
  "data": [
    {
      "id": "1373001119480344583",
      "text": "Looking to get started with the Twitter API but new to APIs in general? @jessicagarson will walk you through everything you need to know in APIs 101 session. She'll use examples using our v2 endpoints, Tuesday, March 23rd at 1 pm EST. Join us on Twitch https://t.co/GrtBOXyHmB"
    },
    ...
  ],
  "meta": {
    "newest_id": "1373001119480344583",
    "oldest_id": "1364275610764201984",
    "result_count": 6
  }
}

Getting More Twitter Fields

The above Twitter scraping example is a little thin, returning only the IDs and text of the Tweets matching the search query. These IDs can be useful for navigating to each Tweet's URL to obtain more information such as the like count, reply count, and the author's username, website and follower count, but there's a much easier way to scrape this Twitter data from the API (instead of web scraping each Tweet's URL).

All we need to do is tell the API which additional fields we’d like it to return back. These are known as fields and expansions query parameters in the Twitter API Search Endpoint.

In this example, we want to first change the query to a hashtag, e.g. #beer and set expansions to author_id (telling the API to return more data back for the author_id field of each Tweet). We also want to include the description (or public bio) and public_metrics (for follower count) of each user, so we will supply them in the user.fields parameter.

Our query will now look like this, with the addition of &expansions=author_id&user.fields=description%2Cpublic_metrics.

curl --request GET 'https://api.twitter.com/2/tweets/search/recent?query=%23beer&expansions=author_id&user.fields=description%2Cpublic_metrics' --header 'Authorization: Bearer $BEARER_TOKEN'
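The %23 and %2C here are just percent-encoded # and , characters. If you build the query string programmatically, Python's urllib.parse.urlencode produces the same encoding, as this sketch shows:

```python
from urllib.parse import urlencode

params = urlencode({
    "query": "#beer",                             # '#' is encoded as %23
    "expansions": "author_id",
    "user.fields": "description,public_metrics",  # ',' is encoded as %2C
})
print(params)
# query=%23beer&expansions=author_id&user.fields=description%2Cpublic_metrics
```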

And the response will now look like this:

{
  "data": [
    {
      "author_id": "1633426388",
      "id": "1564987610530988033",
      "text": "RT @bmurphypointman: #travel #bitcoin #reddit #blog #twitter #facebook #instagram #blogger #socialmedia #tiktok #vlog #deal #gift #deals #g\u2026"
    },
    ...
  ],
  "includes": {
    "users": [
      {
        "name": "Chr\u20acri",
        "public_metrics": {
          "followers_count": 3395,
          "following_count": 420,
          "tweet_count": 207054,
          "listed_count": 514
        },
        "username": "mOQIl",
        "id": "1633426388",
        "description": "Just a girl who loves travel \u2764\ufe0f  ice cream fanatic forever \u2764\ufe0f \u2764\ufe0f \u2764\ufe0f"
      },
      ...
    ]
  },
  "meta": {
    "newest_id": "1564987610530988033",
    "oldest_id": "1564985619885039616",
    "result_count": 10,
    "next_token": "b26v89c19zqg8o3fpz8ll44gzg9q2o07qus7r86ljwx31"
  }
}

While our data list still looks the same, you'll notice a new list returned under includes.users with the user details of all Twitter users who posted with #beer recently, including their user IDs, bios and follower counts!
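Since includes.users is a separate list keyed by user id, combining it with the Tweets means joining on author_id. Here's a minimal sketch of that join, assuming a parsed response dict shaped like the one above (the function name and row layout are just illustrative):

```python
def join_tweets_with_users(payload: dict) -> list:
    """Attach author details from includes.users to each Tweet via author_id."""
    users_by_id = {u["id"]: u for u in payload.get("includes", {}).get("users", [])}
    rows = []
    for tweet in payload["data"]:
        author = users_by_id.get(tweet.get("author_id"), {})
        rows.append({
            "tweet_id": tweet["id"],
            "text": tweet["text"],
            "username": author.get("username"),
            "followers": author.get("public_metrics", {}).get("followers_count"),
        })
    return rows
```

Each row now carries the Tweet alongside its author's username and follower count, ready for export.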

We can also apply this method to the Tweets themselves, e.g. if we want to see when they were made and their engagement metrics, we would simply add &tweet.fields=created_at,public_metrics to our request. You can also use this as a Twitter media scraper: request attached media like images & videos (via the attachments.media_keys expansion and the media.fields parameter) and the API will return links to these assets you can download.

Scraping Tweets With Stevesie Data

If writing cURL commands and working with JSON-structured Twitter content is not for you, then our service may be an easier alternative. Simply enter your input parameters and we'll create the cURL command for you, issue it to Twitter's API, and extract the data into downloadable CSV files you can use right away.

Download CSV Data from Twitter's API

To get started, see our Search Results Twitter Scraper where you can enter any valid Twitter search query and we’ll fetch and return the results back as downloadable CSV files. You can try it free right now (you will need to separately pay for Basic Twitter API Access) and download up to 10 rows daily for free with our Twitter scraper.

Basic Plan

With our basic plan, you'll be able to scrape individual API endpoints one at a time and get back however many results those endpoints return. E.g. you can follow our example above with our Twitter Search API Scraper and get back up to 100 results at a time, downloaded as CSV files.

You can also use other endpoints like the Twitter Followers Scraper for exporting Twitter follower lists, but will be limited to downloading 1,000 Twitter accounts at a time per CSV file. You can also download a Twitter following list using the Twitter Following Scraper.

Plus Plan

Our plus plan will perform pagination for you (combining multiple pages of results) and allow you to combine multiple queries together, aggregating all results into a single CSV file for any Twitter API endpoint.
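Pagination here just means re-issuing the same request with meta.next_token until no token is returned. A sketch of that loop, with a hypothetical fetch_page callable standing in for the real API request:

```python
def collect_all_pages(fetch_page) -> list:
    """Follow meta.next_token across pages, accumulating every Tweet."""
    tweets, token = [], None
    while True:
        payload = fetch_page(token)
        tweets.extend(payload.get("data", []))
        token = payload.get("meta", {}).get("next_token")
        if not token:
            return tweets

# Canned pages standing in for real API responses:
pages = {
    None: {"data": [{"id": "1"}, {"id": "2"}], "meta": {"next_token": "abc"}},
    "abc": {"data": [{"id": "3"}], "meta": {"result_count": 1}},
}
all_tweets = collect_all_pages(lambda token: pages[token])
print(len(all_tweets))  # 3
```

In real use, fetch_page would issue the search request with &next_token=... appended when a token is present.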

This will allow you to scrape millions of Tweets & Twitter profiles without worrying about infrastructure or coding as our service is 100% cloud based and can act as your Twitter profile scraper. Hence, we can run jobs for you that take days or weeks (e.g. scraping 100M+ followers) effortlessly on our system while you focus on how you’re going to use this data effectively.

Need More Twitter API Functionality?

Our platform is 100% customizable! If you need to add or change some parameters for any endpoint, simply clone the endpoint and make your changes (which will only be visible to you). You can also tweak your own workflows for bulk data collection and add or remove extractors to capture different types of data returned automatically. Simply reach out to support if you need any help with this!

Posted by steve on Sept. 28, 2023, 6:30 a.m.

⚡️ Endpoints

👥  Contributors: steve
Tweets & Archive Search (V2)
/2/tweets/search/{{recent_or_all}}
User Details by Username (V2)
/2/users/by/username/{{username}}
User Followers (V2)
/2/users/{{user_id}}/followers
Retweets (V1)
/1.1/statuses/retweets/{{tweet_id}}.json
User Timeline & Mentions (V2)
/2/users/{{user_id}}/{{tweets_or_mentions}}
Tweet Counts Timeline (V2)
/2/tweets/counts/{{recent_or_all}}
User Following (V2)
/2/users/{{user_id}}/following
List Members (V1)
/1.1/lists/members.json
Place Search (V1)
/1.1/geo/search.json
Spaces Details (V2)
/2/spaces/{{space_id}}
Spaces Search (V2)
/2/spaces/search
Trending Places Search (V1)
/1.1/trends/closest.json
Trending Topics (V1)
/1.1/trends/place.json
Tweet Details (V2)
/2/tweets/{{tweet_id}}
Tweet Likers (V2)
/2/tweets/{{tweet_id}}/liking_users
Tweet Retweeters (V2)
/2/tweets/{{tweet_id}}/retweeted_by
Tweet Search Full Archive (V1)
/1.1/tweets/search/fullarchive/{{environment_label}}.json
Tweet Search (V1)
/1.1/search/tweets.json
User Details by ID (V2)
/2/users/{{user_id}}
User Details (V1)
/1.1/users/lookup.json
User Followers (V1)
/1.1/followers/list.json
User Following (V1)
/1.1/friends/list.json
User Liked Tweets (V2)
/2/users/{{user_id}}/liked_tweets
User List Memberships (V1)
/1.1/lists/memberships.json
User Lists (V1)
/1.1/lists/list.json
User Owned Lists (V1)
/1.1/lists/ownerships.json
User Tweets (V1)
/1.1/statuses/user_timeline.json
