Requests is the most popular Python library for executing HTTP requests. On this page, you can convert your cURL command into equivalent Python code; the request will be performed through ScrapeNinja Cloud servers.

Get retries, proxy rotation, and a real Chrome TLS fingerprint for your cURL request in a single click (this requires a ScrapeNinja API key; you can get one on RapidAPI, and a free plan is available).


Why Convert cURL to a Python Web Scraper?

During web scraping tasks, it's often convenient to open the network tab of the browser's dev console and choose "Copy as cURL" for a specific HTTP request. This provides a ready-to-use cURL command that can be executed from the terminal. But what if you want to execute the same request from Python code? This online cURL-to-web-scraper converter is here to assist you. Note that the converter uses the cloud infrastructure of the ScrapeNinja web scraping API to send the HTTP request to the target website, which gives you the powerful features of the ScrapeNinja API: proxy rotation, retries, and a genuine Chrome TLS fingerprint.
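As a rough illustration, the generated Python code routes the request through the ScrapeNinja API rather than calling the target site directly. The sketch below is an assumption-laden outline, not the converter's exact output: the RapidAPI host, the `/scrape` path, and the payload field names are guesses, and the converter's generated code is the authoritative reference.

```python
# Hypothetical sketch of a cURL command replayed through ScrapeNinja.
# Endpoint URL, host header, and payload fields are assumptions; use the
# code produced by the converter for the real parameter names.
API_KEY = "YOUR_RAPIDAPI_KEY"  # placeholder, not a real key

payload = {
    "url": "https://example.com",   # target URL from your cURL command
    "headers": ["header: val"],     # headers copied from `curl -H "header: val"`
}

def scrape(payload, api_key=API_KEY):
    # Deferred import so the payload above can be inspected even when
    # the requests package is not installed.
    import requests

    return requests.post(
        "https://scrapeninja.p.rapidapi.com/scrape",  # assumed endpoint
        json=payload,
        headers={
            "X-RapidAPI-Key": api_key,
            "X-RapidAPI-Host": "scrapeninja.p.rapidapi.com",
        },
        timeout=30,
    )

# resp = scrape(payload)  # requires a valid API key, so not executed here
```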

Setting up the Python environment for the web scraper

  1. Ensure you have Python 3.6+ installed on your system. If not, download and install it from the official Python website.
  2. Copy and paste the provided Python code into a new file in your desired directory. You can name this file as you like, for example, scraper.py.
  3. It's recommended to use a virtual environment to manage dependencies. To set one up, navigate to your project folder and run:
    python -m venv venv
  4. Activate the virtual environment:
    • On Windows: venv\Scripts\activate
    • On macOS and Linux: source venv/bin/activate
  5. If the scraper uses any external libraries, you can install them using pip. For instance, if it uses requests:
    pip install requests
  6. Finally, execute the scraper script from your terminal (replace scraper.py with your file name):
    python scraper.py

Note: Always ensure you're working within the virtual environment when running the scraper to avoid any dependency conflicts.
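Step 1 above requires Python 3.6 or newer. A quick sanity check you can run to confirm which interpreter the active virtual environment is using:

```python
import sys

# Show the interpreter version; inside an activated venv, sys.prefix
# points at the venv directory rather than the system-wide install.
print(".".join(map(str, sys.version_info[:3])))
print(sys.prefix)

assert sys.version_info >= (3, 6), "Python 3.6+ is required"
```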

General Advice Regarding Request Headers

If your request includes a Cookie header, examine its contents closely. It may contain data that can be used to track you, such as your original IP address, your user session ID (if you're authenticated on the website), or other sensitive information. Consider removing the Cookie header entirely to see how the website responds.

The User-Agent header is generally not required: ScrapeNinja appends a sensible user-agent string under the hood, so you might consider removing this one as well. You can, of course, still supply the header if you want to override ScrapeNinja's default user-agent.
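Both header adjustments above amount to filtering the dict of headers you copied from the browser. A minimal sketch, with illustrative sample values:

```python
# Clean up headers copied from the browser's "Copy as cURL" before
# replaying the request. All values below are made-up samples.
copied = {
    "Accept": "text/html",
    "Cookie": "sessionid=abc123; _ga=GA1.2.555",
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
}

# Drop the Cookie header (whatever its capitalization) so session and
# tracking data are not leaked to the target site.
headers = {k: v for k, v in copied.items() if k.lower() != "cookie"}

# Drop User-Agent too, letting the scraping layer supply its default...
headers.pop("User-Agent", None)

# ...or uncomment to override that default explicitly:
# headers["User-Agent"] = "my-custom-agent/1.0"

print(headers)  # {'Accept': 'text/html'}
```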

Quick video overview of the cURL converter: