Use the .info.catchedAjax property of the ScrapeNinja response to retrieve the dumped request. This is a global handler; you can also specify an XHR request catcher per step (see the "Interact with page" section below). Per-step catches end up in the .info.log events array of the returned ScrapeNinja response (find a particular event by filtering: let xhr = body.info.log.find(e => e.type == 'xhr' && e.stepIdx == {{idx+1}})).
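The lookup above can be sketched against a mock response body. The shape of the sample object below (an .info.log array of events with type and stepIdx fields) follows the description in this section; the event contents are illustrative, not real ScrapeNinja output.

```javascript
// Mock of a ScrapeNinja response body with a captured XHR event.
// Field names follow the docs above; values are made up for illustration.
const body = {
  info: {
    log: [
      { type: 'navigate', stepIdx: 1 },
      { type: 'xhr', stepIdx: 2, response: '{"items":[1,2,3]}' }
    ]
  }
};

const stepIdx = 2; // the step whose captured XHR you want to inspect
const xhr = body.info.log.find(e => e.type === 'xhr' && e.stepIdx === stepIdx);
console.log(xhr ? xhr.response : 'no xhr captured for this step');
```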
Examples of valid payloads:
- JSON payload:
{"fefe":"few"}
- www-encoded payload:
key1=val1&key2=val2
Don't forget to add Content-Type: application/x-www-form-urlencoded to the request headers in case of a www-encoded POST, and Content-Type: application/json in case of a JSON POST.
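The payload/header pairing can be sketched in plain JavaScript. The header names are standard HTTP, and the payload values mirror the examples above; the request objects here are just plain data, not sent anywhere.

```javascript
// JSON POST: serialize the body and declare application/json.
const jsonRequest = {
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ fefe: 'few' })
};

// www-encoded POST: URLSearchParams produces key1=val1&key2=val2 encoding.
const formRequest = {
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ key1: 'val1', key2: 'val2' }).toString()
};

console.log(jsonRequest.body); // {"fefe":"few"}
console.log(formRequest.body); // key1=val1&key2=val2
```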
function (input, cheerio) {
let $ = cheerio.load(input);
return { title: $('#title').text().trim() }
}
Grab extracted results from the .extractor JSON property of the ScrapeNinja response. Leave the extractor empty to parse everything on your side.
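Reading the extractor output can be sketched as below. The sample response object is illustrative: the docs above only say the results live under .extractor, so the nesting inside it (a result object with a title field, matching the extractor function example) is an assumption to check against your real response.

```javascript
// Illustrative ScrapeNinja response; only the .extractor property name is
// taken from the docs above, the inner shape is a hypothetical example.
const scrapeResponse = {
  info: { statusCode: 200 },
  extractor: { result: { title: 'Example Page' } }
};

const extracted = scrapeResponse.extractor;
console.log(extracted.result.title); // Example Page
```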
Raw ScrapeNinja Response:
Latency: {{ responseLatency }}ms HTTP Status: {{ responseBody.info.statusCode }}{{ responseBodyFormatted }}
The unescaped target website response is available in the responseJson.body property.
Use {{cmdKey}}+F for a quick search in the response body. Copy & paste this body into the Cheerio Sandbox to develop your extractor.
Code Generator
Launching the scraper in your own Node.js environment:
The generated code is wrapped in an async IIFE: (async () => { [CODE GOES HERE] })(). Check your Node.js version: node -v
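The wrapper pattern can be sketched as below, so that await works at the top level of a plain .js file. fetchScrapeResult here is a hypothetical stand-in for the generated node-fetch call to the ScrapeNinja API, returning a canned response instead of making a network request.

```javascript
// Hypothetical stand-in for the generated node-fetch call; a real
// implementation would await fetch(...) against the ScrapeNinja API here.
async function fetchScrapeResult() {
  return { info: { statusCode: 200 } };
}

// The async IIFE wrapper: lets us use await without top-level await support.
(async () => {
  const result = await fetchScrapeResult();
  console.log('HTTP status:', result.info.statusCode);
})();
```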
Step 1. Create project folder, initialize empty npm project, and install node-fetch
mkdir your-project-folder && \
  cd "$_" && \
  npm i -g create-esnext && \
  npm init esnext && \
  npm i node-fetch -y
Step 2. Copy & paste the code above
Create a new empty file, e.g. scraper.js, and paste the code into this file.
Step 3. Launch
node ./scraper.js
Launching the scraper in your own Python environment:
Step 1. Create project folder, setup venv virtual environment
mkdir your-project-folder && \
  cd "$_" && \
  python3 -m venv venv && \
  source venv/bin/activate
Step 2: Install requests library in your virtual environment
python3 -m pip install requests
Step 3. Copy & paste the code above
Create a new empty file, e.g. scraper.py, and paste the code into this file.
Step 4. Launch
python3 ./scraper.py
Running the cURL Command:
Sandbox FAQ
What is the ScrapeNinja Sandbox?
The ScrapeNinja Live Sandbox is an online tool designed to swiftly test the scraping capabilities of a specific target website using the ScrapeNinja Scraping API. It eliminates the need for coding or setting up a local environment. Our goal in creating the Sandbox was to simplify the process of exploring how to scrape a specific website, test various proxy countries for that target, and later bootstrap your project faster with the code generation feature provided by the Sandbox.
Is the ScrapeNinja Sandbox a paid service?
While the ScrapeNinja API operates on a subscription model, offering both free and paid plans, the ScrapeNinja Sandbox is currently available for free. This allows you to fully test its capabilities without any subscription. We hope you find the Sandbox a valuable tool for exploring the ScrapeNinja API.