
How to use concurrency?

Depending on the plan you chose, you have access to a specific number of concurrent requests. This means you can only send a certain number of requests at the same time.


For example, if you need to make 100 requests and your allowed concurrency is 5, you can send 5 requests at the same time. The simplest way to take advantage of this concurrency is to set up 5 workers / threads and have each of them send 20 requests, as sketched below.
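
Here is a minimal sketch of that split, assuming a plain requests-based fetch and a placeholder URL list (both are illustrative, not part of the API above). A thread pool capped at 5 workers guarantees that at most 5 requests are in flight at any moment:

import concurrent.futures
import requests

# Hypothetical list of 100 URLs; replace with your own
urls = [f"https://example.com/page/{i}" for i in range(100)]

def fetch(url):
    # Each worker handles one request at a time, so with 5 workers
    # at most 5 requests run simultaneously
    return requests.get(url).status_code

# 5 workers <=> an allowed concurrency of 5; each worker ends up
# processing roughly 100 / 5 = 20 URLs
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(fetch, urls))

print(results[:5])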


Below you'll find some resources that can help you do that.


Python


import concurrent.futures
import time
from scrapingbee import ScrapingBeeClient # Importing ScrapingBee's client

client = ScrapingBeeClient(api_key='APIKEY') # Initialize the client with your API key

MAX_RETRIES = 5 # Maximum number of attempts per URL if requests fail
MAX_THREADS = 4 # Number of worker threads, i.e. requests sent in parallel
urls = ["http://scrapingbee.com/blog", "http://reddit.com/"]

def scrape(url):
    for _ in range(MAX_RETRIES):
        # The screenshot parameter asks the API to return a screenshot of the page
        response = client.get(url, params={'screenshot': True})

        if response.ok: # The request succeeded
            # Save the screenshot in a timestamped PNG file
            with open("./" + str(time.time()) + "screenshot.png", "wb") as f:
                f.write(response.content)
            break # Then get out of the retry loop
        else: # The request failed: print the error and retry
            print(response.content)

with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_THREADS) as executor:
    executor.map(scrape, urls)


This specific code example saves the response as a screenshot. You can remove that part if you wish to save a text response, or any other type of response.
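
For instance, a text-only variant of the scrape function (a sketch; the output filename is arbitrary) would drop the screenshot parameter and write the returned HTML instead:

def scrape(url):
    for _ in range(MAX_RETRIES):
        response = client.get(url) # No screenshot parameter: the API returns the page's HTML

        if response.ok:
            # Save the HTML in a timestamped file
            with open("./" + str(time.time()) + "page.html", "wb") as f:
                f.write(response.content)
            break
        else:
            print(response.content)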

Also, this example is in Python, but you can find examples for most other popular languages here; they are available under the corresponding language.


Updated on: 17/10/2025
