The Arachnio API is designed to make extracting structured content from webpages easy. This post shows how to use the Arachnio Python client to scrape content and other metadata from a news article.
First, you'll need to subscribe to the Arachnio API. The Free Forever Plan will work just fine for this introduction. Before moving on to the next step, you'll need your Base Product URL and one of your Blobr API Keys.
Above is a screenshot of the Subscription Authentication screen, which contains both of these values. The Base Product URL is circled in red, and the Blobr API keys in green. Both are redacted for privacy. 🤫
In this introduction, we will extract structured data from a webpage, so the next step is to pick one. Since we're using Python, we've picked an article about snakes. 🐍
Now that we have our base URL, API key, and target link, we can call the link extract endpoint using the Python client for Arachnio, arachnio.
You can add it to your project in {% code-line %}requirements.txt{% end-code-line %}:
{% code-block language="shell" %}
arachnio~=0.0.0
{% end-code-block %}
And then use it like this, for example to call the link extract endpoint:
{% code-block language="python" %}
from arachnio import ArachnioClient

# ARACHNIO_BASE_URL and BLOBR_API_KEY are from Step 1
client = ArachnioClient(ARACHNIO_BASE_URL, BLOBR_API_KEY)

# The link is from Step 2
response = client.extractLink(
    "https://www.nytimes.com/2022/05/03/science/venom-medicines.html")

entity = response["entity"]
if entity["entityType"] == "webpage" and entity["webpageType"] == "article":
    print(entity["title"])
    # Deadly Venom From Spiders and Snakes May Also Cure What Ails You
{% end-code-block %}
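In real code you'll probably want to wrap that entity check in a small helper so non-article responses are handled gracefully. The helper below is a hypothetical sketch, not part of the Arachnio client; it assumes only the response shape shown above, with {% code-line %}entity{% end-code-line %}, {% code-line %}entityType{% end-code-line %}, {% code-line %}webpageType{% end-code-line %}, and {% code-line %}title{% end-code-line %} fields.

{% code-block language="python" %}
def article_title(response):
    """Return the article title from a link extract response,
    or None if the entity is not a webpage article."""
    entity = response.get("entity", {})
    if (entity.get("entityType") == "webpage"
            and entity.get("webpageType") == "article"):
        return entity.get("title")
    return None

# Example using the response shape from the snippet above
sample = {
    "entity": {
        "entityType": "webpage",
        "webpageType": "article",
        "title": "Deadly Venom From Spiders and Snakes May Also Cure What Ails You",
    }
}
print(article_title(sample))
# Deadly Venom From Spiders and Snakes May Also Cure What Ails You
{% end-code-block %}

Using {% code-line %}dict.get{% end-code-line %} with defaults means the helper returns {% code-line %}None{% end-code-line %} instead of raising a {% code-line %}KeyError{% end-code-line %} when a link turns out to be a homepage or some other non-article page.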
It's that simple! Calling the link unwind or link parse endpoints, or even the premium batch endpoints, is just as easy.
Happy scraping! ✌️