The Arachnio API is designed to make extracting structured content from webpages easy. This post shows how to use the Arachnio Java client to scrape the content and metadata of a news article.

Step 1: Subscribe 👍

First, you'll need to subscribe to the Arachnio API. The Free Forever Plan will work just fine for this introduction. Before we head to the next step, you'll need your Base Product URL and one of your Blobr API keys.

The Developer Subscription UI

Above is a screenshot of the Subscription Authentication screen, which contains both values. The Base Product URL is circled in red, and the Blobr API keys in green. Both are redacted for privacy. 🤫
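
If you'd rather not hard-code these secrets, one option is to read them from environment variables. Here's a minimal sketch that defines the {% code-line %}ARACHNIO_BASE_URL{% end-code-line %} and {% code-line %}BLOBR_API_KEY{% end-code-line %} constants we'll use in Step 3 (the variable names are our own convention, not Arachnio's):

{% code-block language="java" %}
/* A minimal sketch: pull the Step 1 values from the environment so they
 * stay out of source control. The variable names are our own convention. */
public class ArachnioConfig {
    public static final String ARACHNIO_BASE_URL = System.getenv("ARACHNIO_BASE_URL");
    public static final String BLOBR_API_KEY = System.getenv("BLOBR_API_KEY");
}
{% end-code-block %}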

Step 2: Pick a News Article 🔗

In this introduction, we're extracting structured data from a webpage, so the next step is to pick one. Since we're using Java, we've picked a news article about coffee for this example. ☕️

Café au let's scrape!

Step 3: Call Link Extract Endpoint 📢

Now that we have our base URL, our API key, and a link to extract, we can call the link extract endpoint using arachnio4j, the Java client for Arachnio.

You can add it to your Maven project like this in your {% code-line %}pom.xml{% end-code-line %}:

{% code-block language="xml" %}
<dependency>
  <groupId>io.arachn</groupId>
  <artifactId>arachnio4j</artifactId>
  <version>0.1.4.0</version>
</dependency>
{% end-code-block %}

And then use it like this to call the link extract endpoint:

{% code-block language="java" %}
/* ARACHNIO_BASE_URL and BLOBR_API_KEY are the values from Step 1 */
ArachnioClient client = new DefaultArachnioClient(ARACHNIO_BASE_URL, BLOBR_API_KEY);

/* We chose the link in Step 2 */
ExtractedLink response = client.extractLink(
    "https://www.nytimes.com/2022/08/25/science/spiders-misinformation-rumors.html");

/* And here is our response! */
if (response.getEntity() instanceof ArticleWebpageEntityMetadata article) {
    /* Prints the article's extracted title */
    System.out.println(article.getTitle());
}
{% end-code-block %}
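
One note on the example above: the pattern-matching {% code-line %}instanceof{% end-code-line %} requires Java 16 or newer. If you're on an older JDK, here's the equivalent test-and-cast sketch:

{% code-block language="java" %}
/* Equivalent to the pattern match above, for JDKs older than 16 */
if (response.getEntity() instanceof ArticleWebpageEntityMetadata) {
    ArticleWebpageEntityMetadata article =
        (ArticleWebpageEntityMetadata) response.getEntity();
    System.out.println(article.getTitle());
}
{% end-code-block %}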

Conclusion

It's that simple! Calling the link unwind or link parse endpoints, or even the premium batch endpoints, is just as easy.

Happy scraping! ✌️