In this article, we will show you how to scrape technical articles from Medium using ScrapeStorm’s “Smart mode”.
Introduction to the scraping tool
ScrapeStorm is a new generation of web scraping tool based on artificial intelligence technology. It is the first scraper to support Windows, Mac, and Linux operating systems.
Introduction of scraping objects
Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid collection of amateur and professional writers and publications, as well as exclusive blogs and publishers, and is regularly regarded as a blog host.
Official Website: https://medium.com/
Scraping fields: title, title_link, abstract, publisher, claps, labels
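For reference, a single scraped record with these fields can be pictured as a row in the exported file. The following is a hypothetical sketch (the article data is made up for illustration; ScrapeStorm produces such rows automatically):

```python
import csv
import io

# Hypothetical example of one scraped Medium article record,
# using the field names listed above. The values are illustrative only.
record = {
    "title": "Understanding Web Scraping",
    "title_link": "https://medium.com/@author/understanding-web-scraping",
    "abstract": "A short introduction to web scraping...",
    "publisher": "Towards Data Science",
    "claps": "1.2K",
    "labels": "Python; Web Scraping",
}

# Write the record as CSV, similar in shape to a ScrapeStorm export.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(record))
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())
```

The header row carries the field names, and each scraped article becomes one data row.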
Function point directory
Preview of the scraped result
Export to Excel 2007:
Let’s take a closer look at how to scrape technical articles from Medium. The specific steps are as follows:
1. Download and install ScrapeStorm, then register and log in
(1) Open the ScrapeStorm official website, download and install the latest version.
(2) Click Register/Login to register a new account and then log in to ScrapeStorm.
Tips: You can use this web scraping software directly without registering, but tasks created under an anonymous account will be lost when you switch to a registered user, so we recommend registering before use.
2. Create a task
(1) Copy the URL of Medium
Click here to learn more about how to enter the URL correctly.
(2) Create a new smart mode task
You can create a new scraping task directly on the software, or you can create a task by importing rules.
Click here to learn how to import and export scraping rules.
3. Configure the scraping rules
(1) Set the fields
Smart mode automatically recognizes the fields on the page. You can right-click a field to rename it, add or delete fields, modify data, and so on.
Click here to learn how to configure the extracted fields.
Add or remove fields as needed, and rename the fields. The results of the field settings are as follows:
(2) Manually set the page
Medium’s article list page is a scroll-loaded page. The software does not recognize unloaded data, so you need to manually set the page type to “Scroll to Load”.
Click here to learn how to manually select the page.
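“Scroll to Load” means the page appends more articles each time you scroll down, rather than using numbered pages. Conceptually, a scraper handles this with a loop like the one below. This is a simplified sketch: `fetch_more` is a hypothetical stand-in for “scroll down and read whatever new items appeared”, served here from a fixed list so the sketch runs without a browser (ScrapeStorm does all of this for you once the page type is set):

```python
# Batches standing in for the items revealed by each scroll of the page.
_batches = [["article 1", "article 2"], ["article 3"], []]

def fetch_more():
    """Return the next batch of items, or an empty list when the feed ends.

    In a real scraper this would scroll the page and collect new entries.
    """
    return _batches.pop(0) if _batches else []

items = []
while True:
    batch = fetch_more()
    if not batch:  # no new items appeared: the end of the feed was reached
        break
    items.extend(batch)

print(items)  # all articles collected across the simulated "scrolls"
```

The key point is the stopping condition: scraping continues until a scroll produces no new items.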
(3) Use the “Scrape into” feature to scrape the detail page data
The list page contains only part of the data, so you can use the “Scrape into” function to enter the detail page and scrape the remaining data.
Click here to learn how to extract the list page plus the detail page.
On the detail page, we add the required fields: claps and labels.
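The “Scrape into” pattern can be pictured as follows: collect partial rows from the list page, then visit each article’s link to fill in the detail-only fields (claps and labels here). This is a minimal sketch with stubbed page data (the URLs, values, and `scrape_detail` helper are hypothetical; ScrapeStorm performs the navigation itself):

```python
# Stub data standing in for what the list page and detail pages provide.
list_rows = [
    {"title": "Post A", "title_link": "https://medium.com/a"},
    {"title": "Post B", "title_link": "https://medium.com/b"},
]
detail_pages = {  # keyed by URL; claps/labels appear only on detail pages
    "https://medium.com/a": {"claps": 120, "labels": ["Python"]},
    "https://medium.com/b": {"claps": 45, "labels": ["Scraping", "Data"]},
}

def scrape_detail(url):
    """Stand-in for opening the detail page and extracting its fields."""
    return detail_pages[url]

# "Scrape into": enrich each list row with its detail-page fields.
records = [{**row, **scrape_detail(row["title_link"])} for row in list_rows]
print(records)
```

Each final record merges the list-page fields with the detail-page fields, which is exactly what the finished export contains.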
4. Set up and start the scraping task
(1) Running and Anti-block settings
Click “Setting” and set the waiting time based on how quickly the web page opens. You can check “Block Images” and “Block Ads”. The anti-block settings follow the system defaults. Then click “Save”.
P.S. “Block Images” reduces load time and speeds up the scraping process, and it does not affect the scraping or downloading of images.
(2) Start scraping data
Premium Plan and above users can use “Scheduled job” and “Sync to Database”. If you want to download images, check “Download images while running”. Then click “Start”.
Click here to learn about scheduled job.
Click here to learn about sync to database.
Click here to learn about download images.
(3) Wait a moment, and you will see the data being scraped.
5. Export and view data
(1) Click “Export” to download your data.
(2) Choose the format to export according to your needs.
ScrapeStorm provides a variety of export options, such as local Excel, CSV, HTML, and TXT files, or a database. Professional Plan and above users can also publish directly to WordPress.
Click here to learn more about how to view the extraction results and clear the extracted data.
Click here to learn more about how to export the result of extraction.
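Once exported, the data can be processed in your own tools. For example, a CSV export can be read with Python’s standard library. The file contents below are illustrative (normally you would open the downloaded file, e.g. `open("medium_articles.csv")` — a hypothetical file name):

```python
import csv
import io

# Illustrative contents of a CSV export with a subset of the fields.
exported = io.StringIO(
    "title,publisher,claps\n"
    "Post A,Towards Data Science,120\n"
    "Post B,Better Programming,45\n"
)

rows = list(csv.DictReader(exported))
total_claps = sum(int(r["claps"]) for r in rows)
print(len(rows), total_claps)
```

`csv.DictReader` maps each row to a dictionary keyed by the header, so the exported fields can be summed, filtered, or loaded into a database.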
Here is another tutorial for social network websites: