Do I really need to customize a scraper for every input page URL?

This page summarizes the projects mentioned and recommended in the original post on /r/webscraping

  • autoscraper

    A Smart, Automatic, Fast and Lightweight Web Scraper for Python

  • Maybe you could try out this tool (never used it myself): https://github.com/alirezamika/autoscraper. Alternatively, you could train a model that does the parsing for you, though accuracy won't be 100% with that approach: https://github.com/scrapy/scrapely. To answer your question: there is no way to write one program that scrapes different pages correctly, because their structures are not the same, and some of them will probably have protection that blocks your scraper, so that needs extra attention or additional code. (See the sketch after this list.)

  • readability

    A standalone version of the readability lib

  • Another such example is https://github.com/mozilla/readability (a Python sketch of the same idea follows below).
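
The commenter above hasn't used autoscraper themselves, so take the following as a rough sketch of its train-by-example workflow rather than a recommendation: you hand it one page plus a few values you want extracted from that page, and it tries to learn rules that carry over to similarly structured pages. The URLs and sample values below are placeholders, not from the thread.

```python
from autoscraper import AutoScraper

# Train on a single example page: give the URL plus a few sample values that
# appear on it, and autoscraper infers reusable extraction rules from them.
train_url = "https://example.com/products/1"        # hypothetical page
wanted_list = ["Example product title", "19.99"]    # sample values on that page

scraper = AutoScraper()
rules = scraper.build(train_url, wanted_list)
print(rules)

# Apply the learned rules to other pages with a similar structure.
print(scraper.get_result_similar("https://example.com/products/2"))

# Save the learned rules so you don't have to retrain on every run.
scraper.save("product-scraper.json")
```

The appeal is that the learned rules can be reused across pages sharing a layout, which is about as close as you get to "one scraper for many URLs" without writing site-specific code.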
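mozilla/readability itself is a JavaScript library; to keep the examples in one language, the sketch below uses the readability-lxml Python port (an assumption on my part, it isn't mentioned in the thread). The idea is the same: pull the main article content out of an arbitrary page without writing per-site selectors.

```python
import requests
from readability import Document   # pip install readability-lxml

# Fetch any article-style page (placeholder URL) and let readability guess
# which part of the HTML is the main content, with no site-specific selectors.
html = requests.get("https://example.com/some-article").text
doc = Document(html)

print(doc.title())    # best-guess article title
print(doc.summary())  # cleaned-up HTML of the main article body
```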
