What does the process of web scraping actually look like?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/webscraping

  • parsel-cli

    CLI for evaluating CSS and XPath selectors

    For that I use my own little tool called parsel-cli, which lets me quickly test parsing expressions against live web pages (see the selector sketch after this list).

  • requests-cache

    Transparent, persistent cache for Python requests

    The hardest part is actually running a web scraper at scale, and that's where many people fail. We have all of the working pieces - we can find the products and parse the raw data - so it's time to scale up. The best tip here is to start with caching: using a caching library like requests-cache, or its equivalent in your language, will speed up the process significantly (see the sketch after this list).
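
parsel-cli builds on the parsel library, so the selector-testing workflow it offers can be sketched in plain Python. This is a minimal illustration, not parsel-cli itself; the HTML snippet is invented for the example:

```python
from parsel import Selector

# Invented HTML standing in for a live page you would point parsel-cli at.
html = """
<html><body>
  <h1>Example Store</h1>
  <div class="product"><span class="price">$9.99</span></div>
</body></html>
"""

sel = Selector(text=html)

# CSS selector: extract the heading text.
print(sel.css("h1::text").get())  # Example Store

# XPath selector: extract the product price.
print(sel.xpath("//span[@class='price']/text()").get())  # $9.99
```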

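A minimal sketch of that caching setup with requests-cache (the cache name and URL are placeholders):

```python
import requests_cache

# CachedSession is a drop-in replacement for requests.Session that
# persists responses, here to a local SQLite file named scraper_cache.sqlite.
# Cached entries expire after an hour (3600 seconds).
session = requests_cache.CachedSession("scraper_cache", expire_after=3600)

# First call hits the network; the response is written to the cache.
resp = session.get("https://example.com/products")

# A repeat call within the hour is served from the cache, not the site,
# which makes re-running a scraper during development nearly instant.
resp = session.get("https://example.com/products")
print(resp.from_cache)  # True when the response came from the cache
```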

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number means a more popular project.

