Python Crawler

Open-source Python projects categorized as Crawler

Top 23 Python Crawler Projects

  • Scrapy

    Scrapy, a fast high-level web crawling & scraping framework for Python.

  • Project mention: Scrapy: A Fast and Powerful Scraping and Web Crawling Framework | news.ycombinator.com | 2024-02-16
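
    A minimal spider, adapted from Scrapy's own tutorial (the target site and selectors are illustrative), can be run with scrapy runspider quotes_spider.py -o quotes.json:

        import scrapy

        class QuotesSpider(scrapy.Spider):
            name = "quotes"
            start_urls = ["https://quotes.toscrape.com/"]

            def parse(self, response):
                # extract one item per quote block on the page
                for quote in response.css("div.quote"):
                    yield {
                        "text": quote.css("span.text::text").get(),
                        "author": quote.css("small.author::text").get(),
                    }
                # follow the pagination link, if present
                next_page = response.css("li.next a::attr(href)").get()
                if next_page is not None:
                    yield response.follow(next_page, callback=self.parse)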
  • pyspider

    A Powerful Spider (Web Crawler) System in Python.
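
    Handlers are Python classes with crawl callbacks; a lightly commented sketch of the README's sample handler (URLs and scheduling intervals are illustrative):

        from pyspider.libs.base_handler import *

        class Handler(BaseHandler):
            @every(minutes=24 * 60)         # re-run on_start once a day
            def on_start(self):
                self.crawl("http://scrapy.org/", callback=self.index_page)

            @config(age=10 * 24 * 60 * 60)  # treat pages as fresh for 10 days
            def index_page(self, response):
                for each in response.doc('a[href^="http"]').items():
                    self.crawl(each.attr.href, callback=self.detail_page)

            def detail_page(self, response):
                return {"url": response.url, "title": response.doc("title").text()}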

  • newspaper

    newspaper3k is a news, full-text, and article metadata extraction library for Python 3.
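
    Typical usage per the project's docs (the URL is a placeholder):

        from newspaper import Article  # pip install newspaper3k

        article = Article("https://example.com/some-news-story")
        article.download()   # fetch the HTML
        article.parse()      # extract title, authors, date, and body text
        print(article.title, article.authors, article.publish_date)
        print(article.text[:200])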

  • Photon

    Incredibly fast crawler designed for OSINT. (by s0md3v)

  • Douyin_TikTok_Download_API

    🚀 Douyin_TikTok_Download_API is an out-of-the-box, high-performance, asynchronous data-scraping tool for Douyin, Kuaishou, TikTok, and Bilibili, supporting API calls, online batch parsing, and downloads.

  • Project mention: TikTok video scraper | /r/webscraping | 2023-05-23

    At the moment I am working on a web scraper for TikTok and am able to retrieve data about the first 16 videos from a channel. I achieved this by making requests to an unofficial API, https://github.com/Evil0ctal/Douyin_TikTok_Download_API. My problem is that the requirements for this project do not allow me to use any package that extracts data from TikTok. How should I go about this task? I have already tried getting the data from the HTML, but that is not sufficient, since most of it is not present when I use requests.get(URL). Could you recommend some repositories that could help, or some other way of extracting the data? Thank you!

  • autoscraper

    A Smart, Automatic, Fast and Lightweight Web Scraper for Python
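
    Instead of writing selectors, you hand autoscraper an example of the data you want and it learns matching rules; this sketch follows the project's README:

        from autoscraper import AutoScraper

        url = "https://stackoverflow.com/questions/2081586/web-scraping-with-python"
        # one or more sample values that appear on the page
        wanted_list = ["What are metaclasses in Python?"]

        scraper = AutoScraper()
        result = scraper.build(url, wanted_list)
        print(result)

        # apply the learned rules to a structurally similar page
        print(scraper.get_result_similar(
            "https://stackoverflow.com/questions/606191/convert-bytes-to-a-string"))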

  • scrapy-redis

    Redis-based components for Scrapy.

  • Project mention: How to make scrapy run multiple times on the same URLs? | /r/scrapy | 2023-06-26
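
    The mention above comes down to scrapy-redis's Redis-backed duplicate filter: while it persists, previously seen URLs are skipped. A minimal settings sketch based on the project's README:

        # settings.py
        # store the scheduling queue in Redis so multiple spider processes share it
        SCHEDULER = "scrapy_redis.scheduler.Scheduler"
        # share one duplicate filter across all spiders via Redis
        DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
        # keep the queue and dupefilter between runs; set False (or flush the
        # Redis keys) to let a fresh run revisit the same URLs
        SCHEDULER_PERSIST = True
        REDIS_URL = "redis://localhost:6379"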
  • myGPTReader

    A community-driven way to read and chat with AI bots - powered by ChatGPT.

  • ProxyBroker

    Proxy [Finder | Checker | Server]. HTTP(S) & SOCKS.
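
    A sketch adapted from the project's documented example, updated for asyncio.run (the proxy types and limit are illustrative):

        import asyncio
        from proxybroker import Broker

        async def show(proxies):
            while True:
                proxy = await proxies.get()
                if proxy is None:  # Broker signals completion with None
                    break
                print("Found proxy:", proxy)

        async def main():
            proxies = asyncio.Queue()
            broker = Broker(proxies)
            await asyncio.gather(
                broker.find(types=["HTTP", "HTTPS"], limit=10),
                show(proxies),
            )

        asyncio.run(main())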

  • toapi

    Every web site provides APIs.

  • weibo-crawler

    A Sina Weibo crawler: scrape Sina Weibo data with Python and download Weibo images and videos.

  • trafilatura

    Python & command-line tool to gather text on the Web: web crawling/scraping, extraction of text, metadata, comments

  • Project mention: Trafilatura: Python tool to gather text on the Web | news.ycombinator.com | 2023-08-14

    The feature list answers that question pretty well: https://github.com/adbar/trafilatura#features

    Basically: you could implement all of this on top of BeautifulSoup - polite crawling policies, sitemap and feed parsing, URL de-duplication, parallel processing, download queues, heuristics for extracting just the main article content, metadata extraction, language detection... but it would require writing an enormous amount of extra code.
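
    Basic usage, per the project's docs (the URL is a placeholder):

        import trafilatura

        # fetch_url returns the raw page, or None on failure
        downloaded = trafilatura.fetch_url("https://example.com/article")
        if downloaded is not None:
            # extract strips boilerplate and returns the main text
            print(trafilatura.extract(downloaded))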

  • TorBot

    Dark Web OSINT Tool

  • Grab

    Web Scraping Framework
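
    A minimal sketch, assuming Grab's classic request API (the URL is a placeholder):

        from grab import Grab

        g = Grab()
        resp = g.go("https://example.com")     # fetch the page
        print(resp.code)                       # HTTP status code
        print(g.doc.select("//title").text())  # query the DOM with XPath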

  • news-please

    news-please - an integrated web crawler and information extractor for news that just works
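
    A single article can be fetched and extracted in one call (the URL is a placeholder):

        from newsplease import NewsPlease

        article = NewsPlease.from_url("https://example.com/some-news-article")
        print(article.title)
        print(article.date_publish)
        print(article.maintext[:200])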

  • PSpider

    A simple, easy-to-use Python crawler framework. QQ discussion group: 597510560.

  • OpenWPM

    A web privacy measurement framework

  • grab-site

    The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns

  • Project mention: Ask HN: How can I back up an old vBulletin forum without admin access? | news.ycombinator.com | 2024-01-29

    The format you want is WARC. Even the Library of Congress uses it. There are many, many WARC scrapers. I'd look at what the Internet Archive recommends. A quick search turned up this from Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are), but I found that in less than 15 seconds of searching, so do your own due diligence.

  • mlscraper

    🤖 Scrape data from HTML websites automatically by just providing examples

  • XSRFProbe

    The Prime Cross Site Request Forgery (CSRF) Audit and Exploitation Toolkit.

  • botasaurus

    The All in One Framework to build Awesome Scrapers.

  • Project mention: This Week In Python | dev.to | 2024-04-05

    botasaurus – The All in One Framework to build Awesome Scrapers

  • scrapyrt

    HTTP API for Scrapy spiders
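
    scrapyrt runs inside a Scrapy project directory and, by default, listens on port 9080 with a /crawl.json endpoint; a sketch of a client call (the spider name and target URL are illustrative):

        import requests

        resp = requests.get(
            "http://localhost:9080/crawl.json",
            params={"spider_name": "quotes", "url": "https://quotes.toscrape.com/"},
        )
        data = resp.json()
        print(data.get("items"))  # scraped items from the spider run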

  • bookcorpus

    Crawl BookCorpus

  • Project mention: Show HN: New AI Dataset Based on LibGen and Sci-Hub | news.ycombinator.com | 2023-09-08
NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source Crawler projects in Python? This list will help you:

 #  Project                      Stars
 1  Scrapy                      50,824
 2  pyspider                    16,319
 3  newspaper                   13,720
 4  Photon                      10,501
 5  Douyin_TikTok_Download_API   6,780
 6  autoscraper                  5,937
 7  scrapy-redis                 5,451
 8  myGPTReader                  4,375
 9  ProxyBroker                  3,714
10  toapi                        3,462
11  weibo-crawler                3,045
12  trafilatura                  2,740
13  TorBot                       2,599
14  Grab                         2,354
15  news-please                  1,925
16  PSpider                      1,811
17  OpenWPM                      1,311
18  grab-site                    1,260
19  mlscraper                    1,225
20  XSRFProbe                      915
21  botasaurus                     899
22  scrapyrt                       816
23  bookcorpus                     776
