crawlee VS cheerio

Compare crawlee vs cheerio and see what their differences are.

crawlee

Crawlee is a web scraping and browser automation library for Node.js, written in JavaScript and TypeScript, for building reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP, in both headful and headless mode, with proxy rotation. (by apify)
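
To make the comparison concrete, here is a minimal sketch of what a Crawlee crawler looks like, assuming Crawlee v3+ and its Cheerio-backed HTTP crawler; the start URL is simply the project's own site and is not taken from the comparison data.

```typescript
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // Each response is downloaded over plain HTTP and parsed with Cheerio; `$` is the parsed page.
    async requestHandler({ request, $, enqueueLinks, pushData }) {
        const title = $('title').text();
        // Results go to Crawlee's default dataset on disk (./storage/datasets/default).
        await pushData({ url: request.loadedUrl, title });
        // Discovered same-domain links are added to the managed request queue.
        await enqueueLinks();
    },
    maxRequestsPerCrawl: 50, // safety cap for this sketch
});

await crawler.run(['https://crawlee.dev']);
```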

cheerio

The fast, flexible, and elegant library for parsing and manipulating HTML and XML. (by cheeriojs)
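
By contrast, Cheerio on its own is only a parser with a jQuery-like API and has no crawling, queuing, or browser features; a small sketch with invented markup:

```typescript
import * as cheerio from 'cheerio';

// Load a document; the markup below is made up purely for illustration.
const $ = cheerio.load('<ul id="fruits"><li class="apple">Apple</li><li class="pear">Pear</li></ul>');

// Query with CSS selectors, exactly as you would with jQuery in the browser.
console.log($('.apple').text());      // "Apple"
console.log($('#fruits li').length);  // 2

// The document can be modified and serialized back to a string.
$('.pear').attr('data-ripe', 'true');
console.log($.html());
```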
                crawlee              cheerio
Mentions        29                   50
Stars           12,129               27,780
Growth          5.0%                 1.0%
Last commit     2 days ago           3 days ago
Activity        9.8                  9.7
Language        TypeScript           TypeScript
License         Apache License 2.0   MIT License
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars the project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively the project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 means the project is among the top 10% of the most actively developed projects we track.

crawlee

Posts with mentions or reviews of crawlee. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-01.
  • How to scrape Amazon products
    4 projects | dev.to | 1 Apr 2024
    In this guide, we'll be extracting information from Amazon product pages using the power of TypeScript in combination with the Cheerio and Crawlee libraries. We'll explore how to retrieve and extract detailed product data such as titles, prices, image URLs, and more from Amazon's vast marketplace. We'll also discuss handling potential blocking issues that may arise during the scraping process.
  • Automating Data Collection with Apify: From Script to Deployment
    4 projects | dev.to | 17 Mar 2024
    Previously, the Apify SDK offered a blend of crawling functionalities and Actor building features. However, a recent update separated these functionalities into two distinct libraries: Crawlee and Apify SDK v3. Crawlee now houses the web scraping and crawling tools, while Apify SDK v3 focuses solely on features specific to building Actors for the Apify platform. This distinction allows for a clear separation of concerns and enhances the development experience for various use cases.
  • Launching Crawlee Blog: Your Node.js resource hub for web scraping and automation.
    1 project | dev.to | 26 Feb 2024
    v3.1 added an error tracker for analyzing and summarizing failed requests.
  • Anything like scrapy in other languages?
    1 project | /r/webscraping | 10 Dec 2023
    The closest I found was https://crawlee.dev/ for JavaScript/TypeScript, although it still doesn't seem to be on the level of Scrapy. I didn't try it.
  • What is Playwright?
    5 projects | dev.to | 11 Oct 2023
    Also, you can go even further and develop your own web scraper with Crawlee, a Node.js library that helps you pass those challenges automatically using Puppeteer or Playwright. Crawlee helps you build reliable scrapers fast. Quickly scrape data, store it, and avoid getting blocked with headless browsers, smart proxy rotation, and auto-generated human-like headers and fingerprints.
  • Best web scraping framework to learn
    1 project | /r/webscraping | 12 Jul 2023
    https://crawlee.dev/ is very good; you can easily run your spiders in the cloud with Apify, and Node.js/Puppeteer has many advantages over Python/Selenium.
  • Deep diving into Apify world
    1 project | /r/thewebscrapingclub | 2 Apr 2023
    Apify is a platform for web scraping that supports the developer starting from the coding stage, having developed its own open-source Node.js web scraping library called Crawlee. On their platform you can then run and monitor the scrapers and, finally, sell your scrapers in their store.
  • Build and run your Python web scrapers in the cloud with Apify SDK for Python
    2 projects | /r/webscraping | 14 Mar 2023
    You can use our open source tools (not only this one, but also Crawlee for example) to build your scrapers and run them on your computer, and then if you need to run them in the cloud, you can upload them to the Apify platform and run them there. Our free tier is good enough for smaller web scraping and automation projects, and if you need more compute resources or proxies, you can go for one of our paid tiers.
  • How to scrape the web with Puppeteer in 2023
    5 projects | dev.to | 7 Mar 2023
    Comfortable scraping and crawling with Puppeteer is better done together with another library. This library is called Crawlee, and it's also free and open-source, just like Puppeteer. Crawlee wraps Puppeteer and grants access to all of Puppeteer's functionality, but also provides useful crawling and scraping tools like error handling, queue management, storages, proxies or fingerprints out of the box.
  • What's the most advanced, best maintained, most fully featured web scraper for node.js
    2 projects | /r/node | 11 Feb 2023
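
The "What is Playwright?" and Puppeteer posts above describe Crawlee's headless-browser crawlers, proxy rotation, and built-in error handling. A hedged sketch of that setup, assuming Crawlee v3+; the proxy URLs and start URL are placeholders rather than values from the posts.

```typescript
import { PlaywrightCrawler, ProxyConfiguration } from 'crawlee';

// Requests are rotated across the listed proxies (placeholder URLs).
const proxyConfiguration = new ProxyConfiguration({
    proxyUrls: ['http://proxy-1.example.com:8000', 'http://proxy-2.example.com:8000'],
});

const crawler = new PlaywrightCrawler({
    proxyConfiguration,
    maxRequestRetries: 3, // failed requests are retried before being given up on
    // Crawlee's browser crawlers generate human-like headers and fingerprints by default.
    async requestHandler({ page, request, enqueueLinks, pushData }) {
        // `page` is a regular Playwright Page, so the full Playwright API is available.
        await pushData({ url: request.loadedUrl, title: await page.title() });
        await enqueueLinks(); // discovered links go into Crawlee's managed request queue
    },
    failedRequestHandler({ request }) {
        console.error(`Request ${request.url} failed too many times.`);
    },
});

await crawler.run(['https://crawlee.dev']);
```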

cheerio

Posts with mentions or reviews of cheerio. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-02.
  • 8 NPM Packages for JavaScript Beginners [2024][+tutorials]
    6 projects | dev.to | 2 Apr 2024
    Cheerio is your ticket to the world of server-side magic, allowing you to manipulate HTML and XML documents with jQuery-like syntax. It’s perfect for web scraping, data extraction, or just making sense of the mess that is web content. With Cheerio, you get to play around with the DOM, use CSS selectors, and basically do all the cool things you'd do in the browser, but server-side.
  • How to scrape Amazon products
    4 projects | dev.to | 1 Apr 2024
    In this guide, we'll be extracting information from Amazon product pages using the power of TypeScript in combination with the Cheerio and Crawlee libraries. We'll explore how to retrieve and extract detailed product data such as titles, prices, image URLs, and more from Amazon's vast marketplace. We'll also discuss handling potential blocking issues that may arise during the scraping process.
  • Creating and deploying web scraper using Apify
    1 project | dev.to | 27 Mar 2024
    Libraries used: Axios - a promise-based HTTP client for making requests to the specified URL. Cheerio - a library for parsing and manipulating HTML, used here for extracting data from the downloaded HTML content. Apify SDK - a library for building Apify Actors, used for initializing the Actor environment, reading input data, and pushing extracted data to the dataset.
  • Htmlq: Like Jq, but for HTML
    2 projects | news.ycombinator.com | 19 Mar 2024
    Nice. I've used Cheerio for this in the past: https://github.com/cheeriojs/cheerio?tab=readme-ov-file#sele...
  • Automating Data Collection with Apify: From Script to Deployment
    4 projects | dev.to | 17 Mar 2024
    For this article, I will be using the TypeScript Starter template as shown in the screenshot above. This comes with Node.js, Cheerio, and Axios.
  • Web Scraping in Python – The Complete Guide
    11 projects | news.ycombinator.com | 20 Feb 2024
    > I'm not sure why Python web scraping is so popular compared to Node.js web scraping

    Take this with a grain of salt, since I am fully cognizant that I'm the outlier in most of these conversations, but Scrapy is A++ the no-kidding best framework for this activity that has been created thus far. So, if there was scrapyjs maybe I'd look into it, but there's not (that I'm aware of) so here we are. This conversation often comes up in any such "well, I just use requests & ..." conversation and if one is happy with main.py and a bunch of requests invocations, I'm glad for you, but I don't want to try and cobble together all the side-band stuff that Scrapy and its ecosystem provide for me in a reusable and predictable way.

    Also, often those conversations conflate the server-side language with the "scrape using headed browser" language, which happens to be the same one. So, if one is using cheerio <https://github.com/cheeriojs/cheerio> then sure, node can be a fine thing - if the blog post is all "fire up puppeteer, what can go wrong?!" then there is the road to ruin of doing battle with all kinds of detection problems since it's kind of a browser but kind of not.

    I, under no circumstances, want the target site running their JS during my crawl runs. I fully accept responsibility for reproducing any XHR or auth or whatever to find the 3 URLs that I care about, without downloading every thumbnail and marketing JS and beacon and and and. I'm also cognizant that my traffic will thus stand out since it uniquely does not make the beacon and marketing calls, but my experience has been that I get the ban hammer less often with my target fetches than trying to pretend to be a browser with a human on the keyboard/mouse but is not.

  • Web Scraping in Node.js Using Axios,Cheerio and Json2csv
    3 projects | dev.to | 20 Nov 2023
    Web scraping is a powerful technique used to extract data from websites. In this tutorial, we'll explore how to perform web scraping using Node.js, with Axios for making HTTP requests, Cheerio for parsing HTML content, and json2csv for converting JSON data to CSV. We'll scrape product data from a sample website, "https://scrapeme.live/shop/".
  • Portadom: A Unified Interface for DOM Manipulation
    4 projects | dev.to | 30 Aug 2023
    Web scraping, while immensely useful, often requires developers to navigate a sea of tools and libraries, each with its own quirks and intricacies. Whether it's JSDOM, Cheerio, Playwright, or even just plain old vanilla JS in the DevTools console, moving between these platforms can be a challenge.
  • Querying parsed HTML in BigQuery
    4 projects | dev.to | 26 May 2023
    While looking for a way to implement capo.js in BigQuery to understand how pages in HTTP Archive are ordered, I came across the Cheerio library, which is a jQuery-like interface over an HTML parser.
  • JavaScript Web Crawler with Node.js: A Step-By-Step Tutorial
    3 projects | dev.to | 17 Apr 2023
    Cheerio is a JavaScript tool for parsing HTML and XML in Node.js. It provides APIs for traversing and manipulating the DOM of a webpage.
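
The Axios, Cheerio, and json2csv tutorial above boils down to a fetch, parse, and export pipeline. A rough sketch of that flow against the sample shop it mentions; the CSS selectors for product name and price are my own assumptions about the page's markup, not taken from the tutorial.

```typescript
import axios from 'axios';
import * as cheerio from 'cheerio';
import { Parser } from 'json2csv';
import { writeFileSync } from 'node:fs';

// Fetch the listing page with Axios and parse the HTML with Cheerio.
const { data: html } = await axios.get('https://scrapeme.live/shop/');
const $ = cheerio.load(html);

// Collect one row per product; these selectors are assumed, not verified against the site.
const products = $('.product')
    .map((_, el) => ({
        name: $(el).find('h2').text().trim(),
        price: $(el).find('.price').text().trim(),
    }))
    .get();

// Convert the JSON rows to CSV with json2csv and write the file to disk.
const csv = new Parser({ fields: ['name', 'price'] }).parse(products);
writeFileSync('products.csv', csv);
```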

What are some alternatives?

When comparing crawlee and cheerio you can also consider the following projects:

NectarJS - 🔱 Javascript's God Mode. No VM. No Bytecode. No GC. Just native binaries.

jsdom - A JavaScript implementation of various web standards, for use with Node.js

awesome-puppeteer - A curated list of awesome puppeteer resources.

puppeteer - Node.js API for Chrome

rdflib.js - Linked Data API for JavaScript

Electron - :electron: Build cross-platform desktop apps with JavaScript, HTML, and CSS

jirax - :sunglasses: :computer: Simple and flexible CLI Tool for your daily JIRA activity (supported on all OSes)

Prettyprint Object - Function to pretty-print an object with an ability to annotate every value.

teachcode - A tool to develop and improve a student’s programming skills by introducing the earliest lessons of coding.

Playwright - Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.

pwa-asset-generator - Automates PWA asset generation and image declaration. Automatically generates icon and splash screen images, favicons and mstile images. Updates manifest.json and index.html files with the generated images according to Web App Manifest specs and Apple Human Interface guidelines.

webworker-threads - Lightweight Web Worker API implementation with native threads