node-fetch VS cheerio

Compare node-fetch vs cheerio and see what their differences are.

cheerio

The fast, flexible, and elegant library for parsing and manipulating HTML and XML. (by cheeriojs)
                 node-fetch      cheerio
Mentions         92              50
Stars            8,651           27,826
Stars growth     0.3%            0.7%
Activity         1.7             9.7
Latest commit    2 months ago    2 days ago
Language         JavaScript      TypeScript
License          MIT License     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

node-fetch

Posts with mentions or reviews of node-fetch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
  • Mastering The Heap: How to Capture and Store Images from Fetch Responses
    2 projects | dev.to | 2 May 2024
    node-fetch.
  • Building a README Crawler With Node.js
    5 projects | dev.to | 8 Apr 2024
    To execute the algorithm, we will use Node.js (for the JavaScript runtime) and node-fetch (for network requests). This means we will run the code locally from the command line. For this project, we will have an output folder to store all the README data, as well as a list (queue) of repository URLs to visit. Before diving into the code, it is important to plan the input and output of the algorithm. For this web crawler, we will start at a valid GitHub repository page, which would be one URL string. After visiting each page with a README, we will export the data into a new file. Now let's cover the process of requesting a repository page from a URL. For this, we only care about saving the README file that is displayed, and we will ignore any other links that GitHub displays (such as the navbar). We will send a URL request with node-fetch, and retrieve the result as an HTML string. If we convert the HTML string to a DOM tree, we can search for a specific element. GitHub stores the README file under a div with the class "markdown-body". We can use a library called 'jsdom' to use Browser API methods and return a specific node. (A sketch of this fetch-and-parse step appears after this list.)
  • OAuth 2.0 implementation in Node.js
    3 projects | dev.to | 13 Mar 2024
    Note: In case you run into the error "ReferenceError: fetch is not defined", ensure you install node-fetch
  • 5 Ways to Make HTTP Requests in Node.js
    3 projects | dev.to | 20 Feb 2024
    Node Fetch is a JavaScript library tailored for Node.js that simplifies making HTTP requests. It offers a straightforward, Promise-based way to make requests such as GET, POST, PUT, and DELETE against resources on the internet or a server. Designed for server-side applications, it's compatible with the Fetch API, allowing easy code transition between client-side and server-side environments. (A minimal GET/POST sketch with node-fetch appears after this list.)
  • CommonJS Is Hurting JavaScript
    7 projects | news.ycombinator.com | 30 Jun 2023
    Would anyone be interested in an article about the crusade to move JS to ESM? I've been considering writing one, here's a preview:

    Sindresorhus wrote a gist "Pure ESM modules"[0] and converted all his modules to Pure ESM, breaking anyone `require`ing his code; he later locked the thread to prevent people from complaining. node-fetch released a pure ESM version a year ago that is 16x less popular than the CommonJS version[1]. The results of these changes broke a lot of code and resulted in many hours of developers figuring out how to make their projects compatible with Pure ESM modules (or decide to ignore them and use old CommonJS versions)--not to mention the tons of pointless drama on GitHub issues.

    Meanwhile, TC-39 member Matteo Collina advocated a moderate approach dependent on where your module will be run [2]. So the crusade is led not by the Church, but by a handful of zealots dedicated to establishing ESM supremacy for unclear reasons (note how Sindresorhus' gist lacks any justifications). It's kind of like the Python 2 to 3 move except with even less rationale and not driven by the core devs.

    0 - https://gist.github.com/sindresorhus/a39789f98801d908bbc7ff3...

    1 - https://www.npmjs.com/package/node-fetch?activeTab=versions

    2 - https://github.com/nodejs/node/issues/33954#issuecomment-924...

  • Library recommendation
    1 project | /r/node | 23 Jun 2023
    https://www.npmjs.com/package/node-fetch is pretty standard assuming you're referring to an HTTP client library
  • Next-Level Technical Blogging with Dev.to API
    2 projects | dev.to | 13 Jun 2023
    The API is CORS-enabled, meaning you'll have to use the getArticles() function from your backend. For making the actual request, you can use the fetch() function, available since Node.js v18. For older versions of Node.js, you can use a fetch()-compatible library like node-fetch. (A sketch of this fallback appears after this list.)
  • Nuxt 3 in production shows "fetch failed" on load
    1 project | /r/Nuxt | 3 Apr 2023
    I have the same setup. On node 18 fetch would not go through. I changed 127.0.0.1 to localhost in my config/env. More info here
  • EOS bot
    1 project | /r/u_honneyhive | 26 Mar 2023
    I am making a bot that is supposed to take data from Upland's database from the account "dcrawtu15ye". I am using autocode to take it and I have found some ways to use it but some of my code still comes back as null. I have been using the eos docs to find info and all it can do right now is get account info if I use console.log(await rpc.get_account('dcrawtu1u5ye'));. I am using the dependency node-fetch. I wanted to know if there is something wrong with the code below. I also used greymass from this list and this article supposedly might help too.
  • How to Parse RSS Feed in Javascript
    2 projects | dev.to | 20 Mar 2023
    The RSS feed's URL will then need to be requested over the network. The native fetch API of JavaScript will be used since it is the most efficient. It undoubtedly works in browsers, and it appears that Node has a pretty well-liked implementation of it.
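
The "Building a README Crawler" post above describes fetching a repository page with node-fetch and locating the README node with jsdom. Here is a minimal TypeScript sketch of that fetch-and-parse step, assuming node-fetch and jsdom are installed; the ".markdown-body" selector comes from the post's description of GitHub's markup and may change, and the example URL is only a placeholder.

    // Sketch of the crawler's fetch-and-parse step: request a GitHub repository
    // page with node-fetch, build a DOM tree with jsdom, and pull out the README.
    import fetch from 'node-fetch';
    import { JSDOM } from 'jsdom';

    async function fetchReadme(repoUrl: string): Promise<string | null> {
      const response = await fetch(repoUrl);
      if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
      }
      const html = await response.text();

      // GitHub renders the README inside a div with the class "markdown-body"
      // (as noted in the post); this selector is an assumption and may change.
      const dom = new JSDOM(html);
      const readme = dom.window.document.querySelector('.markdown-body');
      return readme ? readme.textContent : null;
    }

    // Usage: start from one valid repository URL, as the post describes.
    fetchReadme('https://github.com/node-fetch/node-fetch')
      .then((text) => console.log(text?.slice(0, 200)))
      .catch(console.error);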
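
To make the node-fetch description from "5 Ways to Make HTTP Requests in Node.js" concrete, here is a small sketch of a GET and a POST request. httpbin.org is used purely as a placeholder endpoint; beyond the import, nothing here is specific to node-fetch, since its API mirrors the browser Fetch API.

    // A GET and a POST with node-fetch; the API mirrors the browser Fetch API.
    // httpbin.org is only a placeholder endpoint for the example.
    import fetch from 'node-fetch';

    async function main() {
      // GET: fetch a resource and parse the JSON body.
      const getResponse = await fetch('https://httpbin.org/get');
      console.log('GET status:', getResponse.status);
      console.log(await getResponse.json());

      // POST: send a JSON payload with explicit headers.
      const postResponse = await fetch('https://httpbin.org/post', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ library: 'node-fetch' }),
      });
      console.log('POST status:', postResponse.status);
    }

    main().catch(console.error);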
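
The "Next-Level Technical Blogging with Dev.to API" excerpt notes that fetch() ships with Node.js from v18 and that node-fetch can fill in on older versions. One way to express that fallback is sketched below; the getArticles() helper, endpoint, and username are illustrative assumptions, not code taken from the post.

    // Prefer the built-in fetch (available since Node.js v18); otherwise fall back
    // to node-fetch. The cast is a convenience: node-fetch mirrors the Fetch API
    // closely enough for this kind of call.
    const fetchFn = (
      typeof globalThis.fetch === 'function'
        ? globalThis.fetch
        : (await import('node-fetch')).default
    ) as typeof globalThis.fetch;

    // Illustrative backend call to the dev.to articles endpoint; the endpoint,
    // query, and username are assumptions for the example.
    async function getArticles(username: string) {
      const response = await fetchFn(`https://dev.to/api/articles?username=${username}`);
      if (!response.ok) {
        throw new Error(`dev.to API returned ${response.status}`);
      }
      return response.json();
    }

    getArticles('some-user')
      .then((articles) => console.log(articles))
      .catch(console.error);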

cheerio

Posts with mentions or reviews of cheerio. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-02.
  • 8 NPM Packages for JavaScript Beginners [2024][+tutorials]
    6 projects | dev.to | 2 Apr 2024
    Cheerio is your ticket to the world of server-side magic, allowing you to manipulate HTML and XML documents with jQuery-like syntax. It’s perfect for web scraping, data extraction, or just making sense of the mess that is web content. With Cheerio, you get to play around with the DOM, use CSS selectors, and basically do all the cool things you'd do in the browser, but server-side. (A small cheerio sketch appears after this list.)
  • How to scrape Amazon products
    4 projects | dev.to | 1 Apr 2024
    In this guide, we'll be extracting information from Amazon product pages using the power of TypeScript in combination with the Cheerio and Crawlee libraries. We'll explore how to retrieve and extract detailed product data such as titles, prices, image URLs, and more from Amazon's vast marketplace. We'll also discuss handling potential blocking issues that may arise during the scraping process. (A rough CheerioCrawler sketch appears after this list.)
  • Creating and deploying web scraper using Apify
    1 project | dev.to | 27 Mar 2024
    Used libraries: Axios - a promise-based HTTP client used to make requests to the specified URL. Cheerio - a library for parsing and manipulating HTML, used here for extracting data from the downloaded HTML content. Apify SDK - used for building Apify Actors: initializing the actor environment, getting input data, and pushing extracted data to the dataset. (A sketch combining the three appears after this list.)
  • Htmlq: Like Jq, but for HTML
    2 projects | news.ycombinator.com | 19 Mar 2024
    Nice. I've used Cheerio for this in the past: https://github.com/cheeriojs/cheerio?tab=readme-ov-file#sele...
  • Automating Data Collection with Apify: From Script to Deployment
    4 projects | dev.to | 17 Mar 2024
    For this article, I will be using the TypeScript Starter template as shown in the screenshot above. This comes with Node.js, Cheerio, and Axios.
  • Web Scraping in Python – The Complete Guide
    11 projects | news.ycombinator.com | 20 Feb 2024
    > I'm not sure why Python web scraping is so popular compared to Node.js web scraping

    Take this with a grain of salt, since I am fully cognizant that I'm the outlier in most of these conversations, but Scrapy is A++ the no-kidding best framework for this activity that has been created thus far. So, if there was scrapyjs maybe I'd look into it, but there's not (that I'm aware of) so here we are. This conversation often comes up in any such "well, I just use requests & ..." conversation and if one is happy with main.py and a bunch of requests invocations, I'm glad for you, but I don't want to try and cobble together all the side-band stuff that Scrapy and its ecosystem provide for me in a reusable and predictable way

    Also, often those conversations conflate the server side language with the "scrape using headed browser" language which happens to be the same one. So, if one is using cheerio <https://github.com/cheeriojs/cheerio> then sure node can be a fine thing - if the blog post is all "fire up puppeteer, what can go wrong?!" then there is the road to ruin of doing battle with all kinds of detection problems since it's kind of a browser but kind of not

    I, under no circumstances, want the target site running their JS during my crawl runs. I fully accept responsibility for reproducing any XHR or auth or whatever to find the 3 URLs that I care about, without downloading every thumbnail and marketing JS and beacon and and and. I'm also cognizant that my traffic will thus stand out since it uniquely does not make the beacon and marketing calls, but my experience has been that I get the ban hammer less often with my target fetches than trying to pretend to be a browser with a human on the keyboard/mouse but is not

  • Web Scraping in Node.js Using Axios,Cheerio and Json2csv
    3 projects | dev.to | 20 Nov 2023
    Web scraping is a powerful technique used to extract data from websites. In this tutorial, we'll explore how to perform web scraping using Node.js, with Axios for making HTTP requests, Cheerio for parsing HTML content, and json2csv for converting JSON data to CSV. We'll scrape product data from a sample website, "https://scrapeme.live/shop/". (A compact sketch of this pipeline appears after this list.)
  • Portadom: A Unified Interface for DOM Manipulation
    4 projects | dev.to | 30 Aug 2023
    Web scraping, while immensely useful, often requires developers to navigate a sea of tools and libraries, each with its own quirks and intricacies. Whether it's JSDOM, Cheerio, Playwright, or even just plain old vanilla JS in the DevTools console, moving between these platforms can be a challenge.
  • Querying parsed HTML in BigQuery
    4 projects | dev.to | 26 May 2023
    While looking for a way to implement capo.js in BigQuery to understand how pages in HTTP Archive are ordered, I came across the Cheerio library, which is a jQuery-like interface over an HTML parser.
  • JavaScript Web Crawler with Node.js: A Step-By-Step Tutorial
    3 projects | dev.to | 17 Apr 2023
    Cheerio is a JavaScript tool for parsing HTML and XML in Node.js. It provides APIs for traversing and manipulating the DOM of a webpage.
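
As a concrete illustration of the jQuery-like, server-side API that several of the posts above describe, here is a minimal cheerio sketch over an inline HTML snippet; the markup is invented for the example.

    // Server-side DOM manipulation with cheerio: CSS selectors and a jQuery-like API.
    import * as cheerio from 'cheerio';

    const html = `
      <ul id="frameworks">
        <li class="item"><a href="https://nodejs.org">Node.js</a></li>
        <li class="item"><a href="https://cheerio.js.org">Cheerio</a></li>
      </ul>`;

    const $ = cheerio.load(html);

    // Select with CSS selectors, then read text and attributes.
    $('#frameworks .item a').each((_, el) => {
      console.log($(el).text(), '->', $(el).attr('href'));
    });

    // Manipulate the document and serialize it back to HTML.
    $('#frameworks').append('<li class="item">added server-side</li>');
    console.log($.html());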
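
The Amazon scraping guide above pairs Cheerio with Crawlee in TypeScript. The sketch below uses Crawlee's CheerioCrawler to show the general shape; the product-page selectors and the URL are assumptions about Amazon's markup and would need verifying, and a real crawl would also need the anti-blocking measures the guide discusses.

    // Sketch: crawl Amazon product pages with Crawlee's CheerioCrawler.
    import { CheerioCrawler, Dataset } from 'crawlee';

    const crawler = new CheerioCrawler({
      // Keep the footprint small while experimenting.
      maxRequestsPerCrawl: 10,
      async requestHandler({ request, $ }) {
        // These selectors are assumptions about Amazon's product-page markup.
        const title = $('#productTitle').text().trim();
        const price = $('.a-price .a-offscreen').first().text().trim();
        const image = $('#landingImage').attr('src');

        await Dataset.pushData({ url: request.url, title, price, image });
      },
    });

    // Illustrative placeholder URL; replace with real product pages.
    await crawler.run(['https://www.amazon.com/dp/EXAMPLE']);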
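
The Apify post above combines Axios, Cheerio, and the Apify SDK. Here is a hedged sketch of that structure: initialize the Actor, read input, download and parse a page, and push the result to the dataset. The input shape and the selectors are assumptions for illustration, not code from the post.

    // Sketch of an Apify Actor that downloads a page with Axios, parses it with
    // Cheerio, and pushes the extracted data to the default dataset.
    import { Actor } from 'apify';
    import axios from 'axios';
    import * as cheerio from 'cheerio';

    await Actor.init();

    // Assumed input shape; define whatever your Actor actually expects.
    interface Input { url: string }
    const { url } = (await Actor.getInput<Input>()) ?? { url: 'https://example.com' };

    const { data: html } = await axios.get<string>(url);
    const $ = cheerio.load(html);

    // Illustrative extraction: page title and first heading.
    await Actor.pushData({
      url,
      title: $('title').text().trim(),
      heading: $('h1').first().text().trim(),
    });

    await Actor.exit();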
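
Finally, the "Web Scraping in Node.js Using Axios,Cheerio and Json2csv" tutorial above describes an Axios + Cheerio + json2csv pipeline against https://scrapeme.live/shop/. A compact sketch of that flow follows; the WooCommerce-style selectors are assumptions about the sample site's markup, not code from the tutorial.

    // Scrape product names and prices with Axios + Cheerio, then convert the
    // results to CSV with json2csv's Parser.
    import axios from 'axios';
    import * as cheerio from 'cheerio';
    import { Parser } from 'json2csv';
    import { writeFileSync } from 'node:fs';

    async function scrapeShop() {
      const { data: html } = await axios.get<string>('https://scrapeme.live/shop/');
      const $ = cheerio.load(html);

      // WooCommerce-style selectors; assumptions about the sample site's markup.
      const products = $('li.product')
        .map((_, el) => ({
          name: $(el).find('.woocommerce-loop-product__title').text().trim(),
          price: $(el).find('.price').text().trim(),
        }))
        .get();

      const csv = new Parser({ fields: ['name', 'price'] }).parse(products);
      writeFileSync('products.csv', csv);
      console.log(`Wrote ${products.length} products to products.csv`);
    }

    scrapeShop().catch(console.error);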

What are some alternatives?

When comparing node-fetch and cheerio you can also consider the following projects:

axios - Promise based HTTP client for the browser and node.js

jsdom - A JavaScript implementation of various web standards, for use with Node.js

request - 🏊🏾 Simplified HTTP request client.

puppeteer - Node.js API for Chrome

got - 🌐 Human-friendly and powerful HTTP request library for Node.js

Electron - :electron: Build cross-platform desktop apps with JavaScript, HTML, and CSS

cross-fetch - Universal WHATWG Fetch API for Node, Browsers and React Native.

Prettyprint Object - Function to pretty-print an object with an ability to annotate every value.

undici - An HTTP/1.1 client, written from scratch for Node.js

Playwright - Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.

superagent - Ajax for Node.js and browsers (JS HTTP client). Maintained for @forwardemail, @ladjs, @spamscanner, @breejs, @cabinjs, and @lassjs.

webworker-threads - Lightweight Web Worker API implementation with native threads