grub-2.0 vs colly

| | grub-2.0 | colly |
|---|---|---|
| Mentions | 4 | 39 |
| Stars | 19 | 22,205 |
| Growth | - | 1.2% |
| Activity | 0.0 | 5.7 |
| Latest commit | over 1 year ago | 17 days ago |
| Language | Python | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grub-2.0
-
I want to dive into how to make search engines
Not finished, but the Selenium-based crawler works pretty well to combat most blocks: https://github.com/kordless/grub-2.0
For IP blocks, try this: https://github.com/kordless/mitta-screenshot
-
Ask HN: Decent, open source search engine?
I started https://mitta.us as this, but am pivoting to prompt management for GPT-3. I've open-sourced the code for the crawler here: https://github.com/kordless/grub-2.0. The entire system uses Google Vision for extracting text. I dislike fiddling with the DOM...
If you are interested in using Solr for this, I can provide instructions to you. I'm kordless at the gmails ... com.
-
How to Scrape and Extract Hyperlink Networks with BeautifulSoup and NetworkX
Depending on the use case you might try imaging the page, then send the image to an ML model for full text before indexing. If you need links extracted, Selenium also supports parsing the assembled DOM: https://github.com/kordless/grub-2.0/tree/main/aperture
-
Mastering Web Scraping in Python: Crawling from Scratch
I’ve found imaging the page and doing OCR on the image is quite good for text extraction. Many pages on the Internet render with JavaScript, which means BeautifulSoup may not see the text in the DOM.
Here is the code to do some of that: https://github.com/kordless/grub-2.0
colly
-
Scraping the full snippet from Google search result
SerpApi focuses on scraping search results, which is why we need extra help to scrape individual sites. We'll use the GoColly package.
-
Show HN: Flyscrape – A standalone and scriptable web scraper in Go
Interesting. Can you compare it to colly? [0]
Last time I looked it was the most popular choice for scraping in Go and I have some projects using it.
Is it similar? Does it have more/less features or is it more suited for a different use case? (Which one?)
[0] https://github.com/gocolly/colly
- Colly: Elegant Scraper and Crawler Framework for Golang
-
New modern web crawling tool
Sounds cool, but how is this different from Colly: https://github.com/gocolly/colly?
-
colly VS scrapemate - a user suggested alternative
2 projects | 15 Apr 2023
-
Web Scraping in Python: Avoid Detection Like a Ninja
We could write some snippets mixing all these, but the best option in real life is to use a tool with it all, like Scrapy, pyspider, node-crawler (Node.js), or Colly (Go).
- Web scraping with Go
-
Web scraper help
Unless you're specifically trying to do it using net/http, I recommend using colly. I've used it in a few scrapers and I love it!
-
Web Scraping in Golang
In this blog, we will be covering the basics of web scraping in Go using the Fiber and Colly frameworks. Colly is an open-source web scraping framework written in Go. It provides a simple and flexible API for performing web scraping tasks, making it a popular choice among Go developers. Colly uses Go's concurrency features to efficiently handle multiple requests and extract data from websites. It offers a wide range of customization options, including the ability to set request headers, handle cookies, follow redirects, and more.
-
Learn how to scrape Trustpilot reviews using Go
github.com/gocolly/colly - a popular and widely used library for web scraping in Go. It provides a higher-level API than net/http and makes it easier to extract information from websites. It also provides features such as concurrency, automatic request retries, and support for cookies and sessions.
What are some alternatives?
ChromeController - Comprehensive wrapper and execution manager for the Chrome browser using the Chrome Debugging Protocol.
GoQuery - A little like that j-thing, only in Go.
skyscraper - Structural scraping for the rest of us.
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
mitta-screenshot - Mitta's Chrome extension for saving the current view of a website.
xpath - XPath package for Golang, supports HTML, XML, JSON document query.
rod - A Devtools driver for web automation and scraping.
phalanx - Phalanx is a cloud-native distributed search engine that provides endpoints through gRPC and traditional RESTful API.
Geziyor - Geziyor, blazing fast web crawling & scraping framework for Go. Supports JS rendering.
markov - Materials for book: "Markov Chains for programmers"
Ferret - Declarative web scraping.