grub-2.0 vs skyscraper

| | grub-2.0 | skyscraper |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 19 | 401 |
| Growth | - | - |
| Activity | 0.0 | 4.9 |
| Latest commit | over 1 year ago | 10 months ago |
| Language | Python | Clojure |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grub-2.0
-
I want to dive into how to make search engines
Not finished, but the Selenium-based crawler works pretty well to combat most blocks: https://github.com/kordless/grub-2.0
For IP blocks, try this: https://github.com/kordless/mitta-screenshot
-
Ask HN: Decent, open source search engine?
I started https://mitta.us as this, but am pivoting to prompt management for GPT-3. I've open-sourced the code for the crawler here: https://github.com/kordless/grub-2.0. The entire system uses Google Vision for extracting text. I dislike fiddling with the DOM...
If you are interested in using Solr for this, I can provide instructions to you. I'm kordless at the gmails ... com.
-
How to Scrape and Extract Hyperlink Networks with BeautifulSoup and NetworkX
Depending on the use case, you might try imaging the page, then sending the image to an ML model for full-text extraction before indexing. If you need links extracted, Selenium also supports parsing the assembled DOM: https://github.com/kordless/grub-2.0/tree/main/aperture
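The link-extraction half of that workflow can be sketched with Python's standard-library `html.parser` standing in for BeautifulSoup; the page HTML and URLs below are illustrative assumptions, not part of grub-2.0:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

# Build a tiny hyperlink network (edge list) from one page's HTML.
html = '<a href="/about">About</a> <a href="https://example.org/">Ext</a>'
edges = [("https://example.com/", target)
         for target in extract_links(html, "https://example.com/")]
```

Feeding these edge lists into a graph library (the article pairs this step with NetworkX) then gives the hyperlink network.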
-
Mastering Web Scraping in Python: Crawling from Scratch
I’ve found imaging the page and doing OCR on the image is quite good for text extraction. Many pages on the Internet render with JavaScript, which means BeautifulSoup may not see the text in the DOM.
Here is the code to do some of that: https://github.com/kordless/grub-2.0
skyscraper
-
Web Scraping in Python – The Complete Guide
Yes!
My Clojure scraping framework [0] facilitates that kind of workflow, and I’ve been using it to scrape/restructure massive sites (millions of pages). I guess I’m going to write a blog post about scraping with it at scale. Although it doesn’t really scale much above that – it’s meant for single-machine loads at the moment – it could be enhanced to support that kind of workflow rather easily.
[0]: https://github.com/nathell/skyscraper
-
Babashka: GraalVM Helped Create a Scripting Environment for Clojure
I plan to port my scraping framework (Skyscraper, https://github.com/nathell/skyscraper) to babashka one day. I’m not sure how easy it will be, though, since it uses core.async (which I believe bb has limited support for) and SQLite via clojure.java.jdbc.
-
Mastering Web Scraping in Python: Crawling from Scratch
I’ve done my fair share of scraping, and I learned that at large scale there are a lot of cross-cutting, repetitive concerns: caching, fetching HTML (preferably in parallel), throttling, retries, navigation, emitting the output as a dataset…
My library, Skyscraper [0], attempts to help with these. It’s written in Clojure (based on Enlive or Reaver, both counterparts to Beautiful Soup), but the principles should be readily transferable everywhere.
[0]: https://github.com/nathell/skyscraper
What are some alternatives?
ChromeController - Comprehensive wrapper and execution manager for the Chrome browser using the Chrome Debugging Protocol.
WebDumper - A tool for scraping, dumping and unpacking (webpacked) javascript source files.
mitta-screenshot - Mitta's Chrome extension for saving the current view of a website.
rod - A Devtools driver for web automation and scraping
reaver - A Clojure library for extracting data from HTML.
phalanx - Phalanx is a cloud-native distributed search engine that provides endpoints through gRPC and traditional RESTful API.
hickory - HTML as data
markov - Materials for book: "Markov Chains for programmers"
colly - Elegant Scraper and Crawler Framework for Golang
babashka-sql-pods - Babashka pods for SQL databases