| | tarsier | dude |
|---|---|---|
| Mentions | 8 | 28 |
| Stars | 1,003 | 412 |
| Growth | 53.4% | - |
| Activity | 9.1 | 9.0 |
| Last Commit | 6 days ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tarsier
-
ScrapeGraphAI: Web scraping using LLM and direct graph logic
Agreed!
Apify's Website Content Crawler[0] does a decent job of this for most websites in my experience. It allows you to "extract" content via different built-in methods (e.g. Extractus [1]).
We currently use this at Magic Loops[2] and it works _most_ of the time.
The long tail is difficult, though, and it's not uncommon for users to back out to raw HTML and then have our tool write some custom logic to parse the content they want from the scraped results (fun fact: before GPT-4 Turbo, the HTML page was often too large for the context window... and sometimes it still is!).
Would love a dedicated tool for this. I know the folks at Reworkd[3] are working on something similar, but not sure how much is public yet.
[0] https://apify.com/apify/website-content-crawler
[1] https://github.com/extractus/article-extractor
[2] https://magicloops.dev/
[3] https://reworkd.ai/
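One practical way to shrink a page before handing it to an LLM (relevant to the context-window problem above) is to strip invisible markup and keep only the visible text. A minimal sketch using only Python's standard-library `html.parser` (the class and function names here are illustrative, not from any of the tools mentioned above):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping the contents of script/style tags."""

    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a skipped tag

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())


def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)


html = "<head><style>p{color:red}</style></head><body><p>Hello</p><script>var x=1;</script></body>"
print(visible_text(html))  # Hello
```

Even this naive pass often cuts a page's size dramatically, since scripts, styles, and attributes usually dominate raw HTML.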
- Control the browser using GPT-4 vision by AgentGPT team
- Show HN: GPT-4 vision utilities to browse the web
dude
-
Webscraping beginner here, ready to start leveling up to intermediate. Looking for some good webscraping repositories (e.g. any of your GitHub repos/projects) that I can use as learning tools, and general recommendations for what to do next.
Please check https://github.com/roniemartinez/dude
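dude's README shows a decorator-based style, where handler functions are registered against CSS selectors. The toy registry below illustrates that pattern in plain Python; it is a sketch of the idea, not dude's actual internals or API surface:

```python
# Toy sketch of a decorator-registered scraper, inspired by the
# @select(...) style in dude's README (illustrative only).
HANDLERS = []


def select(css):
    """Register the decorated function as the handler for a CSS selector."""
    def register(fn):
        HANDLERS.append((css, fn))
        return fn
    return register


@select(css="a")
def link(element):
    # A real framework would pass matched page elements here.
    return {"url": element["href"]}


fake_element = {"href": "https://example.com"}
results = [fn(fake_element) for css, fn in HANDLERS]
print(results)  # [{'url': 'https://example.com'}]
```

The appeal of this pattern is that the scraping logic reads declaratively: each function says what it matches and what it returns, and the framework handles fetching and iteration.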
-
Need help with downloading a section of multiple sites as pdf files.
You can use my library which also uses Playwright. I have an example here: https://github.com/roniemartinez/dude/discussions/116
-
Why do you use python for web scraping?
I also built a framework so I can easily switch between these libraries with less code change (still on hiatus for a few months before going back to it): https://github.com/roniemartinez/dude
-
Thank GOD for Poetry!
There are a lot of options, but I am quite happy with GitHub Actions workflows + Poetry, as that handles tests and publishing to PyPI. Just as an example, in my workflows I deploy to TestPyPI and PyPI here: https://github.com/roniemartinez/dude/tree/master/.github/workflows
-
What stack or tools are you using for ensuring code quality and best practices in medium and large codebases?
But for documentation, I use mkdocs-material, as it works with only minor customization and changes can easily be deployed on GitHub: https://roniemartinez.github.io/dude/
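For reference, a minimal mkdocs-material configuration is only a few lines; the sketch below is a generic starting point, not the actual config of the project linked above:

```yaml
# mkdocs.yml (hypothetical minimal sketch)
site_name: my-project
theme:
  name: material
nav:
  - Home: index.md
```

Running `mkdocs gh-deploy` then builds the site and pushes it to the `gh-pages` branch, which is why publishing docs to GitHub Pages takes so little setup.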
- Is there any thing Beautifulsoup can do that Scrapy can not?
-
Screenshotting site, but remove all popups.
Add an adblocker. I implemented this in Dude/pydude, and page results are clean, without ads and pop-ups. For the screenshot, here is an example: https://github.com/roniemartinez/dude/discussions/116
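The core of request-level ad blocking is just deciding, per request URL, whether to let it through; in Playwright you would register such a predicate via `page.route(...)` and abort matching requests. A standard-library sketch of the filtering logic (the blocklist here is hypothetical; real adblockers use EasyList-style rule sets):

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real adblockers ship EasyList-style rules.
BLOCKED_HOSTS = {"ads.example.com", "doubleclick.net"}


def should_block(url: str) -> bool:
    """Block a request if its host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)


print(should_block("https://doubleclick.net/pixel"))  # True
print(should_block("https://example.com/page"))       # False
```

Dropping ad and tracker requests before they load also tends to make screenshots faster and more deterministic, since fewer third-party resources race the page render.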
-
which Python Library is best for scraping?
You can also use my library if you want things to be simpler :) https://github.com/roniemartinez/dude
-
For those of you using Python, what is your go to library to build your scraper?
I use my own library, Dude! https://github.com/roniemartinez/dude
-
Building a (relatively) easily adaptable, flexible web scraper (seeking conceptual advice)
I built a web scraper that is simple to use, but it is still a work in progress - https://github.com/roniemartinez/dude