| | learning_xslt_with_python | scaling-to-distributed-crawling |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 1 | 36 |
| Growth | - | - |
| Activity | 5.4 | 0.0 |
| Last commit | about 1 year ago | over 2 years ago |
| Language | HTML | HTML |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
learning_xslt_with_python
A Brief Defense of XML
I think a middle ground has been reached: XSLT 3.0 allows you to transform your XML into JSON and back. The XSLT 3.0 (2017) processor is available to non-Java languages from Saxonica; for Python, `pip install saxonpy` (Linux only).
If you want to see how to do these XML-to-JSON and JSON-to-XML transforms, I have written a little learning repo with a CLI: https://github.com/aleph2c/leaning_xslt
Here is Michael Kay's white paper on Transforming JSON using XSLT 3.0: https://www.saxonica.com/papers/xmlprague-2016mhk.pdf
Once your data is in a JSON format, you could implement your compact-binary-format idea around it.
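To make the round trip concrete, here is a minimal sketch of an XML-to-JSON transform driven from Python. It assumes Saxonica's SaxonC bindings (shown here with the `saxonche` PyPI package rather than `saxonpy`); the stylesheet and the sample document are illustrative, not taken from the repo above:

```python
from saxonche import PySaxonProcessor

# An XSLT 3.0 stylesheet that serializes the input as JSON. xml-to-json()
# expects the XPath 3.1 JSON vocabulary (fn:map, fn:array, fn:string, ...),
# so the stylesheet first rewrites the input into that vocabulary.
STYLESHEET = """
<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:fn="http://www.w3.org/2005/xpath-functions">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:variable name="tree">
      <fn:map>
        <!-- One JSON string entry per child element of the document root -->
        <xsl:for-each select="/*/*">
          <fn:string key="{local-name()}"><xsl:value-of select="."/></fn:string>
        </xsl:for-each>
      </fn:map>
    </xsl:variable>
    <xsl:value-of select="xml-to-json($tree)"/>
  </xsl:template>
</xsl:stylesheet>
"""

with PySaxonProcessor(license=False) as proc:
    executable = proc.new_xslt30_processor().compile_stylesheet(
        stylesheet_text=STYLESHEET)
    doc = proc.parse_xml(
        xml_text="<config><host>localhost</host><port>8080</port></config>")
    print(executable.transform_to_string(xdm_node=doc))
    # -> {"host":"localhost","port":"8080"}
```

The reverse direction works the same way with json-to-xml(), which parses a JSON string into that same fn:* vocabulary for you to transform back into your own element names.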
scaling-to-distributed-crawling
DOs and DON'Ts of Web Scraping
We published a repository and blog post about distributed crawling in Python. It is a bit more complicated than what we've seen so far: it uses external software (Celery for the asynchronous task queue and Redis as the database). A minimal sketch of how the two fit together follows the link below.
- Mastering Web Scraping in Python: Scaling to Distributed Crawling – ZenRows
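Here is a minimal sketch of the Celery side of that architecture, under stated assumptions: module, task, and broker settings are illustrative, not the ones from the ZenRows repository.

```python
# tasks.py: Celery distributes crawl jobs across workers, with Redis
# acting as the message broker that holds the task queue.
from celery import Celery

app = Celery("tasks", broker="redis://127.0.0.1:6379/0")

@app.task
def crawl(url: str) -> None:
    # In the real project this would fetch the page, extract links,
    # and queue the unseen ones; kept as a stub here.
    print(f"crawling {url}")
```

With a worker running (`celery -A tasks worker`), any process can enqueue work with `crawl.delay("https://example.com")`, and Redis hands each task to whichever worker is free.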
Mastering Web Scraping in Python: Scaling to Distributed Crawling
We will start to separate concerns before the project grows. We already have two files: tasks.py and main.py. We will create another two to host crawler-related functions (crawler.py) and database access (repo.py). The snippet below sketches the repo file; it is not complete, but you get the idea. There is a GitHub repository with the final content in case you want to check it.
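A minimal sketch of what that repo file could look like, assuming redis-py and using illustrative key and function names (not necessarily those from the ZenRows repository):

```python
# repo.py: database-access layer. Redis tracks which URLs have been
# crawled and which are still pending, shared across all workers.
from redis import Redis

connection = Redis(db=1, decode_responses=True)

def add_to_visited(url):
    # Mark a URL as crawled so no worker queues it again.
    connection.sadd("crawler:visited", url)

def is_visited(url):
    return connection.sismember("crawler:visited", url)

def add_to_queue(urls):
    # Queue newly discovered links, skipping anything already seen.
    for url in urls:
        if not is_visited(url):
            connection.rpush("crawler:to_visit", url)

def pop_from_queue():
    # Returns the next pending URL, or None when the queue is empty.
    return connection.lpop("crawler:to_visit")

def count_visited():
    return connection.scard("crawler:visited")
```

Keeping all Redis keys and calls in one module means crawler.py and tasks.py never touch the connection directly, which is the separation of concerns the post is after.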
What are some alternatives?
celery - Distributed Task Queue (development branch)
colly - Elegant Scraper and Crawler Framework for Golang
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
newspaper - newspaper3k is a library for news, full-text, and article metadata extraction in Python 3.
PeARS-orchard - This is the development version of PeARS, the people's search engine. More compact but less robust than PeARS-lite. If you just want to use PeARS as a local indexer, use PeARS-lite instead.
storm-crawler - A scalable, mature and versatile web crawler based on Apache Storm
Crawly - Crawly, a high-level web crawling & scraping framework for Elixir.