xmltodict
requests-html
| | xmltodict | requests-html |
|---|---|---|
| Mentions | 7 | 14 |
| Stars | 5,370 | 13,574 |
| Growth | - | 0.4% |
| Activity | 0.6 | 0.0 |
| Latest commit | 3 months ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xmltodict
-
XML to CSV or JSON using Cloud Function
Your Cloud Function would be written in Node.js, Python, Go, Java, C#, Ruby, or PHP; pick the one you're most comfortable with. It would get the name and bucket of the newly uploaded XML file as an input parameter. It would then load the file and call a library that makes the conversion. Example libraries: xml-js (for Node), xmltodict (for Python).
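As a rough illustration of the conversion step described above, here is a minimal Python sketch assuming the xmltodict library named in the snippet (the function name is made up, and the Cloud Function trigger plumbing is omitted):

```python
import json

import xmltodict  # third-party: pip install xmltodict


def xml_to_json(xml_text: str) -> str:
    """Convert an XML document string to a JSON string via xmltodict."""
    parsed = xmltodict.parse(xml_text)  # nested dicts mirroring the XML tree
    return json.dumps(parsed)


# Example:
# xml_to_json("<order><id>42</id></order>")
# → '{"order": {"id": "42"}}'
```

In a real Cloud Function you would read `xml_text` from the uploaded object in the bucket and write the JSON result back out; the conversion itself is just these two calls.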
-
Did I reinvent a wheel?
Go with xmltodict. It works fine, and you just have to drop any key beginning with @ or # (if there is not already an option for that).
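A minimal sketch of the key-dropping step the comment above suggests, using only the standard library (the helper name is hypothetical; the '@' and '#' prefixes follow xmltodict's conventions for attributes and text nodes):

```python
def drop_meta_keys(node):
    """Recursively remove keys starting with '@' or '#' from a parsed-XML dict.

    xmltodict prefixes attributes with '@' and stores text content
    under '#text', so stripping both leaves only the element structure.
    """
    if isinstance(node, dict):
        return {
            key: drop_meta_keys(value)
            for key, value in node.items()
            if not key.startswith(("@", "#"))
        }
    if isinstance(node, list):
        return [drop_meta_keys(item) for item in node]
    return node


# Shape of xmltodict.parse('<a id="1"><b>x</b></a>'):
parsed = {"a": {"@id": "1", "b": "x"}}
# drop_meta_keys(parsed) → {"a": {"b": "x"}}
```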
-
Top python libraries/ frameworks that you suggest every one
Nope, sorry, it's just an XML generator. The Python stdlib offers https://docs.python.org/3/library/xml.etree.elementtree.html and PyPI offers https://github.com/martinblech/xmltodict for parsing, and you could write CSV with csvwriter or pandas.
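To illustrate the stdlib route named above, here is a minimal sketch that parses XML with xml.etree.ElementTree and writes CSV with the csv module (the tag and field names are invented for the example):

```python
import csv
import io
import xml.etree.ElementTree as ET


def xml_rows_to_csv(xml_text: str, row_tag: str, fields: list) -> str:
    """Extract <row_tag> elements from an XML string and render them as CSV."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(fields)  # header row
    for element in root.iter(row_tag):
        # findtext returns the child element's text, or "" if it is absent
        writer.writerow([element.findtext(field, default="") for field in fields])
    return out.getvalue()


xml_text = "<items><item><name>pen</name><price>2</price></item></items>"
csv_text = xml_rows_to_csv(xml_text, "item", ["name", "price"])
# csv_text → "name,price\r\npen,2\r\n"
```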
- Dict or List to store table-like data
-
Like JQ, but for HTML
xmlstarlet is really nothing like jq as a language. But yes, I use it because it is the best command-line XML processor I've found. That's the only similarity to jq.
Is this the yq? https://kislyuk.github.io/yq/ It does contain an 'xq', which is a literal wrapper for jq, piping output into it after transcoding XML to JSON using xmltodict https://github.com/martinblech/xmltodict (which explodes XML into separate JSON data structures).
This is a bash one-liner! But TBF it really is a 'jq for xml'. I think it would be horrible for some things, but you could also do a lot of useful things painlessly.
- Parsing unknown XML file with Python?
-
I used raw data from my watch (and Python) to make a map of all the NH48 hikes from this year. I hiked Liberty and Flume before I got the watch in June, so I need to do those again! Color-coded by altitude.
Super easy, take a look at xmltodict: https://github.com/martinblech/xmltodict. xmltodict.parse(xml_str) gets you a dictionary.
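A small illustration of that one-liner, assuming xmltodict is installed (the GPX-style input below is made up):

```python
import xmltodict  # third-party: pip install xmltodict

xml_str = '<trk><name>Liberty</name><trkpt lat="44.1" lon="-71.6"/></trk>'
doc = xmltodict.parse(xml_str)

# Element text is plain values; attributes become '@'-prefixed keys:
# doc["trk"]["name"]          → "Liberty"
# doc["trk"]["trkpt"]["@lat"] → "44.1"
```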
requests-html
- Will the requests-html library work like selenium?
-
8 Most Popular Python HTML Web Scraping Packages with Benchmarks
requests-html
-
How to batch scrape Wall Street Journal (WSJ)'s Financial Ratios Data?
Ya, thanks for the advice. When using the requests_html library, I tried to slow things down with response.html.render(timeout=1000), but it raises a RuntimeError on Google Colab instead: https://github.com/psf/requests-html/issues/517.
- Note, the first time you ever run the render() method, it will download Chromium into your home directory (e.g. ~/.pyppeteer/). This only happens once.
-
Data scraping tools
For dynamic JS, prefer requests-html with XPath selection.
-
Which string to lower case method do you use?
Example: requests-html which has a rather exhaustive README.md, but their dedicated page is not that helpful, if I remember correctly, and currently the domain is suspended.
-
Top python libraries/ frameworks that you suggest every one
When it comes to web scraping, the usual libraries people recommend are beautifulsoup, lxml, or selenium. But I highly recommend people check out requests-html too. It's a happy medium: about as easy to use as beautifulsoup, yet good enough for dynamic, JavaScript-loaded data where a browser emulator like selenium would be overkill.
- How to make all https traffic in program go through a specific proxy?
-
Requests_html not working?
Quite possible. If you look at the requests-html source code, it is simply one single Python file that acts as a wrapper around a bunch of other packages (requests, pyppeteer for Chromium, parse, lxml, etc.) plus a couple of convenience functions. So it could easily be some sort of bad dependency resolution.
-
Web Scraping in a professional setting: Selenium vs. BeautifulSoup
What I do is try to see if I can use requests_html first before trying selenium. requests_html is usually enough if I don't need to interact with browser widgets or if the authentication isn't too difficult to reverse engineer.
What are some alternatives?
lxml - The lxml XML toolkit for Python
Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
untangle - Converts XML to Python objects
MechanicalSoup - A Python library for automating interaction with websites.
MarkupSafe - Safely add untrusted strings to HTML/XML markup.
requests - A simple, yet elegant HTTP library. [Moved to: https://github.com/psf/requests]
pyquery - A jquery-like library for python
feedparser - Parse feeds in Python
xhtml2pdf - A library for converting HTML into PDFs using ReportLab
RoboBrowser
xmldataset - xmldataset: xml parsing made easy 🗃️
pyspider - A Powerful Spider(Web Crawler) System in Python.