| | lxml | NetworkX |
|---|---|---|
| Mentions | 17 | 61 |
| Stars | 2,588 | 14,278 |
| Growth | 1.4% | 1.5% |
| Activity | 9.6 | 9.6 |
| Latest Commit | 7 days ago | 1 day ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lxml
-
8 Most Popular Python HTML Web Scraping Packages with Benchmarks
lxml
- Looking for someone to web scrape housing data needed for research. Will pay you for your work!!
-
13 ways to scrape any public data from any website
Parsel is a library built to extract data from XML/HTML documents, with support for XPath and CSS selectors, and it can be combined with regular expressions. It uses the lxml parser under the hood by default.
-
lazy and fast .mpd file parser - for video streaming
So, now that I no longer work in that industry and had some free time, I created a lazy parsing package using lxml instead of the xml parser in the standard library, which can help people who want a Python-only parsing solution.
-
Guide to working with fancier XML documents with python?
Seriously, use LXML.
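For basic use, lxml.etree follows the familiar ElementTree API (plus full XPath 1.0 and much more). This sketch sticks to the shared subset, so it runs on the stdlib parser too; the document and element names are made up:

```python
import xml.etree.ElementTree as ET  # lxml.etree exposes the same core API

doc = ET.fromstring(
    "<catalog><book id='1'><title>A</title></book>"
    "<book id='2'><title>B</title></book></catalog>"
)

# find all books anywhere in the tree and pull out their titles
titles = [book.findtext("title") for book in doc.findall(".//book")]
```

Swapping the import for `from lxml import etree as ET` gives the same result, with access to `doc.xpath(...)` for full XPath expressions.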
- There is framework for everything.
- how to find text in website ?
-
Parsing XML file deletes whitespace. How to avoid it?
I got curious about this now so I did some tests on my own, and it appears that the XML parser implementation in Python does indeed strip all newline characters from attributes. Whether this is according to the XML standard I do not know; I also briefly tried an alternative XML implementation for Python and it behaves the same, so I would assume that this is standard behavior, but I'm not knowledgeable enough about XML to say for certain.
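For what it's worth, this is standard behavior: XML 1.0 attribute-value normalization requires a conformant parser to replace literal newlines (and tabs) in attribute values with spaces; only a character reference like `&#10;` survives. A quick check with the stdlib parser (lxml behaves the same way):

```python
import xml.etree.ElementTree as ET

# a literal newline in the attribute value gets normalized to a space
raw = ET.fromstring('<tag attr="line1\nline2"/>')

# a character reference for the newline is preserved
ref = ET.fromstring('<tag attr="line1&#10;line2"/>')

raw.get("attr")  # the newline became a space
ref.get("attr")  # the newline survived
```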
-
Use case for ETL over ELT?
I use lxml for the XML parsing and pyodbc as the ODBC library. We have a small team so I just keep it as simple as possible:
1. A cursor yields the XML documents from a SQL query as a stream
2. A generator function parses each XML document and yields the rows (you could parallelize this step)
3. Stream each of the resulting rows to a single CSV file
4. Scoop up the resulting CSV file into the target database (usually with the DB engine's loader; bulk insert isn't so fast over ODBC)
It ends up being a straightforward, low-overhead approach.
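The steps above can be sketched with stand-ins: the API-compatible stdlib ElementTree in place of lxml, a plain list in place of the pyodbc cursor, and made-up element names, so this is an illustration of the shape of the pipeline, not the author's actual code:

```python
import csv
import io
import xml.etree.ElementTree as ET  # stands in for lxml.etree (same core API)

def rows_from_documents(xml_docs):
    # Step 2: a generator parses each XML document and yields flat rows.
    # Element names ("item", "id", "value") are illustrative assumptions.
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            yield (item.get("id"), item.findtext("value"))

def stream_to_csv(xml_docs, out):
    # Step 3: stream every row into a single CSV file.
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(["id", "value"])
    for row in rows_from_documents(xml_docs):
        writer.writerow(row)

# Step 1 would be a pyodbc cursor yielding documents; a list stands in here.
docs = ['<root><item id="1"><value>a</value></item></root>']
buf = io.StringIO()
stream_to_csv(docs, buf)
# Step 4 (bulk-loading the CSV) is left to the DB engine's loader.
```

Because everything is a generator feeding a CSV writer, no more than one document's rows are held in memory at a time.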
-
CompactLogix: Implementing HTTP requests & XML Data Transfer via TCP/IP
If that sounds too weird, maybe take a look at pycomm3; Python also has lxml as well as requests. You could write a script that retrieves the data from the CLX using the appropriate pycomm3 driver, then do XML things with the data using lxml and transmit the data over HTTP using requests.
NetworkX
-
Routes to LANL from 186 sites on the Internet
Built from this data... https://github.com/networkx/networkx/blob/main/examples/grap...
-
The Hunt for the Missing Data Type
I think one of the elements that the author is missing here is that graphs are sparse matrices, and thus can be expressed with linear algebra. They mention adjacency matrices, but not sparse adjacency matrices or incidence matrices (which can express multigraphs and hypergraphs).
Linear algebra is how almost all academic graph theory is expressed, and large chunks of machine learning and AI research are expressed in this language as well. There was a recent thread here about PageRank and how it's really an eigenvector problem over a matrix. The reality is that all graphs are matrices, typically sparse ones.
One question you might ask is, why would I do this? Why not just write my graph algorithms as a function that traverses nodes and edges? And one of the big answers is, parallelism. How are you going to do it? Fork a thread at each edge? Use a thread pool? What if you want to do it on CUDA too? Now you have many problems. How do you know how to efficiently schedule work? By treating graph traversal as a matrix multiplication, you just say Ax = b, and let the library figure it out on the specific hardware you want to target.
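As a minimal sketch of that idea: one BFS frontier expansion is exactly a matrix-vector product over the boolean (OR/AND) semiring. GraphBLAS implementations do the same thing with sparse, parallel kernels; the toy adjacency matrix below is made up for illustration:

```python
# Directed graph as a dense boolean adjacency matrix (a toy example;
# real GraphBLAS code would use a sparse representation).
A = [
    [0, 1, 1, 0],  # node 0 -> 1, 2
    [0, 0, 0, 1],  # node 1 -> 3
    [0, 0, 0, 1],  # node 2 -> 3
    [0, 0, 0, 0],  # node 3 has no out-edges
]

def bfs_step(A, frontier):
    # b = A^T x over the boolean semiring: + becomes OR, * becomes AND.
    # The result marks every node reachable in one hop from the frontier.
    n = len(A)
    return [any(A[i][j] and frontier[i] for i in range(n)) for j in range(n)]

frontier = [True, False, False, False]  # start the search at node 0
reached = bfs_step(A, frontier)         # nodes one hop from node 0
```

Iterating `bfs_step` until the frontier stops growing is a full BFS, and the scheduling of that multiplication is exactly what a library can optimize per target hardware.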
Here, for example, is a recent question on the NetworkX repo about how to find the boundary of a triangular mesh; it's a single line of GraphBLAS if you consider the graph as a matrix:
https://github.com/networkx/networkx/discussions/7326
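Here is a hedged pure-Python sketch of the idea behind that one-liner (not the actual GraphBLAS code from the discussion): in a triangular mesh graph, an edge lies on the boundary iff it belongs to exactly one triangle, and squaring the adjacency matrix counts exactly those triangles, since (A @ A)[i][j] is the number of common neighbors of i and j:

```python
def boundary_edges(A):
    # common[i][j] = number of common neighbors of i and j = (A @ A)[i][j],
    # which for a triangular mesh is the number of triangles on edge (i, j).
    n = len(A)
    common = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
    return sorted((i, j) for i in range(n) for j in range(i + 1, n)
                  if A[i][j] and common[i][j] == 1)

# Toy mesh: two triangles (0,1,2) and (1,2,3) sharing the interior edge (1,2).
A = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [0, 1, 1, 0]]
```

In GraphBLAS the same computation is a masked sparse matrix multiply, so it stays one expression and scales to large meshes.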
This brings a very powerful language to the table: linear algebra, a language spoken by every scientist, engineer, mathematician, and researcher on the planet. By treating graphs like matrices, graph algorithms become expressible as mathematical formulas. For example, neural networks are graphs of adjacent layers, and the operation used to traverse from layer to layer is matrix multiplication. The same idea generalizes to all graphs.
There is a lot of very new and powerful research and development going on around sparse graphs with linear algebra in the GraphBLAS API standard and its best reference implementation, SuiteSparse:GraphBLAS:
https://github.com/DrTimothyAldenDavis/GraphBLAS
SuiteSparse provides highly optimized, parallel, CPU- and GPU-supported sparse matrix multiplication. This is relevant because traversing graph edges IS matrix multiplication once you realize that graphs are matrices.
Recently NetworkX has grown the ability to use different "graph engine" backends, and one of the first to be developed uses the python-graphblas library, which binds to SuiteSparse. I'm not a direct contributor to that particular work, but as I understand it there have been great results.
-
Build the dependency graph of your BigQuery pipelines at no cost: a Python implementation
In the project we used the Python library networkx and a DiGraph object (directed graph). To detect table references in a query, we use sqlglot, a SQL parser (among other things) that works well with BigQuery.
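NetworkX's DiGraph carries the dependency edges in that project; as a dependency-free sketch of the same idea, the stdlib graphlib can already topologically order such a graph. The table names and the extracted references below are made up stand-ins for what sqlglot would produce:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical table -> referenced-tables mapping, as sqlglot might extract
# it from each pipeline query (edges of the dependency DiGraph).
deps = {
    "reporting.daily": {"staging.orders", "staging.users"},
    "staging.orders": {"raw.orders"},
    "staging.users": {"raw.users"},
}

# A valid build order: every table appears after the tables it reads from.
order = list(TopologicalSorter(deps).static_order())
```

With networkx you would instead add the same pairs via `DiGraph.add_edge` and get ordering, cycle detection, ancestors/descendants, and drawing on top.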
- NetworkX – Network Analysis in Python
-
Custom libraries and utility tools for challenges
If you program in Python, you can use NetworkX for that. But it's probably a good idea to implement the basic algorithms yourself at least once.
-
Google open-sources their graph mining library
For those wanting to play with graphs and ML: I was browsing the ArangoDB docs recently and saw that it includes integrations with various graph libraries and machine learning frameworks [1]. I also saw a few Jupyter notebooks dealing with machine learning on graphs [2].
Integrations include:
* NetworkX -- https://networkx.org/
* DeepGraphLibrary -- https://www.dgl.ai/
* cuGraph (Rapids.ai Graph) -- https://docs.rapids.ai/api/cugraph/stable/
* PyG (PyTorch Geometric) -- https://pytorch-geometric.readthedocs.io/en/latest/
--
1: https://docs.arangodb.com/3.11/data-science/adapters/
2: https://github.com/arangodb/interactive_tutorials#machine-le...
-
org-roam-pygraph: Build a graph of your org-roam collection for use in Python
org-roam-ui is a great interactive visualization tool, but its main use is visualization. The hope of this library is that it could be part of a larger graph analysis pipeline. The demo provides an example graph visualization, but what you choose to do with the resulting graph certainly isn't limited to that. See for example networkx.
What are some alternatives?
xmltodict - Python module that makes working with XML feel like you are working with JSON
Numba - NumPy aware dynamic Python compiler using LLVM
selectolax - Python binding to Modest and Lexbor engines (fast HTML5 parser with CSS selectors).
Dask - Parallel computing with task scheduling
html5lib - Standards-compliant library for parsing and serializing HTML documents and fragments in Python
julia - The Julia Programming Language
untangle - Converts XML to Python objects
RDKit - The official sources for the RDKit library
bleach - Bleach is an allowed-list-based HTML sanitizing library that escapes or strips markup and attributes
snap - Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library.
pyquery - A jQuery-like library for Python
SymPy - A computer algebra system written in pure Python