jmespath.py vs brackit

| | jmespath.py | brackit |
|---|---|---|
| Mentions | 30 | 21 |
| Stars | 2,049 | 45 |
| Growth | 1.8% | - |
| Activity | 0.0 | 6.9 |
| Latest commit | 9 days ago | about 2 months ago |
| Language | Python | Java |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jmespath.py
- Automating Nightly Local Database Refreshes from Azure Blob Storage with Docker
The Azure CLI lets us write queries to filter the results of the az storage blob list command. The queries are written in JMESPath, which is a query language for JSON. In this case, we are filtering the results to only include blobs that end with the .bacpac extension and then selecting the first one as ordered by the lastModified property. If there are no blobs found, the script exits with a failure code. If we find a blob, we download it to the local path specified by the localPath variable.
- What's New in Python 3.12
For JSON there is the `jmespath` library which might help.
- jq 1.7 Released
I love jq, but I also use JMESPath (especially with AWS CLI), yq (bundled with tomlq and xq as well), and dasel [2]. I also wish hclq [3] wasn't so dead!
- Announcing serde-query 0.2.0
Probably writing the query side of things is a lot of the fun here, but there is actually a spec (and a complying Rust impl) you can hook into for this JQ-like querying: https://jmespath.org/ ( https://github.com/jmespath/jmespath.rs ).
- Spring Boot logging with Loki, Promtail, and Grafana (Loki stack)
Thanks to custom variables that use labels, we can create various filters for the dashboard. You can look up my configuration of variables and extend it analogously for your own needs. At the top, I marked the filter with the detected pods in the selected namespace. In the lower part, you can see a preview of all labels associated with a single log line. Most labels are meta information that Promtail adds while scraping targets; this part of the Promtail configuration provides them. In this section, I also marked a few labels that do not come out of the box, e.g. level, class, and thread. We added these labels using the Promtail json stage. You need to know that Promtail processes scraped logs in a pipeline, which is comprised of a set of stages. The json stage is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data.
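A Promtail pipeline with a json stage might look roughly like this (a sketch; the job name and JSON field names are assumptions, not the author's actual configuration):

```yaml
scrape_configs:
  - job_name: spring-boot
    pipeline_stages:
      # Parse the log line as JSON; each value is a JMESPath expression
      # evaluated against the parsed line to extract data.
      - json:
          expressions:
            level: level
            class: logger_name
            thread: thread_name
      # Promote the extracted values to labels on the log entry.
      - labels:
          level:
          class:
          thread:
```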
- jmespath.py VS jertl - a user suggested alternative
2 projects | 31 Oct 2022
- I've built a PathDict, a library that makes it easy to work with dicts!
Interesting. How does this compare to JMESPath? Not saying JMESPath is superior, just wondering whether you were aware of it.
- Ask HN: What is something you built but never marketed?
- I wrote a JSON parsing library that makes it easy to query and even do arithmetic operations on JSON.
We also have jmespath
- Type-Checked Keypaths in Rust
JMESPath is what JSONPath should have been: specified, standardized, with tests and implementations for multiple languages.
brackit
- Show HN: Bitemporal, Binary JSON Based DBS and Event Store
- Show HN: Evolutionary (binary) JSON data store (full immutable revision history)
I already posted the project a couple of years ago and it gained some interest, but a lot has been done since then, especially regarding performance: a completely new JSON store, a REST API, various refactored internals, an improved JSONiq-based query engine allowing updates, a (now already dated) web UI, a new Kotlin-based CLI, and Python and TypeScript clients to ease the use of Sirix...
The first prototypes of a precursor date back to 2005.
So, what is it all about?
I'm working on an evolutionary data store in my spare time[1]. It is based on the idea of getting rid of the need for a second transaction log (the WAL) by using a persistent index tree of tries as the log itself, preserving the previous revision through copy-on-write and path copying to the root. Only a single read/write transaction is permitted at a time, in parallel to N read-only transactions, which are bound to specific revisions at their start. The single writer is permitted per resource (comparable to a table/relation in a relational DB) within a database; reads do not involve any locks at all.
The idea is that the system atomically swaps the tree root to the new version (replicated). If something fails, the log can simply be truncated to the former tree root.
Thus, the system has many similarities with Git (structural sharing of unchanged nodes/pages) and ZFS snapshots (regarding the latter the keyed trie has been inspired by ZFS, as well as that checksums for child pages are stored in parent pages in the references to the child pages)[2].
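The copy-on-write path copying described above can be sketched in a few lines. This is a toy persistent binary tree, not SirixDB's actual keyed trie; all names are illustrative. Updating a leaf copies only the nodes on the path to the root, so the old revision stays intact and unchanged subtrees are shared between revisions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def update(root: Optional[Node], path: str, value: int) -> Node:
    """Return a new root with `value` at the end of `path` ('L'/'R' steps),
    copying only the nodes along the path (path copying)."""
    if not path:
        return Node(value,
                    root.left if root else None,
                    root.right if root else None)
    if path[0] == "L":
        return Node(root.value, update(root.left, path[1:], value), root.right)
    return Node(root.value, root.left, update(root.right, path[1:], value))

rev1 = Node(1, Node(2), Node(3))
rev2 = update(rev1, "L", 20)     # commit a new revision
assert rev1.left.value == 2      # the old revision is untouched
assert rev2.left.value == 20     # the new revision sees the change
assert rev2.right is rev1.right  # the unchanged subtree is shared
```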
You can of course simply execute time travel queries on the whole revision history, add commit comments and the author to answer questions such as who committed what at which point in time and why...
The system does not simply copy full data pages; instead, it applies a sliding snapshot versioning algorithm to keep storage space to a minimum.
Thus, it's best suited for fast flash drives with fast random reads and sequential writes. Data is never overwritten, thus audit trails are given for free.
The system stores fine-granular JSON nodes, so the structure and size of an object has almost no limits. A path summary, an unordered set of all paths to leaf nodes in the tree, is built and enables various optimizations. Furthermore, a rolling hash is optionally built, and during inserts all ancestor node hashes are updated.
It also optionally keeps track of update operations and the context nodes involved during transaction commits. Thus, you can easily get the changes between revisions, check the full history of nodes, and navigate in time to the first, last, next, and previous revision of a node...
You can also open a revision at a specific system time, revert to a revision, and commit a new version while preserving all revisions in between.
As mentioned, one feature is that objects can be arbitrarily nested, with almost no limits on their number, and updates are cheap.
A dated Jupyter notebook with some examples can be found in [3] and overall documentation in [4].
The query engine[5], Brackit, is retargetable (a couple of interfaces and rewrite rules have to be implemented for a DB system). In particular, it finds implicit joins and applies known algorithms from the relational database world to optimize joins and aggregate functions, thanks to set-oriented processing of the operators.[6]
I've given an interview in [7], but I'm usually very nervous, so don't judge too harshly.
Give it a try and happy coding!
Kind regards
Johannes
[1] https://sirix.io | https://github.com/sirixdb/sirix
[2] https://sirix.io/docs/concepts.html
[3] https://colab.research.google.com/drive/1NNn1nwSbK6hAekzo1YbED52RI3NMqqbG#scrollTo=CBWQIvc0Ov3P
[4] https://sirix.io/docs/
[5] http://brackit.io
[6] https://colab.research.google.com/drive/19eC-UfJVm_gCjY--koOWN50sgiFa5hSC
[7] https://youtu.be/Ee-5ruydgqo?si=Ift73d49w84RJWb2
- Evolutionary, JSON data store (keeping the full revision history)
- Java open-source projects that need help from the community
Append-only database system (based on a persistent index structure): https://github.com/sirixdb/sirix or a retargetable query compiler: https://github.com/sirixdb/brackit
- Ask HN: Do you prefer Svelte or SolidJS?
Hello,
I want to find enthusiastic OSS frontend developers for my JSON data store project[1], which is able to retain the full revision history of a database resource (binary JSON) through small sized copy-on-write snapshots of the main index tree of tries and a novel sliding snapshot algorithm.
As I'm a fan of compilers (http://brackit.io), I think either working on the current frontend with Svelte[2], which is really dated and uses Sapper, or building a new frontend with SolidJS would be great.
What are the advantages/disadvantages of both frameworks in your opinion? I'm a backend software engineer, but maybe SolidJS is more familiar to frontend devs because of JSX and at least in benchmarks it seems to be faster. But maybe the differences except for the different syntaxes aren't that big.
I envision visualizations for comparing revisions of resources or subtrees therein and also to visualize time travel queries. A screenshot of the old frontend: https://github.com/sirixdb/sirix/blob/master/Screenshot%20from%202020-09-28%2018-50-58.png
Let me know which framework you'd prefer for the task at hand and what are the advantages/disadvantages in your opinion for both of them in general.
If you want to help, it's even better. Let me know :-)
[1] https://sirix.io || https://github.com/sirixdb/sirix
- Implementing a Merkle Tree for an Immutable Verifiable Log
Basically JSONiq, with a few minor syntax differences.
Our query engine/compiler can be used by other data stores as well.
- Zq: An Easier (and Faster) Alternative to Jq
I'm working on a JSONiq-based implementation to jointly process JSON and XML data. The compiler uses set-oriented processing and is also meant to provide a base for JSON-based database systems with shared common optimizations.
The language itself borrows a lot of concepts from functional languages, such as higher-order functions and closures... you can also develop modules with functions for easy reuse...
A simple join, for instance, looks like this (an illustrative sketch with inline data):

let $stores := [
  { "store number": 1, "state": "MA" },
  { "store number": 2, "state": "CA" }
]
let $sales := [
  { "store number": 1, "product": "broiler" },
  { "store number": 2, "product": "socks" }
]
for $store in $stores[], $sale in $sales[]
where $store=>"store number" = $sale=>"store number"
return { "state": $store=>state, "sold": $sale=>product }
That's one of the main steps forward for Brackit, a retargetable JSONiq query engine/compiler (http://brackit.io), the append-only data store SirixDB (https://sirix.io), and a new web frontend. My vision is to explore not only the most recent revision but also any older revision, to display the diffs, and to display the results of time travel queries... help is highly welcome, as I'm a backend engineer myself, working on the query engine and the data store :-)
- Select, put and delete data from JSON, TOML, YAML, XML and CSV files
- Brackit: A retargetable JSONiq query engine
Hi all,
Sebastian and his students did a tremendous job creating Brackit[1] in the first place as a retargetable query engine for different data stores. They worked hard to optimize aggregations and joins. Despite its clear database query engine roots, it's also usable as a standalone ad-hoc in-memory query engine.
Sebastian did his Ph.D. research at TU Kaiserslautern in the database systems group of Theo Härder. Theo Härder coined the well-known acronym ACID, the desired properties of transactions, together with Andreas Reuter.
As he's currently not maintaining the project anymore, I stepped up and forked the project a couple of years ago. I'm using it for my evolutionary, immutable data store SirixDB[2], which stores the entire history of your JSON data in small-sized snapshots in an append-only file (tailored binary format similar to BSON). It's exceptionally well suited for audits, undo operations, and sophisticated analytical time travel queries.
I've changed a lot, such that Brackit is becoming more and more compatible with the JSONiq query language standard; I added JSONiq update primitives and array slices as known from Python, and fixed several bugs. Furthermore, I've added interfaces for temporal data stores, temporal XPath axes to navigate not only in space but also in time, temporal extension functions in SirixDB, index rewrite rules, and so on.
As Brackit can query XML, you're of course able to transform XML data to JSON and vice versa.
Moshe and I are working on a Jupyter Notebook / Tutorial[3] for interactive queries.
We're looking forward to your bug reports, issues, and questions. Contributions are, of course, highly welcome. Maybe even implementations for other data stores or common query optimizations.
Furthermore, we'd gladly see further (university-based?) research.
It should, for instance, be possible to add support for SIMD vector instructions in the future, as the query engine is already set-oriented and processes sets of tuples for the so-called FLWOR expressions (see JSONiq). Brackit rewrites FLWOR expression trees in the AST into a pipeline of operations in order to port optimizations from relational query engines for efficient join processing and aggregate expressions. Furthermore, certain parts of queries are parallelizable, as detailed in Sebastian's thesis. We also envision a compiler stage for distributed processing (the first research used MapReduce, but we can now use better-suited approaches, of course).
Kind regards
Johannes
[1] https://github.com/sirixdb/brackit
[2] https://sirix.io | https://github.com/sirixdb/sirix
[3] https://colab.research.google.com/drive/19eC-UfJVm_gCjY--koO...
What are some alternatives?
jq - Command-line JSON processor [Moved to: https://github.com/jqlang/jq]
yq - yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
jfq - JSONata on the command line
jello - CLI tool to filter JSON and JSON Lines data with Python syntax. (Similar to jq)
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
sirix - SirixDB is an embeddable, bitemporal, append-only database system and event store, storing immutable lightweight snapshots. It keeps the full history of each resource. Every commit stores a space-efficient snapshot through structural sharing. It is log-structured and never overwrites data. SirixDB uses a novel page-level versioning approach.
jmespath.terminal - JMESPath exploration tool in the terminal
cloud-custodian - Rules engine for cloud security, cost optimization, and governance, DSL in yaml for policies to query, filter, and take actions on resources
dacite - Simple creation of data classes from dictionaries.
textql - Execute SQL against structured text like CSV or TSV
xsv - A fast CSV command line toolkit written in Rust.