ndjson.github.io
datasette
| | ndjson.github.io | datasette |
|---|---|---|
| Mentions | 17 | 187 |
| Stars | 23 | 8,934 |
| Growth | - | - |
| Activity | 0.0 | 9.3 |
| Latest commit | 9 months ago | 5 days ago |
| Language | CSS | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ndjson.github.io
-
What the fuck
However, since every JSON document can be represented in a single line, something like newline-delimited JSON / JSON Lines feels like it would've been more suitable for that kind of data.
- The XML spec is 25 years old today
-
Consider Using CSV
No one uses that format for streamed JSON; see ndjson and jsonl
http://ndjson.org/
The size complaint is overblown, as repeated fields are compressed away.
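The compression claim is easy to check. A minimal sketch (the record shape and field names are made up for illustration): gzip removes most of the cost of repeating the same keys on every line.

```python
import gzip
import json

# 1,000 NDJSON records that all share the same field names.
records = [{"timestamp": i, "level": "info", "message": "ok"}
           for i in range(1000)]
raw = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# The repeated keys are highly redundant, so gzip collapses them.
packed = gzip.compress(raw)
ratio = len(packed) / len(raw)
```

On data like this the compressed size ends up at a small fraction of the original, which is the point being made: the per-record key overhead mostly disappears on the wire.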
As other folks rightfully commented, CSV is a minefield. One should assume every CSV file is broken in some way. They also don't enumerate any of the downsides of CSV.
What people should consider is using formats like Avro or Parquet that carry their schema with them, so the data can be loaded and analyzed without having to manually deal with column meanings.
-
DevTool Intro: The Algolia CLI!
What is ndjson? Newline delimited JSON is the format the Algolia CLI reads from and writes to files. This means that any command that passes ndjson formatted data as output or accepts it as input can be piped together with an Algolia CLI command! We’ll see more of this in the next example
-
Reading the JSON file this way loads the entire document into memory.
You might consider using json-lines format (also known as newline-delimited JSON), in which each line is a separate JSON document so they can be loaded individually.
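That suggestion can be sketched in Python (the file name and record shape are hypothetical); each line is parsed on its own, so only one record is ever in memory at a time.

```python
import json
import tempfile

def iter_jsonl(path):
    """Yield one parsed record per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():          # tolerate blank lines
                yield json.loads(line)

# Tiny demo with a throwaway file; real files can be arbitrarily
# large because the generator never holds more than one line.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl",
                                 delete=False) as f:
    f.write('{"id": 1}\n{"id": 2}\n')
    demo_path = f.name

records = list(iter_jsonl(demo_path))
```

Materializing with `list()` is only for the demo; in practice you would loop over the generator directly.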
-
How to format it as json?
The format you're getting is known as Newline-Delimited JSON. Instead of trying to parse the whole input and pass that to the JSON Decoder, you can use something like bufio.Scanner to get and parse it line by line.
-
Arrow2 0.12.0 released - including almost complete support for Parquet
This is in opposition to NDJSON, which allows splitting records without deserializing the JSON itself, via e.g. read_lines. FWIW, CSV suffers from the same problem as JSON - it is generally not possible to break it into records without deserializing. It is worse than NDJSON because the character \n may appear at any position within an item, thus forbidding read_lines.
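The difference can be demonstrated in Python (the sample records are made up): a JSON serializer escapes a newline inside a string as \n, so every physical line of NDJSON is exactly one record, while RFC 4180 allows a literal newline inside a quoted CSV field.

```python
import csv
import io
import json

# NDJSON: the embedded newline is escaped as \n in the output,
# so the two records occupy exactly two physical lines.
records = [{"text": "line one\nline two"}, {"text": "ok"}]
ndjson = "\n".join(json.dumps(r) for r in records)

# CSV (RFC 4180): the same value is written as a quoted field
# containing a literal newline, so naive line splitting sees
# three "lines" for two records...
buf = io.StringIO()
csv.writer(buf).writerows([["line one\nline two"], ["ok"]])

# ...and only a full CSV parse recovers the record boundaries.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
```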
-
Processing large JSON files in Python without running out of memory
I've always seen it referred to as ndjson
-
Speeding up Go's builtin JSON encoder up to 55% for large arrays of objects
I think this would be fine, as long as the CSV layer was still parsable per RFC 4180; then you could still use a normal CSV parser to parse the CSV layer and a normal JSON parser to parse the JSON layer. My worry with your example is that it is neither format, so it will need custom serialisation and deserialisation logic, as it is essentially a brand new format.
https://datatracker.ietf.org/doc/html/rfc4180
If you’re looking for line-oriented JSON, another option would be ndjson: http://ndjson.org/
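A sketch of the layered approach described above (the column layout is hypothetical): a stock CSV parser handles the outer layer and a stock JSON parser handles the inner one, with no custom deserialisation logic.

```python
import csv
import io
import json

# An RFC 4180 CSV whose second column holds a JSON document.
# csv.writer quotes the commas and doubles the quotes for us.
rows = [["1", json.dumps({"name": "alice", "tags": ["a", "b"]})],
        ["2", json.dumps({"name": "bob", "tags": []})]]

buf = io.StringIO()
csv.writer(buf).writerows(rows)

# Round-trip: the CSV layer and the JSON layer are parsed by
# two ordinary, independent parsers.
parsed = [(rid, json.loads(payload))
          for rid, payload in csv.reader(io.StringIO(buf.getvalue()))]
```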
- IETF should keep XMPP as IM standard, instead of Matrix
datasette
-
Ask HN: High quality Python scripts or small libraries to learn from
Simon Willison's github would be a great place to get started imo -
https://github.com/simonw/datasette
- Show HN: TextQuery – Query and Visualize Your CSV Data in Minutes
-
Little Data: How do we query personal data? (2013)
I'm a fan of simonw's datasette/dogsheep ecosystem https://datasette.io/
-
LaTeX and Neovim for technical note-taking
I use Anki the exact same way. After a lifetime of learning I have accepted that I will never read over anything I write for myself voluntarily - so my two options are:
1. Write an article so good I can publish it and look it over myself later on. I did this last year with https://andrew-quinn.me/fzf/, for example.
2. Create Anki cards out of the material. Use the builtin Card Browser or even https://datasette.io/ on the underlying SQLite database in a pinch to search for my notes any time I have to.
-
Daily Price Tracking for Trader Joes
Were you aware of, or tempted by https://datasette.io/ for creating your solution?
- SQLite-Web: Web-based SQLite database browser written in Python
-
Ask HN: What two software products should have a kid?
Browsing HN, GitHub and the like we get to see a huge variety of software products and code bases.
I often see products and think - if this product X, got together with Y, it would be pretty cool - kind of like if they had a kid together.
Not too literally, but more on the conceptual level - my level of programming is low.
E.g. Just some....
- pocketbase.io & datasette (+with some more charting) [https://pocketbase.io, https://datasette.io]
-
Ask HN: Looking for a project to volunteer on? (February 2024)
You might like the Datasette project: https://datasette.io/
I don't think they are desperate for contributions but it's a welcoming environment and a fun project to hack on. You'll learn a lot just from reading the source and the incredibly informative PRs. The creator is a really talented developer with a great blog which shows up on the HN front page often.
-
Stuff I Learned during Hanukkah of Data 2023
Last year I worked through the challenges using VisiData, Datasette, and Pandas. I walked through my thought process and solutions in a series of posts.
-
What We Watched: A Netflix Engagement Report – About Netflix
> uploads of boring raw excel data and receive a nice UI
https://datasette.io/
What are some alternatives?
ndjson - Streaming line delimited json parser + serializer
nocodb - 🔥 🔥 🔥 Open Source Airtable Alternative
flatten-tool - Tools for generating CSV and other flat versions of the structured data
duckdb - DuckDB is an in-process SQL OLAP Database Management System
miller - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON
sql.js-httpvfs - Hosting read-only SQLite databases on static file hosters like Github Pages
babashka - A Clojure babushka for the grey areas of Bash (native fast-starting Clojure scripting environment) [Moved to: https://github.com/babashka/babashka]
litestream - Streaming replication for SQLite.
grop - helper script for the `gron | grep | gron -u` workflow
Sequel-Ace - MySQL/MariaDB database management for macOS
csv2sqlite
beekeeper-studio - Modern and easy to use SQL client for MySQL, Postgres, SQLite, SQL Server, and more. Linux, MacOS, and Windows.