nebula vs zsv

| | nebula | zsv |
|---|---|---|
| Mentions | 9 | 25 |
| Stars | 150 | 169 |
| Growth | 2.0% | - |
| Activity | 7.4 | 7.4 |
| Latest commit | about 1 month ago | 5 days ago |
| Language | C++ | C |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nebula
- Show HN: Turn any data into a fast analytical API
We use our in-house-built engine, open-sourced here: https://github.com/varchar-io/nebula
Yeah, Tinybird has lots of similarities. I will do more research on it; thanks for the reference.
- Show HN: Visualize your streaming data in real-time
- How would you build a BI platform that delivers "real time" insights to users on their smartphones and computers in a company of about 200 people?
Take a look at this open source project - it may be helpful - https://github.com/varchar-io/nebula
- Streaming multi-file SQL and CSV/TSV/etc., native/WASM and fastest CSV parser
Cool - I also hand-crafted a CSV parser following RFC 4180 a while ago. Do you have a repeatable way to benchmark the performance difference?
https://github.com/varchar-io/nebula/blob/master/src/storage...
- Looking for a recommendation for basic, cloud or server based reporting.
- If you're looking to host the solution yourself, bringing up a nebula cluster (even a single node) is simple; check out https://github.com/varchar-io/nebula
- Introduce an open-source project in data engineering
- How is Elasticsearch similar to MongoDB in terms of data storage and usage?
Many modern data systems are designed in a similar way - extract, index, and query - including many low-latency real-time analytical systems such as ClickHouse, Druid, Pinot, and nebula. Take nebula (https://github.com/varchar-io/nebula) as an example: it connects to real-time sources like Kafka, cloud storage, or pub/sub systems, extracts and indexes data from the message queue into its own distributed system, and provides low-latency queries on top of it for business use cases.
- Extremely-Fast Interactive Big Data Analytics
zsv
- Analyzing multi-gigabyte JSON files locally
If it's tabular in nature, maybe convert it to sqlite3 so you can make use of indexing, or to CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).
https://github.com/BurntSushi/xsv
https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Parsing CSV doesn't have to be slow if you use something like xsv or zsv (https://github.com/liquidaty/zsv) (disclaimer: I'm an author). CSV parsers are fast enough that unless you are doing something ultra-trivial such as "count rows", your bottleneck will be elsewhere.
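As a rough sketch (large.csv is a hypothetical input; zsv count is zsv's row-counting subcommand):
```
# naive line count for comparison (not CSV-aware: embedded newlines
# in quoted fields throw it off)
time wc -l large.csv

# CSV-aware row count with zsv
time zsv count large.csv
```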
The benefits of CSV are:
- human readable
- does not need to be typed (sometimes raw data, such as date-formatted data, is not amenable to typing without introducing a pre-processing layer that takes you further from the original data)
- accessible to anyone: you don't need to be a data person to double-click and open it in Excel or similar
The main drawback is that if your data is already typed, CSV does not communicate what the types are. You can alleviate this through various approaches, such as the one described at https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql..., though I wouldn't disagree that if you can be sure your starting data conforms to non-text data types, there are probably better formats than CSV.
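One generic way to reattach types (a sketch, not necessarily the approach in the linked doc; the table and column names are made up) is to land the CSV in sqlite3, where columns can be declared:
```
# create a typed table, then import the untyped CSV into it
# (.import --skip 1 drops the header row; requires sqlite3 >= 3.32)
sqlite3 typed.db \
  'create table t (id integer, price real, period text)' \
  '.mode csv' \
  '.import --skip 1 data.csv t'
```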
The main benefit of Arrow, IMHO, is less as a format for transmitting/communicating data and more as a format for data at rest that benefits from higher-performance column-based reads and compression.
- Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
- csvkit: Command-line tools for working with CSV
I wanted so much to use csvkit and all the features it had, but its horrendous performance made it unscalable, and the more I used it, the more technical debt I accumulated.
This was one of the reasons I wrote zsv (https://github.com/liquidaty/zsv). Maybe csvkit could incorporate the zsv engine and we could get the best of both worlds?
Examples (using the majestic million CSV):
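For instance, timing the same two-column extraction with each tool (the zsv select syntax here is an assumption; check `zsv select --help`):
```
# csvkit: extract two columns from the majestic million CSV
time csvcut -c GlobalRank,Domain majestic_million.csv > /dev/null

# zsv: the equivalent extraction
time zsv select -- GlobalRank Domain < majestic_million.csv > /dev/null
```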
- Ask HN: Programs that saved you 100 hours? (2022 edition)
- Show HN: Split CSV into multiple files to avoid Excel's 1M row limitation
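A line-based split is a few lines of awk (a sketch: the 1,000,000-row chunk size and the out_ filename prefix are arbitrary):
```
awk 'NR == 1 { hdr = $0; next }       # remember the header row
     (NR - 2) % 1000000 == 0 {        # start a new chunk every 1M data rows
         f = "out_" (++i) ".csv"
         print hdr > f                # repeat the header in each chunk
     }
     { print > f }' input.csv
```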
This of course assumes that each line is a single record, so you'll need some preprocessing if your CSV might contain embedded line-ends. For the preprocessing, you can use something like the `2tsv` command of https://github.com/liquidaty/zsv (disclaimer: I'm its author), which converts CSV to TSV and replaces newline with \n.
You can also use something like `xsv split` (see https://lib.rs/crates/xsv), which frankly is probably your best option as of today (though zsv will be getting its own shard command soon).
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
- Ask HN: Best way to find help creating technical doc (open- or closed-source)?
I'm looking for one-time help creating documentation (e.g. man pages, tutorials) for an open-source project (e.g. https://github.com/liquidaty/zsv), as well as product documentation for commercial products, but there's not enough need for a full-time job. It requires familiarity with, for lack of a better term, data janitorial work, and preferably with methods of auto-generating documentation. Any suggestions as to forums or other ways to find folks who might fit the bill for ad-hoc or part-time work of this nature?
- Q – Run SQL Directly on CSV or TSV Files
Nice work. I am a fan of tools like this and look forward to giving this a try.
However, in my first attempted query (version 3.1.6 on MacOS), I ran into significant performance limitations and, more importantly, it did not give correct output.
In particular, running on a narrow table with 1M rows (the same one used in the xsv examples), the command "select country, count(*) from worldcitiespop_mil.csv group by country" takes 12 seconds just to fail with a spurious error: 'no such column: country'.
Using sqlite3, it takes two seconds or so to load, less than a second to run, and gives me the correct result.
Using https://github.com/liquidaty/zsv (disclaimer: I'm one of its authors), I get the correct results in 0.95 seconds with the one-liner `zsv sql 'select country, count(*) from data group by country' worldcitiespop_mil.csv`.
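Side by side, the two routes look roughly like this (a sketch; zsv sql exposes the input file as a table named data, per the one-liner above):
```
# sqlite3 route: import first (~2s), then query (<1s)
sqlite3 cities.db \
  '.mode csv' \
  '.import worldcitiespop_mil.csv cities' \
  'select country, count(*) from cities group by country'

# zsv route: query the CSV directly in one step
zsv sql 'select country, count(*) from data group by country' worldcitiespop_mil.csv
```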
I look forward to trying it again sometime soon.
- A Trillion Prices
All this banter arguing over CSV, JSON, and sqlite seems unnecessary when you can just push format X through a pipe and get whichever format Y you want back out: https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sql...
(disclaimer: I'm one of the zsv authors)
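In pipe form that looks something like this (a sketch using the `2tsv` subcommand mentioned above; prices.csv is a hypothetical input):
```
# CSV in, TSV out, through a pipe; other target formats work the same way
cat prices.csv | zsv 2tsv > prices.tsv
```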
What are some alternatives?
AlphaPlot - Application for statistical analysis and data visualization which can generate different types of publication-quality 2D and 3D plots with extensive visual customization.
visidata - A terminal spreadsheet multitool for discovering and arranging data