| | adix | fast-sqlite3-inserts |
|---|---|---|
| Mentions | 4 | 11 |
| Stars | 38 | 363 |
| Growth | - | - |
| Activity | 7.2 | 0.0 |
| Last Commit | 11 days ago | about 1 year ago |
| Language | Nim | Rust |
| License | ISC License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
adix
- I/O is no longer the bottleneck
Note: Just concatenating the bibles keeps your hash map artificially small... which matters because, as you correctly note, the big deal is whether you can fit the histogram in the L2 cache. This matters even more if you go parallel, where N CPUs' L2 caches can speed things up a lot -- until your histograms blow out the CPU-private L2 cache sizes. https://github.com/c-blake/adix/blob/master/tests/wf.nim (or a port to your favorite language) might make it easy to play with these ideas.
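The linked wf.nim is Nim; a rough Python rendering of the same experiment (a plain Counter rather than adix's specialized tables, so only the input-size effect carries over) might look like:

```python
# Rough sketch of the word-frequency experiment in tests/wf.nim, using a
# plain Python Counter instead of adix's specialized hash tables.
import sys
import time
from collections import Counter

def word_histogram(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            counts.update(line.split())
    return counts

if __name__ == "__main__":
    start = time.perf_counter()
    hist = word_histogram(sys.argv[1])
    elapsed = time.perf_counter() - start
    # Few distinct words => small table => it can stay cache-resident;
    # concatenating the same text repeatedly does not grow this number.
    print(f"{len(hist)} distinct words in {elapsed:.3f}s")
    for word, n in hist.most_common(10):
        print(n, word)
```

Running it on one bible versus ten concatenated copies changes the row count but not the distinct-word count, which is the point of the comment above.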
- A Cost Model for Nim
[The expected maximum probe depth] is notably logarithmic - not unlike a B-tree.
When these expectations are exceeded, you can at least detect a DoS attack. If you wait until such depths are actually seen, you can activate a "more random" mitigation on the fly at about the same cost as the next resize/re-org/whatnot.
All you need to do is instrument your search to track the depth. There is an example of such a strategy in Nim at https://github.com/c-blake/adix for simple Robin Hood linear-probed tables.
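A minimal sketch of that instrumentation idea in Python - a plain linear-probed table, not adix's Robin Hood implementation, with an illustrative depth threshold:

```python
# Hedged sketch (not adix's actual code): an open-addressing table that
# tracks probe depth on every search and flags chains far deeper than the
# expected logarithmic bound - the hash-flooding signal described above.
import math

class DepthTrackedTable:
    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # (key, value) pairs or None
        self.count = 0
        self.suspicious = False  # set when a probe chain gets too deep

    def _probe(self, key):
        """Linear probe; returns index of key or of the first empty slot."""
        mask = len(self.slots) - 1
        i = hash(key) & mask
        depth = 0
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) & mask
            depth += 1
        # Expected worst-case depth is O(log n); much deeper chains suggest
        # adversarial keys, so switch to a keyed/random hash on the fly.
        if depth > 4 * max(1, int(math.log2(self.count + 2))):
            self.suspicious = True
        return i

    def insert(self, key, value):
        if (self.count + 1) * 4 > len(self.slots) * 3:  # keep load < 75%
            self._grow()
        i = self._probe(key)
        if self.slots[i] is None:
            self.count += 1
        self.slots[i] = (key, value)

    def _grow(self):
        old = [s for s in self.slots if s is not None]
        self.slots = [None] * (len(self.slots) * 2)
        self.count = 0
        for k, v in old:
            self.insert(k, v)
```

The depth check costs one comparison per search, which is why the mitigation can wait until an attack is actually observed.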
- Performance comparison: counting words in Python, Go, C++, C, Awk, Forth, Rust
Knuth-McIlroy comes up a lot; previous discussion at [1]. For this example I can make a Nim program [2] run at almost exactly the same speed as `wc -w`, yet the optimized C program runs 1.2x faster, not 3.34x slower - a whopping 4x discrepancy, much bigger than many of the ratios in the table. So people should be very cautious about drawing conclusions from any of this.
[1] https://news.ycombinator.com/item?id=24817594
[2] https://github.com/c-blake/adix/blob/master/tests/wf.nim
fast-sqlite3-inserts
- SQLite performance tuning: concurrent reads, multiple GBs and 100k SELECTs/s
I am experimenting with SQLite, trying to insert 1B rows in under a minute. The current best is 100M rows in 23s. I cut many corners to get that performance, but the tweaks might suit your workload.
I have explained my rationale and approach here: https://avi.im/blag/2021/fast-sqlite-inserts/
The repo: https://github.com/avinassh/fast-sqlite3-inserts
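The repo itself is Rust, but the flavor of those corner-cutting tweaks - durability-sacrificing PRAGMAs plus one prepared statement inside one big transaction - can be sketched in Python. The schema and PRAGMA values below are illustrative, not the repo's exact ones:

```python
# Hedged sketch of the corner-cutting described in the post (the actual
# repo is Rust). journal_mode=OFF and synchronous=0 trade durability,
# and even crash-safety, for raw insert speed.
import sqlite3
import random

def fast_insert(path: str, n_rows: int) -> None:
    conn = sqlite3.connect(path, isolation_level=None)  # manage txns manually
    conn.executescript("""
        PRAGMA journal_mode = OFF;     -- no rollback journal
        PRAGMA synchronous = 0;        -- never fsync; unsafe but fast
        PRAGMA cache_size = 1000000;   -- large page cache
        PRAGMA locking_mode = EXCLUSIVE;
        PRAGMA temp_store = MEMORY;
        CREATE TABLE IF NOT EXISTS user (
            id INTEGER PRIMARY KEY, area TEXT, age INTEGER);
    """)
    conn.execute("BEGIN")
    rows = ((f"{random.randrange(1_000_000):06d}", random.choice((5, 10, 15)))
            for _ in range(n_rows))
    # One prepared statement + one big transaction does most of the work.
    conn.executemany("INSERT INTO user (area, age) VALUES (?, ?)", rows)
    conn.execute("COMMIT")
    conn.close()

fast_insert("test.db", 1_000_000)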
- I/O is no longer the bottleneck
I am working on a project [0] that tries to generate 1 billion rows in SQLite in under a minute; it currently inserts 100M rows in 33 seconds. First, I generate the rows and insert them into an in-memory database, then flush everything to disk at the end. The flush to disk takes only 2 seconds, so 99% of the time is spent generating the rows and adding them to the in-memory B-tree.
For Python optimisation, have you tried PyPy? I ran the same code (zero changes) under PyPy and got a 3.5x speedup.
I published my findings here [1].
[0] - https://github.com/avinassh/fast-sqlite3-inserts
[1] - https://avi.im/blag/2021/fast-sqlite-inserts/
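The project does this in Rust, but the in-memory-then-flush pattern maps directly onto sqlite3's backup API in Python; here is a sketch with a made-up single-column schema:

```python
# Sketch of the in-memory-then-flush pattern: build the whole database
# in ':memory:', then copy it to disk in one pass at the end.
import sqlite3

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, age INTEGER)")
with mem:  # one transaction around all inserts
    mem.executemany(
        "INSERT INTO user (age) VALUES (?)",
        ((i % 90,) for i in range(1_000_000)),
    )

# The flush: a single sequential write of the finished B-tree to disk.
disk = sqlite3.connect("out.db")
mem.backup(disk)
disk.close()
mem.close()
```

Because the flush is one sequential pass, it is cheap relative to building the B-tree, which matches the 2-seconds-out-of-33 split described above.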
- Ask HN: Which personal projects got you hired?
- Is there any language that is as similar as possible to Python in syntax, readability, and features, but is statically typed?
I have a side project where I tried to insert one billion rows in SQLite. I was able to insert 100 million rows using Python in under 210 seconds; the same thing with PyPy took 120 seconds. I am wondering what kind of speed boost I would get with Cython.
- Ask for benchmark. The owner can’t verify an 18% perf gain, could you?
- Inserting One Billion Rows in SQLite Under A Minute
Measure, measure, measure! There is a PR that made really minor changes but got a 2x speed boost on the CPython version.
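In that measure-first spirit, a minimal timing harness in Python (the schema and `insert_batch` stand-in below are hypothetical, not the repo's code):

```python
# Minimal measure-first harness: time the same insert workload before and
# after a change. insert_batch is a stand-in for the code under test.
import sqlite3
import time

def insert_batch(conn: sqlite3.Connection, n: int) -> None:
    conn.executemany("INSERT INTO t (x) VALUES (?)", ((i,) for i in range(n)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

start = time.perf_counter()
with conn:  # commit once at the end
    insert_batch(conn, 1_000_000)
elapsed = time.perf_counter() - start
print(f"{1_000_000 / elapsed:,.0f} rows/s")
```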
- Inserting One Billion Rows in SQLite Under a Minute
- Weekly Coders, Hackers & All Tech related thread - 17/07/2021
- How we achieved write speeds of 1.4 million rows per second
[somewhat related] Recently, I was benchmarking SQLite inserts and managed to insert 3.3M records per second (100M in 33-ish seconds) on my local machine - https://github.com/avinassh/fast-sqlite3-inserts Of course the comparison is not apples to apples, but sharing here in case anyone finds it interesting.
What are some alternatives?
countwords - Playing with counting word frequencies (and performance) in various languages.
tsbs - Time Series Benchmark Suite, a tool for comparing and evaluating databases for time series data
RAMCloud - **No Longer Maintained** Official RAMCloud repo
julia - The Julia Programming Language
wordcount - Counting words in different programming languages.
plum - Multiple dispatch in Python
KindleClippingsTranslator - A vocabulary-word reader ("Czytacz słówek")
sqlite_micro_logger_arduino - Fast and Lean Sqlite database logger for Arduino UNO and above
tiny_sqlite - A thin SQLite wrapper for Nim
remixdb - RemixDB: A read- and write-optimized concurrent KV store. Fast point and range queries. Extremely low write-amplification.
word_frequency_nim - The word frequency program, written in simple nim.
dynamic-dns - An automated dynamic DNS solution for Docker and DigitalOcean