1brc vs server

| | 1brc | server |
|---|---|---|
| Mentions | 28 | 24 |
| Stars | 5,246 | 7,414 |
| Growth | - | 3.4% |
| Activity | 9.8 | 9.5 |
| Latest commit | 24 days ago | 4 days ago |
| Language | Java | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
1brc
-
The One Billion Row Challenge in CUDA: from 17 minutes to 17 seconds
This would be the code to beat. Ideally with only 8 cores, but any number of cores is also very interesting.
https://github.com/gunnarmorling/1brc/discussions/710
-
One Billion Row Challenge in Golang - From 95s to 1.96s
Given that the 1-billion-line file is approximately 13 GB, the official repository offers a script to generate synthetic data with random readings instead of providing a fixed dataset. Just follow the instructions to create your own input file.
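For reference, each generated row is just `<station name>;<temperature>`, with one fractional digit on the reading. A minimal sketch of a generator in that format (the station list here is a tiny placeholder; the official script draws from a much larger weather-station table):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.Random;

// Sketch of the measurements format used by the challenge: one
// "<station>;<temperature>" row per line, temperature with one decimal.
// Placeholder station list; not the official create_measurements script.
public class CreateMeasurements {
    public static void main(String[] args) throws Exception {
        String[] stations = {"Hamburg", "Bulawayo", "Palembang", "St. John's", "Cracow"};
        long rows = args.length > 0 ? Long.parseLong(args[0]) : 1_000_000L;
        Random rnd = new Random();
        try (BufferedWriter out = new BufferedWriter(new FileWriter("measurements.txt"))) {
            for (long i = 0; i < rows; i++) {
                String station = stations[rnd.nextInt(stations.length)];
                double temp = -99.9 + rnd.nextDouble() * 199.8;  // range [-99.9, 99.9]
                out.write(station + ";" + Math.round(temp * 10.0) / 10.0);
                out.newLine();
            }
        }
    }
}
```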
-
1BRC Merykitty's Magic SWAR: 8 Lines of Code Explained in 3k Words
Local disk I/O is no longer the bottleneck on modern systems: https://benhoyt.com/writings/io-is-no-longer-the-bottleneck/
In addition, the official 1BRC explicitly evaluated results on a RAM disk to avoid I/O speed entirely: https://github.com/gunnarmorling/1brc?tab=readme-ov-file#eva... "Programs are run from a RAM disk (i.o. the IO overhead for loading the file from disk is not relevant)"
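The "magic SWAR" in the title treats 8 input bytes as a single 64-bit word so the `;` delimiter can be located without a per-byte loop. A sketch of the underlying zero-byte trick (the idea behind the entry, not Merykitty's exact code, which also parses the number branchlessly):

```java
// SWAR (SIMD Within A Register): find the first ';' in 8 bytes at once,
// using the classic zero-byte detection trick on a broadcast comparison.
public class SwarDelimiter {
    private static final long BROADCAST_SEMICOLON = 0x3B3B3B3B3B3B3B3BL; // ';' in every byte
    private static final long LOW_BITS  = 0x0101010101010101L;
    private static final long HIGH_BITS = 0x8080808080808080L;

    // Returns the index (0-7) of the first ';' in the little-endian word,
    // or 8 if none of the eight bytes match.
    static int firstSemicolon(long word) {
        long diff = word ^ BROADCAST_SEMICOLON;             // matching byte becomes 0x00
        long match = (diff - LOW_BITS) & ~diff & HIGH_BITS; // high bit set in each zero byte
        return Long.numberOfTrailingZeros(match) >>> 3;     // bit index / 8 = byte index
    }

    public static void main(String[] args) {
        // "Hamburg;" as a little-endian long: ';' is byte 7.
        long word = 0x3B677275626D6148L;
        System.out.println(firstSemicolon(word)); // prints 7
    }
}
```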
-
Processing One Billion Rows in PHP!
You may have heard of "The One Billion Row Challenge" (1brc), and in case you haven't, go check out Gunnar Morling's 1brc repo.
-
The One Billion Row Challenge in Go: from 1m45s to 4s in nine solutions
Here’s a thread on results with duckdb, I don’t mean to discourage you taking a shot at all though: https://github.com/gunnarmorling/1brc/discussions/39
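For context, the task being timed is small: compute min/mean/max per station from `station;temperature` lines. A deliberately naive Java baseline looks like the sketch below; the fast entries compute the same result with memory-mapped I/O, custom parsing, and parallelism:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;

// Naive single-threaded baseline for the 1BRC task: min/mean/max per station,
// printed in alphabetical order (TreeMap keeps the keys sorted).
public class Baseline {
    static final class Stats {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY, sum = 0;
        long count = 0;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Stats> byStation = new TreeMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("measurements.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                int sep = line.indexOf(';');
                String station = line.substring(0, sep);
                double temp = Double.parseDouble(line.substring(sep + 1));
                Stats s = byStation.computeIfAbsent(station, k -> new Stats());
                s.min = Math.min(s.min, temp);
                s.max = Math.max(s.max, temp);
                s.sum += temp;
                s.count++;
            }
        }
        byStation.forEach((station, s) -> System.out.printf(
                "%s=%.1f/%.1f/%.1f%n", station, s.min, s.sum / s.count, s.max));
    }
}
```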
-
Ask HN: How can I learn about performance optimization?
If you are in “javaland” look at billion row challenge, you will learn a lot - https://github.com/gunnarmorling/1brc
- Lessons Learned from Doing the One Billion Row Challenge
- 1B Row Challenge Shows Java Can Process 1B Rows File in 2 Seconds
-
From slow to SIMD: A Go optimization story
Even manual vectorization is a pain... writing ASM, really?
Rust has unstable portable SIMD and a few third-party crates; C++ has that as well; C# has stable portable SIMD and a very small BLAS-like library on top of it (hell, it even exercises PackedSIMD when run in a browser); and Java is getting stable Panama vectors some time in the future (though the question of codegen quality stands open given planned changes to the unsafe API).
Go is uniquely disadvantaged among these. And if that's not enough, you may want to visit 1BRC's challenge discussions and see that Go struggles to get anywhere close to the 2s mark while both C# and C++ blaze past it:
https://hotforknowledge.com/2024/01/13/1brc-in-dotnet-among-...
https://github.com/gunnarmorling/1brc/discussions/67
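On the Java side of that comparison, portable SIMD via the Panama Vector API currently looks roughly like the sketch below (still incubating in `jdk.incubator.vector`, so it needs `--add-modules jdk.incubator.vector` and may change between releases):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

// Portable-SIMD dot product with the incubating Panama Vector API.
// SPECIES_PREFERRED picks the widest vector the CPU supports (e.g. 256-bit AVX2).
public class DotProduct {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static float dot(float[] a, float[] b) {
        var acc = FloatVector.zero(SPECIES);
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {     // vectorized main loop
            var va = FloatVector.fromArray(SPECIES, a, i);
            var vb = FloatVector.fromArray(SPECIES, b, i);
            acc = va.fma(vb, acc);                     // acc += a[i..] * b[i..]
        }
        float sum = acc.reduceLanes(VectorOperators.ADD);
        for (; i < a.length; i++) sum += a[i] * b[i];  // scalar tail
        return sum;
    }

    public static void main(String[] args) {
        float[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        float[] b = {9, 8, 7, 6, 5, 4, 3, 2, 1};
        System.out.println(dot(a, b)); // 165.0
    }
}
```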
-
JEP Draft: Deprecate Memory-Access Methods in Sun.misc.Unsafe for Removal
In terms of performance: I realize that this is a somewhat "toy" issue, and it's a sample size of 1, but for the currently ongoing "One Billion Row Challenge"[1] (a Java performance competition around parsing and aggregating a 13 GB file), all of the current top performers are using Unsafe. More specifically, for a few entries the switch to Unsafe appears to have been the change that got them below the 3-second barrier in the test.
1. https://github.com/gunnarmorling/1brc
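For a sense of what those entries rely on, the pattern is to obtain `theUnsafe` via reflection and read raw words with no bounds checks. A minimal sketch (reading from a heap `byte[]` for simplicity; the leading entries apply the same `getLong` to the address of the memory-mapped input file):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// The access pattern behind the fast 1BRC entries: grab theUnsafe via
// reflection, then read 8 bytes at a time with no bounds checks.
public class UnsafeRead {
    public static void main(String[] args) throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        byte[] data = "Hamburg;12.0\n".getBytes();
        long base = Unsafe.ARRAY_BYTE_BASE_OFFSET;
        long word = unsafe.getLong(data, base);   // first 8 bytes as one long
        System.out.printf("0x%016X%n", word);     // native (little-endian on x86) byte order
    }
}
```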
server
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
-
best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve llama V2:
- llama.cpp (is it production ready or just for playing around?)
- Triton Inference Server
- HF text generation inference
- Triton Inference Server - Backend
-
Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, it's a software engineering problem, not a hardware issue. FP16 or INT8 for weights is fine, and weight size won't increase due to the high resolution. During inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You're likely using something that is designed for training for inference, and that blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid every task loading its own copy of the library code into the GPU. Maybe look at https://github.com/triton-inference-server/server
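As a toy illustration of that reuse claim: in a linear chain of layers, a layer's input activation is dead as soon as the layer has produced its output, so the buffer can be recycled from a small pool rather than kept live for the whole network. Real runtimes do this with a proper liveness analysis over the graph; the layer count and buffer size below are made-up placeholders.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy activation-memory reuse for inference over a linear chain of layers:
// once a layer (the last consumer of its input) has run, its input buffer
// is dead and goes back into the pool instead of staying allocated.
public class ActivationPool {
    public static void main(String[] args) {
        int layers = 100;
        int bufFloats = 1 << 20;                 // placeholder activation size
        Deque<float[]> pool = new ArrayDeque<>();
        int allocated = 0;

        float[] current = new float[bufFloats];  // network input
        allocated++;
        for (int layer = 0; layer < layers; layer++) {
            float[] out;
            if (pool.isEmpty()) { out = new float[bufFloats]; allocated++; }
            else out = pool.pop();
            // ... run layer: read 'current', write 'out' ...
            pool.push(current);                  // input is now dead: recycle it
            current = out;
        }
        // Prints: allocated 2 buffers for 100 layers (vs 101 without reuse).
        System.out.println("allocated " + allocated + " buffers for " + layers + " layers");
    }
}
```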
-
Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
-
Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There might also be other inference servers that do MMS that I'm not aware of.
-
I mean,.. we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
-
Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
Tensorflow Serving is nice in that it's so tightly integrated with Tensorflow. As usual, that goes both ways: it's so tightly coupled to Tensorflow that if the MLOps side of the solution is using Tensorflow Serving, you're going to get "trapped" in the Tensorflow ecosystem (essentially).
For PyTorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU-only), but for a deployment today and for the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server
What are some alternatives?
1brc - C99 implementation of the 1 Billion Rows Challenge. 1️⃣🐝🏎️ Runs in ~1.6 seconds on my not-so-fast laptop CPU w/ 16GB RAM.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
yolov7-object-tracking - YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
csvlens - Command line csv viewer
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
nodejs - 1️⃣🐝🏎️ The One Billion Row Challenge with Node.js -- A fun exploration of how quickly 1B rows from a text file can be aggregated with different languages.
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
pocketbase - Open Source realtime backend in 1 file
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Megatron-LM - Ongoing research training transformer models at scale