| | leaf | qdrant |
|---|---|---|
| Mentions | 2 | 140 |
| Stars | 5,552 | 17,943 |
| Growth | -0.0% | 3.4% |
| Activity | 0.0 | 9.9 |
| Latest commit | about 1 month ago | 2 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
leaf
-
[D] Why does AMD do so much less work in AI than NVIDIA?
I used a lot of the dependencies behind the leaf framework, which was abandoned by its authors a while back due to funding issues. Since I implemented my project in Rust, most of the bindings were still maintained even though the leaf framework itself no longer was.
-
AMD Demonstrates Stacked 3D V-Cache Technology: 192 MB at 2 TB/SEC
I tried to create an ML framework[0] that would work on both CUDA and OpenCL (and natively on the CPU) around 2015/2016, which included creating FFI wrappers for both CUDA and OpenCL. This is where my experience on the subject (and my contempt for NVIDIA) comes from.
My memory isn't perfect, but IIRC the situation was roughly the following: we were quite short on resources (both dev time and money), which meant we had to choose our scope wisely. Ideally we would have implemented both CUDA and OpenCL 2.0, but we had to settle for OpenCL 1.2 (which offered reduced performance, but was "good enough" for inference). IIRC OpenCL 2.0 was very similar to CUDA at the time in the capabilities it assumed and offered, and cards like the GTX Titan X already had "compute capabilities" in CUDA supporting features like shared virtual memory between CPU and GPU. In fact, the advances around memory management (and async copying) that were present in CUDA but not in OpenCL 1.x were the main source of the performance difference between the two.
From everything I can tell, if NVIDIA had wanted to support OpenCL 2.0 at that point in time, they could have done so from a technical standpoint. The reason they didn't is pure speculation (lack of internal resources due to focusing on devtools?), but to me it always looked like they were using the edge they got via proprietary libraries like cuDNN to get a foot into the field of ML, and then purposefully neglected OpenCL to prevent any competitors from catching up. Classic Embrace, Extend, Extinguish.
[0]: https://github.com/autumnai/leaf
qdrant
-
Boost Your Code's Efficiency: Introducing Semantic Cache with Qdrant
I chose Qdrant for this project because it is built for high-performance vector search, which makes it a strong fit for use cases like finding similar function calls based on semantic similarity. Qdrant is not only fast but also scalable, and it supports a variety of advanced search features that are very useful for nuanced caching mechanisms like ours.
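As an illustration of that caching mechanism, here is a minimal sketch using Qdrant's Python client. The embed() stand-in, the collection name, and the 0.9 similarity threshold are assumptions for the sketch, not details from the original post.

```python
import random
import uuid

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(":memory:")  # local in-memory mode, convenient for prototyping
client.create_collection(
    collection_name="semantic_cache",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

def embed(text: str) -> list[float]:
    # Toy stand-in for a real sentence-embedding model (assumption for this sketch).
    rng = random.Random(text)
    return [rng.uniform(-1.0, 1.0) for _ in range(384)]

def cached_call(query: str, compute):
    """Reuse the response of a semantically similar earlier query, else compute it."""
    vector = embed(query)
    hits = client.search(collection_name="semantic_cache", query_vector=vector, limit=1)
    if hits and hits[0].score >= 0.9:  # assumed similarity threshold
        return hits[0].payload["response"]
    response = compute(query)  # cache miss: run the expensive call
    client.upsert(
        collection_name="semantic_cache",
        points=[PointStruct(id=str(uuid.uuid4()), vector=vector,
                            payload={"query": query, "response": response})],
    )
    return response
```

With cosine distance the returned score is a similarity, so a repeated or near-identical query scores close to 1.0 and is served from the cache instead of triggering the expensive call again.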
-
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
I'm currently looking to implement this locally, using Qdrant [1] for instance.
I'm just playing around, but it makes sense to have a runnable example for our users at work too :) [2].
[1]: https://qdrant.tech/
-
Show HN: A fast HNSW implementation in Rust
Also compare with qdrant's Rust implementation; they tout their performance. https://github.com/qdrant/qdrant/tree/master/lib/segment/src...
-
pgvecto.rs alternatives - qdrant and Weaviate
3 projects | 13 Mar 2024
-
Open-source Rust-based RAG
There are much better known examples, such as https://qdrant.tech/ and https://github.com/lancedb/lancedb
-
Qdrant 1.8.0 - Major Performance Enhancements
For more information, see our release notes. Qdrant is an open source project. We welcome your contributions: raise issues or contribute via pull requests!
-
Perform Image-Driven Reverse Image Search on E-Commerce Sites with ImageBind and Qdrant
Initialize the Qdrant Client with in-memory storage. The collection name will be “imagebind_data” and we will be using cosine distance.
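A minimal sketch of that setup with the Qdrant Python client; the 1024-dimensional vector size is an assumption based on ImageBind's embedding output and is not stated in the excerpt.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(":memory:")  # in-memory storage, no server required

client.create_collection(
    collection_name="imagebind_data",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),  # assumed ImageBind dim
)
```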
-
7 Vector Databases Every Developer Should Know!
Qdrant is an open-source vector search engine optimized for performance and flexibility. It supports both exact and approximate nearest neighbor search, providing a balance between accuracy and speed for various AI and ML applications.
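To illustrate the exact vs. approximate trade-off mentioned here, a rough sketch with the Qdrant Python client: the same query runs once against the default (approximate) index and once with exact, brute-force scoring. The collection name, vector size, and data are placeholders.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct, SearchParams

client = QdrantClient(":memory:")
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=i, vector=[float(i), 1.0, 0.0, 0.5], payload={"i": i})
            for i in range(100)],
)

query = [1.0, 1.0, 0.0, 0.5]

# Approximate nearest-neighbor search: the fast default.
approx_hits = client.search(collection_name="docs", query_vector=query, limit=5)

# Exact search: a full scan, slower but precise; handy for checking ANN recall.
exact_hits = client.search(
    collection_name="docs",
    query_vector=query,
    limit=5,
    search_params=SearchParams(exact=True),
)
```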
-
Ask HN: Who is hiring? (February 2024)
-
Step-by-Step Guide to Building LLM Applications with Ruby (Using Langchain and Qdrant)
Qdrant serves as a vector database, optimized for handling high-dimensional data typically found in AI and ML applications. It's designed for efficient storage and retrieval of vectors, making it an ideal solution for managing the data produced and consumed by AI models like Mistral 7B. In our setup, Qdrant handles the storage of vectors generated by the language model, facilitating quick and accurate retrievals.
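The guide itself is in Ruby with Langchain; purely as an illustration of the same store-then-retrieve flow, here is a minimal sketch with the Qdrant Python client. The collection name, vector size, and the embed() stand-in are assumptions.

```python
import random

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(":memory:")
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

def embed(text: str) -> list[float]:
    # Stand-in for the embedding model used by the language-model pipeline.
    rng = random.Random(text)
    return [rng.uniform(-1.0, 1.0) for _ in range(384)]

# Store: embed each chunk of source text and upsert it with the text as payload.
chunks = ["Qdrant stores high-dimensional vectors.", "Mistral 7B generates the answers."]
client.upsert(
    collection_name="documents",
    points=[PointStruct(id=i, vector=embed(c), payload={"text": c})
            for i, c in enumerate(chunks)],
)

# Retrieve: embed the question and pull the closest chunks back as context.
hits = client.search(
    collection_name="documents",
    query_vector=embed("What stores the vectors?"),
    limit=2,
)
context = [hit.payload["text"] for hit in hits]
```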