We're looking for feedback on the project, and we'd love to hear from you! If you're interested in contributing, please reach out to us on our Discord, or post an issue on our GitHub.
-
The current direction most people are taking is to "fine-tune" an existing base model (something like StableLM or Dolly) using a technique called LoRA (ref: https://github.com/tloen/alpaca-lora). People are also looking into fine-tuning directly on the GGML format (ref: https://github.com/ggerganov/ggml/issues/8).
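For a rough sense of what LoRA does, here is an illustrative sketch in plain Rust; it is not taken from alpaca-lora or from this project. The idea is that the base weight matrix W stays frozen, only two small low-rank matrices A and B are trained, and the effective weights W + (alpha / r) * B * A are used at inference time.

```rust
// Illustrative sketch of the core LoRA idea (not code from alpaca-lora):
// instead of updating the full d_out x d_in weight matrix W during fine-tuning,
// train two small matrices A (r x d_in) and B (d_out x r) with r << d_in,
// then merge them as W_eff = W + (alpha / r) * B * A.
fn lora_merge(
    w: &[f32], // frozen base weights, row-major, d_out x d_in
    a: &[f32], // LoRA "down" matrix, r x d_in
    b: &[f32], // LoRA "up" matrix, d_out x r
    d_out: usize,
    d_in: usize,
    r: usize,
    alpha: f32,
) -> Vec<f32> {
    let scale = alpha / r as f32;
    let mut merged = w.to_vec();
    for i in 0..d_out {
        for j in 0..d_in {
            let mut delta = 0.0f32;
            for p in 0..r {
                delta += b[i * r + p] * a[p * d_in + j];
            }
            merged[i * d_in + j] += scale * delta;
        }
    }
    merged
}
```

Because only A and B are trained, the number of trainable parameters drops from d_out * d_in to r * (d_out + d_in), which is what makes fine-tuning large base models practical on modest hardware.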
-
At present, we are powered by ggml (similar to llama.cpp), but we intend to add additional backends in the near future. This means that we currently only support CPU inference, but we have several ideas for how to add support for GPUs and other accelerators.
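To make the backend point concrete, here is a purely hypothetical sketch of what a pluggable-backend trait could look like in Rust. None of these names come from the actual crate, which currently drives ggml directly for CPU inference; the sketch only shows how model code could stay backend-agnostic.

```rust
// Hypothetical backend abstraction (illustration only, not the crate's API).
trait Backend {
    /// Multiply an m x k matrix by a k x n matrix, both row-major.
    fn matmul(&self, a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32>;
}

/// Plain CPU implementation; a GPU or other accelerator backend would
/// implement the same trait, leaving the model code unchanged.
struct CpuBackend;

impl Backend for CpuBackend {
    fn matmul(&self, a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
        let mut out = vec![0.0f32; m * n];
        for i in 0..m {
            for p in 0..k {
                let aik = a[i * k + p];
                for j in 0..n {
                    out[i * n + j] += aik * b[p * n + j];
                }
            }
        }
        out
    }
}
```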
-
You could try looking at the min-GPT example in tch-rs. I'd also strongly suggest watching Karpathy's video to understand what's going on.
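If you want a feel for the tch-rs workflow before tackling min-GPT, here is a minimal training loop on toy data. It assumes a recent version of the tch crate and a working libtorch install; the min-GPT example uses the same VarStore/Module/Optimizer machinery, just with transformer blocks instead of this toy MLP.

```rust
// Minimal tch-rs training loop on random toy data (not the min-GPT example itself).
use tch::{nn, nn::Module, nn::OptimizerConfig, Device, Kind, Tensor};

fn main() {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();

    // A tiny two-layer MLP; min-GPT swaps this for a stack of transformer blocks.
    let net = nn::seq()
        .add(nn::linear(&root / "fc1", 4, 16, Default::default()))
        .add_fn(|xs| xs.relu())
        .add(nn::linear(&root / "fc2", 16, 1, Default::default()));

    let mut opt = nn::Adam::default().build(&vs, 1e-3).unwrap();

    // Random toy data standing in for a real dataset.
    let xs = Tensor::randn([64, 4], (Kind::Float, Device::Cpu));
    let ys = Tensor::randn([64, 1], (Kind::Float, Device::Cpu));

    for epoch in 1..=200 {
        let loss = net.forward(&xs).mse_loss(&ys, tch::Reduction::Mean);
        opt.backward_step(&loss);
        if epoch % 50 == 0 {
            println!("epoch {epoch}: loss = {:.4}", loss.double_value(&[]));
        }
    }
}
```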