Like many of you, I've been attempting to follow the developments in the open source / local LLM scene over the last few weeks. Wrapping my head around the different pieces of the puzzle was pretty hard, and actually trying out some models on my laptop took a surprising amount of effort.
I built LM Studio to make discovering local models super easy, with no command-line setup required.
It is cross-platform, but only the Apple Silicon version is available right now.
The app is built on top of the llama.cpp project [0] and it should support any model based on the llama architecture + converted to "ggml" [1].
Here are a few great models to get started with. You can just paste these URLs into the search bar in the app:
- https://huggingface.co/TheBloke/wizardLM-7B-GGML/tree/previo...
- https://huggingface.co/TheBloke/Manticore-13B-GGML/tree/prev...
- https://huggingface.co/eachadea/ggml-vicuna-7b-1.1
Anecdotally, these models run reasonably fast on my M1 MacBook Pro, which has 16GB of RAM.