Like many of you, I've been trying to follow the developments in the open source / local LLM scene over the last few weeks. Wrapping my head around the different pieces of the puzzle was pretty hard, and actually trying out some models on my laptop took a surprising amount of effort.
I built LM Studio to make discovering and running local models super easy, with no command-line setup required.
It is cross-platform, but only the Apple Silicon version is available right now.
The app is built on top of the llama.cpp project [0] and should support any model based on the LLaMA architecture that has been converted to the "ggml" format [1].
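For the curious, "ggml format" concretely means a binary file that starts with a four-byte magic number identifying the format variant. A minimal sketch of sniffing a file before loading it — the magic values below are taken from the llama.cpp/ggml sources at the time of writing and may change as the format evolves:

```python
import struct

# Known ggml-family magic numbers (assumption: taken from the llama.cpp/ggml
# sources at the time of writing; newer format revisions may add more).
GGML_MAGICS = {
    0x67676D6C,  # "ggml" - original, unversioned format
    0x67676D66,  # "ggmf" - versioned format
    0x67676A74,  # "ggjt" - mmap-able format used by recent llama.cpp
}

def looks_like_ggml(path):
    """Return True if the file starts with a known ggml magic number."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return False
    (magic,) = struct.unpack("<I", header)  # magic is stored little-endian
    return magic in GGML_MAGICS
```

This is just a sanity check that a downloaded .bin file is plausibly a ggml model, not a full validation of its contents.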
Here are a few great models to get started with. You can just paste these URLs into the search bar in the app:
- https://huggingface.co/TheBloke/wizardLM-7B-GGML/tree/previo...
- https://huggingface.co/TheBloke/Manticore-13B-GGML/tree/prev...
- https://huggingface.co/eachadea/ggml-vicuna-7b-1.1
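Those URLs point at Hugging Face repo pages; if you'd rather fetch a model file directly, individual files are served from Hugging Face's public resolve endpoint (https://huggingface.co/REPO/resolve/REVISION/FILENAME). A small sketch — the filename below is hypothetical, so check the repo's file listing for the actual .bin names:

```python
def hf_download_url(repo_id, filename, revision="main"):
    """Build a direct-download URL for a file in a Hugging Face repo.

    Uses the public /resolve/ endpoint; repo_id is "user/repo-name".
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Hypothetical filename - browse the repo's "Files" tab for the real ones.
url = hf_download_url("eachadea/ggml-vicuna-7b-1.1", "example.q4_0.bin")
```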
Anecdotally, these models run reasonably fast on my M1 MacBook Pro with 16 GB of RAM.
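As a back-of-the-envelope check on why 16 GB is enough: a 4-bit quantization like q4_0 costs roughly 4.5 bits per weight once the per-block scale factors are included (an approximation, not an exact spec), so model size is roughly parameter count times bits per weight:

```python
def approx_model_size_gb(n_params, bits_per_weight=4.5):
    """Rough size of a quantized model in decimal gigabytes.

    bits_per_weight ~4.5 is an estimate for q4_0 (4-bit weights plus
    per-block scales), not an exact format specification.
    """
    return n_params * bits_per_weight / 8 / 1e9

# By this estimate, a 7B model at q4_0 is just under 4 GB and a 13B
# model around 7.3 GB - both leave headroom in 16 GB of RAM.
size_7b = approx_model_size_gb(7e9)
size_13b = approx_model_size_gb(13e9)
```

The actual footprint at runtime is somewhat higher because the KV cache grows with context length.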
[0] https://github.com/ggerganov/llama.cpp
[1] https://github.com/ggerganov/ggml