| | ollama-ui | codellama |
|---|---|---|
| Mentions | 2 | 9 |
| Stars | 554 | 15,154 |
| Growth | - | 7.0% |
| Activity | 7.2 | 5.5 |
| Latest commit | 18 days ago | 18 days ago |
| Language | JavaScript | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ollama-ui
-
Dumbar, a Not So Smart Menubar App
This is great. I was thinking about putting something like this together for personal use, so - thanks for saving us the trouble!
Ollama support would be amazing, especially with the recent integration of codellama and phind-codellama. I'm sure you're aware, but for the benefit of anyone else: there is a third-party Ollama web UI [1] linked from the ollama project homepage. It's barebones, but it does the trick.
[1]: https://github.com/ollama-ui/ollama-ui
- Meta: Code Llama, an AI Tool for Coding
codellama
-
Meta AI releases Code Llama 70B
The GitHub repo [0] hasn't been fully updated, but it links to a paper [1] that describes how the smaller Code Llama models were trained. It would be a good guess that this model is similar.
[0] https://github.com/facebookresearch/codellama
-
Open/Local LLM support for MineDojo/Voyager
This k8s application deploys an instance of Voyager along with a Fabric Minecraft server with required fabric mods. It assumes you have a local deployment of a Large Language Model (LLM) with 4K-8K token context length with a compatible OpenAI API, including embeddings support.
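Since the deployment above expects any local LLM that speaks the OpenAI API (embeddings included), a request payload is all that ties the pieces together. The sketch below builds one; the endpoint URL and model name are assumptions for a typical local Ollama-style server, not details from the project itself.

```python
import json

# Hypothetical local endpoint; adjust host, port, and path for your deployment.
LOCAL_API = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt, model="codellama", max_tokens=256):
    """Build an OpenAI-compatible chat-completions payload that a local
    LLM server exposing the OpenAI API shape can accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Serialize for an HTTP POST (e.g. via urllib.request or any HTTP client).
payload = json.dumps(build_chat_request("Craft a plan to mine diamonds."))
```

Any client that can POST JSON to `LOCAL_API` can then drive the agent; the key requirement from the post is a model with a 4K-8K token context and an embeddings endpoint alongside chat completions.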
-
Code Llama Parameters
I have been playing with Code Llama (the 7B Python one). It does pretty well, but I don't understand what the parameters in the code mean or how I should modify them to work best on my hardware. I'm looking at the code in: https://github.com/facebookresearch/codellama/blob/main/llama/generation.py.
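For context on the question above: the sampling parameters in that file (`temperature`, `top_p`) control how the next token is chosen, while `max_seq_len` and `max_batch_size` mainly affect memory use. The sketch below is a plain-Python illustration of nucleus (top-p) filtering, the idea behind the `top_p` parameter; it is not code from the Code Llama repo.

```python
def top_p_filter(probs, top_p):
    """Nucleus sampling illustration: keep the smallest set of tokens whose
    cumulative probability reaches top_p, then renormalize. A sampler would
    draw the next token from the returned distribution."""
    # Consider tokens from most to least likely.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:  # stop once the nucleus covers top_p mass
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

With `top_p = 0.7` and token probabilities `[0.5, 0.3, 0.2]`, only the first two tokens survive and are renormalized; lowering `top_p` (or `temperature`) makes output more deterministic, while the hardware-facing knobs to reduce are `max_seq_len` and `max_batch_size`.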
-
What frameworks or platforms to use for full fine tuning of Code Llama?
Should I use HuggingFace https://huggingface.co/codellama/CodeLlama-34b-hf or grab the model from Facebook https://github.com/facebookresearch/codellama?
- Code Llama Released
-
Meta just released its answer to GitHub Copilot, and it’s free
https://github.com/facebookresearch/codellama/blob/main/LICE...
https://github.com/facebookresearch/llama/blob/main/LICENSE
-
Introducing Code Llama: A New Era of AI-Driven Coding
Bringing AI to the coding community: Code Llama is designed to support software engineers across sectors, including research, industry, and open-source projects. You can check out the GitHub repo here.
-
Code Llama by MetaAI (released yesterday)
GitHub: https://github.com/facebookresearch/codellama
- Meta: Code Llama, an AI Tool for Coding
What are some alternatives?
aider - aider is AI pair programming in your terminal
tabby - Self-hosted AI coding assistant
llama.cpp - LLM inference in C/C++
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
Dumbar - A smrt, no, smart, ok, no dumb smartbar for Ollama
lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
smartcat
Voyager - An Open-Ended Embodied Agent with Large Language Models