lmdeploy vs godot-dodo

| | lmdeploy | godot-dodo |
|---|---|---|
| Mentions | 4 | 16 |
| Stars | 2,640 | 510 |
| Growth | 20.8% | - |
| Activity | 9.8 | 3.1 |
| Last Commit | 5 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lmdeploy
- FLaNK-AIM Weekly 06 May 2024
- AMD May Get Across the CUDA Moat
I wouldn’t say ROCm code is “slower”, per se, but in practice that’s how it presents. References:
https://github.com/InternLM/lmdeploy
https://github.com/vllm-project/vllm
https://github.com/OpenNMT/CTranslate2
You know what’s missing from all of these and many more like them? Support for ROCm. This is all before you get to the really wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.
ROCm is at the “get it to work” stage (see the top comment, blog posts everywhere celebrating minor successes, etc.). CUDA is at the “wring every last penny of performance out of this thing” stage.
In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.
One of my projects specifically targets Pascal and up - when I thought even Pascal was a stretch. Cue my surprise when I got a report of someone casually firing it up on Maxwell, when I was pretty certain there was no way it could work.
A Maxwell laptop chip. It also runs just as well on an H100.
THAT is hardware support.
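For concreteness, here is a hedged sketch (not from the commenter's actual project) of how a "Pascal and up" gate often looks in Python, using PyTorch's compute-capability query:

```python
# Minimal sketch: gate a fast CUDA code path by compute capability.
# Pascal is compute capability 6.x; Maxwell is 5.x, which is why a
# Maxwell laptop chip running such a project would be a surprise.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    if (major, minor) >= (6, 0):
        print(f"sm_{major}{minor}: Pascal or newer, taking the fast path")
    else:
        print(f"sm_{major}{minor}: pre-Pascal, expect fallbacks")
else:
    print("No CUDA device visible")
```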
- Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
vLLM has healthy competition. Not affiliated, but try lmdeploy:
https://github.com/InternLM/lmdeploy
In my testing it’s significantly faster and more memory efficient than vLLM when configured with AWQ int4 and int8 KV cache.
If you look at the PRs, issues, etc., you’ll see there are many more optimizations in the works. That said, there are also PRs and issues for some of the lmdeploy tricks in vLLM as well (AWQ, Triton Inference Server, etc.).
I’m really excited to see where these projects go!
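A minimal sketch of the configuration described above, assuming a recent lmdeploy release; the model id is a placeholder for any AWQ int4-quantized checkpoint, and quant_policy semantics have shifted between versions, so treat this as illustrative rather than canonical:

```python
# Sketch: serve an AWQ int4 model with an int8-quantized KV cache.
from lmdeploy import pipeline, TurbomindEngineConfig

engine_cfg = TurbomindEngineConfig(
    model_format="awq",  # weights pre-quantized to int4 with AWQ
    quant_policy=8,      # quantize the KV cache to int8 at runtime
)

# Placeholder model id; substitute any AWQ-quantized checkpoint.
pipe = pipeline("internlm/internlm2-chat-7b-4bit", backend_config=engine_cfg)
print(pipe(["Why quantize the KV cache?"]))
```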
- Meta: Code Llama, an AI Tool for Coding
godot-dodo
- Meta: Code Llama, an AI Tool for Coding
If you can find a large body of good, permissively licensed example code, you can finetune an LLM on it!
There was a similar attempt trained on Godot script a few months ago, and it's reportedly pretty good:
https://github.com/minosvasilias/godot-dodo
I think more attempts haven't been made because base LLaMA is not that great at coding in general, relative to its other strengths, and stuff like StarCoder has flown under the radar.
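As a rough illustration of that finetuning recipe (not godot-dodo's actual training code), a LoRA finetune over a scraped, permissively licensed GDScript corpus might look like the following, assuming Hugging Face transformers, datasets, and peft; the model id, data file, and hyperparameters are all placeholders:

```python
# Hedged sketch of single-language code finetuning with LoRA.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigcode/starcoderbase-1b"  # placeholder base code model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.pad_token or tok.eos_token  # ensure padding works

model = AutoModelForCausalLM.from_pretrained(model_name)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

# Assumes a local JSONL of scraped, permissively licensed GDScript
# snippets, one {"text": ...} record per line.
ds = load_dataset("json", data_files="gdscript.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```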
- [P] godot-dodo – Finetuning starcoder on single-language instruction data
This is a continuation of previous work done for the godot-dodo project, which involved finetuning LLaMA models on GitHub-scraped GDScript code.
- Godot-Dodo – Finetuning starcoder on single-language instruction data
This is a continuation of previous work done in the godot-dodo project (https://github.com/minosvasilias/godot-dodo), which involved finetuning LLaMA models on GitHub-scraped GDScript code.
StarCoder performs significantly better than LLaMA using the same dataset, and exceeds the evaluation scores of both gpt-4 and gpt-3.5-turbo, showing that single-language finetunes of smaller models may be a competitive option for coding assistants, especially for less commonplace languages such as GDScript.
The Twitter thread also details some drawbacks of the current approach, namely increasing occurrences where the model references out-of-scope objects in its generated code, a problem that worsens as the number of training epochs increases.
- Has anyone got this running in Godot?
Not exactly what you're looking for, but related: https://github.com/minosvasilias/godot-dodo
- Possible to train GPT on a custom scripting language?
- Godot-dodo – Finetuning LLaMA on single-language comment:code data pairs
What are some alternatives?
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama.cpp - LLM inference in C/C++
godot-copilot - AI-assisted development for the Godot engine.
llama-cpp-python - Python bindings for llama.cpp
lit-llama - Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
CTranslate2 - Fast inference engine for Transformer models
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
smartcat
seamless_communication - Foundational Models for State-of-the-Art Speech and Text Translation
codellama - Inference code for CodeLlama models