codealpaca vs alpaca-electron

| | codealpaca | alpaca-electron |
|---|---|---|
| Mentions | 20 | 8 |
| Stars | 1,373 | 1,260 |
| Growth | - | - |
| Activity | 4.4 | 5.9 |
| Last commit | 12 months ago | 27 days ago |
| Language | Python | JavaScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
codealpaca
- Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
CodeAlpaca 7B
- OpenAI isn’t doing enough to make ChatGPT’s limitations clear
This is great!
Addressing the model limitations a bit: in the demonstration data provided to the base model, we should avoid answers that have to be computed or looked up.
I've seen some of the demonstration data people are using to train instruction-tuned models, and the models are being taught to respond by making up answers to problems they shouldn't try to compute. By the way, the output in the example below is wrong.
{ "instruction": "What would be the output of the following JavaScript snippet?", "input": "let area = 6 * 5;\nlet radius = area / 3.14;", "output": "The output of the JavaScript snippet is the radius, which is 1.91." }, [1]
The UI note for now would get us very far but by filtering out demonstrations that retrieve or compute information should be filtered out.
Symbol tuning [2] addresses the quality of demonstrations, but we can take it further by removing retrievals and computations altogether.
Bonus: we can add demonstrations that teach the model to respond by telling the user/agent how to compute or retrieve the answer instead.
1: https://github.com/sahil280114/codealpaca/commit/0d265112c70...
2: https://arxiv.org/abs/2305.08298
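For illustration, here is a minimal sketch of the kind of filter argued for above, applied to Alpaca-format JSON records. The keyword heuristics and file names are assumptions for the example, not part of the codealpaca tooling:

```python
# Hypothetical filter: drop Alpaca-format demonstrations whose instruction
# asks the model to compute or look up a concrete answer. The keyword list
# and file names are illustrative assumptions only.
import json
import re

# Phrases that usually signal a computed or retrieved answer.
COMPUTE_OR_RETRIEVE = re.compile(
    r"what (would be|is) the output|evaluate|calculate|compute|"
    r"what year|who (wrote|invented)|look up",
    re.IGNORECASE,
)

def requires_computation(example: dict) -> bool:
    """Return True if the demonstration asks for a computed or looked-up answer."""
    text = f"{example.get('instruction', '')} {example.get('input', '')}"
    return bool(COMPUTE_OR_RETRIEVE.search(text))

with open("data.json") as f:  # an Alpaca-style instruction file
    examples = json.load(f)

kept = [ex for ex in examples if not requires_computation(ex)]
print(f"kept {len(kept)} of {len(examples)} demonstrations")

with open("data.filtered.json", "w") as f:
    json.dump(kept, f, indent=2)
```

A keyword filter like this is crude; in practice you would review the dropped examples, or replace them with the "explain how to compute it" style of demonstration mentioned in the bonus point above.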
- How to Finetune GPT-Like Large Language Models on a Custom Dataset
- Ask HN: Those with success using GPT-4 for programming – what are you doing?
- Is there a Colab or guide for fine-tuning a 13B model for instruction following?
I found guides like this: https://github.com/sahil280114/codealpaca
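For orientation only, a rough, untested sketch of one common approach (LoRA adapters via Hugging Face transformers + peft) for instruction-tuning a 13B causal LM on Alpaca-style data might look like this. The checkpoint name, hyperparameters, and data file are placeholders, and the memory tricks a real 13B run needs (8-bit loading, gradient checkpointing) are omitted:

```python
# Hedged sketch: LoRA instruction-tuning of a 13B causal LM on Alpaca-style
# data with Hugging Face transformers + peft. Checkpoint, hyperparameters,
# and data file are placeholders; memory optimizations are left out.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "huggyllama/llama-13b"  # placeholder 13B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA-style attention projections
    task_type="CAUSAL_LM",
))

def to_features(ex):
    # Alpaca-style prompt: instruction (+ optional input), then the response.
    prompt = f"### Instruction:\n{ex['instruction']}\n"
    if ex.get("input"):
        prompt += f"### Input:\n{ex['input']}\n"
    prompt += f"### Response:\n{ex['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

train = load_dataset("json", data_files="data.json", split="train").map(to_features)

Trainer(
    model=model,
    args=TrainingArguments("lora-13b-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=3,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```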
- Can LLMs do static code analysis?
Try https://github.com/sahil280114/codealpaca, or were you trying to stick with more generalist models?
- LoRA in LLaMAc++? Converting to 4bit? How to use models that are split into multiple .bin?
Oh, I see. That makes sense. I'm also sleep-deprived over here, so my reading comprehension is a bit low ;|. Well, in that case, check out this link: https://github.com/sahil280114/codealpaca
- Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
Sorry for the late reply. As I said, Flan-UL2 (or Flan-T5 if you want lighter models) fine-tuned on a dataset like CodeAlpaca's [0] is probably the best option if it's intended for commercial use (otherwise LLaMA should perform better). A rough sketch follows the link below.
[0]: https://github.com/sahil280114/codealpaca
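As a hedged illustration of that suggestion, fine-tuning Flan-T5 on CodeAlpaca-style instruction data with Hugging Face transformers could be wired up roughly as below. The checkpoint size, hyperparameters, and data file are placeholder assumptions, not the commenter's actual setup:

```python
# Hedged sketch: seq2seq fine-tuning of Flan-T5 on CodeAlpaca-style
# instruction data. Checkpoint, hyperparameters, and file name are
# placeholders for illustration.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "google/flan-t5-base"  # swap for a larger Flan-T5 / Flan-UL2 if VRAM allows
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(ex):
    # Instruction (+ optional input) is the source text; the answer is the target.
    source = ex["instruction"] + ("\n" + ex["input"] if ex.get("input") else "")
    batch = tokenizer(source, truncation=True, max_length=512)
    batch["labels"] = tokenizer(text_target=ex["output"], truncation=True,
                                max_length=256)["input_ids"]
    return batch

train = load_dataset("json", data_files="data.json", split="train").map(preprocess)

Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments("flan-codealpaca", per_device_train_batch_size=8,
                                  num_train_epochs=3, learning_rate=5e-5,
                                  logging_steps=50),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
).train()
```

The commercial-use angle from the comment is the main reason for picking Flan-T5 here: its weights are Apache-licensed, unlike the original LLaMA release.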
- CodeAlpaca – Instruction following code generation model
alpaca-electron
- Are you sure you are focusing on the right things? (venting)
I sympathize. There are some efforts here and there, but it's not something that resonates much with the enthusiast crowd. An abandoned example: ItsPi3141/alpaca-electron
- Guess I am kinda famous now
- one-click install LLM desktop apps
Look up TroubleChute on YouTube, or Alpaca Electron.
- What's the most basic NVIDIA graphics card that will work with mainstream 7B GPU models?
- Locally Hosted ChatGPT3 or Higher
I recently tried Alpaca Electron with the 7B model. I'm surprised how well it runs on my own hardware, with very little CPU and RAM consumption.
- Running oobabooga with Alpaca on Apple Silicon (M1/M2)
- Optimization Of Computational Power & Data Transfer For Elly (Global AI)
- Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
Here's Alpaca running in Electron. Not exactly one click, but close.
https://github.com/ItsPi3141/alpaca-electron
What are some alternatives?
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
llm-code - An OpenAI LLM based CLI coding assistant.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llm-humaneval-benchmarks
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
awesome-ai-coding - Awesome AI Coding
catai - UI for 🦙 model. Run AI assistant locally ✨
openplayground-api - A reverse engineered Python API wrapper for OpenPlayground (nat.dev)
flan-alpaca - This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
supercharger - Supercharge Open-Source AI Models
dalai - The simplest way to run LLaMA on your local machine