stacksort vs codealpaca

| | stacksort | codealpaca |
|---|---|---|
| Mentions | 36 | 20 |
| Stars | 1,238 | 1,375 |
| Growth | - | - |
| Activity | 2.3 | 4.4 |
| Latest commit | 11 months ago | 12 months ago |
| Language | JavaScript | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stacksort
- Stacksort (2013)
-
So How's the Hackathon Going?
Ah, good ol' stacksort
-
Stack Overflow Will Charge AI Giants for Training Data
Reminiscent of stacksort
-
[R] CodeAlpaca - Instruction following model to generate code
Yea but is it better than stacksort?
-
What do you usually use ChatGPT for?
How does that even work though? Like, my understanding is that GPT is trained on a limited set of parameters, and to add anything you have to redo the whole training. Is it connected to Google and just a fancier version of Stacksort?
-
So you're a programmer? Name ALL algorithms!
There is only one sorting algorithm: Stacksort
- Why choose a JS CDN when you can let them compete for your love and affection?
-
How the Dead Internet Theory is fast becoming reality
Semi-related: Someone wrote a sort algorithm that searches StackOverflow for sorting functions and runs them until it returns the correct answer.
https://gkoberger.github.io/stacksort/
Inspired by:
http://xkcd.com/1185/
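The comment above captures the whole trick. The original stacksort is client-side JavaScript; below is a minimal Python sketch of the same idea. The `search/advanced` and `/questions/{ids}/answers` endpoints and the `withbody` filter are real Stack Exchange API features, but the snippet-extraction heuristic is an assumption for illustration, and the decision to `exec` strangers' code is, of course, the joke. Treat it as a sketch, not gkoberger's implementation.

```python
import html
import re

import requests

API = "https://api.stackexchange.com/2.3"

def candidate_snippets(query="sort a list", site="stackoverflow"):
    """Yield code blocks from top-voted answers to sorting questions."""
    search = requests.get(f"{API}/search/advanced", params={
        "q": query, "tagged": "python", "site": site,
        "sort": "votes", "order": "desc", "pagesize": 5,
    }, timeout=10).json()
    ids = ";".join(str(item["question_id"]) for item in search["items"])
    answers = requests.get(f"{API}/questions/{ids}/answers", params={
        "site": site, "sort": "votes", "order": "desc",
        "filter": "withbody",  # built-in filter that includes answer HTML
    }, timeout=10).json()
    for ans in answers["items"]:
        # Crude heuristic: anything inside <code> tags is a candidate.
        for block in re.findall(r"<code>(.*?)</code>", ans["body"], re.S):
            yield html.unescape(block)

def stacksort(xs):
    """exec() snippets until one defines a callable that sorts xs."""
    for snippet in candidate_snippets():
        scope = {}
        try:
            exec(snippet, scope)  # running strangers' code is the point
            for fn in scope.values():
                if not callable(fn):
                    continue
                out = fn(list(xs))
                if out == sorted(xs):
                    return out
        except Exception:
            continue  # that answer didn't work out; on to the next one
    raise RuntimeError("Stack Overflow has failed us")

if __name__ == "__main__":
    print(stacksort([3, 1, 2]))
```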
-
Personally I like merge sort
stacksort
-
a developer's worst nightmare
But blindly running stuff from SO is the road to success
codealpaca
-
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
CodeAlpaca 7B
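For context on what a ranking like that measures: HumanEval-style benchmarks generate n completions per problem, run them against unit tests, and report pass@k via the unbiased estimator from the Codex paper; HumanEval+ adds many more tests per problem but scores the same way. A small illustrative computation, with made-up numbers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that
    at least one of k samples drawn from n (c of which pass) succeeds."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Made-up numbers: 200 samples per task, 37 passing the extended tests.
print(f"pass@1  = {pass_at_k(200, 37, 1):.3f}")   # 1 - 163/200 = 0.185
print(f"pass@10 = {pass_at_k(200, 37, 10):.3f}")
```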
-
OpenAI isn’t doing enough to make ChatGPT’s limitations clear
This is great!
Addressing the model limitations a bit: the demonstration data provided to the base model should not contain computed or "looked up" answers.
I've seen some of the demonstration data people are using to train instruction-tuned models; the models are being taught to respond by making up answers to questions they shouldn't try to compute. Incidentally, the output in the example below is wrong: 6 * 5 = 30, so the radius is 30 / 3.14 ≈ 9.55, not 1.91.
{ "instruction": "What would be the output of the following JavaScript snippet?", "input": "let area = 6 * 5;\nlet radius = area / 3.14;", "output": "The output of the JavaScript snippet is the radius, which is 1.91." }, [1]
The UI note would get us quite far for now, but demonstrations that retrieve or compute information should also be filtered out.
Symbol tuning [2] addresses the quality of demonstrations, but we can take it further by removing retrievals and computations altogether.
Bonus: we can add demonstrations of how to respond so that the user/agent is told how to compute or retrieve the answer instead.
1: https://github.com/sahil280114/codealpaca/commit/0d265112c70...
2: https://arxiv.org/abs/2305.08298
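A rough sketch of the filtering step the commenter proposes, in Python: scan an Alpaca-format JSON file and drop demonstrations whose instructions demand computation or lookup. The file name follows the codealpaca repo's layout, but the keyword heuristic is purely an assumption for illustration; a stricter filter could actually execute the snippets and compare outputs.

```python
# Hypothetical filter for Alpaca-format demonstration data: drop examples
# that ask the model to compute or look up an answer, since the base model
# will learn to guess plausible-sounding numbers (as in the radius example).
import json
import re

# Assumed heuristic: instructions that demand execution or calculation.
COMPUTE_PATTERNS = re.compile(
    r"what (?:would be|is) the output|compute|calculate|evaluate",
    re.IGNORECASE,
)

def keeps(example: dict) -> bool:
    """Keep a demonstration only if it doesn't demand computation."""
    return not COMPUTE_PATTERNS.search(example["instruction"])

with open("data/code_alpaca_20k.json") as f:  # path assumed from the repo
    demos = json.load(f)

clean = [ex for ex in demos if keeps(ex)]
print(f"kept {len(clean)}/{len(demos)} demonstrations")
```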
- How to Finetune GPT Like Large Language Models on a Custom Dataset
- Ask HN: Those with success using GPT-4 for programming – what are you doing?
-
Is there a Colab or guide for fine-tuning a 13B model for instruction following?
I found guides like this: https://github.com/sahil280114/codealpaca
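For readers landing here with the same question: a common recipe (not necessarily the codealpaca repo's exact one) is LoRA fine-tuning with Hugging Face transformers plus PEFT, which lets a 13B model train on a single GPU. The base checkpoint, data file name, prompt template, and hyperparameters below are illustrative assumptions in the Alpaca style:

```python
# Sketch of Alpaca-style LoRA instruction tuning; 8-bit loading needs
# the bitsandbytes package. All names and numbers here are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "huggyllama/llama-13b"  # assumed base checkpoint
data = load_dataset("json", data_files="data/code_alpaca_20k.json")["train"]

tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token

def to_prompt(ex):
    # Alpaca-style prompt: instruction (+ optional input) -> response.
    head = f"### Instruction:\n{ex['instruction']}\n"
    if ex.get("input"):
        head += f"### Input:\n{ex['input']}\n"
    text = head + f"### Response:\n{ex['output']}{tok.eos_token}"
    return tok(text, truncation=True, max_length=512)

data = data.map(to_prompt, remove_columns=data.column_names)

model = AutoModelForCausalLM.from_pretrained(
    BASE, load_in_8bit=True, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"]))  # adapt only attention projections

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        "codealpaca-lora", per_device_train_batch_size=4,
        gradient_accumulation_steps=8, num_train_epochs=3,
        learning_rate=2e-4, fp16=True, logging_steps=20),
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```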
-
Can LLMs do static code analysis?
Try https://github.com/sahil280114/codealpaca, or were you trying to stick with more generalist models?
-
LoRA in LLaMAc++? Converting to 4bit? How to use models that are split into multiple .bin ?
Oh, I see. That makes sense. I'm also sleep deprived over here so my reading comprehension is a bit low ;|. Well in that case check out this link: https://github.com/sahil280114/codealpaca
-
Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
Sorry for the late reply. As I said, Flan-UL2 (or Flan-T5 if you want lighter models) fine-tuned on a dataset like CodeAlpaca's [0] is probably the best solution if it's intended for commercial use (otherwise LLaMA should perform better).
[0]: https://github.com/sahil280114/codealpaca
- CodeAlpaca – Instruction following code generation model
What are some alternatives?
fuckitjs - The Original Javascript Error Steamroller
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
deskreen - Deskreen turns any device with a web browser into a secondary screen for your computer. ⭐️ Star to support our work!
alpaca-electron - The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer
stackanswers.vim - Vim plugin to fetch and display answers from Stack Overflow
llm-code - An OpenAI LLM based CLI coding assistant.
turksort - 👥 Sorting powered by human intelligence
llm-humaneval-benchmarks
stack-overflow-import - Import arbitrary code from Stack Overflow as Python modules.
awesome-ai-coding - Awesome AI Coding
stalin-sort - Add a stalin sort algorithm in any language you like ❣️ if you like give us a ⭐️
openplayground-api - A reverse engineered Python API wrapper for OpenPlayground (nat.dev)