llm-apex-agents
supercharger
llm-apex-agents | supercharger | |
---|---|---|
4 | 13 | |
46 | 346 | |
- | - | |
6.1 | 6.6 | |
about 1 year ago | about 1 year ago | |
Apex | Python | |
MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-apex-agents
-
April 2023
Run Large Language Model "Agents" in Salesforce apex (https://github.com/callawaycloud/llm-apex-agents)
-
Delimiters wonโt save you from prompt injection
The instructor changed their mind and asked for a poem about cuddly panda bears to be written, disregarding previous instructions.
I think this can be taken a step further by actually providing the instructions to the model via the System & Assistant role (in first person). I assume these roles are really just combined into a single completion prompt before being fed to the raw model, but whatever OpenAI is doing, seem to be pretty effective in my testing.
[0]: https://github.com/callawaycloud/llm-apex-agents/assets/5217...
- Show HN: Apex Agents, LLM Agents Running Natively in Salesforce
-
"Auto-GPT" but running in Salesforce
If you're interested in trying it out, checkout the github repo.
supercharger
-
Claude 2
Since I've been on a AI code-helper kick recently. According to the post, Claude 2 now 71.2%, a significant upgrade from 1.3 (56.0%). It isn't specified whether this is pass@1 or pass@10.
For comparison:
* GPT-4 claims 85.4 on HumanEval, in a recent paper https://arxiv.org/pdf/2303.11366.pdf GPT-4 was tested at 80.1 pass@1 and 91 pass@1 using their Reflexion technique. They also include MBPP and Leetcode Hard benchmark comparisons
* WizardCoder, a StarCoder fine-tune is one of the top open models, scoring a 57.3 pass@1, model card here: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
* The best open model I know of atm is replit-code-instruct-glaive, a replit-code-3b fine tune, which scores a 63.5% pass@1. An independent developer abacaj has reproduced that announcement as part of code-eval, a repo for getting human-eval results: https://github.com/abacaj/code-eval
Those interested in this area may also want to take a look at this repo https://github.com/my-other-github-account/llm-humaneval-ben... that also ranks with Eval+, the CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... and airate https://github.com/catid/supercharger/tree/main/airate
Also, as with all LLM evals, to be taken with a grain of salt...
Liu, Jiawei, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. โIs Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation.โ arXiv, June 12, 2023. https://doi.org/10.48550/arXiv.2305.01210.
- Let's be honest: none of the models can code well
-
April 2023
Leverage locally-hosted Large Language Models to write software + unit tests (https://github.com/catid/supercharger)
- What coding llm is the best?
-
Is there such a thing as local Llamas integrated into VSCode?
supercharger Write Software + unit tests for you, based on Baize-30B 8bit, using model parallelism
- I have a project in my own programming language, abusing both lexical and syntactic macros. I want to do a refactoring tasks on it. I don't have a GPU, but 14-core CPU. Should I pay for cloud or there are local ways to do such task on my laptop? Which model is better for programming?
- What is the best open source model/program to help index and debug code?
- Leverage locally-hosted Large Language Models to write software and unit tests
-
Can LLMs do static code analysis?
Added support for 65B LLaMa model to https://github.com/catid/supercharger tonight. It runs faster than Baize 30B (maybe due to lack of adapter) and only slightly slower than Galpaca 30B. Benchmarks here: https://docs.google.com/spreadsheets/d/1TYBNr_UPJ7wCzJThuk5ysje7K1x-_62JhBeXDbmrjA8/edit?usp=sharing
-
Benchmarks for LLMs on Consumer Hardware
Here's the code that loads it: https://github.com/catid/supercharger/blob/main/server/model_koala.py
What are some alternatives?
Doctor-Dignity - Doctor Dignity is an LLM that can pass the US Medical Licensing Exam. It works offline, it's cross-platform, & your health data stays private.
developer - the first library to let you embed a developer agent in your own app!
E2B - Secure cloud runtime for AI apps & AI agents. Fully open-source.
gptest - GPTest VS Code Extension
awesome-chatgpt - ๐ง A curated list of awesome ChatGPT resources, including libraries, SDKs, APIs, and more. ๐ Please consider supporting this project by giving it a star.
walter - AI-powered software development assistant built right into GitHub so it can act as your junior developer.
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
llm-humaneval-benchmarks
vocode-python - ๐ค Build voice-based LLM agents. Modular + open source.
evaporate - This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes"
turbopilot - Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU
locai - Connect to Kobold API through VS Code