| | phasellm | turbopilot |
|---|---|---|
| Mentions | 14 | 15 |
| Stars | 443 | 3,839 |
| Growth | - | - |
| Activity | 8.9 | 10.0 |
| Latest commit | 3 months ago | 8 months ago |
| Language | Python | C++ |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
phasellm
Ask HN: Any recommended AI tools to analyze data and generate insights?
If you're looking for an open source solution you can customize, check out the ResearchLLM demo: https://phasellm.com/researchllm
Code: https://github.com/wgryc/phasellm/tree/main/demos-and-produc...
- PhaseLLM Eval: run batch LLM jobs and evals via visual front-end (MIT licensed)
To everyone who is using alternative bots (e.g. Claude) - your comparisons?
Using Claude, Cohere, GPT-4, and OpenAssistant. I swap between them using PhaseLLM (an open source library similar to LangChain).
April 2023
Large language model evaluation and workflow framework from Phase AI. (https://github.com/wgryc/phasellm)
- Ask HN: Freelancer? Seeking freelancer? (June 2023)
ResearchGPT: Automated Data Analysis and Interpretation
Fantastic questions! Re: working/not working at times -- this is still an issue. It's why I'm building PhaseLLM more broadly (https://github.com/wgryc/phasellm) -- need a robust pipeline that can also "reset" parts of itself if an LLM makes errors or mistakes.
You can see my prompts in this file: https://github.com/wgryc/phasellm/blob/main/demos-and-produc... I autogenerate a fairly big starting prompt and keep resubmitting it. It describes the data set extensively, which helps quite a bit.
That being said, a lot more can be done here around prompt optimization + making this more robust.
- ResearchGPT: LLMs to write stats code, analyze, and interpret results for you
Best way to use GPT offline with own content?
That being said, you might want to actually run head-to-head tests between models. PhaseLLM (free, open source) allows you to build a workflow and plug and play various models (including Dolly 2.0 and GPT-4). Then you can run tests to see how much worse/better the various LLMs are and if that's acceptable for your use case.
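The plug-and-play comparison described above can be sketched as a small harness that runs the same prompts through several model back-ends and collects the outputs side by side. The model callables below are hypothetical stand-ins, not PhaseLLM's actual API:

```python
# Minimal sketch of a head-to-head LLM comparison harness.
# fake_gpt4 / fake_dolly are hypothetical stand-ins for real model clients.

def fake_gpt4(prompt: str) -> str:
    return f"gpt-4 answer to: {prompt}"

def fake_dolly(prompt: str) -> str:
    return f"dolly-v2 answer to: {prompt}"

def run_head_to_head(models: dict, prompts: list) -> dict:
    """Run every prompt through every model and collect outputs side by side."""
    results = {}
    for prompt in prompts:
        results[prompt] = {name: model(prompt) for name, model in models.items()}
    return results

results = run_head_to_head(
    {"gpt-4": fake_gpt4, "dolly-v2": fake_dolly},
    ["Summarize this quarterly report."],
)
for prompt, answers in results.items():
    for name, answer in answers.items():
        print(f"[{name}] {answer}")
```

With real clients plugged in, the collected outputs can then be scored manually or with an automated eval to decide whether a cheaper model is acceptable for the use case.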
12-Apr-2023 AI Summary
Large language model evaluation and workflow framework from Phase AI. (https://github.com/wgryc/phasellm)
- PhaseLLM: Standardized Chat LLM API (Cohere, Claude, GPT) + Evaluation Framework
turbopilot
- New version of Turbopilot released!
GGML for Falcoder7B, SantaCoder 1B, TinyStarCoder 160M
fyi https://github.com/ravenscroftj/turbopilot
April 2023
TurboPilot: self-hosted copilot clone which uses the library behind llama.cpp to run the 6 Billion Parameter Salesforce Codegen model in 4GiB of RAM. (https://github.com/ravenscroftj/turbopilot)
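The 4 GiB figure is plausible from back-of-envelope arithmetic: at 4 bits per weight (ggml-style quantization), a 6-billion-parameter model needs roughly 3 GB for the weights alone, leaving headroom for activations and runtime overhead. A quick check:

```python
# Back-of-envelope memory estimate for a 4-bit-quantized 6B-parameter model.
params = 6e9
bits_per_weight = 4                     # ggml-style 4-bit quantization
weight_bytes = params * bits_per_weight / 8
weight_gib = weight_bytes / 2**30
print(f"weights alone: {weight_gib:.2f} GiB")
```

This ignores per-block quantization scales and the KV cache, so real usage is somewhat higher, but it still fits a 4 GiB budget.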
Which Models Best for Programming?
This repo has potential.
[D] What Repos/Tools Should We Pay Attention To?
Right now https://github.com/ggerganov/llama.cpp is the dominant back-end for querying models, but forks and alternatives like https://github.com/ravenscroftj/turbopilot keep popping up. Increasingly, models submitted to huggingface explicitly note in their READMEs that the model is not compatible with llama.cpp, and that a different back-end must be used.
newbie seeking impressive llama models, am i missing something?
There's turbopilot. I haven't tried it yet, but it looks promising.
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
LLM specialized in programming ?
Turbopilot | open source LLM code completion engine and Copilot alternative
Locally running models like Chatgpt for Emacs?
According to its README, this 6B-parameter tool can be run with 4 GB of RAM. https://github.com/ravenscroftj/turbopilot
What models and setup is good for generating code
There is an interesting link: https://github.com/ravenscroftj/turbopilot/wiki/Converting-and-Quantizing-The-Models . Just wondering if anyone has done this with 16B and put the weights somewhere.
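The conversion-and-quantization step linked above boils down to storing each weight in 4 bits plus one float scale per block of weights. A toy sketch of the idea, simplified and not ggml's actual on-disk format:

```python
# Simplified sketch of blockwise 4-bit quantization, in the spirit of
# ggml's Q4 formats (illustrative only, not the real file layout).

BLOCK = 32  # weights per quantization block

def quantize_block(block):
    """Map a block of floats to 4-bit ints in [-8, 7] plus one float scale."""
    amax = max(abs(x) for x in block) or 1.0
    scale = amax / 7.0
    q = [max(-8, min(7, round(x / scale))) for x in block]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate float weights from 4-bit ints and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9] * 8          # 32 toy weights
q, scale = quantize_block(weights)
restored = dequantize_block(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max abs error: {max_err:.4f}")
```

Each weight costs 4 bits instead of 32, an 8x reduction, at the price of a rounding error bounded by half a quantization step per weight.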
What are some alternatives?
awesome-chatgpt - 🧠A curated list of awesome ChatGPT resources, including libraries, SDKs, APIs, and more. 🌟 Please consider supporting this project by giving it a star.
tabby - Self-hosted AI coding assistant
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ggml - Tensor library for machine learning
rel-events - The relevant React Events Library.
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
kivy - Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
simpleAI - An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.