Top 8 Python vicuna Projects
- DB-GPT: AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
- InternGPT: InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
- safe-rlhf: Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
- willow-inference-server: Open-source, local, self-hosted, and highly optimized language inference server supporting ASR/STT, TTS, and LLMs over WebRTC, REST, and WebSockets
- ExpertLLaMA: An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability
- h2o-wizardlm: Open-source implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
- chat-llama-discord-bot: A Discord bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other large language model (LLM) supported by text-generation-webui or llama.cpp
Interact with your data and environment using a local GPT (https://github.com/csunny/DB-GPT)
You can also create an issue and ask the developers for help.
Project mention: [R] Meet Beaver-7B: a Constrained Value-Aligned LLM via Safe RLHF Technique | /r/MachineLearning | 2023-05-16
I think this perspective comes from a lack of historical experience and hands-on experience overall.
Nvidia more broadly has very impressive support for their GPUs, but the support lifecycles for their Jetson hardware have been significantly worse over time. I encourage you to look at what those lifecycles have looked like, with the most "egregious" example being the dropping of support for the Jetson Nano within, from what I recall, a couple of years.
Another consideration: Jetson is optimized for power efficiency and form factor, and on a per-dollar basis its CUDA performance is terrible. The power efficiency and form factor come at significant cost. See this discussion from one of my projects [0]. I evaluated the use of WIS on an Orin that I have, and from what I recall it was significantly slower than a GTX 1070, which is... unimpressive.
In the end, what do I care what people use? I'm offering the perspective and experience of someone who has actually used the Jetson line for many years and frequently struggled with all of these issues and more.
[0] - https://github.com/toverainc/willow-inference-server/discuss...
Project mention: ExpertPrompting: Instructing Large Language Models to be Distinguished Experts | /r/singularity | 2023-05-25
The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT-4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at this https URL.
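The two-stage recipe in the abstract (first synthesize an expert identity for each instruction, then condition the answer on it) can be sketched as plain prompt construction. This is a minimal illustration, not the paper's actual templates; `ask_llm` is a hypothetical callable standing in for whatever chat API you use:

```python
def identity_synthesis_prompt(instruction: str) -> str:
    """Stage 1: ask the LLM to write a detailed, customized expert
    description tailored to this specific instruction (the paper does
    this with In-Context Learning examples, omitted here)."""
    return (
        "Describe in detail the ideal expert to answer the following "
        f"instruction:\n{instruction}\nExpert description:"
    )

def build_expert_prompt(instruction: str, expert_identity: str) -> str:
    """Stage 2: condition the model's answer on the synthesized
    expert persona (identity first, then the task)."""
    return (
        f"Imagine you are the following expert:\n{expert_identity}\n\n"
        "Answer the instruction below as that expert would.\n\n"
        f"Instruction: {instruction}\nAnswer:"
    )

# Usage with a hypothetical ask_llm(prompt) -> str callable:
# identity = ask_llm(identity_synthesis_prompt("Explain RLHF briefly."))
# answer = ask_llm(build_expert_prompt("Explain RLHF briefly.", identity))
```

The instruction-following dataset behind ExpertLLaMA is then just (instruction, expert-conditioned answer) pairs collected by running stage 2 over a pool of instructions.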
Project mention: Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning | /r/aipromptprogramming | 2023-05-29
Python vicuna related posts
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
- GitHub - csunny/DB-GPT: Interact with your data and environment using a local GPT; no data leaks, 100% private, 100% secure
- DB-GPT - OSS to interact with your local LLM
- Show HN: DB-GPT, an LLM tool for database
A note from our sponsor - InfluxDB
www.influxdata.com | 30 Apr 2024
Index
What are some of the best open-source vicuna projects in Python? This list will help you:
# | Project | Stars
---|---|---
1 | DB-GPT | 10,943
2 | InternGPT | 3,121
3 | safe-rlhf | 1,149
4 | xllm | 348
5 | willow-inference-server | 316
6 | ExpertLLaMA | 289
7 | h2o-wizardlm | 274
8 | chat-llama-discord-bot | 113