Top 15 vicuna Open-Source Projects
-
DB-GPT
AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and agents
-
InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
-
safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
-
willow-inference-server
Open-source, local, self-hosted, highly optimized language inference server supporting ASR/STT, TTS, and LLM over WebRTC, REST, and WS
-
ExpertLLaMA
An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability.
-
vicuna-installation-guide
The "vicuna-installation-guide" provides step-by-step instructions for installing and configuring Vicuna 13 and 7B
-
h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
-
chat-llama-discord-bot
A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
-
ollama-ai
A Ruby gem for interacting with Ollama's API that allows you to run open-source LLMs (Large Language Models) locally.
You can also create an issue and ask the developers for help.
You might reuse the simple LLaMA tokenizer right in your Go code; see:
https://github.com/gotzmann/llama.go/blob/8cc54ca81e6bfbce25...
I think this perspective comes from a lack of historical context and hands-on experience.
Nvidia broadly has very impressive support for its GPUs, but if you look at the support lifecycles for its Jetson hardware over time, they are significantly worse. I encourage you to look at what those lifecycles have actually looked like; the most egregious example is dropping support for the Jetson Nano within, from what I recall, a couple of years.
Another consideration: Jetson is optimized for power efficiency and form factor, and on a per-dollar basis its CUDA performance is terrible. The power efficiency and form factor come at significant cost. See this discussion from one of my projects[0]. I evaluated running WIS on an Orin that I have, and from what I can recall it was significantly slower than a GTX 1070, which is unimpressive.
In the end, it doesn't matter to me what people use; I'm offering the perspective and experience of someone who has actually used the Jetson line for many years and frequently struggled with all of these issues and more.
[0] - https://github.com/toverainc/willow-inference-server/discuss...
Project mention: Show HN: Collider – the platform for local LLM debug and inference at warp speed | news.ycombinator.com | 2023-11-30
Project mention: Running Open-Source AI Models Locally with Ruby | news.ycombinator.com | 2024-02-05
> Although there’s no dedicated gem for Ollama yet
https://rubygems.org/gems/ollama-ai
https://github.com/gbaptista/ollama-ai
> A Ruby gem for interacting with Ollama's API that allows you to run open source AI LLMs (Large Language Models) locally
To be fair, this just depends on Faraday and wraps the HTTP API; it still doesn't automate the Ollama install, etc.
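Since the gem just wraps Ollama's HTTP API, here is a minimal sketch of what such a wrapper does under the hood, using only the Ruby standard library. It assumes a locally running Ollama server at the default address `http://localhost:11434` and a model name (`llama2` here) that you have already pulled; swap in whatever you actually run.

```ruby
require "json"
require "net/http"
require "uri"

# Build the JSON body for Ollama's /api/generate endpoint.
# `stream: false` requests a single JSON response instead of chunks.
def build_generate_request(model, prompt)
  JSON.generate(model: model, prompt: prompt, stream: false)
end

# POST the request to a running Ollama server and return the
# generated text. The default address is Ollama's standard port;
# change it if your server listens elsewhere.
def generate(model, prompt, address: "http://localhost:11434")
  uri = URI.join(address, "/api/generate")
  res = Net::HTTP.post(uri, build_generate_request(model, prompt),
                       "Content-Type" => "application/json")
  JSON.parse(res.body)["response"]
end

# Example (requires `ollama serve` and a pulled model):
# puts generate("llama2", "Why is the sky blue?")
```

This is roughly all the plumbing the gem saves you from writing; as the comment notes, neither approach installs or starts Ollama for you.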
vicuna related posts
-
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
-
GitHub - csunny/DB-GPT: Interact with your data and environment using a local GPT; no data leaks, 100% private, 100% secure
-
DB-GPT - OSS to interact with your local LLM
-
Show HN: DB-GPT, an LLM tool for database
-
Index
What are some of the best open-source vicuna projects? This list will help you:
# | Project | Stars |
---|---|---|
1 | DB-GPT | 11,374 |
2 | InternGPT | 3,144 |
3 | text-generation-webui-colab | 2,051 |
4 | llama.go | 1,181 |
5 | safe-rlhf | 1,188 |
6 | LLaMA-Cult-and-More | 421 |
7 | xllm | 357 |
8 | AgentLLM | 356 |
9 | willow-inference-server | 333 |
10 | ExpertLLaMA | 289 |
11 | vicuna-installation-guide | 288 |
12 | h2o-wizardlm | 284 |
13 | booster | 126 |
14 | chat-llama-discord-bot | 115 |
15 | ollama-ai | 107 |