Python vicuna

Open-source Python projects categorized as vicuna

Top 7 Python vicuna Projects

  • DB-GPT

AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and Agents

    Project mention: (2/2) May 2023 | /r/dailyainews | 2023-06-02

Interact with your data and environment using a local GPT (https://github.com/csunny/DB-GPT)

  • InternGPT

InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)

    Project mention: How do I use the programs on Github? | /r/github | 2023-06-16

    You can also create an issue and ask the developers for help.

  • safe-rlhf

    Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

    Project mention: [R] Meet Beaver-7B: a Constrained Value-Aligned LLM via Safe RLHF Technique | /r/MachineLearning | 2023-05-16
  • xllm

    🦖 X—LLM: Cutting Edge & Easy LLM Finetuning

    Project mention: X–LLM: Cutting Edge and Easy LLM Finetuning | news.ycombinator.com | 2023-11-16
  • ExpertLLaMA

An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability.

    Project mention: ExpertPrompting: Instructing Large Language Models to be Distinguished Experts | /r/singularity | 2023-05-25

The answering quality of an aligned large language model (LLM) can be drastically improved with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT-4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at this https URL.
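The two-stage flow the abstract describes can be sketched as follows. This is an illustrative mock-up, not the paper's actual templates: the prompt wording, function names, and the hard-coded expert description are placeholders, and in the paper stage 1 is performed by GPT-3.5 with in-context examples rather than a fixed template.

```python
# Hypothetical sketch of ExpertPrompting's two stages:
# 1) synthesize an expert-identity description for an instruction,
# 2) ask the model to answer conditioned on that identity.

def synthesize_identity_prompt(instruction: str) -> str:
    """Stage 1: build a prompt asking an LLM to write a detailed,
    customized expert-identity description for this instruction.
    (The paper does this with in-context learning; wording here is invented.)"""
    return (
        "Describe in detail an expert who would be ideally suited to "
        "answer the following instruction.\n\n"
        f"Instruction: {instruction}\n\nExpert description:"
    )

def build_expert_prompt(instruction: str, expert_description: str) -> str:
    """Stage 2: ask the LLM to answer as the synthesized expert,
    i.e. conditioned on the 'agent background'."""
    return (
        f"{expert_description}\n\n"
        "Answer the following instruction as this expert would.\n\n"
        f"Instruction: {instruction}\nAnswer:"
    )

if __name__ == "__main__":
    instruction = "Explain how gradient descent minimizes a loss function."
    # A real pipeline would obtain this from an LLM call on
    # synthesize_identity_prompt(instruction); hard-coded for the sketch.
    identity = (
        "You are a machine learning researcher with a decade of experience "
        "teaching optimization methods to graduate students."
    )
    print(build_expert_prompt(instruction, identity))
```

The key design point is that the expert description is generated per instruction rather than reused, so the persona stays specific to the question being asked.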

  • willow-inference-server

    Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS

    Project mention: Brave Leo now uses Mixtral 8x7B as default | news.ycombinator.com | 2024-01-27

    I think this perspective comes from a lack of historical and hands-on experience.

    Nvidia more broadly has very impressive support for their GPUs, but if you look at the support lifecycles for their Jetson hardware over time, it's significantly worse. I encourage you to look at what those support lifecycles have looked like; the most egregious example is the dropping of support for the Jetson Nano within, from what I recall, a couple of years.

    Another consideration: Jetson is optimized for power efficiency and form factor, and on a per-dollar basis its CUDA performance is terrible. The power efficiency and form factor come at significant cost. See this discussion from one of my projects[0]. I evaluated WIS on an Orin that I have, and from what I recall it was significantly slower than a GTX 1070, which is unimpressive.

    In the end, it doesn't matter to me what people use; I'm offering the perspective and experience of someone who has actually used the Jetson line for many years and frequently struggled with all of these issues and more.

    [0] - https://github.com/toverainc/willow-inference-server/discuss...

  • h2o-wizardlm

    Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning

    Project mention: Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning | /r/aipromptprogramming | 2023-05-29

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2024-01-27.

Index

What are some of the best open-source vicuna projects in Python? This list will help you:

#  Project                  Stars
1  DB-GPT                   9,875
2  InternGPT                3,064
3  safe-rlhf                1,060
4  xllm                       323
5  ExpertLLaMA                285
6  willow-inference-server    265
7  h2o-wizardlm               256