| | NExT-GPT | InternGPT |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 2,882 | 3,135 |
| Growth | - | 1.5% |
| Activity | 9.3 | 8.8 |
| Latest commit | 4 months ago | 6 months ago |
| Language | Python | Python |
| License | BSD 3-Clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
How do I use the programs on GitHub?
Check each project's README for installation and usage instructions. You can also create an issue and ask the developers for help.
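As a rough sketch, most Python projects like these follow a similar setup flow. The exact entry point and dependency files vary per repository (the filenames below are typical, not confirmed for either project), so consult the README first:

```shell
# Clone the repository (InternGPT's URL is given in its listing below)
git clone https://github.com/OpenGVLab/InternGPT.git
cd InternGPT

# Create an isolated Python environment so dependencies don't conflict
python -m venv .venv
. .venv/bin/activate

# Install the project's dependencies (filename is the common convention)
pip install -r requirements.txt

# Launch the demo; the actual script name depends on the project's README
python app.py
```

Large multimodal models like these also typically require downloading pretrained checkpoints separately; the README usually links to them.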
Recent posts about InternGPT:
- DragGAN demo is now live!! Best AI Tool For Editing Images
- Web based multimodal ChatGPT - InternGPT
What are some alternatives?
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
langchain-chatbot - Chatbot built with an LLM chat model, LangChain, and LangSmith.
gpt_academic - A practical interactive interface for LLMs such as GPT/GLM, specially optimized for reading, polishing, and writing academic papers. Modular design with custom shortcut buttons and function plugins; supports analysis and self-explanation of Python, C++, and other projects; PDF/LaTeX paper translation and summarization; parallel queries across multiple LLMs; and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
MiniGPT-4-discord-bot - A true multimodal LLaMA derivative -- on Discord!
InternChat - InternGPT / InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. [Moved to: https://github.com/OpenGVLab/InternGPT]
xllm - 🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
Video-LLaMA - [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
codeinterpreter-api - 👾 Open source implementation of the ChatGPT Code Interpreter
Otter - 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Multi-Modality-Arena - Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
agentchain - Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks