| | Instruct2Act | LLMSurvey |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 273 | 9,126 |
| Growth | 9.2% | 11.5% |
| Activity | 3.8 | 6.4 |
| Last commit | about 2 months ago | 13 days ago |
| Language | Python | Python |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity score of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Instruct2Act
- [R] Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
  Code: https://github.com/OpenGVLab/Instruct2Act
LLMSurvey
- Ask HN: Textbook Regarding LLMs
  Here's another one - it's older but has some interesting charts and graphs.
  https://arxiv.org/abs/2303.18223
- Share your favorite materials: intersection of LLMs and business applications
  There have recently been some nice early surveys on progress, pitfalls, and future research directions:
  - A Survey of Large Language Models: https://arxiv.org/abs/2303.18223
What are some alternatives?
Caption-Anything - Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with diverse controls for user preferences. https://huggingface.co/spaces/TencentARC/Caption-Anything https://huggingface.co/spaces/VIPLab/Caption-Anything
ChatGLM2-6B - An Open Bilingual Chat LLM (open-source bilingual dialogue language model)
openscene - [CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
InternChat - InternGPT / InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. [Moved to: https://github.com/OpenGVLab/InternGPT]
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
HugNLP - CIKM 2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library built on HuggingFace Transformers. Start hugging for NLP now! 😊
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
InternGPT - InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It currently supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
Qwen - The official repo of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud.