camel vs EasyLM

| | camel | EasyLM |
|---|---|---|
| Mentions | 5 | 8 |
| Stars | 4,477 | 2,247 |
| Stars growth | 4.0% | - |
| Activity | 8.9 | 7.7 |
| Last commit | 3 days ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
camel
-
Auto-GPT seems nearly unusable
Look into the following: https://github.com/lightaime/camel
-
Hmmmm.... perhaps there's a leap forward here
Anyway - have a look at the demos on https://www.camel-ai.org/. It absolutely blew me away. I gave it a task to make a lemonade game in python. It...just did it.
-
AI — weekly megathread!
CAMEL (Communicative Agents for “Mind” Exploration of LLM Society) - AI agents interacting with each other and collaborating. E.g., two ChatGPT agents playing the roles of a Python programmer and a stock trader, collaborating to develop a trading bot for the stock market. [ Colab of the demo | Project website]
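The CAMEL role-playing setup described above can be sketched as a simple turn-taking loop between two role-conditioned agents. This is a minimal illustration, not CAMEL's actual API: the `respond` function here is a hypothetical stub standing in for a real LLM call, and the role names are taken from the trading-bot example.

```python
def respond(role: str, history: list[str]) -> str:
    # Hypothetical stub: a real implementation would send the role prompt
    # plus the conversation so far to an LLM (e.g. ChatGPT) and return its reply.
    return f"[{role}] reply to: {history[-1]}"

def role_play(task: str, role_a: str, role_b: str, turns: int = 3) -> list[str]:
    """Alternate messages between two role-conditioned agents on a shared task."""
    history = [f"Task: {task}"]
    roles = [role_a, role_b]
    for i in range(turns * 2):  # each "turn" is one message from each agent
        history.append(respond(roles[i % 2], history))
    return history

transcript = role_play(
    "Develop a trading bot for the stock market",
    role_a="Python Programmer",
    role_b="Stock Trader",
)
```

The key idea CAMEL demonstrates is that conditioning each agent on a distinct role and letting them converse autonomously can decompose a task without step-by-step human prompting; the loop above only captures the message-passing skeleton of that.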
-
6-Apr-2023
CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society (https://github.com/lightaime/camel)
EasyLM
- Maxtext: A simple, performant and scalable Jax LLM
- How To Fine-Tune LLaMA, OpenLLaMA, And XGen, With JAX On A GPU Or A TPU
-
Open-sourced LLMs are adept at mimicking ChatGPT’s style but not its factuality. There is a substantial capabilities gap, which closing requires a better base LM.
Title: The False Promise of Imitating Proprietary LLMs
Authors: Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
Word Count: 3,400
Average Reading Time: 18-20 minutes
Source Code: https://github.com/young-geng/EasyLM
Additional Links: https://huggingface.co/young-geng/koala-eval, https://huggingface.co/young-geng/koala
-
Paid dev gig: develop a basic LLM PEFT finetuning utility
Check out EasyLM: https://github.com/young-geng/EasyLM
-
OpenLLaMA Releases 7B/3B Checkpoints with 700B/600B Tokens
We release the weights in two formats: an EasyLM format to be used with our EasyLM framework, and a PyTorch format to be used with the Hugging Face transformers library.
-
OpenLLaMA: An Open Reproduction of LLaMA
I am quite new to this and would like to get it running. Would the process roughly be:
1. Get a machine with a decent GPU, probably a rented cloud GPU.
2. On that machine download the weights/model/vocab files from https://huggingface.co/openlm-research/open_llama_7b_preview...
3. Install Anaconda. Clone https://github.com/young-geng/EasyLM/.
4. Install EasyLM:
conda env create -f scripts/gpu_environment.yml
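The steps above can be put together as a shell sketch. This is an assumption-laden outline, not verified instructions: the conda environment name (`EasyLM`) and the clone-then-create order are guesses, so check the EasyLM README before running.

```shell
# 1. On the cloud GPU machine, clone EasyLM
git clone https://github.com/young-geng/EasyLM.git
cd EasyLM

# 2. Create and activate the conda environment from the provided GPU spec
#    (environment name is an assumption; see the yml file for the actual name)
conda env create -f scripts/gpu_environment.yml
conda activate EasyLM

# 3. Download the weights/model/vocab files from the Hugging Face repo
#    linked in step 2 above, e.g. via git lfs or the huggingface-cli tool
```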
- Koala: A Dialogue Model for Academic Research [Finetuned Llama-13B on a dataset generated by ChatGPT]
What are some alternatives?
multi_agent_path_planning - Python implementation of a bunch of multi-robot path-planning algorithms.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
Open-Llama - The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
dialop - DialOp: Decision-oriented dialogue environments for collaborative language agents
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product