ColossalAI vs determined
| | ColossalAI | determined |
|---|---|---|
| Mentions | 42 | 10 |
| Stars | 39,061 | 3,092 |
| Growth | 0.2% | 0.8% |
| Activity | 9.7 | 9.8 |
| Latest commit | 5 days ago | 10 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ColossalAI
- FLaNK AI - April 22, 2024
- Making large AI models cheaper, faster and more accessible
-
ColossalChat: An Open-Source Solution for Cloning ChatGPT with a RLHF Pipeline
> open-source a complete RLHF pipeline ... based on the LLaMA pre-trained model
I've gotten to the point where, when I see "open source AI", I know it means "well, except for $some_other_dependencies".
Anyway: https://scribe.rip/@yangyou_berkeley/colossalchat-an-open-so... and https://github.com/hpcaitech/ColossalAI#readme (Apache 2) can save you some medium.com heartache at least
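For a sense of what the reward-modeling stage of such an RLHF pipeline involves, here's a minimal PyTorch sketch of the standard pairwise ranking loss; the tiny `RewardModel` and random tensors are illustrative stand-ins, not ColossalChat's actual code:

```python
# Minimal sketch of the pairwise reward-model loss used in RLHF pipelines:
# the reward model should score the human-preferred ("chosen") response
# higher than the rejected one. Toy model and random data for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Stand-in reward model: maps a pooled response embedding to a scalar."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.score(pooled).squeeze(-1)  # shape: (batch,)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Pretend embeddings for (prompt + chosen) and (prompt + rejected) pairs.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)

r_chosen, r_rejected = model(chosen), model(rejected)
# -log(sigmoid(r_chosen - r_rejected)) pushes chosen scores above rejected.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
optimizer.step()
print(f"reward ranking loss: {loss.item():.4f}")
```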
-
Meet ColossalChat: An Open-Source AI Solution For Cloning ChatGPT With A Complete RLHF Pipeline
Quick Read: https://www.marktechpost.com/2023/04/01/meet-colossalchat-an-open-source-ai-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/ Github: https://github.com/hpcaitech/ColossalAI Examples: https://chat.colossalai.org/
-
A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
One of the current methods for training competing models is to have ChatGPT literally create prompt -> completion data sets. That's what was used for https://github.com/hpcaitech/ColossalAI: a model based on the LLaMA weights released by Facebook, then fine-tuned on ChatGPT 3.5 prompts and completions. So yes, there is a good chance that Google is literally using ChatGPT in the training loop.
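As a concrete illustration of that data-generation step, the sketch below asks a ChatGPT-family model to answer a list of prompts and stores the resulting prompt/completion pairs; it assumes the `openai` Python package (v1+ client) and an `OPENAI_API_KEY` in the environment, and is not ColossalAI's actual pipeline:

```python
# Hypothetical sketch: build a prompt -> completion fine-tuning dataset by
# querying a ChatGPT-family model. Assumes openai>=1.0 and OPENAI_API_KEY set.
import json
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain gradient checkpointing in one paragraph.",
    "Write a haiku about distributed training.",
]

with open("sft_dataset.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        completion = resp.choices[0].message.content
        # One JSON record per line, ready for a supervised fine-tuning step.
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```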
- Colossal-AI: open-source RLHF pipeline based on LLaMA pre-trained model
- ColossalChat
-
ColossalChat: An Open-Source Solution for Cloning ChatGPT with RLHF Pipeline
Here's the github from the article:
https://github.com/hpcaitech/ColossalAI
-
Open source solution replicates ChatGPT training process
The article briefly covers their RLHF implementation; there are more details here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3...
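For context on what the PPO stage of an RLHF implementation optimizes, here is a generic, self-contained PyTorch sketch of the clipped surrogate objective; the random log-probabilities and advantages are placeholders, and this is textbook PPO rather than the linked ColossalAI code:

```python
# Generic PPO clipped surrogate objective (the optimization step at the heart
# of most RLHF implementations). Random tensors stand in for real rollouts.
import torch

def ppo_clip_loss(logp_new: torch.Tensor,
                  logp_old: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    # Probability ratio between the current and the rollout-time policy.
    ratio = torch.exp(logp_new - logp_old)
    # Unclipped and clipped surrogate terms; take the pessimistic minimum.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Placeholder rollout data: per-token log-probs and advantage estimates.
logp_old = torch.randn(16)
logp_new = logp_old + 0.1 * torch.randn(16)  # policy has drifted slightly
advantages = torch.randn(16)

loss = ppo_clip_loss(logp_new, logp_old, advantages)
print(f"PPO clip loss: {loss.item():.4f}")
```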
-
how can I make my own chatGPT?
Here’s the project on GitHub: https://github.com/hpcaitech/ColossalAI
determined
-
Open Source Advent Fun Wraps Up!
17. Determined AI | Github | tutorial
-
ML Experiments Management with Git
Use Determined if you want a nice UI https://github.com/determined-ai/determined#readme
- Determined: Deep Learning Training Platform
-
Queueing/Resource Management Solutions for Self Hosted Workstation?
I looked it up and found [Determined Platform](determined.ai), though it looks like a very young project and I don't know if it's reliable enough.
-
Ask HN: Who is hiring? (June 2022)
- Developer Support Engineer (~1/3 client-facing, triaging feature requests and bug reports, etc.; ~2/3 debugging/troubleshooting)
We are developing enterprise-grade artificial intelligence products/services for AI engineering teams and Fortune 500 companies, and we need more software devs to meet the increasing demand.
Find out more at https://determined.ai/. If AI piques your curiosity or you want to interface with highly skilled engineers in the community, apply within (search "determined ai" at careers.hpe.com and drop me a message at asnell AT hpe PERIOD com).
-
How to train large deep learning models as a startup
Check out Determined https://github.com/determined-ai/determined to help manage this kind of work at scale: it leverages Horovod under the hood, automatically manages cloud resources, can get you up on spot instances, T4s, etc., and works on your local cluster as well. It also gives you features like experiment management, scheduling, profiling, a model registry, and advanced hyperparameter tuning.
Full disclosure: I'm a founder of the project.
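To give a flavor of what porting a model to Determined looks like, here's a hedged sketch of the `PyTorchTrial` interface (toy linear model and random data; the API has evolved across versions, so treat the names as approximate and check the current docs):

```python
# Hedged sketch of Determined's PyTorchTrial interface: wrap the model and
# optimizer so the platform can handle distributed training, checkpointing,
# and hyperparameter search. Toy model and random data for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset
from determined.pytorch import DataLoader, PyTorchTrial, PyTorchTrialContext

class ToyTrial(PyTorchTrial):
    def __init__(self, context: PyTorchTrialContext):
        self.context = context
        self.model = context.wrap_model(nn.Linear(10, 1))
        self.optimizer = context.wrap_optimizer(
            torch.optim.SGD(self.model.parameters(), lr=0.01))
        # Random regression data standing in for a real dataset.
        self.data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))

    def build_training_data_loader(self):
        return DataLoader(self.data,
                          batch_size=self.context.get_per_slot_batch_size())

    def build_validation_data_loader(self):
        return DataLoader(self.data,
                          batch_size=self.context.get_per_slot_batch_size())

    def train_batch(self, batch, epoch_idx, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.model(x), y)
        self.context.backward(loss)                  # instead of loss.backward()
        self.context.step_optimizer(self.optimizer)  # instead of optimizer.step()
        return {"loss": loss}

    def evaluate_batch(self, batch):
        x, y = batch
        return {"validation_loss": F.mse_loss(self.model(x), y)}
```

Determined calls `train_batch`/`evaluate_batch` itself, which is what lets it transparently layer in distributed execution and experiment tracking around your training loop.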
-
[D] managing compute for long running ML training jobs
These are some of the problems we are trying to solve with the Determined training platform. Determined can run with or without k8s; the k8s version inherits some of k8s's scheduling problems, but the non-k8s version uses a custom gang scheduler designed for large-scale ML training. Determined also offers a priority scheduler that lets smaller jobs run while still allowing you to schedule a large distributed job whenever you need it, by giving it a higher priority.
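As a rough illustration of that priority mechanism, here is a hypothetical snippet submitting an experiment with an explicit priority via Determined's Python SDK; the master address, entrypoint script, and searcher fields are made up, and required config fields vary by Determined version:

```python
# Hypothetical sketch: submit an experiment with an explicit scheduling
# priority via Determined's Python SDK. Assumes the cluster's resource
# manager uses the priority scheduler; exact config fields vary by version.
from determined.experimental import client

client.login(master="http://localhost:8080", user="determined")  # made-up master

config = {
    "name": "small-interactive-job",
    "entrypoint": "python3 train.py",  # hypothetical training script
    "resources": {
        "slots_per_trial": 1,
        # Scheduling priority relative to other jobs; check your version's
        # docs for the priority scheduler's ordering semantics.
        "priority": 10,
    },
    # Some Determined versions require searcher.max_length; adjust as needed.
    "searcher": {"name": "single", "metric": "loss",
                 "max_length": {"batches": 100}},
}
client.create_experiment(config=config, model_dir=".")
```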
-
Cerebras’ New Monster AI Chip Adds 1.4T Transistors
Ah I see - I think we're pretty much on the same page in terms of timetables. Although if you include TPU, I think it's fair to say that custom accelerators are already a moderate success.
Updated my profile. I've been working on DL training platforms and distributed training benchmarking for a bit so I've gotten a nice view into the GPU/TPU battle.
Shameless plug: you should check out the open-source training platform we are building, Determined[1]. One of our goals is to take our hard-earned expertise in training infrastructure and build a tool where people don't need that infrastructure expertise themselves. We don't support TPUs, partly because of a lack of demand/TPU availability, and partly because our PyTorch TPU experiments were so unimpressive.
[1] GH: https://github.com/determined-ai/determined, Slack: https://join.slack.com/t/determined-community/shared_invite/...
-
[D] Software stack to replicate Azure ML / Google Auto ML on premise
Take a look at Determined https://github.com/determined-ai/determined
-
AWS open source news and updates No.41
Determined is an open-source deep learning training platform that makes building models fast and easy. The project provides a CloudFormation template to bootstrap you into AWS, plus a number of tutorials covering how to manage your data, train models, and then deploy inference endpoints. If you are looking to explore more open-source machine learning projects, check this one out.
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Dagger.jl - A framework for out-of-core and parallel execution
Megatron-LM - Ongoing research training transformer models at scale
aws-virtual-gpu-device-plugin - AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads
DeepFaceLive - Real-time face swap for PC streaming or video calls
goofys - a high-performance, POSIX-ish Amazon S3 file system written in Go
ivy - Convert Machine Learning Code Between Frameworks
adaptdl - Resource-adaptive cluster scheduler for deep learning training.
fairscale - PyTorch extensions for high performance and large scale training.
cfn-diagram - CLI tool to visualise CloudFormation/SAM/CDK stacks as visjs networks, draw.io or ascii-art diagrams.
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
pocketsphinx - A small speech recognizer

