DeepFaceLive
DISCONTINUED
ColossalAI
| | DeepFaceLive | ColossalAI |
|---|---|---|
| Mentions | 55 | 41 |
| Stars | 13,912 | 37,465 |
| Growth | - | 3.2% |
| Activity | 8.4 | 9.7 |
| Latest commit | 10 months ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepFaceLive
- Is it possible to sync a lip and facial expression animation with audio in real time?
- Is there a way to do facial rigs on AI images?
A more lifelike deformer would run a 'deepfake' layer over your face motion onto your 2D character's face, but I haven't tried that yet. Here is an example of a well-known open-source 'faceswapper': https://github.com/iperov/DeepFaceLive
- Animate your stable diffusion portraits
- Selfhosted AI
- Deepfakes in High-Resolution Created From a Single Photo
- AI MoistCritical roasts the fuck out of Athene
Live video feed deepfake: DeepFaceLive
- Stop Developing This Technology
- Keanu Reeves started streaming on Twitch
DeepFaceLive
Edit: Ah this is literally DFL, Keanu is another default face now: https://github.com/iperov/DeepFaceLive
I guess DeepFaceLive. It's based on DeepFaceLab, which almost all deepfakes are made with.
ColossalAI
- Open source solution replicates ChatGPT training process
The article briefly covers their RLHF implementation; there are more details on it here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3...
- An Open-Source Version of ChatGPT is Coming [News]
Need to deploy the inference model with Colossal AI.
- Training dreambooth/embeddings on an RTX 3060 - possible?
It's a framework with a lot of pipeline-parallelism optimizations that let you avoid fitting the whole model in VRAM. https://www.hpc-ai.tech/blog/diffusion-pretraining-and-hardware-fine-tuning-can-be-almost-7x-cheaper Tutorial here: https://github.com/hpcaitech/ColossalAI/blob/main/examples/images/dreambooth/README.md I have AMD cards, so I haven't tried this yet, but I'm thinking of converting my AMD GPU server over to NVIDIA because of this.
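The idea behind pipeline parallelism mentioned above can be sketched in plain Python. This is a toy conceptual illustration only, not Colossal-AI's actual API: the model's layers are partitioned into contiguous stages, each of which could live on a separate device, so no single device has to hold every layer.

```python
# Toy sketch of pipeline parallelism (illustration only, not Colossal-AI's API).
# A "model" is just a list of layer functions; we split it into stages.

def split_into_stages(layers, num_stages):
    """Partition layers into roughly equal contiguous stages."""
    per_stage = (len(layers) + num_stages - 1) // num_stages
    return [layers[i:i + per_stage] for i in range(0, len(layers), per_stage)]

def run_pipeline(stages, x):
    """Pass the activation through each stage in turn, as devices would
    hand activations to one another along the pipeline."""
    for stage in stages:
        for layer in stage:
            x = layer(x)
    return x

# Four tiny "layers"; with 2 stages, each device holds only half the model.
layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v + 3, lambda v: v * 4]
stages = split_into_stages(layers, num_stages=2)
print([len(s) for s in stages])   # [2, 2]
print(run_pipeline(stages, 1))    # ((1 + 1) * 2 + 3) * 4 = 28
```

In a real framework each stage would sit on its own GPU and micro-batches would flow through the stages concurrently; the sketch only shows the partitioning that keeps per-device memory below the full model size.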
- A complete open-source solution for accelerating Stable Diffusion
Hey folks. We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning. It reduces the pretraining cost by 6.5 times and the hardware cost of fine-tuning by 7 times, while simultaneously speeding up the processes.
Open source address: https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
We also wrote a blog post about it: https://medium.com/@yangyou_berkeley/diffusion-pretraining-and-hardware-fine-tuning-can-be-almost-7x-cheaper-85e970fe207b
Glad to hear your thoughts about our work!
Just to make the links clickable:
https://github.com/hpcaitech/ColossalAI/tree/main/examples/i...
https://medium.com/@yangyou_berkeley/diffusion-pretraining-a...
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Open source address: https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion
- Colossal-AI releases a complete open-source Stable Diffusion pretraining and fine-tuning solution that reduces the pretraining cost by 6.5 times, and the hardware cost of fine-tuning by 7 times, while simultaneously speeding up the processes
- Colossal-AI Seamlessly Accelerates Large Models at Low Costs with Hugging Face
Project address: https://github.com/hpcaitech/ColossalAI
References:
- https://arxiv.org/abs/2202.05924v2
- https://arxiv.org/abs/2205.11487
- https://github.com/features/copilot
- https://github.com/huggingface/transformers
- https://www.forbes.com/sites/forbestechcouncil/2022/03/25/six-ai-trends-to-watch-in-2022/?sh=4dc51f82be15
- https://www.infoq.com/news/2022/06/meta-opt-175b/
- The 10 Trending Python Repositories on GitHub (May 2022)
ColossalAI
What are some alternatives?
DeepFaceLab - DeepFaceLab is the leading software for creating deepfakes.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Megatron-LM - Ongoing research training transformer models at scale
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
fairscale - PyTorch extensions for high performance and large scale training.
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
libreddit - Private front-end for Reddit
web2img - Bundle web files into a single image
Lemmy - 🐀 A link aggregator and forum for the fediverse
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (『飞桨』核心框架,深度学习&机器学习高性能单机、分布式训练和跨平台部署)
PaddleNLP - 👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
ivy - The Unified AI Framework