FastFold vs torchsynth

| | FastFold | torchsynth |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 506 | 319 |
| Growth | - | 1.6% |
| Activity | 0.0 | 6.1 |
| Latest commit | 10 months ago | 13 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FastFold
- 👉 Impressed With AlphaFold? Check Out This Protein Structure Prediction Model (FastFold) That Reduces AlphaFold's Training Time From 11 Days To 67 Hours
  Code for https://arxiv.org/abs/2203.00854 found: https://github.com/hpcaitech/FastFold
- GitHub: https://github.com/hpcaitech/FastFold
torchsynth
- Is there any AI sound generator that is not voice?
  One Billion Audio Sounds from GPU-enabled Modular Synthesis - synthesizing modular synths. Code here.
- Massively Parallel Rendering of Complex Closed-Form Implicit Surfaces (2020)
  https://www.cv-foundation.org/openaccess/content_cvpr_2016/p...
  This concept has not (yet) been applied in audio ML. We have a paper in submission (it will be on arXiv soon) where we share a GPU-enabled modular synthesizer that runs 16,000x faster than real time, released concurrently with a 1-billion-sample audio corpus that is 100x larger than any audio dataset in the literature. Here's the code: https://github.com/torchsynth/torchsynth
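The speedup claimed above comes from rendering whole batches of parameter settings in one vectorized pass instead of one sound at a time. The idea can be sketched in plain NumPy; this is an illustrative toy, not torchsynth's actual API (torchsynth builds full synth voices as batched PyTorch modules that run on the GPU):

```python
import numpy as np

def batched_sine(freqs_hz, duration_s=1.0, sample_rate=44100):
    """Render a batch of sine tones in a single vectorized operation.

    Mirrors the core idea behind GPU modular synthesis: each module
    processes an entire batch of parameter settings at once, so the
    cost per rendered sound drops dramatically.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate  # (samples,)
    freqs = np.asarray(freqs_hz, dtype=np.float64)              # (batch,)
    # Broadcasting produces a (batch, samples) matrix of phases.
    return np.sin(2.0 * np.pi * freqs[:, None] * t[None, :])

# Render 8 tones in one call; a GPU synth does the same for full
# patches with hundreds of random parameter settings per batch.
audio = batched_sine([220.0 * 2 ** (k / 12) for k in range(8)], duration_s=0.5)
print(audio.shape)  # (8, 22050)
```

On a GPU, the same broadcasting pattern (with `torch` tensors instead of NumPy arrays) is what makes faster-than-real-time batch rendering possible.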
What are some alternatives?
openfold - Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
koila - Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
warp-drive - Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)
bittensor - Internet-scale Neural Networks
torchrec - PyTorch domain library for recommendation systems
TorchGA - Train PyTorch Models using the Genetic Algorithm with PyGAD
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
autocvd - Tool to automatically set CUDA_VISIBLE_DEVICES based on GPU utilization. Usable from command line and code.
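The last item describes a common pattern worth illustrating: query per-GPU load, pick the least-loaded device, and set `CUDA_VISIBLE_DEVICES` before any framework initializes. A minimal sketch of that idea follows, with hypothetical helper names; it is not autocvd's actual implementation, and the `nvidia-smi` query naturally requires an NVIDIA driver:

```python
import os
import subprocess

def pick_least_used(mem_used_mib):
    """Return the index of the GPU with the least memory currently in use."""
    return min(range(len(mem_used_mib)), key=mem_used_mib.__getitem__)

def query_gpu_memory():
    """Ask nvidia-smi for per-GPU used memory in MiB, one value per line."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.strip().splitlines()]

def set_visible_device():
    """Point CUDA at the least-loaded GPU; call before importing torch/TF."""
    idx = pick_least_used(query_gpu_memory())
    os.environ["CUDA_VISIBLE_DEVICES"] = str(idx)
    return idx

# Usage (on a machine with NVIDIA GPUs):
#   set_visible_device()
#   import torch  # now sees only the chosen GPU as device 0
```

Setting the variable before the framework imports matters because CUDA enumerates visible devices once, at initialization.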