| | FastFold | torchrec |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 506 | 1,728 |
| Growth | - | 1.3% |
| Activity | 0.0 | 9.8 |
| Latest commit | 10 months ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
FastFold
👉 Impressed With AlphaFold? Check Out This Protein Structure Prediction Model (FastFold) That Reduces AlphaFold's Training Time From 11 Days To 67 Hours
Paper: https://arxiv.org/abs/2203.00854
GitHub: https://github.com/hpcaitech/FastFold
torchrec
PyTorch Introduces 'TorchRec': A Python-Based PyTorch Domain Library For Recommendation Systems (RecSys)
TorchRec is a new PyTorch domain library for recommendation systems. It provides standard sparsity and parallelism primitives, allowing researchers to build and deploy cutting-edge recommendation models.
What are some alternatives?
torchsynth - A GPU-optional modular synthesizer in pytorch, 16200x faster than realtime, for audio ML researchers.
Federated-Recommendation-Neural-Collaborative-Filtering - Federated Neural Collaborative Filtering (FedNCF). Neural Collaborative Filtering uses the flexibility, complexity, and non-linearity of neural networks to build a recommender system; this project aims to federate that recommender system.
openfold - Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
federeco - Implementation of the federated neural collaborative filtering algorithm.
warp-drive - Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
NVTabular - NVTabular is a feature engineering and preprocessing library for tabular data, designed to quickly and easily manipulate terabyte-scale datasets used to train deep-learning-based recommender systems.
autocvd - Tool to automatically set CUDA_VISIBLE_DEVICES based on GPU utilization. Usable from command line and code.
LLMRec - [WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
NewsMTSC - Target-dependent sentiment classification in news articles reporting on political events. Includes a high-quality data set of over 11k sentences and a state-of-the-art classification model.
LargeBatchCTR - Large batch training of CTR models based on DeepCTR with CowClip.