| | torchrec | FastFold |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 1,732 | 506 |
| Growth | 1.5% | - |
| Activity | 9.8 | 0.0 |
| Latest commit | 6 days ago | 10 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
torchrec
Pytorch Introduces ‘TorchRec’: A Python-based PyTorch Domain Library For Recommendation Systems (RecSys)
TorchRec is a new PyTorch domain library for recommendation systems. It provides standard sparsity and parallelism primitives, allowing researchers to build and deploy state-of-the-art personalization models.
FastFold
Impressed With AlphaFold? Check Out This Protein Structure Prediction Model (FastFold) That Reduces AlphaFold's Training Time From 11 Days To 67 Hours
Code for the paper (https://arxiv.org/abs/2203.00854): https://github.com/hpcaitech/FastFold
What are some alternatives?
Federated-Recommendation-Neural-Collaborative-Filtering - Federated Neural Collaborative Filtering (FedNCF). Neural Collaborative Filtering uses the flexibility, complexity, and non-linearity of neural networks to build a recommender system; this project aims to federate that recommender.
torchsynth - A GPU-optional modular synthesizer in pytorch, 16200x faster than realtime, for audio ML researchers.
federeco - implementation of federated neural collaborative filtering algorithm
openfold - Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
warp-drive - Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)
NVTabular - NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems.
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
LLMRec - [WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
autocvd - Tool to automatically set CUDA_VISIBLE_DEVICES based on GPU utilization. Usable from command line and code.
NewsMTSC - Target-dependent sentiment classification in news articles reporting on political events. Includes a high-quality data set of over 11k sentences and a state-of-the-art classification model.
LargeBatchCTR - Large batch training of CTR models based on DeepCTR with CowClip.