libffm vs DeepLearningExamples
| | libffm | DeepLearningExamples |
|---|---|---|
| Mentions | - | 7 |
| Stars | 1,594 | 12,607 |
| Growth | - | 2.4% |
| Activity | 0.0 | 6.1 |
| Latest commit | about 3 years ago | 24 days ago |
| Language | C++ | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 puts a project among the top 10% of the most actively developed projects we track.
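The exact weighting behind the activity number isn't published here; below is a minimal sketch of one plausible recency-weighted score, assuming an exponential decay with a made-up 90-day half-life (both the formula and the parameter are illustrative assumptions, not the site's actual metric):

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=90.0):
    # Each commit contributes 0.5 ** (age_days / half_life_days), so a
    # commit's weight halves every `half_life_days`. The formula and the
    # 90-day half-life are assumptions for illustration only.
    now = datetime.now(timezone.utc)
    return sum(
        0.5 ** (((now - d).total_seconds() / 86400.0) / half_life_days)
        for d in commit_dates
    )

now = datetime.now(timezone.utc)
fresh = [now - timedelta(days=k) for k in (1, 5, 20)]       # recent commits
stale = [now - timedelta(days=k) for k in (400, 500, 900)]  # old commits
print(activity_score(fresh) > activity_score(stale))        # True
```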
libffm
We haven't tracked posts mentioning libffm yet.
Tracking mentions began in Dec 2020.
DeepLearningExamples
- A small example from Tacotron2 trained on Brandon "Atrioc" Ewing
GitHub repo used: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
- Retraining Single Shot MultiBox Detector model on a custom data set?
- Nvidia Scientists Take Top Spots in 2021 Brain Tumor Segmentation Challenge
Disclosure: I used to work on Google Cloud.
I dunno, their A100 results took about 20-30 minutes on 8x A100s [1]. 8x A100s run about $24/hr on GCP at on-demand rates.
The scaling efficiency was okay but not linear, so if you were more cost-constrained you might go with 1x A100 at $3/hr and have ~2.5 hr training times.
Getting that performance out of a GPU is more challenging than getting access to the GPUs. All the major cloud providers offer them.
(Nit: GCP deployed the 40 GiB cards rather than the later 80 GiB parts, but it often doesn't matter.)
[1] https://github.com/NVIDIA/DeepLearningExamples/tree/master/P...
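A quick back-of-the-envelope check of the figures quoted above (prices and times are the commenter's approximations, not measurements):

```python
# Cost per training run using the comment's rough numbers:
# 8x A100 at ~$24/hr total for 20-30 min, vs 1x A100 at ~$3/hr for ~2.5 h.
def run_cost(gpus, price_per_gpu_hour, train_hours):
    return gpus * price_per_gpu_hour * train_hours

cost_8x = run_cost(8, 3.0, 25 / 60)  # ~$10 per run at the 25-min midpoint
cost_1x = run_cost(1, 3.0, 2.5)      # ~$7.50 per run
speedup = 2.5 / (25 / 60)            # ~6x from 8 GPUs => ~75% scaling efficiency
print(f"8x: ${cost_8x:.2f}, 1x: ${cost_1x:.2f}, speedup ~{speedup:.1f}x")
```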
- Tacotron2 CPU Inferencing
Entrypoint.py file in the tacotron2 folder: source code
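For reference, the repo also publishes torch.hub entry points; here is a minimal CPU-inference sketch assuming those entry points and the `cpu_run` flag on the bundled utils (check the PyTorch Hub page if the signatures have changed):

```python
import torch

# Load the published checkpoints via torch.hub (entry-point names as
# documented on PyTorch Hub; fp32 math, since fp16 inference needs a GPU).
tacotron2 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                           'nvidia_tacotron2', model_math='fp32')
waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                          'nvidia_waveglow', model_math='fp32')
waveglow = waveglow.remove_weightnorm(waveglow)
tacotron2 = tacotron2.to('cpu').eval()
waveglow = waveglow.to('cpu').eval()

utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')
# cpu_run=True is an assumption; if unavailable, move the tensors to CPU manually.
sequences, lengths = utils.prepare_input_sequence(["Hello world."], cpu_run=True)

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # text -> mel spectrogram
    audio = waveglow.infer(mel)                      # mel -> waveform
```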
- Skyrim Voice Synthesis Mega Tutorial
For those asking about differences from xVASynth: the models trained with xVASynth are the FastPitch models (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch). As a quick explainer:
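Roughly, Tacotron2 generates mel frames one at a time through autoregressive attention, while FastPitch predicts an explicit duration (and pitch) per input token and expands the encoder states in parallel. A minimal sketch of that length-regulation step (illustrative names, not the repo's API):

```python
import torch

def regulate_length(encoder_out, durations):
    # Repeat each token's encoder state `durations[i]` times to go from
    # token-level to frame-level states; this is the step that lets
    # FastPitch decode all output frames in parallel.
    return torch.repeat_interleave(encoder_out, durations, dim=0)

enc = torch.randn(4, 8)           # 4 input tokens, 8-dim encoder states
dur = torch.tensor([2, 3, 1, 4])  # predicted frames per token
frames = regulate_length(enc, dur)
print(frames.shape)               # torch.Size([10, 8])
```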
- Modders develop AI-based app for creating new voice lines using neural speech synthesis.
There's a separate tool set from Nvidia on GitHub that the creator used to train the models. I'm not going to pretend I understand it, but you can find it here.
- [R] Data Movement Is All You Need: A Case Study on Optimizing Transformers
Nvidia's implementation of BERT has a long way to go (I don't know about the implementation of input-independent gradient computations in their backprop), but there are scaled benchmarks on DGX A100s: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT
What are some alternatives?
fastFM - fastFM: A Library for Factorization Machines
lidar-harmonization - Code release for Intensity Harmonization for Airborne LiDAR
implicit - Fast Python Collaborative Filtering for Implicit Feedback Datasets
alpaca_eval - An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
Megatron-LM - Ongoing research training transformer models at scale
spotlight - Deep recommender models using PyTorch.
ontogpt - LLM-based ontological extraction tools, including SPIRES
TensorRec - A TensorFlow recommendation algorithm and framework in Python.
llm-search - Querying local documents, powered by LLM
deep_navigation - Deep Learning based wall/corridor following P3AT robot (ROS, Tensorflow 2.0)
notebooks - Notebooks illustrating the use of Norse, a library for deep-learning with spiking neural networks.