OpenNMT-py vs pytorch-tutorial
| | OpenNMT-py | pytorch-tutorial |
|---|---|---|
| Mentions | 6 | 3 |
| Stars | 6,574 | 29,128 |
| Growth | 1.1% | - |
| Activity | 8.7 | 0.0 |
| Latest commit | 7 days ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenNMT-py
- Making a custom Google Translate equivalent / web translation filter for my conlang?
  I already tried this with OpenNMT.
- Cutting edge language translation models
  fairseq and OpenNMT are very good starting points if you want to train your NMT model from scratch.
- How Telegram Messenger circumvents Google Translate's API
- WEBNLG challenge 2017 on Google Colab error
  It looks like this uses the version of OpenNMT implemented in Torch, which has been deprecated. You will be much better off using the PyTorch implementation of OpenNMT or the transformers library. In fact, I would recommend taking a look at the GEM benchmark, since it also uses the WebNLG dataset. Here is a tutorial to get started; you can change the dataset from CommonGen to WebNLG.
- Help with Neural Machine Translation
  Umm... open-nmt. This is a library that has been maintained since 2016 for NMT.
- OOP concepts for PyTorch
  However, you do not need much OOP when training models with PyTorch. Most of the time it is just inheriting from a class and overriding functions. You might need more advanced techniques if you were writing a framework on top of it, something like ONMT.
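  As a minimal sketch of that pattern (the class name and layer sizes here are hypothetical, not from any specific project): training code usually just subclasses `nn.Module` and overrides `__init__` and `forward`.

  ```python
  import torch
  import torch.nn as nn

  # Hypothetical tiny model: the only OOP involved is subclassing
  # nn.Module and overriding __init__ and forward.
  class TinyClassifier(nn.Module):
      def __init__(self, in_dim, hidden, out_dim):
          super().__init__()
          self.net = nn.Sequential(
              nn.Linear(in_dim, hidden),
              nn.ReLU(),
              nn.Linear(hidden, out_dim),
          )

      def forward(self, x):
          return self.net(x)

  model = TinyClassifier(4, 8, 2)
  logits = model(torch.randn(3, 4))  # batch of 3 samples, 4 features each
  ```

  Everything else (optimizers, loss functions, data loaders) is used through plain function and method calls, so no deeper OOP is required for everyday training.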
pytorch-tutorial
- PyTorch - What does contiguous() do?
  I was going through this example of an LSTM language model on GitHub (link). What it does in general is pretty clear to me. But I'm still struggling to understand what calling contiguous() does, which occurs several times in the code.
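  For readers with the same question, a short sketch of the underlying behavior: transposing a tensor swaps its strides without moving any data, so the result is no longer laid out contiguously in memory, and operations like `view()` refuse to work on it until `contiguous()` makes a row-major copy.

  ```python
  import torch

  x = torch.arange(6).view(2, 3)  # contiguous: row-major memory layout
  y = x.t()                       # transpose: same storage, strides swapped
  print(y.is_contiguous())        # False
  # y.view(6) would raise a RuntimeError, because view() requires
  # contiguous memory.
  z = y.contiguous()              # copies the data into a fresh row-major layout
  print(z.is_contiguous())        # True
  print(z.view(6))                # now view() works
  ```

  That is why the language-model code calls `contiguous()` right after transposes and before reshapes.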
- How to 'practice' PyTorch after finishing its basic tutorial?
  I tried to move straight to implementing papers and understanding other people's code, but failed miserably. I feel like there was too much of a gap between the basic tutorial and being able to implement ideas in code. Hence the question: is there any resource or way to practice PyTorch in general? I did find this and this, but I just wanted to hear what others have gone through to become good enough at PyTorch to build things from their own ideas.
- [P] Probabilistic Machine Learning: An Introduction, Kevin Murphy's 2021 e-textbook is out
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
mixture-of-experts - PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
InceptionTime - InceptionTime: Finding AlexNet for Time Series Classification
Transformer-Models-from-Scratch - implementing various transformer models for various tasks
Conv-TasNet - A PyTorch implementation of Conv-TasNet described in "TasNet: Surpassing Ideal Time-Frequency Masking for Speech Separation" with Permutation Invariant Training (PIT).
Opus-MT - Open neural machine translation models and web services
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
OpenNMT - Open Source Neural Machine Translation in Torch (deprecated)
BigGAN-PyTorch - The author's officially unofficial PyTorch BigGAN implementation.
LibreTranslate - Free and Open Source Machine Translation API. Self-hosted, offline capable and easy to setup.
bonito - A PyTorch Basecaller for Oxford Nanopore Reads