Opus-MT vs deepmind-research
| | Opus-MT | deepmind-research |
|---|---|---|
| Mentions | 3 | 29 |
| Stars | 527 | 12,802 |
| Growth | 8.7% | 2.2% |
| Activity | 4.8 | 0.6 |
| Latest commit | 3 days ago | 7 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Opus-MT
- “sync, corrected by elderman” issue in ML translation datasets spread on the internet
  Mentioned on the GitHub repo of a translation model: https://github.com/Helsinki-NLP/Opus-MT/issues/62
  I'm curious to see if anyone else has had interesting encounters with this.
- How worried are you about AI taking over music?
  Yes, most models these days, except the exceptionally large ones, can be trained on a laptop. It helps if your laptop has an Nvidia CUDA GPU, but even if it doesn't, you can rent an AWS 4-core/16 GB GPU instance for 0.5 cents an hour. 24 hours of training time would be quite a lot for most models, unless you're trying to train an FB-style any-to-any language model; the huge models typically aren't the most interesting ones anyway, and you can get very good results and interesting models with substantially smaller datasets. Opus-MT models translate only one language to one language, but they're about 300 MB per model, their quality rivals FB's models, and they're substantially faster. I don't have as many examples from the music space, as it's still a fairly underexplored area, but Google has released Magenta, a pretrained TensorFlow music model (actually a group of 3-4 models).
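  As a concrete illustration of how lightweight these Opus-MT models are to run, here's a minimal sketch using the Hugging Face transformers MarianMT wrapper (the model name and example sentence are assumptions for illustration, not from the original comment):

  ```python
  # Minimal sketch: running a Helsinki-NLP Opus-MT model locally via
  # Hugging Face transformers. Assumes `pip install transformers sentencepiece`
  # and uses the English->German model as an arbitrary example.
  from transformers import MarianMTModel, MarianTokenizer

  model_name = "Helsinki-NLP/opus-mt-en-de"  # one direction: English -> German
  tokenizer = MarianTokenizer.from_pretrained(model_name)
  model = MarianMTModel.from_pretrained(model_name)

  batch = tokenizer(["These models run fine on a laptop."],
                    return_tensors="pt", padding=True)
  translated = model.generate(**batch)
  print(tokenizer.batch_decode(translated, skip_special_tokens=True))
  ```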
- Helsinki-NLP/Opus-MT: Open neural machine translation models and web services
deepmind-research
- This A.I. Subculture's Motto: Go, Go, Go. The eccentric pro-tech movement known as "Effective Accelerationism" wants to unshackle powerful A.I., and party along the way.
- How worried are you about AI taking over music?
  Deepmind
- Are there Notebooks of AlphaFold 1?
  Found some here and here.
- Trying to port this non-standard Tensorflow model to Pytorch and not sure if I'm missing anything
  I am trying to make a physics-simulation model based on DeepMind's research, with its source code found here: https://github.com/deepmind/deepmind-research/tree/master/learning_to_simulate. The thing that mainly confuses me is how to properly implement the embedding found at https://github.com/deepmind/deepmind-research/blob/master/learning_to_simulate/learned_simulator.py on lines 78 and 152.
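  For reference, if the TensorFlow side is doing a standard embedding lookup over particle-type IDs (an assumption about the linked code, not verified here), the PyTorch equivalent is `torch.nn.Embedding`. A minimal sketch with made-up dimensions:

  ```python
  import torch
  import torch.nn as nn

  # Hypothetical sketch: a TF lookup such as
  #   features = tf.nn.embedding_lookup(embedding_matrix, particle_types)
  # maps to nn.Embedding, which stores the matrix and performs the lookup.
  NUM_PARTICLE_TYPES = 9  # assumption: number of discrete particle types
  EMBEDDING_DIM = 16      # assumption: size of each type embedding

  particle_type_embedding = nn.Embedding(NUM_PARTICLE_TYPES, EMBEDDING_DIM)

  particle_types = torch.tensor([0, 3, 3, 1])               # one ID per particle
  type_features = particle_type_embedding(particle_types)   # shape: (4, 16)
  print(type_features.shape)
  ```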
- [D] Is it possible to use machine learning to create 3D images for the purpose of 3D printing?
  Yes. There's a fair bit of research into using ML to generate 3D models. Early work like Neural Radiance Fields (NeRF) produced voxel-style output, which could be used for 3D printing, but it would be low resolution, like blowing up a tiny raster image versus an SVG vector file. More recent research can generate polygonal models from a video of a real object, and polygonal models are much better suited to 3D printing.
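  As a sketch of the voxel-to-printable-mesh step the comment alludes to (the library choices here are assumptions, not from the comment): scikit-image's marching cubes can turn a voxel occupancy grid into a polygonal surface, and trimesh can export it as an STL for a slicer.

  ```python
  # Sketch: convert a voxel occupancy grid into a printable STL mesh.
  # Assumes `pip install scikit-image trimesh numpy`.
  import numpy as np
  from skimage import measure
  import trimesh

  # Toy voxel grid: a solid sphere of radius 12 inside a 32^3 volume.
  n = 32
  x, y, z = np.mgrid[:n, :n, :n]
  voxels = ((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2
            < 12 ** 2).astype(float)

  # Marching cubes extracts a triangle mesh at the 0.5 iso-surface.
  verts, faces, normals, values = measure.marching_cubes(voxels, level=0.5)

  mesh = trimesh.Trimesh(vertices=verts, faces=faces)
  mesh.export("sphere.stl")  # ready for a slicer
  ```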
- DeepMind Research – code to accompany DeepMind publications
- Skilful precipitation nowcasting using deep generative models of radar - Dr. Piotr Mirowski - Zoom
- [R] Skilful precipitation nowcasting using deep generative models of radar - link to a free online lecture by the author in comments (DeepMind research published in Nature)
  Skilful precipitation nowcasting using deep generative models of radar:
  https://www.nature.com/articles/s41586-021-03854-z
  https://deepmind.com/blog/article/nowcasting
  https://github.com/deepmind/deepmind-research/tree/master/nowcasting
- DeepMind Open-Sources DM21: A Deep Learning Model for Quantum Chemistry
  GitHub: https://github.com/deepmind/deepmind-research/tree/master/density_functional_approximation_dm21
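  For context, the DM21 release is packaged as a learned exchange-correlation functional that plugs into PySCF. A rough usage sketch following the pattern in the repo's README; the names `NeuralNumInt` and `Functional.DM21` are recalled from that README and should be checked against the repo:

  ```python
  # Rough sketch of using DM21 as a drop-in XC functional in PySCF
  # (verify exact names against the repo's README before relying on this).
  import density_functional_approximation_dm21 as dm21
  from pyscf import gto, dft

  # A hydrogen molecule as a toy system.
  mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="cc-pvdz")

  mf = dft.RKS(mol)
  mf._numint = dm21.NeuralNumInt(dm21.Functional.DM21)  # swap in the learned functional
  energy = mf.kernel()
  print(energy)
  ```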
- [P] Choosing a self-supervised learning framework that's easy to use
  BYOL - again, it seems it's not optimized for running on multiple GPUs.
What are some alternatives?
OPUS-MT-train - Training open neural machine translation models
jaxline
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
dm-haiku - JAX-based neural network library
fastText - Library for fast text representation and classification.
RETRO-pytorch - Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
Neural-Machine-Translated-communication-system - Trains a single large neural network to predict the correct translation of a given sentence.
flax - Flax is a neural network library for JAX that is designed for flexibility.
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
alphafold_pytorch - An implementation of DeepMind's AlphaFold based on PyTorch, for research
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
swav - PyTorch implementation of SwAV https://arxiv.org/abs/2006.09882