- PyTorch: 2,600 contributors
- Fairseq: 1.1k contributors
- TensorFlow: 238k contributors
- fastText: 50 contributors
- deepmind-research (DeepMind): 63 contributors. Implementations and illustrative code to accompany DeepMind publications.
- Vue: 356 contributors, 202k stars
- React (also Facebook): 1.6k contributors, 201k stars
- Bootstrap: 1.3k contributors, 162k stars. The most popular HTML, CSS, and JavaScript framework for developing responsive, mobile-first projects on the web.
- ohmyzsh: 2.1k contributors, 155k stars. 🙃 A delightful community-driven framework for managing your zsh configuration. Includes 300+ optional plugins (rails, git, macOS, hub, docker, homebrew, node, php, python, etc.), 140+ themes to spice up your morning, and an auto-update tool that makes it easy to keep up with the latest updates from the community.
- Python: 940 contributors, 152k stars
- Flutter (Google): 1.1k contributors, 150k stars
- Linux: 18k contributors, 146k stars
Yes, most models these days, except the exceptionally large ones, can be trained on a laptop. It helps if your laptop has an Nvidia CUDA GPU, but even if it doesn't, you can rent an AWS 4-core/16GB GPU instance for roughly 50 cents an hour. Twenty-four hours of training time would be quite a lot for most models, unless you're trying to train something like Facebook's any-to-any language translation model; but the huge models are typically not the most interesting ones, and you can get very good results and interesting models with substantially smaller sets of data. OPUS-MT models only translate a single language pair each, but they're about 300MB per model, their quality rivals Facebook's models, and they're substantially faster. I don't have as many examples from the music space, as it's still a fairly underexplored area, but Google has released Magenta, a set of pretrained TensorFlow music models (actually a group of three or four models).
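To give a sense of how lightweight those ~300MB OPUS-MT models are in practice, here's a minimal sketch of running one locally with the Hugging Face transformers library (my assumption for the tooling; Helsinki-NLP/opus-mt-en-de is just one of the published checkpoints, and other language pairs follow the same naming scheme):

```python
# Minimal sketch: local inference with a ~300MB OPUS-MT checkpoint.
# Assumes: pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

# One published English->German pair; others follow the
# Helsinki-NLP/opus-mt-<src>-<tgt> naming scheme.
model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)  # CPU inference is fine

batch = tokenizer(["This runs comfortably on a laptop."], return_tensors="pt")
output = model.generate(**batch)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

No GPU is needed for inference at this size, and a single modest GPU is generally enough for fine-tuning a model this small.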