PyTorch 2,600 contributors
Fairseq 1.1k contributors
Tensorflow 238k contributors
fastText 50 contributors
DeepMind 63 contributors

Vue 356 contributors, 202k stars
React (also FB) 1.6k contributors, 201k stars
Bootstrap 1.3k contributors, 162k stars
ohmyzsh 2.1k contributors, 155k stars
Python 940 contributors, 152k stars
Flutter (Google) 1.1k contributors, 150k stars
Linux 18k contributors, 146k stars
Yes, most models these days, except the exceptionally large ones, can be trained on a laptop. It helps if your laptop has an Nvidia CUDA GPU, but even if it doesn't, you can rent a 4-core/16GB AWS GPU instance for about 50 cents an hour. Twenty-four hours of training time would be quite a lot for most models, unless you're trying to train something like FB's any-to-any translation model; but the huge models are typically not the most interesting ones, and you can get very good results and interesting models with substantially smaller datasets.

Opus MT models only handle one language pair each, but they're about 300MB per model, the quality rivals FB's models, and the speed is substantially faster; there's a quick sketch of running one below.

I don't have as many examples from the music space, since it's still a fairly underexplored area, but Google has released Magenta, a pretrained TensorFlow music model (actually a group of 3-4 models).
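To make the Opus MT point concrete, here's a minimal sketch of running one of those ~300MB single-pair models locally. I'm assuming the Hugging Face transformers library and the Helsinki-NLP opus-mt-en-de checkpoint (English to German) here; any other pair works the same way, and it runs fine on CPU if you don't have a CUDA GPU.

```python
# Minimal sketch: load a single-pair Opus MT model and translate one sentence.
# Assumes: pip install transformers torch sentencepiece
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # ~300MB checkpoint, English -> German (assumed pair)
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Use the GPU if one is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = tokenizer(["Small models can still produce very good translations."],
                  return_tensors="pt", padding=True).to(device)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```

Swapping model_name for a different Helsinki-NLP/opus-mt-<src>-<tgt> checkpoint is all it takes to change language pairs.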