TTS
:robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts) (by mozilla)
There's a demo server with a simple web UI where you can input text to be spoken, but setting it up locally isn't well suited to a non-developer:
https://github.com/mozilla/TTS/tree/master/TTS/server
https://github.com/mozilla/TTS/wiki/Build-instructions-for-s...
There's also a version in docker: https://github.com/synesthesiam/docker-mozillatts
And various Colabs too, which are fairly easy to get going with: https://github.com/mozilla/TTS/wiki/TTS-Notebooks-and-Tutori...
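If you do get a local server running (for example via the Docker image above), you can drive it over HTTP as well as through the web UI. A minimal sketch, assuming the server exposes a `/api/tts` endpoint on port 5002 — both are assumptions based on the linked server code, so check your setup:

```python
# Hedged sketch: building a request URL for a locally running TTS server.
# The /api/tts endpoint and port 5002 are assumptions -- adjust to match
# the server you actually deployed.
import urllib.parse

def tts_url(text, host="http://localhost:5002"):
    """Return the synthesis URL for the given text."""
    return host + "/api/tts?" + urllib.parse.urlencode({"text": text})

# Fetch the WAV bytes with e.g.:
#   urllib.request.urlopen(tts_url("Hello world")).read()
url = tts_url("Hello world")
print(url)
```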
-
The price of GPU inference can be brutal, but there's a lot you can do on the infra side to improve it:
- Spot instances
- Aggressive autoscaling
- Micro batching
Together these can reduce inference compute spend by huge amounts (90% is not uncommon). ML, especially anything involving realtime inference, is an area where effective platform engineering makes a ridiculous difference even in the earliest days.
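Micro batching in particular is easy to underestimate. The idea: instead of running one GPU forward pass per request, queue incoming requests briefly and flush them as a single batch. A minimal sketch with hypothetical names (this is not the Cortex API; the model is a stand-in):

```python
# Hedged micro-batching sketch: requests are queued and flushed as one
# batch when either the batch is full or a short timeout expires, so the
# model runs one large forward pass instead of many small ones.
import queue
import threading

MAX_BATCH = 8        # flush when this many requests are waiting...
MAX_WAIT_S = 0.01    # ...or after 10 ms, whichever comes first

def fake_model(batch):
    # stand-in for a real GPU inference call; doubles each input
    return [x * 2 for x in batch]

class MicroBatcher:
    def __init__(self, model):
        self.model = model
        self.requests = queue.Queue()

    def submit(self, x):
        """Enqueue one input; returns (done_event, result_holder)."""
        done, out = threading.Event(), []
        self.requests.put((x, done, out))
        return done, out

    def run_once(self):
        """Collect up to MAX_BATCH requests (waiting at most MAX_WAIT_S
        between arrivals), run them as one batch, deliver each result."""
        items = [self.requests.get()]  # block for the first request
        while len(items) < MAX_BATCH:
            try:
                items.append(self.requests.get(timeout=MAX_WAIT_S))
            except queue.Empty:
                break
        results = self.model([x for x, _, _ in items])
        for (_, done, out), result in zip(items, results):
            out.append(result)
            done.set()

batcher = MicroBatcher(fake_model)
threading.Thread(target=batcher.run_once, daemon=True).start()
done, out = batcher.submit(21)
done.wait()
print(out[0])  # -> 42
```

The timeout is the knob: longer waits mean bigger batches and better GPU utilization, at the cost of added latency per request.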
Source: I help maintain open source ML infra for GPU inference and think about compute spend way too much https://github.com/cortexlabs/cortex