deepdream vs mxnet

| | deepdream | mxnet |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 13,211 | 20,644 |
| Growth | - | - |
| Activity | 0.0 | 4.1 |
| Latest Commit | over 1 year ago | 6 months ago |
| Language | - | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deepdream
- Stable Audio: Fast Timing-Conditioned Latent Audio Diffusion
- List of AI-Models
- What Is the Simulacrum Subreddit?
Neural Style images are created with Tensorflow 2. Deep Dream images are created with Caffe. Wombo images are created with the Wombo Art app.
- I have no experience in coding, but is there an easy way for me to create generated monsters by randomly picking art components I've made and putting them together?
Maybe https://github.com/google/deepdream?
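DeepDream hallucinates patterns in existing images rather than assembling parts, so it may be overkill here; the random-combination idea itself needs no deep learning at all. A minimal sketch of the picking step (the part categories and file names below are invented for illustration):

```python
import random

# Hypothetical art components grouped by body part (names are illustrative).
PARTS = {
    "head": ["dragon_head.png", "cyclops_head.png", "slime_head.png"],
    "body": ["furry_body.png", "scaled_body.png", "robot_body.png"],
    "legs": ["spider_legs.png", "hooves.png", "tentacles.png"],
}

def random_monster(rng=random):
    """Pick one component per category at random.

    Compositing the chosen image files into one sprite is left out;
    with Pillow that would be a chain of Image.alpha_composite calls.
    """
    return {part: rng.choice(options) for part, options in PARTS.items()}

print(random_monster())
```

Each run prints a different head/body/legs combination; layering the chosen PNG files on top of each other (e.g. with Pillow) would then produce the finished monster.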
- I have read Neuromancer to an AI and this is how she imagines it!
- Trippy Deepdream
That wasn't an app; it's this: https://github.com/google/deepdream
mxnet
- List of AI-Models
- Introduction to deep learning hardware in the cloud
Build – Choose a machine learning framework (such as TensorFlow, PyTorch, Apache MXNet, etc.)
- just released my Clojure AI book
Clojure and Python also have bindings to the Apache MXNet library. Is there a reason why you didn't use them in some of your projects?
- Can Apple's M1 help you train models faster and cheaper than Nvidia's V100?
> But you still lose something, e.g. if you use half precision on V100 you get virtually double speed, if you do on a 1080 / 2080 you get... nothing because it's not supported.
That's not true. FP16 is supported and can be fast on 2080, although some frameworks fail to see the speed-up. I filed a bug report about this a year ago: https://github.com/apache/incubator-mxnet/issues/17665
What consumer GPUs lack is ECC and fast FP64.
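The storage half of that claim is easy to demonstrate on any machine: FP16 values take half the memory of FP32, independent of the GPU, while the throughput gain depends on hardware support for fast FP16 math. A small NumPy illustration (NumPy rather than MXNet, purely to keep the example dependency-light):

```python
import numpy as np

# The same one-million-element tensor in single and half precision.
fp32 = np.zeros(1_000_000, dtype=np.float32)
fp16 = fp32.astype(np.float16)

print(fp32.nbytes)  # 4000000 bytes
print(fp16.nbytes)  # 2000000 bytes -- half the memory and half the
                    # bandwidth per element; the extra arithmetic
                    # speed-up only appears on GPUs with fast FP16 units
```

Half precision trades range and accuracy for that saving (float16 has a 10-bit mantissa), which is why training frameworks usually use it in a mixed-precision setup rather than everywhere.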
What are some alternatives?
big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
Caffe - Caffe: a fast open framework for deep learning.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the "PaddlePaddle" core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
Caffe2
Keras - Deep Learning for humans
mlpack - mlpack: a fast, header-only C++ machine learning library
markovify - A simple, extensible Markov chain generator.
Porcupine - On-device wake word detection powered by deep learning
scikit-learn - scikit-learn: machine learning in Python
Theano - Theano was a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It is being continued as PyTensor: www.github.com/pymc-devs/pytensor