accelerate-cuda
DEPRECATED: Accelerate backend for NVIDIA GPUs (by AccelerateHS)
accelerate
Embedded language for high-performance array computations (by AccelerateHS)
| | accelerate-cuda | accelerate |
|---|---|---|
| Mentions | - | 10 |
| Stars | 57 | 913 |
| Growth | - | 0.3% |
| Activity | 0.0 | 7.5 |
| Last commit | almost 8 years ago | 2 months ago |
| Language | Haskell | Haskell |
| License | BSD 3-clause "New" or "Revised" License | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
accelerate-cuda
Posts with mentions or reviews of accelerate-cuda. We have used some of these posts to build our list of alternatives and similar projects.
We haven't tracked posts mentioning accelerate-cuda yet.
Tracking mentions began in Dec 2020.
accelerate
Posts with mentions or reviews of accelerate. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-09-12.
- Why Haskell?
Well, what kind of values, and how many updates? You might have to call an external library to get decent performance, like you would use NumPy in Python. This might be of interest: https://www.acceleratehs.org/
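For a concrete sense of that NumPy-style workflow, here is a minimal sketch using Accelerate (assuming the accelerate package; the reference interpreter is used for simplicity, and a real backend such as accelerate-llvm-native would replace it for performance):

```haskell
import qualified Data.Array.Accelerate as A
-- For real performance, swap in a backend such as Data.Array.Accelerate.LLVM.Native.
import Data.Array.Accelerate.Interpreter (run)

-- Sum of squares over a vector; A.map and A.sum fuse into a single traversal.
sumOfSquares :: A.Vector Double -> Double
sumOfSquares xs = A.indexArray (run computation) A.Z
  where
    computation = A.sum (A.map (\x -> x * x) (A.use xs))

main :: IO ()
main = print (sumOfSquares (A.fromList (A.Z A.:. 5) [1 .. 5]))
```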
- Should I use newer ghc?
Someone has opened a PR for accelerate here: https://github.com/AccelerateHS/accelerate/pull/525 (sadly it seems not actively maintained at the moment, but that can always change if people care enough). I agree that for an executable you should freeze your dependencies and compiler version, and using 8.10 is fine, although there are tons of improvements in 9.2+.
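Freezing is straightforward with cabal; a minimal sketch (the compiler version here is illustrative):

```
-- cabal.project (sketch): pin the compiler so builds stay reproducible
packages: .
with-compiler: ghc-8.10.7
```

Running `cabal freeze` then writes a `cabal.project.freeze` file that pins every transitive dependency to an exact version.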
- Haskell deep learning tutorials [Blog]
Backprop is a neat library. However, I guess its use case is when you actually don't want to go for anything standard like Torch or TF (perhaps for research?). For instance, if I were to use something like Accelerate for GPU acceleration, or some other computation-oriented library, then I would mix it with Backprop. Previously, I benefited from Backprop in a ConvNet tutorial and liked it.
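To illustrate why Backprop composes well with ordinary numeric code, here is a minimal sketch of its core API (a sketch assuming the backprop package; the function f is made up for the example):

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Numeric.Backprop

-- f(x) = x^2 + 3x, written against BVar so it can be differentiated.
f :: Reifies s W => BVar s Double -> BVar s Double
f x = x * x + 3 * x

main :: IO ()
main = do
  print (evalBP f 5)  -- value of f at 5:        40.0
  print (gradBP f 5)  -- derivative 2x + 3 at 5: 13.0
```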
- I made a petition to get the accelerate project for Haskell some funding.
Wait, really? Here's a conversation I had with him: https://github.com/AccelerateHS/accelerate/discussions/528
- Who is researching array languages these days?
I know Accelerate is being developed at Utrecht University in the Netherlands. You can look at publications by Trevor McDonell to get a taste of what they are doing.
- Next Decade in Languages: User Code on the GPU
I’m personally a big fan of http://www.acceleratehs.org / https://github.com/AccelerateHS/accelerate-llvm
- Introduction to Doctests in Haskell
Looking for a few projects that make use of it, I found accelerate, hawk, polysemy and pretty-simple, so I'll be interested to poke around in their code and see how they have things set up.
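For readers new to doctests, a minimal sketch of the convention (the module and function are hypothetical): the `>>>` lines inside a Haddock comment are executed by the doctest runner and checked against the expected output beneath them.

```haskell
module Geometry where

-- | Dot product of two lists.
--
-- >>> dot [1,2,3] [4,5,6]
-- 32
dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)
```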
- Monthly Hask Anything (March 2022)
There's accelerate for GPU computing and hmatrix for bindings to BLAS and LAPACK.
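A small sketch of the hmatrix route (assuming the hmatrix package; the values are arbitrary):

```haskell
import Numeric.LinearAlgebra

main :: IO ()
main = do
  let a = (2><2) [4, 2, 2, 3] :: Matrix Double
      b = vector [1, 2]
  print (a #> b)   -- matrix-vector product, computed via BLAS
  print (a <\> b)  -- least-squares solve of a x = b, via LAPACK
```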
- Idris2+WebGL, part #12: Linear algebra with linear types... not great
I'm toying with the idea of replacing vector values with vector generators, where e.g. v1 + v2 is not evaluated to a new vector but to a vector program. This is similar to the approaches of Accelerate and TensorFlow. On the flip side, I don't think I could get rid of the overhead, and I expect much smaller computation loads than the aforementioned libraries, so the overhead could be very significant. The added benefit of using vector generators is that a generator could not only be evaluated but also be turned into a LaTeX formula.
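To make the "vector program" idea concrete, here is a toy sketch of such a deep embedding (in Haskell rather than Idris2, with all names hypothetical): the same expression can either be evaluated or rendered as LaTeX.

```haskell
-- A vector expression is built, not evaluated, by operations like Add.
data VecExp
  = Lit [Double]        -- a concrete vector
  | Var String          -- a named vector, e.g. "v_1"
  | Add VecExp VecExp   -- elementwise addition
  | Scale Double VecExp -- scalar multiplication

-- Interpret the program into an actual vector.
eval :: (String -> [Double]) -> VecExp -> [Double]
eval _   (Lit xs)    = xs
eval env (Var v)     = env v
eval env (Add a b)   = zipWith (+) (eval env a) (eval env b)
eval env (Scale k a) = map (k *) (eval env a)

-- Render the same program as a LaTeX formula instead.
toLatex :: VecExp -> String
toLatex (Lit xs)    = show xs
toLatex (Var v)     = v
toLatex (Add a b)   = toLatex a ++ " + " ++ toLatex b
toLatex (Scale k a) = show k ++ " \\cdot " ++ toLatex a

main :: IO ()
main = do
  let expr = Add (Var "v_1") (Scale 2 (Var "v_2"))
      env v = if v == "v_1" then [1, 0] else [0, 1]
  print (eval env expr)    -- [1.0,2.0]
  putStrLn (toLatex expr)  -- v_1 + 2.0 \cdot v_2
```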
What are some alternatives?
When comparing accelerate-cuda and accelerate, you can also consider the following projects:
accelerate-llvm - LLVM backend for Accelerate
dhall - Maintainable configuration files
accelerate-bignum - Fixed-length large integer arithmetic for Accelerate
accelerate-fft - FFT library for Haskell based on the embedded array language Accelerate