hashable-accelerate
accelerate
| | hashable-accelerate | accelerate |
|---|---|---|
| Mentions | - | 9 |
| Stars | 0 | 886 |
| Growth | - | 0.5% |
| Activity | 0.0 | 5.3 |
| Latest commit | over 3 years ago | 18 days ago |
| Language | Haskell | Haskell |
| License | BSD 3-clause "New" or "Revised" License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hashable-accelerate
We haven't tracked posts mentioning hashable-accelerate yet.
Tracking mentions began in Dec 2020.
accelerate
- Should I use newer ghc?
Someone has opened a PR for accelerate here: https://github.com/AccelerateHS/accelerate/pull/525 (sadly, it seems not actively maintained at the moment, but that can always change if people care enough). I agree that for an executable you should freeze your dependencies and compiler version, and using 8.10 is fine, although there are tons of improvements in 9.2+.
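A minimal sketch of that pinning workflow, assuming a cabal-based project (the GHC version shown is just the one mentioned above):

```
-- cabal.project: pin the compiler so every build uses the same GHC
with-compiler: ghc-8.10.7
```

Running `cabal freeze` then records the exact version of every dependency in `cabal.project.freeze`.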
- Haskell deep learning tutorials [Blog]
Backprop is a neat library. However, I guess its use case is when you don't actually want to go for anything standard like Torch or TF (perhaps for research?). For instance, if I were to use something like Accelerate for GPU acceleration, or some other computation-oriented library, then I would mix it with Backprop. I have previously benefited from Backprop in a ConvNet tutorial and I liked it.
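For context, a minimal sketch of backprop's core API; the toy function and numbers here are illustrative, not from the tutorial:

```haskell
import Numeric.Backprop

-- gradBP differentiates a function written against BVar's Num instance
main :: IO ()
main = print (gradBP (\x -> x ^ (2 :: Int) + 3 * x) (5 :: Double))
-- the gradient of x^2 + 3x at x = 5 is 2*5 + 3 = 13.0
```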
- I made a petition to get the accelerate project for Haskell some funding.
Wait, really? Here's a conversation I had with him: https://github.com/AccelerateHS/accelerate/discussions/528
- Who is researching array languages these days?
I know Accelerate is being developed at Utrecht University in the Netherlands. You can look at publications by Trevor McDonell to get a taste of what they are doing.
- Next Decade in Languages: User Code on the GPU
I’m personally a big fan of http://www.acceleratehs.org / https://github.com/AccelerateHS/accelerate-llvm
- Introduction to Doctests in Haskell
Looking for a few projects that make use of it, I found accelerate, hawk, polysemy and pretty-simple, so I'll be interested to poke around in their code and see how they have things set up.
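For anyone unfamiliar, a doctest is an example embedded in a Haddock comment that the doctest runner executes and checks against the expected output. A minimal sketch:

```haskell
-- | Add two numbers.
--
-- >>> add 2 3
-- 5
add :: Int -> Int -> Int
add x y = x + y
```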
- Monthly Hask Anything (March 2022)
There's accelerate for GPU computing and hmatrix for bindings to BLAS and LAPACK.
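To make the hmatrix side concrete, a minimal sketch of a BLAS-backed matrix-vector product; the example values are illustrative:

```haskell
import Numeric.LinearAlgebra

main :: IO ()
main = do
  let m = (2><2) [1, 2, 3, 4] :: Matrix Double
      v = vector [1, 1]
  print (m #> v)  -- BLAS-backed matrix-vector product: [3.0,7.0]
```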
- Idris2+WebGL, part #12: Linear algebra with linear types... not great
I'm toying with the idea of replacing vector values with vector generators, where e.g. v1 + v2 is not evaluated to a new vector but to a vector program. This is similar to the approach of Accelerate and TensorFlow. On the flip side, I don't think I could get rid of the overhead, and I expect much smaller computation loads than those libraries handle, so the overhead could be very significant. The added benefit of using vector generators is that a generator could not only be evaluated but also turned into a LaTeX formula.
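That deferred style is exactly how Accelerate works: operations over Acc arrays build a program that a backend compiles and runs later. A minimal sketch using the reference interpreter; the example values are illustrative:

```haskell
import Data.Array.Accelerate             as A
import Data.Array.Accelerate.Interpreter as I  -- reference backend

-- A.zipWith (+) builds an array program (an AST); nothing is computed yet
addV :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Vector Float)
addV = A.zipWith (+)

main :: IO ()
main = do
  let v1 = A.fromList (Z :. 3) [1, 2, 3]    :: Vector Float
      v2 = A.fromList (Z :. 3) [10, 20, 30] :: Vector Float
  -- 'run' compiles and evaluates the deferred program: [11.0,22.0,33.0]
  print (I.run (addV (A.use v1) (A.use v2)))
```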
What are some alternatives?
accelerate-kullback-liebler
dhall - Maintainable configuration files
FAI - Haskell Foreign Accelerate Interface
accelerate-bignum - Fixed-length large integer arithmetic for Accelerate
containers-accelerate
accelerate-cuda - DEPRECATED: Accelerate backend for NVIDIA GPUs
hyper-haskell-server - The strongly hyped Haskell interpreter.
accelerate-examples - Examples for the Accelerate language
accelerate-fft - FFT library for Haskell based on the embedded array language Accelerate
feldspar-compiler - This is the compiler for the Feldspar Language.
binaryen - DEPRECATED in favor of ghc wasm backend, see https://www.tweag.io/blog/2022-11-22-wasm-backend-merged-in-ghc