Bolt vs ArrayFire

Compare Bolt and ArrayFire and see how they differ.

Bolt

Bolt is a C++ template library optimized for GPUs. Bolt provides high-performance library implementations for common algorithms such as scan, reduce, transform, and sort. (by HSA-Libraries)
             Bolt                                      ArrayFire
Mentions     3                                         6
Stars        370                                       4,413
Growth       -                                         0.7%
Activity     0.0                                       7.1
Last commit  about 8 years ago                         29 days ago
Language     C++                                       C++
License      GNU General Public License v3.0 or later  BSD 3-clause "New" or "Revised" License
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

Bolt

Posts with mentions or reviews of Bolt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-17.
  • AMD's CDNA 3 Compute Architecture
    7 projects | news.ycombinator.com | 17 Dec 2023
    This is frankly starting to sound a lot like the ridiculous "blue bubbles" discourse.

    AMD's products have generally failed to gain traction because their implementations are half-assed, buggy, and incomplete (despite promising more features, these are often paper features or career-oriented development from now-departed developers). All of the same "Vendor B" stuff from OpenGL really applies to OpenCL as well.

    http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...

    AMD has left a trail of abandoned code and disappointed developers in its wake. These two repos are the same thing for AMD's ecosystem and NVIDIA's ecosystem; how do you think the support story compares?

    https://github.com/HSA-Libraries/Bolt

    https://github.com/NVIDIA/thrust

    In the last few years they have (once again) dumped everything and started over. ROCm supported essentially no consumer cards and rotated support rapidly even in the CDNA world. It offers no binary-compatibility story; it has to be compiled for specific chips within a generation, not even just "RDNA3" but "Navi 31" specifically. Etc etc. And nobody with consumer cards could access it until about six months ago, and that is still only on Windows; consumer cards are not even supported on Linux (!).

    https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...

    This is on top of the actual problems that still remain, as geohot found out. Installing ROCm is a several-hour process that will involve debugging the platform just to get it to install, and then you will probably find that the actual code demos segfault when you run them.

    AMD's development processes are not really open; actual development is siloed inside the company, with quarterly code dumps to the outside. The current code is not guaranteed to run on the actual driver itself; they do not test it even in the supported configurations.

    It hasn't gained traction because it's a low-quality product, and hardly anyone can even access and run it anyway.

  • High quality OpenCL compute libraries
    1 project | /r/OpenCL | 16 Oct 2022
    What I'm saying is that there are options that make what you're looking for more likely to exist. I haven't surveyed the existing libs in depth, but without templates and single-source integration you're unlikely to find such libraries; that's really why OpenCL doesn't have those things. However, I did name-drop the AMD-targeted OpenCL Thrust equivalent - https://github.com/HSA-Libraries/Bolt - though I don't know if you can really achieve OpenCL multi-accelerator compatibility with it.
  • Nvidia in the Valley
    5 projects | news.ycombinator.com | 26 Sep 2022
    OpenCL had a bit of a "second-mover curse" where instead of trying to solve one problem (GPGPU acceleration) it tried to solve everything (a generalized framework for heterogeneous dispatch) and it just kinda sucks to actually use. It's not that it's slower or faster, in principle it should be the same speed when dispatched to the hardware (+/- any C/C++ optimization gotchas of course), but it just requires an obscene amount of boilerplate to "draw the first triangle" (or, launch the first kernel), much like Vulkan.

    HIP was supposed to rectify this, but now you're buying into AMD's custom language and its limitations... and there are limitations, things that CUDA can do that HIP can't (texture unit access was an early one - and texture units aren't just for texturing, they're for coalescing all kinds of 2d/3d/higher-dimensional memory access). And AMD has a history of abandoning these projects after a couple years and leaving them behind and unsupported... like their Thrust framework counterpart, Bolt, which hasn't been updated in 8 years now.

    https://github.com/HSA-Libraries/Bolt

    The old bit about "Vendor B" leaving behind a "trail of projects designed to pad resumes and show progress to middle managers" still rings absolutely true with AMD. AMD has a big uphill climb in general to shake this reputation of being completely unserious about their software... and I'm not even talking about drivers here.

    http://richg42.blogspot.com/2014/05/the-truth-on-opengl-driv...

ArrayFire

Posts with mentions or reviews of ArrayFire. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-27.
  • Learn WebGPU
    9 projects | news.ycombinator.com | 27 Apr 2023
    Loads of people have stated why easy GPU interfaces are difficult to create, but we solve many difficult things all the time.

    Ultimately I think CPUs are just satisfactory for the vast vast majority of workloads. Servers rarely come with any GPUs to speak of. The ecosystem around GPUs is unattractive. CPUs have SIMD instructions that can help. There are so many reasons not to use GPUs. By the time anyone seriously considers using GPUs they're, in my imagination, typically seriously starved for performance, and looking to control as much of the execution details as possible. GPU programmers don't want an automagic solution.

    So I think the demand for easy GPU interfaces is just very weak, and therefore no effort has taken off. The amount of work needed to make it as easy to use as CPUs is massive, and the only reason anyone would even attempt to take this on is to lock you in to expensive hardware (see CUDA).

    For a practical suggestion, have you taken a look at https://arrayfire.com/ ? It can run on both CUDA and OpenCL, and it has C++, Rust and Python bindings.

  • seeking C++ library for neural net inference, with cross platform GPU support
    1 project | /r/Cplusplus | 12 Sep 2022
    What about ArrayFire? https://github.com/arrayfire/arrayfire
  • [D] Deep Learning Framework for C++.
    7 projects | /r/MachineLearning | 12 Jun 2022
    Low-overhead — not our goal, but Flashlight is on par with or outperforming most other ML/DL frameworks with its ArrayFire reference tensor implementation, especially on nonstandard setups where framework overhead matters
  • [D] Neural Networks using a generic GPU framework
    2 projects | /r/MachineLearning | 4 Jan 2022
    Looking for frameworks with Julia + OpenCL, I found ArrayFire. It seems quite good; bonus points for the Rust bindings. I will keep looking for more; Julia completely fell off my radar.
  • Windows 11 will block the tweaks that make it easier to use a browser other than Edge
    1 project | /r/france | 25 Nov 2021
  • Arrayfire progressive performance decline?
    1 project | /r/rust | 9 Jun 2021
    Your problem may be the lazy evaluation; see this issue: https://github.com/arrayfire/arrayfire/issues/1709

What are some alternatives?

When comparing Bolt and ArrayFire you can also consider the following projects:

Boost.Compute - A C++ GPU Computing Library for OpenCL

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11

VexCL - VexCL is a C++ vector expression template library for OpenCL/CUDA/OpenMP

Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

junction - Concurrent data structures in C++

CUB - THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE.

HPX - The C++ Standard Library for Parallelism and Concurrency