stu VS gpgpu-loadbalancerx

Compare stu vs gpgpu-loadbalancerx and see how they differ.

gpgpu-loadbalancerx

A simple load-balancing library for distributing GPGPU workloads between a GPU and a CPU, or across any number of devices in one or more computers. (by tugrul512bit)
                 stu                                    gpgpu-loadbalancerx
Mentions         1                                      4
Stars            37                                     1
Growth           -                                      -
Activity         6.7                                    2.6
Last commit      about 1 month ago                      about 2 years ago
Language         C++                                    C++
License          GNU General Public License v3.0 only   GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
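The site does not publish its exact activity formula, but the idea of weighting recent commits more heavily can be sketched as an exponential decay. Everything below is a hypothetical illustration: the 30-day half-life and the `activityScore` helper are assumptions, not the site's actual computation.

```cpp
#include <cmath>
#include <vector>

// Hypothetical recency-weighted activity score: each commit of age
// `age` days contributes 0.5^(age/30), i.e. a commit loses half its
// weight every 30 days (half-life assumed for illustration only).
double activityScore(const std::vector<double>& commitAgesDays)
{
    double score = 0.0;
    for (double age : commitAgesDays)
        score += std::pow(0.5, age / 30.0);
    return score;
}
```

Under such a scheme, a project with a burst of commits last week outranks one with the same number of commits spread over the past two years, which matches the behavior the description implies.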

stu

Posts with mentions or reviews of stu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-14.
  • C++ Show and Tell - Experiment
    12 projects | /r/cpp | 14 Feb 2022
    Stu – Build tool written in C++14, intended for large data science projects (rather than for compilation). Can be compared to Make, but with special features that are hard/impossible to recreate, e.g. output/plot.[languages.txt].eps will build all files output/plot.$lang.eps, for $lang taken from the file languages.txt. It all sounds very simple but has turned out to be extremely useful for generating the website http://konect.cc/ ; for years I had used and researched other tools, and none was really adequate.
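The bracket feature described in the post (one target per line of languages.txt) amounts to parameter substitution over a dynamically read list. A minimal C++ sketch of that expansion step, using a hypothetical `expandTargets` helper (this is not Stu code or its syntax, just the underlying idea):

```cpp
#include <string>
#include <vector>

// For each value (e.g. each line read from languages.txt), substitute it
// for the $lang parameter in the target pattern, yielding one concrete
// target per value -- the effect of output/plot.[languages.txt].eps.
std::vector<std::string> expandTargets(const std::string& pattern,
                                       const std::vector<std::string>& values)
{
    std::vector<std::string> out;
    const std::string param = "$lang";
    for (const auto& v : values) {
        std::string target = pattern;
        auto pos = target.find(param);
        if (pos != std::string::npos)
            target.replace(pos, param.size(), v);
        out.push_back(target);
    }
    return out;
}
```

For example, `expandTargets("output/plot.$lang.eps", {"en", "de"})` yields `output/plot.en.eps` and `output/plot.de.eps`; the point of Stu's feature is that the value list comes from a file that is itself a build product.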

gpgpu-loadbalancerx

Posts with mentions or reviews of gpgpu-loadbalancerx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-25.
  • vectorAdd.cu sample load-balanced on 3 GPUs
    1 project | /r/CUDA | 25 Feb 2022
```cpp
/**
 * Copyright 1993-2015 NVIDIA Corporation. All rights reserved.
 *
 * Please refer to the NVIDIA end user license agreement (EULA) associated
 * with this source code for terms and conditions that govern your use of
 * this software. Any use, reproduction, disclosure, or distribution of
 * this software and related documentation outside the terms of the EULA
 * is strictly prohibited.
 */

/**
 * Vector addition: C = A + B.
 *
 * This sample is a very basic sample that implements element by element
 * vector addition. It is the same as the sample illustrating Chapter 2
 * of the programming guide with some additions like error checking.
 */

// NOTE: all angle-bracketed text (header names, template arguments, kernel
// launch configurations) was stripped by the forum formatting; the usual
// values for this sample are restored below and marked where assumed.
#include <cuda_runtime.h> // For the CUDA runtime routines (prefixed with "cuda_")
#include <stdio.h>
#include <stdlib.h>

// for load balancing between 3 different GPUs
// https://github.com/tugrul512bit/gpgpu-loadbalancerx/blob/main/LoadBalancerX.h
#include "LoadBalancerX.h"

/**
 * CUDA Kernel Device code
 *
 * Computes the vector addition of A and B into C. The 3 vectors have the same
 * number of elements numElements.
 */
__global__ void vectorAdd(const float *A, const float *B, float *C, int numElements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < numElements)
    {
        C[i] = A[i] + B[i];
    }
}

#include <map>
#include <vector>
#include <iostream>

int main(void)
{
    int numElements = 15000000;
    int numElementsPerGrain = 500000;
    size_t size = numElements * sizeof(float);
    float *h_A = (float *)malloc(size);
    float *h_B = (float *)malloc(size);
    float *h_C = (float *)malloc(size);

    for (int i = 0; i < numElements; ++i)
    {
        h_A[i] = rand() / (float)RAND_MAX;
        h_B[i] = rand() / (float)RAND_MAX;
    }

    /*
     * default tutorial vecAdd logic
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);
    int threadsPerBlock = 256;
    int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);
    cudaGetLastError();
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);
    */

    /* load-balanced 3-GPU version setup */
    class GrainState
    {
    public:
        int offset;
        int range;
        // map template arguments were stripped; per-GPU device pointers
        // keyed by gpuId are assumed from the usage below
        std::map<int, float *> d_A;
        std::map<int, float *> d_B;
        std::map<int, float *> d_C;
        ~GrainState()
        {
            for (auto a : d_A) cudaFree(a.second);
            for (auto b : d_B) cudaFree(b.second);
            for (auto c : d_C) cudaFree(c.second);
        }
    };

    class DeviceState
    {
    public:
        int gpuId;
        int amIgpu;
    };

    // template arguments were stripped in the post; this order is assumed
    LoadBalanceLib::LoadBalancerX<GrainState, DeviceState> lb;
    lb.addDevice(LoadBalanceLib::ComputeDevice({0, 1})); // 1st cuda gpu in computer
    lb.addDevice(LoadBalanceLib::ComputeDevice({1, 1})); // 2nd cuda gpu in computer
    lb.addDevice(LoadBalanceLib::ComputeDevice({2, 1})); // 3rd cuda gpu in computer
    // lb.addDevice(LoadBalanceLib::ComputeDevice({3,0})); // CPU single core

    for (int i = 0; i < numElements; i += numElementsPerGrain)
    {
        lb.addWork(LoadBalanceLib::GrainOfWork<GrainState, DeviceState>(
            [&, i](DeviceState gpu, GrainState &grain)
            {
                if (gpu.amIgpu)
                {
                    cudaSetDevice(gpu.gpuId);
                    cudaMalloc((void **)&grain.d_A[gpu.gpuId], numElementsPerGrain * sizeof(float));
                    cudaMalloc((void **)&grain.d_B[gpu.gpuId], numElementsPerGrain * sizeof(float));
                    cudaMalloc((void **)&grain.d_C[gpu.gpuId], numElementsPerGrain * sizeof(float));
                }
            },
            [&, i](DeviceState gpu, GrainState &grain)
            {
                if (gpu.amIgpu)
                {
                    cudaSetDevice(gpu.gpuId);
                    cudaMemcpyAsync(grain.d_A[gpu.gpuId], h_A + i, numElementsPerGrain * sizeof(float), cudaMemcpyHostToDevice);
                    cudaMemcpyAsync(grain.d_B[gpu.gpuId], h_B + i, numElementsPerGrain * sizeof(float), cudaMemcpyHostToDevice);
                }
            },
            [&, i](DeviceState gpu, GrainState &grain)
            {
                if (gpu.amIgpu)
                {
                    int threadsPerBlock = 1000;
                    int blocksPerGrid = numElementsPerGrain / 1000;
                    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(grain.d_A[gpu.gpuId], grain.d_B[gpu.gpuId], grain.d_C[gpu.gpuId], numElements - i);
                }
                else
                {
                    // CPU fallback; the loop body was truncated in the post
                    // and is reconstructed as a plain element-wise add
                    for (int j = 0; j < numElementsPerGrain; j++)
                        h_C[i + j] = h_A[i + j] + h_B[i + j];
                }
            }
            /* ... the remaining callbacks (device-to-host copy and
               synchronization) were lost in the post's formatting ... */
        ));
    }

    size_t nanoseconds = 0;  // declaration lost in the post's formatting
    std::vector<float> de(3); // element type was stripped in the post
    for (int i = 0; i < 100; i++)
    {
        nanoseconds += lb.run();
    }
    for (auto v : de) std::cout << v << std::endl; // the rest of the listing was truncated
}
```
  • I created a load-balancer for multi-gpu projects.
    1 project | /r/gpgpu | 23 Feb 2022
  • C++ Show and Tell - Experiment
    12 projects | /r/cpp | 14 Feb 2022
    Here is Nvidia's vectorAdd example modified for 3-GPU load balancing.
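Stripped of the CUDA specifics, the pattern in the sample above is: cut the array into fixed-size grains, then hand each device a number of grains proportional to its measured performance. A minimal host-side sketch of that idea, assuming a simple proportional-share policy (this is an illustration, not the LoadBalancerX API):

```cpp
#include <cstddef>
#include <vector>

// One contiguous slice of the problem, as in the sample's GrainState.
struct Grain { int offset; int range; };

// Split numElements into grains of at most grainSize elements each.
std::vector<Grain> makeGrains(int numElements, int grainSize)
{
    std::vector<Grain> grains;
    for (int off = 0; off < numElements; off += grainSize) {
        int range = (off + grainSize <= numElements) ? grainSize
                                                     : numElements - off;
        grains.push_back({off, range});
    }
    return grains;
}

// Give each device a grain count proportional to its performance score
// (e.g. inverse of measured grain latency); rounding leftovers go to
// the fastest device.
std::vector<std::size_t> shareGrains(std::size_t numGrains,
                                     const std::vector<double>& perf)
{
    double total = 0.0;
    for (double p : perf) total += p;
    std::vector<std::size_t> counts(perf.size(), 0);
    std::size_t assigned = 0;
    for (std::size_t d = 0; d < perf.size(); ++d) {
        counts[d] = static_cast<std::size_t>(numGrains * perf[d] / total);
        assigned += counts[d];
    }
    std::size_t fastest = 0;
    for (std::size_t d = 1; d < perf.size(); ++d)
        if (perf[d] > perf[fastest]) fastest = d;
    counts[fastest] += numGrains - assigned;
    return counts;
}
```

With the sample's numbers (15,000,000 elements, 500,000 per grain) this yields 30 grains; a device measured twice as fast as its two peers would receive roughly half of them. Re-measuring after each `run()` is what lets such a balancer adapt over repeated iterations.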

What are some alternatives?

When comparing stu and gpgpu-loadbalancerx you can also consider the following projects:

libletlib - C++ framework for the impatient.

Blackjack_V1.02 - Extension of my old Blackjack game with Qt for C++

osmanip - A cross-platform library for output stream manipulation using ANSI escape sequences.

SHA256-Implementation - A program that implements the SHA256 algorithm and generates the binary+hexdigest of a string input.

TensorComprehensions - A domain specific language to express machine learning workloads.

ftl - Freestanding template library

dmpower - Interactive terminal D&D helper toolbox program for Dungeon Masters, players, and worldbuilders.