spack VS slurm

Compare spack vs slurm and see what their differences are.

spack

A flexible package manager that supports multiple versions, configurations, platforms, and compilers. (by spack)
              spack                slurm
Mentions      52                   6
Stars         3,949                2,333
Growth        2.3%                 4.4%
Activity      10.0                 10.0
Last commit   1 day ago            1 day ago
Language      Python               C
License       Apache-2.0 or MIT    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

spack

Posts with mentions or reviews of spack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
  • Autodafe: "freeing your project from the clammy grip of autotools."
    4 projects | news.ycombinator.com | 6 Apr 2024
    > Are we talking about the same autotools?

    Yes. Instead of figuring out how to do something particular with every single software package, I can do a --with-foo or --without-bar or --prefix=/opt/baz-1.2.3, and be fairly confident that it will work the way I want.

    Certainly with package managers or (FreeBSD) Ports a lot is taken care of behind the scenes, but the above would help the package/port maintainers as well. Lately I've been using Spack for special-needs compiles (a short sketch follows below); maintainer ease helps there too, but there are still cases where a 'fully manual' compile is done.

    > Suffice it to say, I prefer to work with handwritten makefiles.

    Having everyone 'roll their own' system would probably be worse, because any "mysterious failure" then has to be debugged specially for each project.

    Have you tried Spack?

    * https://spack.io

    * https://spack.readthedocs.io/en/latest/
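
    To make the comparison concrete, here is roughly what a "special-needs compile" looks like in Spack's spec syntax next to the hand-tuned configure invocation the commenter describes. The package name "baz" and the "foo"/"bar" variants are made-up placeholders, not taken from the post.

        # Hand-tuned autotools build:
        ./configure --prefix=/opt/baz-1.2.3 --with-foo --without-bar
        make && make install

        # Roughly the same intent as a Spack spec ("baz", "+foo", "~bar" are
        # hypothetical package/variant names used only for illustration):
        spack install baz@1.2.3 +foo ~bar
        spack find --paths baz    # show the prefix Spack installed into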

  • FreeBSD has a(nother) new C compiler: Intel oneAPI DPC++/C++
    2 projects | news.ycombinator.com | 7 Mar 2024
    Well, good luck with that, cause it's broken.

    Previous release miscompiled Python [1]

    Current release miscompiles bison [2]

    [1] https://github.com/spack/spack/issues/38724

    [2] https://github.com/spack/spack/issues/37172#issuecomment-181...

  • Essential Command Line Tools for Developers
    29 projects | dev.to | 15 Jan 2024
    gh is available via Homebrew, MacPorts, Conda, Spack, Webi, and as a…
  • The Curious Case of MD5
    4 projects | news.ycombinator.com | 3 Jan 2024
    > I can't count the number of times I've seen people say "md5 is fine for use case xyz" where in some counterintuitive way it wasn't fine.

    I can count many more times that people told me that md5 was "broken" for file verification when, in fact, it never has been.

    My main gripe with the article is that it portrays the entire legal profession as "backwards" and "deeply negligent" when they're not actually doing anything unsafe -- or even likely to be unsafe. And "tech" knows better. Much of tech, it would seem, has no idea about the use cases and why one might be safe or not. They just know something's "broken" -- so, clearly, we should update.

    > Just use a safe one, even if you think you "don't need it".

    Here's me switching 5,700 or so hashes from md5 to sha256 in 2019: https://github.com/spack/spack/pull/13185

    Did I need it? No. Am I "compliant"? Yes.

    Really, though, the main tangible benefit was that it saved me having to respond to questions and uninformed criticism from people unnecessarily worried about md5 checksums.

  • Spack Package Manager v0.21.0
    1 project | news.ycombinator.com | 12 Nov 2023
  • Show HN: FlakeHub – Discover and publish Nix flakes
    2 projects | news.ycombinator.com | 22 Aug 2023
  • Nixhub: Search Historical Versions of Nix Packages
    3 projects | news.ycombinator.com | 20 Jul 2023
    [1] https://github.com/spack/spack/blob/develop/var/spack/repos/...
  • Cython 3.0 Released
    4 projects | news.ycombinator.com | 18 Jul 2023
    In Spack [1] we can express all these constraints for the dependency solver (a rough command-line sketch follows below), and we also try to always re-cythonize sources. The latter is because bundled cythonized files are sometimes forward-incompatible with Python, so it's better to just regenerate them with an up-to-date Cython.

    [1] https://github.com/spack/spack/
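
    For readers unfamiliar with Spack, the constraint-passing described in that comment looks roughly like this on the command line; "py-somepkg" is a placeholder package name, and the version bounds are illustrative only.

        # Ask Spack's concretizer to resolve a package against specific
        # Cython and Python versions; "^" constrains a dependency.
        # "py-somepkg" is a made-up package name used for illustration.
        spack spec py-somepkg ^py-cython@3.0: ^python@3.11
        spack install py-somepkg ^py-cython@3.0: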

  • Linux server for physics simulations
    1 project | /r/linux | 7 Jul 2023
    You want to look at the tools used for HPC systems; these are generally very well tried and tested and can be set up for single-machine usage. Remote access - we use ssh, but web interfaces such as Open OnDemand exist - https://openondemand.org/. For managing jobs, Slurm is currently the most popular option - https://slurm.schedmd.com/documentation.html. For a module system (to load software and libraries per user), Spack is a great option - https://spack.io/. You might also want to consider containerisation; https://apptainer.org/ is a good option. (A minimal sketch of the Spack-plus-Slurm workflow follows below.)
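
    As a minimal sketch of that workflow on a single machine (the package name, binary, and input file below are placeholders chosen for illustration, not recommendations from the post):

        # Build and load software per user with Spack:
        spack install lammps
        spack load lammps

        # ...and let Slurm manage the actual run ("lmp -in in.lj" is just
        # an example workload; on a real cluster this line would usually
        # live in a batch script rather than --wrap):
        sbatch --ntasks=16 --time=02:00:00 --wrap "srun lmp -in in.lj"
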
  • Simplest way to get latest gcc for any platform ?
    3 projects | /r/cpp | 31 May 2023
    git clone https://github.com/spack/spack.git
    ./spack/bin/spack install gcc
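
    If that succeeds, the freshly built compiler still has to be registered with Spack before it is used for further builds; a typical follow-up, assuming the clone location above, looks something like this:

        # Make the `spack` command and `spack load` available in this shell:
        . ./spack/share/spack/setup-env.sh
        # Register the compiler that was just installed, then put it on PATH:
        spack compiler find "$(spack location -i gcc)"
        spack load gcc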

slurm

Posts with mentions or reviews of slurm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-13.
  • ntasks and submit.lua in Slurm
    1 project | /r/HPC | 14 Jul 2023
    I'm trying to have Slurm automatically switch partitions to a specific one via the job_submit.lua plugin whenever our users request strictly more than 8 CPUs. But trying to extract or calculate ahead of time how many CPUs will be allocated or requested isn't trivial (to me). Are there attributes in job_submit that could help out with this task? For example, I don't see any job->desc.ntasks attribute in https://github.com/SchedMD/slurm/blob/master/src/plugins/job_submit/lua/job_submit_lua.c. Any information or documentation on how to leverage job_submit.lua would be appreciated.
  • job scheduling for scientific computing on k8s?
    5 projects | /r/kubernetes | 13 May 2023
    Do you have a reason to use Kubernetes besides it being the $CURRENT tech? Why not stick with what you’re already familiar with (batch job managers) and use SLURM, a workload and resource manager, like many others in HPC? Do the researchers need to schedule against Nvidia GPU resources now or in the future? Nvidia themselves recommend SLURM. (A minimal batch-script sketch follows below.)
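
    For a concrete sense of what scheduling against GPU resources looks like with Slurm, a minimal batch script might read as follows; the partition name and the Python workload are site-specific placeholders.

        #!/bin/bash
        #SBATCH --job-name=train
        #SBATCH --partition=gpu          # site-specific partition name (placeholder)
        #SBATCH --gres=gpu:1             # request one GPU
        #SBATCH --cpus-per-task=8
        #SBATCH --time=04:00:00

        srun python train.py             # placeholder workload

    It would be submitted with sbatch and monitored with squeue -u $USER, both standard Slurm client commands.
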
  • What’s the path to working on supercomputers or quantum computing?
    1 project | /r/ECE | 19 Nov 2022
    Quantum computing and supercomputers are two different things. Quantum computers are currently an area of research; there isn't a version ready for use apart from some prototypes, and it will probably stay that way for a while. Also, quantum computing will most likely not be a completely new architecture that all of the chips we use will adopt, but an addition to current chipsets for some important but specialized tasks.

    Supercomputers, or HPC (high-performance clusters), are classic computers, just huge ones. They use derivatives of "off-the-shelf", but high-end, hardware. There is a lot of interesting work in designing such systems, and a lot of challenging problems in distributed systems theory, but they aren't a completely detached industry. Using them for work, as opposed to designing them, doesn't require an EECS-type degree: the guy who sits next to me in the office uses a supercomputer to predict protein folding; he is by training a doctor and now does computational microbiology.

    The applications for massive compute power (often "just brute-force the solution instead of spending years in the lab") are almost endless, but to use them it's not that important to understand the full details of how they are constructed; domain knowledge in the application area is much more important. Knowing how your cluster is structured, plus some familiarity with Slurm etc., will let you use the supercomputer just fine. Again, they aren't that different from regular computers, just that your workstation might have 1 CPU and your supercomputer has 500. Hiding this complexity is the job of Slurm or any other resource manager. It's open source as well :) https://github.com/SchedMD/slurm
  • Open source / part time research in the world of HPC?
    3 projects | /r/HPC | 15 May 2022
  • Brand New HPC Sysadmin at a Major University, Where to Start?
    6 projects | /r/HPC | 28 Oct 2021
    SLURM (distributed by OpenHPC). If you have shared storage, this is the industry-standard solution that is both open source and free (extremely popular on the Top500 list). You can pair this with a high-speed network or not, depending on your research workloads.
  • Is it possible to let slurmdbd connect to mysql over unix sockets?
    1 project | /r/SLURM | 6 Sep 2021

What are some alternatives?

When comparing spack and slurm you can also consider the following projects:

Homebrew - 🍺 The missing package manager for macOS (or Linux)

Ansible - Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain. Automate everything from code deployment to network configuration to cloud management, in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com.

nixpkgs - Nix Packages collection & NixOS

ohpc - OpenHPC Integration, Packaging, and Test Repo

nix-processmgmt - Experimental Nix-based process management framework

Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.

mfem - Lightweight, general, scalable C++ library for finite element methods

flux-operator - Deploy a Flux MiniCluster to Kubernetes with the operator

NixOS-docker - DEPRECATED! Dockerfiles to package Nix in a minimal docker container

prometheus - The Prometheus monitoring system and time series database.