spack vs ohpc

| | spack | ohpc |
|---|---|---|
| Mentions | 53 | 28 |
| Stars | 4,232 | 857 |
| Growth | 0.9% | 0.7% |
| Activity | 10.0 | 9.2 |
| Latest commit | 6 days ago | 5 days ago |
| Language | Python | C |
| License | Apache-2.0 or MIT | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
spack
- Spack – a multi-platform, multi-version package manager for OS X, Windows, Linux
- Autodafe: "freeing your project from the clammy grip of autotools."
> Are we talking about the same autotools?
Yes. Instead of figuring out how to do something particular with every single software package, I can do a --with-foo or --without-bar or --prefix=/opt/baz-1.2.3, and be fairly confident that it will work the way I want.
Certainly with package managers or (FreeBSD) Ports a lot is taken care of behind the scenes, but the above would also help the package/port maintainers. Lately I've been using Spack for special-needs compiles, where maintainer ease helps too, but there are still cases where a 'fully manual' compile is done (see the sketch after the links below).
> Suffice it to say, I prefer to work with handwritten makefiles.
Having everyone 'roll their own' system would probably be worse, because any "mysterious failure" then has to be debugged specially for each project.
Have you tried Spack?
* https://spack.io
* https://spack.readthedocs.io/en/latest/
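Since the thread keeps coming back to Spack, here is a minimal sketch of how a Spack recipe builds on exactly that configure-flag convention. `AutotoolsPackage`, `variant`, `version`, and `configure_args` are real Spack APIs; the `Baz` project, its URL, checksum, and the `foo` flag are hypothetical placeholders mirroring the `--with-foo`/`--prefix` example above.

```python
# Hypothetical Spack recipe (package.py) for the --with-foo /
# --prefix=/opt/baz-1.2.3 example above. The project name, URL, and
# "foo" variant are placeholders; the directives are real Spack APIs.
from spack.package import *


class Baz(AutotoolsPackage):
    """Illustrative autotools-based project."""

    homepage = "https://example.com/baz"
    url = "https://example.com/baz-1.2.3.tar.gz"

    # Placeholder checksum; a real recipe pins the tarball's sha256.
    version("1.2.3", sha256="0" * 64)

    variant("foo", default=True, description="Build with foo support")

    def configure_args(self):
        # Spack supplies --prefix=<install dir> itself; the recipe only
        # adds feature flags, yielding --with-foo or --without-foo
        # depending on how the variant was requested.
        return self.with_or_without("foo")
```

A user would then get the `--without-foo` behaviour with a spec like `spack install baz ~foo`, without having to read the project's configure script first.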
- FreeBSD has a(nother) new C compiler: Intel oneAPI DPC++/C++
Well, good luck with that, cause it's broken.
Previous release miscompiled Python [1]
Current release miscompiles bison [2]
[1] https://github.com/spack/spack/issues/38724
[2] https://github.com/spack/spack/issues/37172#issuecomment-181...
- Essential Command Line Tools for Developers
gh is available via Homebrew, MacPorts, Conda, Spack, Webi, and as a…
- The Curious Case of MD5
> I can't count the number of times I've seen people say "md5 is fine for use case xyz" where in some counterintuitive way it wasn't fine.
I can count many more times that people told me that md5 was "broken" for file verification when, in fact, it never has been.
My main gripe with the article is that it portrays the entire legal profession as "backwards" and "deeply negligent" when they're not actually doing anything unsafe -- or even likely to be unsafe. And "tech" knows better. Much of tech, it would seem, has no idea about the use cases and why one might be safe or not. They just know something's "broken" -- so, clearly, we should update.
> Just use a safe one, even if you think you "don't need it".
Here's me switching 5,700 or so hashes from md5 to sha256 in 2019: https://github.com/spack/spack/pull/13185
Did I need it? No. Am I "compliant"? Yes.
Really, though, the main tangible benefit was that it saved me having to respond to questions and uninformed criticism from people unnecessarily worried about md5 checksums.
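For the file-verification use case being debated here, a checksum check is just a digest comparison, whatever the hash. A minimal sketch using only Python's standard library, with a placeholder file name and expected digest (the digest shown is the sha256 of an empty input):

```python
import hashlib


def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large tarballs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values: in practice the expected digest comes from the
# project's published checksum file.
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
if sha256sum("baz-1.2.3.tar.gz") != expected:
    raise SystemExit("checksum mismatch: refusing to use the download")
```

Swapping md5 for sha256 is a one-word change here, which is why such a switch is mostly mechanical.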
- Spack Package Manager v0.21.0
- Show HN: FlakeHub – Discover and publish Nix flakes
- Nixhub: Search Historical Versions of Nix Packages
[1] https://github.com/spack/spack/blob/develop/var/spack/repos/...
- Cython 3.0 Released
In Spack [1] we can express all these constraints for the dependency solver, and we also try to always re-cythonize sources, because bundled cythonized files are sometimes forward-incompatible with Python; it's better to just regenerate them with an up-to-date Cython (see the sketch below).
[1] https://github.com/spack/spack/
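For concreteness, here is a hedged sketch of what such constraints look like in a Spack `package.py`. `PythonPackage`, `depends_on`, `when=`, and `type="build"` are Spack's real dependency directives; the package name, versions, and exact Cython bounds are illustrative, not taken from any actual recipe.

```python
import os

from spack.package import *


class PyExample(PythonPackage):
    """Hypothetical Python extension that ships cythonized sources."""

    # Placeholder checksums; a real recipe pins each sdist's sha256.
    version("2.0.0", sha256="0" * 64)
    version("1.9.0", sha256="1" * 64)

    # Solver constraints: old releases only build with Cython < 3,
    # newer releases require Cython 3.
    depends_on("py-cython@0.29:2", when="@:1", type="build")
    depends_on("py-cython@3:", when="@2:", type="build")

    def patch(self):
        # Delete the bundled, pre-generated C files so the build
        # re-cythonizes with whichever Cython the solver selected.
        for cfile in find("src", "*.c"):
            os.unlink(cfile)
```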
- Linux server for physics simulations
You want to look at the tools used for HPC systems; these are generally very well tried and tested, and can be set up for single-machine usage. For remote access we use ssh, but web interfaces such as Open OnDemand exist: https://openondemand.org/. For managing jobs, Slurm is currently the most popular option: https://slurm.schedmd.com/documentation.html. For a module system (to load software and libraries per user), Spack is a great choice: https://spack.io/. You might also want to consider containerisation options; https://apptainer.org/ is a good one.
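As a toy illustration of the Slurm piece, here is a small Python wrapper that writes a batch script and submits it with `sbatch`. It assumes a working Slurm installation with `sbatch` on PATH; the partition name, module name, and simulation binary are all placeholders, not part of the original recommendation.

```python
import subprocess
import tempfile

# Placeholder batch script: partition, module, and binary are made up.
BATCH_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=physics-sim
#SBATCH --partition=compute
#SBATCH --ntasks=8
#SBATCH --time=02:00:00

module load openmpi      # e.g. a module generated by Spack
srun ./simulate --input params.dat
"""


def submit(script: str) -> str:
    """Write the script to a temp file and submit it via sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(
        ["sbatch", path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 12345"


if __name__ == "__main__":
    print(submit(BATCH_SCRIPT))
```

The same approach works on a single machine once slurmctld/slurmd are configured with the local node in a partition.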
ohpc
- interesting read
- Rocky strikes back at Red Hat
We have plenty of licensed RHEL, but in isolated environments the hurdle of connecting to a Satellite server or their subscription hub on the internet is too high -- at least while Rocky and its ilk are available. For this setup, the licensing model doesn't match reality, at least not easily.
Are we really going to build out compatible configuration management, monitoring, logging, etc? -- it's not a seamless transition. How much time do we have to put towards this?
And yes -- there are software compatibility issues. Look at the OpenHPC software distribution; it's designed for SUSE or Enterprise Linux: https://github.com/openhpc/ohpc/wiki/2.X
- job scheduling for scientific computing on k8s?
I recommend you just stick with HPC-centric tools and workflows. Your scientists aren't going to learn k8s, as you said. SLURM is the scheduler you want, and if you're new to HPC, I recommend taking a look at https://openhpc.community
- HPC usage etiquette.
The general consensus is that pam_slurm_adopt is the better module (that's just one dude's opinion, but his citations are good). The advantage is that not only will it gatekeep SSH access, it'll also drop the SSH session into the cgroups that are constraining the user's resource limits, which also means their CPU usage will show up in sacct for the job. (If the user has multiple jobs running on a node, their SSH session may get dropped into the wrong one; no help for that.)
- HPC OS for Non-expert
- How useful/important is OpenStack for HPC?
- Wanting to setup a cluster
- Essential skills for new HPC Admin?
Check this: https://openhpc.community/ (this helped me a lot when I started. I'm no longer the admin of such systems)
- Looking to optimize research lab resources...
Overall, if you're already in a RedHat-based environment, an installation of OpenHPC is pretty straightforward. Their reference implementation assumes you have a head node for the scheduler that all other nodes NAT through, but that's not a 100% requirement as much as a common setup. It also assumes you can reformat the compute nodes and dedicate them to HPC work, so if you need to keep the systems available as normal workstations, you'll need to deviate a bit. You could also use the OpenHPC instructions as a guide for what packages to install, but it may take longer to get everything right.
- xcat education?
https://github.com/openhpc/ohpc/wiki/1.3.X -- newer versions of OpenHPC don't seem to be releasing xCAT guides anymore, unfortunately.
What are some alternatives?
Homebrew - 🍺 The missing package manager for macOS (or Linux)
EasyBuild - EasyBuild - building software with ease
nixpkgs - Nix Packages collection & NixOS
slurm - Slurm: A Highly Scalable Workload Manager
nix-processmgmt - Experimental Nix-based process management framework
openpbs - An HPC workload manager and job scheduler for desktops, clusters, and clouds.
Ansible - Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain. Automate everything from code deployment to network configuration to cloud management, in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com.
deepops - Tools for building GPU clusters
NixOS-docker - DEPRECATED! Dockerfiles to package Nix in a minimal docker container
infrastructure - The infrastructure monorepo for the Rocky Linux project. This project will be archived/deprecated in the future.
poetry2nix - Convert poetry projects to nix automagically [maintainer=@adisbladis,@cpcloud]
almalinux.org - almalinux.org official web site sources.