| | cupy | legion |
|---|---|---|
| Mentions | 21 | 11 |
| Stars | 7,787 | 647 |
| Growth | 1.2% | 1.2% |
| Activity | 9.9 | 9.9 |
| Latest commit | 5 days ago | 23 days ago |
| Language | Python | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cupy
- CuPy: NumPy and SciPy for GPU
- Keras 3.0
I did not expect anything interesting, but this is actually cool.
> A full implementation of the NumPy API. Not something "NumPy-like" — just literally the NumPy API, with the same functions and the same arguments.
I suppose it's like https://cupy.dev/
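For a sense of what "the same API" means in CuPy's case, here is a minimal sketch (assuming a working CuPy install and an NVIDIA GPU); the function names and arguments match NumPy's, and only the namespace changes:
```python
import numpy as np
import cupy as cp

# Identical signatures; the cp.* calls run on the GPU.
x_cpu = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
x_gpu = cp.linspace(0.0, 1.0, 1_000_000, dtype=cp.float32)

print(np.fft.rfft(x_cpu)[:3])
print(cp.fft.rfft(x_gpu)[:3])  # same call, computed on the device
```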
- Progress on No-GIL CPython
- Fedora 40 Eyes Dropping Gnome X11 Session Support
What was the difference in runtime performance, and did you try CuPy?
https://github.com/cupy/cupy :
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
Projects using CuPy:
- How does one optimize their functions?
It's more effort, though. You will likely have to format your data in specific ways for the GPU to process it efficiently. I've done this kind of thing with PyTorch tensors, but there are also math-specific libraries like CuPy. If you only have millions of elements, NumPy should be fine.
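As a rough sketch of that trade-off (the array size is arbitrary; assumes CuPy and a CUDA GPU are available): the GPU wins when the math dominates, but the host-to-device and device-to-host copies are part of the cost, which is why plain NumPy is often fine at modest sizes.
```python
import numpy as np
import cupy as cp

a_host = np.random.rand(10_000_000).astype(np.float32)

a_dev = cp.asarray(a_host)               # host -> device copy
result_dev = cp.sqrt(a_dev) * 2.0 + 1.0  # elementwise math on the GPU
result = cp.asnumpy(result_dev)          # device -> host copy

# For small arrays the two copies can outweigh the GPU speedup.
```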
- Speed Up Your Physics Simulations (250x Faster Than NumPy) Using PyTorch. Episode 1: The Boltzmann Distribution
I'd also recommend checking out CuPy, which aims to fully re-implement the NumPy API for CUDA GPUs while taking advantage of NVIDIA's specialized libraries like cuBLAS, cuRAND, cuSOLVER, etc. The tradeoff is that it only works with NVIDIA GPUs.
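A small sketch of that in practice (assumes an NVIDIA GPU): these NumPy-style calls dispatch to NVIDIA's libraries under the hood, roughly matrix multiplication to cuBLAS, random number generation to cuRAND, and dense solves to cuSOLVER.
```python
import cupy as cp

a = cp.random.rand(1024, 1024, dtype=cp.float32)  # cuRAND-backed generation
b = cp.random.rand(1024, 1024, dtype=cp.float32)

c = a @ b                        # GEMM, backed by cuBLAS
x = cp.linalg.solve(a, b[:, 0])  # dense solve, backed by cuSOLVER
```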
- ELI5: Why doesn't numpy work on GPUs?
u/Spataner's answer is great. If you WANT GPU-enabled numpy functions, I would check out CuPy: https://cupy.dev/
- Help!!! Training neural net in vs code
Not sure how VS Code is relevant here as it's just your IDE; it shouldn't have any influence on this. Now, seeing as you're using numpy (which has no GPU support), you could try to use something like CuPy in place of numpy. I'm not sure about the interoperability because I've never used it myself, but if you're lucky it could be as simple as replacing all numpy calls with the same CuPy calls (or replacing all import numpy as np with import cupy as np).
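Here is a hedged sketch of that swap for a numpy-only script; whether it works out of the box depends on what else the code touches, since arrays handed to non-CuPy libraries usually need an explicit copy back to the host.
```python
# Before: import numpy as np
import cupy as np  # drop-in swap; the numeric code below is unchanged

inputs = np.random.randn(64, 784).astype(np.float32)
weights = np.random.randn(784, 128).astype(np.float32)
activations = np.maximum(0.0, inputs @ weights)  # now executed on the GPU

# Caveat: anything passed to non-CuPy libraries (matplotlib, pandas, ...)
# generally needs a host copy first, e.g. cupy.asnumpy(activations).
```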
- What's the best thing/library you learned this year?
CuPy replicates the NumPy and SciPy APIs but runs on the GPU.
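For the SciPy half, the GPU counterparts live under the cupyx.scipy namespace; a minimal sketch (assumes a CUDA-enabled CuPy install):
```python
import cupy as cp
import cupyx.scipy.ndimage as ndi  # GPU counterpart of scipy.ndimage

img = cp.random.rand(512, 512, dtype=cp.float32)
blurred = ndi.gaussian_filter(img, sigma=3.0)  # same call as scipy.ndimage
```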
- Making Python fast for free – adventures with mypyc
For that, you can use cupy[0], PyTorch[1], or TensorFlow[2]. They all mimic NumPy's API and give you the option of running on your GPU.
[0] https://cupy.dev/
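Because the APIs line up, one common pattern is writing functions that accept either a NumPy or a CuPy array and dispatch to whichever module the input belongs to, using cupy.get_array_module. A short sketch:
```python
import numpy as np
import cupy as cp

def softmax(x):
    xp = cp.get_array_module(x)  # numpy for host arrays, cupy for device arrays
    e = xp.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np.array([1.0, 2.0, 3.0])))  # computed on the CPU
print(softmax(cp.array([1.0, 2.0, 3.0])))  # computed on the GPU
```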
legion
- Legion 24.03.0 – Control Replication
- Antithesis of a One-in-a-Million Bug: Taming Demonic Nondeterminism
I work on a distributed runtime system for heterogeneous supercomputers [1].
As an example of the sort of bug we regularly deal with, I am at this exact moment tracking down a freeze that occurs on 8,192 nodes of a supercomputer [2]. That means I'm using about 64,000 GPUs and about half a million CPU cores. The smallest node count at which I've seen my issue is 2,048 nodes, and at that scale it only happens about 10% of the time.
We've been debating internally whether Antithesis could help us or not. On the one hand, the fuzzing to explore the state space, and deterministic reproduction, are exactly what we want. On the other hand, we believe our state space is much larger than what you see in a typical distributed database. (And not just because of the sheer scale of things; even on a single node we have state machines with on the order of hundreds to thousands of states in them.) Based on the post here and the "scenario" count explored in CouchDB, I'm not convinced you'd be able to handle us. :-)
I'd be curious what you think. Happy to discuss here, or contact info in profile.
[1]: https://legion.stanford.edu/
[2]: https://www.olcf.ornl.gov/frontier/
- Progress on No-GIL CPython
Parallelism in CS is a bit like security in CS. People know it matters in the abstract sense, but you really only get into it if you look for the training specifically. We're getting better at both over time: just as more languages/libraries/etc. are secure by default, more now are parallel by default. There's a ways to go, but I'm glad we didn't do this prematurely, because the technology has improved a lot in the last decade. Look, for example, at what we can do (safely!) with Rayon in Rust vs. (unsafely!) with OpenMP in C++.
And there are things even further afield like what I work on [1][2][3].
[1]: https://legion.stanford.edu/
[2]: https://regent-lang.org/
[3]: https://github.com/nv-legate/cunumeric
- Mojo is now available on Mac
Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.
Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).
But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.
- Announcing Chapel 1.32
I should also note that there is Pygion if you want to use Python. Not a lot of great reference material right now, but there's the paper:
https://legion.stanford.edu/pdfs/pygion2019.pdf
And code samples:
https://github.com/StanfordLegion/legion/tree/stable/binding...
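For flavor, here is a rough, hypothetical sketch in the style of the Pygion paper linked above; the exact decorator options and launch mechanism (the Legion repository ships a legion_python interpreter for running such scripts) should be treated as assumptions rather than a verified recipe.
```python
# Hypothetical Pygion-style sketch; details may differ from the real bindings.
import pygion
from pygion import task

@task
def greet(i):
    print("hello from task", i)

@task
def main():
    # Task launches look like ordinary function calls; the Legion runtime
    # decides where and when they actually execute.
    for i in range(4):
        greet(i)
```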
- Is anyone using PyPy for real work?
We use PyPy for performing verification of our software stack [1], and also for profiling tools [2]. The verification tool is basically a complete reimplementation of our main product, and therefore encodes a massive amount of business logic (and is therefore difficult or impossible to rewrite in another language). As with other users, we found the switch to PyPy was seamless and provides us with something like a 2.5x speedup out of the box, with (I think) higher speedups in some specific cases.
We eventually rewrote the profiler tool in Rust for additional speedups, but as mentioned, the verification engine is probably too complicated to ever do that with, so we really appreciate drop-in tools like PyPy that can speed up our code.
[1]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
[2]: https://github.com/StanfordLegion/legion/blob/master/tools/l...
- Make your programs run faster by better using the data cache (2020)
Legion is also doing something like that: https://legion.stanford.edu/
- Is Parallel Programming Hard, and, If So, What Can You Do About It? [pdf]
If you really want to dig into it you can read up on the tutorials and/or papers from the Legion project: https://legion.stanford.edu/
But briefly, these task-based programs preserve sequential semantics. That means that, whatever the system actually does when running your program, as long as you follow the rules the parallelism should be invisible to the program's behavior.
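To make "sequential semantics" concrete, here is a deliberately toy, hypothetical illustration in plain Python (not the Legion API): the program is written as an ordinary sequence of task launches, the scheduler overlaps tasks only when their declared read/write sets don't conflict, and the final result is the same as running the tasks one after another.
```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, data):
    """Toy scheduler: launch tasks in program order, but let a task start as
    soon as every earlier task it actually conflicts with has finished."""
    launched = []  # (future, reads, writes) for each task launched so far
    with ThreadPoolExecutor() as pool:
        for fn, reads, writes in tasks:
            # True dependencies only: read-after-write, write-after-read/write.
            deps = [f for f, r, w in launched
                    if (writes & (r | w)) or (reads & w)]
            def wrapped(fn=fn, deps=deps):
                for d in deps:
                    d.result()  # wait for conflicting predecessors only
                fn(data)
            launched.append((pool.submit(wrapped), reads, writes))
    return data  # pool shutdown waits for all tasks to finish

data = {"a": 0, "b": 0, "c": 0}
tasks = [
    (lambda d: d.update(a=1),               set(),      {"a"}),
    (lambda d: d.update(b=2),               set(),      {"b"}),  # independent: may overlap with the first
    (lambda d: d.update(c=d["a"] + d["b"]), {"a", "b"}, {"c"}),  # must wait for both
]
print(run_tasks(tasks, data))  # always {'a': 1, 'b': 2, 'c': 3}, as if run sequentially
```
The real runtime derives the dependence analysis from declared privileges on logical regions rather than hand-written read/write sets, but the observable guarantee is the same: results match the sequential order of the program.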
- Ask HN: Who is hiring? (September 2022)
Computer Science Research Dept., SLAC National Accelerator Laboratory | Research Scientist / Engineer | Menlo Park, CA or REMOTE, VISA | Full Time
We're a research group within SLAC, headed by Alex Aiken (https://theory.stanford.edu/~aiken/). We focus on fundamental CS research that has the potential to impact science, mainly in the areas of high-performance and distributed computing, programming languages, compilers, networks, operating systems, etc. One of our major projects is Legion, a forward-looking programming system for distributed computing (https://legion.stanford.edu/). Legion has been used to create new programming languages (https://regent-lang.org/), seamless distributed NumPy (https://developer.nvidia.com/cunumeric), and a drop-in replacement for Keras and PyTorch (https://flexflow.ai/), among many other things.
We are looking for strong scientists and engineers to join our group. For clarity (because these terms vary by industry/company), scientists mainly focus on producing research results (e.g., papers and research software) while engineers mainly focus on software development and deliverables (e.g., system or application implementation). For scientist positions please expect to provide a CV with relevant publications.
The official application links are below, but please feel free to contact me directly if you have questions. (My HN username @slac.stanford.edu)
Scientist (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
Engineer (Computer Science):
https://erp-hprdext.erp.slac.stanford.edu/psp/hprdext/EMPLOY...
We've had some reports that the application site doesn't work well in Google Chrome. You might want to apply in Firefox.
- The Underwhelming Impact of Software Engineering Research (April 2022)
There are some points in the middle, but it's rare. I worked on one of these [1]. We've been building the system for just over ten years, and are starting to see some truly killer apps being built on top of it [2, 3].
While it has some great benefits once you arrive, the upfront costs are enormous. You basically need to find a funding source (or sources) that will pay for this product while you're building it. Also, in order for the research payoff to be worth it, you need both the product itself and the subsequent innovations it enables to be research-worthy. Not all areas of research can support this. On top of it all, even when you do this, you'll still spend years of effort in activities that are essentially not research. You're basically responsible for all of your own customer support, sales, marketing, and so on: like a startup, but without the financial upside if you succeed. Yes, there is recognition and so on, but the payoffs aren't as dramatic. Most people aren't ready to commit to this path.
Keep in mind that you can't build this in 5 years either, so a single generation of PhD students can't get it done. The only reason we were successful is that the key staff on the project stuck around for 5+ years after their PhDs because we all believed in doing the work.
Given all that, I don't hold it against people at all who just want to build prototypes and then move on to the next thing. It's way less risky, and the reward is higher relative to the costs.
[1]: https://legion.stanford.edu/
[2]: https://flexflow.ai/
[3]: https://developer.nvidia.com/cunumeric
What are some alternatives?
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
pldb - PLDB: a Programming Language Database. A computable encyclopedia about programming languages.
Numba - NumPy aware dynamic Python compiler using LLVM
preshed - 💥 Cython hash tables that assume keys are pre-hashed
scikit-cuda - Python interface to GPU-powered libraries
arkouda - Arkouda (αρκούδα): Interactive Data Analytics at Supercomputing Scale :bear:
TensorFlow-object-detection-tutorial - The purpose of this tutorial is to learn how to install and prepare TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch
legate.sparse
bottleneck - Fast NumPy array functions written in C
HTR-solver - Hypersonic Task-based Research (HTR) solver for the Navier-Stokes equations at hypersonic Mach numbers including finite-rate chemistry for dissociating air and multicomponent transport.
dpnp - Data Parallel Extension for NumPy
soleil-x - Soleil-X is a turbulence/particle/radiation solver written in the Regent language for execution with the Legion runtime.