| | AMDGPU.jl | julia |
|---|---|---|
| Mentions | 6 | 350 |
| Stars | 265 | 44,534 |
| Growth | 0.4% | 0.5% |
| Activity | 9.0 | 10.0 |
| Last commit | 11 days ago | 3 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
AMDGPU.jl
-
Why is AMD leaving ML to nVidia?
For myself, I use Julia to write my own software (which runs on an AMD supercomputer) on a Fedora system with a 6800 XT. In my experience, everything worked nicely. To install, you need to install the rocm-opencl package with dnf, add the AMDGPU.jl Julia package, and add yourself to the video group, and you are good to go. Julia's KernelAbstractions.jl is also good to have when writing portable code.
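Below is a minimal sketch of the Julia side of that setup, assuming the system-side ROCm packages are already installed as described above (`AMDGPU.functional()` is the package's own readiness check; the array names are illustrative):

```julia
# Install AMDGPU.jl and run a quick sanity check on the GPU.
using Pkg
Pkg.add("AMDGPU")

using AMDGPU
@assert AMDGPU.functional()         # true once the ROCm stack is visible to Julia

a = ROCArray(rand(Float32, 1024))   # upload an array to the GPU
b = a .+ 1f0                        # broadcasting compiles to a GPU kernel
sum(b)                              # reductions run on the device too
```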
-
[GUIDE] How to install ROCm for GPU Julia programming via Distrobox
The Julia package AMDGPU.jl provides a Julia interface for AMD GPU (ROCm) programming. As its maintainers say, the package is developed for Julia 1.7 and for 1.9 and above, but not 1.8. I therefore downloaded the Julia 1.7.3 binary from the older-releases page on the Julia site.
-
First True Exascale Supercomputer
This is exciting news! What's also exciting is that it's not just C++ that can run on this supercomputer; there is also good (currently unofficial) support for programming those GPUs from Julia, via the AMDGPU.jl library (note: I am the author/maintainer of this library). Some of our users have been able to run AMDGPU.jl's testsuite on the Crusher test system (which is an attached testing system with the same hardware configuration as Frontier), as well as their own domain-specific programs that use AMDGPU.jl.
What's nice about programming GPUs in Julia is that you can write code once and execute it on multiple kinds of GPUs, with excellent performance. The KernelAbstractions.jl library makes this possible for compute kernels by acting as a frontend to AMDGPU.jl, CUDA.jl, and soon Metal.jl and oneAPI.jl, allowing a single piece of code to be portable to AMD, NVIDIA, Intel, and Apple GPUs, as well as CPUs (see the sketch below). Similarly, the GPUArrays.jl library enables the same behavior for idiomatic array operations, and will automatically dispatch calls to vendor-provided BLAS, FFT, RNG, linear solver, and DNN libraries when appropriate.
I'm personally looking forward to helping researchers get their Julia code up and running on Frontier so that we can push scientific computing to the max!
Library link: <https://github.com/JuliaGPU/AMDGPU.jl>
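As a concrete illustration of the portability model described above, here is a minimal KernelAbstractions.jl sketch (the `axpy!` kernel is a made-up example; the usage follows the KernelAbstractions 0.9-style API):

```julia
using KernelAbstractions

# The same kernel compiles for the CPU backend, AMDGPU's ROCBackend,
# CUDA's CUDABackend, and so on.
@kernel function axpy!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

x = rand(Float32, 1024)
y = rand(Float32, 1024)
backend = get_backend(y)   # CPU() here; a ROCArray would report ROCBackend()
axpy!(backend)(y, 2f0, x; ndrange = length(y))
KernelAbstractions.synchronize(backend)
```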
-
AI and Scientific Computing in Kubernetes with the Julia Language, K8sClusterManagers.jl
GitHub - JuliaGPU/AMDGPU.jl: AMD GPU (ROCm) programming in Julia
-
CUDA.jl v3.3: union types, debug info, graph APIs
https://github.com/JuliaGPU/AMDGPU.jl
https://github.com/JuliaGPU/oneAPI.jl
These are both less mature than CUDA.jl, but are in active development.
-
Unified programming model for all devices – will it catch on?
julia
-
Top Paying Programming Technologies 2024
34. Julia - $74,963
-
Optimize sgemm on RISC-V platform
I don't believe there is any official documentation on this, but https://github.com/JuliaLang/julia/pull/49430, for example, added prefetching to the marking phase of the GC, which saw speedups on x86 but not on M1.
-
Dart 3.3
1. no dispatch
2. dispatch on the first argument
3. dispatch on all the arguments
The first solution is clean, but people really like dispatch.
The second makes calling functions in the function-call syntax weird, because the first argument is privileged semantically but not syntactically.
The third makes calling functions in the method-call syntax weird, because the first argument is privileged syntactically but not semantically.
The closest things to this I can think of off the top of my head in remotely popular programming languages are Nim, Lisp dialects, and Julia (a dispatch sketch follows below).
Nim navigates the dispatch conundrum by providing different ways to define free functions with different degrees of dispatch. The tutorial gives a good overview: https://nim-lang.org/docs/tut2.html
Lisps, of course, lack UFCS.
See here for a discussion on the lack of UFCS in Julia: https://github.com/JuliaLang/julia/issues/31779
So, to sum up the answer to the original question: it's only obvious how to make it nice and tidy like you're wanting if you sacrifice function dispatch, which is ubiquitous for good reason!
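Since Julia is the "dispatch on all the arguments" case here, a minimal sketch of what that looks like (the function and types are made up for illustration):

```julia
# Julia picks the method from the runtime types of *every* argument,
# not just a privileged first one.
collide(a::Number, b::Number) = "number meets number"
collide(a::Number, b::String) = "number meets string"
collide(a::String, b::Number) = "string meets number"

collide(1, 2.0)    # "number meets number"
collide(1, "x")    # "number meets string"
collide("x", 1)    # "string meets number"
```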
-
Julia 1.10 Highlights
https://github.com/JuliaLang/julia/blob/release-1.10/NEWS.md
-
Best Programming languages for Data Analysis📊
Visit official site: https://julialang.org/
-
Potential of the Julia programming language for high energy physics computing
No. It runs natively on ARM.
```
julia> versioninfo()
Julia Version 1.9.3
Commit bed2cd540a1 (2023-08-24 14:43 UTC)
Build Info:
  Official https://julialang.org/ release
```
-
Rust std::fs slower than Python
https://github.com/JuliaLang/julia/issues/51086#issuecomment...
So while this "fixes" the issue, it'll introduce a confusing time delay between you freeing the memory and observing that in `htop`.
But according to https://jemalloc.net/jemalloc.3.html you can set `opt.muzzy_decay_ms = 0` to remove the delay.
Still, the musl author has some reservations about making `jemalloc` the default:
https://www.openwall.com/lists/musl/2018/04/23/2
> It's got serious bloat problems, problems with undermining ASLR, and is optimized pretty much only for being as fast as possible without caring how much memory you use.
With the above-mentioned tunables, this should be mitigated to some extent, but the general "theme" (favoring performance over memory usage) will likely still mean it's a tradeoff, or that it's only no tradeoff if you set the tunables to what you need.
-
Eleven strategies for making reproducible research the norm
I have asked about Julia's reproducibility story on the Guix mailing list in the past, and at the time Simon Tournier didn't think it was promising. I seem to recall Julia itself didn't have a reproducible build. All I know now is that the GitHub issue below is still not closed.
https://github.com/JuliaLang/julia/issues/34753
-
Julia as a unifying end-to-end workflow language on the Frontier exascale system
I don't really know what kind of rebuttal you're looking for, but I will link my HN comments from when this was first posted for some thoughts: https://news.ycombinator.com/item?id=31396861#31398796. As I said in the linked post, I'm quite skeptical of the business of trying to assess the relative bugginess of programming in different systems, because that has strong dependencies on what you consider core vs. packages and on what exactly you're trying to do.
However, bugs in general suck and we've been thinking a fair bit about what additional tooling the language could provide to help people avoid the classes of bugs that Yuri encountered in the post.
The biggest class of problems in the blog post is that it's pretty clear that `@inbounds` (and I will extend this to `@assume_effects`, even though that wasn't around when Yuri wrote his post) is problematic, because it's too hard to write correctly. My proposal for what to do instead is at https://github.com/JuliaLang/julia/pull/50641.
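To make the `@inbounds` hazard concrete, here is a minimal sketch (not taken from the linked proposal; `sum_first_n` is a made-up example):

```julia
# `@inbounds` removes bounds checks on the programmer's unchecked promise
# that every index is valid.
function sum_first_n(v::Vector{Float64}, n::Int)
    s = 0.0
    @inbounds for i in 1:n    # unsafe if n > length(v)
        s += v[i]
    end
    return s
end

sum_first_n([1.0, 2.0, 3.0], 3)    # 6.0
# sum_first_n([1.0, 2.0, 3.0], 10) would read out of bounds with no error
```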
Another common theme is that while Julia is great at composition, it's not clear what's expected to work and what isn't, because the interfaces are informal and not checked. This is a hard design problem, because it's quite close to the reasons why Julia works well. My current thoughts on that are here: https://github.com/Keno/InterfaceSpecs.jl but there's other proposals also.
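As an illustration of what "informal interface" means here, consider Julia's iteration protocol: you implement methods by convention, and nothing machine-checks that the implementation is complete or correct (the `Countdown` type below is a made-up example):

```julia
# Making a type iterable is a convention: define these methods and
# generic code (for-loops, collect, sum) will accept the type.
struct Countdown
    start::Int
end

Base.iterate(c::Countdown, state = c.start) =
    state <= 0 ? nothing : (state, state - 1)
Base.length(c::Countdown) = c.start   # many generic functions assume this

collect(Countdown(3))   # [3, 2, 1]
```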
-
Getaddrinfo() on glibc calls getenv(), oh boy
Doesn't musl have the same issue? https://github.com/JuliaLang/julia/issues/34726#issuecomment...
I also wonder about OSX's libc. Newer versions seem to have some sort of locking https://github.com/apple-open-source-mirror/Libc/blob/master...
but older versions (from 10.9) don't have any locking: https://github.com/apple-oss-distributions/Libc/blob/Libc-99...
What are some alternatives?
Vulkan.jl - Using Vulkan from Julia
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
oneAPI.jl - Julia support for the oneAPI programming toolkit.
NetworkX - Network Analysis in Python
KernelAbstractions.jl - Heterogeneous programming in Julia
Lua - Lua is a powerful, efficient, lightweight, embeddable scripting language. It supports procedural programming, object-oriented programming, functional programming, data-driven programming, and data description.
NeuralPDE.jl - Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
rust-numpy - PyO3-based Rust bindings of the NumPy C-API
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
Numba - NumPy aware dynamic Python compiler using LLVM
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
F# - Please file issues or pull requests here: https://github.com/dotnet/fsharp