Petalisp vs Optimization.jl
| | Petalisp | Optimization.jl |
|---|---|---|
| Mentions | 17 | 3 |
| Stars | 424 | 658 |
| Growth | - | 3.3% |
| Activity | 8.5 | 9.6 |
| Latest commit | about 2 months ago | 9 days ago |
| Language | Common Lisp | Julia |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Petalisp
- Petalisp: Elegant High Performance Computing
- Is there a tutorial for automatic differentiation with petalisp?
- Is there a language with lisp syntax but C semantics?
While not "as fast as C" (C is not the absolute pinnacle of performance), Common Lisp is incredibly fast compared to the majority of programming languages around today. There is even a huge amount of ongoing work being done to make it faster still. We are seeing many interesting projects that make better use of the hardware in your computer (e.g. https://github.com/marcoheisig/Petalisp).
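For a taste of the style, here is a minimal sketch of Petalisp's lazy-evaluation approach. It assumes the `lazy`, `lazy-array`, and `compute` entry points described in the project README; treat the exact API names as assumptions rather than a definitive reference.

```lisp
;; Minimal Petalisp sketch: build a lazy data-flow graph, then have
;; Petalisp compile and run it.
(ql:quickload :petalisp)

;; Nothing is evaluated here -- `lazy` only records the computation,
;; broadcasting the scalar 2 over the four-element array.
(defparameter *graph*
  (petalisp:lazy #'* 2 (petalisp:lazy-array #(1 2 3 4))))

;; `compute` compiles the graph to efficient code and returns an
;; ordinary Common Lisp array.
(petalisp:compute *graph*)  ; => #(2 4 6 8)
```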
- Common Lisp Implementations in 2023
I think the lisp-stat library is indeed being actively developed. However, one numerical CL library that doesn't get enough mention and is under constant development is Petalisp, for HPC:
https://github.com/marcoheisig/Petalisp
- numericals - Performance of NumPy with the goodness of Common Lisp
However, if you have a Lisp library that puts those semantics to use, then you could get it to employ magicl/ext-blas and cl-bmas to speed it up. (Petalisp looks relevant, but I lack the background to compare it with APL.)
- New Lisp-Stat Release
> This means CL packages can be "done".
This is true if there is nothing functional left to add to a package. However, it's very much not true for ML frameworks right now; new things are being added to the field all the time. Even so, the package I linked has the necessary ingredients for any deep learning model: CUDA support and backpropagation. The other person mentioned convolution, which I think is fairly trivial to implement. Still, if you expect everything to be ready-made for you, then you should probably stick to TensorFlow and PyTorch. If you want to explore the cutting edge and push the boundaries, then I think Common Lisp is a good tool. As an aside, it might also be interesting to note that a Common Lisp package (Petalisp) is being used for high performance computing at a German university:
https://github.com/marcoheisig/Petalisp
- The Julia language has a number of correctness flaws
- When a young programmer who has been using C for several years is convinced that C is the best possible programming language and that people who don't prefer it just haven't used it enough, what is the best argument for Lisp vs C, given that they're already convinced in favor of C?
One trick is that Common Lisp can generate and compile code at runtime, whereas static languages typically do not have a compiler available at runtime. This lets you make your own lazy person's JIT/staged compiler, which is useful if some part of the problem is not known at compile-time. Such an approach has been used at least for array munging, type munging and regular expression munging.
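As a concrete illustration, here is a sketch in standard ANSI Common Lisp (nothing Petalisp-specific; the `make-power-fn` helper is made up for this example):

```lisp
;; Generate a lambda form for an exponent N that is only known at
;; runtime, then hand it to the in-image compiler via COMPILE.
(defun make-power-fn (n)
  (compile nil `(lambda (x)
                  (declare (type double-float x))
                  (* ,@(loop repeat n collect 'x)))))

(defparameter *cube* (make-power-fn 3))
(funcall *cube* 2.0d0)  ; => 8.0d0, via natively compiled code
```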
Optimization.jl
- SciPy: Interested in adopting PRIMA, but little appetite for more Fortran code
Interesting response. I develop the Julia SciML organization https://sciml.ai/ and we'd be more than happy to work with you to get wrappers for PRIMA into Optimization.jl's general interface (https://docs.sciml.ai/Optimization/stable/). Please get in touch and we can figure out how to set this all up. I personally would be curious to try this out and do some benchmarks against nlopt methods.
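For context, solving a problem through Optimization.jl's general interface looks roughly like this (a minimal sketch following the package documentation; the Rosenbrock objective and the NelderMead solver are illustrative choices, not part of the comment above):

```julia
using Optimization, OptimizationOptimJL

# Objective in Optimization.jl's (u, p) convention: u are the
# optimization variables, p the fixed parameters.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

u0 = zeros(2)          # initial guess
p  = [1.0, 100.0]      # parameters

prob = OptimizationProblem(rosenbrock, u0, p)
sol  = solve(prob, NelderMead())  # any wrapped optimizer slots in here
```

Presumably a PRIMA wrapper would plug in as just another solver argument to `solve`, which is what would make benchmarking it against the nlopt methods straightforward.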
- Help me to choose an optimization framework for my problem
There are also Optimization and Nonconvex, which seem like umbrella packages, and I am not sure which methods to use inside them. Any help on these?
- The Julia language has a number of correctness flaws
> but would you say most packages follow or enforce SemVer?
The package ecosystem pretty much requires SemVer. If you just say `PackageX = "1"` inside of a Project.toml [compat] section, then it will assume SemVer, i.e. any version 1.x is non-breaking and thus allowed, but not version 2. Some (but very few) packages do `PackageX = ">=1"`, so you could say Julia doesn't force SemVer (because a package can declare that it believes it's compatible with all future versions), but of course that's nonsense and there will always be some bad actors around.
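Concretely, such a [compat] section looks like this (a minimal sketch; the package name and UUID are placeholders, not a real dependency):

```toml
[deps]
PackageX = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

[compat]
PackageX = "1"   # SemVer: any 1.x is allowed, 2.0 and later are not
julia = "1.6"
```

So then: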
> Would enforcing a stricter dependency graph fix some of the foot guns of using packages or would that limit composability of packages too much?
That's not the issue. As above, the dependency graphs are very strict. The issue is always at the periphery (for any package ecosystem, really). In Julia, one thing that can amplify it is the fact that Requires.jl, the hacky conditional dependency system that is strongly discouraged for many reasons, cannot specify version requirements on conditional dependencies. I find this to be the root cause of most issues in the "flow" of the package development ecosystem. Most packages are okay, but then, oh, I don't want to depend on CUDA for this feature, so a little bit of Requires.jl here, and oh, let me do a small hack for OffsetArrays. And now these little hacky features on the edge are both less tested and poorly versioned.
Thankfully there's a better way to do it by using multi-package repositories with subpackages (sketched below). For example, https://github.com/SciML/GalacticOptim.jl is a global interface for lots of different optimization libraries, and you can see all of the different subpackages here: https://github.com/SciML/GalacticOptim.jl/tree/master/lib. This lets there be a GalacticOptim package and then a GalacticBBO package, each with its own versioning and tests, while still allowing easy co-development of the parts. Very few packages in the Julia ecosystem actually use this (I only know of one other Julia package doing so) because the tooling was only recently able to support it, but this is the direction a lot of packages should be going.
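Roughly, such a repository is laid out like this (a hypothetical sketch based on the lib/ directory linked above):

```
GalacticOptim.jl/
├── Project.toml        # the umbrella package, with its own version
├── src/
└── lib/
    └── GalacticBBO/    # subpackage: separate version, tests, registration
        ├── Project.toml
        └── src/
```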
The upside, too, is that Requires.jl optional dependency handling is by far the main source of loading-time issues in Julia (because it blocks precompilation in many ways). So it's really killing two birds with one stone: decreasing package load times by about 99% (that's not even a joke; it accounts for the huge majority of load time for most packages other than StaticArrays.jl) while making version dependencies stricter. And now you know what I'm doing this week and what the next blog post will be on, haha.
What are some alternatives?
awesome-cl - A curated list of awesome Common Lisp frameworks, libraries and other shiny stuff.
StatsBase.jl - Basic statistics for Julia
JWM - Cross-platform window management and OS integration library for Java
OffsetArrays.jl - Fortran-like arrays with arbitrary, zero or negative starting indices.
cl-cuda - Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.
avm - Efficient and expressive arrayed vector math library with multi-threading and CUDA support in Common Lisp.
magicl - Matrix Algebra proGrams In Common Lisp.
Distributions.jl - A Julia package for probability distributions and associated functions.
lish - Lisp Shell
StaticLint.jl - Static Code Analysis for Julia
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/