diffrax VS cligen

Compare diffrax vs cligen and see what their differences are.

diffrax

Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/ (by patrick-kidger)

cligen

Nim library to infer/generate command-line interfaces / option / argument parsing (by c-blake)
              diffrax              cligen
Mentions      21                   32
Stars         1,230                489
Growth        -                    -
Activity      8.3                  8.4
Last commit   7 days ago           21 days ago
Language      Python               Nim
License       Apache License 2.0   ISC License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

diffrax

Posts with mentions or reviews of diffrax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-03.
  • Ask HN: What side projects landed you a job?
    62 projects | news.ycombinator.com | 3 Dec 2023
  • [P] Optimistix, nonlinear optimisation in JAX+Equinox!
    3 projects | /r/MachineLearning | 14 Oct 2023
    Optimistix has high-level APIs for minimisation, least-squares, root-finding, and fixed-point iteration, and was written to take care of these kinds of subroutines in Diffrax. (A brief usage sketch appears after this list of posts.)
  • Show HN: Optimistix: Nonlinear Optimisation in Jax+Equinox
    2 projects | news.ycombinator.com | 10 Oct 2023
    Diffrax (https://github.com/patrick-kidger/diffrax).

    Here is the GitHub: https://github.com/patrick-kidger/optimistix

    The elevator pitch is that Optimistix is really fast, especially to compile.

  • Scientific computing in JAX
    4 projects | /r/ScientificComputing | 4 Apr 2023
    Sure. So I've got some PyTorch benchmarks here. The main take-away so far has been that for a neural ODE, the backward pass takes about 50% longer in PyTorch, and the forward (inference) pass takes an incredible 100x longer.
  • [D] JAX vs PyTorch in 2023
    5 projects | /r/MachineLearning | 9 Mar 2023
    FWIW this worked for me. :D My full-time job is now writing JAX libraries at Google. Equinox for neural networks, Diffrax for differential equation solvers, etc.
  • Returning to snake's nest after a long journey, any major advances in python for science ?
    7 projects | /r/Python | 24 Jan 2023
    It's relatively early days yet, but JAX is in the process of developing its nascent scientific computing / scientific machine learning ecosystem, mostly because of its strong autodifferentiation capabilities, excellent JIT compiler, etc. (E.g., to show off one of my own projects, Diffrax is the library of diffeq solvers for JAX; a minimal sketch appears after this list of posts.)
  • What's the best thing/library you learned this year ?
    12 projects | /r/Python | 16 Dec 2022
    Diffrax - solving ODEs with JAX and computing their derivatives automatically. functools - love partial and lru_cache. fastprogress - a simpler progress bar than tqdm.
  • PyTorch 2.0
    4 projects | news.ycombinator.com | 2 Dec 2022
    At least prior to this announcement: JAX was much faster than PyTorch for differentiable physics. (Better JIT compiler; reduced Python-level overhead.)

    E.g for numerical ODE simulation, I've found that Diffrax (https://github.com/patrick-kidger/diffrax) is ~100 times faster than torchdiffeq on the forward pass. The backward pass is much closer, and for this Diffrax is about 1.5 times faster.

    It remains to be seen how PyTorch 2.0 will compare, of course!

    Right now my job is actually building out the scientific computing ecosystem in JAX, so feel free to ping me with any other questions.

  • Python 3.11 is much faster than 3.8
    11 projects | news.ycombinator.com | 26 Oct 2022
    https://github.com/patrick-kidger/equinox

    https://github.com/patrick-kidger/diffrax

    Which are neural network and differential equation libraries for JAX.

    [Obligatory I-am-googler-my-opinions-do-not-represent-your-employer...]

  • Ask HN: What's your favorite programmer niche?
    8 projects | news.ycombinator.com | 15 Oct 2022
    Autodifferentiable programming!

    Neural networks are the famous example of this, of course -- but this can be extended to all of scientific computing. ODE/SDE solvers, root-finding algorithms, LQP, molecular dynamics, ...

    These days I'm doing all my work in JAX. (E.g. see Equinox or Diffrax: https://github.com/patrick-kidger/equinox, https://github.com/patrick-kidger/diffrax). A lot of modern work is now based around hybridising such techniques with neural networks.

    I'd really encourage anyone interested to learn how JAX works under-the-hood as well. (Look up "autodidax") Lots of clever/novel ideas in its design.
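
To give a concrete sense of the Optimistix APIs mentioned in the posts above (minimisation, least-squares, root-finding, fixed-point iteration), here is a minimal sketch in the style of its documentation. optx.minimise, optx.BFGS, and sol.value are the documented entry points; the objective function, initial guess, and tolerances below are made up for illustration:

    import jax.numpy as jnp
    import optimistix as optx

    # Toy objective: minimise ||y - 3||^2 over y.
    def objective(y, args):
        return jnp.sum((y - 3.0) ** 2)

    solver = optx.BFGS(rtol=1e-6, atol=1e-6)
    sol = optx.minimise(objective, solver, jnp.zeros(3))
    print(sol.value)  # roughly [3., 3., 3.]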
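
And for Diffrax itself, the basic pattern from its documentation looks roughly like this. diffrax.ODETerm, diffrax.Tsit5, and diffrax.diffeqsolve are the documented entry points; the vector field, time interval, and step size are made up for illustration. The point of the sketch is that the whole solve can sit under jax.grad or jax.jit:

    import jax
    import diffrax

    # dy/dt = -a * y, integrated with an explicit Runge-Kutta solver (Tsit5).
    def vector_field(t, y, a):
        return -a * y

    def final_value(a):
        term = diffrax.ODETerm(vector_field)
        sol = diffrax.diffeqsolve(term, diffrax.Tsit5(),
                                  t0=0.0, t1=1.0, dt0=0.1, y0=1.0, args=a)
        return sol.ys[0]  # by default only the value at t1 is saved

    # Differentiate the terminal state with respect to the parameter a.
    print(jax.grad(final_value)(2.0))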

cligen

Posts with mentions or reviews of cligen. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-12.
  • CLI user experience case study
    12 projects | news.ycombinator.com | 12 Jan 2024
    There is also generating the whole thing from a function signature (e.g. https://github.com/c-blake/cligen ), since then CL authors need not learn a new spec language - but then they must add back in helpful usage metadata/semantics and still need to learn a library API (though I like how those two things can be "gradual"). It's a hard space in which to find perfection, but I wish you luck in your attempt! (A minimal dispatch sketch appears after this list of posts.)
  • Things I've learned about building CLI tools in Python
    16 projects | news.ycombinator.com | 24 Oct 2023
    cligen also allows End-CL-users to adjust colorization of --help output like https://github.com/c-blake/cligen/blob/master/screenshots/di... using something like https://github.com/c-blake/cligen/wiki/Dark-BG-Config-File

    Last I knew, the argparse backing most Py CLI solutions did not support such easier-to-read (for many) help text, but the PyUniverse is too vast to be sure without a lot of related-work searching.

  • Removing Garbage Collection from the Rust Language (2013)
    9 projects | news.ycombinator.com | 11 Sep 2023
    20 milliseconds? On my 7 year old Linux box, this little Nim program https://github.com/c-blake/bu/blob/main/wsz.nim runs to completion in 275 microseconds when fully statically linked with musl libc on Linux. That's with a stripped environment (with `env -i`). It takes more like 318 microseconds with my usual 54 environment variables. The program only does about 17 system calls, though.

    Additionally, https://github.com/c-blake/cligen makes decent CLI tools a real breeze. If you like some of Go's qualities but the language seems too limited, you might like Nim: https://nim-lang.org. I generally find getting good performance much less of a challenge with Nim, but Nim is undeniably less well known with a smaller ecosystem and less corporate backing.

  • Writing Small CLI Programs in Common Lisp (2021)
    5 projects | news.ycombinator.com | 5 Sep 2023
    If you find this article interesting and are curious about Nim then you would probably also be curious about https://github.com/c-blake/cligen

    That allows adding just one line to a module to get a pretty complete CLI, plus a string per parameter to properly document options (assuming an existing API using keyword arguments); the sketch after this list of posts shows both.

    It's also not hard to compile & link a static ELF binary with Nim. I do it with musl libc on Linux all the time. I just toss into my ~/.config/nim/nim.cfg:

        @if musl:  # make nim c -d:musl .. foo static-link `foo` with musl
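          # (Hedged completion of the truncated snippet above: the commonly
          # published musl recipe for nim.cfg; adjust paths/flags to taste.)
          gcc.exe = "musl-gcc"         # compile with the musl-gcc wrapper
          gcc.linkerexe = "musl-gcc"   # and link with it too
          passL = "-static"            # request a fully static binary
        @end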
  • GNU Parallel, where have you been all my life?
    19 projects | news.ycombinator.com | 21 Aug 2023
    Sure. No problem.

    Even Windows has popen these days. There are some tiny popenr/popenw wrappers in https://github.com/c-blake/cligen/blob/master/cligen/osUt.ni...

    Depending upon how balanced work is on either side of the pipe, you usually can even get parallel speed-up on multicore with almost no work. For example, there is no need to use quote-escaped CSV parsing libraries when you just read from a popen()d translator program producing an easier format: https://github.com/c-blake/nio/blob/main/utils/c2tsv.nim

  • The Bipolar Lisp Programmer
    3 projects | news.ycombinator.com | 11 Aug 2023
    Nim is terse yet general and can be made even more so with effort. E.g., you can gin up a little framework that is even more terse than awk yet statically typed and trivially convertible to run much faster, like https://github.com/c-blake/bu/blob/main/doc/rp.md

    You can statically introspect code to then generate related/translated ASTs to create nearly frictionless helper facilities like https://github.com/c-blake/cligen .

    You can do all of this without any real run-time speed sacrifices, depending upon the level of effort you put in / your expertise. Since it generates C/C++ or JavaScript, you get all the abilities of backend compilers almost out of the box, like profile-guided optimization or, for JS, JIT compilation.

  • Ask HN: Why did Nim not catch-on like wild fire as Rust did?
    16 projects | news.ycombinator.com | 25 Jun 2023
    It's more that those tools were what came to mind when I specifically think of my exposure to the existence of Rust. It's perhaps not that the tools were there, but that they were well known (and known for being written in Rust).

    Anecdatapoint - I've never heard of literally a single one of the utilities listed on the bu page.

    Regarding cligen, right from the start clap wins on producing idiomatic output. Compare: https://github.com/c-blake/cligen#cligen-a-native-api-inferr...

        Usage:
  • Newbie looking at nim
    1 project | /r/nim | 10 Apr 2023
    A cool example would be this, which is a CLI generation library. It lets you describe command-line interfaces simply using function signatures.
  • Zig and Rust
    6 projects | news.ycombinator.com | 27 Mar 2023
    >Does nim have anything as polished and performant as clap and serde?

    "Polished" and "high quality" are more subjective/implicitly about adoption, IMO. "Performant" has many dimensions. I just tested the Nim https://github.com/c-blake/cligen vs clap: cligen used 5X less object file space (with all size optimization tweaks enabled in both), 20% less run-time memory for large argument lists, and the same run-time per argument (with march=native equivalents on both, within statistical noise). cligen has many features - "did you mean?/suggestions", color generated help and all that - I do not see obvious feature in clap docs missing in cligen. The Nim binary serde showing is unlikely as good but there are like 10 JSON packages and that seems maybe your primary concern.

    More to add color to your point than to disagree (and to follow up on my "adoption") - your ideas about polish, quality, docs, etc. are part of the feedback loop(s) you mentioned. More users => users complain (What is confusing? What is missing? etc.) => things get fixed/cleaned up/improved => more users. Besides "performant" being multi-dimensional, the feedback loop is more of a "cyclic graph". :-) While I probably prefer Nim as much as or more than @netbioserror, I am not too shocked by the mindshare capture. It seems to happen every 5..10 years or so in prog.langs.

    While many of your points are not invalid, tech is also a highly hype-driven & fad-driven realm. In my experience, the more experience with this meta-feature that someone has, the more skeptical they are of the latest thing (more rounds of regret, etc.). Also, that feedback graph is not a pure good. Things can get too popular too quickly with near permanent consequences. ipv4 got popular so quickly that we are still mostly stuck on it 40 years later as ipv6 struggles for penetration. Whatever your favorite PL is, it may also grow features too fast.

  • Self Hosted SaaS Alternatives
    17 projects | news.ycombinator.com | 5 Mar 2023
    You are welcome. Thanks are too rarely offered. :-)

    You may also be interested in word stemming (such as the snowball stemmer used in https://github.com/c-blake/nimsearch ) or other NLP techniques. I don't know how internationalized/multi-lingual that stuff is, but conceptually you might want "series of stemmed words" to be the content fragments of interest.

    Similarity scores have many applications. Weights on a graph of cancelled downloads ranked by size might be one. :)

    Of course, for your specific "truncation" problem, you might also be able to just do an edit distance against the much smaller filenames and compare data prefixes in files or use a SHA256 of a content-based first slice. (There are edit distance algos in Nim in https://github.com/c-blake/cligen/blob/master/cligen/textUt.... as well as in https://github.com/c-blake/suggest .)

    Or, you could do a little program like ndup/sh/ndup to create a "mirrored file tree" of such content-based slices; then you could use any true duplicate-file finder (like https://github.com/c-blake/bu/blob/main/dups.nim) on the little signature system to identify duplicates and go from path suffixes in those clusters back to the main filesystem. Of course, a single KV store within one or two files would be more efficient than thousands of tiny files. There are many possibilities.
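
Tying together the cligen posts above (inferring a CLI from a function signature, one dispatch line per module, and per-parameter help strings), here is a minimal sketch based on cligen's README. The dispatch macro and its help parameter are from cligen; the proc name, its parameters, and the help strings are made up for illustration:

    import cligen                     # provides the `dispatch` macro

    proc demo(alpha = 1, beta = 2.0, verbose = false, paths: seq[string]) =
      ## The doc comment becomes the description in the generated --help.
      echo "alpha=", alpha, " beta=", beta, " verbose=", verbose, " paths=", paths

    when isMainModule:
      # One line turns the signature into option parsing, --help, etc.;
      # `help` attaches a documentation string to individual parameters.
      dispatch(demo, help = {"alpha":   "an integer knob",
                             "verbose": "emit more detail"})

Compiling that module yields a command with --alpha, --beta, --verbose, and --help options inferred from the signature, with any trailing arguments collected into paths.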

What are some alternatives?

When comparing diffrax and cligen you can also consider the following projects:

deepxde - A library for scientific machine learning and physics-informed learning

httpbeast - A highly performant, multi-threaded HTTP 1.1 server written in Nim.

tiny-cuda-nn - Lightning fast C++/CUDA neural network framework

bioawk - BWK awk modified for biological data

flax - Flax is a neural network library for JAX that is designed for flexibility.

nimforum - Lightweight alternative to Discourse written in Nim

juliaup - Julia installer and version multiplexer

loggedfs - LoggedFS - Filesystem monitoring with Fuse

equinox - Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/

lobster - The Lobster Programming Language

dm-haiku - JAX-based neural network library

walkdir - Rust library for walking directories recursively.