returnn vs Nim

| | returnn | Nim |
|---|---|---|
| Mentions | 4 | 347 |
| Stars | 349 | 16,079 |
| Growth | 0.6% | 0.5% |
| Activity | 9.8 | 9.9 |
| Latest commit | 10 days ago | 7 days ago |
| Language | Python | Nim |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
returnn
-
Keras Core: Keras for TensorFlow, Jax, and PyTorch
That looks very interesting.
I actually have developed (and am developing) something very similar, what we call the RETURNN frontend: a new frontend + new backends for our RETURNN framework. The new frontend supports very similar Python code to define models as you see in PyTorch or Keras, i.e. a core Tensor class, a base Module class you can derive from, a Parameter class, and then a core functional API to perform all the computations. It supports multiple backends, currently mostly TensorFlow (graph-based) and PyTorch, but JAX is also planned. Some details here: https://github.com/rwth-i6/returnn/issues/1120
(Note that we went a bit further ahead and made named dimensions a core principle of the framework.)
(Example beam search implementation: https://github.com/rwth-i6/i6_experiments/blob/14b66c4dc74c0...)
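For readers who don't know the pattern: it is essentially the same Tensor/Module/Parameter structure that PyTorch uses. A minimal, purely illustrative sketch in plain PyTorch (the RETURNN frontend additionally makes dimensions explicit named objects, which is not shown here):

```python
import torch

class MyLinear(torch.nn.Module):  # a base Module class you can derive from
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Parameter: a Tensor registered as a trainable weight
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # a core functional API performs the actual computation
        return torch.nn.functional.linear(x, self.weight, self.bias)

layer = MyLinear(8, 4)
out = layer(torch.randn(2, 8))  # executes eagerly under the PyTorch backend
```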
One difficulty I found was how to design the API in a way that works well both for eager-mode frameworks (PyTorch, TF eager mode) and graph-based frameworks (TF graph mode, JAX). That mostly involves everything where there is some state, or code which should not just execute in the inner training loop but e.g. only at initialization, or after each epoch, or whatever. So for example:
- Parameter initialization.
- Anything involving buffers, e.g. batch normalization.
- Other custom training loops? Or e.g. an outer loop and an inner loop (e.g. like GAN training)?
- How to implement something like weight normalization? In PyTorch, module.param is renamed, and then there is a pre-forward hook which calculates module.param on the fly for each call to forward (see the sketch after this list). So, just follow the same logic for both eager mode and graph mode?
- How to deal with control flow contexts, accessing values outside the loop which came from inside, etc.? Those things are naturally possible in eager mode, where you would just get the most recent value, and where there is no real control flow context.
- Device logic: should the device be defined explicitly for each tensor (like PyTorch), or should tensors be moved to the GPU automatically and eagerly (like TensorFlow)? And should moving from one device to another (or to CPU) be automatic or explicit?
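To make the weight-normalization example above concrete, here is a simplified sketch of the PyTorch mechanism, mimicking what torch.nn.utils.weight_norm does (eager-mode only, which is exactly why mapping it to graph mode is the open question):

```python
import torch

def apply_weight_norm(module: torch.nn.Module, name: str = "weight") -> None:
    """Replace module.<name> by direction (v) and magnitude (g) parameters;
    a forward pre-hook recomputes the effective weight before every forward."""
    weight = getattr(module, name)
    del module._parameters[name]  # the original parameter is renamed away
    module.register_parameter(name + "_g", torch.nn.Parameter(weight.norm(dim=1, keepdim=True).detach()))
    module.register_parameter(name + "_v", torch.nn.Parameter(weight.detach()))

    def _recompute(mod, inputs):
        g = getattr(mod, name + "_g")
        v = getattr(mod, name + "_v")
        # on-the-fly calculation of module.<name> for each call to forward
        setattr(mod, name, g * v / v.norm(dim=1, keepdim=True))

    module.register_forward_pre_hook(_recompute)
    _recompute(module, None)  # set an initial value

lin = torch.nn.Linear(8, 4)
apply_weight_norm(lin)
out = lin(torch.randn(2, 8))  # the hook runs first, then the normal forward
```

In eager mode the hook runs on every call; in graph mode it would only run while tracing, which happens to build the same computation here, but any hook with real side effects or state would behave differently.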
I see that you have keras_core.callbacks.LambdaCallback, which is maybe similar, but can you effectively update the logic of the module in there?
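For reference, a minimal use of that callback (a sketch; whether mutating module state from inside it plays well with a compiled graph backend is exactly my question):

```python
import keras_core

model = keras_core.Sequential([keras_core.layers.Dense(4)])

# LambdaCallback runs arbitrary Python at fixed points of the training loop,
# e.g. after each epoch; on_epoch_end receives (epoch, logs).
epoch_end_cb = keras_core.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: print(f"epoch {epoch} done, logs: {logs}")
)
# model.fit(x, y, callbacks=[epoch_end_cb])
```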
-
Python’s “Type Hints” are a bit of a disappointment to me
> warnings of IDEs are simple to ignore
This is unusual. In my experience, in the codebases I have worked with or have seen, when there are type hints, they are almost always perfectly correct.
Also, you can set up the CI to also check for IDE warnings. For example, we use this script for PyCharm: https://github.com/rwth-i6/returnn/blob/master/tests/pycharm...
The test for PyCharm inspections only passes when there are no warnings.
Although, I have to admit, we explicitly exclude type warnings because we get a couple of false positives there. So in this respect, it actually agrees with the article.
But then we also do code review and there we are strict about having it all correct.
Yes, I see the article's argument that typing in Python is not perfect and you can easily fool it if you want, so you cannot trust the types 100%. But given good standard practice, it will only rarely happen that a type is not as expected, and typing helps a lot. IDE type warnings or mypy checks are still useful tools that catch bugs for you; maybe not 100% of all typing bugs, but still maybe 80% of them or so.
> Isn’t it better to detect at least some errors than to detect none at all?
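A toy example (mine, not from the article) of the kind of bug such checks catch before it ever crashes at runtime:

```python
def parse_port(s: str) -> int | None:
    """Return the port number, or None if the string is not numeric."""
    return int(s) if s.isdigit() else None

port = parse_port("8080")
print(port + 1)  # mypy / PyCharm warn here: `port` may be None
```

At runtime this works for "8080" and only crashes on non-numeric input, which is exactly the case a test might miss; the static check flags it unconditionally.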
-
How to cleanup a branch (PR) with huge number of commits
I was trying to implement a new feature in a larger, somewhat messy project (RETURNN, but that's not so relevant).
So I created a new branch, also made a GitHub draft PR (here), and started working on it.
Nim
- 3 years of fulltime Rust game development, and why we're leaving Rust behind
-
Top Paying Programming Technologies 2024
22. Nim - $80,000
-
"14 Years of Go" by Rob Pike
I think the right answer to your question would be NimLang[0]. In reality, if you're seeking to use this in any enterprise context, you'd most likely want to select the subset of C++ that makes sense for you or just use C#.
[0] https://nim-lang.org/
- Odin Programming Language
-
Ask HN: Interest in a Rust-Inspired Language Compiling to JavaScript?
I don't think it's a Rust-inspired language, but since it has strong typing and compiles to JavaScript, did you take a look at Nim [0]?
For what it's worth, I find the language very expressive, without the verbosity in Rust that reminds me of Java. And it is also very flexible.
[0] : https://nim-lang.org/
-
The nim website and the downloads are insecure
I see a valid cert for https://nim-lang.org/
-
Nim
FYI, on the front page, https://nim-lang.org, in large type you have this:
> Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula.
-
Things I've learned about building CLI tools in Python
You're better off using a compiled language.
If you're interested in a language that's compiled, fast, but as easy and pleasant as Python, I'd recommend you take a look at [Nim](https://nim-lang.org).
And to prove what Nim is capable of, here's a cool repo with 100+ CLI apps someone wrote in Nim: [c-blake/bu](https://github.com/c-blake/bu)
-
Mojo is now available on Mac
Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.
Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).
But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.
- NIR: Nim Intermediate Representation
What are some alternatives?
punctuator2 - A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text
zig - General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
enforce - Python 3.5+ runtime type checking for integration testing and data validation
go - The Go programming language
keras-nlp - Modular Natural Language Processing workflows with Keras
Odin - Odin Programming Language
recurrent-fwp - Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021)
rust - Empowering everyone to build reliable and efficient software.
keras-core - A multi-backend implementation of the Keras API, with support for TensorFlow, JAX, and PyTorch.
crystal - The Crystal Programming Language
i6_experiments
v - Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io