autograd VS ideas4

Compare autograd vs ideas4 and see what their differences are.

autograd

Efficiently computes derivatives of numpy code. (by HIPS)

ideas4

An Additional 100 Ideas for Computing https://samsquire.github.io/ideas4/ (by samsquire)
                 autograd        ideas4
Mentions         6               26
Stars            6,797           89
Growth           0.7%            -
Activity         6.0             4.6
Last commit      7 days ago      6 months ago
Language         Python          -
License          MIT License     -
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

autograd

Posts with mentions or reviews of autograd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-28.
  • JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
    12 projects | news.ycombinator.com | 28 Sep 2023
    Actually, that's never been a constraint for JAX autodiff. JAX grew out of the original Autograd (https://github.com/hips/autograd), so differentiating through Python control flow always worked. It's jax.jit and jax.vmap which place constraints on control flow, requiring structured control flow combinators like those.
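
    As a minimal illustrative sketch of the distinction described here (not from the linked discussion): jax.grad differentiates straight through ordinary Python control flow, while the same branch under jax.jit has to be expressed with a structured combinator such as lax.cond.

      import jax
      from jax import lax

      # Plain Python control flow: jax.grad traces with concrete values,
      # so an ordinary `if` on the input works fine.
      def f(x):
          if x > 0:
              return x ** 2
          return -x

      print(jax.grad(f)(3.0))   # 6.0
      print(jax.grad(f)(-3.0))  # -1.0

      # Under jax.jit the branch condition is abstract, so the Python `if`
      # would fail; the structured combinator lax.cond expresses it instead.
      @jax.jit
      def f_jitted(x):
          return lax.cond(x > 0, lambda v: v ** 2, lambda v: -v, x)

      print(jax.grad(f_jitted)(3.0))  # 6.0
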
  • Autodidax: Jax Core from Scratch (In Python)
    4 projects | news.ycombinator.com | 11 Feb 2023
    I'm sure there's a lot of good material around, but here are some links that are conceptually very close to the linked Autodidax.

    There's [Autodidact](https://github.com/mattjj/autodidact), a predecessor to Autodidax, which was a simplified implementation of [the original Autograd](https://github.com/hips/autograd). It focuses on reverse-mode autodiff, not building an open-ended transformation system like Autodidax. It's also pretty close to the content in [these lecture slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slid...) and [this talk](http://videolectures.net/deeplearning2017_johnson_automatic_...). But the autodiff in Autodidax is more sophisticated and reflects clearer thinking. In particular, Autodidax shows how to implement forward- and reverse-modes using only one set of linearization rules (like in [this paper](https://arxiv.org/abs/2204.10923)).

    Here's [an even smaller and more recent variant](https://gist.github.com/mattjj/52914908ac22d9ad57b76b685d19a...), a single ~100 line file for reverse-mode AD on top of NumPy, which was live-coded during a lecture. There's no explanatory material to go with it though.
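
    In the same spirit, here is a minimal sketch of tape-based reverse-mode AD (a toy Var class supporting only add and mul; illustrative only, not the implementation from the linked gist or slides):

      # Toy reverse-mode AD: record local gradients, then walk the graph backwards.
      class Var:
          def __init__(self, value, parents=()):
              self.value = value      # primal value
              self.parents = parents  # pairs of (parent Var, local gradient)
              self.grad = 0.0

          def __add__(self, other):
              return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

          def __mul__(self, other):
              return Var(self.value * other.value,
                         ((self, other.value), (other, self.value)))

      def backward(output):
          # Topologically order the graph, then accumulate gradients in reverse.
          order, seen = [], set()
          def visit(node):
              if id(node) in seen:
                  return
              seen.add(id(node))
              for parent, _ in node.parents:
                  visit(parent)
              order.append(node)
          visit(output)
          output.grad = 1.0
          for node in reversed(order):
              for parent, local_grad in node.parents:
                  parent.grad += local_grad * node.grad

      # f(x, y) = x * y + x  ->  df/dx = y + 1, df/dy = x
      x, y = Var(3.0), Var(4.0)
      z = x * y + x
      backward(z)
      print(x.grad, y.grad)  # 5.0 3.0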

  • Numba: A High Performance Python Compiler
    11 projects | news.ycombinator.com | 27 Dec 2022
    XLA is "higher level" than what Numba produces.

    You may be able to get the equivalent of jax via numba+numpy+autograd[1], but I haven't tried it before.

    IMHO, jax is best thought of as a numerical computation library that happens to include autograd, vmapping, pmapping and provides a high level interface for XLA.

    I have built a numerical optimisation library with it, and although a few things became verbose, it was a rather pleasant experience: the natural vmapping made everything a breeze, and I didn't have to write the gradients for my testing functions, except for special cases involving exponents and logs that needed a bit of delicate care.

    [1] https://github.com/HIPS/autograd
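
    For reference, basic usage of HIPS autograd on plain NumPy-style code looks roughly like this (a minimal sketch using the library's documented grad API; the loss function is made up for illustration):

      import autograd.numpy as np   # thinly-wrapped NumPy
      from autograd import grad

      # An ordinary NumPy-style function, including the exp/log terms
      # that the comment above mentions needing delicate care with.
      def loss(w):
          return np.sum(np.log(1.0 + np.exp(-w)) + 0.5 * w ** 2)

      dloss = grad(loss)            # reverse-mode gradient of `loss`
      w = np.array([0.5, -1.0, 2.0])
      print(dloss(w))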

  • Run Your Own DALL·E Mini (Craiyon) Server on EC2
    16 projects | dev.to | 26 Jul 2022
    Next, we want the code in the https://github.com/hrichardlee/dalle-playground repo, and we want to construct a pip environment from the backend/requirements.txt file in that repo. We were almost able to use the saharmor/dalle-playground repo as-is, but we had to make one change to add the jax[cuda] package to the requirements.txt file. In case you haven’t seen jax before, jax is a machine-learning library from Google, roughly equivalent to Tensorflow or PyTorch. It combines Autograd for automatic differentiation and XLA (accelerated linear algebra) for JIT-compiling numpy-like code for Google’s TPUs or Nvidia’s CUDA API for GPUs. The CUDA support requires explicitly selecting the [cuda] option when we install the package.
  • Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
    7 projects | news.ycombinator.com | 25 Dec 2021
    > fun fact, the Jax folks at Google Brain did have a Python source code transform AD at one point but it was scrapped essentially because of these difficulties

    I assume you mean autograd?

    https://github.com/HIPS/autograd

  • JAX - COMPARING WITH THE BIG ONES
    2 projects | /r/CryptocurrencyICO | 6 Sep 2021
    These four points lead to an enormous differentiation in the ecosystem: Keras, for example, was originally thought to be almost completely focused on point (4), leaving the other tasks to a backend engine. In 2015, on the other hand, Autograd focused on the first two points, allowing users to write code using only "classic" Python and NumPy constructs, subsequently providing many options for point (2). Autograd's simplicity greatly influenced the development of the libraries to follow, but it was penalized by the clear lack of points (3) and (4), i.e. adequate techniques to speed up the code and sufficiently abstract modules for neural network development.

ideas4

Posts with mentions or reviews of ideas4. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-18.
  • WTF is going on with R7RS Large?
    2 projects | news.ycombinator.com | 18 Aug 2023
    https://github.com/samsquire/ideas4#334-knowledgegraph-progr...
  • Async rust – are we doing it all wrong?
    9 projects | news.ycombinator.com | 19 Jul 2023
    How would you do control flow and scheduling and parallelism and async efficiently with this code?

    `db.save()` and `download()` are IO intensive, whereas `document.query("a")` and `parse` are CPU intensive.

    I think its work diagram looks like this: https://github.com/samsquire/dream-programming-language/blob...

    I've tried to design a scalable multithreaded architecture which combines lightweight threads + thread pools for work + control threads for IO epoll or liburing loops:

    Here's the high level diagram:

    https://github.com/samsquire/ideas5/blob/main/NonblockingRun...

    The secret is modelling control flow as a data flow problem and having a simple but efficient scheduler.

    I wrote about schedulers here and binpacking work into time:

    https://github.com/samsquire/ideas4#196-binpacking-work-into...

    I also have a 1:M:N lightweight thread scheduler/multiplexer:

    https://github.com/samsquire/preemptible-thread
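
    As a rough sketch of the "separate pools for IO and CPU work" idea (hedged: this uses Python's concurrent.futures rather than the linked design, and download/parse/save are stand-in functions):

      import concurrent.futures as cf
      import time

      io_pool  = cf.ThreadPoolExecutor(max_workers=8)   # IO-bound stages
      cpu_pool = cf.ThreadPoolExecutor(max_workers=4)   # CPU-bound stages

      def download(url):    # stand-in IO work
          time.sleep(0.1)
          return f"<html>{url}</html>"

      def parse(doc):       # stand-in CPU work
          return doc.upper()

      def save(record):     # stand-in IO work
          time.sleep(0.05)
          return f"saved {record!r}"

      def run(url):
          # Each stage is submitted to the pool matching its workload; the
          # "control flow" is just data flowing from one future to the next.
          doc    = io_pool.submit(download, url).result()
          parsed = cpu_pool.submit(parse, doc).result()
          return io_pool.submit(save, parsed).result()

      print(run("https://example.com"))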

  • It Took Me a Decade to Find the Perfect Personal Website Stack – Ghost+Fathom
    14 projects | news.ycombinator.com | 9 Jul 2023
    My blogging/journalling setup is simple.

    I just use GitHub. I just rely on the default repository view on GitHub.com

    I create a README.md and add markdown headings to the bottom or to the top (bottom if it's a journal, top if it's a blog), and then when I get to 100-800 entries I create a new repository and repeat.

    https://github.com/samsquire/ideas (2013)

    https://github.com/samsquire/ideas4

    https://github.com/samsquire/ideas3

    https://github.com/samsquire/ideas2

  • Ask HN: Could you show your personal blog here?
    55 projects | news.ycombinator.com | 4 Jul 2023
    Thanks for posting this Ask HN question.

    I journal ideas and thoughts about computers and software. I am interested in software architecture, parallelism, async, coroutines, database internals, programming language implementation, software design and the web.

    https://github.com/samsquire/ideas (2013)

    https://github.com/samsquire/ideas2

    https://github.com/samsquire/ideas3

    https://github.com/samsquire/ideas4 <-- this is recent but needs editing

    https://github.com/samsquire/ideas5 <-- this is what I'm working on now

    https://github.com/samsquire/startups

    https://github.com/samsquire/blog <-- thoughts I want to write about, but incomplete

    I use README.md on GitHub and create a heading at the bottom for each entry. I use Typora on Windows or the GitHub web interface to edit.

  • Our Plan for Python 3.13
    10 projects | news.ycombinator.com | 15 Jun 2023
    My deep interest is multithreaded code. I'm not sure a software engineer working on business software should be spending much time debugging multithreaded bugs, because from my perspective that is the wrong level of abstraction for business operations.

    I'm looking for an approach to writing concurrent code with parallelism that is elegant, easy to understand, and hard to introduce bugs into. This requires alternative programming approaches and, in my view, alternative notations.

    One such design uses monotonic state machines which can only move in one direction. I've designed a syntax and written a parser and very toy runtime for the notation.

    https://github.com/samsquire/ideas5#56-stateful-circle-progr...

    https://github.com/samsquire/ideas4#558-state-machine-formul...

    The idea is inspired by LMAX Disruptor and queuing systems.
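
    For illustration only, here is a minimal sketch of the "can only move in one direction" idea (a hypothetical MonotonicStateMachine class, not the author's syntax, parser, or runtime):

      class MonotonicStateMachine:
          """A state machine whose state index may only increase."""
          def __init__(self, states):
              self.states = list(states)   # ordered states, e.g. received -> parsed -> saved
              self.index = 0

          @property
          def state(self):
              return self.states[self.index]

          def advance(self, target):
              target_index = self.states.index(target)
              if target_index < self.index:
                  raise ValueError(f"cannot move backwards: {self.state} -> {target}")
              self.index = target_index
              return self.state

      m = MonotonicStateMachine(["received", "parsed", "saved"])
      m.advance("parsed")
      m.advance("saved")
      # m.advance("received")  # would raise: transitions are forward-only

    Because the state only ever moves forward, observers never see it regress, which fits the queue-like flavour of the systems mentioned above.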

  • io_uring support for libuv – 8x increase in throughput
    3 projects | news.ycombinator.com | 28 May 2023
    This is really good. Thank you!

    I've been studying how to create an asynchronous runtime that works across threads. My goal: neither CPU-bound nor IO-bound work should slow down event loops.

    I've only written two Rust programs, but in Rust you can presumably use Rayon (CPU scheduling) and Tokio (IO scheduling).

    I wrote about using the LMAX Disruptor ringbuffer pattern between threads.

    https://github.com/samsquire/ideas4#51-rewrite-synchronous-c...

    I am designing a state machine formulation syntax that is thread safe and parallelises effectively. It looks like EBNF syntax or a bash pipeline. Parallel steps go in curly brackets. There is an implied interthread ringbuffer between pipes.

      states = state1 | {state1a state1b state1c} {state2a state2b state2d} | state3
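
    One rough way to read that notation (a hedged interpretation, not the author's runtime): each | boundary becomes a bounded queue standing in for the interthread ringbuffer, each stage runs in its own thread, and the curly-bracket steps are shown sequentially here for brevity even though the notation intends them to run in parallel.

      import threading, queue

      def stage(steps, inbox, outbox):
          # Run this stage's steps on each item, then pass the item downstream.
          for item in iter(inbox.get, None):        # None is the shutdown signal
              for step in steps:
                  item = step(item)
              outbox.put(item)
          outbox.put(None)

      # states = state1 | {state1a state1b state1c} | state3
      state1  = lambda x: x + 1
      state1a = lambda x: x * 2
      state1b = lambda x: x - 3
      state1c = lambda x: x * 10
      state3  = lambda x: f"result={x}"

      stages = [[state1], [state1a, state1b, state1c], [state3]]
      queues = [queue.Queue(maxsize=64) for _ in range(len(stages) + 1)]  # bounded, ringbuffer-like

      threads = [threading.Thread(target=stage, args=(s, queues[i], queues[i + 1]))
                 for i, s in enumerate(stages)]
      for t in threads:
          t.start()

      queues[0].put(5)
      queues[0].put(None)        # shut the pipeline down
      for t in threads:
          t.join()
      print(queues[-1].get())    # result=90
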
  • What Is Type-Level Programming?
    3 projects | news.ycombinator.com | 3 May 2023
    This is very interesting and could lead to some futuristic programming technology.

    I kind of want to plot the state space of a program to see all available states.

    In my exploration of distributed systems, microservices and multithreaded systems, it is extremely helpful to try to see what potential states the system can be in. Global and local reasoning about these kinds of software is rather difficult.

    I've written about value tracing but I've not heard of treating values as types. I would love to be able to see the trajectory of a value through different states.

    https://github.com/samsquire/ideas4#571-value-calculus-varia...

    I've never written a TLA+ specification and I'm a complete beginner to this space but I've been trying to understand the dining philosophers one. TLA+ Toolbox is aware of discrete states in the state space, which is absolutely awesome. Types can inform us about future possible valid states.

    I began writing a visualisation of memory and animated the movement of memory around to try to reveal patterns.

    https://replit.com/@Chronological/ProgrammingRTS#index.html

    If we see types or values as positions, we can create animations of the state space unfolding in front of us. This is the dream.
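
    As a minimal sketch of the value-tracing idea (a hypothetical Traced wrapper that records each state a value passes through; purely illustrative, not the linked write-up):

      class Traced:
          """Wraps a value and records every state it passes through."""
          def __init__(self, value):
              self.value = value
              self.history = [("initial", value)]

          def apply(self, fn, label=None):
              self.value = fn(self.value)
              self.history.append((label or fn.__name__, self.value))
              return self

      price = Traced(100.0)
      price.apply(lambda v: v * 1.2, "add_tax").apply(lambda v: round(v, 2), "round")
      print(price.history)
      # [('initial', 100.0), ('add_tax', 120.0), ('round', 120.0)]

    Plotting each entry in `history` as a position gives a simple version of the trajectory-through-states view described above.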

  • Late Architecture with Functional Programming
    2 projects | news.ycombinator.com | 30 Apr 2023
    Great comment!

    >I think late architecture is orthogonal to functional, imperative

    Absolutely. From a truly architectural view, procedural, functional, and method-oriented (current OO) are really only variations on the call/return architectural style. Good and sometimes important distinctions, but not really that far apart. They are very much about computing: results from inputs. That is an appropriate architecture for fewer and fewer programs.

    See Why Architecture Oriented Programming matters

    https://blog.metaobject.com/2019/02/why-architecture-oriente...

    and

    Can Programmers Escape the Gentle Tyranny of call/return?

    https://2020.programming-conference.org/details/salon-2020-p...

    > its solution is higher level than even functional programming

    Yes. Well, functional actually gets most of its utility from being lower level as far as paradigms go (less powerful). But yes.

    > and more abstract

    No. Well, yes, if expressed with current programming languages. But that's part of the problem set, not part of the solution set. We should be able to express our architectures less abstractly, more concretely, but for that we need linguistic support. Which is why I am working on that:

    http://objective.st

    > I want software architecture to be cheap and easy to change without breaking any existing behaviours. I don't know much research on this subject.

    There was quite a bit of research at CMU, for example on packaging mismatch. The famous paper is "Architectural Mismatch: Why Reuse Is So Hard", with a ten-year follow-up in 2009, "Architectural Mismatch: Why Reuse Is Still So Hard":

    https://repository.upenn.edu/cgi/viewcontent.cgi?article=107...

    Not much has changed since.

    > https://github.com/samsquire/ideas4

    > https://devops-pipeline.com

    Will check those out. Dataflow is definitely a big part of it, with the extension of dataflow constraints (make, spreadsheets, "FRP"/"Rx"). But so is in-process REST with Storage Combinators!

    And breaking down barriers between scripting and "real" programming.

  • Service Mesh Use Cases
    2 projects | news.ycombinator.com | 11 Feb 2023
    Thanks for this.

    I have never deployed or used a service mesh, but I am designing something similar at the code layer. It is designed to route between server components, that is, to handle the architecture between threads in a multithreaded system.

    The problem I want to solve is that I want architecture to be trivially easy to change with minimal code changes. This is the promise and allure of enterprise service buses and messaging queues.

    I have managed RabbitMQ and I didn't enjoy it.

    I want a system that can scale up and down, where multiple instances of any system object can be introduced or removed without drastic rewrites.

    I would like to decouple bottlenecks from code and turn them into runtime configuration.

    My understanding of things such as Traefik and Istio is that they are frustrating to set up.

    Specifically I am working on designing interthread communication patterns for multithreaded software.

    How do you design an architecture that is easy to change, scales and is flexible?

    I am thinking of a message routing definition format that is extremely flexible and allows any topology to be created.

    https://github.com/samsquire/ideas4#526-multiplexing-setting...

    I think there is application of the same pattern to the network layer too.

    Each communication event has associated with it an environment of key-values that looks similar to this:

      petsserver1
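
    The original example is truncated above, so the following is a purely hypothetical illustration of the idea (routing defined as data, with a key/value environment attached to each event; none of the names come from the linked entry):

      # Hypothetical routing definition: topology changes become configuration
      # edits rather than code changes.
      routes = {
          "pets.created": ["indexer", "audit-log"],
          "pets.updated": ["indexer"],
          "*":            ["audit-log"],            # fallback route
      }

      def deliver(destination, environment, payload):
          print(f"-> {destination}: {payload} ({environment})")

      def route(event_type, environment, payload):
          # `environment` is the event's key/value context.
          for destination in routes.get(event_type, routes["*"]):
              deliver(destination, environment, payload)

      route("pets.created", {"server": "petsserver1"}, {"name": "Rex"})
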
  • Release engineering is exhausting so here's cargo-dist
    12 projects | news.ycombinator.com | 1 Feb 2023
    Thanks for remembering me :-)

    I would like things to run locally by default and then be deployed to the cloud where they run.

    It should be easier to debug problems if I can get the code onto my machine and investigate issues with the tools my computer has, such as "strace", "perf" and debug logging that I liberally apply to the build script.

    In production we would have log aggregation and log search (such as the ELK stack), and it is a good habit to get into the mindset of debugging production via tooling.

    But CI/CD comes before that tooling in the pipeline. You could wire up your CI/CD to log to ELK, but I would prefer locally deployable software.

    I think my focus on automating things means I want to be able to see how the thing works without relying on a deployed black box in the cloud and on assumptions about how it works rather than direct investigation.

    One of my journal entries is almost a lamentation of all the things that need to be done to release and use software.

    This is that entry:

    https://github.com/samsquire/ideas4#5-permanent-softwareplat...

    I wonder if software could be deployed more like a URL that has all the information to configure a virtual machine. Docker over URL or something.

What are some alternatives?

When comparing autograd and ideas4 you can also consider the following projects:

Enzyme - High-performance automatic differentiation of LLVM and MLIR.

preemptible-thread - How to preempt threads in user space

SwinIR - SwinIR: Image Restoration Using Swin Transformer (official repository)

ideas2 - Another 85+ Ideas for Computing https://samsquire.github.io/ideas2/

jaxonnxruntime - A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend.

wg-async - Working group dedicated to improving the foundations of Async I/O in Rust

autodidact - A pedagogical implementation of Autograd

ideas - a hundred ideas for computing - a record of ideas - https://samsquire.github.io/ideas/

fbpic - Spectral, quasi-3D Particle-In-Cell code, for CPU and GPU

saddle-data-graph - where does it come from, where does it go?

pure_numba_alias_sampling - Pure numba version of the Alias sampling algorithm from L. Devroye's "Non-Uniform Random Variate Generation"

periphery - A tool to identify unused code in Swift projects.