axon VS diamond-types

Compare axon vs diamond-types and see what their differences are.

              axon                diamond-types
Mentions      15                  15
Stars         1,446               1,428
Growth        1.9%                -
Activity      7.5                 9.2
Last commit   20 days ago         6 days ago
Language      Elixir              Rust
License       Apache License 2.0  -
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

axon

Posts with mentions or reviews of axon. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-30.
  • Would like some guidance on my learning for fine-tuned model applications (AI related) using Nx / Elixir
    1 project | /r/elixir | 30 Jun 2023
    My recommendation is to start with fast.ai to understand the machine learning part. Then, for the Elixir bit, look at some of the notebooks in the Axon (Elixir's NN library) GitHub repo. I wrote a couple of notebooks explaining how to train a basic NN using Axon. Here's one.
  • Data wrangling in Elixir with Explorer, the power of Rust, the elegance of R
    7 projects | news.ycombinator.com | 14 Apr 2023
    José from the Livebook team. I don't think I can make a pitch because I have limited Python/R experience to use as reference.

    My suggestion is for you to give it a try for a day or two and see what you think. I am pretty sure you will find weak spots and I would be very happy to hear any feedback you may have. You can find my email on my GitHub profile (same username).

    In general we have grown a lot since the Numerical Elixir effort started two years ago. Here are the main building blocks:

    * Nx (https://github.com/elixir-nx/nx/tree/main/nx#readme): equivalent to NumPy, deeply inspired by JAX. Runs on both CPU and GPU via Google XLA (also used by JAX/TensorFlow) and supports tensor serving out of the box

    * Axon (https://github.com/elixir-nx/axon): Nx-powered neural networks

    * Bumblebee (https://github.com/elixir-nx/bumblebee): Equivalent to HuggingFace Transformers. We have implemented several models and that's what powers the Machine Learning integration in Livebook (see the announcement for more info: https://news.livebook.dev/announcing-bumblebee-gpt2-stable-d...)

    * Explorer (https://github.com/elixir-nx/explorer): Series and DataFrames, as per this thread.

    * Scholar (https://github.com/elixir-nx/scholar): Nx-based traditional machine learning. This one is the most recent effort of them all. We are treading the same path as scikit-learn but are still quite early on. However, because we are built on Nx, everything is differentiable, GPU-ready, distributable, etc.

    Regarding visualization, we have "smart cells" for VegaLite and MapLibre, similar to how we did "Data Transformations" in the video above. They help you get started with your visualizations and you can jump deep into the code if necessary.

    I hope this helps!

  • Elixir and Rust is a good mix
    10 projects | news.ycombinator.com | 13 Apr 2023
    > I guess, why not use Rust entirely instead of as a FFI into Elixir or other backend language?

    Because Rust brings none of the benefits of the BEAM ecosystem to the table.

    I was an early Elixir adopter and, although I'm not currently working as an Elixir developer, I have deployed one of the largest Elixir applications for a private company in my country.

    I know it has limits, but the language itself is only a small part of the whole.

    Take ML: José Valim and Sean Moriarity studied the problem, made a plan to tackle it, and started solving it piece by piece [1] in a tightly integrated manner. It feels natural, as if Elixir had always had those capabilities, in a way no other language does. To put the icing on the cake, the community released Livebook [2] to interactively explore code and use the new tools in the simplest way possible, something that Python notebooks, after a decade of progress, only dream of being capable of.

    That's not to say that Elixir is superior as a language, but the ecosystem is flourishing and the community is able to extract 100% of the benefits from its tools and to create marvellously crafted new ones that push the limits forward every time, in such a simple manner that it looks like magic.

    And going back to Rust: you can write Rust if you need speed or if, for whatever reason, you feel it's the right tool for the job. It's totally integrated [3][4], again in a way that many other languages can only dream of, and it's in fact the reason I learned Rust in the first place.

    The opposite is not true: if you write Rust, you write Rust, and that's it. You can't take advantage of the many features the BEAM offers: OTP, hot code reloading, full inspection of running systems, distribution, scalability, fault tolerance, soft real time, and so on.

    But of course, if you don't see any advantage in them, you probably don't need them (the other option is that you don't yet know you want them :] ). In that case Rust is as good as any other language, though for a backend, even though I gently despise it, Java (or Kotlin) might be a better option.

    [1] https://github.com/elixir-nx/nx https://github.com/elixir-nx/axon

    [2] https://livebook.dev/

    [3] https://github.com/rusterlium/rustler

    [4] https://dashbit.co/blog/rustler-precompiled

  • Bumblebee: GPT2, Stable Diffusion, and More in Elixir
    5 projects | news.ycombinator.com | 8 Dec 2022
    I've trained models using Jupyter and Livebook (though only smaller toy models [1]) so I can deposit my 2 cents here. Small disclaimer that I started with Jupyter, so in some sense my mental model was biased towards Jupyter.

    I think the biggest difference that'll trip you up coming from Jupyter is that Livebook enforces linear execution. You can't arbitrarily run cells in any order like you can in Jupyter: if you change an earlier cell, all the subsequent cells have to be rerun in order. The only deviation from this is branches, which allow you to capture the state at a certain point and create a new flow from there on. There's a section in [1] that explains how branching works and how you can use it when training models.

    The other difference is that if you do something that crashes in a cell, you'll lose the state of the entire branch and have to rerun from the beginning of the branch. IIRC, stopping a long-running cell forces a rerun as well. That can be painful when running training loops that run for a while, but there are some pretty neat workarounds you can do using Kino. Using those workarounds does break the reproducibility guarantees, though.

    Personally while building NN models I find that I prefer the Jupyter execution model because for NNs, rerunning cells can be really time-consuming. Being able to quickly change some variables and run a cell out of order helps while I'm exploring/experimenting.

    Two things I love about Livebook, though, are 1) the file format makes version control super easy and 2) Kino allows for real interactivity in the notebook in a way that's much harder to do in Jupyter. So in Livebook you can easily create live-updating charts, images, etc. that show training progress or have other kinds of interactivity.

    If you're interested to see what my model training workflow looks like with Livebook (and I have no idea if it's the best workflow!), check out the examples below [1][2]. Overall I'd say it definitely works well, you just have to shift your mental model a bit if you're coming from Jupyter. If I were doing something where rerunning cells wasn't expensive I would probably prefer the Livebook model.

    [1] https://github.com/elixir-nx/axon/blob/main/notebooks/genera...

  • Building an ML model using Axon and Livebook
    1 project | /r/elixir | 11 Oct 2022
  • ElixirConf 2022 - That's a wrap!
    7 projects | dev.to | 12 Sep 2022
    Machine learning is rapidly expanding within the Elixir ecosystem, with tools such as Nx, Axon, and Explorer being used by both individuals and companies such as Amplified, as mentioned above.
  • What's your opinion on Elixir?
    3 projects | /r/rust | 20 May 2022
    It's been my professional daily driver since 2018, but I consider it an average-to-disappointing language and ecosystem on top of an incredible VM/runtime. For more specific thoughts: back in 2020 I posted some critique here, and few of those concerns have improved in the interim. There is a vestigial ML story around libraries like Nx/Axon. LiveView is inadvisable in practice but is sort of the banner marketing device right now, which disappoints me.
  • Recognize Digits Using ML in Elixir
    2 projects | /r/elixir | 11 May 2022
    Yeah, as Mark said, I think the problem is related to this issue https://github.com/elixir-nx/axon/issues/244
  • Do Elixir's benefits still hold when interfacing with another language?
    2 projects | /r/elixir | 2 May 2022
  • [P] Axon: Deep Learning in Elixir
    1 project | /r/MachineLearning | 21 Dec 2021
    Repo: https://github.com/elixir-nx/axon

diamond-types

Posts with mentions or reviews of diamond-types. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-11.
  • Open source P2P alternative to Slack and Discord built on Tor and IPFS
    18 projects | news.ycombinator.com | 11 Sep 2023
    > I think far more interesting these days would be projects like Veilid, Hyphanet's Locutus

    I have not assessed Veilid yet but it's on my list, and at first glance it seems like a very serious and informed attempt. I'm personal friends with Freenet / Hyphanet's Ian Clarke and spoke with him about Locutus when he was just getting started. It sounded awesome then and I will give this a second look too, though when he explained it to me it sounded like it had the same limitations with deletion that Nostr or the global IPFS network would have. It does seem important to note here that both Veilid and Locutus are much less mature and battle-tested than libp2p and Tor and have less Lindy longevity (longevity as a function of age). We already suffer a lot from being on the bleeding edge, so it's nice to limit the number of bleeding-edge tools we use. Libp2p, notably, has been rock solid for us and barely a time drain at all, apart from some unexpected interactions with Tor, which are mostly about the lack of an official first-class Tor transport; that is specific to our use case and should start to change soon when Tor's Arti is ready.

    > and ultimately Nostr -- even though not truly P2P in that sense -- which already happens to have a first try going with nostrchat.io.

    Nostr and Bluesky both seem very promising for the open-world use case of social networking, and it has been amazing to see Nostr grow so rapidly as a community. I am rooting for this project and we might use it someday in Quiet for public feeds. Timed deletion is the user requirement that drives me away from building Quiet on Nostr. Based on conversations I've had with users doing sensitive work (and based on my own experience as a founder of Fight for the Future) timed deletion is extremely important to team security, and for deletion to be meaningful one needs more control over where the data is relayed than what Nostr provides in the default mode. A group that wanted trustworthy timed deletion would have to control their own private Nostr relay. Technically, a Tor relay could subvert the timed deletion of some Quiet messages just by capturing all traffic, but this is much less of a worry.

    > If P2P is something that is truly desired, I feel like projects like Briar (https://briarproject.org/how-it-works/) have solved this with Bramble (https://code.briarproject.org/briar/briar-spec/blob/master/p...) more eloquently than it could be done on top of IPFS.

    Bramble could work for us and I would recommend that anyone look into it. Briar is probably the most similar thing to Quiet that exists right now. There are big differences between Quiet and Briar, but we could definitely build Quiet on Bramble if it adequately supports iOS. My worry would be its maturity as a tool for people building things other than Briar. That could be worth the risk though and I do recommend anyone else reading this thread look at Bramble if you are doing something similar.

    > I could nevertheless imagine it being overtaken fairly quickly by other projects sporting a rather lightweight and more managable basis, that allows for increased development speed and ultimately for faster iteration on features that users might wish for (e.g. DMs, @-mentions, message deletion, mobile clients, you-name-it) -- without the need to invest heavily into e.g. performance (or reliability!) issues of the underlying framework.

    This is definitely something we will keep an eye on, and thank you for the thoughtful advice! My guess is that as soon as we have a significant number of real users we will need to build things that don't happen to be supported by whatever stack we choose (whether that is our current stack, Bramble, Veilid, Automerge, etc.). So the question is which one is easiest to maintain and adapt. So far libp2p and IPFS have both been good in that department: implementations in many languages, active development, an absence of major problems showing signs of maturity (especially in libp2p), etc.

    Also, my 2 cents (for anyone following along) are that if I had to do this all over again I would use Tor + Libp2p + Automerge. Libp2p and Gossipsub are solid, flexible, and will be around a while. No need to reinvent the wheel. The conceptual frameworks behind Automerge and Briar/Bramble are pretty similar (sync state!), but the Automerge team exists to serve people building other apps, while the Bramble team mostly focuses on Briar AFAIK. What's nice about Automerge is that the community around it (Ink & Switch, Martin Kleppmann, and other academics) is all at the academic frontier, so the level of thought and anticipation of user needs that goes into their decisions is very thorough, even if the implementations lag behind the papers. If I were doing real-time text I would also look at the Briar project and Seph Gentle's work on Diamond Types, since that's where the most thought has gone into the raw performance you need for text CRDTs that can handle large documents: https://github.com/josephg/diamond-types

  • Elixir and Rust is a good mix
    10 projects | news.ycombinator.com | 13 Apr 2023
    But I think that's about it. Maybe there are more manually specified types in "normal" Rust code because most functions are smaller than that, but it doesn't feel so bad. In this case I could probably even remove the explicit type annotation for that queue definition if I wanted to, but leaving it in makes the compiler's errors better.

    [1] https://github.com/josephg/diamond-types/blob/66025b99dbe390...
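
    To make that trade-off concrete, here is a tiny, generic sketch of an explicitly annotated queue (an illustrative example, not the diamond-types code linked in [1]): the annotation could be dropped and inferred, but keeping it anchors type errors to the definition rather than to some later use.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // With the annotation, a wrong push is reported right here, at the
    // definition, with a clear message about the element type.
    let mut queue: BinaryHeap<Reverse<(usize, usize)>> = BinaryHeap::new();

    // Without it, inference still works, but a mismatch may only surface at
    // whatever later call happens to pin the element type down:
    // let mut queue = BinaryHeap::new();

    queue.push(Reverse((3, 10)));
    queue.push(Reverse((1, 20)));

    // Reverse turns the max-heap into a min-heap, so this pops (1, 20) first.
    while let Some(Reverse((time, id))) = queue.pop() {
        println!("t={time} id={id}");
    }
}
```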

  • Automerge 2.0
    11 projects | news.ycombinator.com | 30 Jan 2023
    diamond-types (for reference for others [0]) still only supports plain text, is that right? I was thinking of using it for more general use cases such as an offline habit tracker, which isn't text of course, but I was interested to hear more on the progress towards other data types such as generic JSON data.

    For this use case I've been using autosurgeon [1] so far, which has a nice Rust API for structs, even if it might be slower than yjs (or yrs, its Rust implementation) or diamond-types.

    [0] https://github.com/josephg/diamond-types

    [1] https://github.com/automerge/autosurgeon

  • You might not need a CRDT
    9 projects | news.ycombinator.com | 5 Dec 2022
    I'm working on a CRDT to solve this problem too[1]. How do you plan on implementing collaborative text editing on top of your event-reordering system? Off the top of my head I can't think of a way to implement text on your proposed system which would be performant and simple.

    [1] https://github.com/josephg/diamond-types

  • Generalizing coroutines - The Rust Language Design Team
    8 projects | /r/rust | 12 Jul 2022
    For example, this file implements a complex iterator via a struct and a really complex next() method. The file was about a third of the size before I manually rewrote it into a "continuation passing" style. I find it significantly harder to read and maintain in its current form.
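
    As a rough illustration of why such hand-rolled iterators get hard to read (a generic sketch, not the diamond-types file being described), here is the shape of the pattern: without generators or coroutines, multi-phase iteration logic has to be flattened into a state enum plus a next() that re-dispatches on every call.

```rust
// Generic illustration of a struct-based iterator with an explicit state machine.
enum Phase {
    Header,
    Items(usize),
    Done,
}

struct Report<'a> {
    items: &'a [i32],
    phase: Phase,
}

impl<'a> Iterator for Report<'a> {
    type Item = String;

    fn next(&mut self) -> Option<String> {
        loop {
            match self.phase {
                Phase::Header => {
                    self.phase = Phase::Items(0);
                    return Some(format!("report: {} items", self.items.len()));
                }
                Phase::Items(i) if i < self.items.len() => {
                    self.phase = Phase::Items(i + 1);
                    return Some(format!("item {}", self.items[i]));
                }
                Phase::Items(_) => self.phase = Phase::Done,
                Phase::Done => return None,
            }
        }
    }
}

fn main() {
    let report = Report { items: &[10, 20, 30], phase: Phase::Header };
    for line in report {
        println!("{line}");
    }
}
```

    A generator could express the same thing as straight-line code with yields; here every resumption point has to become an enum variant instead.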
  • WebAssembly 2.0 Working Draft
    21 projects | news.ycombinator.com | 19 Apr 2022
    > In this case, the bottleneck at 9 million LoC is not CPU cycles but memory usage. That's where I am considering pushing down into WebAssembly

    How often does this come up in practice? I can't think of many files I've opened which were 9 million lines long. And you say "LoC" (lines of code) - are you doing syntax highlighting on 9 million lines of source code in JavaScript? That's impressive!

    > I guess my point is why do you need balanced trees? Is this a CRDT specific thing? Can you implement CRDT with just an array of lines / gap buffer?

    Of course! It's just going to be slower. I made a simple reference implementation of Yjs, Automerge and Sync9's list types in JavaScript here [1]. This code is not optimized, and it takes 30 seconds to process an editing trace that diamond-types (in native Rust) takes 0.01 seconds to process. We could speed that up - yjs does the same thing in 1 second. But I don't think JavaScript will ever run as fast as optimized Rust code.

    The b-tree in diamond-types is used for merging. If you're merging two branches, we need to map insert locations from the incoming branch into positions in the target (merged) branch. As items are inserted, the mapping changes dynamically. The benchmark I've been using for this is how long it takes to replay (and re-merge) all the changes in the most-edited file in the nodejs git repository. That file has just shy of 1M single-character insert/delete operations. If you're curious, the causal graph of changes looks like this [2].

    Currently it takes 250ms to re-merge the entire causal graph. This is much slower than I'd like, but we can cache the merged positions in about 4kb on disk or something so we only need to do it once. I also want to replace the b-tree with a skip list. I think that'll make the code faster and smaller.

    A gap buffer in javascript might work ok... if you're keen, I'd love to see that benchmark. The code to port is here: [3]
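
    For readers unfamiliar with the data structure under discussion, here is a minimal gap-buffer sketch (in Rust rather than the requested JavaScript port, and unrelated to the linked benchmark code): the unused "gap" travels with the cursor, so edits near it don't shift the rest of the buffer.

```rust
// Minimal gap buffer over chars. The gap [gap_start, gap_end) is free space;
// the visible text is buf[..gap_start] followed by buf[gap_end..].
struct GapBuffer {
    buf: Vec<char>,
    gap_start: usize,
    gap_end: usize,
}

impl GapBuffer {
    fn new(capacity: usize) -> Self {
        GapBuffer { buf: vec!['\0'; capacity], gap_start: 0, gap_end: capacity }
    }

    // Slide the gap so it starts at `pos` (a position in the visible text).
    fn move_gap(&mut self, pos: usize) {
        while self.gap_start > pos {
            self.gap_start -= 1;
            self.gap_end -= 1;
            self.buf[self.gap_end] = self.buf[self.gap_start];
        }
        while self.gap_start < pos {
            self.buf[self.gap_start] = self.buf[self.gap_end];
            self.gap_start += 1;
            self.gap_end += 1;
        }
    }

    fn insert(&mut self, pos: usize, ch: char) {
        assert!(self.gap_start < self.gap_end, "gap full; a real impl would grow");
        self.move_gap(pos);
        self.buf[self.gap_start] = ch;
        self.gap_start += 1;
    }

    fn delete(&mut self, pos: usize) {
        self.move_gap(pos);
        self.gap_end += 1; // absorb the deleted char into the gap
    }

    fn text(&self) -> String {
        self.buf[..self.gap_start].iter().chain(&self.buf[self.gap_end..]).collect()
    }
}

fn main() {
    let mut gb = GapBuffer::new(32);
    for (i, ch) in "hello world".chars().enumerate() {
        gb.insert(i, ch);
    }
    gb.delete(5);      // remove the space
    gb.insert(5, '_');
    println!("{}", gb.text()); // hello_world
}
```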

    > Undo support -> In which case, you only have to stack / remember the set of commands and not have to store the state on every change. I'm not sure if this overlaps with the data structure choice, other than implementation details.

    Yeah, I basically never store a snapshot of the state. Not on every change. Not really at all. Everything involves sending around patches. But you can't just roll back the changes when you undo.

    Eg: I type "aaa" at position 0 (the start of the document). You type "bbb" at the start of the document. The document is now "bbbaaa". I hit undo. What should happen? Surely, we delete the "aaa" - now at position 3.

    Translating from position 0 to position 3 is essentially the same algorithm we need to run in order to merge.
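
    A heavily simplified sketch of that translation step (a generic illustration only; diamond-types does this against a causal graph using a b-tree, none of which is modelled here): shift a stored position past the concurrent inserts and deletes that were applied since it was recorded.

```rust
// Toy position translation: given the position an operation originally
// targeted, shift it past concurrent operations applied since then.
#[derive(Clone, Copy)]
enum Op {
    Insert { pos: usize, len: usize },
    Delete { pos: usize, len: usize },
}

fn transform_position(mut pos: usize, concurrent: &[Op]) -> usize {
    for op in concurrent {
        match *op {
            // An insert at or before our position pushes it to the right.
            Op::Insert { pos: p, len } if p <= pos => pos += len,
            // A delete strictly before our position pulls it to the left.
            Op::Delete { pos: p, len } if p < pos => pos -= len.min(pos - p),
            _ => {}
        }
    }
    pos
}

fn main() {
    // I typed "aaa" at position 0; you concurrently typed "bbb" at position 0.
    // The merged document is "bbbaaa", so undoing my "aaa" must delete at 3.
    let concurrent = [Op::Insert { pos: 0, len: 3 }];
    let undo_pos = transform_position(0, &concurrent);
    assert_eq!(undo_pos, 3);
    println!("delete 3 chars at position {undo_pos}");
}
```

    The tie-break for inserts at the exact same position is a policy choice; here the concurrent insert is assumed to sort first, matching the "bbbaaa" example above.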

    > I was just looking into TypedArrays.

    I tried optimizing a physics library a few years ago by putting everything in TypedArrays and it was weirdly slower than using raw JavaScript arrays. I have no idea why - but maybe that's fixed now.

    TypedArrays are useful, but they're no panacea. You could probably write a custom b-tree on top of a TypedArray in JavaScript if you really want to - assuming your data also fits into TypedArrays. But at that point you may as well just use wasm. It'll be way faster and more ergonomic.

    [1] https://github.com/josephg/reference-crdts

    [2] https://home.seph.codes/public/node_graph.svg

    [3] https://github.com/josephg/diamond-types/tree/master/src/lis...

  • I was wrong. CRDTs are the future
    4 projects | news.ycombinator.com | 16 Apr 2022
    Hi everyone! Author here. I'm happy to answer questions.

    I wrote this a couple of years ago. Since then I've been working on my own CRDT called Diamond Types [1], which uses a lot of these ideas to be bonkers fast. I've built several OT-based collaborative editing systems, and Diamond Types is much faster than any of them - though Rust and wasm might be the real MVPs here. I wrote a follow-up to this article last year when I got that working, talking about how some of the optimizations work. That article is here [2].

    A fair bit has changed since I wrote that article. Yjs has started a rewrite in Rust (called yrs [3]). And Automerge has apparently dramatically improved performance based on some of the ideas I talk about in this article. Oh, and Diamond Types has been rewritten from the ground up. It's now about 5x faster than it was last year, thanks to a completely changed internal structure. But that's a story for another day.

    Unfortunately I still only support collaborative text editing. Full JSON support is coming soon, after I document some more of the tricks I'm doing. It's really fun work!

    Why do I only support collaborative text editing? Because I care about performance, and text CRDT performance is hard because you have so many individual changes (one for each keystroke!). Making text editing fast means everything is fast. But we've still got to do the work. To make that happen, my plan is to add full JSON editing support to Diamond Types using Shelf [4]. Shelf is a super simple CRDT which fits in 100 lines of JavaScript.

    [1] https://github.com/josephg/diamond-types/

    [2] https://josephg.com/blog/crdts-go-brrr/

    [3] https://github.com/y-crdt/y-crdt/tree/main/yrs

    [4] https://github.com/dglittle/shelf
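
    To give a flavour of how small such CRDTs can be, here is a generic last-writer-wins register sketch in Rust. It is not Shelf's actual algorithm or anything from diamond-types; it only shows the converge-on-merge shape that these simple designs share.

```rust
// A generic last-writer-wins register: each replica keeps (version, replica_id,
// value); merge keeps whichever entry compares greater, with replica_id as the
// tie-break, so every replica converges to the same state.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister<T> {
    version: u64,
    replica_id: u64,
    value: T,
}

impl<T: Clone> LwwRegister<T> {
    fn new(replica_id: u64, value: T) -> Self {
        LwwRegister { version: 0, replica_id, value }
    }

    fn set(&mut self, value: T) {
        self.version += 1;
        self.value = value;
    }

    // Commutative, associative, idempotent: merging in any order, any number
    // of times, yields the same state on every replica.
    fn merge(&mut self, other: &LwwRegister<T>) {
        if (other.version, other.replica_id) > (self.version, self.replica_id) {
            *self = other.clone();
        }
    }
}

fn main() {
    let mut a = LwwRegister::new(1, "draft");
    let mut b = a.clone();
    b.replica_id = 2; // second replica, same starting state

    a.set("edited on A");
    b.set("edited on B"); // concurrent edit at the same version

    let (a_snapshot, b_snapshot) = (a.clone(), b.clone());
    a.merge(&b_snapshot);
    b.merge(&a_snapshot);
    assert_eq!(a, b);
    println!("converged on: {}", a.value);
}
```

    Text, of course, is exactly the case where such a coarse "whole value wins" rule is unacceptable, which is why text CRDTs carry so much more machinery.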

  • Conflict-Free Replicated Data Types (CRDT)
    4 projects | news.ycombinator.com | 10 Apr 2022
    Yep. I've done something very similar on top of Diamond Types for a little personal wiki. This page [1] is synced between all users who have it open. It's a remarkably small piece of code, outside of the CRDT library itself (which is in Rust, via wasm). The way it works is:

    - On page load, the server sends the whole CRDT document to the browser, and the server streams changes from that point onwards.

    - When a change happens in the browser, it makes that change locally and then sends anything the server doesn't know about upstream.

    - Whenever the server finds out about a new change, it re-broadcasts that change to any subscribed browser streams.

    I'm using the Braid HTTP protocol for changes - but we could easily switch to an SSE or websocket solution. It doesn't really matter.

    At the moment I'm just using flat files for storage, but there's nothing stopping you from using a database instead, except that it's a bit awkward to use efficient CRDT packing techniques in a database.

    [1] https://wiki.seph.codes/hn

    Code is here, if anyone is interested. The whole thing is a few hundred lines all up: https://github.com/josephg/diamond-types/tree/0cb5d7ecf49364...
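
    A minimal sketch of the relay flow described in the bullets above (a generic illustration only: plain Rust threads and channels stand in for browsers and HTTP, and a toy op type stands in for diamond-types patches). The server holds an append-only op log, a connecting client gets the full log plus a live stream, and every submitted op is appended and fanned out.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::{Arc, Mutex};
use std::thread;

// One editing operation. The real system ships CRDT patches; a plain
// (position, text) insert is enough to show the relay flow.
#[derive(Clone, Debug)]
struct Op {
    pos: usize,
    text: String,
}

#[derive(Default)]
struct Server {
    log: Vec<Op>,                 // append-only history ("the whole document")
    subscribers: Vec<Sender<Op>>, // live streams to connected clients
}

// A client connects: it gets everything so far, plus a stream of what follows.
fn connect(server: &Arc<Mutex<Server>>) -> (Vec<Op>, Receiver<Op>) {
    let (tx, rx) = channel();
    let mut s = server.lock().unwrap();
    s.subscribers.push(tx);
    (s.log.clone(), rx)
}

// A client submits an op it made locally: append it and re-broadcast it.
fn submit(server: &Arc<Mutex<Server>>, op: Op) {
    let mut s = server.lock().unwrap();
    s.log.push(op.clone());
    s.subscribers.retain(|sub| sub.send(op.clone()).is_ok());
}

fn main() {
    let server = Arc::new(Mutex::new(Server::default()));

    let (snapshot, updates) = connect(&server);
    println!("client connected with {} historical ops", snapshot.len());

    // Another "browser" makes a local change and pushes it upstream.
    let s2 = Arc::clone(&server);
    thread::spawn(move || {
        submit(&s2, Op { pos: 0, text: "hello".into() });
    })
    .join()
    .unwrap();

    // Our first client receives the re-broadcast change.
    let op = updates.recv().unwrap();
    println!("received update: insert {:?} at {}", op.text, op.pos);
}
```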

  • Writing Redux Reducers in Rust
    3 projects | /r/rust | 6 Apr 2022
    With each change we just send the missing operations. See https://wiki.seph.codes/reddit if you want to mess around and see it in action via wasm. The code which runs this wiki is here.
  • Investigating Memory Allocations in Rust
    2 projects | /r/rust | 15 Jan 2022
    Another way to trace allocations in Rust is to inject some code into a global allocator. Then you can use any in-program code you like to print, track, or trace allocations. For example, I wrote this code in a library I'm working on so I can track and print out how many total bytes have been allocated and how many allocation calls have been made.
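
    A minimal sketch of that counting-allocator pattern (a generic illustration, not the library code being referenced): wrap the system allocator, bump atomic counters on every allocation, and read the counters from anywhere in the program.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wrap the system allocator and count every allocation that goes through it.
struct CountingAllocator;

static TOTAL_BYTES: AtomicUsize = AtomicUsize::new(0);
static TOTAL_CALLS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        TOTAL_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        TOTAL_CALLS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static ALLOCATOR: CountingAllocator = CountingAllocator;

fn main() {
    let before = (TOTAL_BYTES.load(Ordering::Relaxed), TOTAL_CALLS.load(Ordering::Relaxed));

    // The region of interest: counters are global, so snapshotting around it
    // keeps runtime and printing allocations out of the measurement.
    let v: Vec<u64> = (0..1_000).collect();
    drop(v);

    let after = (TOTAL_BYTES.load(Ordering::Relaxed), TOTAL_CALLS.load(Ordering::Relaxed));
    println!(
        "allocated {} bytes across {} calls",
        after.0 - before.0,
        after.1 - before.1
    );
}
```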

What are some alternatives?

When comparing axon and diamond-types you can also consider the following projects:

nx - Multi-dimensional arrays (tensors) and numerical definitions for Elixir

crdt-benchmarks - A collection of CRDT benchmarks

livebook - Automate code & data workflows with interactive Elixir notebooks

y-crdt - Rust port of Yjs

explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir

dotted-logootsplit - A delta-state block-wise sequence CRDT

dplyr - dplyr: A grammar of data manipulation

teletype-crdt - String-wise sequence CRDT powering peer-to-peer collaborative editing in Teletype for Atom.

explorer - An open source block explorer

comic-shanns - a classy font

fen_gen - Generate Forsyth-Edwards notation from chess board images

automerge - A JSON-like data structure (a CRDT) that can be modified concurrently by different users, and merged again automatically.