heinsen_routing VS safari

Compare heinsen_routing vs safari and see what their differences are.

heinsen_routing

Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks. (by glassroom)

safari

Convolutions for Sequence Modeling (by HazyResearch)
                  heinsen_routing       safari
Mentions          7                     5
Stars             160                   841
Growth            0.0%                  1.2%
Activity          2.7                   3.5
Latest commit     about 1 year ago      about 1 month ago
Language          Python                Assembly
License           MIT License           Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

heinsen_routing

Posts with mentions or reviews of heinsen_routing. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
  • What can LLMs never do?
    4 projects | news.ycombinator.com | 27 Apr 2024
    At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups https://github.com/lucidrains/memorizing-transformers-pytorc... or via routed queries with https://github.com/glassroom/heinsen_routing . Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
  • A Surprisingly Effective Way to Estimate Token Importance in LLM Prompts
    1 project | news.ycombinator.com | 12 Sep 2023
    Simple and, in hindsight, obvious:

    1. Run the text through a document embedding model and save the embedding.

    2. Remove one token at a time, and compute the cosine similarity of the new document embedding to the original one.

    3. Compute importance as a function of the change in cosine similarity.

    Nice.

    Also check out https://github.com/glassroom/heinsen_routing . It takes n embeddings and outputs m embeddings, and also gives you an n×m matrix with credit assignments, without having to remove tokens one by one, which can be prohibitively slow for long texts.
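
    A minimal sketch of the leave-one-out procedure described in the steps above, assuming the sentence-transformers package and whitespace splitting as a crude stand-in for the embedding model's real tokenizer (both are assumptions, not details from the post):

      # Sketch only: estimate token importance by re-embedding the text with
      # one token removed at a time (steps 1-3 above). The model name and the
      # whitespace "tokenizer" are illustrative assumptions.
      import numpy as np
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def token_importance(text):
          tokens = text.split()              # crude stand-in for real tokenization
          base = model.encode(text)          # step 1: embed the full document
          importances = []
          for i in range(len(tokens)):
              ablated = " ".join(tokens[:i] + tokens[i + 1:])
              sim = cosine(base, model.encode(ablated))   # step 2: re-embed without token i
              importances.append(1.0 - sim)               # step 3: importance = similarity drop
          return list(zip(tokens, importances))

      for token, score in token_importance("The quick brown fox jumps over the lazy dog"):
          print(f"{token}\t{score:.4f}")

    As the comment notes, this needs one embedding call per token, which is what makes the leave-one-out approach slow for long texts.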

  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.

  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have github repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
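
    To make the n² vs. n×m distinction in the list above concrete, here is a rough back-of-the-envelope sketch in plain PyTorch; the sequence length, slot count, and head dimension are made-up numbers, and matmul multiply-adds are used as a crude cost proxy:

      # Rough cost comparison: full n x n attention scores vs. an n x m
      # interaction (e.g. n token embeddings against m learned queries/slots).
      # All shapes here are illustrative assumptions.
      import torch

      n, m, d = 100_000, 256, 64        # sequence length, output slots, head dim

      tokens = torch.randn(n, d)
      slots = torch.randn(m, d)

      # n x n scores: n * n entries and ~n * n * d multiply-adds -- too big to
      # materialize here (a float32 matrix of 10^10 entries is ~40 GB).
      print(f"n x n: {n * n:,} entries, ~{n * n * d / 1e12:.2f} T multiply-adds")

      # n x m scores: small enough to compute directly even for very long sequences.
      scores_nm = tokens @ slots.T       # shape (n, m)
      print(f"n x m: {n * m:,} entries, ~{n * m * d / 1e9:.2f} G multiply-adds")
      print(scores_nm.shape)             # torch.Size([100000, 256])

    That gap is why the comment treats n×m interactions as complementary to, not a replacement for, full n² attention.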

  • From Deep to Long Learning
    6 projects | news.ycombinator.com | 9 Apr 2023
    I imagine you could, maybe by using something like this https://github.com/glassroom/heinsen_routing#sequence-to-vec... ... but I doubt you'd be able to match the training efficiency of triangular masking in auto-regressive transformers. With routing, you'd have to train the model one time-step at a time, instead of all time-steps in parallel like a masked auto-regressive transformer.
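
    For context, the triangular masking mentioned above looks roughly like this in PyTorch; it is what lets a masked auto-regressive transformer train on every next-token prediction in a single forward pass (a generic sketch, not code from either repository):

      # Generic causal-masking sketch: position i can only attend to positions <= i,
      # so all n prefixes are scored in one parallel forward pass.
      import torch
      import torch.nn.functional as F

      n, d = 8, 16                                  # toy sequence length and width
      q = torch.randn(1, n, d)
      k = torch.randn(1, n, d)
      v = torch.randn(1, n, d)

      scores = q @ k.transpose(-2, -1) / d ** 0.5   # (1, n, n) pairwise scores
      future = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
      scores = scores.masked_fill(future, float("-inf"))
      out = F.softmax(scores, dim=-1) @ v           # (1, n, d): one pass, all time steps

      # A sequence-to-vector routing step would instead need a separate forward
      # pass per prefix, which is the training-efficiency gap described above.
      print(out.shape)
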
  • New algorithm can route sequences with 1M+ token embeddings in one GPU
    1 project | news.ycombinator.com | 20 Dec 2022

safari

Posts with mentions or reviews of safari. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-05.
  • MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers
    1 project | news.ycombinator.com | 29 Nov 2023
    > Also, we know that transformers can scale

    Do we have strong evidence that other models don't scale or have we just put more time into transformers?

    Convolutional resnets look to scale on vision and language: (cv) https://arxiv.org/abs/2301.00808, (cv) https://arxiv.org/abs/2110.00476, (nlp) https://github.com/HazyResearch/safari

    MLPs also seem to scale: (cv) https://arxiv.org/abs/2105.01601, (cv) https://arxiv.org/abs/2105.03404

    I mean, I don't see a strong reason to turn away from attention either, but I also don't think anyone's thrown a billion-parameter MLP or Conv model at a problem. We've put a lot of work into attention, transformers, and scaling these. Thousands of papers each year! We definitely don't see that for other architectures. The ResNet Strikes Back paper is great for one reason in particular: it reminds us all not to get lost in the hype, and that our advancements are coupled. We've learned a lot of training techniques since the original ResNet days, and pushing those onto ResNets makes them a lot better too and really closes the gap. At least in vision (where I research). It is easy to get railroaded in research where we have publish-or-perish and hype-driven reviewing.

  • Unlimiformer: Long-Range Transformers with Unlimited Length Input
    3 projects | news.ycombinator.com | 5 May 2023
    After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.

    I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.

  • How big a breakthrough is this "Hyena" architecture?
    1 project | /r/ArtificialInteligence | 28 Apr 2023
  • Hyena: This new technology could blow away GPT-4 and everything like it
    1 project | /r/u_waynerad | 26 Apr 2023
    Code: https://github.com/HazyResearch/safari
  • Scaling Transformer to 1M tokens and beyond with RMT
    6 projects | news.ycombinator.com | 23 Apr 2023
    The code is here: https://github.com/hazyresearch/safari . You should try it and let us know your verdict.

What are some alternatives?

When comparing heinsen_routing and safari you can also consider the following projects:

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

flash-attention - Fast and memory-efficient exact attention

TruthfulQA - TruthfulQA: Measuring How Models Imitate Human Falsehoods

block-recurrent-transformer-pytorch - Implementation of Block Recurrent Transformer - Pytorch

iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.

recurrent-memory-transformer - [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture.
