Turing Machines Are Recurrent Neural Networks (1996)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • xformers

    Hackable and optimized Transformers building blocks, supporting a composable construction.

  • In 2016 Transformers didn't exist, and the state of the art for neural-network-based NLP was LSTMs, which could handle a context of perhaps 100 words at most.

    With new implementations like xformers[1] and flash attention[2] it is unclear where the length limit is on modern transformer models.

    Flash Attention can currently scale up to 64,000 tokens on an A100.

    [1] https://github.com/facebookresearch/xformers/blob/main/HOWTO...

    [2] https://github.com/HazyResearch/flash-attention
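The length limit the comment refers to comes from standard attention materializing the full score matrix, whose memory grows quadratically with sequence length. The sketch below is a minimal NumPy illustration of that cost (not the actual xformers or Flash Attention implementation, which avoid materializing the matrix); the sizes are hypothetical:

```python
import numpy as np

def naive_attention(q, k, v):
    # Standard attention materializes the full (n, n) score matrix,
    # so memory grows quadratically with sequence length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

n, d = 512, 64  # hypothetical sequence length and head dimension
q = k = v = np.random.rand(n, d).astype(np.float32)
out = naive_attention(q, k, v)
assert out.shape == (n, d)

# The (n, n) matrix is the bottleneck: at n = 64,000 it alone takes
# 64000**2 * 2 bytes ≈ 8.2 GB in fp16 per head, which is why
# Flash Attention recomputes scores in tiles instead of storing them.
```

This is why the practical limit is unclear: it depends on whether the implementation pays the quadratic memory cost or trades it for recomputation, as Flash Attention does.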

  • flash-attention

    Fast and memory-efficient exact attention


