- RWKV-LM
RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
I think the main concern is that the resources poured into LLM research, finding new ways to refine and improve these models, can then be used by projects that go the extra mile and create things that are more than just LLMs. For example, RWKV is similar to an LLM, but it updates its internal recurrent state after every processed token, letting it remember things longer-term without relying on 'context tokens'.
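To make the contrast concrete, here is a minimal toy sketch (not the actual RWKV math, and the `decay` update rule is a hypothetical stand-in) of the RNN-style idea the comment describes: a fixed-size state is updated after every token, so memory cost does not grow with sequence length the way a transformer's context window does.

```python
# Toy illustration of an RNN-style recurrent state, as contrasted with a
# transformer's growing context window. This is NOT the real RWKV update;
# the exponential-decay blend below is a hypothetical stand-in.

def rnn_step(state, token_embedding, decay=0.9):
    """Blend the new token into the running state (hypothetical rule)."""
    return [decay * s + (1 - decay) * x for s, x in zip(state, token_embedding)]

def process_sequence(token_embeddings, state_size=4):
    state = [0.0] * state_size
    for emb in token_embeddings:
        state = rnn_step(state, emb)
    # The state is the same size no matter how long the input was.
    return state

tokens = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
final_state = process_sequence(tokens)
print(len(final_state))  # prints 4: memory stays fixed-size
```

A transformer, by contrast, would re-attend over all previous token embeddings at each step, so its per-step memory grows linearly with the sequence.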
Related posts
- Do LLMs need a context window?
- Paving the way to efficient architectures: StripedHyena-7B
- Understanding Deep Learning
- Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
- "If you see a startup claiming to possess top-secret results leading to human-level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories that "X company has reached AGI in secret"