| | MemGPT | mlx-examples |
|---|---|---|
| Mentions | 15 | 31 |
| Stars | 9,252 | 5,038 |
| Growth | - | 7.8% |
| Activity | 9.9 | 9.7 |
| Last commit | 7 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
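The recency weighting described above can be illustrated with a small sketch. The site does not publish its exact formula, so the exponential-decay weighting and the half-life value below are assumptions, not the real scoring rule:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days`, so recent commits count
    more than older ones. The decay shape and half-life are assumptions;
    the comparison site's actual formula is not public."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with recent commits outscores one whose commits are old,
# even with the same commit count.
recent = activity_score([1, 3, 5, 10])
stale = activity_score([100, 120, 150, 200])
```

With this shape, a commit made today contributes close to 1.0 and a commit from several months ago contributes almost nothing, which matches the description that recent commits carry higher weight.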
MemGPT
- AI21 Labs Unveils Jamba: The First Production-Grade Mamba-Based AI Model
  On a side note: working over longer contexts also reminds me of MemGPT (https://github.com/cpacker/MemGPT)
- FLaNK Weekly 18 Dec 2023
- At this point we don't necessarily need larger context windows; we need better truncation. The MemGPT project is taking on this challenge.
- Putting Together the Pieces of Transformative AI
  Long Term Memory - Voyager, MemGPT and LongMem
- [R] MemGPT: Towards LLMs as Operating Systems - UC Berkeley 2023 - Is able to create unbounded/infinite LLM context!
  Blog: https://memgpt.ai/
- MemGPT: Towards LLMs as Operating Systems - UC Berkeley 2023 - Is able to create unbounded/infinite LLM context!
  Github: https://github.com/cpacker/MemGPT
- MemGPT – LLMs with self-editing memory for unbounded context
  Hey all, MemGPT authors here! Happy to answer any questions about the implementation.
  If you want to try it out yourself, we have a Discord bot up and running on the MemGPT server (https://discord.gg/9GEQrxmVyE) where you can see the memory editing in action: as you chat, you'll see MemGPT update its profile about you (and itself).
  Everything's open source, so you can also try running MemGPT locally using the code here: https://github.com/cpacker/MemGPT. In the repo we also have a document-focused example where you can chat with MemGPT about the LlamaIndex API docs.
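The self-editing memory the authors describe, where the model updates a profile about the user (and itself) as the chat proceeds, can be sketched in plain Python. The class and method names below are hypothetical and are not the MemGPT API; in the real system the LLM itself issues these edits as function calls:

```python
class CoreMemory:
    """Toy sketch of MemGPT-style self-editing memory: a small,
    always-in-context block the agent can rewrite through edit
    operations. Names here are hypothetical, not the MemGPT API."""

    def __init__(self):
        # MemGPT's core memory distinguishes a section about the
        # agent itself ("persona") from one about the user ("human").
        self.sections = {"persona": "", "human": ""}

    def append(self, section, text):
        # Record a new fact in a section, as the model would when it
        # learns something about the user mid-conversation.
        self.sections[section] = (self.sections[section] + " " + text).strip()

    def replace(self, section, old, new):
        # Revise an existing fact in place.
        self.sections[section] = self.sections[section].replace(old, new)

    def render(self):
        # Rendered into every prompt, so the model always sees the
        # current profile regardless of how long the chat gets.
        return "\n".join(f"[{k}] {v}" for k, v in self.sections.items())


memory = CoreMemory()
memory.append("human", "Name: Alice. Prefers concise answers.")
memory.replace("human", "Alice", "Alice (she/her)")
```

The point of the design is that the profile lives outside the chat transcript: the transcript can be truncated freely while the edited memory block persists in context.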
- MemGPT – a combination of OS and GPT
mlx-examples
- MLX-Whisper
- FLaNK AI Weekly for 29 April 2024
- DBRX on Apple MLX
- Why the M2 is more advanced than it seemed
- MLX: Speculative Decoding
- Mixtral on MLX
- Qwen on MLX
- FLaNK Weekly 18 Dec 2023
- MLX: Fine-tune Llama 7B or Mistral 7B with 32GB
- Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
  I was able to get it running on MLX on my M2 Max machine within a couple of minutes using their example: https://github.com/ml-explore/mlx-examples/tree/main/whisper
What are some alternatives?
llama.cpp - LLM inference in C/C++
llama-cpp-python - Python bindings for llama.cpp
tidybot - TidyBot: Personalized Robot Assistance with Large Language Models
cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote
LongMem - Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
FLaNK-OpenAi - Chat
Efficient-LLMs-Survey - Efficient Large Language Models: A Survey
furnace - a multi-system chiptune tracker compatible with DefleMask modules
FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
FLaNK-ContinuousSQL
LLMCompiler - [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling
mlx - MLX: An array framework for Apple silicon