JARVIS
dalai
JARVIS
- FLaNK Stack 26 February 2024
-
Overview: AI Assembly Architectures
Jarvis: github.com/microsoft/JARVIS
-
When will we get JARVIS?
You can build it yourself now. https://github.com/microsoft/JARVIS
- How to build the Geth (networked intelligence, decentralized AGI)
-
Off-topic: What NVIDIA GPU do I need to run privateGPT or Alpaca-Lora for code translations, debugging, unit tests, etc?
https://github.com/microsoft/JARVIS (the readme says >=24GB VRAM)
-
Apple announces Apple Silicon Mac Pro powered by M2 Ultra
It can be. There are projects that run fully locally, like Microsoft's JARVIS: https://github.com/microsoft/JARVIS
-
April 2023
JARVIS, a system to connect LLMs with ML community (https://github.com/microsoft/JARVIS)
- Nvidia's GH200 AI supercomputers could build 'giant' AI models more powerful than GPT-4
-
A Lightweight HuggingGPT Implementation w/ Langchain + Thoughts on Why JARVIS Fails to Deliver
HuggingGPT is a clever idea to boost the capabilities of LLM Agents, and enable them to solve “complicated AI tasks with different domains and modalities”. In short, it uses ChatGPT to plan tasks, select models from Hugging Face (HF), format inputs, execute each subtask via the HF Inference API, and summarise the results. JARVIS tries to generalise this idea, and create a framework to “connect LLMs with the ML community”, which Microsoft Research claims “paves a new way towards advanced artificial intelligence”.
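The control flow described above is easy to sketch. The snippet below is a minimal illustration under stated assumptions, not the JARVIS/HuggingGPT code itself: the JSON plan format, the `chat` helper, and letting the planner pick model IDs directly are simplifications, and the Hugging Face Inference API is called with plain `requests` rather than LangChain.

```python
# Minimal sketch of the plan -> select -> execute -> summarise loop described
# above. Assumptions: openai>=1.0 for the planner/summariser, a JSON plan
# format invented for illustration, and direct calls to the HF Inference API.
import json
import requests
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
HF_TOKEN = "hf_..."        # placeholder Hugging Face API token

def chat(prompt: str) -> str:
    """One round-trip to the planning/summarising LLM."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_hf_task(model_id: str, inputs) -> object:
    """Execute a single subtask on the Hugging Face Inference API."""
    r = requests.post(
        f"https://api-inference.huggingface.co/models/{model_id}",
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": inputs},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()

def hugging_gpt(user_request: str) -> str:
    # 1) Task planning: ask the LLM to break the request into subtasks.
    #    (The real system uses a far more careful prompt and a separate
    #    model-selection step; here the planner names the HF model directly.)
    plan = json.loads(chat(
        'Split this request into a JSON list of steps, each an object with '
        f'"model_id" and "inputs" keys. Request: {user_request}'
    ))
    # 2) Task execution via the HF Inference API.
    results = [run_hf_task(step["model_id"], step["inputs"]) for step in plan]
    # 3) Response generation: summarise the intermediate results.
    return chat(f"Summarise these results for the user: {json.dumps(results)}")
```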
- Edit videos through intuitive ChatGPT conversations
dalai
-
Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.
Though there are some efforts:
https://github.com/cocktailpeanut/dalai
-
Meta to release open-source commercial AI model
If you're just looking to play with something locally for the first time, this is the simplest project I've found and has a simple web UI: https://github.com/cocktailpeanut/dalai
It works for 7B/13B/30B/65B LLaMA and Alpaca (fine-tuned LLaMA which definitely works better). The smaller models at least should run on pretty much any computer.
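To put the "pretty much any computer" claim in rough numbers: dalai runs 4-bit-quantized llama.cpp/alpaca.cpp weights, so a back-of-the-envelope estimate looks like the sketch below (the parameter counts and the ~0.5 bytes per weight figure are approximations, not dalai's published requirements).

```python
# Rough RAM needed just for the 4-bit-quantized weights; real usage adds
# overhead for the KV cache and runtime. Estimates only, not measurements.
PARAMS_BILLIONS = {"7B": 6.7, "13B": 13.0, "30B": 32.5, "65B": 65.2}

for name, billions in PARAMS_BILLIONS.items():
    approx_gib = billions * 1e9 * 0.5 / 2**30   # ~4 bits = 0.5 bytes per weight
    print(f"{name}: ~{approx_gib:.1f} GiB")
# ~3.1 / 6.1 GiB for 7B/13B fits a typical laptop; 30B/65B (~15 / 30 GiB)
# need a machine with considerably more memory.
```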
- How can I run a large language model locally?
- meirl
-
FreedomGPT: AI with no censorship
I am not against easy-mode options, dude; for example, I used to run GANs through the command line and replaced them with Upscayl when I found it. Convenience is king, after all. Something about this one isn't right, though. They advertise it as a model they built, while their own GitHub shows it to be a frontend for LLaMA. Why aren't they honest about it? Why use bots to spam about it? This also makes me distrust that the executable they share is a 1-to-1 compilation of the source code. I would still recommend looking for better alternatives. Btw, running it directly isn't that complicated.
-
Google removes the waitlist on Bard today and will be available in 180 more countries
https://github.com/ggerganov/llama.cpp
https://github.com/oobabooga/text-generation-webui
https://github.com/mlc-ai/mlc-llm
https://github.com/cocktailpeanut/dalai
https://github.com/ido-pluto/catai (this is super easy to install, but it doesn't provide an API or have langchain integration)
-
ChatGPT Data Breach BreakDown - Why it Should be a Concern for Everyone!
This was easy to get running: https://github.com/cocktailpeanut/dalai with Alpaca 13B (on my 16GB of RAM)
-
A brief history of LLaMA models
I had it running before with Dalai (https://github.com/cocktailpeanut/dalai) but have since moved to the browser-based WebGPU method (https://mlc.ai/web-llm/), which uses Vicuna 7B and is quite good.
-
Meet Atom the GPT Assistant, an AI-powered smart home assistant. It's like Google Assistant but with the endless possibilities of ChatGPT, and like Siri but with the extensibility of open source.
https://github.com/nsarrazin/serge lets you pick which model to use and runs in a container. For an API, https://github.com/cocktailpeanut/dalai looks super promising.
- Mercredi Tech - 2023-04-26
What are some alternatives?
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
gpt4all - gpt4all: run open-source LLMs anywhere
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
babyagi
llama - Inference code for Llama models
alpaca-lora - Instruct-tune LLaMA on consumer hardware
visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]
llama.cpp - LLM inference in C/C++
botpress - The open-source hub to build & deploy GPT/LLM Agents ⚡️
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.