spotcast
dalai
spotcast
-
Meet Atom the GPT Assistant, an AI-powered smart home assistant. It's like Google Assistant but with the endless possibilities of ChatGPT; it's like Siri but with the extensibility of open source.
Yes, the GPT Assistant relies on Home Assistant, where you can use the official Spotify integration plus spotcast to play a specific Spotify playlist on a specific device (say, a speaker that supports Chromecast).
-
Has anyone managed to cast a Spotify playlist?
Use this custom component in Home Assistant to cast to a Spotify-connected device: https://github.com/fondberg/spotcast
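As a sketch of what that looks like in practice: spotcast exposes a `spotcast.start` service you can call from a script or automation. The device name and playlist URI below are placeholders, and the exact service schema should be checked against the spotcast README.

```yaml
# Illustrative Home Assistant script using spotcast (names are placeholders).
script:
  play_spotify_playlist:
    sequence:
      - service: spotcast.start
        data:
          device_name: "Living Room Speaker"        # Chromecast device name
          uri: "spotify:playlist:YOUR_PLAYLIST_ID"  # any Spotify URI
          random_song: true
```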
-
Spotify on HomeAssistant
According to https://github.com/fondberg/spotcast/issues/170 that issue was fixed a while back, but I'm sure I've seen it since. Just noticed I had an update pending for spotcast, so make sure it's up to date.
- https://np.reddit.com/r/homeassistant/comments/ock207/my_work_in_progress_copied_from_others/h3vs2n2/
-
Spotify integration - running on iMac
Ah okay. Well spotcast should meet those requirements perfectly 😃
-
Scheduled Spotify to Google Home Playlist
And Spotcast: https://github.com/fondberg/spotcast
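A minimal sketch of what that scheduled setup might look like, assuming spotcast's `spotcast.start` service; the time, device name, and playlist URI are illustrative placeholders:

```yaml
# Illustrative Home Assistant automation: play a playlist on a schedule.
automation:
  - alias: "Morning Spotify on Google Home"
    trigger:
      - platform: time
        at: "07:30:00"
    action:
      - service: spotcast.start
        data:
          device_name: "Kitchen Google Home"        # Chromecast target
          uri: "spotify:playlist:YOUR_PLAYLIST_ID"
```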
dalai
-
Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.
Though there are some efforts:
https://github.com/cocktailpeanut/dalai
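The RAM question above can be roughed out with back-of-the-envelope math: a 4-bit quantized LLM needs about half a byte per weight, plus some working overhead. The 1.2 overhead factor here is an assumption for illustration, not a measured figure:

```python
def quantized_model_ram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Rough RAM estimate (GB) for a quantized model's weights alone."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# Ballpark figures for common LLaMA sizes at 4-bit quantization.
for size in (7, 13, 30, 65):
    print(f"{size}B @ 4-bit: ~{quantized_model_ram_gb(size):.1f} GB")
```

By this estimate a 7B model fits comfortably in ordinary desktop RAM, which matches the experience reported in the quotes below.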
-
Meta to release open-source commercial AI model
If you're just looking to play with something locally for the first time, this is the simplest project I've found and has a simple web UI: https://github.com/cocktailpeanut/dalai
It works for 7B/13B/30B/65B LLaMA and Alpaca (fine-tuned LLaMA which definitely works better). The smaller models at least should run on pretty much any computer.
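For reference, getting dalai running looks roughly like this per its README at the time; it requires Node.js, downloads several GB of model weights, and the commands may have changed since:

```shell
# Download and set up the smallest Alpaca model.
npx dalai alpaca install 7B

# Start the web UI, then open http://localhost:3000 in a browser.
npx dalai serve
```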
- How can I run a large language model locally?
- meirl
-
FreedomGPT: AI with no censorship
I am not against easy-mode options, dude; for example, I used to run GANs through the command line and replaced them with Upscayl when I found it. Convenience is king, after all. Something about this one isn't right, though. They advertise it as a model they built, while their own GitHub shows it to be a frontend for LLaMA. Why aren't they honest about it? Why use bots to spam about it? This makes me distrust that the executable they share is a 1-to-1 compilation of the source code, too. I would still recommend looking for more decent alternatives. By the way, running it directly isn't that complicated.
-
Google removes the waitlist on Bard today and will be available in 180 more countries
- https://github.com/ggerganov/llama.cpp
- https://github.com/oobabooga/text-generation-webui
- https://github.com/mlc-ai/mlc-llm
- https://github.com/cocktailpeanut/dalai
- https://github.com/ido-pluto/catai (this is super easy to install but it doesn't provide an API or have integration with LangChain)
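Of those, llama.cpp is the lowest-level option. A rough quick-start sketch follows; the binary and flag names have shifted across llama.cpp versions, and the model path is a placeholder you must supply by converting or downloading a quantized model separately:

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make

# Run a quantized model (path is a placeholder, not shipped with the repo).
./main -m models/7B/ggml-model-q4_0.bin -p "Hello," -n 64
```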
-
ChatGPT Data Breach BreakDown - Why it Should be a Concern for Everyone!
This was easy to get running: https://github.com/cocktailpeanut/dalai with Alpaca 13B (on my 16 GB of RAM)
-
A brief history of LLaMA models
I had it running before with Dalai (https://github.com/cocktailpeanut/dalai) but have since moved to using the browser based WebGPU method (https://mlc.ai/web-llm/) which uses Vicuna 7B and is quite good.
-
Meet Atom the GPT Assistant, an AI-powered smart home assistant. It's like Google Assistant but with the endless possibilities of ChatGPT; it's like Siri but with the extensibility of open source.
https://github.com/nsarrazin/serge lets you pick which model and runs in a container. For an API, https://github.com/cocktailpeanut/dalai looks super promising.
- Mercredi Tech - 2023-04-26
What are some alternatives?
mini-media-player - Minimalistic media card for Home Assistant Lovelace UI
gpt4all - gpt4all: run open-source LLMs anywhere
spotify-card - Spotify playlist card for Home Assistant Lovelace UI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lovelace-layout-card - 🔹 Get more control over the placement of lovelace cards.
llama - Inference code for Llama models
hass-browser_mod - 🔹 A Home Assistant integration to turn your browser into a controllable entity and media player
alpaca-lora - Instruct-tune LLaMA on consumer hardware
PlexMeetsHomeAssistant - Custom card which integrates plex into Home Assistant and makes it possible to launch movies or tv shows on TV with a simple click
llama.cpp - LLM inference in C/C++
mini-graph-card - Minimalistic graph card for Home Assistant Lovelace UI
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.