-
InvokeAI
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
-
InfluxDB
Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.
-
hn_summary
Summarizes top stories from Hacker News using a large language model and posts them to a Telegram channel.
In case it interests anyone, I built a document editor + GPT implementation and updated it to use text-davinci-003: https://github.com/typpo/arkose/
If you don't have an OpenAI API key, I've set up a quick demo here for people to give it a try, at least until I hit my billing cap (normally users would supply their own API key): https://arkose.pages.dev/
No, I hadn't. Thanks, I'll check it out! Of course, the open-source nature of SD is really nice, as the community[0] keeps pumping out usability improvements and features, like the Pokemon-trained model[1], which is an absolute blast for my kid!
[0]: https://github.com/invoke-ai/InvokeAI
[1]: https://huggingface.co/justinpinkney/pokemon-stable-diffusio...
The biggest barrier to this is the hardware requirements. I saw an estimate on r/machinelearning that, based on the parameter count, GPT-3 needs around 350 GB of VRAM. Maybe you could cut that in half, or even to one-eighth if someone figures out some crazy quantization scheme, but it's still firmly outside the realm of consumer hardware right now.
Stuff like KoboldAI can let you run smaller models on your hardware, though (https://github.com/KoboldAI/KoboldAI-Client).
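For what it's worth, the arithmetic behind that 350 GB figure is easy to check. A rough sketch (the `vram_estimate_gb` helper is just for illustration; it counts only the weights, ignoring activations and optimizer state, and assumes GPT-3's published ~175B parameter count):

```python
def vram_estimate_gb(n_params: float, bits_per_param: int) -> float:
    """Memory (in GB) needed just to hold the model weights."""
    return n_params * bits_per_param / 8 / 1e9

params = 175e9  # GPT-3's published parameter count

# 16-bit (fp16) weights give the 350 GB figure cited above:
print(vram_estimate_gb(params, 16))  # 350.0

# "cut that in half" corresponds to 8-bit quantization:
print(vram_estimate_gb(params, 8))   # 175.0

# "one-eighth" of 350 GB would mean ~2 bits per weight:
print(vram_estimate_gb(params, 2))   # 43.75
```

So even a very aggressive 2-bit scheme would still need ~44 GB for the weights alone, which is beyond any single consumer GPU today.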