discourse-ai vs get-beam
| | discourse-ai | get-beam |
|---|---|---|
| Mentions | 2 | 8 |
| Stars | 54 | 88 |
| Growth | - | - |
| Activity | 9.7 | 7.9 |
| Latest Commit | 4 days ago | 12 days ago |
| Language | Ruby | Shell |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
discourse-ai
-
Show HN: I scraped all of OpenAI's Community Forum
That's super cool, thanks for sharing! I will share this as an easy-to-follow example of what we can do with AI.
> Allowing a Q&A interface using these embeddings over the post contents could speed up research over the community posts (if you know the right questions to ask :P). Let's view some posts similar to this one complaining about function calling
That's indeed a great thing to surface, and it's exactly how the OpenAI forum selects the "Related Topics" shown at the end of every topic. We use embeddings for this feature, and the entire thing is open source: https://github.com/discourse/discourse-ai/blob/main/lib/embe...
We also use embeddings for suggesting tags and categories, for HyDE search, and more. It's by far my favorite technology of this new AI/ML generation so far in terms of applicability.
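The "Related Topics" feature described above boils down to nearest-neighbor search over post embeddings. Here is a minimal sketch with toy vectors (the real plugin generates embeddings with a model and queries a vector index rather than brute-force scanning):

```python
import numpy as np

def related_topics(query_vec, topic_vecs, k=3):
    """Rank stored topic embeddings by cosine similarity to a query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    t = topic_vecs / np.linalg.norm(topic_vecs, axis=1, keepdims=True)
    scores = t @ q                        # cosine similarity per topic
    return np.argsort(scores)[::-1][:k]  # indices of the k most similar topics

# Toy 4-dimensional embeddings for five topics, plus a query topic
topics = np.array([
    [0.9, 0.1, 0.0, 0.1],
    [0.1, 0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1, 0.0],
    [0.0, 0.1, 0.9, 0.2],
    [0.1, 0.0, 0.2, 0.9],
])
query = np.array([0.85, 0.15, 0.05, 0.05])
print(related_topics(query, topics))  # → [0 2 1]
```

Swapping the toy vectors for real sentence embeddings changes nothing in the ranking logic; only the lookup needs an index (e.g. pgvector) once the corpus gets large.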
> Using Twitter-roBERTa-base for sentiment analysis, we generated a post_sentiment label (negative, positive, neutral) and post_sentiment_score confidence score for each post.
We do the same, with even the same model, and conveniently show that information on the admin interface of the forum. Again all open source: https://github.com/discourse/discourse-ai/tree/main/lib/sent...
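To make the labeling concrete, here is a sketch of the final step only: turning the three raw logits of a sentiment classification head into the `post_sentiment` label and `post_sentiment_score` confidence described above. The logits are made up for illustration; the label ordering matches the common negative/neutral/positive convention of the Twitter-roBERTa sentiment model, but treat it as an assumption.

```python
import math

LABELS = ("negative", "neutral", "positive")  # assumed label ordering

def post_sentiment(logits):
    """Turn three raw logits into a sentiment label and a confidence score."""
    exps = [math.exp(x - max(logits)) for x in logits]  # numerically stable softmax
    probs = [e / sum(exps) for e in exps]
    best = max(range(3), key=probs.__getitem__)
    return LABELS[best], round(probs[best], 4)

# Hypothetical logits a RoBERTa head might emit for an enthusiastic post
label, score = post_sentiment([-1.2, 0.3, 2.1])
print(label, score)  # → positive 0.8318
```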
Disclaimer: I'm the tech lead on the AI parts of Discourse, the open source software that powers OpenAI's community forum.
-
Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Embedding cost and model choice makes this a very compelling choice. I'm working on leveraging embeddings in https://github.com/discourse/discourse-ai where it powers offering related topics, semantic search, tag and category recommendations among other things.
A cheap offering like this can make it a lot more reasonable for self-hosters.
get-beam
-
Ask HN: Where to find an env with GPU for model training?
You should check out https://beam.cloud (I'm the founder), it'll give you access to plenty of cloud GPU resources for training or inference.
Right now it's pretty hard to get GPU quota on AWS/GCP, so hopefully this is useful for you.
-
Cloudflare launches new AI tools to help customers deploy and run models
Cloudflare AI and Replicate are great for running off-the-shelf models, but anything custom is going to incur a 10+ minute cold start.
For running custom fine-tuned models on serverless, you could look into https://beam.cloud, which is optimized for serving custom models with extremely fast cold starts (I'm a little biased since I work there, but the numbers don't lie).
-
Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Serverless only works if the cold boot is fast. For context, my company runs a serverless cloud GPU product called https://beam.cloud, which we've optimized for fast cold start. We see Whisper in production cold start in under 10s (across model sizes). A lot of our users are running semi-real time STT, and this seems to be working well for them.
-
Ultrafast serverless GPU runtime for custom SD models
I’m Eli, and my co-founder and I built Beam to run workloads on serverless cloud GPUs with hot reloading, autoscaling, and (of course) fast cold start. You don’t need Docker or AWS to use it, and everyone who signs up gets 10 hours of free GPU credit to try it out.
-
[D] We built Beam: An ultrafast serverless GPU runtime
Github with example apps and tutorials: https://github.com/slai-labs/get-beam/tree/main/examples
-
How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will use the Alpaca LoRA training script, which automates the fine-tuning process, and we will use Beam for the GPU.
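For intuition on why the LoRA approach behind that script keeps the fine-tune cheap: the base weights stay frozen, and training only learns a low-rank update added on top. A toy numpy sketch of the core idea (the dimensions here are illustrative, far smaller than LLaMA 2's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight, standing in for one projection matrix in the model
d, r, alpha = 8, 2, 16           # hidden size, LoRA rank, LoRA alpha
W = rng.normal(size=(d, d))

# Trainable low-rank factors: only 2*d*r parameters instead of d*d
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))             # B starts at zero, so training begins exactly at W

def lora_forward(x):
    """Base projection plus the scaled low-rank update: W x + (alpha/r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B = 0 the adapter is a no-op: output equals the frozen projection
assert np.allclose(lora_forward(x), W @ x)
print("trainable params:", A.size + B.size, "vs full:", W.size)  # → 32 vs 64
```

At LLaMA 2 7B scale the same ratio is what makes a single consumer GPU viable: only the small A and B matrices receive gradients and optimizer state.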
-
Run CodeLlama on a Serverless GPU
What are some alternatives?
deep-chat - Fully customizable AI chatbot component for your website
whisper-turbo - Cross-Platform, GPU Accelerated Whisper 🏎️
ruby-openai - OpenAI API + Ruby! 🤖❤️ Now with Assistants v2, Batches & Ollama/Groq 🚀
finetune-llama2
store-sentry - Manage access to in-app purchase content hosted in Cloudflare based on App Store Server Notifications
ChatGPT3-Free-Prompt-List - A free guide for learning to create ChatGPT3 Prompts
alpaca-lora - Instruct-tune LLaMA on consumer hardware
ask_chatgpt - AI-Powered Assistant Gem right in your Rails console. Full power of ChatGPT in Rails
AiTreasureBox - 🤖 Collect practical AI repos, tools, websites, papers and tutorials on AI. 实用的AI百宝箱 💎
ask_gpt - A ruby gem to Interact with OpenAI GPT API with context and history
magma-chat - Ruby on Rails 7-based ChatGPT Bot Platform