| | ubicloud | FastChat |
|---|---|---|
| Mentions | 16 | 83 |
| Stars | 3,065 | 34,514 |
| Growth | 3.9% | 4.3% |
| Activity | 9.9 | 9.6 |
| Latest commit | 4 days ago | 6 days ago |
| Language | Ruby | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubicloud
- FLaNK AI for 11 March 2024
- Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
The docs still say the Elastic license is used but looking at https://github.com/ubicloud/ubicloud/blob/main/LICENSE it looks like the project might have switched to GNU Affero General Public License v3.0 in the last day.
- GitHub - ubicloud/ubicloud: Open, free, and portable cloud. Elastic compute, block storage (non replicated), and virtual networking services in public alpha.
- Ask HN: How does your company balance test coverage and deploy speed?
At Ubicloud, we have 100% line and branch coverage that is mandated on every PR (https://github.com/ubicloud/ubicloud). We also have an E2E test suite that we run periodically and with every commit. We don't really feel like our tests slow us down; they actually make us faster, since we have higher trust in the payload and many manual checks that would otherwise be needed can safely be skipped.
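As a rough illustration of what a hard coverage gate like that can look like in CI: Ubicloud itself is a Ruby project, so the Python/coverage.py sketch below is only an analogue, not their actual tooling.

```python
# Illustrative sketch only: a CI step that fails unless every line and
# branch is covered, in the spirit of the policy described above.
# This is an assumed Python/coverage.py analogue, not Ubicloud's setup.
import sys

import coverage
import pytest

cov = coverage.Coverage(branch=True)   # measure branch coverage as well as lines
cov.start()
exit_code = pytest.main(["-q"])        # run the test suite under measurement
cov.stop()
cov.save()

total = cov.report(show_missing=True)  # prints a summary and returns the total %
if exit_code != 0 or total < 100.0:
    print("Coverage gate failed: failing tests or coverage below 100%")
    sys.exit(1)                        # blocks the PR check
```

In practice the same gate is usually expressed declaratively (for example pytest-cov's --cov-branch --cov-fail-under=100, or SimpleCov's minimum_coverage 100 in Ruby), but the effect is the same: any uncovered line or branch fails the build.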
- Ubicloud – open, free and portable cloud
> Taken from here: https://ubicloud.com/
Am I the only one getting a certificate error browsing there?
- Ask HN: Thoughts about Elastic V2, SSPL, or mixed software licenses?
Link to our project: https://github.com/ubicloud/ubicloud
We’re choosing Elastic V2 for three reasons: (1) We’re planning to monetize through a managed service and we’d like the license to support that, (2) Later if we change our mind, we think it’s easier on our users if we go from a restrictive license to a more permissive one, and (3) The Elastic V2 license is much simpler than its cousin, Server Side Public License (SSPL).
That said, Elastic V2 is a new license and doesn't seem to be as popular as SSPL. Also, some projects out there mix and match multiple licenses in their repo to be able to call themselves open source.
Any insights / feedback on Elastic V2 or software licenses in general?
- Attribute-Based Access Control (ABAC) Implementation in 130 Lines of Code
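The ABAC post itself isn't excerpted here, but the core idea is small enough to sketch: access decisions come from matching attributes of the subject, action, and resource against declarative policies. A minimal, hypothetical Python sketch follows; none of these policies or attribute names come from Ubicloud's implementation.

```python
# Minimal ABAC sketch: allow an action when all of some policy's attribute
# conditions match. The policies and attribute names below are hypothetical
# examples, not Ubicloud's actual schema.
from typing import Any

policies: list[dict[str, dict[str, Any]]] = [
    # "members may read VMs in project X"
    {"subject": {"role": "member"}, "action": {"name": "vm:read"},
     "resource": {"project": "X"}},
    # "admins may perform any action on any resource" (empty conditions match everything)
    {"subject": {"role": "admin"}, "action": {}, "resource": {}},
]

def matches(conditions: dict[str, Any], attributes: dict[str, Any]) -> bool:
    # Every condition key must be present in the attributes with an equal value.
    return all(attributes.get(k) == v for k, v in conditions.items())

def is_allowed(subject: dict, action: dict, resource: dict) -> bool:
    return any(
        matches(p["subject"], subject)
        and matches(p["action"], action)
        and matches(p["resource"], resource)
        for p in policies
    )

print(is_allowed({"role": "member"}, {"name": "vm:read"}, {"project": "X"}))    # True
print(is_allowed({"role": "member"}, {"name": "vm:delete"}, {"project": "X"}))  # False
```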
FastChat
- GPT4.5 or GPT5 being tested on LMSYS?
gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".
When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"
One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow, so slow that the arena displays a warning whenever it is invoked.
[0] https://github.com/lm-sys/FastChat/issues/2527
- LLMs on your local Computer (Part 1)
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio – Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
Feature set seems like a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you are limited to the models that FastChat supports (though I think it would be a minor change to make it support arbitrary models?)
- Video-LLaVA
Looks like the Vicuna repo is Apache 2.0 also[1].
What's the interpretation of copyright law that would prevent the code being Apache 2.0 based on the source of the fine-tuning dataset?
[1] https://github.com/lm-sys/FastChat
- 🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
Check out how to get started with FastChat. Support FastChat on GitHub ⭐
- Show HN: ChatAPI – PWA to Use ChatGPT by API Build with Alpine.js
For something a little heavier but much more robust in terms of features/functionality, I've been enjoying FastChat: https://github.com/lm-sys/FastChat
It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
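For reference, FastChat exposes this through its OpenAI-compatible API server, so a standard OpenAI client can talk to a locally served model. A minimal sketch; the model name, host, and port below are assumptions for illustration.

```python
# Sketch: querying a locally served model through FastChat's
# OpenAI-compatible API server. Assumes the FastChat stack is already
# running, roughly per its README:
#   python3 -m fastchat.serve.controller
#   python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5
#   python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
from openai import OpenAI

# The local server does not check the key, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="vicuna-7b-v1.5",  # must match a model the worker is serving
    messages=[{"role": "user", "content": "In one sentence, what is FastChat?"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```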
What are some alternatives?
manageiq - ManageIQ Open-Source Management Platform
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
fog-azure-rm - Fog for Azure Resource Manager
llama.cpp - LLM inference in C/C++
cloudfront-signer - Ruby gem for signing AWS CloudFront private content URLs and streaming paths.
gpt4all - gpt4all: run open-source LLMs anywhere
AWS SDK for Ruby - The official AWS SDK for Ruby.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
forem - For empowering community 🌱
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also has voice-cloning capabilities.
homebrew-portable-ruby - 🚗 Versions of Ruby that can be installed and run from anywhere on the filesystem.
llama-cpp-python - Python bindings for llama.cpp