vault-server vs socketioxide

| | vault-server | socketioxide |
|---|---|---|
| Mentions | 4 | 47 |
| Stars | 34 | 1,067 |
| Growth | - | - |
| Activity | 10.0 | 9.8 |
| Latest commit | 8 months ago | 11 days ago |
| Language | JavaScript | Rust |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vault-server
-
Streaming ChatGPT Message completions using OpenAI
While working on ArguflowAI's backend, we ran into multiple issues streaming ChatGPT completions from the OpenAI API to the client. We wrote this blog post to help other users out. We'd love to hear your feedback on it!
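The OpenAI API delivers streamed completions as Server-Sent Events: a sequence of `data:` lines, each carrying a JSON chunk, terminated by a `data: [DONE]` sentinel. Neither project's code is shown here, but a minimal std-only sketch of splitting such a stream body into its chunks (the payloads below are illustrative, not real API responses) could look like:

```rust
/// Extract the `data:` payloads from an SSE body, stopping at the
/// `[DONE]` sentinel that OpenAI sends to terminate the stream.
fn sse_data_chunks(body: &str) -> Vec<String> {
    body.lines()
        // keep only `data: …` lines, dropping the prefix
        .filter_map(|line| line.strip_prefix("data: "))
        // stop once the terminator arrives
        .take_while(|payload| *payload != "[DONE]")
        .map(str::to_string)
        .collect()
}

fn main() {
    // Illustrative stream body, not a real OpenAI response.
    let body = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n";
    let chunks = sse_data_chunks(body);
    assert_eq!(chunks, vec!["{\"delta\":\"Hel\"}", "{\"delta\":\"lo\"}"]);
}
```

In a real backend the body arrives incrementally over HTTP, so each chunk would be forwarded to the client as it is parsed rather than collected into a `Vec`.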
-
Can Conducting Practice Rounds with AI Be Helpful?
> I had to commute to teach a class of 10 novices

Seriously, I am very, very appreciative that you took the time to respond to us at all. I watched your YouTube videos on ChatGPT for debate and on game theory for a new format, and thought they were super interesting. I explain it below, but the entire point of building this thing was to engage with folks like yourself and help us achieve our long-term goal of "building software that makes arguing better." If I redeem myself to a satisfactory extent, please DM me to set up a time to meet (I don't expect you to do this for free and would compensate appropriately), as I would really like to pick your brain about some of Arguflow's plans and goals.

> You got a ChatGPT API and you thought "What can I make that people would pay for?" and debate came to mind.

We created the company/brand Arguflow in January of this year with the goal of "creating software for arguing." Our first product was a completely free live-debate tool, docs.arguflow.gg, that basically structured arguing online (in a thread like this one, lol) into a flow. It didn't get much traction or engage the community like we wanted it to, so we started cold-emailing debate coaches to see if we could get their feedback on how the product could be more useful, and got very few replies. The small number of people we did talk to were really only interested in an "AI Coach"-style app, and that's actually what drove us to create this. We figured, based on a very small sample size, that an AI Debate Coach app would engage the debate community and get the people who could offer the highest-quality feedback talking to us about our long-term goal of "making software for arguing." To that end, I really wish we had just shipped the "AI Debate Coach" app as free on launch. Seeming scammy is the exact opposite of what we wanted.

> pricing

I really made an ass of myself. I should have taken more time to calculate, and been more thoughtful, before saying that price was break-even for us, and I regret not doing that. It was actually an open issue on our codebase that we punted on prior to launch, and I wish we hadn't. I'm going to talk to my co-founder, but we will likely lower the pricing to free for Silver and $10 for Gold. Again, thank you for your feedback. In further defense, the code is completely open source: you can self-host it without paying us a penny. I understand that it's unreasonable to expect folks to self-host it, but it's something. I'm grasping at straws because I really don't want to come across as scammy.

This is kind of an aside, but I have a theory that ChatGPT is disproportionately useful for software engineering, or maybe just for people who really grasp how it works under the hood. I paid for ChatGPT Plus, and the primary value add wasn't really GPT-4 but the consistent uptime and speed of response; I never really even got the chance to use GPT-4 because I hit the message cap so frequently. In general, I think I was projecting my power-user experience onto others. I don't quite know how to deal with that, but I will be more aware of it from now on.

> answer explained 5 times

Maybe nerding out, but the way transformer LLMs like GPT work is by creating a numeric vector representation of all the text you provide them. Explaining things over and over again, or trying to switch contexts several messages into a thread, is something I would actually expect to worsen performance, so I'm not surprised that they found that. One of the value adds I thought our UI could provide is encouragement to stick to a single debate topic and go only one argument at a time; you get significantly better performance that way. When Jake from MyDebateCoach tested it out, that's one of the things he noted.

> sale pricing and marketing lingo

I didn't think it would come across as scammy. In my head, it was helping to make us look more professional. We are just going to remove it all. Thanks for your feedback again.

> on the research

The paper you linked analyzes GPT-3, not OpenAI's chat-tuned GPT-3.5-turbo, which does significantly better, as shown here. However, I really don't like the way they test GPT in those papers, as the model isn't built for multiple-choice questions. The transformer model makes a vector for the prompt text, then (this is overly simplified) looks in its database for the vectors that typically come after the one it got, and finally translates the vector back to text. I think this paper does a much better job evaluating the model. On the whole, the researchers noted that GPT-3/4 can get passing grades in law school, which is interesting.

> your notes on performance and the brief-reference idea

If you have time, try uploading a brief to this thing (or another one like it), https://www.chatpdf.com/, and see if it's helpful to chat with it. I think you might be surprised. In general, I also think you'd be surprised by what's possible when chaining requests: i.e., the user inputs an argument, a search agent searches that user's uploaded briefs for evidence for its counterargument/feedback, and a chat agent takes the output of the search agent and uses it to write counterarguments and feedback. I also wouldn't call this "stealing other people's work," as the user would be uploading the brief, so the user would have to buy the brief or create it themselves to use the feature.

> video
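The "vector lookup" intuition described above (embed text as vectors, then retrieve the closest stored vector) is also the core of the brief-search idea mentioned later. A toy std-only sketch, using made-up 3-dimensional vectors (real embedding models produce vectors with hundreds or thousands of dimensions), might look like:

```rust
/// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

/// Return the index of the stored vector most similar to `query`.
fn nearest(query: &[f64], store: &[Vec<f64>]) -> usize {
    let mut best = 0;
    let mut best_sim = f64::NEG_INFINITY;
    for (i, v) in store.iter().enumerate() {
        let sim = cosine(query, v);
        if sim > best_sim {
            best_sim = sim;
            best = i;
        }
    }
    best
}

fn main() {
    // Hypothetical embeddings of two stored briefs.
    let store = vec![
        vec![1.0, 0.0, 0.0], // "topic A"
        vec![0.0, 1.0, 0.0], // "topic B"
    ];
    let query = vec![0.9, 0.1, 0.0]; // a query close to "topic A"
    assert_eq!(nearest(&query, &store), 0);
}
```

A production search agent would use a vector database and approximate nearest-neighbor search instead of this linear scan, but the retrieval principle is the same.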
- Re-implementing ChatGPT's backend with Actix Web - Including Streaming
-
What's everyone working on this week (20/2023)?
We just launched our AI Debate Coach application! The backend was built with Rust using the actix-web framework - https://github.com/arguflow/ai-editor
socketioxide
- Show HN: Socketioxide – A high performance socket.io server written in Rust
- Hey Sveltetors 😄 Socketioxide, the rust based socket io, just dropped a banger of a release (0.8)
- Hey Reactivists :D Socketioxide, the rust based socket io, just dropped a banger of a release (0.8)
- Sup noders! Socketioxide, the nodejs socket io clone written in Rust, just dropped a banger of a release
- Listen up: Socketioxide, the rust based socket io, dropped 0.8 which has Global State management
- Socketioxide, the Rust based socket io, has dropped 0.8 release that brings Global State management
- Rust world listen: Socketioxide 0.8 is out with global state management
-
Question: how good is Rust for web development?
It is becoming mature; check out the Rust socket.io implementation: https://github.com/Totodore/socketioxide
-
Socketioxide v0.7.0 release! (socket.io server implementation as a tower service/layer)
I'm glad to announce version 0.7.0 of my library, socketioxide! It is a socket.io server implementation that works as a tower layer/service, so it integrates with any hyper-based HTTP framework, such as salvo, axum, warp, or hyper itself.
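Part of what a socket.io server like socketioxide must handle is the wire framing: a text frame such as `42["message","hello"]`, where the leading `4` is the engine.io MESSAGE packet type, the `2` is the socket.io EVENT packet type, and the rest is a JSON array of the event name and its arguments. The sketch below is not socketioxide's code; it is a std-only illustration of that framing, with a deliberately naive string split standing in for real JSON parsing:

```rust
/// Parse a socket.io EVENT frame like `42["message","hello"]` into
/// its (event, argument) pair. Naive string handling, for illustration
/// only; a real server would use a JSON parser here.
fn parse_event_frame(frame: &str) -> Option<(&str, &str)> {
    // `4` = engine.io MESSAGE, `2` = socket.io EVENT
    let payload = frame.strip_prefix("42")?;
    // payload is a JSON array: ["event","arg"]
    let inner = payload.strip_prefix('[')?.strip_suffix(']')?;
    let (event, arg) = inner.split_once(',')?;
    Some((event.trim_matches('"'), arg.trim_matches('"')))
}

fn main() {
    let frame = "42[\"message\",\"hello\"]";
    assert_eq!(parse_event_frame(frame), Some(("message", "hello")));
    // A bare engine.io ping frame is not an EVENT.
    assert_eq!(parse_event_frame("2"), None);
}
```

Layering this kind of protocol handling as a tower service is what lets socketioxide slot into any hyper-based framework without depending on one specific HTTP server.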
What are some alternatives?
busan - An actor implementation in Rust
aya - Aya is an eBPF library for the Rust programming language, built with a focus on developer experience and operability.
cudarc - Safe rust wrapper around CUDA toolkit
utoipa - Simple, Fast, Code first and Compile time generated OpenAPI documentation for Rust
krust-manifesto - Abstractions to write concise Kubernetes manifests using Rust
MagenBoy - GameBoy and GameBoy Color emulator written in Rust
tiny-ml - Basic neural networks for Rust
concurrent-queue - Concurrent multi-producer multi-consumer queue
Rust-vJoy-Manager - A virtual joystick manager and remapper.
dipa - dipa makes it easy to efficiently delta encode large Rust data structures.
RustNet - A Rust API (and SolidJS frontend) for a neural net
iggy - Iggy is the persistent message streaming platform written in Rust, supporting QUIC, TCP and HTTP transport protocols, capable of processing millions of messages per second.