vault-server

Rust REST API server and white-label SolidJS UIs for semantic search and RAG/no-hallucination LLM-chat [Moved to: https://github.com/arguflow/arguflow] (by arguflow)

Vault-server Alternatives

Similar projects and alternatives to vault-server

NOTE: The number of mentions on this list indicates mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better vault-server alternative or higher similarity.

vault-server reviews and mentions

Posts with mentions or reviews of vault-server. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-15.
  • Streaming ChatGPT Message completions using OpenAI
    1 project | /r/actix | 16 May 2023
    When working on ArguflowAI's backend, we ran into multiple issues with streaming ChatGPT completions from the OpenAI API to the client. We made this blog post to help other users out. Would love to hear your feedback on it! (A minimal streaming sketch follows this list.)
  • Can Conducting Practice Rounds with AI Be Helpful?
    1 project | /r/Debate | 15 May 2023
    "I had to commute to teach a class of 10 novices"

    Seriously, I am very, very appreciative that you took the time to respond to us at all. I watched your YouTube videos on ChatGPT for debate and on game theory for a new format and thought they were super interesting. I explain it below, but the entire point of building this thing was to engage with folks like yourself to help us achieve our long-term goal of "building software that makes arguing better." If I redeem myself to a satisfactory extent, please DM me to set up a time to meet (I don't expect you to do this for free and would compensate appropriately), as I would really like to pick your brain on some of Arguflow's plans and goals.

    "You got a ChatGPT API and you thought 'What can I make that people would pay for?' and debate came to mind."

    We created the company/brand Arguflow in January of this year with the goal of "creating software for arguing." Our first product was a completely free live-debate tool, docs.arguflow.gg, that basically structured arguing online (in a thread like this one, lol) into a flow. It didn't get much traction or engage the community like we wanted it to, so we started cold-emailing debate coaches to see if we could get their feedback on how the product could be more useful, and got very few replies. The small number of people we did talk to were really only interested in the "AI Coach"-style app, and that's actually what drove us to create this. We figured, based on a very small sample size, that an AI Debate Coach app would engage the debate community and get the people who could offer the highest-quality feedback talking to us about our long-term goal of "making software for arguing." To that end, I really wish we had just shipped the "AI Debate Coach" app as free on launch. Seeming scammy is the exact opposite of what we wanted.

    "pricing"

    I really made an ass of myself. I should have taken more time to calculate and been more thoughtful before saying that price was break-even for us, and I regret not doing that. It was actually an open issue on our codebase that we punted on prior to launch, and I wish we hadn't. I'm going to talk to my co-founder, but we will likely be lowering the pricing to free for Silver and $10 for Gold. Again, thank you for your feedback. In our further defense, the code is completely open source. You can self-host it without paying us a penny. I understand that it's unreasonable to expect folks to self-host it, but it's something. I'm grasping at straws because I really don't want to come across as scammy.

    This is kind of an aside, but I have a theory that ChatGPT is disproportionately useful for software engineering, or maybe just for people who really grasp how it's working under the hood. I paid for ChatGPT Plus, and the primary value add wasn't really GPT-4 but the consistent uptime and speed of response. I never really even got the chance to use GPT-4 because I hit the message cap so frequently. In general, I think I was projecting my power-user experience onto others. I don't quite know how to deal with that, but I will be more aware of it from now on.

    "answer explained 5 times"

    Maybe nerding out, but the way transformer LLMs like GPT work is by creating a numeric vector representation of all the text you provide them. Explaining things over and over again, or trying to switch contexts several messages into a thread, is something I would actually expect to worsen performance, so I'm not surprised they found that. One of the value adds I thought our UI could provide is encouragement to stick to a single debate topic and go only one argument at a time when using it. You get significantly better performance that way. When Jake from MyDebateCoach tested it out, that's one of the things he noted.

    "sale pricing and marketing lingo"

    I didn't think it would come across as scammy. In my head, it was helping to make us look more professional. We are just going to remove it all. Thanks for your feedback again.

    "on the research"

    The paper you linked is analyzing GPT-3 and not OpenAI's chat-tuned GPT-3.5-turbo. GPT-3.5-turbo does significantly better, as shown here. However, I really don't like the way they test GPT in those papers, as it's not built for multiple-choice questions. The transformer model is making a vector for the prompt text, then (this is overly simplified) looking in its database for other vectors that typically come after the vector it got, and finally translating the vector back to text. I think this paper does a much better job of evaluating the model. On the whole, the researchers noted GPT-3/4 can get passing grades in law school, which is interesting.

    "your notes on performance and the brief-reference idea"

    If you have time, try uploading a brief to this thing (or another one like it), https://www.chatpdf.com/, and seeing if it's helpful for you to chat with it. I think you might be surprised. In general, I also think you'd be surprised by what's possible when chaining requests: the user inputs an argument, a search agent searches through that user's uploaded briefs for evidence for its counterargument/feedback, and a chat agent takes the output of the search agent and uses it to write counterarguments and feedback (see the retrieval-then-chat sketch after this list). I also wouldn't call this "stealing other people's work," as we would be having the user upload the brief. So the user would have to buy the brief or create it themselves to use the feature.

    "video"
  • Re-implementing ChatGPT's backend with Actix Web - Including Streaming
    1 project | /r/rust | 15 May 2023
  • What's everyone working on this week (20/2023)?
    15 projects | /r/rust | 15 May 2023
    We just launched our AI Debate Coach application! The backend was built with Rust using the actix-web framework - https://github.com/arguflow/ai-editor
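For context on the streaming post above: the usual pattern is to request a completion from OpenAI with "stream": true and relay the resulting server-sent-event chunks straight through an actix-web handler. The sketch below is not taken from the linked blog post or the vault-server code; it assumes actix-web, serde, serde_json, reqwest with its "json" and "stream" features, and an OPENAI_API_KEY environment variable, and the /chat route and ChatRequest type are made up for illustration.

```rust
// Sketch of relaying a streamed OpenAI chat completion through actix-web.
// Assumed dependencies (not taken from the linked post): actix-web, serde,
// serde_json, and reqwest with its "json" and "stream" features enabled.
use actix_web::{post, web, App, Error, HttpResponse, HttpServer};
use serde::Deserialize;
use serde_json::json;

// Hypothetical request shape; the real API's payload may differ.
#[derive(Deserialize)]
struct ChatRequest {
    message: String,
}

#[post("/chat")]
async fn chat(req: web::Json<ChatRequest>) -> Result<HttpResponse, Error> {
    let api_key = std::env::var("OPENAI_API_KEY")
        .map_err(actix_web::error::ErrorInternalServerError)?;

    // Ask OpenAI for a streamed completion; the response arrives as
    // server-sent-event chunks ("data: {...}").
    let upstream = reqwest::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&json!({
            "model": "gpt-3.5-turbo",
            "stream": true,
            "messages": [{ "role": "user", "content": req.message.clone() }],
        }))
        .send()
        .await
        .map_err(actix_web::error::ErrorInternalServerError)?;

    // Relay the upstream byte stream to the client unchanged.
    Ok(HttpResponse::Ok()
        .content_type("text/event-stream")
        .streaming(upstream.bytes_stream()))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(chat))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```

Relaying the raw bytes keeps the handler simple; the linked post presumably goes further (parsing the `data:` chunks, handling client disconnects and API errors), which this sketch does not attempt.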
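The "chaining requests" idea from the longer comment above (a search step that finds relevant passages in the user's uploaded briefs, followed by a chat step that writes feedback from them) is essentially a retrieval-augmented generation loop, which is also what the vault-server description refers to. The sketch below is not the vault-server implementation: BriefChunk, embed, coach_feedback, the in-memory brief store, and the model choices are all assumptions for illustration. It assumes reqwest with the "json" feature, serde_json, tokio, and an OPENAI_API_KEY environment variable.

```rust
// Hypothetical sketch of the search-agent -> chat-agent chain described above.
// Not vault-server's implementation; names and models are illustrative.
use serde_json::{json, Value};

struct BriefChunk {
    text: String,
    embedding: Vec<f32>, // computed once, when the brief is uploaded
}

// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb + 1e-12)
}

// Embed a piece of text with OpenAI's embeddings endpoint.
async fn embed(client: &reqwest::Client, key: &str, text: &str) -> Result<Vec<f32>, reqwest::Error> {
    let resp: Value = client
        .post("https://api.openai.com/v1/embeddings")
        .bearer_auth(key)
        .json(&json!({ "model": "text-embedding-ada-002", "input": text }))
        .send().await?
        .json().await?;
    let emb: Vec<f32> = resp["data"][0]["embedding"]
        .as_array()
        .map(|a| a.iter().filter_map(|v| v.as_f64().map(|f| f as f32)).collect())
        .unwrap_or_default();
    Ok(emb)
}

// "Search agent" + "chat agent": retrieve the closest brief passages, then
// ask the chat model to write feedback grounded only in that evidence.
async fn coach_feedback(
    client: &reqwest::Client,
    key: &str,
    briefs: &[BriefChunk],
    argument: &str,
) -> Result<String, reqwest::Error> {
    let query = embed(client, key, argument).await?;

    // Rank uploaded brief chunks by similarity to the user's argument.
    let mut ranked: Vec<&BriefChunk> = briefs.iter().collect();
    ranked.sort_by(|a, b| {
        cosine(&query, &b.embedding)
            .partial_cmp(&cosine(&query, &a.embedding))
            .unwrap_or(std::cmp::Ordering::Equal)
    });
    let evidence: Vec<&str> = ranked.iter().take(3).map(|c| c.text.as_str()).collect();

    // Hand the retrieved passages to the chat model as grounding context.
    let resp: Value = client
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(key)
        .json(&json!({
            "model": "gpt-3.5-turbo",
            "messages": [
                { "role": "system",
                  "content": format!(
                      "You are a debate coach. Base your feedback only on this evidence:\n{}",
                      evidence.join("\n---\n")
                  ) },
                { "role": "user", "content": argument },
            ],
        }))
        .send().await?
        .json().await?;

    Ok(resp["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .to_string())
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let key = std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set");
    let client = reqwest::Client::new();

    // In practice the embeddings would be stored when the user uploads a brief.
    let passage = "Sample brief passage the user uploaded.";
    let briefs = vec![BriefChunk {
        text: passage.to_string(),
        embedding: embed(&client, &key, passage).await?,
    }];

    let feedback = coach_feedback(&client, &key, &briefs, "My argument goes here.").await?;
    println!("{feedback}");
    Ok(())
}
```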

Stats

Basic vault-server repo stats
Mentions: 4
Stars: 34
Activity: 10.0
Last commit: 8 months ago
