- vertex-ai-samples: Sample code and notebooks for Vertex AI, the end-to-end machine learning platform on Google Cloud
- vault-ai: OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc.) using a simple React frontend.
- langchain (discontinued, moved to https://github.com/langchain-ai/langchain): ⚡ Building applications with LLMs through composability ⚡ (by hwchase17)
Depending on how much work you want to put into it, you can get started at HuggingFace with their models and datasets, but you'd need compute power, MLOps tooling, etc. I was introduced to the concept in this video. Google also has its Vertex AI tools on Google Cloud, and there's always LangChain, though I'm not sure about anything recent.
There's this GitHub repo for Pinecone vector search with a custom knowledge base: VaultAI. But I'm sure the costs would be exorbitant at scale. It basically indexes your specific files and retrieves from them at query time rather than fine-tuning the model, and the OpenAI API calls get expensive, as expected. Edit: I misread and thought you were talking about training your own, sorry. But I'll leave the second paragraph up anyway lol. Someone mentioned LLaMA and someone else Falcon; I hadn't heard of the latter, but it looks good too.
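The core pattern behind VaultAI / the OP Stack is retrieval-augmented prompting rather than training: embed your files, find the chunks most similar to the question, and prepend them to the prompt. Here's a minimal self-contained sketch of that idea, with a toy bag-of-words embedding and an in-memory cosine search standing in for OpenAI embeddings and a Pinecone index (all function names here are my own for illustration, not VaultAI's actual API):

```python
# Toy retrieval-augmented lookup: bag-of-words "embeddings" plus cosine
# similarity, standing in for a real embedding model and a vector database.
import math
from collections import Counter

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy embedding: term counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[str], vocab: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(embed(d, vocab), qv), reverse=True)
    return ranked[:k]

docs = [
    "invoices are due within thirty days",
    "the cat sat on the mat",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
context = top_k("when are invoices due", docs, vocab, k=1)
# In the real stack, `context` would be prepended to the ChatGPT prompt.
print(context[0])
```

In production you'd swap the toy embedding for an embedding API and the in-memory search for a vector database, but the retrieve-then-prompt flow is the same, which is also why the cost scales with every query, not just a one-time training run.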