CASALIOY vs hands-on-llms

| | CASALIOY | hands-on-llms |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 231 | 2,311 |
| Growth | 0.0% | - |
| Last commit | 6 months ago | 29 days ago |
| Activity | 8.7 | 8.7 |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
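The site doesn't publish the exact activity formula, only that recent commits carry more weight than older ones. A minimal sketch of one way such a score could work, assuming a hypothetical exponential-decay weighting (the half-life parameter is illustrative, not the site's actual value):

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum commit weights that decay exponentially with age.

    A commit from today counts fully; with the assumed 30-day
    half-life, a month-old commit counts half as much.
    """
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# Three fresh commits outweigh three stale ones.
recent = activity_score([0, 1, 2])
stale = activity_score([60, 90, 120])
```

The raw score would then be normalized across all tracked projects to produce the percentile-style number (e.g. 9.0 for the top 10%) described above.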
CASALIOY
-
Open LLM suggestions
Also, this is 50% slower at ingestion. We use multithreaded ingestion that processes SOTU.txt in 50 ms, whereas privateGPT takes about 2 seconds. CASALIOY
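CASALIOY's actual ingestion code isn't shown here, but the general shape of multithreaded ingestion (split the document into overlapping chunks, then embed the chunks in parallel) can be sketched with the standard library. The chunk sizes and the `embed` placeholder are illustrative assumptions, not the project's real pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_text(text, size=500, overlap=50):
    """Split text into overlapping chunks, as ingestion pipelines
    typically do before embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk):
    # Placeholder for a real embedding call (e.g. a local embedding model).
    return [float(len(chunk))]

def ingest(text, workers=8):
    chunks = chunk_text(text)
    # Real embedding calls release the GIL (native code or I/O),
    # so a thread pool genuinely parallelizes this step.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(embed, chunks))

vectors = ingest("some long document " * 200)
```

Parallelizing the per-chunk embedding step is where the claimed speedup over single-threaded ingestion would come from.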
-
ChatGPT on a Raspberry Pi Zero W with OLED display
If you have some spare time, you could give Casalioy a try. It has not been tested on an RPi yet.
-
Air-gapped langchain Agent. Talk to your Data privately
Here's a demo screencast with an ingested text just as long as this comment (on my i5-9600K with 16 GB of RAM)
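The "talk to your data privately" pattern referenced above is retrieve-then-answer: rank your local documents against the question, then hand the top chunks to a local LLM as context, with no network access. A minimal sketch of the retrieval half; real systems use embedding vectors rather than this bag-of-words stand-in:

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts; an embedding model would go here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=1):
    q = vectorize(question)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]  # the chunks a local LLM would see as context

docs = [
    "The ingestion step embeds document chunks.",
    "Payroll runs on the first business day of the month.",
]
top = retrieve("when does payroll run?", docs)
```

Because everything (documents, index, model) stays on the machine, the agent works air-gapped.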
hands-on-llms
-
Where to start
There are 3 courses that I usually recommend to folks looking to get into MLE/MLOps who already have a technical background.

The first is a higher-level look at the MLOps processes, common challenges and solutions, and other important project considerations. It's one of Andrew Ng's courses from Deep Learning AI, but you can audit it for free if you don't need the certificate:
- Machine Learning in Production

For a more hands-on, in-depth tutorial, I'd recommend this course from NYU (free on GitHub), including slides, scripts, and full-code homework:
- Machine Learning Systems

And the title basically says it all, but this is also a really good one:
- Hands-on Train and Deploy ML

Pau Labarta, who made that last course, actually has a series of good (free) hands-on courses on GitHub. If you're interested in getting started with LLMs (since every company in the world seems to be clamoring for them right now), this course just came out from Pau and Paul Iusztin:
- Hands-on LLMs

For LLMs I also like this DLAI course (which includes Prompt Engineering too):
- Generative AI with LLMs

It can also be helpful to start learning how to use MLOps tools and platforms. I'll suggest Comet because I work there and am most familiar with it (and also because it's a great tool). Cloud and DevOps skills are also helpful. Make sure you're comfortable with git, and make sure you're learning how to actually deploy your projects. Good luck! :)
What are some alternatives?
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
MLSys-NYU-2022 - Slides, scripts and materials for the Machine Learning in Finance Course at NYU Tandon, 2022
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
finetuned-qlora-falcon7b-medical - Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
E2B - Secure cloud runtime for AI apps & AI agents. Fully open-source.
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
doc_chat_api - Create a production-level, scalable chatbot API that responds based on the data it is fed
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]
Local-LLM-Langchain - Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and KoboldAI versions of the langchain notebooks with examples.