LLMsPracticalGuide Alternatives
Similar projects and alternatives to LLMsPracticalGuide
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
- open_llama: OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset.
- minGPT: A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training.
- localGPT: Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
- basaran (discontinued): Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
- AntiPython-AI-Club (discontinued): AI for people who don't like Python. [Moved to: https://github.com/Fileforma/AntiPython-AI-Compiler-Colab]
LLMsPracticalGuide reviews and mentions
- Ask HN: Daily practices for building AI/ML skills?
- XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
Here are some high-level answers:
"7B" refers to the number of parameters or weights for a model. For a specific model, the versions with more parameters take more compute power to train and perform better.
A foundational model is the part of an ML model that is "pretrained" on a massive dataset (and usually accounts for the bulk of the compute cost). It is considered the "raw" model, which is then fine-tuned for specific tasks (e.g., turned into a chatbot).
"8K length" refers to the Context Window length (in tokens). This is basically an LLM's short term memory - you can think of it as its attention span and what it can generate reasonable output for.
"1.5T tokens" refers to the size of the corpus of the training set.
In general, Wikipedia (or, I suppose, ChatGPT-4/Bing Chat with web browsing) is a decent enough place to start reading and asking basic questions. I'd recommend starting here: https://en.wikipedia.org/wiki/Large_language_model and finding the related concepts.
For those going deeper, there are lots of general resource lists like https://github.com/Hannibal046/Awesome-LLM or https://github.com/Mooler0410/LLMsPracticalGuide, or one I like, https://sebastianraschka.com/blog/2023/llm-reading-list.html (there are a bajillion of these, and you'll find more once you get a grasp of the terms you want to search for). Almost everything is published on arXiv, and most of it is fairly readable even as a layman.
For non-ML programmers looking to get up to speed, I feel like Karpathy's Zero to Hero/nanoGPT or Jay Mody's picoGPT (https://jaykmody.com/blog/gpt-from-scratch/) are an alternative, and maybe better, way to understand the basic concepts on a practical level.
- Need help finding local LLM
Checked, e.g.:
  - https://medium.com/geekculture/list-of-open-sourced-fine-tuned-large-language-models-llm-8d95a2e0dc76
  - https://github.com/Mooler0410/LLMsPracticalGuide
  - https://www.reddit.com/r/LocalLLaMA/comments/12r552r/creating_an_ai_agent_with_vicuna_7b_and_langchain/
  - https://www.youtube.com/watch?v=9ISVjh8mdlA
- 1-Jun-2023: The Practical Guides for Large Language Models (https://github.com/Mooler0410/LLMsPracticalGuide)
- [D] LLM Evolutionary Tree from "The Practical Guides for Large Language Models"
- Comprehensive Table of LLM Usage Restrictions
- Check out this Comprehensive and Practical Guide for Practitioners Working with Large Language Models
- The Practical Guides for Large Language Models
- Practical Guide for LLMs