liboai VS nanoGPT

Compare liboai vs nanoGPT and see how they differ.

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. (by karpathy)
              liboai          nanoGPT
Mentions      82              69
Stars         295             32,050
Growth        -               -
Activity      6.0             5.4
Last commit   27 days ago     about 1 month ago
Language      C++             Python
License       MIT License     MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

liboai

Posts with mentions or reviews of liboai. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-18.
  • Revolutionizing Real-Time Alerts with AI, NATs and Streamlit
    6 projects | dev.to | 18 Feb 2024
    Imagine you have an AI-powered personal alerting chat assistant that interacts using up-to-date data. Whether it's a big move in the stock market that affects your investments, a significant change to your shared SharePoint documents, or a discount on Amazon you were waiting for, the application is designed to keep you informed and alert you about any significant changes based on the criteria you set in advance using natural language. In this post, we will learn how to build a full-stack event-driven weather alert chat application in Python using pretty cool tools: Streamlit, NATS, and OpenAI. The app can collect real-time weather information, understand your criteria for alerts using AI, and deliver these alerts to the user interface.
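    A minimal sketch of the event-driven core described above, assuming the nats-py package; the subject name and handler logic are illustrative, and the real app additionally wires in OpenAI for criteria matching and Streamlit for the UI:

    ```python
    import asyncio
    import nats  # nats-py client

    async def main():
        nc = await nats.connect("nats://localhost:4222")

        async def on_weather(msg):
            event = msg.data.decode()
            # Here the real app would ask the LLM whether `event` matches the
            # user's natural-language alert criteria, then surface it in Streamlit.
            print("weather event:", event)

        await nc.subscribe("weather.events", cb=on_weather)
        await asyncio.Event().wait()  # keep the subscriber alive

    asyncio.run(main())
    ```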
  • Top 9 AI APIs you must try in 2024
    3 projects | dev.to | 26 Dec 2023
    1. OpenAI API
  • Build a Basketball SMS Chatbot with LangChain Prompt Templates in Python
    2 projects | dev.to | 31 May 2023
    OpenAI Account – make an OpenAI Account here
  • ML Trends To Blow Your Mind
    1 project | /r/u_Amazinum | 10 May 2023
    Their big advantage is that they can quickly deal with data they've never seen before and scale. NVIDIA and OpenAI are currently the leading providers of such technologies. Read more.
  • liboai VS openai-cpp - a user suggested alternative
    2 projects | 12 Apr 2023
  • Master ChatGPT with /shortcuts (+1 trillion prompts)
    1 project | /r/ChatGPTPro | 28 Mar 2023
    You need an API key https://openai.com/api
  • Computing power needed for running ChatGPT?
    1 project | /r/ChatGPT | 28 Feb 2023
    To run ChatGPT, you need access to OpenAI's API, which provides a cloud-based platform for interacting with various models, including ChatGPT. You can sign up for an account and request access to the API here: https://openai.com/api/
  • How to make an AI Image Generator yourself for free(easy)🖼️
    2 projects | dev.to | 25 Feb 2023
    To get started, go to OpenAI and create an account. After that, click on your profile on the top right corner, and select view API keys.
  • Pre-processing data to assemble in a database to try to apply GPT2 to it
    2 projects | /r/datascience | 22 Feb 2023
    As far as I know the ChatGPT API isn't public yet. However, you can use the GPT-3 models from OpenAI, which do have an API you can connect to. The Davinci-003 text model is pretty damn good. See https://openai.com/api/ for more info.
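    A minimal sketch of calling that model, using the legacy openai Python package that was current when this comment was written (the Completions endpoint and text-davinci-003 have since been deprecated); the prompt is illustrative:

    ```python
    import openai  # legacy pre-1.0 interface

    openai.api_key = "sk-..."  # your key from https://openai.com/api/

    response = openai.Completion.create(
        model="text-davinci-003",  # the Davinci-003 text model mentioned above
        prompt="Suggest a relational schema for a corpus of forum posts.",
        max_tokens=200,
    )
    print(response.choices[0].text)
    ```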
  • Which Free to use AI for content writing is the best?
    1 project | /r/AIShop | 22 Feb 2023
    GPT-3: GPT-3 is an advanced language generation model developed by OpenAI. It is capable of generating high-quality, human-like text with a high degree of coherence. The model has been trained on a vast corpus of text, making it capable of generating text across a range of topics. GPT-3 can be used for a variety of content creation tasks such as blog posts, articles, and social media updates. One of the most significant advantages of using GPT-3 is that it can save a significant amount of time when creating content. The model can generate text much faster than a human writer, and the output can be used as a starting point for further refinement.

nanoGPT

Posts with mentions or reviews of nanoGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-01.
  • Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
    3 projects | news.ycombinator.com | 1 Mar 2024
    Nice work! I built something similar years ago, and I did compile the probabilities based on a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. Using a graph database would probably have been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration.

    Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might also benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.

    Seeing this has inspired me further to consider working on that project again at some point.

    [0] https://github.com/karpathy/nanoGPT

    [1] https://en.wikipedia.org/wiki/Trie

    [2] https://en.wikipedia.org/wiki/N-Triples

    [3] https://en.wikipedia.org/wiki/Longest_common_subsequence
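    For illustration, a minimal sketch of the corpus-probability approach described above, reduced to a bigram frequency table; the corpus string and function names are invented for the example, and a trie or graph database would replace the in-memory dict at scale:

    ```python
    from collections import Counter, defaultdict

    def train_bigrams(corpus: str):
        # "Compile the probabilities": count which word follows which.
        words = corpus.split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict(counts, word: str, k: int = 3):
        # The k most likely continuations of `word` under the corpus counts.
        return [w for w, _ in counts[word].most_common(k)]

    counts = train_bigrams("to be or not to be that is the question")
    print(predict(counts, "to"))  # ['be']
    ```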

  • LLMs Learn to Be "Generative"
    1 project | news.ycombinator.com | 4 Feb 2024
    p(x1, x2, ..., xn) = p(x1) * p(x2|x1) * ... * p(xn|x1, ..., x(n-1)), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.

    I understand the conditional terms p(x_n|...), whose losses we calculate with cross-entropy. However, I'm unsure about the probability of the very first token, p(x1). How is it calculated? Is it handled by some configuration of the training process, by the model architecture, or by the loss function?

    IMHO, if the model doesn't learn p(x1) properly, the chain-rule factorization above cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?

    I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me?

  • A simulation of me: fine-tuning an LLM on 240k text messages
    2 projects | news.ycombinator.com | 4 Jan 2024
    This repo, albeit "old" relative to how much progress there's been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
  • Ask HN: Is it feasible to train my own LLM?
    3 projects | news.ycombinator.com | 2 Jan 2024
    For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or tinyllama. Perhaps with quantization.
  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).

    [1] https://github.com/karpathy/nanoGPT

  • [D] Can GPT "understand"?
    1 project | /r/MachineLearning | 20 Aug 2023
    But I'm still not convinced that it can't in theory. Maybe the training set or transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus '+', '=' and EOL.
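    Those hyperparameters map directly onto nanoGPT's GPTConfig in model.py; a sketch, where block_size and vocab_size are assumptions for the digit-arithmetic setup described:

    ```python
    from model import GPTConfig, GPT  # nanoGPT's model.py

    config = GPTConfig(
        n_layer=24,      # layers 24
        n_head=12,       # heads 12
        n_embd=12 * 32,  # 12 heads x 32 embedding dims per head = 384
        block_size=64,   # max context length (assumption)
        vocab_size=13,   # 10 digits plus '+', '=' and EOL (assumption)
        dropout=0.0,
    )
    model = GPT(config)
    ```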
  • Transformer Attention is off by one
    4 projects | news.ycombinator.com | 24 Jul 2023
    https://github.com/karpathy/nanoGPT/blob/f08abb45bd2285627d1...

    At training time, probabilities for the next token are computed for each position, so if we feed in a sequence of n tokens, we basically get n training examples, one for each position. At inference time, we only compute the next token, since we've already output the preceding ones.
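    A sketch of that shift in PyTorch: targets are the inputs moved left by one, so a single crop of tokens yields one supervised example per position in a single forward pass (nanoGPT's train.py builds its batches the same way); the stand-in model is illustrative:

    ```python
    import torch
    import torch.nn.functional as F

    # Stand-in LM (token ids -> per-position logits); any GPT-style model fits.
    model = torch.nn.Sequential(
        torch.nn.Embedding(50257, 64),
        torch.nn.Linear(64, 50257),
    )

    tokens = torch.randint(0, 50257, (1, 9))  # a toy crop of 9 token ids
    x = tokens[:, :-1]                        # inputs:  tokens 1..8
    y = tokens[:, 1:]                         # targets: tokens 2..9, shifted by one

    logits = model(x)                         # (batch, 8, vocab)
    # One cross-entropy term per position: 8 training examples from one crop.
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    ```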

  • Sarah Silverman Sues ChatGPT Creator for Copyright Infringement
    1 project | /r/books | 10 Jul 2023
    And there are a bunch of other efforts at making training more efficient. Here's a cool model by Karpathy (OpenAI/used to head up Tesla's efforts): https://github.com/karpathy/nanoGPT
  • Douglas Hofstadter changes his mind on Deep Learning and AI risk
    2 projects | news.ycombinator.com | 3 Jul 2023
    Just being a part of any auto-regressive system does not contradict his statement.

    Go look at the GPT training code, here is the exact line: https://github.com/karpathy/nanoGPT/blob/master/train.py#L12...

    The model is only trained to predict the next token. The training regime is purely next-token prediction. There is no loopiness whatsoever here, strange or ordinary.

    Just because you take that feedforward neural network and wrap it in a loop to feed it its own output does not change the architecture of the neural net itself. The neural network was trained in one direction and runs in one direction. Hofstadter is surprised that such an architecture yields something that looks like intelligence.

    He specifically used the correct term "feedforward" to contrast with recurrent neural networks, which GPT is not: https://en.wikipedia.org/wiki/Feedforward_neural_network
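    A sketch of that point, loosely following nanoGPT's GPT.generate: the sampling loop lives outside the network, which remains strictly feedforward (the stand-in model is illustrative):

    ```python
    import torch

    @torch.no_grad()
    def generate(model, idx, max_new_tokens, block_size):
        for _ in range(max_new_tokens):
            idx_cond = idx[:, -block_size:]         # crop to the context window
            logits = model(idx_cond)[:, -1, :]      # logits for the next token only
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)
            idx = torch.cat([idx, next_id], dim=1)  # feed the output back in
        return idx

    # Stand-in LM just to make the sketch run end to end.
    lm = torch.nn.Sequential(torch.nn.Embedding(100, 32), torch.nn.Linear(32, 100))
    out = generate(lm, torch.zeros(1, 1, dtype=torch.long),
                   max_new_tokens=8, block_size=16)
    ```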

  • NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Does anyone have or know of an example implementation in plain PyTorch, not Hugging Face Transformers? Like something you could plug into https://github.com/karpathy/nanoGPT ?
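    For reference, a plain-PyTorch sketch of the NTK-aware scaling rule from the linked post (grow the RoPE base by scale^(dim/(dim-2)) rather than compressing positions); nanoGPT itself uses learned absolute position embeddings, so wiring this in means applying it to q and k inside its attention block. Names and shapes here are illustrative:

    ```python
    import torch

    def ntk_scaled_rope_freqs(head_dim, max_seq_len, base=10000.0, scale=4.0):
        # NTK-aware trick: enlarge the base so high-frequency dims are barely
        # touched while low-frequency dims stretch to cover the longer context.
        base = base * scale ** (head_dim / (head_dim - 2))
        inv_freq = 1.0 / base ** (torch.arange(0, head_dim, 2).float() / head_dim)
        angles = torch.outer(torch.arange(max_seq_len).float(), inv_freq)
        return angles.cos(), angles.sin()

    def apply_rope(x, cos, sin):
        # x: (batch, n_head, seq, head_dim); rotate interleaved channel pairs.
        x1, x2 = x[..., 0::2], x[..., 1::2]
        cos, sin = cos[: x.size(-2)], sin[: x.size(-2)]
        rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
        return rotated.flatten(-2)

    q = torch.randn(1, 12, 8192, 64)            # (batch, heads, seq, head_dim)
    cos, sin = ntk_scaled_rope_freqs(64, 8192)  # scale=4.0 targets ~4x the context
    q = apply_rope(q, cos, sin)                 # apply to q and k inside attention
    ```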

What are some alternatives?

When comparing liboai and nanoGPT you can also consider the following projects:

OpenAI C++ - OpenAI C++ is a community-maintained library for the OpenAI API

minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

openai-node - The official Node.js / TypeScript library for the OpenAI API

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

LongtermChatExternalSources - GPT-3 chatbot with long-term memory and external sources

PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

whatsapp-chatgpt - ChatGPT + DALL-E + WhatsApp = AI Assistant :rocket: :robot:

ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

chatGPT-SMS-js - ChatGPT over SMS using Twilio Programmable Messaging, Twilio Serverless Toolkit, OpenAI API, Node.js

nn-zero-to-hero - Neural Networks: Zero to Hero

gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]