Constrained-Text-Genera VS nn-zero-to-hero

Compare Constrained-Text-Genera vs nn-zero-to-hero and see what their differences are.

                Constrained-Text-Genera   nn-zero-to-hero
Mentions        11                        10
Stars           -                         10,499
Growth          -                         -
Activity        -                         2.4
Latest commit   -                         8 days ago
Language        -                         Jupyter Notebook
License         -                         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Constrained-Text-Genera

Posts with mentions or reviews of Constrained-Text-Genera. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
  • Photoshop for Text (2022)
    2 projects | news.ycombinator.com | 6 Apr 2024
    Oh my god. I wrote a whole library called "Constrained Text Generation Studio" where I mused that I wanted a "Photoshop for Text". I'm not even sure which work predates the other: https://github.com/Hellisotherpeople/Constrained-Text-Genera...

    The core idea of a "photoshop for text", specifically a word processor made for prosumers supporting GenAI first class (i.e. oobabooga but actually good), is worth so much. If you're a VC reading this, chances are I want to talk to you to actually execute on the idea from the OP.

  • Ask HN: What have you built with LLMs?
    43 projects | news.ycombinator.com | 5 Feb 2024
    I was working on this stuff before it was cool, so in the sense of the precursor to LLMs (and sometimes supporting LLMs still) I've built many things:

    1. Games you can play with word2vec or related models (they could be drop-in replaced with a sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games

    2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLM's inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...

    3. DebateKG - A bunch of "Semantic Knowledge Graphs" built on my pet debate evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy debate cases: https://github.com/Hellisotherpeople/DebateKG

    4. My failed attempt at a good extractive summarizer. My life work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8

  • You need a mental model of LLMs to build or use a LLM-based product
    2 projects | news.ycombinator.com | 13 Nov 2023
    My mental model for LLMs was built by carefully studying the distribution of its output vocabulary at every time step.

    There are tools that allow you to right-click and see all possible continuations for an LLM like you would in a code IDE[1]. Seeing what this vocabulary is[2] and how trivial modifications to the prompt can impact probabilities will do a lot for improving the mental model of how LLMs operate.

    Shameless self-plug, but software which can do what I am describing is here, and it's worth noting that it ended up as peer-reviewed research. A minimal sketch of the idea is below.

    [1] https://github.com/Hellisotherpeople/Constrained-Text-Genera...
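
    A minimal sketch of that kind of inspection, assuming the HuggingFace transformers API with gpt2 as a stand-in model (the linked tool is GUI-based; this only shows the underlying idea):

        # Inspect the next-token distribution at a single time step.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tokenizer("The capital of France is", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]   # scores for the next token
        probs = torch.softmax(logits, dim=-1)

        top = torch.topk(probs, 10)                  # ten most likely continuations
        for p, idx in zip(top.values, top.indices):
            print(f"{tokenizer.decode([int(idx)])!r}  {float(p):.3f}")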

  • Ask HN: How training of LLM dedicated to code is different from LLM of “text”
    3 projects | news.ycombinator.com | 2 Oct 2023
    Yeah, the LLM outputs a distribution of likely next tokens. It is up to the decoder to select one, and it can use a grammar to enforce certain rules on the output. https://github.com/Hellisotherpeople/Constrained-Text-Genera... or https://github.com/ggerganov/llama.cpp/blob/master/grammars/... for example.
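
    As a toy illustration of that decoder-side enforcement (this is not llama.cpp's actual grammar engine, and gpt2 is only a stand-in model), one can mask every logit the "grammar" forbids before picking a token; the constraint below is digits-only:

        # Greedy decoding under a trivial "grammar": only digit tokens allowed.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        allowed = torch.tensor([i for i in range(len(tokenizer))
                                if tokenizer.decode([i]).strip().isdigit()])

        input_ids = tokenizer("The answer is", return_tensors="pt").input_ids
        for _ in range(4):
            with torch.no_grad():
                logits = model(input_ids).logits[0, -1]
            mask = torch.full_like(logits, float("-inf"))
            mask[allowed] = 0.0                    # every other token is banned
            next_id = torch.argmax(logits + mask)
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)
        print(tokenizer.decode(input_ids[0]))
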
  • Show HN: LLMs can generate valid JSON 100% of the time
    25 projects | news.ycombinator.com | 14 Aug 2023
  • Llama: Add Grammar-Based Sampling
    7 projects | news.ycombinator.com | 21 Jul 2023
    I am in love with this. I tried my hand at building a Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...), and got published at COLING 2022 for my paper on it (https://paperswithcode.com/paper/most-language-models-can-be...), but I always knew that something like this, or the related idea enumerated in this paper, was the way to go: https://arxiv.org/abs/2306.03081
  • Understanding GPT Tokenizers
    10 projects | news.ycombinator.com | 8 Jun 2023
    I agree with you, and I'm SHOCKED at how little work there actually is in phonetics within the NLP community. Consider that most of the phonetic tools that I am using to enforce rhyming or similar syntactic constraints in constrained text generation studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...) were built circa 2014, such as the CMU rhyming dictionary. In most cases, I could not find better modern implementations of these tools.

    I did learn an awful lot about phonetic representations and matching algorithms. Things like "soundex" and "double metaphone" now make sense to me and are fascinating to read about.
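
    For a flavor of that tooling, a sketch assuming the third-party pronouncing and jellyfish packages (which wrap the CMU dictionary and classic phonetic codes; jellyfish provides single metaphone, double metaphone lives in other packages):

        # Rhymes via the CMU pronouncing dictionary, plus soundex/metaphone codes.
        import pronouncing   # pip install pronouncing
        import jellyfish     # pip install jellyfish

        print(pronouncing.rhymes("token")[:10])   # CMU-dict rhymes for "token"
        print(jellyfish.soundex("Robert"))        # 'R163'
        print(jellyfish.metaphone("Thompson"))    # phonetic code for fuzzy matching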

  • Don Knuth Plays with ChatGPT
    6 projects | news.ycombinator.com | 20 May 2023
    https://github.com/hellisotherpeople/constrained-text-genera...

    Just ban the damn tokens and try again. I wish that folks had more intuition around tokenization, and why LLMs struggle to follow syntactic, lexical, or phonetic constraints.
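
    Token banning is exposed directly in the HuggingFace generate() API; a sketch, with gpt2 as a stand-in model and the banned words chosen arbitrarily:

        # Ban specific token sequences during generation via bad_words_ids.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        banned = tokenizer([" very", " really"], add_special_tokens=False).input_ids
        out = model.generate(
            **tokenizer("The movie was", return_tensors="pt"),
            bad_words_ids=banned,       # these sequences can never be emitted
            max_new_tokens=20,
            do_sample=False,
        )
        print(tokenizer.decode(out[0]))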

  • GPT-3 Creative Fiction
    2 projects | news.ycombinator.com | 19 Apr 2023
    My work on constrained text generation / filter-assisted decoding for LLMs is cited in this article! One of my proudest moments was being noticed by my senpai Gwern!

    https://paperswithcode.com/paper/most-language-models-can-be...

    I want to add that just because GPT-4 appears to be far better at following constraints doesn't mean that it's anywhere near perfect at following them. It's better now at my easy example of "ban the letter e", but if you ask for several constraints, or mix lexical and phonetic constraints, it gets pretty awful pretty quickly. Filter-assisted decoding can make any LLM (no matter how awful it is) follow constraints perfectly.

    I can't wait to get someone who's better at coding than me to implement these techniques in the major LLM frontends (oobabooga, llama.cpp, etc.), since my attempt at it was quite poopy research code: https://github.com/hellisotherpeople/constrained-text-genera...

  • Photoshop for Text
    2 projects | news.ycombinator.com | 18 Oct 2022
    The paper I wrote at COLING 2022, titled "Most language models can be poets too", included a GUI constrained text generation studio that I market as being "like Photoshop but for text":

    https://github.com/Hellisotherpeople/Constrained-Text-Genera...

nn-zero-to-hero

Posts with mentions or reviews of nn-zero-to-hero. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-08.
  • Understanding GPT Tokenizers
    10 projects | news.ycombinator.com | 8 Jun 2023
    Andrej covers this in https://github.com/karpathy/nn-zero-to-hero. He explains things in multiple ways, both the matrix multiplications as well as the "programmer's" way of thinking of it - i.e. the lookups. The downside is it takes a while to get through those lectures. I would say for each 1 hour you need another 10 to look stuff up and practice, unless you are fresh out of calculus and linear algebra classes.
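
    That matrix-vs-lookup equivalence fits in a few lines of PyTorch (a sketch of the idea with made-up sizes, not code from the course):

        # Indexing an embedding table equals multiplying by a one-hot vector.
        import torch
        import torch.nn.functional as F

        C = torch.randn(27, 10)              # embedding table: 27 tokens, 10 dims
        ix = torch.tensor([5, 13, 13, 1])    # a few token indices
        onehot = F.one_hot(ix, num_classes=27).float()
        assert torch.allclose(onehot @ C, C[ix])   # same rows either way
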
  • New to AI and ChatGPT - Where do I start?
    1 project | /r/learnmachinelearning | 7 May 2023
  • Let's Create Our Own ChatGPT From Scratch! — An online discussion group starting Tuesday May 16, monthly meetings
    1 project | /r/PhilosophyEvents | 22 Apr 2023
    All the needed course material is here: https://github.com/karpathy/nn-zero-to-hero
  • Any good content for software engineers looking to delve deeper into LLMs/AI/NLP etc?
    1 project | /r/OpenAI | 25 Mar 2023
  • GPT in 60 Lines of NumPy
    9 projects | news.ycombinator.com | 9 Feb 2023
    That concept is not the easiest to describe succinctly inside a file like this, I think (especially as there are various levels of 'beginner' to take into account here). This is considered a very entry-level concept, and I think there might be others who would consider it noise if it were spelled out in the code or described in the comments/blogpost.

    After all, there was a disclaimer that you might have missed up front in the blogpost: "This post assumes familiarity with Python, NumPy, and some basic experience training neural networks." So it is in there! But in the firehose of info we all get, it is easy to miss.

    However, I'm here to help! Thankfully the concept is not too terribly difficult, I believe.

    Effectively, the loss function compresses the task we've described with our labels from our training dataset into our neural network. Ideally, this includes 'all' the information the neural network needs to perform that task well, according to the data we have. If you'd like to know more about the specifics of this, I'd refer you to the original Shannon-Weaver book on information theory -- Weaver's introduction to the topic is in plain English and accessible to (I believe) nearly anyone off the street with enough time and energy to think through and parse some of the concepts. Very good stuff! An initial read-through should take no more than half an hour to an hour or so, and should change the way you think about the world if you've not been introduced to the topic before. You can read a scan of the book at a university-hosted link here: https://raley.english.ucsb.edu/wp-content/Engl800/Shannon-We...

    Using some of the concepts of Shannon's theory, we can see that anything that minimizes an information-theoretic loss function should also learn the prerequisites of the task at hand (features that identify xyz, features that move information about xyz from place A to B in the neural network, etc). In this case, even though it appears we do not have labels -- we certainly do! We are training on predicting the _next words_ in a sequence, and thus humans have already created a very, _very_ richly labeled dataset for free! In this way, getting the data is much easier and the bar to entry for high performance for a neural network is very low -- especially if we want to pivot and 'fine-tune' to other tasks. This is because, to learn the task of predicting the next word, we have to learn tons of other sub-tasks inside of the neural network which overlap with the tasks that we want to perform. And because of the nature of spoken/written language, to truly perform incredibly well we sometimes have to learn these alternative tasks well enough that little-to-no fine-tuning on human-labeled data for a 'secondary' task (for example, question answering) is required! Very cool stuff. A toy version of this 'labels for free' setup is sketched right below.
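
    A toy sketch of that 'labels for free' idea (illustrative tensors and a tiny made-up vocabulary, not code from any linked repo): the targets for next-word prediction are just the inputs shifted left by one position.

        # Next-token prediction: targets come free by shifting the inputs.
        import torch
        import torch.nn.functional as F

        vocab = 50
        tokens = torch.tensor([[11, 3, 42, 7, 19, 23]])   # one tokenized sentence
        inputs, targets = tokens[:, :-1], tokens[:, 1:]   # each token predicts the next
        logits = torch.randn(1, inputs.size(1), vocab)    # stand-in for model output
        loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
        print(loss)   # cross-entropy against the "free" labels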

    This is a very rough introduction, I have not condensed it as much as it could be and certainly, some of the words are more than they should be. But it's an internet comment so this is probably the most I should put into it for now. I hope this helps set you forward a bit on your journey of neural network explanation! :D :D <3 <3 :)))))))))) :fireworks:

    For reference, I'm very interested in what I refer to as Kolmogorov-minimal explanations (see Wikipedia's 'Kolmogorov complexity' once you chew through some of that paper, if you're interested! I am still very much a student of it, but it is a fun topic). In fact (though this repo performs several functions), I made https://github.com/tysam-code/hlb-CIFAR10 as beginner-friendly as possible. One does have to make some decisions to keep verbosity down, and I assume a very basic understanding of what's happening in neural networks there too.

    I have yet to find a good go-to explanation of neural networks as a conceptual intro (I started with Hinton -- love the man, but extremely mathematically technical for a foundation! D:). Karpathy might have a really good one; I think I saw a zero-to-hero course from him a little while back that seemed really good.

    Andrej (practically) got me into deep learning via some of his earlier work, and I really love basically everything that I've seen the man put out. I skimmed the first video of his from this series and it seems pretty darn good, I trust his content. You should take a look! (Github and first video: https://github.com/karpathy/nn-zero-to-hero, https://youtu.be/VMj-3S1tku0)

    For reference, he is the person that's made a lot of cool things recently, including his own minimal GPT (https://github.com/karpathy/minGPT), and the much smaller version of it (https://github.com/karpathy/nanoGPT). But of course, since we are in this blog post I would refer you to this 60 line numpy GPT first (A. to keep us on track, B. because I skimmed it and it seemed very helpful! I'd recommend taking a look at outside sources if you're feeling particularly voracious in expanding your knowledge here.)

    I hope this helps give you a solid introduction to the basics of this concept, and/or for anyone else reading this, feel free to let me know if you have any technically (or-otherwise) appropriate questions here, many thanks and much love! <3 <3 <3 <3 :DDDDDDDD :)))))))) :)))) :))))

  • Trending ML repos of the week 📈
    10 projects | dev.to | 31 Jan 2023
    6️⃣ karpathy/nn-zero-to-hero
  • What can I do to start learning machine learning?
    1 project | /r/learnmachinelearning | 26 Jan 2023
    I’m a software engineer with zero experience with ML but have an interest in learning. I am comfortable programming in any dynamic object-oriented language. My basic plan to get started is to spend some time with the mathematical foundations of ML (the Mathematical Foundations of Machine Learning course on Udemy looks decent). It also covers these concepts in the context of popular ML frameworks such as TensorFlow and PyTorch, so that's kind of a two-for-one. I also stumbled upon this course: https://github.com/karpathy/nn-zero-to-hero.
  • Neural Networks: Zero to Hero
    1 project | news.ycombinator.com | 24 Jan 2023
    1 project | news.ycombinator.com | 12 Sep 2022
  • Mesterséges intelligencia (Artificial Intelligence)
    1 project | /r/hungary | 21 Jan 2023

What are some alternatives?

When comparing Constrained-Text-Genera and nn-zero-to-hero you can also consider the following projects:

outlines - Structured Text Generation

nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.

Constrained-Text-Generation-Studio - Code repo for "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" at the (CAI2) workshop, jointly held at (COLING 2022)

minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

agency - Agency: Robust LLM Agent Management with Go

llama.go - llama.go is like llama.cpp in pure Golang!

tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer

awesome-chatgpt-prompts - This repo includes ChatGPT prompt curation to use ChatGPT better.

torch-grammar

ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

relm - ReLM is a Regular Expression engine for Language Models

tuning_playbook - A playbook for systematically maximizing the performance of deep learning models.