potatogpt VS workshop

Compare potatogpt vs workshop and see what their differences are.

potatogpt

Pure Typescript, dependency free, ridiculously slow implementation of GPT2 for educational purposes (by newhouseb)
                 potatogpt            workshop
Mentions         2                    3
Stars            40                   13
Growth           -                    -
Activity         6.0                  10.0
Latest commit    about 1 year ago     over 1 year ago
Language         TypeScript           Python
License          -                    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

potatogpt

Posts with mentions or reviews of potatogpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-19.

workshop

Posts with mentions or reviews of workshop. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-26.
  • Transformers from Scratch
    4 projects | news.ycombinator.com | 26 Apr 2023
    - There are a few common ways you might see this done, but they broadly work by assigning fixed or learned embeddings to each position in the input token sequence. These embeddings can be added to our matrix above so that the first row gets the embedding for the first position added to it, the second row gets the embedding for the second position, and so on. Now if the tokens are reordered, the embedding matrix will no longer be the same. Alternatively, these embeddings can be concatenated horizontally to our matrix: this guarantees the positional information is kept entirely separate from the linguistic information (at the cost of a larger combined embedding that the block must support).

    I put together this repository at the end of last year to help visualize the internals of a transformer block when applied to a toy problem: https://github.com/rstebbing/workshop/tree/main/experiments/.... It is not super long, and the point is to make it easier to distinguish between the quantities you referred to by seeing them (which is possible when the embeddings are low-dimensional).

    I hope this helps!
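
    A minimal NumPy sketch of the two options described above (adding versus concatenating positional embeddings); the dimensions and random values are illustrative assumptions, not code from the linked repository:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    seq_len, d_model = 4, 8                              # toy sizes, chosen arbitrarily
    tok_emb = rng.normal(size=(seq_len, d_model))        # one row per input token
    pos_emb = rng.normal(size=(seq_len, d_model))        # fixed or learned, one row per position

    # Option 1: add the positional embedding to each row, so row i also encodes
    # "this token sits at position i"; reordering the tokens changes the matrix.
    added = tok_emb + pos_emb

    # Option 2: concatenate horizontally, keeping positional and linguistic
    # information in separate columns at the cost of a wider combined embedding.
    concatenated = np.concatenate([tok_emb, pos_emb], axis=1)

    print(added.shape, concatenated.shape)               # (4, 8) (4, 16)
    ```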

  • Understanding and Coding the Self-Attention Mechanism of Large Language Models
    1 project | news.ycombinator.com | 10 Feb 2023
    At the end of last year I put together a repository to try and show what is achieved by self-attention on a toy example: detect whether a sequence of characters contains both "a" and "b".

    The toy problem is useful because the model dimensionality is low enough to make visualization straightforward. The walkthrough also covers how things can go wrong, how they can be improved, etc.

    The walkthrough and code are all available here: https://github.com/rstebbing/workshop/tree/main/experiments/....

    It's not terse like nanoGPT or similar because the goal is a bit different. In particular, to gain more intuition about the intermediate attention computations, the intermediate tensors are named and persisted so they can be compared and visualized after the fact. Everything should be exactly reproducible locally too!
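
    A minimal sketch of single-head self-attention with every intermediate tensor named so it can be inspected after the fact, in the spirit described above; the characters, dimensions, and random weights are assumptions for illustration, not code from the workshop repository:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy character sequence and a tiny embedding table.
    chars = list("abca")
    vocab = {c: i for i, c in enumerate(sorted(set("abc")))}
    d_model = 4

    embedding = rng.normal(size=(len(vocab), d_model))
    x = embedding[[vocab[c] for c in chars]]             # (seq_len, d_model)

    # Projection matrices for queries, keys, and values.
    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    queries = x @ W_q
    keys    = x @ W_k
    values  = x @ W_v

    # Scaled dot-product attention, with each step kept as a named array.
    scores  = queries @ keys.T / np.sqrt(d_model)        # (seq_len, seq_len)
    shifted = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    output  = weights @ values                           # (seq_len, d_model)

    # weights[i, j] is how much position i attends to position j; visualizing
    # this matrix is what makes a low-dimensional toy problem easy to follow.
    print(weights.round(2))
    ```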

  • The Transformer Family
    1 project | news.ycombinator.com | 29 Jan 2023
    I put together a repository at the end of last year to walk through a basic use of a single-layer Transformer: detect whether "a" and "b" are in a sequence of characters. Everything is reproducible, so it's hopefully also helpful for getting used to some of the tooling!

    https://github.com/rstebbing/workshop/tree/main/experiments/...

What are some alternatives?

When comparing potatogpt and workshop, you can also consider the following projects:

benchmark - TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.

scratch-www - Standalone web client for Scratch

requests-wasm-polyfill - Drop-in replacement for the requests library for wasm python

picoGPT - An unnecessarily tiny implementation of GPT-2 in NumPy.

shumai - Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight

anansi - open source tooling for AI search and understanding

opov - Operator Overloading for Typescript with Tagged Template Literals

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web

llama.cpp - LLM inference in C/C++

onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

tfjs - A WebGL accelerated JavaScript library for training and deploying ML models.