transformers
Pluto.jl
| | transformers | Pluto.jl |
|---|---|---|
| Mentions | 173 | 78 |
| Stars | 124,115 | 4,860 |
| Growth | 2.4% | - |
| Activity | 10.0 | 9.4 |
| Latest commit | 6 days ago | 6 days ago |
| Language | Python | JavaScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
-
AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
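For readers unfamiliar with the 1-cycle policy mentioned above, here is a minimal sketch using PyTorch's built-in OneCycleLR scheduler with a transformers model; the model name, max_lr, and step count are illustrative assumptions, not recommendations from the thread.

```python
# Minimal 1-cycle schedule sketch (illustrative values only).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

total_steps = 1_000  # assumed length of the training run
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-3, total_steps=total_steps)

# Inside the training loop, step the scheduler once per batch:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```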
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method to deal with both self and cross attention mechanisms, masking, positional and relative encodings, interpolation etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of llama by FAIR with the implementation by HF to get an idea.
-
Self train a super tiny model recommendations
You can train it with the code provided in the transformers repo: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py
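As a rough illustration of what that script does end to end, here is a hedged sketch of training a deliberately tiny GPT-2-style model from scratch with the Trainer API; the config sizes and the dataset slice are arbitrary choices for illustration, not values taken from the script.

```python
# Sketch: train a "super tiny" causal LM from scratch (illustrative sizes).
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPT2Config, GPT2LMHeadModel, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = GPT2Config(n_layer=2, n_head=2, n_embd=128)  # tiny, for demonstration
model = GPT2LMHeadModel(config)

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
).filter(lambda ex: len(ex["input_ids"]) > 0)  # drop empty lines

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-clm",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```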
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
transformers uses accelerate if you call it with device_map='auto'
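A minimal sketch of that behaviour, assuming accelerate is installed; the model name is just a placeholder:

```python
# With device_map="auto", from_pretrained lets accelerate place the weights
# across the available GPU(s)/CPU automatically (requires `pip install accelerate`).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```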
-
Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
Too much money is being thrown around on BS in the LLM space, and hardly any of it is going to places where it matters.
For example, the researchers working hard on better text sampling techniques, or on better constraint techniques (i.e. like this https://arxiv.org/abs/2306.03081), or on actual negative prompting/CFG in LLMs (i.e. like this https://github.com/huggingface/transformers/issues/24536) are doing far FAR more to advance the state of AI than dozens of VC backed LLM "prompt engineering" companies operating today.
HN and the NLP community have some serious blind spots when it comes to exploiting their own technology. At least someone at Andreessen Horowitz got a clue and gave some funding to Oobabooga - still waiting for Automatic1111 to get any funding.
-
🐍🐍 23 issues to grow yourself as an exceptional open-source Python expert 🧑💻 🥇
Repo : https://github.com/huggingface/transformers
-
Whisper prompt tuning
From what I know, Whisper already supports prompting (https://github.com/huggingface/transformers/pull/22496). Can I somehow freeze the whole model and tune only the prompt, or would I need to write an implementation from scratch?
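For reference, a hedged sketch of the prompting support added in that PR (this only conditions generation on a text prompt; nothing is trained). The exact get_prompt_ids/prompt_ids signatures are assumptions based on the linked PR, so check them against your installed version; the weight freezing is shown only to illustrate that gradient-based prompt *tuning* would still need a soft-prompt implementation on top.

```python
# Sketch of Whisper prompting via prompt_ids (assumed API from the linked PR).
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Freezing every weight is easy, but the prompt below is discrete token IDs,
# so "tuning only the prompt" would still need learnable soft-prompt embeddings.
for p in model.parameters():
    p.requires_grad_(False)

# Normally: processor(audio_array, sampling_rate=16000, return_tensors="pt").input_features
input_features = torch.zeros(1, 80, 3000)  # dummy log-mel features, illustration only

prompt_ids = processor.get_prompt_ids("Pluto.jl, Hugging Face", return_tensors="pt")
generated = model.generate(input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(generated, skip_special_tokens=True))
```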
Pluto.jl
-
Potential of the Julia programming language for high energy physics computing
I used to think that notebook-based development and package-based development were diametrically opposed, but Pluto.jl notebooks have changed my mind about this.
A Pluto.jl notebook is a human readable Julia source file. The Pluto.jl package is itself developed via Pluto.jl notebooks.
https://github.com/fonsp/Pluto.jl
Also, the VSCode Julia plugin tooling has really expanded in functionality and usability for me in the past year. The integrated debugging took some work to set up, but is fast enough to drop into a local frame.
https://code.visualstudio.com/docs/languages/julia
Julia is the first language in which I have achieved full life-cycle integration from exploratory code to shareable package. It even runs quite well on my Android. 2023 is the first year I was able to solve a differential equation or render a 3D surface from a calculated mesh with the hardware in my pocket.
-
Ask HN: Why don't other languages have Jupyter style notebooks?
Re Julia, there is also Pluto.jl, another notebook-like environment for Julia. It's been a few years since I played with it, but it looked cool; for example, it handles state differently, so you don't get into the same messes as with IPython notebooks. https://plutojl.org/
-
Looking for a Julia gui framework with a demo like EGUI
For this, notebooks are often used. Julia offers a uniquely nice and interactive Pluto notebook for the web: https://github.com/fonsp/Pluto.jl
-
Excel Labs, a Microsoft Garage Project
-
IPyflow: Reactive Python Notebooks in Jupyter(Lab)
I believe this is what Pluto sets out to do for Julia.
I used it as part of the “Computational Thinking with Julia” course a year or two back. Even then the beta software was very good, and some of the demos the Pluto dev showed were nothing short of amazing.
-
For Julia, is there something like VSCode's Python interactive window?
-
What have you "washed your hands of" in Python?
I think what you want is Pluto!
-
Show HN: Out of order execution in Jupyter notebooks is a solved problem
I like how Pluto.jl handles this:
> Pluto offers an environment where changed code takes effect instantly and where deleted code leaves no trace. Unlike Jupyter or Matlab, there is no mutable workspace, but rather, an important guarantee:
> At any instant, the program state is completely described by the code you see.
-
My Journey from R to Julia
I only used Julia for a short time, but I didn't see the blazing fast speeds I was promised. I've seen the benchmarks, of course, on which the claims are founded, but the C-like speeds weren't obvious to me in everyday data science workflows. In the end, there wasn't sufficient motivation for me to switch to Julia as my weapon of choice. I do like Pluto[0], though...
-
Using Julia with Anaconda and Voilà
You can use https://github.com/fonsp/Pluto.jl instead of Jupyter.
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
llama - Inference code for Llama models
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
huggingface_hub - The official Python client for the Huggingface Hub.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
vim-slime - A vim plugin to give you some slime. (Emacs)
faiss - A library for efficient similarity search and clustering of dense vectors.
rmarkdown - Dynamic Documents for R