| | unsloth | hn-search |
|---|---|---|
| Mentions | 15 | 1,637 |
| Stars | 8,974 | 524 |
| Growth | 42.8% | 0.2% |
| Activity | 9.4 | 2.9 |
| Last commit | 4 days ago | 6 months ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
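The weighting scheme described above can be sketched as an exponentially decayed sum over commit ages. The tracker's actual formula is not published, so the half-life and scoring below are illustrative assumptions only:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted activity: each commit contributes
    0.5 ** (age / half_life), so newer commits count more."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A repo with recent commits outscores one with the same number
# of commits made months ago (ages in days).
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([150, 160, 170, 180, 190])
```

Any decay curve with the same shape would produce the same ranking behavior: a commit today counts fully, one from months ago counts for almost nothing.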
unsloth
-
Ask HN: Most efficient way to fine-tune an LLM in 2024?
Gemma 7b is 2.4x faster than HF + FA2.
Check out https://github.com/unslothai/unsloth for full benchmarks!
-
Gemma doesn't suck anymore – 8 bug fixes
Here are the missing links:
* Gemma, a family of open models from Google: https://ai.google.dev/gemma
* Unsloth is a tool/method for training models faster (IIUC): https://github.com/unslothai/unsloth
-
AMD ROCm Software Blogs
Thanks! Again, partnerships over customers. If you're experienced and have the technical chops to make an MI300x sing, we want to work with you. Our model is that we are the capex/opex investor for businesses. As much as I love software, Hot Aisle is more of a hardware business. Running super high end large scale compute is an extreme challenge in itself. We are less interested in building the software side of things and want to foster those who can focus on that side.
https://github.com/unslothai/unsloth/issues/160
https://github.com/search?q=repo%3Apredibase%2Florax+rocm&ty...
https://github.com/sgl-project/sglang/issues/157
https://github.com/casper-hansen/AutoAWQ (supports rocm)
-
Show HN: We got fine-tuning Mistral-7B to not suck
Unsloth’s colab notebooks for fine-tuning Mistral-7B are super easy to use and run fine in just about any colab instance:
https://github.com/unslothai/unsloth
It’s my default now for experimenting and basic training. If I want to get into the weeds with the training, I use axolotl, but 9/10, it’s not really necessary.
-
Mistral 7B Fine-Tune Optimized
If anyone wants to finetune their own Mistral 7b model 2.2x faster and use 62% less memory, give our open source package Unsloth a try! https://github.com/unslothai/unsloth :)
-
Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience?
https://github.com/unslothai/unsloth seems good and more relevant to your aims perhaps but I haven't tried it.
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
The unsloth project offers some low-level optimizations for Llama et al., and as of today some preliminary Mistral work (which I heard uses the Llama architecture?)
- Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
-
80% faster, 50% less memory, 0% accuracy loss Llama finetuning
This seems to just be a link to the Unsloth Github repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?
[0]: https://github.com/unslothai/unsloth
- 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
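Claims like "50% less memory" ultimately come down to bytes-per-parameter arithmetic. A back-of-the-envelope sketch for a 7B-parameter model (the figures are generic weight-storage arithmetic, not Unsloth's actual memory accounting, which also covers gradients and optimizer state):

```python
def weight_memory_gib(n_params, bits_per_param):
    """Memory needed for the model weights alone, ignoring
    gradients, optimizer state, and activations."""
    return n_params * bits_per_param / 8 / 1024**3

params_7b = 7_000_000_000
fp16_gib = weight_memory_gib(params_7b, 16)  # ~13.0 GiB
int4_gib = weight_memory_gib(params_7b, 4)   # ~3.3 GiB
```

Halving the bits per parameter halves the weight footprint, which is why 4-bit quantized fine-tuning fits on consumer GPUs where fp16 does not.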
hn-search
-
Rule of Thumb: Anything that looks fancy is not worth your time
- Ads with Psychological tricks
Truly good websites have around two facts per ten-word sentence, and cut straight to the chase. Also: good websites give you the names of all their competitors/alternative websites before showing their own stuff, and give you further reading.
Right now the world of technology is supposedly more innovative than ever, but somehow Wikipedia (https://www.wikipedia.org/) and Search Hackernews (https://hn.algolia.com/) beat billion dollar search engines.
Articles written decades ago are still unsurpassed in terms of quality and ease of understanding, but the best that modern websites can do is textbook explanations. It is time society graduated from boilerplate buzzword textbook culture.
Now the gems of the internet are slowly being buried beneath mountains of trash.
If something sounds boilerplate it isn't good enough.
Don't bother saying something that has been said before, and better.
-
What makes a translation great
>for more detail: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Oh, I see. We actually discussed Pound about four years ago - just a little back and forth about the ABC of Reading: https://news.ycombinator.com/item?id=24196681
>What's your explanation of why Pound went Fascist?
I'm not sure I particularly have one; I haven't read any of his longer political or cultural (i.e. non-literary) works. I just think it's silly to correlate an approach to translation that you dislike with fascism. Especially as I'm not sure it even makes sense on its own terms: I can only read your comment as 'lazy translator? Figures that he would be a fascist', but if I imagine the type of translation a fascist would approve of, the approach I picture is fastidious, fussy, concerned with fidelity to the point of stickler-ishness. (Isn't that from where we get 'grammar nazi'?)
And oh, well, since you ask I'll take a shy at it: my vague sense is that he became fascist because he saw a society in decline as it became more and more a sham society: opulence without virtue, power without vigour, money no longer tied to actually existing goods. (Of course, all of this shades easily into antisemitism.) He saw fascism as the answer; it's easier to see in retrospect that it wasn't.
-
Zed Decoded: Linux When? – Zed Blog
"multiplayer notepad" goes back 15 years at least - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... notepad&sort=byDate&type=comment
It was used by a popular website that opened a text document anyone viewing could type into, though I can't remember its name. That became a thing in Google Docs, Microsoft Office, Floobits, and lots of self-hosted and cloned sites.
-
Louis Rossmann: YouTube's Legal Team sent me a letter [video]
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at [email protected].
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
-
An Oil Price-Fixing Conspiracy Caused 27% of All Inflation in 2021
Ok, but please don't post unsubstantive comments to Hacker News.
I understand the reason for repeating these sentiments—it's the same reason why they get upvoted to the top of threads*—but repetition of this kind is what we're most trying to avoid here.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://news.ycombinator.com/newsguidelines.html
* I've marked this one off topic now.
-
Validating app for manufacturers enhancing process reliability and efficiency
I was looking for it in the guidelines. There are a couple of conventions for postings. Consider some prior examples: [https://hn.algolia.com/?q=show+hn]
-
Show HN: Hacker Search – A semantic search engine for Hacker News
yeah there are only three stories coming up from the site search
https://hn.algolia.com/?q=postgres+clustering
only one is semantically correct; the others pick up the wrong sense of clustering (i.e. k-means instead of multi-master writes)
but yeah if one doesn't test the hard cases, how does one know it preserves semantics :D
- Longevity of Recordable CDs, DVDs and Blu-Rays
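For context, hn.algolia.com is backed by Algolia's public HN Search API. A minimal sketch of building a query URL against it, using the search from the comment above as the example (endpoint and parameter names follow the public API; the helper function is my own):

```python
from urllib.parse import urlencode

def hn_search_url(query, tags="story"):
    """Build a query URL for Algolia's public HN Search API,
    the service behind hn.algolia.com."""
    base = "https://hn.algolia.com/api/v1/search"
    return f"{base}?{urlencode({'query': query, 'tags': tags})}"

url = hn_search_url("postgres clustering")
```

The `tags` parameter is how the API distinguishes stories from comments, which matters for searches like the one discussed here.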
-
The Scientific Method Part 5: Illusions, Delusions, and Dreams
Like dismissing the work of Feyerabend or Wittgenstein without seemingly having read either:
https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=tr...
-
Any Google Analytics Alternatives?
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
duckduckgo-locales - Translation files for https://duckduckgo.com
llama.cpp - LLM inference in C/C++
v - Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io
nanoChatGPT - nanogpt turned into a chat model
parser - 📜 Extract meaningful content from the chaos of a web page
gpt-fast - Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
readability - A standalone version of the readability lib
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
milkdown - 🍼 Plugin driven WYSIWYG markdown editor framework.