Blog Alternatives
Similar projects and alternatives to blog
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports Transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
-
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
-
WizardLM
Discontinued. A family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder, and WizardMath.
-
awesome-notebooks
A catalog of data & AI notebook templates: prompts, plugins, models, workflow automation, analytics, and code snippets, following the IMO framework to be searchable and reusable in any context.
blog reviews and mentions
-
Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
[4] https://github.com/huggingface/blog/blob/main/starcoder.md
-
A comprehensive guide to running Llama 2 locally
If you just want to do inference/mess around with the model and have a 16GB GPU, then this[0] is enough to paste into a notebook. You need to have access to the HF models though.
0. https://github.com/huggingface/blog/blob/main/llama2.md#usin...
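A quick back-of-the-envelope check on why 16 GB suffices (assuming the 7B variant loaded in fp16, i.e. two bytes per parameter; activations and the KV cache add more on top, which is why this budget only covers inference-scale use):

```python
# Rough VRAM estimate for Llama-2-7B weights alone, loaded in fp16.
params = 7_000_000_000          # 7B parameters
bytes_per_param = 2             # fp16 = 16 bits = 2 bytes
weight_gb = params * bytes_per_param / 1024**3
print(f"{weight_gb:.1f} GB")    # ~13.0 GB of weights on a 16 GB card
```

The remaining ~3 GB is headroom for activations and the KV cache at modest context lengths.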
-
Let’s train your first Offline Decision Transformer model from scratch 🤖
The hands-on tutorial 👉 https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb
-
How to switch to half precision fp16?
I'm also running the optimized script, but it doesn't run at 512x512 on my RTX 3050 Ti mobile. On this website, they recommend switching to fp16 for GPUs with less than 10 GB of VRAM.
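For context on why fp16 helps: half precision stores each value in two bytes instead of four, roughly halving the VRAM the weights need. A minimal pure-Python illustration of the size difference (the diffusers-side change, assuming the standard Stable Diffusion pipeline, is typically passing `torch_dtype=torch.float16` to `from_pretrained`, shown only as a comment here since it needs a GPU and a model download):

```python
import struct

# IEEE 754 single vs half precision: "f" packs 4 bytes, "e" packs 2 bytes.
as_fp32 = struct.pack("f", 1.5)
as_fp16 = struct.pack("e", 1.5)
print(len(as_fp32), len(as_fp16))  # 4 2

# Scaled up, a 1B-parameter model's weights drop from ~4 GB to ~2 GB:
params = 1_000_000_000
print(params * len(as_fp32) / 1e9, "GB ->", params * len(as_fp16) / 1e9, "GB")

# In diffusers the usual switch looks like this (not run here):
# pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# pipe = pipe.to("cuda")
```

Half precision also loses mantissa bits, but for Stable Diffusion inference the quality impact is generally negligible.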
-
Are people hiding their deep learning code?
Here's a notebook illustrating how to train a language model from scratch: https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
Stats
The primary programming language of blog is Jupyter Notebook.