Can we discuss MLOps, Deployment, Optimizations, and Speed?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • accelerate

    🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support

  • accelerate is a best-in-class lib for deploying models, especially across multi-GPU and multi-node setups.
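For context, launching a script across multiple GPUs with accelerate looks roughly like this (a sketch; `train.py` stands in for your own training script):

```shell
# One-time interactive setup: records GPU count, mixed precision, etc.
accelerate config

# Run the same script on all local GPUs; accelerate spawns the
# processes and handles device placement for you.
accelerate launch --multi_gpu train.py
```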

  • transformers

    🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

  • transformers uses accelerate under the hood when you load a model with `device_map='auto'`
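A minimal sketch of that pattern (the model ID is just an example; imports are kept inside the function so the sketch is inspectable even where transformers is not installed):

```python
def load_sharded(model_id: str = "meta-llama/Meta-Llama-3-8B"):
    """Load a causal LM, letting accelerate place layers across
    available GPUs (spilling leftovers to CPU RAM if needed)."""
    # Lazy imports: transformers pulls in accelerate for device_map.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map='auto' hands placement to accelerate: it shards the
    # model over GPUs and offloads remaining layers to CPU memory.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return model, tokenizer
```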

  • unsloth

    Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory

  • The unsloth project offers some low-level optimizations for Llama and related models, and as of today some preliminary Mistral support (which, as I understand it, largely follows the Llama architecture)

  • llama.cpp

    LLM inference in C/C++

  • llama.cpp is a great resource for running quantized models ("quants"), and even though it's named after Llama, it's the go-to backend for basically all LLMs right now (ctransformers is dead)
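For reference, running a GGUF quant with llama.cpp looks roughly like this (a sketch; the model filename is hypothetical, and `llama-cli` is the main example binary in current llama.cpp builds):

```shell
# Run a 4-bit (Q4_K_M) quantized model from a built llama.cpp tree.
./llama-cli -m ./models/mistral-7b.Q4_K_M.gguf \
  -p "Explain quantization in one sentence." -n 128
```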

  • DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

  • DeepSpeed can handle parallelism concerns, and can even offload model weights and optimizer state to CPU RAM, or even NVMe. I'm surprised I don't see this project used more.
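That offloading is driven by a DeepSpeed JSON config; the ZeRO stage-3 fragment below is a sketch with illustrative values (including the `nvme_path`), not a tuned setup:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "nvme", "nvme_path": "/local_nvme" },
    "offload_optimizer": { "device": "cpu" }
  },
  "train_micro_batch_size_per_gpu": 1,
  "bf16": { "enabled": true }
}
```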

  • ollama

    Get up and running with Llama 3, Mistral, Gemma, and other large language models.

  • uniteai

    Your AI Stack in Your Editor

  • I recently went through that same migration with UniteAI, and had to swap ctransformers back out for llama.cpp


Related posts

  • FLaNK Stack 05 Feb 2024

    49 projects | dev.to | 5 Feb 2024
  • A Curated List of Free ML/ DL YouTube Courses

    1 project | news.ycombinator.com | 28 Jan 2024
  • ML-YouTube-Courses: NEW Courses - star count:11622.0

    1 project | /r/algoprojects | 7 Dec 2023