Optimized implementation of training/fine-tuning of LLMs [D]

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • FasterTransformer

    Transformer-related optimization, including BERT and GPT

  • Has anyone tried to optimize the forward and backward passes using custom CUDA code or fused kernels to speed up the training of current LLMs? I have only seen FasterTransformer (NVIDIA/FasterTransformer) and similar tools, but they focus only on inference.

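Regarding the fused-kernel question above, here is a minimal sketch, in plain CUDA, of the kind of fusion it asks about, extended to the backward pass so it applies to training rather than only inference: a bias add fused with a tanh-approximated GELU in a single kernel for both the forward and backward passes, so the pre-activation stays in registers instead of making an extra round trip to global memory. The kernel names, shapes, and launch parameters are illustrative assumptions, not code from FasterTransformer or any project listed here, and the bias-gradient reduction is omitted for brevity.

    // A minimal, self-contained sketch (not from FasterTransformer): a fused
    // bias-add + GELU kernel pair covering both the forward and the backward
    // pass, the kind of elementwise fusion relevant to training. Shapes and
    // launch parameters below are assumptions.
    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>

    __device__ __forceinline__ float gelu_tanh(float x) {
        // GELU, tanh approximation: 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3)))
        const float k = 0.7978845608028654f;  // sqrt(2/pi)
        return 0.5f * x * (1.0f + tanhf(k * (x + 0.044715f * x * x * x)));
    }

    __device__ __forceinline__ float gelu_tanh_grad(float x) {
        // Derivative of the tanh-approximated GELU with respect to x.
        const float k = 0.7978845608028654f;
        float t = tanhf(k * (x + 0.044715f * x * x * x));
        float sech2 = 1.0f - t * t;
        return 0.5f * (1.0f + t) + 0.5f * x * sech2 * k * (1.0f + 3.0f * 0.044715f * x * x);
    }

    // Forward: out[i] = GELU(in[i] + bias[i % cols]); the bias add and the
    // activation happen in one pass, so the pre-activation never hits global memory.
    __global__ void fused_bias_gelu_fwd(const float* in, const float* bias,
                                        float* out, int rows, int cols) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < rows * cols) {
            float x = in[i] + bias[i % cols];
            out[i] = gelu_tanh(x);
        }
    }

    // Backward: grad_in[i] = grad_out[i] * GELU'(in[i] + bias[i % cols]).
    // The pre-activation is recomputed instead of stored; the bias-gradient
    // reduction is omitted for brevity.
    __global__ void fused_bias_gelu_bwd(const float* grad_out, const float* in,
                                        const float* bias, float* grad_in,
                                        int rows, int cols) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < rows * cols) {
            float x = in[i] + bias[i % cols];
            grad_in[i] = grad_out[i] * gelu_tanh_grad(x);
        }
    }

    int main() {
        const int rows = 1024, cols = 4096;  // assumed activation shape (tokens x hidden)
        const int n = rows * cols;
        float *in, *bias, *out, *gout, *gin;
        cudaMallocManaged(&in,   n    * sizeof(float));
        cudaMallocManaged(&bias, cols * sizeof(float));
        cudaMallocManaged(&out,  n    * sizeof(float));
        cudaMallocManaged(&gout, n    * sizeof(float));
        cudaMallocManaged(&gin,  n    * sizeof(float));
        for (int i = 0; i < n; ++i) { in[i] = 0.01f * (i % 100); gout[i] = 1.0f; }
        for (int j = 0; j < cols; ++j) bias[j] = 0.1f;

        const int threads = 256, blocks = (n + threads - 1) / threads;
        fused_bias_gelu_fwd<<<blocks, threads>>>(in, bias, out, rows, cols);
        fused_bias_gelu_bwd<<<blocks, threads>>>(gout, in, bias, gin, rows, cols);
        cudaDeviceSynchronize();
        printf("out[0]=%f grad_in[0]=%f\n", out[0], gin[0]);

        cudaFree(in); cudaFree(bias); cudaFree(out); cudaFree(gout); cudaFree(gin);
        return 0;
    }

Compiling this with nvcc and timing it against an unfused bias-add kernel followed by a separate GELU kernel is a straightforward way to check whether the fusion actually pays off for a given GPU and problem size.
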
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • Why are self attention not as deployment friendly?

    2 projects | /r/deeplearning | 26 Jul 2022
  • Whether the ML computation engineering expertise will be valuable, is the question.

    2 projects | /r/LanguageTechnology | 21 Apr 2023
  • Lack of activation in transformer feedforward layer?

    2 projects | /r/learnmachinelearning | 20 May 2021
  • How to Build an AI Text Generator: Text Generation with a GPT-2 Model

    3 projects | dev.to | 2 Feb 2021
  • AI leaderboards are no longer useful. It's time to switch to Pareto curves

    1 project | news.ycombinator.com | 30 Apr 2024