Is what I need possible currently?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

  • ue5-llama-lora

    A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.

  • This would be an interesting experiment. People are already doing similar things, such as expanding a model's knowledge domain into something specific. Here's an example of how someone created a LoRA for UE5 documentation: https://github.com/bublint/ue5-llama-lora (a minimal sketch of that LoRA workflow follows this list)

  • h2o-llmstudio

    H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/

  • Check out LLM Studio for fine-tuning LLMs. Open source: https://github.com/h2oai/h2o-llmstudio
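
The peft and ue5-llama-lora entries above revolve around the same workflow: attach small LoRA adapters to a frozen base model and train only those adapters on a domain-specific corpus (for example, exported documentation pages). Below is a minimal sketch of that workflow using 🤗 PEFT and 🤗 Transformers; the base model name, file path, and hyperparameters are illustrative assumptions rather than values taken from either project.

# Minimal LoRA fine-tuning sketch with 🤗 PEFT (placeholder model and paths).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from peft import LoraConfig, get_peft_model

base_model = "openlm-research/open_llama_3b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:               # LLaMA-style tokenizers often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; only these
# adapter weights are trained, the base weights stay frozen.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Plain-text domain corpus (e.g. documentation), one entry per line.
dataset = load_dataset("text", data_files={"train": "docs_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=3,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights

The saved adapter is only a few megabytes; at inference time it is loaded on top of the unchanged base model (for example with peft.PeftModel.from_pretrained), which is what makes this kind of domain adaptation feasible on local hardware.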

NOTE: The number of mentions on this list reflects mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • LoftQ: LoRA-fine-tuning-aware Quantization

    1 project | news.ycombinator.com | 19 Dec 2023
  • PEFT 0.5 supports fine-tuning GPTQ models

    1 project | /r/LocalLLaMA | 24 Aug 2023
  • Exploding loss when trying to train OpenOrca-Platypus2-13B

    1 project | /r/LocalLLaMA | 21 Aug 2023
  • [D] Is there a difference between p-tuning and prefix tuning ?

    1 project | /r/MachineLearning | 3 Jul 2023
  • How does using QLoRAs when running Llama on CPU work?

    2 projects | /r/LocalLLaMA | 23 Jun 2023