Show HN: GPU Prices on eBay

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

  • Depends on what model you want to train, and how well you want your computer to keep working while you're doing it.

    If you're interested in large language models, there's a table of VRAM requirements for fine-tuning at [1], which says you could do the most basic type of fine-tuning on a 7B-parameter model with 8GB of VRAM.

    You'll find that training takes quite a long time, and since most of the GPU's capacity is going to training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs use the GPU, after all.

    Spend a bit more and you'll probably have a better time.

    [1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
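    As a rough sanity check on that 8GB figure, here's a back-of-envelope VRAM estimate for QLoRA-style fine-tuning. The function and all the constants below (4-bit base weights, ~40M LoRA parameters, a flat 2GB activation/overhead budget) are illustrative assumptions, not numbers from the linked table:

```python
# Back-of-envelope VRAM estimate for QLoRA-style fine-tuning.
# Assumptions (illustrative, not from LLaMA-Factory's table):
#   - base model weights quantized to 4 bits
#   - LoRA adapters trained in fp16 with Adam optimizer states
#   - a flat budget for activations, KV cache, and framework overhead

def estimate_vram_gb(params_b: float,
                     quant_bits: int = 4,
                     lora_params_m: float = 40.0,
                     overhead_gb: float = 2.0) -> float:
    # Quantized base weights: params * bits / 8 bytes each
    weights_gb = params_b * 1e9 * quant_bits / 8 / 1e9
    # LoRA adapters: fp16 weights (2 B) + fp16 grads (2 B)
    # + two fp32 Adam moments (8 B) per trainable parameter
    adapter_gb = lora_params_m * 1e6 * (2 + 2 + 8) / 1e9
    return weights_gb + adapter_gb + overhead_gb

print(round(estimate_vram_gb(7), 1))  # → 6.0
```

    Under these assumptions a 7B model comes out to roughly 6GB, comfortably inside an 8GB card - though real usage varies a lot with sequence length, batch size, and framework, which is why spending a bit more buys headroom.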

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Better and Faster Large Language Models via Multi-Token Prediction

    1 project | news.ycombinator.com | 1 May 2024
  • Llama.cpp Bfloat16 Support

    1 project | news.ycombinator.com | 30 Apr 2024
  • Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps

    1 project | dev.to | 30 Apr 2024
  • GGML Flash Attention support merged into llama.cpp

    1 project | news.ycombinator.com | 30 Apr 2024
  • Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B

    11 projects | news.ycombinator.com | 28 Apr 2024