Replit's new AI Model now available on Hugging Face

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • localpilot

  • refact

    WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding

  • I don’t recommend that, since it uses the cloud for the actual inference by default (and they provide no guidance for changing that).

    I don’t consider cloud inference to count as getting it working “locally”, as requested by the comment above yours.

    Refact works nicely and runs locally, but the challenge with any new model is getting it supported by the existing software: https://github.com/smallcloudai/refact/
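
    For reference, here is a minimal sketch of what fully local inference can look like with Hugging Face transformers, assuming the model from the post is replit/replit-code-v1_5-3b (the model id, prompt, and generation settings are illustrative assumptions, not confirmed by the thread):

```python
# Hedged sketch: fully local inference with Hugging Face transformers.
# Assumption: the post's model is replit/replit-code-v1_5-3b; swap in the
# actual model id if it differs. Requires the transformers and torch packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "replit/replit-code-v1_5-3b"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short completion entirely on the local machine.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```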

  • llm-vscode-inference-server

    An endpoint server for efficiently serving quantized open-source LLMs for code.

  • “Requests for code generation are made via an HTTP request. You can use the Hugging Face Inference API or your own HTTP endpoint, provided it adheres to the API specified here[1] or here[2].”

    It’s fairly easy to use your own model locally with the plugin. You can just use one of the community-developed inference servers, which are listed at the bottom of the page; here are the links to both[3][4]. A minimal request sketch follows after the links.

    [1]: https://huggingface.co/docs/api-inference/detailed_parameter...

    [2]: https://huggingface.github.io/text-generation-inference/#/Te...

    [3]: https://github.com/wangcx18/llm-vscode-inference-server

    [4]: https://github.com/wangcx18/llm-vscode-inference-server
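
    A minimal request sketch against a locally hosted endpoint, assuming it follows the payload shape of the APIs referenced in [1] and [2] (the address, port, and parameter values below are assumptions, not taken from the thread):

```python
# Hedged sketch: POST a code-generation request to a local inference server
# that mimics the Hugging Face Inference API / text-generation-inference
# payload shape ({"inputs": ..., "parameters": {...}}).
# Assumption: the server address and route below are placeholders.
import requests

ENDPOINT = "http://localhost:8000/generate"  # assumed local server address

payload = {
    "inputs": "def quicksort(arr):",
    "parameters": {"max_new_tokens": 64, "temperature": 0.2},
}

resp = requests.post(ENDPOINT, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())  # typically contains the generated text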



Related posts

  • Ask HN: How do you develop and maintain a good note-taking habit?

    2 projects | news.ycombinator.com | 5 May 2024
  • Rabbit R1 can be run on an Android device

    1 project | news.ycombinator.com | 5 May 2024
  • Flags Are Not Languages

    1 project | news.ycombinator.com | 5 May 2024
  • Download your Learn course content with this free and open-source tool. All you need is a working computer and basic Python knowledge, and you can save a local copy of your Learn courses' content for future reference after the end of the term.

    1 project | /r/uwaterloo | 9 Dec 2023
  • What Are HTML Meta Tags And What Is Their Importance?

    2 projects | dev.to | 5 May 2024