refact VS lmdeploy

Compare refact vs lmdeploy and see how they differ.

refact

WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding (by smallcloudai)
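
lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs (by InternLM)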

                refact                                     lmdeploy
Mentions        34                                         3
Stars           1,422                                      2,391
Stars growth    3.3%                                       12.6%
Activity        9.8                                        9.8
Latest commit   4 days ago                                 2 days ago
Language        JavaScript                                 Python
License         BSD 3-clause "New" or "Revised" License    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

refact

Posts with mentions or reviews of refact. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-15.

lmdeploy

Posts with mentions or reviews of lmdeploy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.
  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    I wouldn’t say ROCm code is “slower”, per se, but in practice that’s how it presents. References:

    https://github.com/InternLM/lmdeploy

    https://github.com/vllm-project/vllm

    https://github.com/OpenNMT/CTranslate2

    You know what’s missing from all of these and many more like them? Support for ROCm. This is all before you get to the really wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.

    ROCm is at the "get it to work" stage (see the top comment, blog posts everywhere celebrating minor successes, etc.). CUDA is at the "wring every last penny of performance out of this thing" stage.

    In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.

    One of my projects specifically targets Pascal up - when I thought even Pascal was a stretch. Cue my surprise when I got a report of someone casually firing it up on Maxwell when I was pretty certain there was no way it could work.

    A Maxwell laptop chip. It also runs just as well on an H100.

    THAT is hardware support (a compute-capability check along these lines is sketched after this list).

  • Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
    3 projects | news.ycombinator.com | 8 Sep 2023
    vLLM has healthy competition. Not affiliated but try lmdeploy:

    https://github.com/InternLM/lmdeploy

    In my testing it's significantly faster and more memory-efficient than vLLM when configured with AWQ int4 and int8 KV cache (a configuration sketch follows this list).

    If you look at the PRs, issues, etc., you'll see there are many more optimizations in the works. That said, there are also PRs and issues for some of the lmdeploy tricks in vLLM as well (AWQ, Triton Inference Server, etc.).

    I’m really excited to see where these projects go!

  • Meta: Code Llama, an AI Tool for Coding
    18 projects | news.ycombinator.com | 24 Aug 2023
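
The "Pascal and up" targeting discussed in the first comment comes down to CUDA compute capability. As a minimal sketch, assuming a CUDA-enabled PyTorch build (PyTorch is not mentioned in the comment; it is used here only as a convenient way to query device properties), the following checks each visible GPU against the Pascal (6.0) floor:

    import torch  # assumes a CUDA-enabled PyTorch build

    # Enumerate visible GPUs and report their CUDA compute capability.
    # Maxwell is 5.x, Pascal is 6.x, and the H100 (Hopper) is 9.0.
    for idx in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(idx)
        name = torch.cuda.get_device_name(idx)
        print(f"GPU {idx}: {name}, compute capability {major}.{minor}")
        if (major, minor) < (6, 0):
            print("  pre-Pascal part: below the 'Pascal and up' target")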

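The AWQ int4 weights plus int8 KV cache setup from the second comment can be expressed through lmdeploy's Python API. This is a sketch, not lmdeploy's documented quick-start: it assumes a recent lmdeploy release where pipeline and TurbomindEngineConfig are exported, and the model path is only an illustrative AWQ checkpoint:

    from lmdeploy import pipeline, TurbomindEngineConfig

    # TurboMind backend configured for AWQ-quantized (int4) weights
    # with an int8 KV cache; quant_policy=0 would disable KV quantization.
    engine_cfg = TurbomindEngineConfig(
        model_format="awq",  # weights quantized with AWQ (int4)
        quant_policy=8,      # int8 KV cache
    )

    # Illustrative model path; substitute any AWQ-quantized checkpoint.
    pipe = pipeline("internlm/internlm2-chat-7b-4bit", backend_config=engine_cfg)
    print(pipe(["What do int4 weights and an int8 KV cache save?"]))
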
What are some alternatives?

When comparing refact and lmdeploy you can also consider the following projects:

tabby - Self-hosted AI coding assistant

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server

llama.cpp - LLM inference in C/C++

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

llama-cpp-python - Python bindings for llama.cpp

CTranslate2 - Fast inference engine for Transformer models

developer - the first library to let you embed a developer agent in your own app!

smartcat

supervision - We write your reusable computer vision tools. 💜

seamless_communication - Foundational Models for State-of-the-Art Speech and Text Translation