SaaSHub helps you find the best software and product alternatives
CTranslate2 Alternatives
Similar projects and alternatives to CTranslate2
- Open-Assistant — a chat-based assistant that understands tasks, can interact with third-party systems, and can dynamically retrieve information to do so.
- willow — An open-source, local, self-hosted voice assistant alternative competitive with Amazon Echo/Google Home.
- FlexGen (discontinued) — Running large language models like OPT-175B/GPT-3 on a single GPU, with a focus on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen] (by Ying1123)
- openWakeWord — An open-source audio wake word (or phrase) detection framework with a focus on performance and simplicity.
- rust-bert — Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2, ...)
CTranslate2 discussion
CTranslate2 reviews and mentions
- Brood War Korean Translations
Thanks for the added context on the builds! As a "foreign" BW player and fellow speech processing researcher, I agree that shallow contextual biasing should help. While it's not difficult to implement, most generally available ASR solutions don't make it easy to use. There's a PR in ctranslate2 implementing the same feature so that it can be exposed in faster-whisper: https://github.com/OpenNMT/CTranslate2/pull/1789
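Shallow contextual biasing amounts to adding a score bonus, during beam search, to any token that extends one of the user-supplied bias phrases. A minimal conceptual sketch (the `bias_bonus` helper and the bonus value are hypothetical illustrations, not the ctranslate2 API):

```python
# Sketch of shallow contextual biasing (illustrative only; bias_bonus
# and the bonus value are hypothetical, not a real library API).

def bias_bonus(hyp_tokens, candidate, bias_phrases, bonus=2.0):
    """Return a log-prob bonus if some suffix of the hypothesis plus the
    candidate token forms a prefix of one of the bias phrases."""
    for phrase in bias_phrases:
        # k = how many leading tokens of the phrase the hypothesis tail
        # has already matched (k == 0 means the phrase would start here).
        for k in range(min(len(hyp_tokens), len(phrase) - 1) + 1):
            tail = hyp_tokens[len(hyp_tokens) - k:]
            if list(phrase[:k]) == list(tail) and phrase[k] == candidate:
                return bonus
    return 0.0
```

During decoding, this bonus would be added to each candidate token's log-probability before beams are pruned, nudging hypotheses toward in-domain terms without hard constraints.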
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
Just a point of clarification - faster-whisper references it but ctranslate2[0] is what's really doing the magic here.
Ctranslate2 is a sleeper powerhouse project that enables a lot. It should be front and center and get the credit it deserves.
[0] - https://github.com/OpenNMT/CTranslate2
- A Raspberry Pi 5 is better than two Pi 4s
We'd love to move beyond Nvidia.
The issue (among others) is that we achieve the speech recognition performance we do largely thanks to ctranslate2[0]. They've gone on the record saying that they have essentially no interest in ROCm[1].
Of course, with open source anything is possible, but we see this as one of several fundamental issues in supporting AMD GPGPU hardware.
[0] - https://github.com/OpenNMT/CTranslate2
[1] - https://github.com/OpenNMT/CTranslate2/issues/1072
- AMD May Get Across the CUDA Moat
- StreamingLLM: Efficient streaming technique enables infinite sequence lengths
Etc.
Now, what this allows you to do is reuse the attention computed from the previous turns (since the prefix is the same).
In practice, people often have a system prompt before the conversation history, which (as far as I can tell) makes this technique inapplicable: the input prefix will change as soon as the conversation history is long enough that we need to start dropping the oldest turns.
In that case, what you could do is cache at least the system prompt. This is also possible with https://github.com/OpenNMT/CTranslate2/blob/2203ad5c8baf878a...
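The constraint behind all of this is that cached attention states can only be reused for the longest token prefix shared between the cached prompt and the new one. A toy illustration of why only the system prompt survives once old turns are dropped (hypothetical helper, not CTranslate2 code):

```python
# Toy illustration (not CTranslate2 code): KV-cache entries are reusable
# only for the longest token prefix shared by the cached and new prompts.

def reusable_prefix_len(cached_tokens, new_tokens):
    """Number of leading tokens shared by both prompts."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

# Fixed system prompt + rolling conversation history: as soon as the
# oldest turn is dropped, everything after the system prompt differs.
cached = ["<sys>", "turn1", "turn2", "turn3"]
new = ["<sys>", "turn2", "turn3", "turn4"]  # turn1 was dropped
assert reusable_prefix_len(cached, new) == 1  # only the system prompt
```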
- Faster Whisper Transcription with CTranslate2
The original Whisper implementation from OpenAI uses the PyTorch deep learning framework. faster-whisper, on the other hand, is implemented using CTranslate2 [1], a custom inference engine for Transformer models. So it is running the same model, but on a different backend that is specifically optimized for inference workloads.
[1] https://github.com/OpenNMT/CTranslate2
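As a sketch of what this looks like from the user's side, faster-whisper exposes a small Python API over the CTranslate2 backend. The model size, audio filename, and the SRT-formatting helper below are illustrative assumptions, not part of either project:

```python
def to_srt_time(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

if __name__ == "__main__":
    # Requires `pip install faster-whisper`; model size and audio file
    # are assumptions for the sketch.
    from faster_whisper import WhisperModel

    # int8 on CPU is a typical low-resource inference configuration.
    model = WhisperModel("small", device="cpu", compute_type="int8")
    segments, info = model.transcribe("audio.mp3")
    for i, seg in enumerate(segments, start=1):
        print(i)
        print(f"{to_srt_time(seg.start)} --> {to_srt_time(seg.end)}")
        print(seg.text.strip())
        print()
```

The `segments` iterator is lazy, so transcription happens as you consume it; printing the loop above yields SRT-style subtitle blocks.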
- Explore large language models on any computer with 512MB of RAM
FLAN-T5 models generally perform well for their size, but they are encoder-decoder models, which aren't as widely supported for efficient inference. I wanted students to be able to run everything locally on CPU, so I was ideally hoping for something that supported quantization for CPU inference. I explored llama.cpp and GGML, but ultimately landed on ctranslate2 for inference.
- CTranslate2: An efficient inference engine for Transformer models
- [D] Faster Flan-T5 inference
You can also check out the CTranslate2 library, which supports efficient inference of T5 models, including 8-bit quantization on CPU and GPU. There is a usage example in the documentation.
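As a rough illustration of what 8-bit quantization means for the weights, here is a conceptual NumPy sketch of symmetric per-row int8 quantization (not CTranslate2's actual kernels; function names are hypothetical):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-row int8 quantization: store int8 weights plus one
    float32 scale per row; reconstruct as q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_int8(q, scale):
    """Approximate reconstruction of the original float weights."""
    return q.astype(np.float32) * scale
```

Storing one byte per weight (plus a per-row scale) roughly quarters the memory footprint versus float32, and the reconstruction error is bounded by half a quantization step per element, which is why 8-bit inference usually costs little accuracy.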
Stats
OpenNMT/CTranslate2 is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of CTranslate2 is C++.