bark VS fairseq

Compare bark vs fairseq and see what their differences are.

bark

🔊 Text-Prompted Generative Audio Model (by suno-ai)

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (by facebookresearch)
                  bark               fairseq
Mentions          67                 89
Stars             32,784             29,301
Growth            3.8%               0.9%
Activity          5.4                6.0
Latest Commit     8 days ago         6 days ago
Language          Jupyter Notebook   Python
License           MIT License        MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

bark

Posts with mentions or reviews of bark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-13.
  • Exploring Bark, the Open Source Text-to-Speech Model
    1 project | dev.to | 28 Apr 2024
    !pip install git+https://github.com/suno-ai/bark.git
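    Once installed, a minimal generation sketch looks like the following (entry points as shown in the bark README; the prompt text is just a placeholder):

    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()                                   # download and cache model weights
    audio = generate_audio("Hello, I am Bark.")        # returns a numpy float array
    write_wav("bark_generation.wav", SAMPLE_RATE, audio)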
  • AI-generated sad girl with piano performs the text of the MIT License
    1 project | news.ycombinator.com | 4 Apr 2024
    To my knowledge, the model being used for this is "chirp" which is 'based on' bark[1], an AI text to speech model.

    The GitHub page for bark links to a page about chirp, which returns a 404 page for me [2]. It suggests that the model behind suno.ai's song generator isn't too much different from the text-to-speech model.

    My hunch is that it was something of a coincidence that the bark model was capable of producing music, and that was spun off into this product. Unfortunately, there still seem to be issues with bark when generating long (book-length) spoken audio. Which is too bad; as someone who's worked jobs that require lots of driving, it would be awesome to be able to have any text read to me in a natural-sounding voice.

    [1]https://github.com/suno-ai/bark
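    On the long-audio point, one community workaround (a hedged sketch, not an official bark feature) is to chunk the text into sentences and concatenate the per-sentence generations:

    import numpy as np
    from bark import SAMPLE_RATE, generate_audio, preload_models

    preload_models()
    long_text = "First sentence. Second sentence. Third sentence."
    sentences = long_text.split(". ")                  # naive splitter, illustration only
    audio = np.concatenate([generate_audio(s) for s in sentences])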

  • Generating music in the waveform domain (2020)
    1 project | news.ycombinator.com | 26 Mar 2024
    Stable-audio and MusicGen sound better than Jukebox.

    But the best so far is Suno.ai ( https://app.suno.ai ), especially with their V3 model; they have very impressive results. The fidelity is not studio quality, but they're getting very close.

    It's very likely based on Bark, the TTS model they released before, but trained on more data and at higher resolution.

    https://github.com/suno-ai/bark

  • Stable-Audio-Demo
    2 projects | news.ycombinator.com | 13 Feb 2024
    https://github.com/suno-ai/bark

    > Bark was developed for research purposes. It is not a conventional text-to-speech model but instead a fully generative text-to-audio model, which can deviate in unexpected ways from provided prompts. Suno does not take responsibility for any output generated. Use at your own risk, and please act responsibly.

    I've generated probably >200 songs now with Suno, of which perhaps 10 have been any good, and I can't detect any pattern in terms of the outputs.

    Here's another one which is pretty good. I accidentally copied and pasted the prompt and lyrics, and it's amazing to me how 'musically' it renders the prompt:

  • Suno AI
    1 project | news.ycombinator.com | 25 Dec 2023
    hahah wow! cool :-)

    PS: OT, I am reading this Bark thing(https://github.com/suno-ai/bark). Can I run it locally on a Macbook 2015 with 8GB RAM?
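    For reference, the bark README documents reduced-size model variants toggled via an environment variable, which is the usual route for low-memory machines (whether 8 GB on a 2015 MacBook is enough is untested here):

    import os
    os.environ["SUNO_USE_SMALL_MODELS"] = "True"   # must be set before loading models

    from bark import generate_audio, preload_models
    preload_models()
    audio = generate_audio("testing bark on a laptop")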

  • SDXL + SVD + Suno AI
    1 project | /r/StableDiffusion | 10 Dec 2023
    I have it locally. The model is on huggingface. It runs with about 8GB VRAM.
  • [discussion] text to voice generation for textbooks
    3 projects | /r/MachineLearning | 5 Dec 2023
  • Open Source Libraries
    25 projects | /r/AudioAI | 2 Oct 2023
    suno-ai/bark
  • Weird A.I. Yankovic, a cursed deep dive into the world of voice cloning
    4 projects | news.ycombinator.com | 2 Oct 2023
  • FLaNK Stack Weekly 2 October 2023
    19 projects | dev.to | 2 Oct 2023

fairseq

Posts with mentions or reviews of fairseq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-03.
  • Sequence-to-Sequence Toolkit Written in Python
    1 project | news.ycombinator.com | 30 Mar 2024
  • Unsupervised (Semi-Supervised) ASR/STT training recipes
    2 projects | /r/deeplearning | 3 Nov 2023
  • Nvidia's 900 tons of GPU muscle bulks up server market, slims down wallets
    1 project | news.ycombinator.com | 19 Sep 2023
    > Is there really no way to partition the workload to run with 16gb memory per card?

    It really depends and this can get really complicated really fast. I'll give a tldr and then a longer explanation.

    TLDR:

    Yes, you can easily split networks up. If your main bottleneck is batch size (i.e. training), then there aren't huge differences in spreading across multiple GPUs, assuming you have good interconnects (GPU Direct is supported). If you're running inference and the model fits on the card, you're probably fine too, unless you need to do things like fancy inference batching (i.e. you have LOTS of users).

    Longer version:

    You can always split things up. If we think about networks, we recognize some nice properties about how they operate as mathematical groups. Non-residual networks are compositional, meaning each layer can be treated as a sub-network (every residual block can be treated this way too). Additionally, we may have associative and distributive properties depending on the architecture (some even have commutative!). So we can use these same rules to break apart networks in many different ways. There are often performance hits for doing this, though, as it practically requires you to touch the disk more often, but in some rarer cases (at least to me, let me know if you know more) it can help.

    I mentioned the batching above, and this can get kind of complicated. There are actually performance differences when you batch in groups of data (i.e. across GPUs) compared to batching on a single accelerator. This difference isn't talked about a lot, but it comes down to how often your algorithm depends on batching and what operations are used, such as batch norm. Batch norm is calculated across the GPU's batch, not the distributed batch (unless you introduce blocking), because your gradients AND inference are going to be computed differently. In DDP your whole network is cloned across cards, so you basically run inference on multiple networks, do an all-reduce on the loss, calculate the gradient, and then recopy the weights to all cards.

    There's an even bigger difference when you use lazy regularization (don't compute gradients for n minibatches). GANs are notorious for using this, and personally I've seen large benefits to distributed training for these. GANs usually have small batch sizes and aren't getting anywhere near the memory of the card anyway (GANs are typically unstable, so large batch sizes can harm them). But also pay attention to this when evaluating papers, as well as how much hyper-parameter tuning has been done. This is always tricky when comparing works, especially between academia and big labs; you can easily be fooled about which is the better model. Evaluating models is way tougher than people give credit for, especially in the modern era of LLMs. I could rant a lot about just this alone.

    Basically, in short, we can think of this as an ensembling method, except our models are actually identical (you could parallel-reduce lazily too, and that will create some periodic divergence between your models, but that's not important for conceptual understanding, just worth noting).
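    To make the batch-norm point concrete, here's a minimal PyTorch sketch: under DDP each replica normalizes over its own per-GPU batch unless you explicitly convert to SyncBatchNorm, which synchronizes statistics across the distributed batch (local_rank follows the torchrun convention; model shape is arbitrary):

    import os
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun

    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
    # By default each replica's BatchNorm uses only its local batch statistics;
    # convert_sync_batchnorm makes the stats span the whole distributed batch.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model).to(local_rank)
    model = DDP(model, device_ids=[local_rank])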

    There are also techniques to split a single model up, called model sharding and checkpointing. Model sharding is where you split a single model across multiple GPUs. You're taking advantage of the compositional property of networks, meaning that as long as there isn't a residual connection across your split location, you can treat one network as a series of smaller networks. This has obvious drawbacks, as you need to feed one into another and so the operations have to be synchronous, but sometimes this isn't too bad. Checkpointing is very similar, but you're doing the same thing on the same GPU. Your hit here is in I/O, which may or may not be too bad with GPU Direct, and it highly depends on your model size (were you splitting because of batch size or model size?).
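    A hedged sketch of both ideas (a hypothetical two-layer model; any non-residual boundary works as a split point), using only built-in PyTorch facilities:

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint_sequential

    class ShardedNet(nn.Module):
        # f = part2(part1(x)): each half of the composition lives on its own card.
        def __init__(self):
            super().__init__()
            self.part1 = nn.Sequential(nn.Linear(512, 2048), nn.ReLU()).to("cuda:0")
            self.part2 = nn.Sequential(nn.Linear(2048, 10)).to("cuda:1")

        def forward(self, x):
            h = self.part1(x.to("cuda:0"))
            return self.part2(h.to("cuda:1"))   # synchronous hand-off between GPUs

    # Checkpointing: same decomposition, same GPU; recompute activations during
    # backward instead of storing them (trades compute for memory).
    net = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 10)).cuda()
    out = checkpoint_sequential(net, 2, torch.randn(4, 512, requires_grad=True).cuda())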

    This is all still pretty high level, but if you want to dig into it more, Meta developed a toolkit called fairseq that will do a lot of this for you, and they've optimized it:

    https://engineering.fb.com/2021/07/15/open-source/fsdp/

    https://github.com/facebookresearch/fairseq

    TLDR: really depends on your use case, but it is a good question.
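    For completeness, the FSDP approach linked above has since landed in PyTorch core. A minimal usage sketch, assuming the process group is already initialized as in the DDP example (the model is again a stand-in):

    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # gathering each layer's full weights only while it is being computed.
    model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 10))
    model = FSDP(model.cuda())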

  • Talk back and forth with AI like you would with a person
    1 project | /r/singularity | 7 Jul 2023
    How do they do the text-to-voice conversion so fast? https://github.com/facebookresearch/fairseq/tree/main (open source; takes sub-minute to do text to voice).
  • Voice generation AI (TTS)
    3 projects | /r/ArtificialInteligence | 1 Jul 2023
    It might be worth checking out Meta's TTS, though. I haven't gotten the chance to fiddle around with it, but it looks somewhat promising: https://github.com/facebookresearch/fairseq/tree/main/examples/mms
  • Translation app with TTS (text-to-speech) for Persian?
    2 projects | /r/machinetranslation | 24 Jun 2023
    They have instructions on how to use it in command line and a notebook on how to use it as a python library.
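    Besides the fairseq command line and notebook, the MMS TTS checkpoints are also usable through the Hugging Face transformers port. A hedged sketch with the English checkpoint; other languages follow the same ISO 639-3 naming scheme (e.g. a "fas" suffix for Persian is an assumption; verify the exact model ID on the Hub):

    import torch
    from transformers import VitsModel, AutoTokenizer

    model = VitsModel.from_pretrained("facebook/mms-tts-eng")
    tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

    inputs = tokenizer("Hello from MMS.", return_tensors="pt")
    with torch.no_grad():
        waveform = model(**inputs).waveform    # shape (1, num_samples)
    # the output sample rate lives at model.config.sampling_rate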
  • Why no work on open source TTS (Text to speech) models
    2 projects | /r/ArtificialInteligence | 20 Jun 2023
  • Meta's Massively Multilingual Speech project supports 1k languages using self supervised learning
    1 project | /r/DataCentricAI | 13 Jun 2023
    Github - https://github.com/facebookresearch/fairseq/tree/main/examples/mms Paper - https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/
  • AI — weekly megathread!
    2 projects | /r/artificial | 26 May 2023
    Meta released a new open-source model, Massively Multilingual Speech (MMS) that can do both speech-to-text and text-to-speech in 1,107 languages and can also recognize 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 languages out of the 7,000+ known spoken languages. [Details | Research Paper | GitHub].
  • Meta's MMS: Scaling Speech Technology to 1000+ languages (How to Run colab)
    1 project | /r/LanguageTechnology | 24 May 2023

What are some alternatives?

When comparing bark and fairseq you can also consider the following projects:

tortoise-tts - A multi-voice TTS system trained with an emphasis on quality

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Retrieval-based-Voice-Conversion-WebUI - Easily train a good VC model with voice data <= 10 mins!

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

whisper.cpp - Port of OpenAI's Whisper model in C/C++

text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

espnet - End-to-End Speech Processing Toolkit

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration