Clover-Edition VS fairseq

Compare Clover-Edition vs fairseq and see what their differences are.

Clover-Edition

State of the art AI plays dungeon master to your adventures. (by cloveranon)

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (by facebookresearch)
                 Clover-Edition        fairseq
Mentions         36                    89
Stars            169                   29,262
Stars growth     -                     1.8%
Activity         0.6                   6.0
Latest commit    over 2 years ago      6 days ago
Language         Python                Python
License          MIT License           MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Clover-Edition

Posts with mentions or reviews of Clover-Edition. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-25.
  • State of the app?
    2 projects | /r/AIDungeon | 25 Apr 2022
    AI Dungeon Clover Edition isn't exactly the best in terms of output quality compared with most alternatives (mainly because it just hasn't been updated in quite a while; its best models still use GPT-Neo 2.7B), but it's the most similar alternative to Classic AI Dungeon, and it plays more like a game than current AI Dungeon or any of these other alternatives.
  • This game doesn't work anymore
    1 project | /r/AIDungeon | 29 Mar 2022
    KoboldAI with GPT-J 6B (or a more powerful open-source model) is probably the best free alternative. You can either run the AI model locally or use the KoboldAI GPT-J 6B Google Colab. Dreamily also exists as a free option, but I'm honestly not sure I'd recommend it over KoboldAI + GPT-J 6B (or even AI Dungeon, for that matter). If you want something that plays more similarly to the CYOA style of AI Dungeon, try AI Dungeon Clover Edition. It's not exactly the best free option in terms of output quality, but it's pretty much the closest one in terms of gameplay to AI Dungeon.
  • How do I install on PC
    1 project | /r/AIDungeon | 24 Mar 2022
    The best you can get are some pre-dragon modified versions like Clover Edition if you want AID specifically.
  • did they fix the game and its issues?
    4 projects | /r/AIDungeon | 19 Jan 2022
    There are also a ton of branches of the original version of AI Dungeon, but Clover Edition is likely the most notable, and arguably the best of them. It also has a link to a Google Colab version, in case you can't run it locally. It's worth mentioning, though, that its output quality isn't quite as good as other options', mainly because it uses GPT-Neo 2.7B, and it hasn't been updated in quite a while. However, as far as AID alternatives go, it's one of the closest to the AI Dungeon experience, if that's what you're looking for in an alternative.
  • ah yes, my favourite line of dialogue (tried reloading the site and redoing the dialogue. That is the only response from the AI that I got for the past 5 minutes)
    1 project | /r/AIDungeon | 21 Sep 2021
  • That's the absurdity you have to get to, so GOD FORBID NOT TO activate the bucking filter!!!!!!!!!!
    3 projects | /r/AIDungeon | 12 Sep 2021
    There are plenty of free alternatives. Write With Transformer exists, is free, and has a few AI models to choose from. You can try the base GPT-J 6B model on EleutherAI's website, or through the KoboldAI Google Colab. Clover Edition exists, and has multiple AI models to choose from. GPT-Neo Dungeon exists, and uses GPT-Neo, hence the name. Open CYOAI and AI Dungeon 2 Unleashed also exist. GodAI exists as well, and uses GPT-2. KoboldAI is a good frontend for locally running AI models, and it has a subreddit at r/KoboldAI. Dreamily exists, and has a mobile app. However, they require quite a bit of personal information to make an account; only make an account if you trust them with said info. Their privacy policy also, at one point, actually admitted to monitoring private content. There is also a filter in place, though I'm unsure what the filter disallows (aside from mentioning Xi Jinping). Also, HoloAI, Hyperwrite, ShortlyAI, and InferKit have fairly abusable free trials. HoloAI uses GPT-J 6B, InferKit uses Megatron-11B, and Hyperwrite and ShortlyAI both use GPT-3 (although they both also use OpenAI's incredibly broad filter; quite a lot of stuff is disallowed).
  • List of all alternatives
    3 projects | /r/AIDungeon | 31 Aug 2021
    Clover Edition exists, and has multiple AI models to choose from.
  • I think I just noticed what's wrong with every single AI Dungeon alternative out there.
    2 projects | /r/AIDungeon | 28 Aug 2021
    Getting an older version of the game isn't a great idea. If you mean an older version of the app, the filter would be there anyway, since the filtering is done server-side, and the AI would be the same as it is now. The original version of AI Dungeon uses the Classic AI, which I doubt many people are willing to settle for. However, there are plenty of forks of the original version of AID, and some AID clones, such as Clover Edition, GPT-Neo Dungeon, Open CYOAI, and AI Dungeon 2 Unleashed. As was mentioned already, NovelAI also has a text adventure mode that plays near-identically to AI Dungeon, and the text adventure module does kinda capture the feel of AI Dungeon. Honestly, all it takes to recreate AI Dungeon on alternatives is to add "> You"/"> You say" at the start of your inputs. Literally all AI Dungeon's Do and Say modes did was add "> You" and "> You say", respectively, to the beginning of your input (a minimal sketch of this follows the list below). Also, I think you're underestimating how many people actually did use AI Dungeon to write serious stories.
  • I made a Therapist world, she is not half bad
    2 projects | /r/AIDungeon | 16 Aug 2021
    NovelAI uses GPT-Neo 2.7B and GPT-J 6B, with subscription tiers of $10/$15/$25 per month. HoloAI uses GPT-J 6B. It has a free trial, and subscription tiers of $5/$8 per month. Hyperwrite offers 1500 outputs for free, and uses GPT-3. After the free trial, a subscription is required. Though, some content (to be more specific, NSFW content) is disallowed. ShortlyAI uses GPT-3, and offers a free trial. After the free trial, a subscription is required. Though, similarly to Hyperwrite, NSFW content is disallowed. Write With Transformer exists, is free, and has a few AI models to choose from. You can try the base GPT-J 6B model on EleutherAI's website, or through the KoboldAI Google Colab. Endless Visual Novel, though it isn't yet released, will be going into closed alpha this month (you can currently sign up for the closed alpha, if it interests you). A subscription (the subscription prices haven't been specified yet) will be required to use the AI and create stories. It uses the GPT-3 Davinci model (the same model that AI Dungeon's Dragon model is based on) for text generation, and also has AI-generated imagery and music to go along with it. As the name implies, it plays in a visual novel format. Depending on what you use AI Dungeon for, it could genuinely end up outclassing AI Dungeon when it releases. There are also some more alternatives worth mentioning, though they all require some setup to get working, and most run locally, which requires more computing power than the average person has. While they are quite good, the fact that they aren't exactly user-friendly might be a deal breaker for you. Anyhow, Clover Edition exists, and has multiple AI models to choose from. GPT-Neo Dungeon exists, and uses GPT-Neo, hence the name. Open CYOAI and AI Dungeon 2 Unleashed also exist. GodAI exists as well, and uses GPT-2. KoboldAI is a good frontend for locally running AI models, and it has a subreddit at r/KoboldAI.
  • Unless you're dirt poor, you really should give NovelAI a shot. They just added text adventure mode!
    3 projects | /r/AIDungeon | 13 Aug 2021
    HoloAI uses GPT-J 6B. It has a free trial, and subscription tiers of $5/$8 per month. Hyperwrite offers 1500 outputs for free, and uses GPT-3. After the free trial, a subscription is required. Though, some content (mostly NSFW content) is disallowed. ShortlyAI uses GPT-3, and offers a free trial. After the free trial, a subscription is required. Though, similarly to Hyperwrite, some content is disallowed. InferKit has a free trial, and uses the Megatron-11B model. A subscription is required after the free trial. Write With Transformer exists, is free, and has a few AI models to choose from. You can try the base GPT-J 6B model on EleutherAI's website, or through the KoboldAI Google Colab. Clover Edition exists, and has multiple AI models to choose from. GPT-Neo Dungeon exists, and uses GPT-Neo, hence the name. Open CYOAI and AI Dungeon 2 Unleashed also exist. GodAI exists as well, and uses GPT-2. KoboldAI is a good frontend for locally running AI models, and it has a subreddit at r/KoboldAI. Dreamily exists, and has a mobile app. However, they require quite a bit of personal information to make an account; only make an account if you trust them with said info. Their privacy policy also, at one point, actually admitted to monitoring private content, as well as to collecting quite a lot of personal info and sharing said info with third parties. There is also a filter in place, though I'm unsure what the filter disallows (aside from mentioning Xi Jinping).
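
The "> You" framing mentioned in the 28 Aug 2021 post above is simple enough to show concretely. A minimal sketch (the function name is hypothetical, and the exact quoting AI Dungeon used may differ) of what its Do and Say modes reduce to:

    def format_action(user_input: str, mode: str) -> str:
        """Frame raw input the way AI Dungeon's Do/Say modes did."""
        if mode == "do":
            return f"> You {user_input}"          # e.g. "> You open the door"
        if mode == "say":
            return f'> You say "{user_input}"'    # e.g. '> You say "Hello"'
        return user_input                         # story mode: raw continuation

    # Append the framed action to the running story, then send the whole
    # story as the prompt to whichever text model you are using.
    prompt = format_action("draw your sword", mode="do")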

fairseq

Posts with mentions or reviews of fairseq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-03.
  • Sequence-to-Sequence Toolkit Written in Python
    1 project | news.ycombinator.com | 30 Mar 2024
  • Unsupervised (Semi-Supervised) ASR/STT training recipes
    2 projects | /r/deeplearning | 3 Nov 2023
  • Nvidia's 900 tons of GPU muscle bulks up server market, slims down wallets
    1 project | news.ycombinator.com | 19 Sep 2023
    > Is there really no way to partition the workload to run with 16gb memory per card?

    It really depends, and this can get really complicated really fast. I'll give a TLDR and then a longer explanation.

    TLDR:

    Yes, you can easily split networks up. If your main bottleneck is batch size (i.e. training), then there aren't huge differences in spreading across multiple GPUs, assuming you have good interconnects (GPUDirect is supported). If you're running inference and the model fits on the card, you're probably fine too, unless you need to do things like fancy inference batching (i.e. you have LOTS of users).

    Longer version:

    You can always split things up. If we think about networks, we recognize some nice properties about how they operate as mathematical groups. Non-residual networks are compositional, meaning each layer can be treated as a sub-network (every residual block can be treated this way too). Additionally, we may have associative and distributive properties depending on the architecture (some even have commutative ones!). So we can use these same rules to break apart networks in many different ways. There are often performance hits for doing this, though, as it practically requires touching the disk more often, but in some rarer cases (at least in my experience; let me know if you know more) it can help.
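
    A minimal PyTorch sketch of that compositional property (layer sizes arbitrary): with no residual connection spanning the cut, the stack is exactly the composition of its pieces, so each piece can be treated as its own sub-network.

        import torch
        import torch.nn as nn

        # Plain feed-forward stack with no residual connections.
        full = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 10),
        )

        # Cut at a layer boundary: full(x) == second(first(x)), so each
        # piece could live on a different device or process.
        first, second = full[:2], full[2:]

        x = torch.randn(8, 64)
        assert torch.equal(full(x), second(first(x)))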

    I mentioned batching above, and this can get kind of complicated. There are actually performance differences when you batch in groups of data (i.e. across GPUs) compared to batching on a single accelerator. This difference isn't talked about a lot, but it comes down to how much your algorithm depends on batching and which operations are used, such as batch norm. Batch norm is calculated across the GPU's local batch, not the distributed batch (unless you introduce blocking), so your gradients AND inference are computed differently. In DDP, your whole network is cloned across cards: you run the forward pass on each replica, compute the loss locally, and then all-reduce the gradients so every card applies the same update and the replicas stay in sync.

    There's an even bigger difference when you use lazy regularization (not computing gradients for n minibatches). GANs are notorious for using this, and personally I've seen large benefits from distributed training for them. GANs usually have small batch sizes and aren't getting anywhere near the memory of the card anyway (GANs are typically unstable, so large batch sizes can harm them). Also pay attention to this when evaluating papers, along with how much hyper-parameter tuning has been done; that's always tricky when comparing works, especially between academia and big labs, and you can easily be fooled about which is the better model. Evaluating models is way tougher than people give it credit for, especially in the modern era of LLMs (I could rant a lot about this alone). In short, we can think of this as an ensembling method, except our models are actually identical (you could reduce lazily in parallel too, which creates some periodic divergence between your models, but that's not important for conceptual understanding; it's just worth noting).
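
    A minimal sketch of that clone-and-all-reduce DDP flow (assumes a multi-GPU host and a torchrun launch; the model and batch are toy placeholders):

        import os
        import torch
        import torch.distributed as dist
        from torch.nn.parallel import DistributedDataParallel as DDP

        def main():
            # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE per process.
            dist.init_process_group("nccl")
            local_rank = int(os.environ["LOCAL_RANK"])
            torch.cuda.set_device(local_rank)

            model = torch.nn.Linear(512, 512).cuda(local_rank)
            # Weights are replicated to every rank; after each backward()
            # DDP all-reduces gradients so the replicas stay identical.
            # (Plain BatchNorm would still normalize over the per-rank
            # batch only; torch.nn.SyncBatchNorm syncs the statistics.)
            ddp_model = DDP(model, device_ids=[local_rank])
            opt = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

            for _ in range(10):
                x = torch.randn(32, 512, device=f"cuda:{local_rank}")  # per-rank batch
                loss = ddp_model(x).pow(2).mean()
                opt.zero_grad()
                loss.backward()   # gradient all-reduce happens here
                opt.step()

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()  # launch: torchrun --nproc_per_node=2 ddp_sketch.py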

    There are also techniques to split a single model up, called model sharding and checkpointing. Model sharding is where you split a single model across multiple GPUs. You're taking advantage of the compositional property of networks: as long as there isn't a residual connection spanning your split location, you can treat one network as a series of smaller networks. This has obvious drawbacks, as you need to feed one into another, so the operations have to be synchronous, but sometimes this isn't too bad. Checkpointing is very similar, but you're doing the same thing on the same GPU. Your hit here is in I/O, which may or may not be too bad with GPUDirect and highly depends on your model size (were you splitting because of batch size or because of model size?).
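
    A hedged sketch of both ideas, assuming two visible GPUs: a naive two-stage shard (cut at a non-residual boundary, hop activations between devices), and activation checkpointing via torch.utils.checkpoint, where the usual cost is recomputation rather than stored activations.

        import torch
        import torch.nn as nn
        from torch.utils.checkpoint import checkpoint_sequential

        # Model sharding: half the stack per GPU, run as a two-stage pipeline.
        stage1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
        stage2 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:1")

        x = torch.randn(16, 512, device="cuda:0")
        h = stage1(x)                 # runs on GPU 0
        y = stage2(h.to("cuda:1"))    # activation hop; stages are synchronous

        # Checkpointing: same idea on one device. Only segment-boundary
        # activations are kept; the rest are recomputed during backward.
        net = nn.Sequential(*[nn.Linear(512, 512) for _ in range(8)]).cuda()
        inp = torch.randn(16, 512, device="cuda", requires_grad=True)
        out = checkpoint_sequential(net, 4, inp)
        out.sum().backward()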

    This is all still pretty high level, but if you want to dig into it more, Meta developed a toolkit called fairseq that will do a lot of this for you, and they've optimized it:

    https://engineering.fb.com/2021/07/15/open-source/fsdp/

    https://github.com/facebookresearch/fairseq
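
    The FSDP technique from the first link is also exposed directly in PyTorch. A minimal sketch (assumes a process group is already initialized, as in the DDP example above):

        import torch
        from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

        # FSDP shards parameters, gradients, and optimizer state across
        # ranks, gathering full weights only for the layer currently computing.
        model = torch.nn.Sequential(
            torch.nn.Linear(1024, 1024),
            torch.nn.ReLU(),
            torch.nn.Linear(1024, 1024),
        ).cuda()
        fsdp_model = FSDP(model)

        opt = torch.optim.AdamW(fsdp_model.parameters(), lr=1e-4)
        loss = fsdp_model(torch.randn(8, 1024, device="cuda")).sum()
        loss.backward()
        opt.step()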

    TLDR: it really depends on your use case, but it is a good question.

  • Talk back and forth with AI like you would with a person
    1 project | /r/singularity | 7 Jul 2023
    How do they do the text-to-voice conversion so fast? https://github.com/facebookresearch/fairseq/tree/main (the open-source version takes under a minute to do text-to-voice).
  • Voice generation AI (TTS)
    3 projects | /r/ArtificialInteligence | 1 Jul 2023
    It might be worth checking out Meta's TTS, though. I haven't gotten the chance to fiddle around with it, but it looks somewhat promising: https://github.com/facebookresearch/fairseq/tree/main/examples/mms (a usage sketch follows this list).
  • Translation app with TTS (text-to-speech) for Persian?
    2 projects | /r/machinetranslation | 24 Jun 2023
    They have instructions on how to use it on the command line, and a notebook on how to use it as a Python library.
  • Why no work on open source TTS (Text to speech) models
    2 projects | /r/ArtificialInteligence | 20 Jun 2023
  • Meta's Massively Multilingual Speech project supports 1k languages using self supervised learning
    1 project | /r/DataCentricAI | 13 Jun 2023
    Github - https://github.com/facebookresearch/fairseq/tree/main/examples/mms Paper - https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/
  • AI — weekly megathread!
    2 projects | /r/artificial | 26 May 2023
    Meta released a new open-source model, Massively Multilingual Speech (MMS) that can do both speech-to-text and text-to-speech in 1,107 languages and can also recognize 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 languages out of the 7,000+ known spoken languages. [Details | Research Paper | GitHub].
  • Meta's MMS: Scaling Speech Technology to 1000+ languages (How to Run colab)
    1 project | /r/LanguageTechnology | 24 May 2023
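
Several of the posts above point at fairseq's MMS text-to-speech. A hedged usage sketch via the Hugging Face transformers port of the MMS-TTS checkpoints (not fairseq's own CLI; per-language model IDs such as facebook/mms-tts-eng are assumptions to verify against the model hub):

    import torch
    import scipy.io.wavfile
    from transformers import AutoTokenizer, VitsModel

    # MMS-TTS checkpoints are published per language (eng, fas, ...).
    tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")
    model = VitsModel.from_pretrained("facebook/mms-tts-eng")

    inputs = tokenizer("Hello from Massively Multilingual Speech.",
                       return_tensors="pt")
    with torch.no_grad():
        waveform = model(**inputs).waveform   # (batch, samples), float32

    scipy.io.wavfile.write("mms_tts_out.wav",
                           rate=model.config.sampling_rate,  # 16 kHz for MMS-TTS
                           data=waveform.squeeze().numpy())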

What are some alternatives?

When comparing Clover-Edition and fairseq you can also consider the following projects:

KoboldAI-Client - Browser-based front end for AI-assisted writing with locally running AI models

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

gpt-neo_dungeon - Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

AID2-Installer-Project - Installs AID2: Clover Edition

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Open-CYOAI-Project - Colab frontend to play the different modded versions of AI Dungeon 2. Also main Wiki of the game with info gathered from 4chan's Anons.

text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

AIDCAT - AI Dungeon Catalog Archive Toolkit

espnet - End-to-End Speech Processing Toolkit

aid_adventure_vulnerability_report - Report and source code detailing the AI Dungeon private adventure vulnerability

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration