List-of-Dirty-Naughty-Obscene-and VS List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words

Compare List-of-Dirty-Naughty-Obscene-and vs List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words and see how they differ.

List-of-Dirty-Naughty-Obscene-and: 3 mentions
List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words: 25 mentions; 2,776 stars; 1.5% growth; 0.0 activity; last commit 3 months ago; Creative Commons Attribution 4.0 license
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

List-of-Dirty-Naughty-Obscene-and

Posts with mentions or reviews of List-of-Dirty-Naughty-Obscene-and. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-23.
  • Microsoft's paper on OpenAI's GPT-4 had hidden information
    3 projects | news.ycombinator.com | 23 Mar 2023
    "The Colossal Clean Crawled Corpus, used to train a trillion parameter LM in , is cleaned, inter alia, by discarding any page containing one of a list of about 400 “Dirty, Naughty, Obscene or Otherwise Bad Words”. This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people. If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light"

    from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922

    That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...
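The C4 filtering step the quote describes (discard any page containing any word from the blocklist) can be sketched roughly as follows. This is an illustrative stand-in, not the actual C4 pipeline code, and `keep_page` and the one-word blocklist are made-up names:

```python
import re

def keep_page(text, bad_words):
    """Keep a page only if none of its words appear in the blocklist."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return tokens.isdisjoint(bad_words)

# Stand-in for the ~400-word LDNOOBW list.
bad_words = {"swastika"}
print(keep_page("A page about gardening.", bad_words))        # True
print(keep_page("A page mentioning swastika.", bad_words))    # False
```

Note the coarseness the critique points at: the whole page is dropped on a single match, regardless of context.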

  • The naughty username checking system used by Twitch
    4 projects | news.ycombinator.com | 6 Oct 2021
    The good news is that things like https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and... exist so getting a source of words to filter is easy enough. And converting numbers to letters isn't too bad.

    The hardest problem with the implementation was that with a long list you can't just search for a few dozen inappropriate words (like the Twitch implementation). It would be very expensive to do hundreds or even thousands of checks against every inappropriate word.

    The solution we came to was to truncate all the inappropriate words to either 3 or 4 letters and store them in a big set. We then take our generated strings, which are usually 11 characters, and break them up into all possible substrings of lengths 3 and 4. For example, 1a2b3c4d5e6 would be broken down into 1a2 a2b 2b3 b3c 3c4 c4d 4d5 d5e 5e6 1a2b a2b3 2b3c b3c4 3c4d c4d5 4d5e d5e6. An 11-character string always has 17 such substrings (9 of length 3 and 8 of length 4). We then check all 17 against the banned set. 17 lookups into a set is pretty cheap, and as we have expanded the word set over time (e.g. adding a new language) our performance hasn't changed.

    One drawback to our approach is false positives, but we did the math: our name space was still large enough, the cost of generating a new string was low, and customers never see rejected candidates, so throwing out false positives just isn't a big deal.
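The scheme described in this comment can be sketched as follows. The function names and example words are hypothetical illustrations, not the commenter's actual code:

```python
def build_banned_set(words):
    """Truncate each banned word to its first 3 and 4 letters."""
    banned = set()
    for w in words:
        w = w.lower()
        for n in (3, 4):
            if len(w) >= n:
                banned.add(w[:n])
    return banned

def is_clean(candidate, banned):
    """Check every length-3 and length-4 substring against the banned set."""
    s = candidate.lower()
    return not any(
        s[i:i + n] in banned
        for n in (3, 4)
        for i in range(len(s) - n + 1)
    )

banned = build_banned_set(["badword", "slur"])
print(is_clean("1a2b3c4d5e6", banned))  # True: no banned substring
print(is_clean("xxbadwxx", banned))     # False: contains "bad" and "badw"
```

Each membership test against a hash set is O(1), so the 17 lookups per candidate stay cheap no matter how large the underlying word list grows.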

  • Minority voices ‘filtered’ out of Google Natural Language Processing models
    2 projects | news.ycombinator.com | 24 Sep 2021
    I believe this is the word list that the authors are objecting to:

    https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...

List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words

Posts with mentions or reviews of List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-04.
  • Ask HN: List of Subdomains to Reserve
    4 projects | news.ycombinator.com | 4 Mar 2024
    Good point. I am already checking against the naughty-words list from here:

    https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...

  • Where is the banned word list so I can integrate it?
    1 project | /r/ecommerce | 27 Jun 2023
    https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words is one
  • We’re Washington Post reporters who analyzed Google’s C4 data set to see which websites AI uses to make itself sound smarter. Ask us Anything!
    4 projects | /r/IAmA | 16 May 2023
    We know that C4 was used to train Google’s influential T5 model, Facebook’s LLaMA, as well as the open source model Red Pajama. C4 is a very cleaned-up version of a scrape of the internet from the nonprofit Common Crawl taken in 2019. OpenAI’s model GPT-3 used a training dataset that began with 41 scrapes of the web from Common Crawl from 2016 to 2019, so I think it’s safe to say that something akin to C4 was part of GPT-3. (The researchers who originally looked into C4 argue that these issues are common to all web-scraped datasets.) When we reached out to OpenAI and Google for comment, both companies emphasized that they make extensive efforts to weed out potentially problematic data from their training sets. But within the industry, C4 is known as a heavily filtered dataset and has been criticized, in fact, for eliminating content related to LGBTQ+ identities because of its reliance on a heavy-handed blocklist. (https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words ) We are working on some reporting to try to address your last and very crucial question, but it’s an open area of research and one that even AI developers are struggling to answer.
  • TIL there's an official list of profanities ChatGPT is trained to avoid
    1 project | /r/todayilearned | 20 Apr 2023
  • Microsoft's paper on OpenAI's GPT-4 had hidden information
    3 projects | news.ycombinator.com | 23 Mar 2023
    "The Colossal Clean Crawled Corpus, used to train a trillion parameter LM in , is cleaned, inter alia, by discarding any page containing one of a list of about 400 “Dirty, Naughty, Obscene or Otherwise Bad Words”. This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people. If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light"

    from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922

    That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...

  • Rule
    1 project | /r/196 | 17 Mar 2023
    Yeah, this is Shutterstock's, which they shared.
  • If I made a game with a chatroom, what curses and slurs would I ban?
    1 project | /r/gamedev | 3 Mar 2023
    I always turn off the chat filter, so definitely let them choose whether they want it censored or not. For the actual words themselves, there are plenty of lists out there that you can use (like this one), although these are just the regular spellings; none of the circumvention methods are included.
  • Emad announces a new Stability lab with a new soon model. It looks like a Dall-e 2 style AI to me. Maybe it is our open source Dall-e 2, like KARLO. The images are very interesting. According to Emad "Soon".
    1 project | /r/StableDiffusion | 5 Jan 2023
    That it's very crudely filtered for naughty words. According to the paper, "We removed any page that contained any word on the “List of Dirty, Naughty, Obscene or Otherwise Bad Words”." That list is here. While it contains a lot of unquestionably ugly words, it also contains words like "tit".
  • I made a Stable Diffusion for Anime app in your Pocket! Running 100% offline on your Apple Devices (iPhone, iPad, Mac)
    4 projects | /r/StableDiffusion | 26 Nov 2022
    No problem! I wrote a short JSON file and Swift script to remove the NSFW words from the prompt during the image generation process, so it's not based on the negative prompt. The JSON file is a plain-text list of NSFW words, so the app can check prompts and strip unwanted terms, e.g.: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words
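The word-removal step the commenter describes (done in Swift in the actual app) can be sketched in Python; `sanitize_prompt` and the sample blocklist are illustrative assumptions, not the app's real code:

```python
def sanitize_prompt(prompt, blocklist):
    """Drop any whitespace-separated token that appears in the blocklist."""
    kept = [tok for tok in prompt.split() if tok.lower() not in blocklist]
    return " ".join(kept)

# Stand-in for a blocklist loaded from the bundled word-list file.
blocklist = {"badword"}
print(sanitize_prompt("a scenic badword landscape", blocklist))  # "a scenic landscape"
```

Exact token matching like this misses obfuscated spellings, which is the same limitation noted elsewhere in these threads about using the raw word list directly.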
  • Lewdle - A daily lewd word game
    1 project | /r/wordle | 27 Jan 2022
    This is the closest I’ve come to finding one. It’s not that great.

What are some alternatives?

When comparing List-of-Dirty-Naughty-Obscene-and and List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words you can also consider the following projects:

arxiv-latex-cleaner - arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv

google-profanity-words - Full list of bad words and top swear words banned by Google.

git-crypt - Transparent file encryption in git

following-instructions-human-feedback

rmarkdown - Dynamic Documents for R

Hashids.java - Hashids algorithm v1.0.0 implementation in Java

RedPajama-Data - The RedPajama-Data repository contains code for preparing large datasets for training large language models.

wordfilter - A small module meant for use in text generators that lets you filter strings for bad words.

maple-diffusion - Stable Diffusion inference on iOS / macOS using MPSGraph