model VS pytorch_nsfw_model

Compare model vs pytorch_nsfw_model and see what their differences are.

model

The model for filtering NSFW images backing the Wingman Jr. plugin: https://github.com/wingman-jr-addon/wingman_jr (by wingman-jr-addon)

pytorch_nsfw_model

PyTorch model for NSFW classification with usage example (by emiliantolo)
                 model                                  pytorch_nsfw_model
Mentions         7                                      1
Stars            5                                      46
Growth           -                                      -
Activity         0.0                                    10.0
Last commit      about 3 years ago                      about 5 years ago
Language         Jupyter Notebook                       -
License          Creative Commons Zero v1.0 Universal   -
Mentions - the total number of mentions of a project that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
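
The exact formula behind the activity number isn't published here. As a purely hypothetical illustration of "recent commits have higher weight than older ones," the sketch below computes a recency-weighted score in Python; the half-life decay and the 30-day constant are assumptions for illustration, not the site's actual metric.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted score: each commit's contribution halves
    every half_life_days days, so recent commits dominate."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Three commits last month outscore three commits from years ago.
recent = [datetime(2024, 4, day, tzinfo=timezone.utc) for day in (1, 8, 15)]
stale = [datetime(2019, 4, day, tzinfo=timezone.utc) for day in (1, 8, 15)]
print(activity_score(recent) > activity_score(stale))  # True
```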

model

Posts with mentions or reviews of model. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-18.
  • Show HN: Firefox Addon to Filter NSFW Content
    10 projects | news.ycombinator.com | 18 Nov 2022
    https://github.com/wingman-jr-addon/model#dataset

    Your response is interesting because it tells me you may have expected it to be in a different spot - was there a specific spot you were looking at? That might help me improve the descriptions.

  • Show HN: An AI program to check videos for NSFW content
    7 projects | news.ycombinator.com | 8 Feb 2022
    Thanks for the response, dynamite-ready. There's a lot in here, but I'll try to comment on a couple of items. Some of your suggestions I've actually thought about extensively, so perhaps you'll find the reasoning interesting.

    Regarding the current state of tech: I agree the tech still has quite a ways to go. I think one of the most interesting aspects here is how e.g. NSFW.js can get extremely high accuracy - but not necessarily perform better in the real world. I think it speaks in part to the nature of how CNNs work, the nature of the data, and the difficulty of the problem. Still, having seen how incredibly good "AI" has gotten in the last decade, I have quite a bit of hope here.

    Regarding putting it on a server: that is indeed a fair question, but my desire is to keep the scanning on the client side for the user. In fact, it was the confluence of Firefox's webRequest response filtering (which is why I didn't make a Chrome version) and TensorFlow.js that allowed me to move from dream to reality; I had been waiting for exactly that combination. I can't afford server infrastructure if the user base grows, and people don't want to route all their pictures to me. So I guess I see the current way it works as a bonus, not a flaw - but it DOES impact performance, certainly.

    Regarding data collection with respect to a server - yes, this is something I've contemplated (there's a GitHub issue if you're curious). There are, however, two things that I've long mulled over: privacy and dark psychological patterns. Let me explain a bit. On the privacy front - it is likely not legal for a user to share the image data directly due to copyright, so they need to share by URL. This can have many issues when considering e.g. authenticated services, but one big one is that the URL may have relatively sensitive user-identifying information buried in its path. I can try to be careful here, but this absolutely precludes sharing this type of URL data as an open dataset. [A sketch of URL redaction along these lines follows this post.]

    On the dark-patterns front - while I'm fine with folks wanting to submit false positives, I think there's a very real chance some will want to go flag all the images they can find that are false negatives (e.g. porn). I don't think that type of submission is particularly good for their mental health or mine. So, in general, I think user image feedback is something that would be quite powerful but needs a lot of care in how it would be approached.

    Regarding the UX - thanks! And you're welcome to try the model as well - I've tried to include enough detail and data to allow others to integrate as they wish: https://github.com/wingman-jr-addon/model/tree/master/sqrxr_... Also, let us know how things go if you try out Darknet.

    Good luck!
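
The privacy point in the post above (URLs carrying user-identifying information) is easy to demonstrate. The sketch below is hypothetical, assuming reports are shared by URL as described: it strips the query string and fragment, which commonly hold session tokens, yet the path can still identify a user, which is why such URLs can't simply be published as an open dataset. The function name and example URL are illustrative only.

```python
from urllib.parse import urlsplit, urlunsplit

def redact_report_url(url: str) -> str:
    """Hypothetical report sanitizer: keep scheme, host, and path;
    drop the query string and fragment, which often carry session
    tokens or other user identifiers."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

sanitized = redact_report_url(
    "https://cdn.example.com/u/12345/photo.jpg?session=SECRET#top")
print(sanitized)  # https://cdn.example.com/u/12345/photo.jpg
# Note the '12345' path segment can still identify a user, so even
# redacted URLs aren't safe to share openly.
```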

  • What kind of evil genius research do you do in your Lab? Or not so evil - I won’t judge.
    5 projects | /r/homelab | 10 Jan 2022
    Y'all were kind enough to help me get up and running with a Supermicro GPU server. I use it to cook up the machine learning model for a Firefox addon that blocks NSFW images client-side, Wingman Jr. Filter. Your help made a big difference in me being able to get the right box at the right price - so thanks!

pytorch_nsfw_model

Posts with mentions or reviews of pytorch_nsfw_model. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-08.
  • Show HN: An AI program to check videos for NSFW content
    7 projects | news.ycombinator.com | 8 Feb 2022
    It's interesting. I've not tested the model on anything too risque, but again, with the well-known Baywatch intro as a frame of reference, wide-angle group shots of the whole cast in their swimsuits are fine. A close-up of any single cast member in the famous red swimsuit will invariably trigger the model, male or female.

    In the blog, I suggest this could be the result of an uncurated dataset, which is one part of it. Or perhaps the dataset was fine, and this is pushing the hard limit of what ResNet50 can do (the off-the-shelf model I use for this is a ResNet50 extension).

    Some of the anomalous results are amusing. One day, I passed through a video of a female violinist in concert, and the model flagged every close-up of her as NSFW! Just those close-ups. Wide shots and close-ups of other musicians were absolutely fine.

    Again, some of that might be down to me (clunky code, a very low NSFW threshold). And I suspect the model I used was itself a PoC (https://github.com/emiliantolo/pytorch_nsfw_model) [a usage sketch follows this post]. But it does make you wonder how the bigger labs with critical products, like Palantir, handle doubts like this.
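
For reference, using a ResNet50-based classifier like pytorch_nsfw_model typically looks like the minimal sketch below: take torchvision's ResNet50, swap the final fully connected layer for the classifier head, load the published weights, and threshold the softmax output. The five class names, the weights filename, the NSFW grouping, and the 0.2 threshold are assumptions for illustration; the repo's notebook contains the actual usage example.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# ResNet50 backbone with a replaced classification head. The class
# names and the weights filename are assumptions for illustration.
CLASSES = ["drawings", "hentai", "neutral", "porn", "sexy"]
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("ResNet50_nsfw_model.pth", map_location="cpu"))
model.eval()

# Standard ImageNet preprocessing, which ResNet50 extensions usually keep.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = F.softmax(model(img), dim=1)[0]

# A low threshold (as described in the post above) trades false
# positives for recall.
THRESHOLD = 0.2
nsfw_prob = sum(p for c, p in zip(CLASSES, probs.tolist())
                if c in ("hentai", "porn", "sexy"))
print(dict(zip(CLASSES, probs.tolist())))
print("NSFW" if nsfw_prob > THRESHOLD else "OK")
```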

What are some alternatives?

When comparing model and pytorch_nsfw_model you can also consider the following projects:

WebODM - User-friendly, commercial-grade software for processing aerial imagery. 🛩

darknet - Convolutional Neural Networks

movie-parser-cli

wingman_jr - This is the official repository (https://github.com/wingman-jr-addon/wingman_jr) for the Wingman Jr. Firefox addon, which filters NSFW images in the browser fully client-side (https://addons.mozilla.org/en-US/firefox/addon/wingman-jr-filter/). Optional DNS blocking using Cloudflare's 1.1.1.1 for Families. Also, check out the blog!

movie-parser - NWJS wrapper for a wider project.

nsfw-filter - A free, open source, and privacy-focused browser extension to block “not safe for work” content built using TypeScript and TensorFlow.js.

cert-manager - Automatically provision and manage TLS certificates in Kubernetes