| | wingman_jr | pytorch_nsfw_model |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 33 | 46 |
| Growth | - | - |
| Activity | 6.2 | 10.0 |
| Latest commit | 3 months ago | about 5 years ago |
| Language | JavaScript | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
wingman_jr
Show HN: Firefox Addon to Filter NSFW Content
That's with the model, actually! It's copied in here: https://github.com/wingman-jr-addon/wingman_jr/tree/master/s...
Show HN: An AI program to check videos for NSFW content
You're right, that stuff is quite difficult. I write a Firefox addon (https://addons.mozilla.org/en-US/firefox/addon/wingman-jr-fi..., https://github.com/wingman-jr-addon/wingman_jr) and train an associated NSFW model (https://github.com/wingman-jr-addon/model). I've been at it for a few years now and have had to plug many specific edge cases.
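As a concrete illustration of what "training an associated NSFW model" can involve, here is a minimal transfer-learning sketch in PyTorch. Everything in it is an assumption for illustration (the folder layout, the two-class scheme, the hyperparameters); it is not the wingman_jr project's actual training pipeline.

```python
# Purely illustrative transfer-learning sketch (not the wingman_jr pipeline):
# fine-tune an ImageNet-pretrained ResNet50 as a two-class NSFW/SFW classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # ImageNet normalization statistics expected by the pretrained backbone
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed folder-per-class layout: data/train/nsfw/, data/train/sfw/
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-way head

# Optimize only the new head; the pretrained backbone stays effectively frozen.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Retraining only the final layer is the cheapest starting point; unfreezing more of the backbone usually helps once the new head has converged, which matters for the kind of edge cases the comment mentions.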
pytorch_nsfw_model
Show HN: An AI program to check videos for NSFW content
It's interesting. I've not tested the model on anything too risqué, but again, with the well-known Baywatch intro as a frame of reference: wide-angle group shots of the whole cast in their swimsuits are fine, but a close-up of any single cast member in the famous red swimsuit will invariably trigger the model, male or female.
In the blog, I suggest this could be the result of an uncurated dataset, which is one part of it. Or perhaps the dataset was fine, and this is pushing the hard limit of what ResNet50 can do (the off-the-shelf model I use for this is a ResNet50 extension).
Some of the anomalous results are amusing. One day, I passed through a video of a female violinist in concert, and the model flagged every close-up of her as NSFW! Just those close-ups. Wide shots and close-ups of other musicians were absolutely fine.
Again, some of that might be down to me (clunky code, a very low NSFW threshold), and I suspect the model I used was itself a PoC (https://github.com/emiliantolo/pytorch_nsfw_model). But it does make you wonder how the bigger labs with critical products, like Palantir, handle doubts like this.
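The comment above describes a frame-sampling pipeline with a low decision threshold. Below is a hedged sketch of what that might look like in PyTorch; the checkpoint filename, the five-class layout, and the class names are assumptions for illustration (they mirror common open NSFW models), not the exact pytorch_nsfw_model interface.

```python
# Hypothetical sketch: scoring sampled video frames with a ResNet50-based
# NSFW classifier and a tunable threshold. The checkpoint path and class
# names are illustrative assumptions, not the pytorch_nsfw_model API.
import cv2
import torch
import torch.nn.functional as F
from torchvision import models, transforms

CLASSES = ["drawings", "hentai", "neutral", "porn", "sexy"]  # assumed layout
NSFW_CLASSES = {"hentai", "porn", "sexy"}
THRESHOLD = 0.30  # a low threshold catches more, but over-flags close-ups

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("resnet50_nsfw.pth", map_location="cpu"))
model.eval()

def nsfw_score(frame_bgr):
    """Return the summed probability mass on the NSFW classes for one frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        probs = F.softmax(model(preprocess(rgb).unsqueeze(0)), dim=1)[0]
    return sum(probs[i].item() for i, c in enumerate(CLASSES) if c in NSFW_CLASSES)

cap = cv2.VideoCapture("input.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        score = nsfw_score(frame)
        if score >= THRESHOLD:
            print(f"frame {frame_idx}: flagged (score={score:.2f})")
    frame_idx += 1
cap.release()
```

With a threshold as low as 0.30, borderline close-ups will frequently cross the line, which is consistent with the violinist anecdote; raising the threshold trades those false positives for missed detections.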
What are some alternatives?
movie-parser - NWJS wrapper for a wider project.
darknet - Convolutional Neural Networks
model - The model for filtering NSFW images backing the Wingman Jr. plugin: https://github.com/wingman-jr-addon/wingman_jr
movie-parser-cli
nsfw-filter - A free, open-source, and privacy-focused browser extension to block “not safe for work” content, built using TypeScript and TensorFlow.js.