|  | toxicity | alibi |
|---|---|---|
| Mentions | 11 | 4 |
| Stars | 166 | 2,289 |
| Growth | 0.0% | 0.6% |
| Activity | 0.0 | 7.7 |
| Latest Commit | almost 2 years ago | 10 days ago |
| Language | Python | |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
toxicity
- Perhaps It Is a Bad Thing That the Leading AI Companies Cannot Control Their AIs
I'm a PM at a human data company (https://www.surgehq.ai) that helps the large language model companies ensure their models are safe (we're the “clever prompt engineers” who helped Redwood assess their model performance).
We actually just published a blog today that includes our perspective on building “AI red teams” and best practices for AI alignment/safety: https://www.surgehq.ai/blog/ai-red-teams-for-adversarial-tra...
- 30% of Google's Emotions Dataset Is Mislabeled
I'd love to chat. Want to reach out to the email in my profile? I'm the founder of a much higher-quality data startup (https://www.surgehq.ai), and previously built the human computation platforms at a couple FAANGs.
We work with a lot of the top AI/NLP companies and research labs, and do both "typical" data labeling work (sentiment analysis, text categorization, etc.) and a lot more advanced work (e.g., training coding assistants, evaluating the new wave of large language models, adversarial labeling) -- so not just distinguishing cats and dogs, but rather making full use of the power of the human mind!
- Building a No-Code Toxicity Classifier – By Talking to GitHub Copilot
> Rather than operating under a strict definition of toxicity, we asked our team to identify comments that they personally found toxic.
[0]: https://github.com/surge-ai/toxicity
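For anyone wanting to poke at the linked dataset, here is a minimal stdlib sketch. The two-column schema (`text`, `is_toxic`) and the label strings are assumptions about the CSV layout, not confirmed by this thread; check the repo's README before relying on them. The sample rows below are made up for illustration.

```python
# Hedged sketch: exploring a toxicity CSV with the schema assumed above.
import csv
import io

# Made-up sample standing in for a download of the repo's CSV file.
sample_csv = """text,is_toxic
"Have a great day!",Not Toxic
"You are an idiot.",Toxic
"""

# DictReader maps each row to the assumed column names.
rows = list(csv.DictReader(io.StringIO(sample_csv)))
toxic = [r["text"] for r in rows if r["is_toxic"] == "Toxic"]
print(len(rows), len(toxic))  # → 2 1
```

The same loop works unchanged on the real file by swapping `io.StringIO(sample_csv)` for `open("toxicity.csv")`, assuming the columns match.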
- Ask HN: Who is hiring? (January 2022)
Love language? So do we, and our mission is to infuse AI with that same love. At Surge, we're building the human infrastructure to power NLP — from detecting hate speech, to parsing complex documents, to injecting human values into the next wave of language models. Our first product is a platform that helps ML teams create amazing, human-powered datasets to train AI in the richness of language. We're a team of former Google, Facebook, and Airbnb engineering leads, and we work with top companies at the forefront of machine learning.

Our tech stack is Ruby on Rails, React, and Python. We're rapidly growing, and we're looking for full-stack engineers to join the team and develop our product.

To apply, please email [email protected] with a resume and 2-3 sentences describing your interest in Surge. We love personal projects and writings too!
More information: https://www.surgehq.ai/about#careers
A blog post explaining the problems we are working to solve: https://www.surgehq.ai/blog/the-ai-bottleneck-high-quality-h...
- The Toxicity Dataset – building the largest free dataset of online toxicity
- [Free] The Toxicity Dataset — building the world's largest free dataset of online toxicity [Github]
- The Toxicity Dataset — building the world's largest free dataset of online toxicity
- The Toxicity Dataset (1000 social media comments) — any ideas for interesting visualizations? [github]
- The Toxicity Dataset - free dataset of online toxicity (Github) - could be used for interesting portfolio projects
- The Toxicity Dataset — free dataset of online toxicity (Github)
alibi
- Alibi: Open-source Python lib for ML model inspection and interpretation
- Ask HN: Who is hiring? (January 2022)
Seldon | Multiple positions | London/Cambridge UK | Onsite/Remote | Full time | seldon.io
At Seldon we are building industry-leading solutions for deploying, monitoring, and explaining machine learning models. We are an open-core company with several successful open-source projects, including:
* https://github.com/SeldonIO/seldon-core
* https://github.com/SeldonIO/mlserver
* https://github.com/SeldonIO/alibi
* https://github.com/SeldonIO/alibi-detect
* https://github.com/SeldonIO/tempo
We are hiring for a range of positions, including software engineers (Go, k8s), ML engineers (Python, Go), frontend engineers (JS), UX designers, and product managers. All open positions can be found at https://www.seldon.io/careers/
- Ask HN: Who is hiring? (December 2021)
- Best alternatives to 'shap' package?
Alibi explain might be an option depending on what you are looking for https://github.com/SeldonIO/alibi
What are some alternatives?
hate-speech-and-offensive-language - Repository for the paper "Automated Hate Speech Detection and the Problem of Offensive Language", ICWSM 2017
interpret - Fit interpretable models. Explain blackbox machine learning.
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
zotero - Zotero is a free, easy-to-use tool to help you collect, organize, annotate, cite, and share your research sources.
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Fleet - Open-source platform for IT, security, and infrastructure teams. (Linux, macOS, Chrome, Windows, cloud, data center)
conductor - Conductor is a microservices orchestration engine.
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
datapane - Build and share data reports in 100% Python
causallift - CausalLift: Python package for causality-based Uplift Modeling in real-world business