synthesizer vs adversarial-robustness-toolbox

| | synthesizer | adversarial-robustness-toolbox |
|---|---|---|
| Mentions | 4 | 8 |
| Stars | 566 | 4,556 |
| Growth | - | 2.1% |
| Activity | 10.0 | 9.6 |
| Latest commit | 5 months ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
synthesizer
- Phibrarian Alpha - the first model checkpoint from SciPhi's Mistral-7b
The run is a few days in on an 8x 80GB A100 cluster, and I quietly released the first-epoch checkpoint here. I am building the model alongside our synthetic data efforts at SciPhi.
- With LLMs we can create a fully open-source Library of Alexandria.
I am updating because we have another interesting result: by going deeper instead of broader, and by combining new techniques like RAG, we can make incredibly descriptive textbooks. This one was generated by an almost fully automated AI pipeline. The pipeline goes MIT OCW -> Syllabus -> Table of Contents -> Textbook, with the last step grounded through vector lookups over the whole of Wikipedia (a rough sketch of that flow follows these mentions).
- Textbook was authored with an AI pipeline
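As a rough illustration of the chained flow described above, each stage feeds the next, and the final chapter-writing step is grounded on retrieved passages. The helper names below are placeholders for an LLM client and a Wikipedia vector index, not SciPhi's actual code:

```python
# Structural sketch of a "course page -> syllabus -> table of contents -> textbook"
# pipeline. llm_complete and retrieve_wikipedia_passages are hypothetical stubs;
# swap in a real LLM client and vector store.

def llm_complete(prompt: str) -> str:
    # Placeholder: in a real pipeline this would call a language model.
    return f"[LLM output for: {prompt[:40]}...]"

def retrieve_wikipedia_passages(query: str, k: int = 3) -> list[str]:
    # Placeholder: in a real pipeline this would query a Wikipedia embedding index.
    return [f"[passage {i} about {query}]" for i in range(k)]

def generate_textbook(course_page: str) -> str:
    # 1. Course page (e.g. an MIT OCW listing) -> syllabus
    syllabus = llm_complete(f"Write a syllabus for this course:\n{course_page}")

    # 2. Syllabus -> chapter-level table of contents
    toc = llm_complete(f"Expand this syllabus into a table of contents:\n{syllabus}")

    # 3. Each heading -> chapter text, grounded on retrieved passages
    chapters = []
    for heading in toc.splitlines():
        if not heading.strip():
            continue
        context = "\n".join(retrieve_wikipedia_passages(heading))
        chapters.append(
            llm_complete(f"Write the chapter '{heading}' using only this context:\n{context}")
        )
    return "\n\n".join(chapters)
```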
- Looking for fine-tuners who want to build an exciting new model
The timing is great, because yesterday I introduced RAG into the synthetic generation pipeline [here](https://github.com/emrgnt-cmplxty/sciphi/tree/main). I'm in the process of indexing the entirety of this PyPI dataset with ChromaDB in the cloud; it will be relatively easy to plug this into SciPhi when done.
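For context, indexing and querying a document set with ChromaDB generally looks like the sketch below. The collection name and toy records are placeholders, not the actual PyPI dataset or cloud deployment mentioned above:

```python
import chromadb

# In-memory client for illustration; a persistent or hosted client would be used in practice.
client = chromadb.Client()
collection = client.create_collection(name="pypi-packages")

# Index a couple of toy package descriptions (stand-ins for the real dataset).
collection.add(
    ids=["requests", "numpy"],
    documents=[
        "requests: a simple HTTP library for Python.",
        "numpy: the fundamental package for array computing in Python.",
    ],
)

# Retrieve the most relevant package for a natural-language query.
results = collection.query(query_texts=["library for making HTTP calls"], n_results=1)
print(results["ids"][0])
```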
adversarial-robustness-toolbox
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often?
- [D]: How safe is it to just use a stranger's model?
- [D] Does anyone care about adversarial attacks anymore?
Check out this project: https://github.com/Trusted-AI/adversarial-robustness-toolbox
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference
- Introduction to Adversarial Machine Learning
Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
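As a rough illustration of the evasion side, an attack with ART's scikit-learn wrapper looks roughly like this; the toy dataset, attack choice, and eps value are arbitrary picks for the sketch, not taken from the mention above:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the fitted model so ART attacks can query its predictions and gradients.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x)

print(f"clean accuracy: {model.score(x, y):.2f}")
print(f"adversarial accuracy: {model.score(x_adv, y):.2f}")
```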
- [D] Testing a model's robustness to adversarial attacks
Depending on what attacks you want, I've found both https://github.com/cleverhans-lab/cleverhans and https://github.com/Trusted-AI/adversarial-robustness-toolbox to be useful.
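One simple way to run such a test with ART is to sweep the attack budget and watch accuracy fall. A minimal sketch, assuming a model already wrapped in an ART classifier (as in the example further up):

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod

def accuracy_under_attack(classifier, x, y, eps_values):
    """Return clean accuracy followed by accuracy at each perturbation budget."""
    preds = np.argmax(classifier.predict(x), axis=1)
    curve = [float(np.mean(preds == y))]
    for eps in eps_values:
        x_adv = FastGradientMethod(estimator=classifier, eps=eps).generate(x=x)
        adv_preds = np.argmax(classifier.predict(x_adv), axis=1)
        curve.append(float(np.mean(adv_preds == y)))
    return curve

# Example: accuracy_under_attack(classifier, x, y, eps_values=[0.1, 0.3, 0.5])
```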
What are some alternatives?
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs
autoscraper - A Smart, Automatic, Fast and Lightweight Web Scraper for Python
auto-attack - Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
pytorch-lightning - The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate. [Moved to: https://github.com/PyTorchLightning/pytorch-lightning]
alpha-zero-boosted - A "build to learn" Alpha Zero implementation using Gradient Boosted Decision Trees (LightGBM)
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies
stableagents - Stable, Semi-Autonomous, Reliable and Steerable LLM Agents for production use cases.
waf-bypass - Check your WAF before an attacker does