T-RAGS
aegis
T-RAGS | aegis | |
---|---|---|
5 | 4 | |
310 | 246 | |
0.0% | 2.0% | |
7.6 | 5.6 | |
about 1 month ago | 4 months ago | |
Python | Python | |
Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
T-RAGS
- Safeguard OpenAI Apps with Guardrail ML’s Firewall
-
[P] Llama-2 4bit fine-tune with dolly-15k on Colab (Free)
Colab: https://colab.research.google.com/drive/134o_cXcMe_lsvl15ZE_4Y75Kstepsntu?usp=sharing GitHub: https://github.com/kw2828/guardrail-ml YouTube Overview: https://www.youtube.com/watch?v=o5bU1H-6TqM&ab_channel=GenerativeAIEntrepreneurs
- [Machine Learning] [P] Série Dolly 2.0 - cahier Colab
-
[P] Dolly 2.0 Series - Colab Notebook
Also, starting a Dolly 2.0 Series for fine-tuning, applying to use cases, and deploying: https://github.com/kw2828/Dolly-2.0-Series
-
[N] Dolly 2.0, an open source, instruction-following LLM for research and commercial use
I'll be also putting together a Dolly 2.0 series here: https://github.com/kw2828/Dolly-2.0-Series
aegis
-
Show HN: Firewall for LLMs–Guard Against Prompt Injection, PII Leakage, Toxicity
Hey HN,
We're building Aegis, a firewall for LLMs: a guard against adversarial attacks, prompt injections, toxic language, PII leakage, etc.
One of the primary concerns entwined with building LLM applications is the chance of attackers subverting the model’s original instructions via untrusted user input, which unlike in SQL injection attacks, can’t be easily sanitized. (See https://greshake.github.io/ for the mildest such instance.) Because the consequences are dire, we feel it’s better to err on the side of caution, with something mutli-pass like Aegis, which consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model.
We'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://github.com/automorphic-ai/aegis.
If you want to play around with it without creating an account, try the playground: https://automorphic.ai/playground.
If you're interested in or need help using Aegis, have ideas, or want to contribute, join our [Discord](https://discord.com/invite/E8y4NcNeBe), or feel free to reach out at [email protected]. Excited to hear your feedback!
Repository: https://github.com/automorphic-ai/aegis
- We’ve built a free firewall for LLMs (Aegis) — Say goodbye to prompt injections, prompt leakage, and toxic language (100+ stars)
-
Try your best prompts—especially prompt injections—against Aegis, our firewall for LLMs
We've built Aegis, a firewall for LLMs (a guard against malicious inputs, prompt injections, toxic language, etc), and we'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://github.com/automorphic-ai/aegis. Internally, it consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model.
-
Creating a Firewall for LLMs
Hey guys, we're creating aegis, a self-hardening firewall for large language models. Protect your models from adversarial attacks: prompt injections, prompt and PII leakage, and more.
What are some alternatives?
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
odin-slides - This is an advanced Python tool that empowers you to effortlessly draft customizable PowerPoint slides using the Generative Pre-trained Transformer (GPT) of your choice. Leveraging the capabilities of Large Language Models (LLM), odin-slides enables you to turn the lengthiest Word documents into well organized presentations.
Open-Instructions - Open-Instructions: A Pavilion of recent Open Source GPT Projects for decentralized AI.
llm-guard - The Security Toolkit for LLM Interactions
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
llama.cpp - LLM inference in C/C++
llm-api-starterkit - Beginner-friendly repository for launching your first LLM API with Python, LangChain and FastAPI, using local models or the OpenAI API.
awesome-llmops - Awesome series for LLMOps
vibraniumdome - LLM Security Platform.