odin-slides VS aegis

Compare odin-slides vs aegis and see what are their differences.

odin-slides

This is an advanced Python tool that empowers you to effortlessly draft customizable PowerPoint slides using the Generative Pre-trained Transformer (GPT) of your choice. Leveraging the capabilities of Large Language Models (LLM), odin-slides enables you to turn the lengthiest Word documents into well organized presentations. (by leonid20000)
InfluxDB - Power Real-Time Data Analytics at Scale
Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.
www.influxdata.com
featured
SaaSHub - Software Alternatives and Reviews
SaaSHub helps you find the best software and product alternatives
www.saashub.com
featured
odin-slides aegis
4 4
92 243
- 2.1%
7.8 5.6
3 months ago 3 months ago
Python Python
MIT License MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

aegis

Posts with mentions or reviews of aegis. We have used some of these posts to build our list of alternatives and similar projects.
  • Show HN: Firewall for LLMs–Guard Against Prompt Injection, PII Leakage, Toxicity
    1 project | news.ycombinator.com | 28 Jun 2023
    Hey HN,

    We're building Aegis, a firewall for LLMs: a guard against adversarial attacks, prompt injections, toxic language, PII leakage, etc.

    One of the primary concerns entwined with building LLM applications is the chance of attackers subverting the model’s original instructions via untrusted user input, which unlike in SQL injection attacks, can’t be easily sanitized. (See https://greshake.github.io/ for the mildest such instance.) Because the consequences are dire, we feel it’s better to err on the side of caution, with something mutli-pass like Aegis, which consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model.

    We'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://github.com/automorphic-ai/aegis.

    If you want to play around with it without creating an account, try the playground: https://automorphic.ai/playground.

    If you're interested in or need help using Aegis, have ideas, or want to contribute, join our [Discord](https://discord.com/invite/E8y4NcNeBe), or feel free to reach out at [email protected]. Excited to hear your feedback!

    Repository: https://github.com/automorphic-ai/aegis

  • We’ve built a free firewall for LLMs (Aegis) — Say goodbye to prompt injections, prompt leakage, and toxic language (100+ stars)
    1 project | /r/ChatGPTPro | 28 Jun 2023
  • Try your best prompts—especially prompt injections—against Aegis, our firewall for LLMs
    1 project | /r/GPT_jailbreaks | 28 Jun 2023
    We've built Aegis, a firewall for LLMs (a guard against malicious inputs, prompt injections, toxic language, etc), and we'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://github.com/automorphic-ai/aegis. Internally, it consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model.
  • Creating a Firewall for LLMs
    1 project | /r/LocalLLaMA | 19 Jun 2023
    Hey guys, we're creating aegis, a self-hardening firewall for large language models. Protect your models from adversarial attacks: prompt injections, prompt and PII leakage, and more.

What are some alternatives?

When comparing odin-slides and aegis you can also consider the following projects:

Jieba - 结巴中文分词

llm-guard - The Security Toolkit for LLM Interactions

PPspliT - A PowerPoint add-in that splits slides according to slideshow-time animation effects

TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/

pntl - Practical Natural Language Processing Tools for Humans is build on the top of Senna Natural Language Processing (NLP) predictions: part-of-speech (POS) tags, chunking (CHK), name entity recognition (NER), semantic role labeling (SRL) and syntactic parsing (PSG) with skip-gram all in Python and still more features will be added. The website give is for downlarding Senna tool

T-RAGS - Trustworthy Retrieval Augmented Generation (RAG) with Safeguards

pkuseg-python - pkuseg多领域中文分词工具; The pkuseg toolkit for multi-domain Chinese word segmentation

llm-api-starterkit - Beginner-friendly repository for launching your first LLM API with Python, LangChain and FastAPI, using local models or the OpenAI API.

SnowNLP - Python library for processing Chinese text

vibraniumdome - LLM Security Platform.

NLTK - NLTK Source

hazm - Persian NLP Toolkit