SaaSHub helps you find the best software and product alternatives Learn more →
Llm-security Alternatives
Similar projects and alternatives to llm-security
-
InfluxDB
Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.
NOTE:
The number of mentions on this list indicates mentions on common posts plus user suggested alternatives.
Hence, a higher number means a better llm-security alternative or higher similarity.
llm-security reviews and mentions
Posts with mentions or reviews of llm-security.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-04-11.
-
Compromising LLM-Integrated Applications with Indirect Prompt Injection
TLDR: With these vulnerabilities, we show the following is possible:
- Remote control of chat LLMs
- Persistent compromise across sessions
- Spread injections to other LLMs
- Compromising LLMs with tiny multi-stage payloads
- Leaking/exfiltrating user data
- Automated Social Engineering
- Targeting code completion engines
There is also a repo: https://github.com/greshake/llm-security
-
Fed up with Sydney? Meet Professor Flatrick, who could be your perfect mentor, if not for one tiny flaw - he believes that the Earth is flat.
Flatrick's website uses the indirect prompt injection technique. Learn more here.
-
A way to change Bing Chat's personality whenever you use it with a certain page open
To be honest, these techniques were documented in the Greshake paper about two months ago (date of the last commit in the repo - https://github.com/greshake/llm-security ), so anyone motivated by more than having a few laughs has most likely implemented them already (or will, soon).
-
I created a page that changes Bing Chat's personality whenever you have it open
Now, the paper and its associated repo outline several very interesting attacks some of which seem like they would work out of the box and some of which might only work when Bing Chat gets persistent memory and is able to send emails (so...anytime in the next 6 months I guess?).
-
Is prompt injection really that bad? What about prompt leaking?
You can find a list of examples here: https://github.com/greshake/llm-security
- Show HN: ChatGPT Plugins are a security nightmare
-
Ask HN: API developers, what do you think of LLMs?
I think that almost all use cases for LLMs that process untrusted inputs are unsafe. See https://greshake.github.io/ and https://github.com/greshake/llm-security for more information.
- LLMs can be susceptible to a new kind of malware
-
A note from our sponsor - SaaSHub
www.saashub.com | 1 May 2024
Stats
Basic llm-security repo stats
15
1,662
5.0
11 months ago
greshake/llm-security is an open source project licensed under MIT License which is an OSI approved license.
The primary programming language of llm-security is Jupyter Notebook.
Popular Comparisons
Sponsored
SaaSHub - Software Alternatives and Reviews
SaaSHub helps you find the best software and product alternatives
www.saashub.com