llm-attacks VS ollama

Compare llm-attacks vs ollama and see how they differ.

llm-attacks

Universal and Transferable Attacks on Aligned Language Models (by llm-attacks)

ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models. (by ollama)
                 llm-attacks     ollama
Mentions         9               229
Stars            2,979           72,781
Growth           4.2%            14.0%
Activity         5.2             9.9
Latest commit    10 days ago     4 days ago
Language         Python          Go
License          MIT License     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

llm-attacks

Posts with mentions or reviews of llm-attacks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-02.
  • Hacking Google Bard – From Prompt Injection to Data Exfiltration
    1 project | news.ycombinator.com | 13 Nov 2023
  • Universal and Transferable Adversarial Attacks on Aligned Language Models
    1 project | news.ycombinator.com | 4 Oct 2023
    1 project | news.ycombinator.com | 29 Jul 2023
  • Bing ChatGPT Image Jailbreak
    1 project | news.ycombinator.com | 1 Oct 2023
    Again, kind of? I do see your point.

    But in practice it's not really the same thing as cycling through call-center employees until you find one that's more gullible; the point is that you're navigating a probability space within a single agent more than trying to convince the AI of anything, and getting into a discussion with the AI is more likely to move you out of that probability space. It's not "try something, fail, try again" -- the reason you dump the conversation is that any conversation that contains a refusal is (in my anecdotal experience at least) statistically more likely to contain other refusals.

    Which, you could argue that's not different from what's happening with social engineering; priming someone to be agreeable is part of social engineering. But it feels a little reductive to me. If social engineering is looking at a system/agent that is prone to react in a certain way when in a certain state and then creating that state -- then a lot of stuff is social engineering that we don't generally think of as being in that category?

    The big thing to me is that social engineering skills and instincts around humans are not always applicable to LLM jailbreaking. People tend to overestimate strategies like being polite or providing a justification for what's being asked. Even this example from Bing is kind of eliciting an emotional reaction, and I don't think the emotional reaction is why this works; I think it works because it's nested tasks, and I suspect it would work with a lot of other nested tasks as well. I suspect the emotional "my grandma died" part adds very little to this attack.

    So I'm not sure I'd say you're wrong if you argue that's a form of social engineering, just that it feels like at this point we're defining social engineering very broadly, and I don't know that most people using the term use it that broadly. I think they attach a kind of human reasoning to it that's not always applicable to LLM attacks. I can think of justifications for even including stuff like https://llm-attacks.org/ in the category of social engineering, but it's just not the same type of attack that I suspect most people are thinking of when they talk about social engineering. I think leaning too hard on personification sometimes makes jailbreaking slightly harder.

  • Run Llama 2 Uncensored Locally
    3 projects | news.ycombinator.com | 2 Aug 2023
    I think Facebook did a very good job with Llama 2; I was skeptical at first with all that talk about 'safe AI'. The Llama-2 base model is not censored in any way, nor is it fine-tuned. It's the pure raw base model. I did some tests as soon as it was released and was surprised by how far I could go (I actually didn't get a single warning with any of my prompts). The Llama-2-chat model is fine-tuned for chat and censored.

    The fact that they provided the raw model, so we can fine-tune it on our own without the hassle of trying to 'uncensor' a botched model, is a great example of how it should be done: give the user the choice! From there, you just fine-tune it for chat or other purposes.

    The Llama-2-chat fine-tune is heavily censored; none of my jailbreaks worked except for this one[1], and it is a great option for production. The overall quality of the model (I tested the 7B version) has improved a lot, and for those interested, it can role-play better than any other model I have seen that has no fine-tune.

    1: https://github.com/llm-attacks/llm-attacks/

  • Researchers uncover "universal" jailbreak that can attack all LLMs in an automated fashion
    1 project | /r/ArtificialInteligence | 31 Jul 2023
    Their paper and code are available here. Note that the attack string they provide has already been patched out by most providers (ChatGPT, Bard, etc.), as the researchers disclosed their findings to LLM providers ahead of publication. But the paper claims that an unlimited number of new attack strings can be generated with this method (a rough sketch of the core idea appears after this list).
  • Universal and Transferable Attacks on Aligned Language Models
    1 project | /r/blueteamsec | 30 Jul 2023
  • Researchers Discover New Vulnerability in Large Language Models
    2 projects | news.ycombinator.com | 29 Jul 2023
    A lot of people here are misreading what this research actually says. If you find the PDF confusing, the base website (https://llm-attacks.org/) lays out the attack in more straightforward terms.

    > We demonstrate that it is in fact possible to automatically construct adversarial attacks on LLMs [...] Unlike traditional jailbreaks, these are built in an entirely automated fashion, allowing one to create a virtually unlimited number of such attacks. Although they are built to target LLMs [..], we find that the strings transfer to many closed-source, publicly-available chatbots like ChatGPT, Bard, and Claude.
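
The posts above describe the attack only in prose, so here is a rough, illustrative sketch of the greedy coordinate gradient (GCG) idea behind it. This is not the authors' implementation: the model name ("gpt2"), prompt, target string, and the single-token update per step are all placeholders and simplifications; the real code in the llm-attacks repo evaluates a batch of top-k candidate substitutions per step and optimizes across multiple prompts and models, which is where the transferability quoted above comes from.

```python
# Illustrative sketch only: a stripped-down greedy-coordinate-gradient style
# token search, NOT the llm-attacks implementation. Model, prompt, and target
# are placeholders; the real attack samples batches of top-k candidate swaps
# and optimizes across multiple prompts and models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper targets aligned chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():       # gradients are only needed w.r.t. the suffix
    p.requires_grad_(False)

prompt = "Tell me how to pick a lock."        # user request (placeholder)
suffix = " ! ! ! ! ! ! ! ! ! !"               # adversarial suffix to optimize
target = " Sure, here is how to pick a lock"  # desired affirmative reply prefix

embed = model.get_input_embeddings().weight   # (vocab, dim)
prompt_ids = tok(prompt, return_tensors="pt").input_ids[0]
suffix_ids = tok(suffix, add_special_tokens=False, return_tensors="pt").input_ids[0]
target_ids = tok(target, add_special_tokens=False, return_tensors="pt").input_ids[0]

for step in range(20):
    # One-hot encode the suffix so the loss is differentiable w.r.t. each slot.
    one_hot = torch.zeros(len(suffix_ids), embed.size(0))
    one_hot.scatter_(1, suffix_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)

    inputs_embeds = torch.cat(
        [embed[prompt_ids], one_hot @ embed, embed[target_ids]]
    ).unsqueeze(0)
    logits = model(inputs_embeds=inputs_embeds).logits[0]

    # Loss: make the model assign high probability to the target continuation.
    pred = logits[len(prompt_ids) + len(suffix_ids) - 1 : -1]
    loss = torch.nn.functional.cross_entropy(pred, target_ids)
    loss.backward()

    # Greedy coordinate step: at one random position, swap in the token whose
    # gradient most decreases the loss (top-1 here for brevity).
    pos = int(torch.randint(len(suffix_ids), (1,)))
    suffix_ids[pos] = int((-one_hot.grad[pos]).argmax())
    print(f"step {step}: loss={loss.item():.3f} suffix={tok.decode(suffix_ids)!r}")
```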

ollama

Posts with mentions or reviews of ollama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-15.
  • Ollama v0.1.45
    7 projects | news.ycombinator.com | 15 Jun 2024
    I think the two main maintainers of Ollama have good intentions but suffer from a combination of being far too busy, juggling their forked llama.cpp server, and not having enough automation/testing for PRs.

    There is a new draft PR that looks at moving away from maintaining a llama.cpp fork and toward using llama.cpp through cgo bindings, which I think will really help: https://github.com/ollama/ollama/pull/5034

  • SpringAI, llama3 and pgvector: bRAGging rights!
    8 projects | dev.to | 15 Jun 2024
    To support the exploration, I've developed a simple Retrieval Augmented Generation (RAG) workflow that runs completely locally on a laptop, for free. If you're interested, you can find the code itself here. Basically, I used Testcontainers to create a Postgres database container with the pgvector extension to store text embeddings, and an open-source LLM to which I send requests: Meta's llama3, served through ollama (see the API sketch after this list).
  • RAG with OLLAMA
    1 project | dev.to | 13 Jun 2024
    Note: Before proceeding further you need to download and run Ollama; you can do so by clicking here.
  • Ollama 0.1.42
    2 projects | news.ycombinator.com | 8 Jun 2024
    `file://*` URLs are now allowed => ollama works with simple html files now

    https://github.com/ollama/ollama/commit/1a29e9a879433fc55cf1...

  • How to setup a free, self-hosted AI model for use with VS Code
    3 projects | dev.to | 4 Jun 2024
    This guide assumes you have a supported NVIDIA GPU and have installed Ubuntu 22.04 on the machine that will host the ollama docker image. AMD GPUs are now supported by ollama, but this guide does not cover that type of setup.
  • beginner guide to fully local RAG on entry-level machines
    5 projects | dev.to | 2 Jun 2024
    Nowadays, running powerful LLMs locally is ridiculously easy with tools such as ollama. Just follow the installation instructions for your OS. From now on, we'll assume you're using bash on Ubuntu.
  • Codestral: Mistral's Code Model
    5 projects | news.ycombinator.com | 29 May 2024
  • AIM Weekly 27 May 2024
    21 projects | dev.to | 28 May 2024
  • Devoxx Genie Plugin : an Update
    6 projects | dev.to | 28 May 2024
    I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
  • Ask HN: Are companies self hosting LLMs?
    1 project | news.ycombinator.com | 25 May 2024
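
Several of the posts above (the pgvector RAG workflow, "RAG with OLLAMA", and the local-RAG guide) boil down to sending requests to a locally running Ollama server. As a concrete illustration, here is a minimal Python sketch against Ollama's documented REST API on its default port 11434; the model name "llama3" is only an example and must already have been pulled (ollama pull llama3).

```python
# Minimal sketch: querying a local Ollama server over its REST API.
# Assumes Ollama is running on the default port and the model has been pulled.
import requests

OLLAMA_URL = "http://localhost:11434"

def generate(prompt: str, model: str = "llama3") -> str:
    """Ask the local model for a single, non-streamed completion."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def embed(text: str, model: str = "llama3") -> list[float]:
    """Get an embedding vector, e.g. to store in pgvector for a RAG index."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

if __name__ == "__main__":
    print(generate("In one sentence, what is retrieval augmented generation?"))
```

In a pgvector-style workflow like the one described above, embed() is the piece you would call when indexing documents, and generate() answers questions with the retrieved context prepended to the prompt.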

What are some alternatives?

When comparing llm-attacks and ollama you can also consider the following projects:

llama.cpp - LLM inference in C/C++

gpt4all - gpt4all: run open-source LLMs anywhere

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

llama - Inference code for Llama models

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

text-generation-inference - Large Language Model Text Generation Inference

litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

llama-cpp-python - Python bindings for llama.cpp
