RedPajama-Data VS llama

Compare RedPajama-Data vs llama and see what their differences are.

RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models. (by togethercomputer)

llama

Inference code for Llama models (by meta-llama)
                 RedPajama-Data        llama
Mentions         19                    184
Stars            4,374                 53,371
Growth           3.1%                  3.0%
Activity         6.0                   8.1
Latest commit    about 2 months ago    14 days ago
Language         Python                Python
License          Apache License 2.0    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

RedPajama-Data

Posts with mentions or reviews of RedPajama-Data. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • Choose Your Weapon: Survival Strategies for Depressed AI Academics
    1 project | news.ycombinator.com | 3 Apr 2024
    https://github.com/togethercomputer/RedPajama-Data

    Even more than that, this is web-scraped data. There are trillions of valuable tokens' worth of text from the likes of PDFs, ebooks, and other documents that essentially have no web presence otherwise.

    https://annas-archive.org/llm

  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    The initiative has expanded to include partners like Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and MILA Québec AI Institute. In April 2023, they released a 1.2 trillion token dataset, mirroring LLaMA’s dataset, for training their models. These models, with parameters ranging from 3 to 7 billion, were released in September under the open-source Apache 2.0 license.
  • AI will enable mass spying
    1 project | news.ycombinator.com | 5 Dec 2023
    There's a lot of speculation in the comments, so I want to talk about the technology that we have __TODAY__. I post a lot about being in ML research; while my focus is on image generation, I'm working with another team on a different task, which I'm not going to state explicitly for obvious reasons.

    What can AI/ML do __today__?

    We have lots of ways to track people around a building or city. The challenge is to do these tasks through multi-camera systems. This includes things like people tracking (a person with a random ID that stays consistent across cameras), face identification (a more specific representation that is independent of clothing, and which usually identifies the former), gait tracking (how one walks), and device tracking (based on Bluetooth, Wi-Fi, and cellular). There is a lot of mixed success with these tools, but I'll tell you the part that should concern you: right now these are mostly ResNet50 models, datasets are small, and they are not using advanced training techniques. That is changing. There are legal issues and datasets are becoming proprietary, but the size and frequency of data gathering is growing.

    I'm not going to talk about social media because the metadata problem is already a well-discussed one; you all have already made your decisions and we've witnessed the results of those decisions. I'm also not going to talk about China, the most surveilled country in the world, or the UK, or any of that, for similar reasons. We'll keep the discussion general, invariant to country.

    What I will talk about is that modern ML has greatly accelerated the data gathering sector. Your threat models have changed from governments rushing to gather all the data that they can, to big companies joining the game, to now small mom and pop shops doing so. I __really__ implore you all to look at what's in that dataset[0]. There are 5B items, and this tool helps retrieve them based on CLIP embeddings. You might think "oh yes, Google can already do this" but the difference is that you can't download Google. Google does not give you 16.5TB of CLIP-filtered image, text, & metadata. Or look into the RedPajama dataset[1], which has >30T tokens and 5TB of storage. With 32k tokens being about 50 pages, that's about 47 billion pages. That is, a stack of paper nearly 5000km tall, reaching more than ten times the altitude of the ISS and taller than the diameter of the moon. I know we all understand that there's big data collection, but do you honestly understand how big these numbers are? I wouldn't even claim to, because I cannot accurately conceptualize the size of the moon nor the distance to the ISS. They just roll into the "big" bin in my brain.

    Today, these systems can track you with decent accuracy even if you use basic obfuscation techniques like glasses, hats, or even a surgical mask. Today we can track you not just by image, but by how you walk, and can with moderate success do this through walls (meaning there is no camera to spot if you want to know whether you're being tracked). Today, these systems can de-anonymize you through unique text patterns that you use (see the Enron dataset, but at scale). Today, these machines can produce uncanny-valley replicas of your speech and text. Today we can make images of people that are convincingly real. Today, these tools aren't exclusive to governments or trillion dollar corporations, but available to any person that is willing to spend a few thousand dollars on compute.

    I don't want to paint this as a picture of doom and gloom. These tools are amazing and have the potential to do extraordinary good, at levels that would have been unimaginable only a few decades ago. Even many of the tools that can invade your privacy have benefits in some ways; you just need to consider the context. You cannot build a post-scarcity society when you require humans to monitor all stores.

    But like Uncle Ben says, with great power comes great responsibility. A technology that has the capacity to do tremendous good also has the power to do tremendous horrors.

    The choice is ours, and the latter prevails when we are not open. We must ever push for these tools to be used for good, because with them we can truly do amazing things. We do not need AGI to create a post-scarcity world, and I have no doubt that were this to become our primary goal, we could easily reach it within our lifetime without becoming a sci-fi dystopia, all while tackling existential issues such as climate. To poke the bear a little, I'd argue that if your country wants to show dominance and superiority on the global stage, it is done not through military power but through technology. You will win the culture war of all culture wars, and whoever creates the post-scarcity world will be a country that will never be forgotten by time. Lift a billion people out of poverty? Try lifting 8 billion not just out of poverty, but into the lower middle class, where no child dreams of being hungry. That is something humans will never forget. So maybe this should be our cold war, not the one in the Pacific. If you're so great, truly, truly show me how superior your country/technology/people are. This is a battle that can be won by anyone at this point, not just China vs the US; even any European power has a chance to win.

    [0] https://rom1504.github.io/clip-retrieval/

    [1] https://github.com/togethercomputer/RedPajama-Data
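
    A quick back-of-the-envelope check of the size figures in the post above (a sketch; the ~50 pages per 32k tokens is the commenter's own figure, and the ~0.1 mm sheet thickness is an assumption):

        # Rough arithmetic behind the "~47 billion pages" claim for RedPajama-v2.
        tokens = 30e12                        # >30T tokens
        pages = tokens / 32_000 * 50          # ~50 pages per 32k tokens (assumed)
        sheet_mm = 0.1                        # assumed thickness of one sheet of paper
        stack_km = pages * sheet_mm / 1e6     # mm -> km

        iss_altitude_km = 400                 # approximate ISS orbital altitude
        moon_diameter_km = 3_475              # approximate lunar diameter

        print(f"pages: {pages:.3g}")                                    # ~4.69e10, i.e. ~47 billion
        print(f"stack height: {stack_km:,.0f} km")                      # ~4,700 km
        print(f"vs ISS altitude: {stack_km / iss_altitude_km:.0f}x")    # ~12x
        print(f"vs Moon diameter: {stack_km / moon_diameter_km:.1f}x")  # ~1.3x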

  • [R] RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models
    1 project | /r/MachineLearning | 1 Nov 2023
    GitHub: https://github.com/togethercomputer/RedPajama-Data
  • RedPajama v2 Open Dataset with 30T Tokens for Training LLMs
    1 project | news.ycombinator.com | 30 Oct 2023
    Thanks for the suggestion! We will add this to the pool of features for a future release. (We are currently running the existing 40+ annotations on the `tail` partitions.)

    If you are interested in contributing the code for these features, feel free to open a PR against https://github.com/togethercomputer/RedPajama-Data! Otherwise we will try a best-effort implementation :) but we hope that this can become a community effort.

    (Feel free to create more issues on GitHub for us to keep track. I created one for this: https://github.com/togethercomputer/RedPajama-Data/issues/76)

  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    The hallucinations are coming from the LLM interpolating from the training data, substantial portions of which are scraped off the internet. They can't come from other people's prompts, because those never leave their devices (this app makes no internet connections).
  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Compare it to RedPajama, which has scripts only for preprocessing.
  • The Pile: An 800GB Dataset of Diverse Text for Language Modeling
    1 project | news.ycombinator.com | 10 Jun 2023
    I tried to find out how many "tokens" (I know: depends on the tokenizer) "The Pile" has but couldn't find it.

    As far as I understand, RedPajama has 1.2T tokens (https://github.com/togethercomputer/RedPajama-Data), and its README has a table listing the main parts and how many tokens each part has.

  • Dataset prep/cleaning
    1 project | /r/LocalLLaMA | 1 Jun 2023
    Then I performed simple replacements on special characters and formatting, and used clean_copyright_comments found in https://github.com/togethercomputer/RedPajama-Data/blob/main/data_prep/github/github_clean_dedup_local.py
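
    A minimal sketch of that kind of cleaning pass (the patterns and helper below are illustrative, not the repository's actual clean_copyright_comments implementation):

        import re

        # Illustrative patterns only; RedPajama-Data's real helper targets copyright
        # headers in scraped source code files.
        COPYRIGHT_RE = re.compile(r"^\s*(#|//|\*)\s*copyright.*$", re.IGNORECASE | re.MULTILINE)
        SPECIAL_CHARS = {"\u00a0": " ", "\u200b": "", "\ufeff": ""}   # nbsp, zero-width space, BOM

        def clean_text(text: str) -> str:
            """Drop copyright comment lines and normalize a few special characters."""
            text = COPYRIGHT_RE.sub("", text)
            for bad, good in SPECIAL_CHARS.items():
                text = text.replace(bad, good)
            # Collapse the blank lines left behind by the removals.
            return re.sub(r"\n{3,}", "\n\n", text).strip()

        print(clean_text("# Copyright 2023 Example Corp\ndef f():\n    return 1\n"))
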
  • We’re Washington Post reporters who analyzed Google’s C4 data set to see which websites AI uses to make itself sound smarter. Ask us Anything!
    4 projects | /r/IAmA | 16 May 2023
    We know that C4 was used to train Google’s influential T5 model, Facebook’s LLaMA, as well as the open source model Red Pajama. C4 is a very cleaned-up version of a scrape of the internet from the non-profit CommonCrawl taken in 2019. OpenAI’s model GPT-3 used a training dataset that began with 41 scrapes of the web from CommonCrawl from 2016 to 2019 so I think it’s safe to say that something akin to C4 was part of GPT-3. (The researchers who originally looked into C4 argue that these issues are common to all web-scraped datasets.) When we reached out to OpenAI and Google for comment, both companies emphasized that they undergo extensive efforts to weed out potentially problematic data from their training sets. But within the industry, C4 is known as being a heavily filtered dataset and has been criticized, in fact, for eliminating content related to LGBTQ+ identities because of its reliance on a heavy-handed blocklist. (https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words ) We are working on some reporting to try to address your last and very crucial question, but it’s an open area of research and one that even AI developers are struggling to answer.

llama

Posts with mentions or reviews of llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-18.
  • Mark Zuckerberg: Llama 3, $10B Models, Caesar Augustus, Bioweapons [video]
    3 projects | news.ycombinator.com | 18 Apr 2024
    derivative works thereof).”

    https://github.com/meta-llama/llama/blob/b8348da38fde8644ef0...

    Also, even if you did use Llama for something, they could unilaterally pull the rug on you when you got to 700 million users, AND anyone who thinks Meta broke their copyright loses their license. (Checking if you are still getting screwed is against the rules.)

    Therefore, Zuckerberg is accountable for explicitly anticompetitive conduct. I assumed an MMA fighter would appreciate the value of competition; go figure.

  • Hello OLMo: An Open LLM
    3 projects | news.ycombinator.com | 8 Apr 2024
    One thing I wanted to add and call attention to is the importance of licensing in open models. This is often overlooked when we blindly accept the vague branding of models as “open”, but I am noticing that many open weight models are actually using encumbered proprietary licenses rather than standard open source licenses that are OSI approved (https://opensource.org/licenses). As an example, Databricks’s DBRX model has a proprietary license that forces adherence to their highly restrictive Acceptable Use Policy by referencing a live website hosting their AUP (https://github.com/databricks/dbrx/blob/main/LICENSE), which means as they change their AUP, you may be further restricted in the future. Meta’s Llama is similar (https://github.com/meta-llama/llama/blob/main/LICENSE ). I’m not sure who can depend on these models given this flaw.
  • Reaching LLaMA2 Performance with 0.1M Dollars
    2 projects | news.ycombinator.com | 4 Apr 2024
    It looks like Llama 2 7B took 184,320 A100-80GB GPU-hours to train[1]. This one says it used a 96×H100 GPU cluster for 2 weeks, which is 32,256 GPU-hours. That's 17.5% of the number of hours, but H100s are faster than A100s [2] and FP16/bfloat16 performance is ~3x better.

    If they had tried to replicate Llama 2 identically with their hardware setup, it'd cost a little bit less than twice their MoE model.

    [1] https://github.com/meta-llama/llama/blob/main/MODEL_CARD.md#...
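
    The arithmetic in that comment, spelled out (a sketch; the figures come from the comment and the linked model card, and the ~3x per-GPU speedup is the comment's own assumption):

        llama2_7b_gpu_hours = 184_320                  # A100-80GB GPU-hours for Llama 2 7B [1]
        their_gpu_hours = 96 * 2 * 7 * 24              # 96 H100s for 2 weeks = 32,256 GPU-hours

        print(their_gpu_hours / llama2_7b_gpu_hours)   # 0.175 -> 17.5% of the GPU-hours

        # Assuming an H100 is ~3x an A100 in bf16, replicating Llama 2 7B on their
        # cluster would take roughly:
        replicate_h100_hours = llama2_7b_gpu_hours / 3
        print(replicate_h100_hours / their_gpu_hours)  # ~1.9x their own training run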

  • DBRX: A New Open LLM
    6 projects | news.ycombinator.com | 27 Mar 2024
    Ironically, the LLaMA license text [1] this is lifted verbatim from is itself copyrighted [2] and doesn't grant you the permission to copy it or make changes like s/meta/dbrx/g lol.

    [1] https://github.com/meta-llama/llama/blob/main/LICENSE#L65

  • How Chain-of-Thought Reasoning Helps Neural Networks Compute
    1 project | news.ycombinator.com | 22 Mar 2024
    This is kind of an epistemological debate at this level, and I make an effort to link to some source code [1] any time it seems contentious.

    LLMs (of the decoder-only, generative-pretrained family everyone means) are next token predictors in a literal implementation sense (there are some caveats around batching and what not, but none that really matter to the philosophy of the thing).

    But, they have some emergent behaviors that are a trickier beast. Probably the best way to think about a typical Instruct-inspired “chat bot” session is as sampling from a distribution with a KL-style adjacency to the training corpus (sidebar: this is why shops that do and don’t train/tune on MMLU get ranked so differently than e.g. the arena rankings) at a response granularity, the same way a diffuser/U-net/de-noising model samples at the image batch (NCHW/NHWC) level.

    The corpus is stocked with everything from sci-fi novels with computers arguing their own sentience to tutorials on how to do a tricky anti-derivative step-by-step.

    This mental model has adequate explanatory power for anything a public LLM has ever been shown to do, but that only heavily implies it’s what they’re doing.

    There is active research into whether there is more going on; thus far it is not conclusive to the satisfaction of an unbiased consensus. I personally think that research will eventually show it’s just sampling, but that’s a prediction, not consensus science.

    They might be doing more; there is some research that provides circumstantial evidence that they are.

    [1] https://github.com/meta-llama/llama/blob/54c22c0d63a3f3c9e77...

  • Asking Meta to stop using the term "open source" for Llama
    1 project | news.ycombinator.com | 28 Feb 2024
  • Markov Chains Are the Original Language Models
    2 projects | news.ycombinator.com | 1 Feb 2024
    Predicting subsequent text is pretty much exactly what they do. Lots of very cool engineering that’s a real feat, but at its core it’s argmax(P(token|token,corpus)):

    https://github.com/facebookresearch/llama/blob/main/llama/ge...

    The engineering feats are up there with anything, but it’s a next token predictor.
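
    A toy illustration of that framing (a bigram counting model over a made-up corpus, nothing like Llama's actual architecture or tokenizer, but the same argmax-over-next-token decision rule):

        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Estimate P(next token | previous token) by counting adjacent pairs.
        counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def next_token(prev: str) -> str:
            """argmax over the estimated next-token distribution."""
            return counts[prev].most_common(1)[0][0]

        generated = ["the"]
        for _ in range(5):
            generated.append(next_token(generated[-1]))
        print(" ".join(generated))   # -> "the cat sat on the cat"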

  • Meta AI releases Code Llama 70B
    6 projects | news.ycombinator.com | 29 Jan 2024
    https://github.com/facebookresearch/llama/pull/947/
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    > Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!

    Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...

    Mistral is (rumored to have) forked Llama, and its model file is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...

    and both of these are SOTA open-source models.
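
    For a sense of why a full model file fits in a few hundred lines, here is a sketch of a single decoder block in PyTorch (a generic pre-norm block, not Llama's actual code: no RoPE, no RMSNorm, no KV cache):

        import torch
        import torch.nn as nn

        class DecoderBlock(nn.Module):
            def __init__(self, dim: int = 256, n_heads: int = 4):
                super().__init__()
                self.norm1 = nn.LayerNorm(dim)
                self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
                self.norm2 = nn.LayerNorm(dim)
                self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Causal mask: position i may only attend to positions <= i.
                seq_len = x.size(1)
                mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
                h = self.norm1(x)
                attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
                x = x + attn_out
                return x + self.mlp(self.norm2(x))

        tokens = torch.randn(2, 16, 256)      # (batch, sequence, embedding)
        print(DecoderBlock()(tokens).shape)   # torch.Size([2, 16, 256])

    A full model adds an embedding table, a stack of these blocks, and an output projection, which is where the remaining few hundred lines go.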

  • [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
    3 projects | /r/MachineLearning | 10 Dec 2023
    In transformers, they tried really hard to have a single function or method to deal with both self- and cross-attention mechanisms, masking, positional and relative encodings, interpolation, etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of Llama by FAIR with the implementation by HF to get an idea.

What are some alternatives?

When comparing RedPajama-Data and llama you can also consider the following projects:

StableLM - StableLM: Stability AI Language Models

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]

gorilla - Gorilla: An API store for LLMs

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

LLaMA_MPS - Run LLaMA inference on Apple Silicon GPUs.

chatgpt-vscode - A VSCode extension that allows you to use ChatGPT

AGIEval

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words - List of Dirty, Naughty, Obscene, and Otherwise Bad Words

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

following-instructions-human-feedback

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.