llama

Inference code for Llama models (by meta-llama)

Llama Alternatives

Similar projects and alternatives to llama

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more frequently mentioned llama alternative or a higher degree of similarity.

llama reviews and mentions

Posts with mentions or reviews of llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-18.
  • Mark Zuckerberg: Llama 3, $10B Models, Caesar Augustus, Bioweapons [video]
    3 projects | news.ycombinator.com | 18 Apr 2024
    “You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).”

    https://github.com/meta-llama/llama/blob/b8348da38fde8644ef0...

    Also, even if you did use Llama for something, they could unilaterally pull the rug on you once you hit 700 million users, AND anyone who thinks Meta broke their copyright loses their license. (Checking whether you are still getting screwed is against the rules.)

    Therefore, Zuckerberg is accountable for explicitly anticompetitive conduct. I assumed an MMA fighter would appreciate the value of competition; go figure.

  • Hello OLMo: An Open LLM
    3 projects | news.ycombinator.com | 8 Apr 2024
    One thing I wanted to add and call attention to is the importance of licensing in open models. This is often overlooked when we blindly accept the vague branding of models as “open”, but I am noticing that many open-weight models are actually using encumbered proprietary licenses rather than standard open-source licenses that are OSI approved (https://opensource.org/licenses). As an example, Databricks’s DBRX model has a proprietary license that forces adherence to their highly restrictive Acceptable Use Policy by referencing a live website hosting their AUP (https://github.com/databricks/dbrx/blob/main/LICENSE), which means that as they change their AUP, you may be further restricted in the future. Meta’s Llama is similar (https://github.com/meta-llama/llama/blob/main/LICENSE). I’m not sure who can depend on these models given this flaw.
  • Reaching LLaMA2 Performance with 0.1M Dollars
    2 projects | news.ycombinator.com | 4 Apr 2024
    It looks like Llama 2 7B took 184,320 A100-80GB GPU-hours to train [1]. This one says it used a 96×H100 GPU cluster for 2 weeks, i.e. 32,256 GPU-hours. That's 17.5% of the hours, but H100s are faster than A100s [2], and FP16/bfloat16 performance is ~3x better.

    If they had tried to replicate Llama 2 identically with their hardware setup, it'd have cost a little less than twice what their MoE model did (a rough arithmetic sketch follows below).

    [1] https://github.com/meta-llama/llama/blob/main/MODEL_CARD.md#...
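
    For what it's worth, the hour math above can be reproduced in a few lines. The sketch below assumes the commenter's stated figures (96 H100s, two weeks, ~3x bf16 speedup over A100); only the 184,320 A100-hours number comes from Meta's model card:

        # Back-of-the-envelope check of the GPU-hour comparison above.
        llama2_7b_a100_hours = 184_320                # A100-80GB GPU-hours for Llama 2 7B (model card)

        h100_gpu_hours = 96 * 24 * 14                 # 96 H100s for 2 weeks = 32,256 GPU-hours
        print(h100_gpu_hours / llama2_7b_a100_hours)  # 0.175 -> "17.5% of the hours"

        # Assuming an H100 is ~3x an A100 for bf16 training, replicating Llama 2 7B
        # on the same 96-GPU cluster would have taken roughly:
        h100_speedup = 3.0
        replication_gpu_hours = llama2_7b_a100_hours / h100_speedup
        print(replication_gpu_hours / h100_gpu_hours) # ~1.9 -> "a little less than twice" the cost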

  • DBRX: A New Open LLM
    6 projects | news.ycombinator.com | 27 Mar 2024
    Ironically, the LLaMA license text [1] this is lifted verbatim from is itself copyrighted [2] and doesn't grant you the permission to copy it or make changes like s/meta/dbrx/g lol.

    [1] https://github.com/meta-llama/llama/blob/main/LICENSE#L65

  • How Chain-of-Thought Reasoning Helps Neural Networks Compute
    1 project | news.ycombinator.com | 22 Mar 2024
    This is kind of an epistemological debate at this level, and I make an effort to link to some source code [1] any time it seems contentious.

    LLMs (of the decoder-only, generative-pretrained family everyone means by the term) are next-token predictors in a literal implementation sense (there are some caveats around batching and whatnot, but none that really matter to the philosophy of the thing).

    But they have some emergent behaviors that are a trickier beast. Probably the best way to think about a typical Instruct-inspired “chat bot” session is as sampling from a distribution with a KL-style adjacency to the training corpus, at a response granularity, the same way a diffuser/U-net/de-noising model samples at the image-batch (NCHW/NHWC) level. (Sidebar: this is why shops that do and don’t train/tune on MMLU get ranked so differently than e.g. the arena rankings.)

    The corpus is stocked with everything from sci-fi novels with computers arguing their own sentience to tutorials on how to do a tricky anti-derivative step-by-step.

    This mental model has adequate explanatory power for anything a public LLM has ever been shown to do, but that only heavily implies it’s what they’re doing.

    There is active research into whether there is more going on, but so far it is not conclusive to the satisfaction of an unbiased consensus. I personally think that research will eventually show it’s just sampling, but that’s a prediction, not consensus science.

    They might be doing more; there is some research that provides circumstantial evidence that they are.

    [1] https://github.com/meta-llama/llama/blob/54c22c0d63a3f3c9e77...
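
    For the curious, the “literal implementation sense” above comes down to a short sampling step. The sketch below is not the repo's code, but it mirrors the kind of temperature plus nucleus (top-p) sampling that Llama's published generation code applies to the model's next-token logits:

        # Illustrative sketch (not the repo's exact code) of temperature + top-p
        # sampling over next-token logits, as done in Llama-style generation loops.
        import torch

        def sample_next_token(logits: torch.Tensor, temperature: float = 0.6, top_p: float = 0.9) -> int:
            """Pick the next token id from a 1-D tensor of vocabulary logits."""
            if temperature == 0:
                return int(torch.argmax(logits))                   # greedy decoding
            probs = torch.softmax(logits / temperature, dim=-1)
            sorted_probs, sorted_idx = torch.sort(probs, descending=True)
            cumulative = torch.cumsum(sorted_probs, dim=-1)
            sorted_probs[cumulative - sorted_probs > top_p] = 0.0  # drop the tail outside the nucleus
            sorted_probs /= sorted_probs.sum()                     # renormalize what is left
            choice = torch.multinomial(sorted_probs, num_samples=1)
            return int(sorted_idx[choice])                         # map back to the original vocab id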

  • Asking Meta to stop using the term "open source" for Llama
    1 project | news.ycombinator.com | 28 Feb 2024
  • Markov Chains Are the Original Language Models
    2 projects | news.ycombinator.com | 1 Feb 2024
    Predicting subsequent text is pretty much exactly what they do. There’s a lot of very cool engineering that’s a real feat, but at its core it’s argmax(P(next token | preceding tokens, corpus)):

    https://github.com/facebookresearch/llama/blob/main/llama/ge...

    The engineering feats are up there with anything, but it’s a next token predictor.
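
    Reduced to code, that framing is the whole decoding loop. A minimal sketch, where `model`, `eos_id`, and the other names are illustrative placeholders rather than the repo's API:

        # Minimal greedy decoding loop: the model maps the tokens so far to logits
        # over the vocabulary, and the most probable next token is appended each step.
        import torch

        @torch.no_grad()
        def generate_greedy(model, tokens: list[int], max_new_tokens: int, eos_id: int) -> list[int]:
            for _ in range(max_new_tokens):
                logits = model(torch.tensor([tokens]))[0, -1]  # logits for the next position
                next_token = int(torch.argmax(logits))         # argmax(P(next token | preceding tokens))
                tokens.append(next_token)
                if next_token == eos_id:
                    break
            return tokens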

  • Meta AI releases Code Llama 70B
    6 projects | news.ycombinator.com | 29 Jan 2024
    https://github.com/facebookresearch/llama/pull/947/
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    > Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!

    Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...

    Mistral (which is rumored to be a fork of Llama) is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...

    Both of these are SOTA open-source models.
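
    To make those line counts concrete, here is a deliberately simplified pre-norm decoder block. Real Llama swaps in RMSNorm, rotary embeddings, grouped-query attention and a KV cache, but the skeleton that gets stacked a few dozen times is roughly this size, which is why the whole model fits in a few hundred lines:

        # Toy decoder block (simplified relative to llama/model.py): a decoder-only LM
        # is essentially a token embedding, a stack of these, and an output projection.
        import torch
        import torch.nn as nn

        class DecoderBlock(nn.Module):
            def __init__(self, dim: int, n_heads: int):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
                self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
                self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                seq_len = x.shape[1]
                # causal mask: each position may only attend to itself and earlier positions
                mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1)
                h = self.norm1(x)
                h, _ = self.attn(h, h, h, attn_mask=mask)
                x = x + h                        # residual around attention
                x = x + self.mlp(self.norm2(x))  # residual around the feed-forward
                return x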

  • [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
    3 projects | /r/MachineLearning | 10 Dec 2023
    In transformers, they tried really hard to have a single function or method to deal with both self- and cross-attention mechanisms, masking, positional and relative encodings, interpolation, etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of Llama by FAIR with the implementation by HF to get an idea.
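
    As a hypothetical illustration of that trade-off (neither signature is copied from transformers or llama), compare a "do-everything" attention helper that has grown an option for every model family against a per-model version that takes only what one architecture needs:

        # Hypothetical contrast, not real library code. Single-head, un-projected
        # attention is used purely to keep the sketch short.
        import torch

        def attention_generic(x, kv=None, attn_mask=None, position_bias=None, output_attentions=False):
            """Self- or cross-attention, optional mask, optional relative bias, optional weights..."""
            kv = x if kv is None else kv                        # cross-attention if kv is given
            scores = x @ kv.transpose(-2, -1) / x.shape[-1] ** 0.5
            if position_bias is not None:
                scores = scores + position_bias                 # relative position encodings
            if attn_mask is not None:
                scores = scores.masked_fill(attn_mask, float("-inf"))
            weights = scores.softmax(dim=-1)
            out = weights @ kv
            return (out, weights) if output_attentions else out

        def attention_lean(x, mask=None):
            """Per-model version: self-attention and the one mask this architecture uses."""
            scores = x @ x.transpose(-2, -1) / x.shape[-1] ** 0.5
            if mask is not None:
                scores = scores.masked_fill(mask, float("-inf"))
            return scores.softmax(dim=-1) @ x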

Stats

Basic llama repo stats
Mentions: 184
Stars: 52,603
Activity: 8.1
Last commit: 14 days ago
