awesome-data-temporality VS Llama-2-Onnx

Compare awesome-data-temporality vs Llama-2-Onnx and see what their differences are.

                awesome-data-temporality   Llama-2-Onnx
Mentions        17                         3
Stars           96                         987
Growth          -                          2.0%
Activity        10.0                       6.7
Last commit     over 1 year ago            4 months ago
Language        -                          Python
License         -                          GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

awesome-data-temporality

Posts with mentions or reviews of awesome-data-temporality. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-14.

Llama-2-Onnx

Posts with mentions or reviews of Llama-2-Onnx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-12.
  • Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
    8 projects | news.ycombinator.com | 12 Sep 2023
    System: Here's some docs, answer concisely in a sentence.

    YMMV on cost still; it depends on the cloud vendor. My intuition & viewpoint agree with yours: GPT-3.5 is priced low enough that there isn't a case where it makes sense to use another model.

    It strikes me now that this is _very_ likely, and not just our intuition: OpenAI's $/GPU-hour is likely <= any other vendor's.

    The next big step will come from formalizing the stuff rolling around the local LLM community. For months now it's been either one-off $X.c stunts that run on the desktop, or porn-y stuff, which is where the vast majority of the _actual_ usage and progress is coming from, like all nascent tech.

    Microsoft has LLaMa-2 ONNX available on GitHub[1]. There are budding but very small projects in different languages to wrap ONNX. Once there's a genuine cross-platform[2] ONNX wrapper that makes running LLaMa-2 easy, there will be a step change. It'll be "free"[3] to run your fine-tuned model that does as well as GPT-4.

    It's not clear to me exactly when this will occur. It's "difficult" now, but only because the _actual usage_ in the local LLM community doesn't have a reason to invest in ONNX, and it's extremely intimidating to figure out how exactly to get LLaMa-2 running in ONNX. Microsoft kinda threw it up on GitHub and moved on; the sample code even still needs a PyTorch model. I see at least one very small company on HuggingFace that _may_ have figured out full ONNX. (A hedged sketch of one way to run a LLaMa-2-style model under ONNX Runtime follows this list of posts.)

    [1] https://github.com/microsoft/Llama-2-Onnx

  • FLaNK Stack Weekly for 14 Aug 2023
    32 projects | dev.to | 14 Aug 2023
  • Llama 2 on ONNX runs locally
    5 projects | news.ycombinator.com | 10 Aug 2023
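
The comment quoted above notes how intimidating it can be to figure out how to get LLaMa-2 running in ONNX. As a rough, hedged sketch only: one way to run a LLaMa-2-style model under ONNX Runtime is through Hugging Face Optimum's ORTModelForCausalLM. This is not the microsoft/Llama-2-Onnx repo's own sample flow; the model id, prompt, and generation settings below are illustrative assumptions, and the Llama-2 checkpoint is gated, so the example assumes you already have access to it.

    # Hedged sketch: running a LLaMa-2-style model under ONNX Runtime via
    # Hugging Face Optimum. Not the microsoft/Llama-2-Onnx sample flow;
    # model id, prompt, and generation settings are illustrative assumptions.
    from transformers import AutoTokenizer
    from optimum.onnxruntime import ORTModelForCausalLM

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated checkpoint; assumes access granted

    # export=True converts the PyTorch checkpoint to ONNX on first load and
    # then runs it with onnxruntime (CPU execution provider by default).
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

    prompt = "Here's some docs, answer concisely in a sentence.\n\nDocs: ...\nQuestion: ..."
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The point is only that, once a converted model exists, the ONNX Runtime side is a handful of lines; the hard part the comment describes is producing a correct conversion in the first place.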

What are some alternatives?

When comparing awesome-data-temporality and Llama-2-Onnx you can also consider the following projects:

jdbc-connector-for-apache-kafka - Aiven's JDBC Sink and Source Connectors for Apache Kafka®

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

sqlalchemy-easy-softdelete - Easily add soft-deletion to your SQLAlchemy Models

pkgx - the last thing you’ll install

sql-cli-for-apache-flink-docker - SQL CLI for Apache Flink® via docker-compose

onnx-coreml - ONNX to Core ML Converter

databathing

OpenPipe - Turn expensive prompts into cheap fine-tuned models

rift - Rift: an AI-native language server for your personal AI software engineer

llama.cpp - LLM inference in C/C++

CML_AMP_Churn_Prediction_mlflow - Build a scikit-learn model to predict churn using customer telco data.

gpt-llm-trainer