open_model_zoo vs deepeval
| | open_model_zoo | deepeval |
|---|---|---|
| Mentions | 5 | 22 |
| Stars | 3,945 | 1,769 |
| Growth | 1.7% | 30.3% |
| Activity | 8.6 | 9.9 |
| Latest commit | 1 day ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_model_zoo
- FLaNK Stack Weekly 06 Nov 2023
- [D] Is BERT going to be obsolete by ChatGPT?
Search engines and chatbots are rapidly advancing with the introduction of ChatGPT, but search engines still rely on models like BERT (Bidirectional Encoder Representations from Transformers) for question-answering functionality. An interesting OpenVINO Jupyter notebook (#213) demonstrates question answering with a BERT model trained on the SQuAD v1.1 training set, taking either an embedded paragraph or a link to a website as the context. I still find it really compelling to see how this works; it is essentially how our widely used search engines function, just at a much larger scale. A sketch of this style of extractive QA follows below.
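For illustration, here is a minimal sketch of extractive question answering with a SQuAD-trained BERT-style model. It uses the Hugging Face `transformers` pipeline rather than the OpenVINO notebook's exact code, and the model name is just one common choice:

```python
# Minimal extractive QA sketch (not the OpenVINO notebook's code):
# a SQuAD-trained model picks the answer span out of a given context.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # any SQuAD-trained model works
)

context = (
    "BERT (Bidirectional Encoder Representations from Transformers) is a "
    "language model introduced by Google in 2018. Fine-tuned on SQuAD, it "
    "can locate the span of a passage that answers a question."
)

result = qa(question="Who introduced BERT?", context=context)
print(result["answer"], result["score"])  # e.g. "Google" plus a confidence score
```

This is the same pattern the notebook demonstrates: the model does not generate text, it selects the span of the supplied context most likely to answer the question.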
- computer vision with Intel integrated GPU
- student in desperate need of help/ guidance
- Openvino demos missing libraries
If not, here is a GitHub link: https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/common/python
deepeval
- Unit Testing LLMs with DeepEval
For the last year I have been working with different LLMs (OpenAI, Claude, PaLM, Gemini, etc.) and I have been impressed with their performance. With the rapid advancements in AI and the increasing complexity of LLMs, it has become crucial to have a reliable testing framework that can help us maintain the quality of our prompts and ensure the best possible outcomes for our users. Recently, I discovered DeepEval (https://github.com/confident-ai/deepeval), an LLM testing framework that has revolutionized the way we approach prompt quality assurance. A minimal example of what such a test looks like is sketched below.
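As a rough illustration, a DeepEval test reads like an ordinary pytest test. The module paths and metric below reflect one version of the library and may differ in yours:

```python
# A minimal DeepEval-style unit test sketch (APIs vary across versions).
# Run with: deepeval test run test_chatbot.py  (or plain pytest)
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What are your shipping times?",
        # In a real test this output would come from your LLM application.
        actual_output="Orders usually ship within 2-3 business days.",
    )
    # Fails the test if relevancy (judged by an evaluator LLM) drops below the threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

The appeal is that prompt regressions surface in CI the same way ordinary code regressions do.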
- Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
Check out this instead: https://github.com/confident-ai/deepeval
It also has a native Ragas implementation, but supports all models.
- Show HN: Times faster LLM evaluation with Bayesian optimization
Fair question.
Evaluation refers to the phase after training that checks whether the training went well.
Usually the flow goes training -> evaluation -> deployment (what you called inference), and this project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries may be similar; they all evaluate every given query. That is where this project might come in handy (a sketch of the underlying idea follows below).
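To make the idea concrete, here is a hypothetical sketch (not this project's actual algorithm, which uses Bayesian optimization) of exploiting query similarity: embed the evaluation queries, cluster them, and run the expensive LLM only on one representative per cluster:

```python
# Hypothetical sketch: cut evaluation cost by only querying the LLM on
# one representative per cluster of similar evaluation prompts.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

queries = [
    "What is the capital of France?",
    "Name the capital city of France.",
    "Explain gradient descent in one sentence.",
    "Briefly describe how gradient descent works.",
    "Translate 'good morning' into Spanish.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(queries)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)

# Pick the query closest to each cluster centre as its representative,
# then evaluate the LLM on representatives only.
for c in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
    rep = members[np.argmin(dists)]
    print(f"cluster {c}: evaluate only -> {queries[rep]!r}")
```

With 3 clusters over 5 queries, this sketch issues 3 LLM calls instead of 5; the savings grow with the redundancy in the evaluation set.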
- Implemented 12+ LLM evaluation metrics so you don't have to
A link to a Reddit post (with no discussion) that links to this repo: https://github.com/confident-ai/deepeval
- Show HN: I implemented a range of evaluation metrics for LLMs that runs locally
- These 5 Open Source AI Startups are changing the AI Landscape
Star DeepEval on GitHub and contribute to the advancement of LLM evaluation frameworks! 🌟
- FLaNK Stack Weekly 06 Nov 2023
- Why we replaced Pinecone with PGVector 😇
Pinecone, the leading closed-source vector database provider, is known for being fast, scalable, and easy to use. Its blazing-fast vector search makes it a popular choice for large-scale RAG applications. Our initial infrastructure for Confident AI, the world's first open-source evaluation infrastructure for LLMs, used Pinecone to cluster LLM observability log data in production. However, after weeks of experimentation, we decided to replace it entirely with pgvector. Pinecone's seemingly simple design hides several complexities, particularly around integrating with existing data storage: it forces a complicated architecture, and its restrictive metadata storage capacity makes it troublesome for data-intensive workloads. (A sketch of the kind of query pgvector handles instead is below.)
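For context, here is a minimal sketch of a pgvector nearest-neighbour lookup from Python; the connection string, table schema, and toy 3-dimensional vectors are hypothetical stand-ins, not the blog's actual setup:

```python
# Hypothetical sketch of a pgvector nearest-neighbour lookup, the kind of
# query that replaces a Pinecone search once embeddings live in Postgres.
import psycopg2

conn = psycopg2.connect("dbname=logs_db user=postgres")  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS llm_logs (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(3)  -- real embeddings are far wider, e.g. 1536 dims
    );
""")
cur.execute(
    "INSERT INTO llm_logs (content, embedding) VALUES (%s, %s)",
    ("example observability log line", "[0.1, 0.2, 0.3]"),
)

# "<->" is pgvector's L2 distance operator; ordering by it returns the
# nearest neighbours of the query vector.
cur.execute(
    "SELECT content FROM llm_logs ORDER BY embedding <-> %s::vector LIMIT 5;",
    ("[0.1, 0.2, 0.3]",),
)
print(cur.fetchall())
conn.commit()
```

The design upside hinted at in the post: embeddings sit in the same Postgres instance as the rest of the application data, so joins and metadata filters need no second system.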
- Show HN: Unit Testing for LLMs
- Show HN: DeepEval – Unit Testing for LLMs (Open Science)
What are some alternatives?
Face Recognition - The world's simplest facial recognition api for Python and the command line
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
deepstream-occupancy-analytics - This is a sample application for counting people entering/leaving in a building using NVIDIA Deepstream SDK, Transfer Learning Toolkit (TLT), and pre-trained models. This application can be used to build real-time occupancy analytics applications for smart buildings, hospitals, retail, etc. The application is based on deepstream-test5 sample application.
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
Dlib - A toolkit for making real world machine learning and data analysis applications in C++
blog-examples
pyvideotrans - Translate a video from one language to another and add dubbing
openvino_notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
tensorflow-open_nsfw - Tensorflow Implementation of Yahoo's Open NSFW Model
pezzo - 🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.
tailspin - 🌀 A log file highlighter