|  | litellm | swirl-search |
|---|---|---|
| Mentions | 28 | 32 |
| Stars | 8,413 | 1,529 |
| Growth (stars, month over month) | 17.1% | 3.5% |
| Activity | 10.0 | 9.8 |
| Last commit | 7 days ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
litellm
- Anthropic launches Tool Use (function calling)
There are a few libs that already abstract this away, for example:
- https://github.com/BerriAI/litellm
- https://jxnl.github.io/instructor/
- langchain
It's not hard for me to imagine a future where there is something like the CNCF for AI models, tools, and infra.
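As a hedged sketch of what that abstraction looks like in practice, here is an OpenAI-style tool schema routed through LiteLLM to an Anthropic model. The model string and the `get_weather` schema are illustrative assumptions, not taken from the thread.

```python
# Illustrative sketch: OpenAI-style tool use sent through LiteLLM to an
# Anthropic model. Model name and tool schema are example assumptions.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="claude-3-opus-20240229",  # any LiteLLM-supported model string
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
# LiteLLM mirrors the OpenAI response shape, so tool calls land here:
print(response.choices[0].message.tool_calls)
```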
- Ask HN: Python Meta-Client for OpenAI, Anthropic, Gemini LLM and other API-s?
Hey, are you just looking for LiteLLM? https://github.com/BerriAI/litellm
Context: I'm the repo maintainer.
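A minimal sketch of the meta-client idea, assuming each provider's API key is set in the environment; the model strings are examples of LiteLLM's provider-prefixed naming.

```python
# Minimal sketch: one call signature across providers. Model strings are
# examples; each provider expects its API key in the environment.
import litellm

for model in ["gpt-3.5-turbo", "claude-3-haiku-20240307", "gemini/gemini-pro"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(model, "->", response.choices[0].message.content)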
- Voxos.ai – An Open-Source Desktop Voice Assistant
It should be possible using LiteLLM and a patch or a proxy.
https://github.com/BerriAI/litellm
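One hedged reading of the proxy approach: run a local LiteLLM proxy, then point any OpenAI-compatible client at it. The port and model alias below are assumptions about a default setup, not details from the comment.

```python
# Hedged sketch: an OpenAI-style client pointed at a local LiteLLM proxy.
# The base_url/port and the model alias are assumptions about a default setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="not-needed-locally")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # alias the proxy maps to the real backend
    messages=[{"role": "user", "content": "Hello from the voice assistant."}],
)
print(response.choices[0].message.content)
```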
- Show HN: Talk to any ArXiv paper just by changing the URL
- Integrate LLM Frameworks
This article will demonstrate how txtai can integrate with llama.cpp, LiteLLM and custom generation methods. For custom generation, we'll show how to run inference with a Mamba model.
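As a rough sketch of that integration (the model string and the backend auto-detection behavior are assumptions, not taken from the article):

```python
# Rough sketch, assuming txtai's LLM pipeline selects a backend from the
# model string; the string below is illustrative, not from the article.
from txtai.pipeline import LLM

llm = LLM("ollama/llama2")  # a LiteLLM-style provider-prefixed string
print(llm("Summarize what LiteLLM does in one sentence."))
```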
- Is there any open source app to load a model and expose API like OpenAI?
I use this with Ollama and it works perfectly: https://github.com/BerriAI/litellm
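A small sketch of that setup, assuming a local Ollama server on its default port; the model name is an example.

```python
# Sketch: LiteLLM routing a completion to a local Ollama server.
# Model name is an example; 11434 is Ollama's default port.
import litellm

response = litellm.completion(
    model="ollama/llama2",              # routes to the local Ollama API
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    api_base="http://localhost:11434",  # Ollama's default address
)
print(response.choices[0].message.content)
```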
- OpenAI Switch Kit: Swap OpenAI with any open-source model
Another abstraction layer library is: https://github.com/BerriAI/litellm
For me the killer feature of a library like this would be if it implemented function calling. Even if it was for a very restricted grammar - like the traditional ReAct prompt:
Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types:
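A sketch of what that could look like: constrain the model with the classic ReAct prompt, stop before each Observation, and parse the Action line by hand. The Search/Lookup/Finish action set fills in the truncated prompt above from the well-known ReAct QA example; it is an assumption, not part of this comment.

```python
# Sketch of the "restricted grammar" idea with a plain ReAct prompt;
# the Search/Lookup/Finish action set is the standard ReAct QA example.
import re
import litellm

REACT_PROMPT = (
    "Solve a question answering task with interleaving Thought, Action, "
    "Observation steps. Thought can reason about the current situation, and "
    "Action can be three types: Search[query], Lookup[term], Finish[answer].\n"
    "Question: Who wrote the novel Dune?\nThought:"
)

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": REACT_PROMPT}],
    stop=["Observation:"],  # hand control back to the caller after each Action
)
text = response.choices[0].message.content
match = re.search(r"Action:\s*(\w+)\[(.+?)\]", text)
if match:
    action, argument = match.groups()
    print(action, argument)  # e.g. "Search", "Dune novel author"
```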
- LibreChat
- LM Studio – Discover, download, and run local LLMs
- Please!!! Help me!!!! Open Interpreter. Chatgpt-4. Mac, Terminals.
    Welcome to Open Interpreter.
    ────────────────────────────────────────────────────────────
    ▌ OpenAI API key not found

    To use GPT-4 (recommended) please provide an OpenAI API key.
    To use Code-Llama (free but less capable) press enter.
    ────────────────────────────────────────────────────────────
    OpenAI API key: [the API key I inputted]

    Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key
    on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.
    ────────────────────────────────────────────────────────────
    ▌ Model set to GPT-4

    Open Interpreter will require approval before running code.
    Use interpreter -y to bypass this.

    Press CTRL-C to exit.

    > export OPENAI_API_KEY=your_api_key

    Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
    LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.12/bin/interpreter", line 8, in <module>
        sys.exit(cli())
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 22, in cli
        cli(self)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/cli/cli.py", line 254, in cli
        interpreter.chat()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 76, in chat
        for _ in self._streaming_chat(message=message, display=display):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 97, in _streaming_chat
        yield from terminal_interface(self, message)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 62, in terminal_interface
        for chunk in interpreter.chat(message, display=False, stream=True):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 105, in _streaming_chat
        yield from self._respond()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 131, in _respond
        yield from respond(self)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/respond.py", line 61, in respond
        for chunk in interpreter._llm(messages_for_llm):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/llm/setup_openai_coding_llm.py", line 94, in coding_llm
        response = litellm.completion(**params)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 792, in wrapper
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 751, in wrapper
        result = original_function(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 53, in wrapper
        result = future.result(timeout=local_timeout_duration)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 456, in result
        return self.__get_result()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
        raise self._exception
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 42, in async_func
        return func(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 1183, in completion
        raise exception_type(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2959, in exception_type
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2355, in exception_type
        raise original_exception
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 441, in completion
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 423, in completion
        response = openai.ChatCompletion.create(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/chat_completion.py", line 25, in create
        return super().create(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
        response, _, api_key = requestor.request(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 299, in request
        resp, got_stream = self._interpret_response(result, stream)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 710, in _interpret_response
        self._interpret_response_line(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
        raise self.handle_error_response(
    openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
swirl-search
- GitHub - swirlai/swirl-search: Swirl is an open-source search platform that uses AI to search multiple content and data sources simultaneously, finds the best results using a reader LLM, then prompts Generative AI, enabling you to get answers based on your data.
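For flavor, a hedged sketch of querying a local Swirl instance over HTTP; the endpoint path, port, and response fields are assumptions based on a default install, not a verified API reference.

```python
# Hedged sketch: querying a local Swirl instance. The endpoint, port, and
# response fields are assumptions about a default install.
import requests

response = requests.get(
    "http://localhost:8000/swirl/search/",
    params={"q": "knowledge management"},
    timeout=30,
)
for result in response.json().get("results", []):
    print(result.get("title"), "-", result.get("url"))
```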
- Swirl Security Overview
Understanding an Open Source Search Platform: Swirl
- Swirl Search: Open Source Enterprise Search 🔍 to Securely 🔐 Search your Data.
Give ⭐ to Swirl on GitHub
- These 5 Open Source AI Startups are changing the AI Landscape
Star Swirl on GitHub and become part of this exciting AI search evolution! 🌟
- [Python 🐍 Mastery] Overview of Linked List in Python & Essential Linked List Operations 🛠️
Swirl is an open-source Python project. Contributing to Swirl can help you gain production-level knowledge of Python and improve your skills.
- [Python 🐍 Mastery] Python's Object-Oriented Programming Overview and Fundamentals ⭐️
Note: This is not how you write a search engine; a lot more goes into it. If you want to know more, check this GitHub repository: github.com/swirlai/swirl-search
- Contribute to Swirl this Hacktoberfest. Win Swags up to $100
Give Swirl a star 🌟 on GitHub to receive updates from discussions and releases.
- Running Swirl Search 🌌 in an instant on Gitpod 🌐💻 and GitHub Codespaces 🌩️🚀
Swirl is an open-source search engine built with Python and Django. What makes Swirl special is that individual developers and organizations can use it without paying a single penny, and can even customize search results by connecting to databases (e.g. SQL, NoSQL), public data services (e.g. Google), and enterprise sources (e.g. Jira). GitHub link: https://github.com/swirlai/swirl-search
- Your full guide to contributing to SWIRL 🌌
Hello devs! The team at Swirl has created this amazing guide, which contains all the relevant information for anyone who wants to extend Swirl by adding SearchProviders, Connectors, and Processors.
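As a taste of what that covers, here is a hedged sketch of registering a SearchProvider through Swirl's REST API; the endpoint, credentials, and field names are modeled on the pattern of Swirl's bundled configs and should be treated as assumptions.

```python
# Hedged sketch: adding a SearchProvider via Swirl's REST API. Endpoint,
# credentials, and field names are assumptions modeled on Swirl's examples.
import requests

provider = {
    "name": "My REST source",             # hypothetical provider
    "active": True,
    "connector": "RequestsGet",           # generic HTTP GET connector
    "url": "https://example.com/api/search",
    "query_template": "{url}?q={query_string}",
}
response = requests.post(
    "http://localhost:8000/swirl/searchproviders/",
    json=provider,
    auth=("admin", "password"),           # assumed local dev credentials
    timeout=30,
)
print(response.status_code)
```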
- 7 Open-Source Search Engines for your Enterprise and Startups you MUST know.
Swirl is an open-source search platform software that simultaneously searches multiple content sources and returns AI-ranked results. You can also use Generative AI Models to get answers based on your data. It’s written in Python.
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
khoj - Your AI second brain. A copilot to get answers to your questions, whether from your own notes or from the internet. Use powerful online (e.g. GPT-4) or private, local (e.g. Mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
solr - Apache Solr open-source search software
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, with voice-cloning capabilities.
Resume-Matcher - Resume Matcher is an open source, free tool to improve your resume. It works by using language models to compare and rank resumes with job descriptions.
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
lambdapi - Serverless runtime environment tailored for code produced by LLMs. Automatic API generation from your code, support for multiple programming languages, and integrated file and database storage solutions.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
distilabel - ⚗️ distilabel is a framework for synthetic data and AI feedback for AI engineers that require high-quality outputs, full data ownership, and overall efficiency.
libsql - libSQL is a fork of SQLite that is both Open Source, and Open Contributions.
swirl-search - Swirl queries anything with an API then uses spaCy & NLTK to re-rank the unified results without copying any data! Includes zero-code configs for Apache Solr, ChatGPT, Elastic Search, OpenSearch, PostgreSQL, Google BigQuery, RequestsGet, Google PSE, NLResearch.com, Miro, Microsoft 365, Atlassian, YouTrack, GitHub & more! [Moved to: https://github.com/swirlai/swirl-search]