kudu
litellm
| | kudu | litellm |
|---|---|---|
| Mentions | 3 | 28 |
| Stars | 1,799 | 8,225 |
| Growth | 1.0% | 30.4% |
| Activity | 9.2 | 10.0 |
| Latest commit | 10 days ago | 2 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kudu
-
FLaNK Stack Weekly for 14 Aug 2023
- https://github.com/apache/kudu/blob/master/examples/quickstart/impala/README.adoc
- https://medium.com/@nifi.notes/building-an-effective-nifi-flow-replacetext-60a6016d378c
- https://community.cloudera.com/t5/Community-Articles/Running-DNS-and-Domain-Scanning-Tools-From-Apache-NiFi/ta-p/248484
- https://community.cloudera.com/t5/Community-Articles/Using-Cloudera-Data-Science-Workbench-with-Apache-NiFi-and/ta-p/249469
- https://community.cloudera.com/t5/Community-Articles/Scanning-Documents-into-Data-Lakes-via-Tesseract-MQTT-Python/ta-p/248492
- https://community.cloudera.com/t5/Community-Articles/Adding-Stanford-CoreNLP-To-Big-Data-Pipelines-Apache-NiFi-1/ta-p/249378
- https://community.cloudera.com/t5/Community-Articles/Using-Apache-NiFi-for-Speech-Processing-Speech-to-Text-with/ta-p/249242
- https://community.cloudera.com/t5/Community-Articles/Ingesting-Flight-Data-ADS-B-USB-Receiver-with-Apache-NiFi-1/ta-p/247940
- https://community.cloudera.com/t5/Community-Articles/Integrating-lucene-geo-gazetteer-For-Geo-Parsing-with-Apache/ta-p/247993
- https://community.cloudera.com/t5/Community-Articles/Creating-WordClouds-From-DataFlows-with-Apache-NiFi-and/ta-p/246605
- https://community.cloudera.com/t5/Community-Articles/NIFI-1-x-For-Automatic-Music-Playing-Pipelines/ta-p/247994
- https://community.cloudera.com/t5/Community-Articles/Using-Apache-NiFi-with-Apache-MXNet-GluonCV-for-YOLO-3-Deep/ta-p/248979
- https://community.cloudera.com/t5/Community-Articles/Tracking-Air-Quality-with-HDP-and-HDF-Part-1-Apache-NiFi/ta-p/248265
- https://community.cloudera.com/t5/Community-Articles/Monitoring-Energy-Usage-Utilizing-Apache-NiFi-Python-Apache/ta-p/247525
- https://community.cloudera.com/t5/Community-Articles/Using-Command-Line-Security-Tools-from-Apache-NiFi/ta-p/248158
- https://community.cloudera.com/t5/Community-Articles/Apache-NiFi-Processor-for-Apache-MXNet-SSD-Single-Shot/ta-p/249240
- https://community.cloudera.com/t5/Community-Articles/Ingesting-Apache-MXNet-Gluon-Deep-Learning-Results-Via-MQTT/ta-p/248544
- https://community.cloudera.com/t5/Community-Articles/Updating-The-Apache-OpenNLP-Community-Apache-NiFi-Processor/ta-p/248398
- https://community.cloudera.com/t5/Community-Articles/Integration-Apache-OpenNLP-1-8-4-into-Apache-NiFi-1-5-For/ta-p/248010
- https://community.cloudera.com/t5/Community-Articles/Tracking-Phone-Location-for-Android-and-IoT-with-OwnTracks/ta-p/244875
- https://community.cloudera.com/t5/Community-Articles/Ingesting-Drone-Data-From-Ryze-Tello-Part-1-Setup-and/ta-p/249422
- https://community.cloudera.com/t5/Community-Articles/Ingesting-RDBMS-Data-As-New-Tables-Arrive-Automagically-into/ta-p/246214
- https://community.cloudera.com/t5/Community-Articles/Incrementally-Streaming-RDBMS-Data-to-Your-Hadoop-DataLake/ta-p/247927
- https://community.cloudera.com/t5/Community-Articles/Ingesting-and-Analyzing-Street-Camera-Data-from-Major-US/ta-p/249194
- https://community.cloudera.com/t5/Community-Articles/Basic-Image-Processing-and-Linux-Utilities-As-Part-of-a-Big/ta-p/249121
- https://community.cloudera.com/t5/Community-Articles/Hosting-and-Ingesting-Data-From-Web-Pages-Desktop-and-Mobile/ta-p/244575
- https://community.cloudera.com/t5/Community-Articles/QADCDC-Our-how-to-ingest-some-database-tables-to-Hadoop-Very/ta-p/245229
- https://community.cloudera.com/t5/Community-Articles/Tracking-Air-Quality-with-HDP-and-HDF-Part-2-Indoor-Air/ta-p/249471
- https://community.cloudera.com/t5/Community-Articles/Streaming-Ingest-of-Google-Sheets-with-HDF-2-0/ta-p/247764
- https://community.cloudera.com/t5/Community-Articles/Ingesting-Golden-Gate-Records-From-Apache-Kafka-and/ta-p/247557
- https://community.cloudera.com/t5/Community-Articles/Data-Processing-Pipeline-Parsing-PDFs-and-Identifying-Names/ta-p/249105
- https://community.cloudera.com/t5/Community-Articles/Using-A-TensorFlow-quot-Person-Blocker-quot-With-Apache-NiFi/ta-p/248141
- https://community.cloudera.com/t5/Community-Articles/Su-Su-Sussudio-Sudoers-Log-Parsing-with-Apache-NiFi/ta-p/249461
- https://community.cloudera.com/t5/Community-Articles/Integrating-IBM-Watson-Machine-Learning-APIs-with-Apache/ta-p/247545
- https://community.cloudera.com/t5/Community-Articles/Simple-Change-Data-Capture-CDC-with-SQL-Selects-via-Apache/ta-p/308376
- https://community.cloudera.com/t5/Community-Articles/Deep-Learning-IoT-Workflows-with-Raspberry-Pi-MQTT-MXNet/ta-p/249456
- https://community.cloudera.com/t5/Community-Articles/Parsing-Web-Pages-for-Images-with-Apache-NiFi/ta-p/248415
- https://community.cloudera.com/t5/Community-Articles/Trigger-SonicPi-Music-Via-Apache-NiFi/ta-p/248587
- https://community.cloudera.com/t5/Community-Articles/Using-Parsey-McParseFace-Google-TensorFlow-Syntaxnet-From/ta-p/246337
- https://community.cloudera.com/t5/Community-Articles/Ingesting-osquery-Into-Apache-Phoenix-using-Apache-NiFi/ta-p/249308
- https://community.cloudera.com/t5/Community-Articles/Converting-PowerPoint-Presentations-into-French-from-English/ta-p/248974
- https://community.cloudera.com/t5/Community-Articles/Posting-Images-with-Apache-NiFi-1-7-and-a-Custom-Processor/ta-p/249017
- https://community.cloudera.com/t5/Community-Articles/Parsing-Any-Document-with-Apache-NiFi-1-5-with-Apache-Tika/ta-p/247672
-
Tencent Data Engineer: Why We Went from ClickHouse to Apache Doris?
Really interested in partial updates, but I haven't found any information on how the merges/upserts physically happen. It would be great if a doc like https://github.com/apache/kudu/blob/master/docs/design-docs/tablet.md existed for Apache Doris.
- Would ParquetWriter from pyarrow automatically flush?
litellm
-
Anthropic launches Tool Use (function calling)
There are a few libs that already abstract this away, for example:
- https://github.com/BerriAI/litellm
- https://jxnl.github.io/instructor/
- langchain
It's not hard for me to imagine a future where there is something like the CNCF for AI models, tools, and infra.
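As a sketch of what such an abstraction looks like in practice: these libraries accept OpenAI-style `tools` definitions and route them to providers such as Anthropic. The `get_weather` schema below is invented for illustration, and the commented-out call assumes litellm is installed with an API key configured:

```python
# OpenAI-style tool definition, as abstraction layers like litellm
# accept it; the function name and schema are made up for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# With litellm installed and an Anthropic key set, the same payload
# could be routed to Claude (not executed here):
# import litellm
# response = litellm.completion(
#     model="claude-3-opus-20240229",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=tools,
# )

print(tools[0]["function"]["name"])  # get_weather
```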
-
Ask HN: Python Meta-Client for OpenAI, Anthropic, Gemini LLM and other API-s?
Hey, are you just looking for litellm - https://github.com/BerriAI/litellm
Context: I'm the repo maintainer.
-
Voxos.ai – An Open-Source Desktop Voice Assistant
It should be possible using LiteLLM and a patch or a proxy.
https://github.com/BerriAI/litellm
- Show HN: Talk to any ArXiv paper just by changing the URL
-
Integrate LLM Frameworks
This article will demonstrate how txtai can integrate with llama.cpp, LiteLLM and custom generation methods. For custom generation, we'll show how to run inference with a Mamba model.
-
Is there any open source app to load a model and expose API like OpenAI?
I use this with ollama and works perfectly https://github.com/BerriAI/litellm
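A minimal sketch of that setup, assuming ollama is serving on its default port; the model name is illustrative and the network call itself is left commented out:

```python
# Hypothetical request for pointing litellm at a local ollama server;
# the port is ollama's default and the model name is illustrative.
request = {
    "model": "ollama/llama2",             # litellm's provider-prefixed model id
    "api_base": "http://localhost:11434", # default ollama endpoint
    "messages": [{"role": "user", "content": "Say hello"}],
}

# With ollama running locally (not executed here):
# import litellm
# response = litellm.completion(**request)
# print(response.choices[0].message.content)

print(request["model"])  # ollama/llama2
```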
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
Another abstraction layer library is: https://github.com/BerriAI/litellm
For me the killer feature of a library like this would be if it implemented function calling. Even if it was for a very restricted grammar - like the traditional ReAct prompt:
Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types:
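To sketch how restricted-grammar function calling could sit on top of a prompt like that, here is a toy parser for the interleaved steps; the `Search`/`Finish` action names are illustrative, not from any particular library:

```python
import re

# Toy parser for ReAct-style output: pull out each
# Thought/Action/Observation line as a (kind, content) pair.
STEP_RE = re.compile(r"^(Thought|Action|Observation):\s*(.*)$", re.MULTILINE)

def parse_react(text):
    """Return the (kind, content) steps found in a ReAct-formatted reply."""
    return [(m.group(1), m.group(2)) for m in STEP_RE.finditer(text)]

reply = (
    "Thought: I need the capital of France.\n"
    "Action: Search[capital of France]\n"
    "Observation: Paris is the capital of France.\n"
    "Thought: I can answer now.\n"
    "Action: Finish[Paris]\n"
)

steps = parse_react(reply)
print(steps[1])  # ('Action', 'Search[capital of France]')
```

A driver loop would then execute each `Action`, append the result as an `Observation`, and call the model again until it emits a `Finish`.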
- LibreChat
- LM Studio – Discover, download, and run local LLMs
-
Please!!! Help me!!!! Open Interpreter. Chatgpt-4. Mac, Terminals.
Welcome to Open Interpreter.
▌ OpenAI API key not found
To use GPT-4 (recommended) please provide an OpenAI API key. To use Code-Llama (free but less capable) press enter.
OpenAI API key: [the API key I inputted]
Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.
▌ Model set to GPT-4
Open Interpreter will require approval before running code. Use interpreter -y to bypass this. Press CTRL-C to exit.
> export OPENAI_API_KEY=your_api_key
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/bin/interpreter", line 8, in <module>
    sys.exit(cli())
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 22, in cli
    cli(self)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/cli/cli.py", line 254, in cli
    interpreter.chat()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 76, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 97, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 62, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 105, in _streaming_chat
    yield from self._respond()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 131, in _respond
    yield from respond(self)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/respond.py", line 61, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/llm/setup_openai_coding_llm.py", line 94, in coding_llm
    response = litellm.completion(**params)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 792, in wrapper
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 751, in wrapper
    result = original_function(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 1183, in completion
    raise exception_type(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2959, in exception_type
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2355, in exception_type
    raise original_exception
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 441, in completion
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 423, in completion
    response = openai.ChatCompletion.create(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
What are some alternatives?
iceberg - Apache Iceberg
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
hudi - Upserts, Deletes And Incremental Processing on Big Data.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
LocalAI - :robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also has voice cloning capabilities.
Apache Thrift - Apache Thrift
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
Dask - Parallel computing with task scheduling
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
libsql - libSQL is a fork of SQLite that is both Open Source, and Open Contributions.