| | litellm | fluent-bit |
|---|---|---|
| Mentions | 28 | 35 |
| Stars | 8,696 | 5,366 |
| Growth | 19.8% | 1.7% |
| Activity | 10.0 | 9.8 |
| Latest commit | about 9 hours ago | 6 days ago |
| Language | Python | C |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
litellm
- Anthropic launches Tool Use (function calling)
There are a few libs that already abstract this away, for example:
- https://github.com/BerriAI/litellm
- https://jxnl.github.io/instructor/
- langchain
It's not hard for me to imagine a future where there is something like the CNCF for AI models, tools, and infra.
- Ask HN: Python Meta-Client for OpenAI, Anthropic, Gemini LLM and other API-s?
Hey, are you just looking for litellm - https://github.com/BerriAI/litellm
context - i'm the repo maintainer
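For context, the core of LiteLLM is a single completion() call that keeps the OpenAI message format across providers. A minimal sketch (the model names are illustrative, and each provider's API key is assumed to be set in the environment):

```python
from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The same call shape works across providers; only the model string changes.
openai_response = completion(model="gpt-3.5-turbo", messages=messages)
claude_response = completion(model="claude-2", messages=messages)

# Responses mirror the OpenAI schema regardless of backend.
print(openai_response.choices[0].message.content)
print(claude_response.choices[0].message.content)
```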
- Voxos.ai – An Open-Source Desktop Voice Assistant
It should be possible using LiteLLM and a patch or a proxy.
https://github.com/BerriAI/litellm
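The proxy route the commenter mentions usually amounts to pointing LiteLLM at an OpenAI-compatible endpoint. A rough sketch, where the URL and model name are placeholders rather than anything Voxos-specific:

```python
from litellm import completion

# The "openai/" prefix tells LiteLLM to speak the OpenAI wire format to a
# custom endpoint; http://localhost:8000/v1 is a hypothetical local proxy.
response = completion(
    model="openai/my-local-model",
    api_base="http://localhost:8000/v1",
    messages=[{"role": "user", "content": "Turn the volume down."}],
)
print(response.choices[0].message.content)
```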
- Show HN: Talk to any ArXiv paper just by changing the URL
- Integrate LLM Frameworks
This article will demonstrate how txtai can integrate with llama.cpp, LiteLLM and custom generation methods. For custom generation, we'll show how to run inference with a Mamba model.
- Is there any open source app to load a model and expose API like OpenAI?
I use this with ollama and works perfectly https://github.com/BerriAI/litellm
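For reference, that pairing looks roughly like this (assuming an Ollama server on its default port with the llama2 model already pulled):

```python
from litellm import completion

response = completion(
    model="ollama/llama2",              # routes the call to the Ollama backend
    api_base="http://localhost:11434",  # Ollama's default address
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```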
- OpenAI Switch Kit: Swap OpenAI with any open-source model
Another abstraction layer library is: https://github.com/BerriAI/litellm
For me the killer feature of a library like this would be if it implemented function calling. Even if it was for a very restricted grammar - like the traditional ReAct prompt:
Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types:
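For what it's worth, LiteLLM does forward OpenAI-style function definitions to backends that support them; whether that covers the commenter's ReAct use case depends on the model. A sketch with a made-up weather function:

```python
from litellm import completion

# Hypothetical function schema in the OpenAI function-calling format.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
)
# For function-calling models, the reply carries a structured call, not prose.
print(response.choices[0].message)
```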
- LibreChat
- LM Studio – Discover, download, and run local LLMs
- Please!!! Help me!!!! Open Interpreter. Chatgpt-4. Mac, Terminals.
```
Welcome to Open Interpreter.

────────────────────────────────────────

▌ OpenAI API key not found

To use GPT-4 (recommended) please provide an OpenAI API key.
To use Code-Llama (free but less capable) press enter.

────────────────────────────────────────

OpenAI API key: [the API key I inputted]

Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.

────────────────────────────────────────

▌ Model set to GPT-4

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

> export OPENAI_API_KEY=your_api_key

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/bin/interpreter", line 8, in <module>
    sys.exit(cli())
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 22, in cli
    cli(self)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/cli/cli.py", line 254, in cli
    interpreter.chat()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 76, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 97, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 62, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 105, in _streaming_chat
    yield from self._respond()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 131, in _respond
    yield from respond(self)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/respond.py", line 61, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/llm/setup_openai_coding_llm.py", line 94, in coding_llm
    response = litellm.completion(**params)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 792, in wrapper
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 751, in wrapper
    result = original_function(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 1183, in completion
    raise exception_type(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2959, in exception_type
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2355, in exception_type
    raise original_exception
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 441, in completion
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 423, in completion
    response = openai.ChatCompletion.create(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
```
fluent-bit
- Observability at KubeCon + CloudNativeCon Europe 2024 in Paris
Fluentbit
- Fluent Bit with ECS: Configuration Tips and Tricks
```
$ docker run --rm fluent-bit-dummy
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Fluent Bit v1.9.10
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/12/24 16:06:59] [ info] [fluent bit] version=1.9.10, commit=557c8336e7, pid=1
[2023/12/24 16:06:59] [ info] [storage] version=1.4.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
[2023/12/24 16:06:59] [ info] [cmetrics] version=0.3.7
[2023/12/24 16:06:59] [ info] [output:stdout:stdout.0] worker #0 started
[2023/12/24 16:06:59] [ info] [sp] stream processor started
[0] dummy.0: [1703434019.553880465, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434020.555768799, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434021.550525174, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434022.551563050, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434023.551944509, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434024.550027843, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434025.550901801, {"message"=>"custom dummy"}]
[0] dummy.0: [1703434026.549279385, {"message"=>"custom dummy"}]
^C[2023/12/24 16:07:08] [engine] caught signal (SIGINT)
[0] dummy.0: [1703434027.549678344, {"message"=>"custom dummy"}]
[2023/12/24 16:07:08] [ warn] [engine] service will shutdown in max 5 seconds
[2023/12/24 16:07:08] [ info] [engine] service has stopped (0 pending tasks)
[2023/12/24 16:07:08] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2023/12/24 16:07:08] [ info] [output:stdout:stdout.0] thread worker #0 stopped
```
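A configuration along these lines would produce that output; this is a sketch of the kind of file likely baked into the image above, not its actual contents:

```
# The dummy input emits a fixed record every flush interval; stdout prints it.
[SERVICE]
    Flush 1

[INPUT]
    Name  dummy
    Dummy {"message": "custom dummy"}

[OUTPUT]
    Name  stdout
    Match *
```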
- Should You Be Scared of Unix Signals?
> Libc is a lot more tricky about signals, since not all libc functions can be safely called from handlers.
And this is a huge thing. People do all kinds of operations in signal handlers completely oblivious to the pitfalls. Pitfalls which often do not manifest, making it a great "it works for me" territory.
I once raised a ticket on fluentbit[1] about it, but they have abused signal handlers so thoroughly that I do not think they can mitigate the issue without a major rewrite of the signal and crash handling.
[1] https://github.com/fluent/fluent-bit/issues/4836
- Vector: a Rust-based lightweight alternative to Fluentd/Logstash
Fluentbit is Fluentd's lightweight alternative to itself.
https://fluentbit.io
- FLaNK Stack Weekly for 14 Aug 2023
- Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
- Fluentbit Kubernetes - How to extract fields from existing logs
From this (https://github.com/fluent/fluent-bit/issues/723), I can see there is no grok support for fluent-bit.
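The usual substitute is a named-capture regex parser, which covers much of what Grok patterns do. A sketch for a hypothetical `time level message` log line (the parser name and regex are illustrative):

```
[PARSER]
    Name        my_app
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<level>[^ ]+) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S
```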
- Parsing multiline logs using a custom Fluent Bit configuration
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: newrelic
  labels:
    k8s-app: newrelic-logging
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     ${LOG_LEVEL}
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE output-newrelic.conf
    @INCLUDE filter-kubernetes.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              ${PATH}
        Parser            ${LOG_PARSER}
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     7MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name              multiline
        Match             *
        multiline.parser  multiline-regex

    [FILTER]
        Name    record_modifier
        Match   *
        Record  cluster_name ${CLUSTER_NAME}

    [FILTER]
        Name       kubernetes
        Match      kube.*
        Kube_URL   https://kubernetes.default.svc.cluster.local:443
        Merge_Log  Off

  output-newrelic.conf: |
    [OUTPUT]
        Name        newrelic
        Match       *
        licenseKey  ${LICENSE_KEY}
        endpoint    ${ENDPOINT}

  parsers.conf: |
    # Relevant parsers retrieved from:
    # https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On

    [PARSER]
        Name         cri
        Format       regex
        Regex        ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L%z

    [MULTILINE_PARSER]
        name           multiline-regex
        key_content    message
        type           regex
        flush_timeout  1000
        #
        # Regex rules for multiline parsing
        # ---------------------------------
        #
        # configuration hints:
        #
        #  - first state always has the name: start_state
        #  - every field in the rule must be inside double quotes
        #
        # rules | state name    | regex pattern                   | next state
        # ------|---------------|---------------------------------|-----------
        rule      "start_state"   "/(Dec \d+ \d+\:\d+\:\d+)(.*)/"   "cont"
        rule      "cont"          "/^\s+at.*/"                      "cont"
```
- Tool to scrape (semi)-structured log files (e.g. log4j)
There are also log forwarding tools like promtail and fluentbit that can be used to both ship logs to something like Loki and produce metrics.
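On the Fluent Bit side, shipping to Loki is a matter of adding its loki output; a sketch, where the host and label values are placeholders:

```
[OUTPUT]
    Name   loki
    Match  *
    Host   loki.example.internal
    Port   3100
    Labels job=fluent-bit
```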
- How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
FluentBit is a log processor that can push all of your application logs to a central location such as an Elasticsearch or OpenSearch cluster.
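A minimal sketch of that setup using Fluent Bit's es output (the host and index are placeholders; OpenSearch speaks the same protocol):

```
[OUTPUT]
    Name  es
    Match *
    Host  elasticsearch.logging.svc
    Port  9200
    Index strapi-logs
```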
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
loki - Like Prometheus, but for logs.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
rsyslog - a Rocket-fast SYStem for LOG processing
LocalAI - The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, with voice cloning capabilities.
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
jaeger - CNCF Jaeger, a Distributed Tracing Platform
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
winston - A logger for just about everything.
libsql - libSQL is a fork of SQLite that is both Open Source, and Open Contributions.
Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.