logs-benchmark
vector
| | logs-benchmark | vector |
|---|---|---|
| Mentions | 11 | 96 |
| Stars | 75 | 16,512 |
| Growth | - | 5.7% |
| Activity | 10.0 | 9.9 |
| Latest commit | over 1 year ago | 1 day ago |
| Language | Shell | Rust |
| License | - | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
logs-benchmark
-
OpenObserve: Elasticsearch/Datadog alternative in Rust, with 140x lower storage cost
I am a maintainer at SigNoz. Nice to see OpenObserve's belief that the future of observability should be open source. We chose ClickHouse rather than building a database, as it takes a multi-year effort to bring a database to maturity, and ClickHouse has been battle-tested at Yandex, Uber and Cloudflare. ClickHouse also provides native integration with S3 and other blob storages. Our users have been keeping a week of data on disk as hot storage and moving it to S3 after that. Tiered storage is really cool in terms of query performance.
We have also seen our users' logs data at a compression ratio of 30x-40x. We have published a logs benchmark (https://github.com/SigNoz/logs-benchmark) where the data is very high-cardinality (yielding a compression factor of only 2.5x). Would love to see how OpenObserve performs on that dataset someday.
Wishing you the best for the journey ahead.
-
Elastic vs Loki vs SigNoz : A Performance Benchmark of self hosted & open source logging platforms
Did you update the benchmark after you got feedback from this user on Hacker News, or after you got the feedback that including Loki here is kinda pointless?
- Elastic vs Loki vs SigNoz : A Perf Benchmark of open source logging platforms
- FLiP Stack Weekly 28 Jan 2023
- Elastic, Loki and SigNoz – A Perf Benchmark of Open-Source Logging Platforms
vector
-
Docker Log Observability: Analyzing Container Logs in HashiCorp Nomad with Vector, Loki, and Grafana
```hcl
job "vector" {
  datacenters = ["dc1"]
  # system job, runs on all nodes
  type = "system"

  group "vector" {
    count = 1

    network {
      port "api" {
        to = 8686
      }
    }

    ephemeral_disk {
      size   = 500
      sticky = true
    }

    task "vector" {
      driver = "docker"

      config {
        image   = "timberio/vector:0.30.0-debian"
        ports   = ["api"]
        volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
      }

      env {
        VECTOR_CONFIG          = "local/vector.toml"
        VECTOR_REQUIRE_HEALTHY = "false"
      }

      resources {
        cpu    = 100 # 100 MHz
        memory = 100 # 100 MB
      }

      # template with Vector's configuration
      template {
        destination   = "local/vector.toml"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        # overriding the delimiters to [[ ]] to avoid conflicts with
        # Vector's native templating, which also uses {{ }}
        left_delimiter  = "[["
        right_delimiter = "]]"
        data = <
```
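The template's `data` heredoc is cut off above, so the actual Vector configuration from the post is not shown. As an illustration only (not the author's config), a minimal `vector.toml` that such a job could render might tail Docker container logs via the mounted Docker socket and ship them to Loki; the Loki endpoint and label here are placeholder values:

```toml
# Read logs from all containers via /var/run/docker.sock
[sources.docker]
type = "docker_logs"

# Forward them to a Loki instance (address is an assumed placeholder)
[sinks.loki]
type = "loki"
inputs = ["docker"]
endpoint = "http://loki:3100"
encoding.codec = "json"
labels.job = "docker"
```

In the Nomad template the endpoint would typically come from service discovery rather than being hard-coded, which is why the post overrides the template delimiters.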
- FLaNK AI Weekly 18 March 2024
- Vector: A high-performance observability data pipeline
-
Hacks to reduce cloud spend
We are doing something similar with OTel, but we are looking at using https://vector.dev/
-
About reading logs
We don't pull logs, we forward logs to a centralized logging service.
-
Self-hosted log parser
OpenSearch - Amazon's fork of Elasticsearch: https://opensearch.org/docs/latest. If you do this and have distributed log sources you'd use Logstash for, bin off Logstash and use Vector (https://vector.dev/); it's better out of the box for SaaS stuff.
-
Creating a centralized syslog server with Elasticsearch
I have done something similar in the past: you can send the logs through centralized syslog servers (I suggest syslog-ng) and from there ingest into ELK. For parsing, I advise using something like Vector; it is a lot faster than Logstash. Once your logs are ingested correctly, you can create your own dashboard in Kibana. If this fits your requirements, there is no need to install nginx (unless you want to use it as a reverse proxy for Kibana), PHP or MySQL.
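As a sketch of the pipeline that comment describes (syslog-ng forwarding into Vector, Vector shipping to Elasticsearch), a minimal Vector config might look like the following; the listen address, Elasticsearch endpoint, and index name are all illustrative placeholders, not from the original post:

```toml
# Accept syslog messages forwarded by syslog-ng
[sources.syslog_in]
type = "syslog"
address = "0.0.0.0:514"
mode = "udp"

# Ship parsed events into Elasticsearch (endpoint/index are assumptions)
[sinks.es]
type = "elasticsearch"
inputs = ["syslog_in"]
endpoints = ["http://elasticsearch:9200"]
bulk.index = "syslog-%Y-%m-%d"
```

Vector's `syslog` source parses RFC 3164/5424 messages into structured fields, which is what removes the need for a separate Logstash parsing stage.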
-
Show HN: Homelab Monitoring Setup with Grafana
I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.
Vector[1] would work as the agent, being able to collect both logs and metrics. But the issue would then be storing it. I'm assuming the Elastic Stack might now be able to do both, but it's just too heavy to deal with in a small setup.
A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab), mostly looking at the memory usage to fit it on my Synology. Quickwit[2] and Log-Store[3] both come with built-in web interfaces that reduce the need for Grafana, but neither of them does metrics.
- [1] https://vector.dev
-
Retaining logs generated by a service running in a pod
Log to stdout/stderr, collect your logs with a tool like Vector (vector.dev), and send them to something like Grafana Loki.
-
Lightweight logging on RPi?
I would recommend that you run Vector as a systemd service so you don't have to worry about managing it. Here is a basic unit file to do that: https://github.com/vectordotdev/vector/blob/master/distribution/systemd/vector.service
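For a lightweight Raspberry Pi setup like the one asked about, a minimal `vector.toml` for that service might read the systemd journal and write JSON to local disk; this is a sketch under assumed paths, not config from the comment:

```toml
# Read logs from the systemd journal (requires journalctl on the host)
[sources.journal]
type = "journald"

# Write events as JSON files, one file per day (path is an assumption)
[sinks.local_files]
type = "file"
inputs = ["journal"]
path = "/var/log/vector/%Y-%m-%d.log"
encoding.codec = "json"
```

With the linked unit file installed, `systemctl enable --now vector` would start it on boot and restart it on failure, so nothing needs manual supervision.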
What are some alternatives?
opentelemetry-collector-co
graylog - Free and open log management
openobserve - 🚀 10x easier, 🚀 140x lower storage cost, 🚀 high performance, 🚀 petabyte scale - Elasticsearch/Splunk/Datadog alternative for 🚀 (logs, metrics, traces, RUM, Error tracking, Session replay).
Fluentd - Fluentd: Unified Logging Layer (project under CNCF)
carbonyl - Chromium running inside your terminal
agent - Vendor-neutral programmable observability pipelines.
shite - The little hot-reloadin' static site maker from shell.
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
clamshell - experimenting with a python based shell
OpenSearch - 🔎 Open source distributed and RESTful search engine.
FLiPN-Py-Stocks - finnhub stocks
tracing - Application level tracing for Rust.