| | opentelemetry-collector-contrib | vector |
|---|---|---|
| Mentions | 10 | 96 |
| Stars | - | 16,561 |
| Growth | - | 1.8% |
| Activity | - | 9.9 |
| Latest commit | - | about 16 hours ago |
| Language | - | Rust |
| License | - | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-collector-contrib
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to get specific resource attributes into metric labels.
With the advantage that you get only the specific attributes you want, thus avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
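As a sketch of that technique (attribute names and pipeline components here are placeholders, not taken from the thread), a transform-processor rule can copy one chosen resource attribute onto metric datapoints before the Prometheus exporter sees them:

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # copy only the one resource attribute you want as a label;
          # "service.namespace" is a placeholder for your attribute
          - set(attributes["namespace"], resource.attributes["service.namespace"])

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [transform]
      exporters: [prometheus]
```

Because only the listed attributes are promoted to labels, the rest of the resource attributes never reach Prometheus, which is what keeps the cardinality bounded.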
-
Vendor lock-in is in the small details
The article seems to suggest https://github.com/open-telemetry/opentelemetry-collector-co... was silently killed, yet it appears to have been merged in January, am I missing something?
-
Ask HN: What's Your Opinion on Opentelemetry?
OpenTelemetry is a large suite of software that supports many use cases. I think you got what you wanted but didn't realise it!
The dedicated executable that you are after is called the OpenTelemetry Collector.
The OpenTelemetry SDK for your language of choice should include many exporters, which describe the format and transport mechanism for the traces. The OpenTelemetry Collector can then use an appropriate receiver to ingest those traces.
Here is a file based receiver for the collector:
https://github.com/open-telemetry/opentelemetry-collector-co...
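The link above is truncated, so which receiver it points to is not recoverable; as one example of a file-based receiver from the contrib repository, a minimal filelog pipeline might look like this (paths and endpoint are placeholders):

```yaml
receivers:
  filelog:
    include: [/var/log/myapp/*.log]   # placeholder path
    start_at: beginning

exporters:
  otlp:
    endpoint: backend:4317            # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```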
-
OpenTelemetry at Scale: Using Kafka to handle bursty traffic
This arch is how the big players do it at scale (i.e. Datadog, New Relic: the second it passes their edge it lands in a Kafka queue). Also, OTel components lack rate limiting(1), meaning it's super easy to overload your backend storage (S3).
Grafana has some posts on how they softened the S3 blow with memcached(2,3).
1. https://github.com/open-telemetry/opentelemetry-collector-co...
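The buffering pattern can be sketched with the contrib kafka exporter on the edge tier (broker address and topic are placeholders); a second collector tier behind the queue would then use the matching kafka receiver to drain it:

```yaml
# edge collector: accept OTLP, buffer into Kafka instead of hitting storage
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  kafka:
    brokers: [kafka:9092]   # placeholder broker
    topic: otlp_spans       # placeholder topic

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [kafka]
```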
-
Show HN: HyperDX – open-source dev-friendly Datadog alternative
Ah yeah, the easiest way is probably using the OpenTelemetry Collector to set up a process to pull your logs out of journald and send them via OTel logs to HyperDX (or anywhere else that speaks OTel) - the docs might be a bit tricky to navigate depending on your familiarity with OpenTelemetry, but this is what you'd be looking for:
https://github.com/open-telemetry/opentelemetry-collector-co...
Happy to dive more into the discord too if you'd like!
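A minimal sketch of that journald-to-OTel pipeline, assuming the contrib journald receiver (the unit name and ingestion endpoint are placeholders):

```yaml
receivers:
  journald:
    directory: /var/log/journal
    units: [myapp.service]        # placeholder unit

exporters:
  otlphttp:
    endpoint: https://example-ingest:4318   # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [journald]
      exporters: [otlphttp]
```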
-
DataDog asked OpenTelemetry contributor to kill pull request
Link to exact comment: https://github.com/open-telemetry/opentelemetry-collector-co...
-
Elastic, Loki and SigNoz – A Perf Benchmark of Open-Source Logging Platforms
What schema does SigNoz use with ClickHouse? The OpenTelemetry Collector uses this schema https://github.com/open-telemetry/opentelemetry-collector-co... and I found out that accessing map attributes is much slower (10-50x) compared to regular columns. I expected some slowdown, but this is too much.
-
Podman: A tool for managing OCI containers and pods
Podman does support the Docker API, so you can use something like the OpenTelemetry Collector to fetch metrics via the Docker API and forward them to Prometheus.
Collector: https://github.com/open-telemetry/opentelemetry-collector-co...
Docker receiver: https://github.com/open-telemetry/opentelemetry-collector-co...
Prometheus exporters: https://github.com/open-telemetry/opentelemetry-collector-co... and https://github.com/open-telemetry/opentelemetry-collector-co...
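Wiring those pieces together might look like the following sketch, assuming Podman's Docker-compatible socket (the socket path and scrape port are assumptions):

```yaml
receivers:
  docker_stats:
    # Podman's Docker-compatible socket; path is an assumption
    endpoint: unix:///run/podman/podman.sock
    collection_interval: 10s

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889   # Prometheus scrapes this port

service:
  pipelines:
    metrics:
      receivers: [docker_stats]
      exporters: [prometheus]
```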
vector
-
Docker Log Observability: Analyzing Container Logs in HashiCorp Nomad with Vector, Loki, and Grafana
```hcl
job "vector" {
  datacenters = ["dc1"]
  # system job, runs on all nodes
  type = "system"

  group "vector" {
    count = 1

    network {
      port "api" {
        to = 8686
      }
    }

    ephemeral_disk {
      size   = 500
      sticky = true
    }

    task "vector" {
      driver = "docker"

      config {
        image   = "timberio/vector:0.30.0-debian"
        ports   = ["api"]
        volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
      }

      env {
        VECTOR_CONFIG          = "local/vector.toml"
        VECTOR_REQUIRE_HEALTHY = "false"
      }

      resources {
        cpu    = 100 # 100 MHz
        memory = 100 # 100MB
      }

      # template with Vector's configuration
      template {
        destination   = "local/vector.toml"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        # overriding the delimiters to [[ ]] to avoid conflicts with
        # Vector's native templating, which also uses {{ }}
        left_delimiter  = "[["
        right_delimiter = "]]"
        data=<
```
- FLaNK AI Weekly 18 March 2024
- Vector: A high-performance observability data pipeline
-
Hacks to reduce cloud spend
we are doing something similar with OTEL but we are looking at using https://vector.dev/
-
About reading logs
We don't pull logs, we forward logs to a centralized logging service.
-
Self hosted log parser
opensearch - Amazon's fork of Elasticsearch: https://opensearch.org/docs/latest
If you do this and have distributed log sources you'd use Logstash for, bin off Logstash and use Vector (https://vector.dev/); it's better out of the box for SaaS stuff.
-
creating a centralize syslog server with elastic search
I have done something similar in the past: you can send the logs through a centralized syslog server (I suggest syslog-ng) and from there ingest into ELK. For parsing I'd advise using something like Vector; it is a lot faster than Logstash. Once your logs are ingested correctly, you can create your own dashboards in Kibana. If this fits your requirements, there is no need to install nginx (unless you want to use it as a reverse proxy for Kibana), PHP, or MySQL.
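A minimal Vector sketch of that parsing hop, assuming a TCP syslog feed and a local Elasticsearch node (addresses and component names are placeholders):

```toml
# vector.toml: receive syslog, forward to Elasticsearch
[sources.syslog_in]
type    = "syslog"
mode    = "tcp"
address = "0.0.0.0:514"

[sinks.es_out]
type      = "elasticsearch"
inputs    = ["syslog_in"]
endpoints = ["http://localhost:9200"]
```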
-
Show HN: Homelab Monitoring Setup with Grafana
I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.
Vector[1] would work as the agent, being able to collect both logs and metrics. But the issue would then be storing it. I'm assuming the Elastic Stack might now be able to do both, but it's just too heavy to deal with in a small setup.
A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab). Mostly looking at the memory usage to fit it on my synology. Quickwit[2] and Log-Store[3] both come with built in web interfaces that reduce the need for grafana, but neither of them do metrics.
- [1] https://vector.dev
-
Retaining Logs generated by service running in pod.
Log to stdout/stderr, collect your logs with a tool like Vector (vector.dev), and send them to something like Grafana Loki.
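As a sketch of that pipeline, assuming Vector runs in the cluster with the kubernetes_logs source (the Loki endpoint and label are placeholders):

```toml
[sources.pod_logs]
type = "kubernetes_logs"

[sinks.loki_out]
type           = "loki"
inputs         = ["pod_logs"]
endpoint       = "http://loki:3100"   # placeholder endpoint
encoding.codec = "json"

# Loki requires at least one stream label
[sinks.loki_out.labels]
app = "{{ kubernetes.pod_labels.app }}"
```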
-
Lightweight logging on RPi?
I would recommend that you run Vector as a systemd service so you don't have to worry about managing it. Here is a basic unit file to do that - https://github.com/vectordotdev/vector/blob/master/distribution/systemd/vector.service .
What are some alternatives?
podman-compose - a script to run docker-compose.yml using podman
graylog - Free and open log management
cockpit-podman - Cockpit UI for podman containers
Fluentd - Fluentd: Unified Logging Layer (project under CNCF)
traefik - The Cloud Native Application Proxy
agent - Vendor-neutral programmable observability pipelines.
logs-benchmark - Logs performance benchmark repo: Comparing Elastic, Loki and SigNoz
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
dd-trace-py - Datadog Python APM Client
OpenSearch - 🔎 Open source distributed and RESTful search engine.
opentelemetry-collector-contrib - Contrib repository for the OpenTelemetry Collector
tracing - Application level tracing for Rust.