cockpit-podman
opentelemetry-collector-contrib
| | cockpit-podman | opentelemetry-collector-contrib |
|---|---|---|
| Mentions | 4 | 43 |
| Stars | 390 | 2,546 |
| Growth | 5.1% | 5.8% |
| Activity | 9.5 | 10.0 |
| Latest commit | 2 days ago | 4 days ago |
| Language | JavaScript | Go |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cockpit-podman
-
Monitoring and visibility of rootless containers run by different users on a single server
Hey, I have a homelab NUC server where I run different services as rootless Podman pods and containers, each run by a dedicated user, e.g. a nextcloud pod run by the nextcloud user, gitea by gitea, znc by znc, and more. The next step was to monitor these services. My first try was the cockpit-podman feature, but in the UI I only see my own user's containers and the rootful ones, and both lists were empty. I cannot switch to another user because those users are not able to log in to Cockpit. Now I'm testing Prometheus and podman-exporter, which seems OK, but again I only see containers if I run the prometheus-podman-exporter service as the user who runs the Podman containers (e.g. as the nextcloud user). Of course I could run this service in parallel as each dedicated user on a different port and add them all as targets to the Prometheus scrape config, but for obvious reasons I want to avoid that. Is there a gentler way to monitor my pods? I know these namespaces are one of the main features of Podman, but I didn't consider this before my deploys :)
- Cockpit Project
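As a sketch of the per-user workaround the post describes (one prometheus-podman-exporter instance per dedicated user, on hypothetical ports), the Prometheus side would look roughly like:

```yaml
# prometheus.yml (sketch): ports are hypothetical; each dedicated user
# runs its own prometheus-podman-exporter instance on a distinct port.
scrape_configs:
  - job_name: "podman-nextcloud"
    static_configs:
      - targets: ["localhost:9882"]
  - job_name: "podman-gitea"
    static_configs:
      - targets: ["localhost:9883"]
  - job_name: "podman-znc"
    static_configs:
      - targets: ["localhost:9884"]
```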
-
Front-end GUI for Podman.
Cockpit has a podman module (cockpit-podman)
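On Fedora-family systems the module ships as a regular package; a minimal sketch (package and unit names assume Fedora, and may differ per distro):

```sh
sudo dnf install cockpit cockpit-podman     # Cockpit plus the Podman page
sudo systemctl enable --now cockpit.socket  # serve the web UI on port 9090
```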
-
Podman: A tool for managing OCI containers and pods
Tested podman to replace docker (the CLI) on a Mac yesterday. Most of it works fine. They now have an easy way to set up a VM with `podman machine`: https://podman.io/getting-started/installation#macos
If you want the management GUI, install cockpit: https://github.com/cockpit-project/cockpit-podman
Try podman, you'll be impressed.
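For reference, the `podman machine` flow mentioned above is just two commands (defaults may vary by Podman version):

```sh
podman machine init   # create the Linux VM that backs podman on macOS
podman machine start  # boot the VM; the podman CLI now works as on Linux
```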
opentelemetry-collector-contrib
- OpenTelemetry at Scale: what buffer can we use behind it to buffer the data?
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
OpenTelemetry Collector Anti-Patterns
There are two official distributions of the OpenTelemetry Collector: Core and Contrib.
-
OpenTelemetry Journey #00 - Introduction to OpenTelemetry
Maybe you are asking yourself: "But I have already instrumented my applications with vendor-specific libraries and I'm using their agents and monitoring tools; why should I switch to OpenTelemetry?". The answer is: maybe you're right, and I don't want to push you to change how you do observability in your applications; that's a hard and complex task. But if you are starting from scratch, or you are not happy with your current observability infrastructure, OpenTelemetry is the best choice, independently of the backend telemetry tool you use. I invite you to take a look at the number of exporters available in the collector contrib section; if your backend tracing tool is not there, it probably already supports the OpenTelemetry Protocol (OTLP) and you will be able to use the core collector. Otherwise, you should consider changing your backend telemetry tool, or contributing a new exporter to the project.
-
Building an Observability Stack with Docker
To receive OTLP data, the standard otlp receiver is set up to accept data over HTTP or gRPC. To forward traces and metrics, a batch processor is defined to accumulate data and send it every 100 milliseconds. Connections are then set up to Tempo (an otlp/tempo exporter, using the standard otlp exporter) and to Prometheus (a prometheus exporter, from the contrib distribution). A debug exporter was also added to log information to the container's standard output and show what the collector is doing.
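A minimal sketch of the configuration that paragraph describes, assuming hypothetical endpoints (tempo:4317 for Tempo, port 8889 for the Prometheus scrape endpoint):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 100ms  # flush accumulated data every 100 ms

exporters:
  otlp/tempo:
    endpoint: tempo:4317  # hypothetical Tempo address
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889  # hypothetical scrape endpoint for Prometheus
  debug:
    verbosity: normal  # logs collector activity to the container's stdout

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo, debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus, debug]
```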
-
Spotlight: Sentry for Development
Thanks for the reply. Would the Spotlight sidecar possibly be able to run independently and consume spans emitted by the Sentry exporter[0], or some other similar flow beyond strictly exporting directly from the Sentry SDK provided by Spotlight?
This tooling looks really cool and I'd love to play around with it, but I'm already pretty entrenched in OTel, funneling data through the collector, and I don't want to introduce too much additional overhead for devs.
[0] https://github.com/open-telemetry/opentelemetry-collector-co...
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
A list of all metric definitions can be found here.
-
Spring Boot Monitoring with Open-Source Tools
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8889"]
        - job_name: "jvm-metrics"
          scrape_interval: 10s
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: ["localhost:8090"]

processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system]
    # Before the system detector, include ec2 for AWS, gcp for GCP and azure for Azure.
    # Using the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os]
      # alternatively, use [dns, os] to set the FQDN as host.name with os as fallback

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "ingest.{region}.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token":
  logging:
    verbosity: normal

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
-
Migrating to OpenTelemetry
If you are using the prometheus exporter, you can use the transform processor to pull specific resource attributes into metric labels.
This has the advantage that you get only the specific attributes you want, avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
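A minimal sketch of that approach, assuming host.name is the resource attribute you want as a metric label:

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # Copy the resource attribute onto each datapoint so the
          # prometheus exporter emits it as a regular metric label.
          - set(attributes["host.name"], resource.attributes["host.name"])
```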
-
Exploring the OpenTelemetry Collector
OpenTelemetry Operators
What are some alternatives?
traefik - The Cloud Native Application Proxy
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
podman-compose - a script to run docker-compose.yml using podman
signoz - SigNoz is an open-source observability platform native to OpenTelemetry with logs, traces and metrics in a single application. An open-source alternative to DataDog, NewRelic, etc. 🔥 🖥. 👉 Open source Application Performance Monitoring (APM) & Observability tool
machine
toolbox - Tool for interactive command line environments on Linux
singularity - Singularity has been renamed to Apptainer as part of us moving the project to the Linux Foundation. This repo has been persisted as a snapshot right before the changes.
serilog-sinks-seq - A Serilog sink that writes events to the Seq structured log server
gns3-server - GNS3 server