helm-charts vs vector

| | helm-charts | vector |
|---|---|---|
| Mentions | 98 | 96 |
| Stars | 4,647 | 16,512 |
| Growth | 0.9% | 1.5% |
| Activity | 9.7 | 9.9 |
| Latest commit | 6 days ago | 5 days ago |
| Language | Mustache | Rust |
| License | Apache License 2.0 | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
helm-charts
-
You get what you Measure: Understanding your applications health with Grafana, Loki and Prometheus
Prometheus can be deployed using the Prometheus Helm chart. This chart bundles a lot of features, such as the already mentioned Push Gateway, Alertmanager and so on. To keep this tutorial simple I will not show the full Helm chart configuration, but you can see a real example I use here.
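As a rough sketch (not from the article itself), those bundled components are toggled through the chart's values; key names vary between chart versions, so verify against the chart's own values.yaml:

```yaml
# Hypothetical values.yaml for the prometheus-community/prometheus chart;
# key names differ across chart versions.
alertmanager:
  enabled: true             # bundled Alertmanager subchart
prometheus-pushgateway:
  enabled: true             # bundled Push Gateway subchart
server:
  retention: "15d"          # how long Prometheus keeps samples
```

Applied with something like `helm install prometheus prometheus-community/prometheus -f values.yaml`.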
-
Multi-Cluster Prometheus: Scaling Metrics Across Kubernetes Clusters
Building upon Bartłomiej Płotka's insightful blog on Prometheus and its passthrough agent mode, this post dives into implementing multi-cluster Prometheus support. Notably, official support arrived in the widely used kube-prometheus-stack with its July 2023 release, making it easier to extend Prometheus monitoring across clusters.
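To make that concrete, here is a hedged sketch of the kube-prometheus-stack values involved; the remote-write URL and cluster label are placeholders, not taken from the post:

```yaml
# Each workload cluster forwards its samples to a central store via remote_write.
prometheus:
  prometheusSpec:
    externalLabels:
      cluster: "prod-us-east-1"   # placeholder: identifies this cluster in the central view
    remoteWrite:
      - url: "https://central-prometheus.example.com/api/v1/write"   # placeholder endpoint
```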
-
Hands On: Pull metrics into Kubernetes from anywhere and treat them generically with the Keptn Metrics Server
The first thing you'll need, of course, is at least one backend to store metrics. So install Prometheus now:
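The post's exact commands are not reproduced in this excerpt; assuming the prometheus-community repository, a minimal install typically looks like this sketch:

```yaml
# Install commands (placeholder release and namespace names):
#   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
#   helm repo update
#   helm install prometheus prometheus-community/prometheus -n monitoring --create-namespace -f values.yaml
# values.yaml can stay empty to accept the chart defaults; for example:
server:
  persistentVolume:
    enabled: true   # keep stored metrics across pod restarts
```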
-
Kubernetes Ingress Visibility
For request following, something like Jaeger (https://www.jaegertracing.io/), because you are talking more about tracing than logging. For plain monitoring, https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack would be the starting point; beyond that it depends. Nginx gives metrics out of the box, so you can pull in a dashboard like https://grafana.com/grafana/dashboards/14314-kubernetes-nginx-ingress-controller-nextgen-devops-nirvana/, or go full metal with something like service-mesh monitoring, which would probably fulfil most of the requirements.
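For the Nginx metrics piece specifically, a hedged sketch of the ingress-nginx chart values that expose them to a kube-prometheus-stack install (the release label is a placeholder and must match your Prometheus operator's selector):

```yaml
# ingress-nginx Helm values: expose controller metrics and let the
# Prometheus operator discover them via a ServiceMonitor.
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: kube-prometheus-stack   # placeholder: match your Helm release name
```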
-
Smart-Cash project - Adding monitoring to EKS using the Prometheus operator
kube-prometheus-stack is a Helm chart that contains several components for monitoring the Kubernetes cluster, along with Grafana dashboards to visualize the data. This option will be used in this article.
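As an illustration of what "several components" means in practice, a hedged sketch of the chart's top-level toggles (key names may vary by chart version):

```yaml
# kube-prometheus-stack values: each bundled component can be switched on or off.
grafana:
  enabled: true           # dashboard UI, preloaded with Kubernetes dashboards
alertmanager:
  enabled: true
nodeExporter:
  enabled: true           # per-node hardware and OS metrics
kubeStateMetrics:
  enabled: true           # cluster object state metrics
```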
-
K8s Monitoring Per Namespace
This one I highly recommend: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
- Is Prometheus the right tool for my use case here?
-
Do we have any Prometheus metric to get the kubernetes cluster-level CPU/Memory requests/limits?
We use kube-prometheus-stack for metrics and have added the K8s views dashboards from grafana-dashboards-kubernetes. You should check out the k8s-views-global dashboard. I believe it's just what you are looking for.
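Those dashboards are built on kube-state-metrics series, so the same numbers can also be queried directly; here is a sketch written as a Prometheus recording-rule file (the group and record names are made up, and the labels assume kube-state-metrics v2):

```yaml
groups:
  - name: cluster-capacity                       # hypothetical group name
    rules:
      - record: cluster:cpu_requests:sum         # hypothetical record names
        expr: 'sum(kube_pod_container_resource_requests{resource="cpu"})'
      - record: cluster:cpu_limits:sum
        expr: 'sum(kube_pod_container_resource_limits{resource="cpu"})'
      - record: cluster:memory_requests:sum
        expr: 'sum(kube_pod_container_resource_requests{resource="memory"})'
      - record: cluster:memory_limits:sum
        expr: 'sum(kube_pod_container_resource_limits{resource="memory"})'
```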
-
Alertmanager SMTP configuration
You should take a look at "kube-prometheus-stack". It not only includes Prometheus, node-exporter and Grafana, but also a ton of preconfigured alerts and dashboards. It will save you a lot of work!
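For the SMTP part of the question, a hedged sketch of how it is usually wired through kube-prometheus-stack values; the hosts, addresses and credentials below are placeholders:

```yaml
# kube-prometheus-stack passes alertmanager.config through as the raw
# Alertmanager configuration.
alertmanager:
  config:
    global:
      smtp_smarthost: "smtp.example.com:587"   # placeholder SMTP relay
      smtp_from: "alertmanager@example.com"
      smtp_auth_username: "alertmanager@example.com"
      smtp_auth_password: "changeme"           # placeholder; keep real credentials in a secret
    route:
      receiver: "email-oncall"
    receivers:
      - name: "email-oncall"
        email_configs:
          - to: "oncall@example.com"
```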
-
How do I find / edit Prometheus configuration after deploying it on Kubernetes?
Since there are different ways to install it, what exactly did you install? The vanilla charts, the stack, the operator? https://github.com/prometheus-community/helm-charts/tree/main/charts
vector
-
Docker Log Observability: Analyzing Container Logs in HashiCorp Nomad with Vector, Loki, and Grafana
job "vector" { datacenters = ["dc1"] # system job, runs on all nodes type = "system" group "vector" { count = 1 network { port "api" { to = 8686 } } ephemeral_disk { size = 500 sticky = true } task "vector" { driver = "docker" config { image = "timberio/vector:0.30.0-debian" ports = ["api"] volumes = ["/var/run/docker.sock:/var/run/docker.sock"] } env { VECTOR_CONFIG = "local/vector.toml" VECTOR_REQUIRE_HEALTHY = "false" } resources { cpu = 100 # 100 MHz memory = 100 # 100MB } # template with Vector's configuration template { destination = "local/vector.toml" change_mode = "signal" change_signal = "SIGHUP" # overriding the delimiters to [[ ]] to avoid conflicts with Vector's native templating, which also uses {{ }} left_delimiter = "[[" right_delimiter = "]]" data=<
- FLaNK AI Weekly 18 March 2024
- Vector: A high-performance observability data pipeline
-
Hacks to reduce cloud spend
We are doing something similar with OTEL, but we are looking at using https://vector.dev/
-
About reading logs
We don't pull logs, we forward logs to a centralized logging service.
-
Self hosted log parser
OpenSearch - Amazon's fork of Elasticsearch: https://opensearch.org/docs/latest. If you do this and have distributed log sources you would otherwise use Logstash for, bin off Logstash and use Vector (https://vector.dev/); it's better out of the box for SaaS stuff.
-
creating a centralized syslog server with Elasticsearch
I have done something similar in the past: you can send the logs through a centralized syslog server (I suggest syslog-ng) and from there ingest them into ELK. For parsing I advise using something like Vector; it is a lot faster than Logstash. Once your logs are ingested correctly, you can create your own dashboards in Kibana. If this fits your requirements, there is no need to install nginx (unless you want to use it as a reverse proxy for Kibana), PHP or MySQL.
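A hedged sketch of the parsing step in that pipeline as a Vector config; the port and endpoint are placeholders, and the option names follow recent Vector releases:

```yaml
# vector.yaml: receive syslog from the centralized syslog-ng tier and
# forward the parsed events to Elasticsearch.
sources:
  syslog_in:
    type: syslog
    mode: tcp
    address: "0.0.0.0:5514"                    # placeholder listen address
sinks:
  elastic_out:
    type: elasticsearch
    inputs: ["syslog_in"]
    endpoints: ["http://elasticsearch:9200"]   # placeholder; older Vector uses `endpoint`
```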
-
Show HN: Homelab Monitoring Setup with Grafana
I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.
Vector[1] would work as the agent, being able to collect both logs and metrics; a minimal config sketch follows below. But the issue would then be storing it. I'm assuming the Elastic Stack might be able to do both by now, but it's just too heavy to deal with in a small setup.
A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab), mostly looking at the memory usage to fit it on my Synology. Quickwit[2] and Log-Store[3] both come with built-in web interfaces that reduce the need for Grafana, but neither of them does metrics.
- [1] https://vector.dev
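A hedged sketch of what "Vector as the agent for both" could look like; the paths, ports and Loki endpoint are placeholders:

```yaml
# vector.yaml: one agent shipping both signals.
sources:
  host_stats:
    type: host_metrics               # CPU, memory, disk and network of the host
  file_logs:
    type: file
    include: ["/var/log/**/*.log"]   # placeholder path glob
sinks:
  prom_scrape:
    type: prometheus_exporter        # expose the metrics for a Prometheus scrape
    inputs: ["host_stats"]
    address: "0.0.0.0:9598"
  loki_out:
    type: loki
    inputs: ["file_logs"]
    endpoint: "http://loki:3100"     # placeholder Loki address
    encoding:
      codec: text
    labels:
      host: "homelab"                # placeholder static label
```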
-
Retaining logs generated by a service running in a pod
Log to stdout/stderr and collect your logs with a tool like vector (vector.dev) and send it to something like Grafana Loki.
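A hedged sketch of that pipeline as a Vector config; the Loki address is a placeholder, and the kubernetes_logs source assumes Vector runs in the cluster with the required RBAC:

```yaml
# vector.yaml: tail pod stdout/stderr and ship the lines to Loki.
sources:
  k8s_logs:
    type: kubernetes_logs
sinks:
  loki_out:
    type: loki
    inputs: ["k8s_logs"]
    endpoint: "http://loki.monitoring:3100"         # placeholder service address
    encoding:
      codec: json
    labels:
      namespace: "{{ kubernetes.pod_namespace }}"   # Vector label templating
```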
-
Lightweight logging on RPi?
I would recommend that you run Vector as a systemd service so you don't have to worry about managing it. Here is a basic config to do that: https://github.com/vectordotdev/vector/blob/master/distribution/systemd/vector.service
What are some alternatives?
tanka - Flexible, reusable and concise configuration for Kubernetes
graylog - Free and open log management
kube-thanos - Kubernetes specific configuration for deploying Thanos.
Fluentd - Fluentd: Unified Logging Layer (project under CNCF)
kube-prometheus - Use Prometheus to monitor Kubernetes and applications running on Kubernetes
agent - Vendor-neutral programmable observability pipelines.
kustomize - Customization of kubernetes YAML configurations
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
pihole-kubernetes - PiHole on kubernetes
OpenSearch - 🔎 Open source distributed and RESTful search engine.
pack - CLI for building apps using Cloud Native Buildpacks
tracing - Application level tracing for Rust.