metallb
loki
| | metallb | loki |
|---|---|---|
| Mentions | 78 | 80 |
| Stars | 6,629 | 22,213 |
| Stars growth (monthly) | 2.0% | 3.7% |
| Activity | 9.4 | 9.9 |
| Last commit | 1 day ago | 1 day ago |
| Language | Go | Go |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
metallb
-
Self hosted kubernetes
Hey guys, I want to share a guide I'm pretty proud of about setting up Kubernetes with https://kubespray.io/#/ and https://metallb.universe.tf/ so you can host it yourself. Most people spinning up Kubernetes opt for k3s, get stuck choosing between all the options, or can't set up external IPs for their services; these tools eliminate those problems.
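For reference, a minimal MetalLB layer-2 setup comes down to two resources: an address pool and an L2 advertisement. A sketch (the pool name and address range below are placeholders; pick a free range on your own LAN):

```yaml
# Hypothetical pool name and address range -- adjust to an unused
# range on your local network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# Announce the pool's addresses via ARP/NDP (layer-2 mode).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this applied, any Service of type LoadBalancer in the cluster gets an external IP from the pool.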
- Deploy web app in port 80 using kubernetes
-
How to load balance highly available bare metal Kubernetes cluster control plane nodes?
Have a closer look at MetalLB.
-
Trouble with RKE2 HA Setup: Part 2
To avoid that, you can use a combination of haproxy and keepalived, or an enterprise-grade load balancer like the ones from F5 or Citrix. Besides that, you can also work with https://kube-vip.io or https://metallb.universe.tf.
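The haproxy half of that combination is just a TCP proxy in front of the kube-apiserver. A sketch, assuming three control-plane nodes at hypothetical addresses 10.0.0.11-13 (pair this with keepalived to float a virtual IP across the haproxy hosts):

```
# Minimal haproxy.cfg fragment for an HA control plane.
# Node addresses are placeholders; TCP mode is required because
# the apiserver speaks TLS end to end.
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
```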
-
Kubernetes and feeling defeated
Not sure if klipper is usable in a cluster with multiple nodes, as it binds to one port only. You may want to use MetalLB instead: https://metallb.universe.tf/
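Once MetalLB is installed, no per-node port binding is involved: an ordinary Service of type LoadBalancer is enough, and MetalLB assigns it an external IP from its pool. A sketch (the names, selector, and ports are placeholders):

```yaml
# Hypothetical Service; MetalLB fills in status.loadBalancer
# with an external IP from its configured address pool.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```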
-
Cool stuff to deploy for a project ideas
Then deploy MetalLB https://metallb.universe.tf/
- Load balance ingress for baremetal
-
Own kubernetes cluster
What issue do you see with the load balancer? For self-hosted clusters, you can use MetalLB, for example, to get a single outward-facing IP that fails over to another node (keeping the same IP) if a node dies.
-
PaperLB: A Kubernetes Network Load Balancer Implementation
Quoting from their docs:
-
libvirt-k8s-provisioner - Ansible and Terraform to build a cluster from scratch in less than 10 minutes on KVM - Updated for 1.26
MetalLB to manage bare-metal LoadBalancer services - WIP - only L2 configuration can be set up via the playbook.
loki
- Loki 3.0 Released
-
List of your reverse proxied services
I also needed to make a small patch to Promtail to make this work: https://github.com/grafana/loki/pull/10256
-
About reading logs
We don't pull logs; we forward them to a centralized logging service.
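In the Loki ecosystem, that forwarding agent is typically Promtail: it tails local files and pushes them to a central Loki endpoint. A sketch of a minimal config (the Loki URL and file paths are placeholders):

```yaml
# Hypothetical Promtail config: tail /var/log and push to a central Loki.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml   # where Promtail tracks read offsets

clients:
  - url: http://loki.example.internal:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```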
-
loki VS openobserve - a user suggested alternative
2 projects | 30 Aug 2023
-
Logs monitoring with Loki, Node.js and Fastify.js
Over the past few months, I've been spending a lot of time creating dashboards on Grafana using Loki for MyUnisoft (the company I work for).
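Dashboards like these are built from LogQL queries. Two illustrative examples, with hypothetical label names (`app`, `env`):

```logql
# Show only error lines from prod, reformatted via the JSON parser:
{app="api", env="prod"} |= "error" | json | line_format "{{.message}}"

# Error rate over 5-minute windows, suitable for a Grafana graph panel:
sum(rate({app="api", env="prod"} |= "error" [5m]))
```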
-
OpenObserve: Open source Elasticsearch alternative in Rust for logs. 140x lower storage cost
For log systems you generally don't migrate data; logs lose value over time. What you want to do is start ingesting data into the new system (OpenObserve in this case); the data in the old system will slowly become stale, and then you can retire it. However, if you do need to export logs, there is no straightforward way in Loki to do it. You could run a script to query Loki and export the results to a file. I found this thread with a sample script: https://github.com/grafana/loki/issues/409
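Such an export script boils down to two pieces: building a `query_range` request and flattening the streamed response into plain lines. A sketch of both helpers (the base URL and label selector are caller-supplied placeholders; Loki's `query_range` endpoint expects nanosecond Unix timestamps):

```python
import json
from urllib.parse import urlencode


def query_range_url(base_url, query, start_ns, end_ns, limit=5000):
    """Build a Loki /loki/api/v1/query_range URL for one export pass.

    start_ns/end_ns are nanosecond Unix timestamps; page through larger
    windows by advancing start_ns past the last returned timestamp.
    """
    params = urlencode({
        "query": query,
        "start": start_ns,
        "end": end_ns,
        "limit": limit,
        "direction": "forward",
    })
    return f"{base_url}/loki/api/v1/query_range?{params}"


def extract_lines(response_body):
    """Flatten a query_range JSON response into (timestamp_ns, line) pairs."""
    payload = json.loads(response_body)
    lines = []
    for stream in payload["data"]["result"]:
        for ts, line in stream["values"]:
            lines.append((int(ts), line))
    lines.sort()  # merge multiple streams into chronological order
    return lines
```

Fetching each URL (with `urllib.request` or `curl`) and appending the extracted lines to a file gives a crude but workable export loop.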
-
Config files of snaps?
That snap is woefully out of date. The upstream repo was recently updated to 2.8.2, but the snap stable channel has 2.4.1 from 18 months ago. https://github.com/grafana/loki/releases/tag/v2.8.2
-
i need to visualize all logs from remote dir
Loki
- Loki Helm charts that use DynamoDB
-
I can't recommend serious use of an all-in-one local Grafana Loki setup
I installed Promtail a few weeks back and ran into this bug, which had been outstanding for months: https://github.com/grafana/loki/issues/8663 (a fix had been written but had not been released):
Due to a buffering issue, Loki would exit on a configuration error without printing any error message at all.
There is definitely something weird about how the project is run.
What are some alternatives?
kube-vip - Kubernetes Control Plane Virtual IP and Load-Balancer
ClickHouse - ClickHouse® is a free analytics DBMS for big data
calico - Cloud native networking and network security
fluent-bit - Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
ingress-nginx - Ingress-NGINX Controller for Kubernetes
Zabbix - Real-time monitoring of IT components and services, such as networks, servers, VMs, applications and the cloud.
external-dns - Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database
cert-manager - Automatically provision and manage TLS certificates in Kubernetes
ElastiFlow - Network flow analytics (Netflow, sFlow and IPFIX) with the Elastic Stack
rancher - Complete container management platform
loki-multi-tenant-proxy - Grafana Loki multi-tenant Proxy. Needed to deploy Grafana Loki in a multi-tenant way