jaeger
signoz
| | jaeger | signoz |
| --- | --- | --- |
| Mentions | 94 | 310 |
| Stars | 19,370 | 16,811 |
| Growth | 1.3% | 2.8% |
| Activity | 9.7 | 9.9 |
| Latest commit | 1 day ago | 7 days ago |
| Language | Go | TypeScript |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jaeger
-
Observability with OpenTelemetry, Jaeger and Rails
Jaeger maps the flow of requests and data as they traverse a distributed system. These requests may make calls to multiple services, which may introduce their own delays or errors. https://www.jaegertracing.io/
-
Show HN: An open source performance monitoring tool
As engineers at past startups, we often had to debug slow queries, poor load times, inconsistent errors, etc. While tools like Jaeger [2] helped us inspect server-side performance, we had no way to tie user events to the traces we were inspecting. In other words, although we had an idea of which API route was slow, there wasn't much visibility into the actual bottleneck.
This is where our performance product comes in: we’re rethinking a tracing/performance tool that focuses on bridging the gap between the client and server.
What’s unique about our approach is that we lean heavily into creating traces from the frontend. For example, if you’re using our Next.js SDK, we automatically connect browser HTTP requests with server-side code execution, all from the perspective of a user. We find this much more powerful because you can understand what part of your frontend codebase causes a given trace to occur. There’s an example here [3].
From an instrumentation perspective, we've built our SDKs on top of OTel, so you can create custom spans to expand Highlight-created traces in server routes; these transparently roll up into the flame graph you see in our UI. You can also send us raw OTel traces and manually set up the client-server connection if you want [4]. Here's an example of what a trace looks like with a database integration using our Golang GORM SDK, triggered by a frontend GraphQL query [5] [6].
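For readers unfamiliar with what "custom spans on top of OTel" looks like in practice, here is a minimal sketch using the vanilla @opentelemetry/api package rather than Highlight's SDK; the tracer name and attribute are illustrative:

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Any tracer from the globally registered provider will do; spans
// created here nest under whatever trace is currently active.
const tracer = trace.getTracer("checkout-service");

export async function chargeCard(orderId: string): Promise<void> {
  // startActiveSpan makes this span the parent of anything created
  // inside the callback (e.g. auto-instrumented DB or HTTP calls).
  await tracer.startActiveSpan("charge-card", async (span) => {
    try {
      span.setAttribute("order.id", orderId); // illustrative attribute
      // ... call the payment provider here ...
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```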
In terms of how it's built, we continue to rely heavily on ClickHouse as our time-series storage engine. Since traces also require querying by ID for specific groups of spans (more akin to an OLTP database), we've leveraged ClickHouse materialized views to make these operations efficient (described here [7]).
To try it out, you can spin up the project with our self-hosted docs [8] or use our cloud offering at app.highlight.io. The entire stack runs in Docker via a Compose file, including an OpenTelemetry collector for data ingestion. You'll need to point your SDK to export data to it by setting the relevant OTLP endpoint configuration (i.e., the environment variable OTEL_EXPORTER_OTLP_LOGS_ENDPOINT [9]).
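As a rough illustration of that last step: the OTLP exporters in the OTel Node.js SDK honor those environment variables automatically, so an explicit URL is only needed as an override. A sketch, assuming the standard @opentelemetry/exporter-logs-otlp-http package:

```typescript
import { OTLPLogExporter } from "@opentelemetry/exporter-logs-otlp-http";

// With OTEL_EXPORTER_OTLP_LOGS_ENDPOINT set in the environment,
// no url is needed here; passing one explicitly would override it.
const logExporter = new OTLPLogExporter();

// Equivalent explicit form, pointing at the compose stack's collector:
// new OTLPLogExporter({ url: "http://localhost:4318/v1/logs" });
```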
Overall, we’d really appreciate feedback on what we’re building here. We’re also all ears if anyone has opinions on what they’d like to see in a product like this!
[1] https://github.com/highlight/highlight/blob/main/LICENSE
[2] https://www.jaegertracing.io
[3] https://app.highlight.io/1383/sessions/COu90Th4Qc3PVYTXbx9Xe...
[4] https://www.highlight.io/docs/getting-started/native-opentel...
[5] https://static.highlight.io/assets/docs/gorm.png
[6] https://github.com/highlight/highlight/blob/1fc9487a676409f1...
[7] https://highlight.io/blog/clickhouse-materialized-views
[8] https://www.highlight.io/docs/getting-started/self-host/self...
[9] https://opentelemetry.io/docs/concepts/sdk-configuration/otl...
-
Kubernetes Ingress Visibility
For following requests, something like Jaeger (https://www.jaegertracing.io/) fits, because you are talking more about tracing than logging. For plain monitoring, https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack would be the starting point; then it depends. Nginx exposes metrics out of the box, and you can pull in a dashboard like https://grafana.com/grafana/dashboards/14314-kubernetes-nginx-ingress-controller-nextgen-devops-nirvana/, or go full metal with something like service-mesh monitoring, which would probably fulfil most of the requirements.
-
Migrating to OpenTelemetry
Have you checked out Jaeger [1]? It is lightweight enough for a personal project, but featureful enough to really help "turn on the lightbulb" with other engineers to show them the difference between logging/monitoring and tracing.
-
The Road to GraphQL At Enterprise Scale
From the perspective of building out GraphQL infrastructure, the interesting direction is "finding": how do you find the problem, and how do you find the bottleneck in the system? A Distributed Tracing System (DTS) helps answer these questions. Distributed tracing is a method of observing requests as they propagate through distributed environments. In our scenario, a request passes through dozens of subgraphs, a gateway, and a transport layer. Several tools can trace the whole lifecycle of a request through the system, e.g. Jaeger, Zipkin, or products like New Relic that provide a DTS as part of the offering.
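To make the mechanics concrete: in OTel-based setups, the gateway forwards the trace context to each subgraph in request headers, so the subgraph's spans join the gateway's trace. A TypeScript sketch (the subgraph URL is hypothetical):

```typescript
import { context, propagation } from "@opentelemetry/api";

// At the gateway: inject the active trace context into the outgoing
// request headers (W3C traceparent) so the subgraph continues the
// same trace rather than starting a new one.
async function callSubgraph(query: string): Promise<Response> {
  const headers: Record<string, string> = { "content-type": "application/json" };
  propagation.inject(context.active(), headers);
  return fetch("http://products-subgraph/graphql", { // hypothetical subgraph URL
    method: "POST",
    headers,
    body: JSON.stringify({ query }),
  });
}
```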
-
OpenTelemetry Exporters - Types and Configuration Steps
Jaeger is an open-source, distributed tracing system that monitors and troubleshoots the flow of requests through complex, microservices-based applications, providing a comprehensive view of system interactions.
-
Fault Tolerance in Distributed Systems: Strategies and Case Studies
However, ensuring fault tolerance in distributed systems is not at all easy. These systems are complex, with multiple nodes or components working together. A failure in one node can cascade across the system if not addressed in time. Moreover, the inherently distributed nature of these systems can make it challenging to pinpoint the exact location and cause of a fault; that is why modern systems rely heavily on distributed tracing solutions, pioneered by Google's Dapper and now widely available through Jaeger and OpenTracing. Still, fault tolerance is not just about addressing failures but about predicting and mitigating potential risks before they escalate.
-
Observability in Action Part 3: Enhancing Your Codebase with OpenTelemetry
In this article, we'll use Honeycomb.io as our tracing backend. While there are other tools on the market, some of which can run on your local machine (e.g., Jaeger), I chose Honeycomb because of its complementary tools, which offer improved monitoring of the service and insights into its behavior.
-
Building for Failure
The best way to do this is with tracing tools: a paid product such as Honeycomb, your own instance of the open-source Jaeger offering, or perhaps Encore's built-in tracing system.
-
Distributed Tracing and OpenTelemetry Guide
In this example, I will create 3 Node.js services (shipping, notification, and courier) using Amplication, add traces to all services, and show how to analyze trace data using Jaeger.
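The article walks through the full setup; as a rough sketch, the per-service tracing bootstrap with the OTel Node SDK typically looks like the snippet below. The package names are the standard OTel ones, and the endpoint assumes a local Jaeger all-in-one container with OTLP ingestion enabled:

```typescript
// tracing.ts - loaded before the service code in each of the three
// services (shipping, notification, courier).
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  serviceName: "shipping", // one name per service
  traceExporter: new OTLPTraceExporter({
    // assumed local Jaeger all-in-one accepting OTLP over HTTP
    url: "http://localhost:4318/v1/traces",
  }),
  // auto-instruments http, express, database clients, etc.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```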
signoz
-
Show HN: OneUptime – open-source Datadog Alternative
You should also check out SigNoz [1]; we are an open-core alternative to Datadog, built natively on OpenTelemetry. We also have a cloud product if you don't want to host it yourself.
-
Indexing one petabyte of logs per day with Quickwit
You might want to have a look at SigNoz [1] as well. We have also published some performance benchmarks against Elastic and Loki [2], and we have some cool features like a logs pipeline for manipulating logs before ingestion.
-
Open-Source Observability – SigNoz
-
Tools used by the top 1% of Platform Engineers and their Commercial Open Source Alternatives
Check SigNoz's repo on GitHub
-
Show HN: Quickwit – OSS Alternative to Elasticsearch, Splunk, Datadog
SigNoz maintainer here.
We also have traces, metrics, and logs in a single application, which makes correlation across them much easier. From what I can understand from the Quickwit website, they use Grafana and Jaeger for the UI.
Here's our GitHub repo if you want to check it out: https://github.com/signoz/signoz
-
Sentry new TOS to use data to train AI with no opt-out
Using users' private data with no opt-out option is unethical.
If anyone is looking for self-hosted alternatives, they should try SigNoz: https://github.com/SigNoz/signoz
-
Top 11 New Relic Alternatives & Competitors
SigNoz is a great New Relic alternative that is open-source and provides three signals in a single pane of glass. You can monitor logs, metrics, and traces and correlate signals for better insights into application performance.
-
Share your DevOps setups
If anyone wants to check the project, here's our github repo - https://github.com/signoz/signoz
-
Amazon EKS Monitoring with OpenTelemetry [Step By Step Guide]
You need a backend to which you can send the collected data for monitoring and visualization. SigNoz is an OpenTelemetry-native APM that is well-suited for visualizing OpenTelemetry data.
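For a sense of what "sending the collected data" looks like from an instrumented Node.js workload, here is a hedged sketch exporting metrics over OTLP/HTTP; the in-cluster collector service name and port are assumptions based on a typical SigNoz install and vary with the Helm release name:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http";
import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics";

// SigNoz bundles an OpenTelemetry collector that listens on the
// standard OTLP ports (4318 for HTTP); address is an assumption.
const sdk = new NodeSDK({
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: "http://signoz-otel-collector:4318/v1/metrics",
    }),
  }),
});

sdk.start();
```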
-
Spring Boot Monitoring with Open-Source Tools
Once the data is collected, it needs to be sent to a backend. That’s where SigNoz comes into the picture. SigNoz is an open-source OpenTelemetry-native APM that provides logs, metrics and traces under a single pane of glass.
What are some alternatives?
Sentry - Developer-first error tracking and performance monitoring
skywalking - APM, Application Performance Monitoring System
prometheus - The Prometheus monitoring system and time series database.
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
Pinpoint - APM (Application Performance Management) tool for large-scale distributed systems.
zipkin - Zipkin is a distributed tracing system
fluent-bit - Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
hypertrace - An open source distributed tracing & observability platform
apm-server - APM Server