prometheus-explorer vs qryn

| | prometheus-explorer | qryn |
|---|---|---|
| Mentions | 1 | 11 |
| Stars | 27 | 1,290 |
| Growth | - | 3.6% |
| Activity | 0.0 | 9.3 |
| Last commit | over 2 years ago | 21 days ago |
| Language | JavaScript | JavaScript |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Quickwit Joins Datadog
I'm not the person you asked -- and I also want to be transparent that I only PoC-ed it and due to external circumstances didn't get it all the way out to production -- but I really like how https://github.com/metrico/qryn (AGPLv3) thinks about the world. It is, like SigNoz, unified (logs, metrics, traces) but it actually implements several of the common endpoint schemes allowing it to pretend to be "your favorite tool" which plausibly helps any integration story <https://github.com/metrico/qryn#%EF%B8%8F-query> and <https://github.com/metrico/qryn#-vendors-compatibility>
I was going to take advantage of Clickhouse using S3 as warm-to-cold storage since my mental model is that most logs, metrics, and traces are written and not read https://clickhouse.com/docs/en/integrations/s3#configuring-s...
I believe one could do that with SigNoz, too, so I don't mean to imply that trickery was qryn-specific, just that I didn't want to get into the "constantly resizing io3 PVC" game.
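The warm-to-cold S3 setup described above can be sketched as a ClickHouse storage policy. This is a minimal sketch, assuming a local `default` disk for hot data; the bucket URL, credentials, and the tier/policy names are placeholders, not anything from the comment:

```xml
<!-- /etc/clickhouse-server/config.d/s3_storage.xml (sketch) -->
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_cold>
        <type>s3</type>
        <!-- hypothetical bucket; replace with your own -->
        <endpoint>https://my-bucket.s3.amazonaws.com/data/</endpoint>
        <access_key_id>...</access_key_id>
        <secret_access_key>...</secret_access_key>
      </s3_cold>
    </disks>
    <policies>
      <hot_cold>
        <volumes>
          <!-- new parts land on local disk first, then move to S3 -->
          <hot>
            <disk>default</disk>
            <move_factor>0.1</move_factor>
          </hot>
          <cold>
            <disk>s3_cold</disk>
          </cold>
        </volumes>
      </hot_cold>
    </policies>
  </storage_configuration>
</clickhouse>
```

A table then opts in with `SETTINGS storage_policy = 'hot_cold'`, so rarely-read data drains to S3 instead of growing a local volume.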
- Show HN: Pyroscope/Phlare drop-in compatible replacement with OLAP storage
- Coinbase (?) had a $65M Datadog bill per Datadog's Q1 earnings call
Thanks for mentioning qryn! We are a non-corporate alternative and feature full ingestion compatibility with DataDog (including Cloudflare emitters, etc), Loki, Prometheus, Tempo, Elastic & others for both on-prem (https://qryn.dev) and Cloud (https://qryn.cloud) deployments, without the killer price tag.
Note: in qryn s3/r2 are as close to /dev/null as it gets!
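As a concrete illustration of the Loki-side compatibility mentioned above, here is a sketch of a Loki-style push payload that a Loki-compatible ingest API accepts. The host, port, job label, and log line are hypothetical placeholders; `/loki/api/v1/push` is the standard Loki push path:

```javascript
// Build a Loki-compatible push payload (sketch; labels/host are made up).
const payload = {
  streams: [
    {
      stream: { job: "demo", level: "info" }, // label set for the stream
      values: [
        // [ <unix timestamp in nanoseconds, as a string>, <log line> ]
        [`${Date.now()}000000`, "hello from qryn"],
      ],
    },
  ],
};

const body = JSON.stringify(payload);
// To actually ship it (Node 18+ has a global fetch):
// await fetch("http://qryn.example:3100/loki/api/v1/push", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
```

The same payload works against any Loki-speaking backend, which is the point of the compatibility story.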
- What I like using Grafana Loki for (and where I avoid it)
qryn and Vector get along very well! We use it all the time for testing and developing qryn and qryn.cloud, and most of our users love it! But we're just as compatible with Loki/LogQL, the Influx protocol for metrics and logs, Elastic Bulk, Prometheus for metrics, OpenTelemetry for everything... and more coming!
Feel free to open an issue on our repository if you end up trying it and/or would like us to help out!
https://qryn.dev
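The Vector pairing described above can be sketched in a `vector.toml`. This assumes qryn's Loki-compatible listener on its default port 3100; the hostname and label are placeholders, and `demo_logs` is just a built-in test source:

```toml
# vector.toml sketch: ship logs to qryn through Vector's `loki` sink
[sources.demo_logs]
type = "demo_logs"
format = "json"

[sinks.qryn]
type = "loki"
inputs = ["demo_logs"]
endpoint = "http://qryn.example:3100"   # placeholder host
encoding.codec = "json"
labels.job = "vector"
```

Because qryn speaks the Loki push protocol, no qryn-specific sink is needed; the stock `loki` sink is enough.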
- Making a Homegrown ClickHouse Log for $20/mo
- Building the world’s fastest website analytics (2021)
> *it would be nice to use ClickHouse as a Prometheus backend*
Well... that's already possible, and it works great! As you might know, https://qryn.dev turns ClickHouse into a powerful Prometheus *remote_write* backend. The Go/cloud version supports full PromQL queries against ClickHouse transparently (the JS/Node version transpiles to LogQL instead), and performance-wise it's well on par with Prometheus, Mimir and VictoriaMetrics in our internal benchmarks (including ClickHouse as part of the resource set), with millions of inserts/s and broad client compatibility. Same for logs (LogQL) and traces (Tempo).
Disclaimer: I work on qryn
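Wiring Prometheus to a remote_write backend like the one described is a one-stanza change in `prometheus.yml`. This is a sketch: the host is a placeholder, and the exact write path is an assumption to be verified against the qryn documentation for your version:

```yaml
# prometheus.yml sketch: forward samples to a qryn instance.
# Host and path below are assumptions, not taken from the comment.
remote_write:
  - url: http://qryn.example:3100/api/v1/prom/remote/write
```

Prometheus keeps scraping as usual and streams every sample to the configured URL, which is what lets ClickHouse act as the long-term store.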
- Think Prometheus, but for logs (not metrics). Simple, efficient, fast log store
Thanks for mentioning our project! qryn (formerly cloki) is currently more focused on the polyglot factor, trying to unify logs, metrics and telemetry on a single stateless platform that is easy to scale without hundreds of services and moving parts. At this stage, it's a lightweight Grafana Cloud alternative requiring just ClickHouse: no sidecar databases, Redis, or plugins needed, and no new query languages or rules to learn. The latest info is at https://qryn.dev
- Show HN: Distributed Tracing Using OpenTelemetry and ClickHouse
cloki can be used to read metrics out of any ClickHouse table, so it should work fine.
We also just introduced experimental support for ingesting OTLP/Zipkin spans and a Tempo-compatible API in cloki, and we are looking for testers to validate this feature:
https://github.com/lmangani/cLoki/wiki/Tempo-Tracing#clickho...
Internally trace spans are stored as tagged JSON logs, meaning they are available from both Loki and Tempo APIs and can be used from pretty much any visualization, too!
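To make the "tagged JSON span" idea concrete, here is a sketch of a span in the standard Zipkin v2 JSON shape, which is what the experimental Zipkin ingestion above would consume. The IDs, service name, and tags are made up for illustration, and the exact POST path is version-dependent (see the wiki page linked above):

```javascript
// A Zipkin v2 span (sketch; all identifiers here are hypothetical).
const span = {
  traceId: "d6e9329d67b6146b",          // 16 or 32 hex chars
  id: "1234567890abcdef",               // 16 hex chars
  name: "GET /items",
  timestamp: Date.now() * 1000,         // microseconds since epoch
  duration: 2500,                       // microseconds
  localEndpoint: { serviceName: "demo-service" },
  tags: { "http.status_code": "200" },
};

// Zipkin-style endpoints accept an array of spans.
const body = JSON.stringify([span]);
```

Stored as a tagged JSON log line, the same record can be pulled back through either the Loki or the Tempo API, as the comment notes.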
- I Don't Think Elasticsearch Is a Good Logging System
There's also cLoki. It's a new project that puts a Loki gateway over a ClickHouse backend store. We're looking at it and plan a presentation from the author(s) at the next ClickHouse SF Bay Area Meetup.
https://github.com/lmangani/cLoki