| | log-analytics-starter-kit | qryn |
|---|---|---|
| Mentions | 1 | 10 |
| Stars | 52 | 947 |
| Growth | - | 4.1% |
| Activity | 3.8 | 9.6 |
| Latest commit | 6 months ago | 9 days ago |
| Language | TypeScript | JavaScript |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
log-analytics-starter-kit
- Making a Homegrown ClickHouse Log for $20/mo
This is awesome! We did something similar for our internal logging at Tinybird (which itself is built on ClickHouse), and I recently turned it into a (very simplified) Starter Kit that others can fork and use in their projects. https://github.com/tinybirdco/log-analytics-starter-kit
Rather than writing to files and using an agent like Beats to tail them, it sends logs directly from the application code with a basic POST request. Obviously, you could just as happily tail the file and forward the logs, but this approach reduces the tooling footprint and works in serverless environments.
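The direct-from-code approach fits in a few lines. Here is a minimal, hypothetical TypeScript sketch assuming Tinybird's Events API as the sink — the event schema, the `logs` data source name, and the token parameter are illustrative, not taken from the Starter Kit itself:

```typescript
// Illustrative log event shape; define whatever fields your table needs.
interface LogEvent {
  timestamp: string;
  level: "debug" | "info" | "warn" | "error";
  service: string;
  message: string;
}

// The Events API ingests newline-delimited JSON (NDJSON):
// one JSON object per line, no trailing commas.
function buildLogPayload(events: LogEvent[]): string {
  return events.map((e) => JSON.stringify(e)).join("\n");
}

// Ship logs straight from application code with a plain POST —
// no file, no tailing agent, works fine in serverless runtimes.
async function shipLogs(events: LogEvent[], token: string): Promise<void> {
  await fetch("https://api.tinybird.co/v0/events?name=logs", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/x-ndjson",
    },
    body: buildLogPayload(events),
  });
}
```

In practice you would batch events and flush on a timer rather than POSTing per log line, to keep request overhead down.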
qryn
- Show HN: Pyroscope/Phlare drop-in compatible replacement with OLAP storage
- Coinbase (?) had a $65M Datadog bill per Datadog's Q1 earnings call
Thanks for mentioning qryn! We are a non-corporate alternative and offer full ingestion compatibility with Datadog (including Cloudflare emitters, etc.), Loki, Prometheus, Tempo, Elastic & others for both on-prem (https://qryn.dev) and Cloud (https://qryn.cloud) deployments, without the killer price tag.
Note: in qryn, S3/R2 storage is as close to /dev/null as it gets!
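As a concrete example of that ingestion compatibility, a Loki-style push to qryn is just a JSON POST. A minimal sketch, assuming a local qryn listening on the Loki default port 3100 — the endpoint host and label names are illustrative:

```typescript
// Shape of a stream in the Loki push API payload.
interface LokiStream {
  stream: Record<string, string>; // label set
  values: [string, string][];     // [nanosecond epoch, log line] pairs
}

// Build a Loki-compatible push body for one stream.
function buildLokiPush(
  labels: Record<string, string>,
  lines: string[],
): { streams: LokiStream[] } {
  // Loki timestamps are nanosecond epochs, sent as strings.
  const now = `${Date.now()}000000`;
  return {
    streams: [
      { stream: labels, values: lines.map((l) => [now, l] as [string, string]) },
    ],
  };
}

// POST it to qryn's Loki-compatible ingest endpoint.
async function pushToQryn(
  labels: Record<string, string>,
  lines: string[],
): Promise<void> {
  await fetch("http://localhost:3100/loki/api/v1/push", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildLokiPush(labels, lines)),
  });
}
```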
- What I like using Grafana Loki for (and where I avoid it)
qryn and Vector get along very well! We use it all the time for testing and developing qryn and qryn.cloud, and most of our users love it! But we're just as compatible with Loki/LogQL, the Influx protocol for metrics and logs, Elastic Bulk, Prometheus for metrics, OpenTelemetry for everything... and more coming!
Feel free to open an issue on our repository if you end up trying it and/or would like us to help out!
https://qryn.dev
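Because qryn speaks the Loki push protocol, wiring Vector to it uses Vector's stock `loki` sink. A hypothetical config fragment — the file paths, port, and labels are placeholders for your own setup:

```toml
# Tail application log files as a source.
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

# Point Vector's Loki sink at a local qryn instance.
[sinks.qryn]
type = "loki"
inputs = ["app_logs"]
endpoint = "http://localhost:3100"
encoding.codec = "json"
labels.job = "vector"
```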
- Making a Homegrown ClickHouse Log for $20/mo
- Building the world’s fastest website analytics (2021)
> *it would be nice to use ClickHouse as a Prometheus backend*
Well... that's already possible and it works great! As you might know, https://qryn.dev turns ClickHouse into a powerful Prometheus *remote_write* backend. The Go/cloud version supports full PromQL queries against ClickHouse transparently (the JS/Node version transpiles to LogQL instead), and performance-wise it's well on par with Prometheus, Mimir and VictoriaMetrics in our internal benchmarks (counting ClickHouse as part of the resource set), with millions of inserts/s and broad client compatibility. Same for logs (LogQL) and traces (Tempo).
Disclaimer: I work on qryn
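On the Prometheus side, pointing at such a backend is a one-stanza change in `prometheus.yml`. A hedged sketch — the host, port, and exact remote_write path here are assumptions, so check the qryn docs for the route your version exposes:

```yaml
# Ship scraped samples to qryn's ClickHouse-backed remote_write endpoint.
remote_write:
  - url: "http://qryn:3100/api/v1/prom/remote/write"
```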
- Think Prometheus, but for logs (not metrics). Simple, efficient, fast log store
Thanks for mentioning our project! qryn (formerly cloki) is currently more focused on the polyglot factor, trying to unify logs, metrics and telemetry on a single stateless platform that is easy to scale without hundreds of services and moving parts. At this stage, it's a lightweight Grafana Cloud alternative requiring just ClickHouse - no sidecar databases, Redis, or plugins needed, and no new query languages or rules to learn. Latest info is at https://qryn.dev
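The "just ClickHouse" claim translates to a very small deployment. A hypothetical docker-compose sketch — the image names are real, but the environment variable names are assumptions based on qryn's docs and may differ between versions:

```yaml
version: "3"
services:
  clickhouse:
    image: clickhouse/clickhouse-server

  qryn:
    image: qxip/qryn
    ports:
      - "3100:3100"          # Loki/Prometheus/Tempo-compatible APIs
    environment:
      CLICKHOUSE_SERVER: clickhouse
      CLICKHOUSE_PORT: "8123"
      CLICKHOUSE_AUTH: "default:"   # user:password
    depends_on:
      - clickhouse
```

Two containers total: the stateless qryn API in front, ClickHouse as the only stateful piece.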
- Show HN: Distributed Tracing Using OpenTelemetry and ClickHouse
cLoki can be used to read metrics out of any ClickHouse table, so it should work fine.
We also just introduced experimental support for ingesting OTLP/Zipkin spans and a Tempo-compatible API in cLoki, and we're looking for testers to validate this feature:
https://github.com/lmangani/cLoki/wiki/Tempo-Tracing#clickho...
Internally, trace spans are stored as tagged JSON logs, meaning they are available from both the Loki and Tempo APIs and can be used from pretty much any visualization, too!
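Since spans land as tagged JSON logs, they can be read back through the ordinary Loki query API as well as the Tempo one. A hypothetical TypeScript sketch — the `type="traces"` label and port are assumptions for illustration, not cLoki's actual schema:

```typescript
// Build a Loki query_range URL that filters span-logs by trace id.
function buildSpanQueryUrl(base: string, traceId: string): string {
  // LogQL: select the span stream, then line-filter on the trace id.
  const query = `{type="traces"} |= "${traceId}"`;
  const params = new URLSearchParams({ query, limit: "100" });
  return `${base}/loki/api/v1/query_range?${params}`;
}

// Fetch the matching span-logs from a local qryn/cLoki instance.
async function fetchSpans(traceId: string): Promise<unknown> {
  const res = await fetch(buildSpanQueryUrl("http://localhost:3100", traceId));
  return res.json();
}
```

The same data being queryable as logs is what makes it visualization-agnostic: anything that can render a Loki response can show the spans.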
- I Don't Think Elasticsearch Is a Good Logging System
There's also cLoki. It's a new project that puts a Loki gateway over a ClickHouse backend store. We're looking at it and plan a presentation from the author(s) at the next ClickHouse SF Bay Area Meetup.
https://github.com/lmangani/cLoki
What are some alternatives?
zeek-clickhouse
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs
vector - A high-performance observability data pipeline.
signoz - SigNoz is an open-source observability platform native to OpenTelemetry with logs, traces and metrics in a single application. An open-source alternative to DataDog, NewRelic, etc. 🔥 🖥. 👉 Open source Application Performance Monitoring (APM) & Observability tool
ClickHouse - ClickHouse® is a free analytics DBMS for big data
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
helm-charts
grafana-prtg - A PRTG Datasource plugin for Grafana
elasticsearch-py - Official Python client for Elasticsearch
clickhouse-operator - Altinity Kubernetes Operator for ClickHouse creates, configures and manages ClickHouse clusters running on Kubernetes
lib - Autocode CLI and standard library tooling
parseable - Parseable is a log analytics system platform for modern, cloud native workloads