|  | parseable | clp |
|---|---|---|
| Mentions | 26 | 2 |
| Stars | 1,705 | 716 |
| Growth | 2.4% | 0.6% |
| Activity | 9.2 | 9.3 |
| Last commit | 7 days ago | 5 days ago |
| Language | Rust | C++ |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
parseable
-
New release of Parseable [Log analytics system written in Rust] is now available
Check out the release here: https://github.com/parseablehq/parseable/releases/tag/v0.7.0
-
OpenObserve: Elasticsearch/Datadog alternative in Rust. 140x lower storage cost
How does this compare to Parseable?
https://github.com/parseablehq/parseable
First guess is that the underlying storage / query layer is pretty similar (Parquet + Datafusion), but OpenObserve has more built-in use cases?
As an aside, it's awesome that Datafusion's existence and maturity makes launching a product with scalable analytical reads 10x easier than before, and cool to see so many projects integrating it.
-
Infino - Fast and scalable service to store time series and logs - written in Rust
Another cool rust project in this space for logs: https://github.com/parseablehq/parseable. Using Arrow for the memory format makes life easier for incorporating with other tools like Grafana.
- Lightweight ELK alternative for ingesting and analyzing local logs?
-
I can't recommend serious use of an all-in-one local Grafana Loki setup
- Visualize with Grafana
https://github.com/parseablehq/parseable
(founder here)
-
Parseable - an open source log observability platform
Hello DevOps community, we've been working on https://github.com/parseablehq/parseable for a while now. Would love to get any feedback, questions etc.
- Parseable – unify log data to Parquet on S3
- Show HN: Columnar store for fast, lightweight logging
- Lightweight, high performance logging engine based on Apache Arrow & Parquet
-
Syslog server
Maybe also take a look at: https://github.com/parseablehq/parseable
clp
-
FOSS, cloud native, log storage and query engine built with Apache Arrow & Parquet, written in Rust and React.
Thoughts on integrating CLP with this infra? Not sure whether this even makes sense to try? LINK
-
Reducing logging cost by two orders of magnitude using CLP
Original CLP Paper: https://www.usenix.org/system/files/osdi21-rodrigues.pdf
Github project for CLP: https://github.com/y-scope/clp
The interesting part about the article isn't that structured data is easier to compress and store; it's that there's a relatively new way to efficiently transform unstructured logs into structured data. For those shipping unstructured logs to an observability backend, this could be a way to save significant money.
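The core idea the comment describes (and that CLP formalizes) is factoring each unstructured line into a static "log type" template plus its variable values, so the template is stored once in a dictionary and only the variables repeat per message. A hypothetical stdlib-only Python sketch, with an invented regex and helper names that are much simpler than CLP's actual variable-extraction rules:

```python
import re

# Treat integers and decimals as the variable parts of a message (a toy rule;
# CLP's real heuristics also handle identifiers, hex values, etc.).
VAR = re.compile(r"\d+(?:\.\d+)?")
PLACEHOLDER = "\x11"  # one reserved byte marks each extracted variable

def encode(line, templates):
    """Return (template_id, variables); templates maps template -> id."""
    variables = VAR.findall(line)
    template = VAR.sub(PLACEHOLDER, line)
    tid = templates.setdefault(template, len(templates))
    return tid, variables

def decode(tid, variables, templates_by_id):
    """Losslessly rebuild the original line from template + variables."""
    parts = templates_by_id[tid].split(PLACEHOLDER)
    out = parts[0]
    for var, part in zip(variables, parts[1:]):
        out += var + part
    return out

templates = {}
logs = [
    "task 12 finished in 3.4 s",
    "task 97 finished in 0.8 s",
    "task 5 finished in 12.0 s",
]
encoded = [encode(line, templates) for line in logs]
by_id = {tid: t for t, tid in templates.items()}

assert [decode(t, v, by_id) for t, v in encoded] == logs
print(len(templates))  # -> 1: all three lines share one template
```

All three messages collapse to the single template "task <var> finished in <var> s", which is why repetitive machine-generated logs shrink so dramatically: the bulky static text is deduplicated, and the remaining variable columns compress well on their own.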
What are some alternatives?
loki - Like Prometheus, but for logs.
Apache Cassandra - Mirror of Apache Cassandra
openobserve - 10x easier, 140x lower storage cost, high performance, petabyte scale - Elasticsearch/Splunk/Datadog alternative for logs, metrics, traces, RUM, Error tracking, Session replay.
Snappy - A fast compressor/decompressor
kube-ns-suspender - A k8s controller that scales up and down namespaces on-demand with an embedded friendly UI and a Prometheus exporter. Inspired by kube-downscaler.
Scylla - NoSQL data store using the seastar framework, compatible with Apache Cassandra
qryn - qryn is a polyglot, high-performance observability framework for ClickHouse. Ingest, store and analyze logs, metrics and telemetry traces from any agent supporting Loki, Prometheus, OTLP, Tempo, Elastic, InfluxDB and many more formats and query transparently using Grafana or any other compatible client.
graylog - Free and open log management
pino - super fast, all natural json logger
tracing - Application level tracing for Rust.
draco - Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.