exp-lazyproto vs oteps

| | exp-lazyproto | oteps |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 7 | 317 |
| Growth | - | 1.3% |
| Activity | 2.6 | 4.8 |
| Last commit | almost 2 years ago | 10 days ago |
| Language | Go | Makefile |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
exp-lazyproto
-
Cap'n Proto 1.0
This is also because Google's Protobuf implementations don't do a good job of avoiding unnecessary allocations. Gogoproto is better, and it is possible to do better still; here is an example prototype I put together for Go (even if you do not use the laziness part, it is still much faster than Google's implementation): https://github.com/splunk/exp-lazyproto
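The core idea behind the laziness mentioned above can be sketched roughly as follows. This is not exp-lazyproto's actual API (the names and types here are illustrative): the message keeps its raw wire bytes and decodes a field only on first access, memoizing the result, so fields that are never read cost nothing.

```go
package main

import "fmt"

// lazyField decodes its raw bytes only on first access and caches the result.
// Fields that are never read incur no decode cost and no extra allocations.
type lazyField struct {
	raw     []byte // undecoded wire bytes (a slice into the received buffer)
	decoded string // cached result after first access
	done    bool
}

func (f *lazyField) Value() string {
	if !f.done {
		f.decoded = string(f.raw) // decoding happens here, exactly once
		f.done = true
	}
	return f.decoded
}

// LazySpan stands in for a decoded message; its fields stay raw until touched.
type LazySpan struct {
	name lazyField
	// a real implementation would hold one lazyField (or nested lazy message)
	// per proto field, all pointing into the same buffer
}

func main() {
	buf := []byte("checkout-handler") // stand-in for protobuf wire bytes
	span := LazySpan{name: lazyField{raw: buf}}
	fmt.Println(span.name.Value()) // first access decodes
	fmt.Println(span.name.Value()) // second access hits the cache
}
```

A real lazy decoder also has to track mutation so re-serialization can copy untouched byte ranges verbatim, which is where much of the speedup comes from.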
-
Faster Protocol Buffers
Here is a OneOf Go implementation I wrote that is hopefully less ugly and significantly faster: https://github.com/splunk/exp-lazyproto#oneof-fields
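For context on why a faster oneof is possible: Google's Go generator encodes a oneof as an interface field plus one wrapper struct per member, so every set incurs a heap allocation and every read a type assertion. A flat alternative, sketched below with illustrative names (not exp-lazyproto's actual layout), stores all members inline with a type tag:

```go
package main

import "fmt"

// ValueType discriminates which oneof member is currently set.
type ValueType byte

const (
	ValueNone ValueType = iota
	ValueInt
	ValueStr
)

// AnyValue stores all oneof members inline with a type tag, instead of the
// interface-plus-wrapper-struct encoding. Setting a member is a plain field
// write (no heap allocation); reading needs no type assertion.
type AnyValue struct {
	typ ValueType
	i   int64
	s   string
}

func (v *AnyValue) SetInt(x int64)  { v.typ, v.i = ValueInt, x }
func (v *AnyValue) SetStr(x string) { v.typ, v.s = ValueStr, x }

func (v *AnyValue) IntValue() (int64, bool)  { return v.i, v.typ == ValueInt }
func (v *AnyValue) StrValue() (string, bool) { return v.s, v.typ == ValueStr }

func main() {
	var v AnyValue
	v.SetInt(42)
	if x, ok := v.IntValue(); ok {
		fmt.Println("int:", x) // int: 42
	}
	v.SetStr("hello")
	if s, ok := v.StrValue(); ok {
		fmt.Println("str:", s) // str: hello
	}
}
```

The trade-off is that the struct is as large as the sum of its members rather than one pointer, which is usually a good deal for small scalar-heavy oneofs.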
oteps
-
OpenTelemetry in 2023
Oh nice, thank you (and also solumos) for the links! It looks like oteps/pull/171 (merged June 2023) expanded and superseded the opentelemetry-proto/pull/346 PR (closed Jul 2022) [0]. The former resulted in merging OpenTelemetry Enhancement Proposal 156 [1], with some interesting results especially for 'Phase 2' where they implemented columnar storage end-to-end (see the Validation section [2]):
* For univariate time series, OTel Arrow is 2 to 2.5 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.1 to 11.2 times faster
* For multivariate time series, OTel Arrow is 3 to 7 times better in terms of bandwidth reduction ... Phase 2 has [not yet] been .. estimated but similar results are expected.
* For logs, OTel Arrow is 1.6 to 2 times better in terms of bandwidth reduction ... and the end-to-end speed is 2.3 to 4.86 times faster
* For traces, OTel Arrow is 1.7 to 2.8 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.37 to 6.16 times faster
[0]: https://github.com/open-telemetry/opentelemetry-proto/pull/3...
[1]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
[2]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
-
Grafana Phlare, open source database for continuous profiling at scale
https://github.com/open-telemetry/oteps/issues/139
It takes a lot of time and effort to bake a cross-vendor cross-language standard.
-
Faster Protocol Buffers
This. The statelessness of OTLP is by design. I did consider stateful designs with, e.g., shared-state dictionary compression, but eventually chose not to, so that intermediaries can remain stateless.
An extension to OTLP that uses shared state (and columnar encoding) to achieve more compact representation and is suitable for the last network leg in the data delivery path has been proposed and may become a reality in the future: https://github.com/open-telemetry/oteps/pull/171
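To illustrate what "shared state" means here, the sketch below shows the core of stateful dictionary compression under assumed names (this is not the proposed OTel Arrow encoding, which is columnar and far more elaborate): the sender replaces repeated strings with small integer references, and the receiver must mirror the sender's dictionary, which is exactly why intermediaries stop being stateless.

```go
package main

import "fmt"

// Token is what actually travels on the wire: a dictionary index, plus the
// literal string only on its first occurrence.
type Token struct {
	Ref   uint32 // dictionary index
	IsNew bool   // true only the first time a string is sent
	New   string // the literal, present only when IsNew
}

// DictEncoder is the sender-side shared state.
type DictEncoder struct {
	ids map[string]uint32
}

func NewDictEncoder() *DictEncoder { return &DictEncoder{ids: map[string]uint32{}} }

func (e *DictEncoder) Encode(s string) Token {
	if id, ok := e.ids[s]; ok {
		return Token{Ref: id} // repeat sends cost only a small integer
	}
	id := uint32(len(e.ids))
	e.ids[s] = id
	return Token{Ref: id, IsNew: true, New: s} // first send carries the literal
}

// DictDecoder mirrors the encoder's dictionary on the receiving side; if the
// two ever diverge (e.g. after a reconnect), decoding breaks.
type DictDecoder struct {
	strs []string
}

func (d *DictDecoder) Decode(t Token) string {
	if t.IsNew {
		d.strs = append(d.strs, t.New)
	}
	return d.strs[t.Ref]
}

func main() {
	enc, dec := NewDictEncoder(), &DictDecoder{}
	for _, s := range []string{"service.name", "service.name", "http.method"} {
		fmt.Println(dec.Decode(enc.Encode(s)))
	}
}
```

Telemetry attribute keys and values repeat heavily across spans, which is why this class of encoding pays off on the last network leg despite the synchronization burden.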
What are some alternatives?
tempest
zipkin-api - Zipkin's language independent model and HTTP Api Definitions
protobuf-flatbuffer-benchmark
b3-propagation - Repository that describes and sometimes implements B3 propagation
c-capnprotoc
odigos - Distributed tracing without code changes. 🚀 Instantly monitor any application using OpenTelemetry and eBPF
ClickHouse - ClickHouse® is a free analytics DBMS for big data
community - OpenTelemetry community content
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
terraform-aws-jaeger - Terraform module for Jaeger
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing