oteps
| | oteps | protobuf-flatbuffer-benchmark |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 316 | 0 |
| Growth | 1.9% | - |
| Activity | 5.3 | 0.0 |
| Last commit | 10 days ago | over 1 year ago |
| Language | Makefile | Starlark |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
oteps
-
OpenTelemetry in 2023
Oh nice, thank you (and also solumos) for the links! It looks like oteps/pull/171 (merged June 2023) expanded and superseded the opentelemetry-proto/pull/346 PR (closed Jul 2022) [0]. The former resulted in merging OpenTelemetry Enhancement Proposal 156 [1], with some interesting results especially for 'Phase 2' where they implemented columnar storage end-to-end (see the Validation section [2]):
* For univariate time series, OTel Arrow is 2 to 2.5 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.1 to 11.2 times faster
* For multivariate time series, OTel Arrow is 3 to 7 times better in terms of bandwidth reduction ... Phase 2 has [not yet] been ... estimated but similar results are expected.
* For logs, OTel Arrow is 1.6 to 2 times better in terms of bandwidth reduction ... and the end-to-end speed is 2.3 to 4.86 times faster
* For traces, OTel Arrow is 1.7 to 2.8 times better in terms of bandwidth reduction ... and the end-to-end speed is 3.37 to 6.16 times faster
[0]: https://github.com/open-telemetry/opentelemetry-proto/pull/3...
[1]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
[2]: https://github.com/open-telemetry/oteps/blob/main/text/0156-...
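The numbers above come from the OTEP-156 validation section; the mechanism behind them is easier to see with a toy sketch. The example below is an illustration under loud assumptions (JSON plus zlib, invented field names), not OTel Arrow itself: grouping values by column stores each field name once and places similar values next to each other, which is where the bandwidth reduction comes from.

```python
import json
import zlib

# Hypothetical data (not the OTEP-156 benchmark): 1,000 points of a
# univariate series with repeated resource attributes.
points = [
    {"ts": 1_700_000_000 + i, "value": float(i % 50),
     "host": "web-01", "region": "us-east-1"}
    for i in range(1000)
]

# Row-oriented: one self-describing record per point, so field names
# and attribute strings repeat for every point.
row_bytes = json.dumps(points).encode()

# Column-oriented: one array per field, names stored once -- loosely
# analogous to an Arrow-style record batch.
columns = {
    "ts": [p["ts"] for p in points],
    "value": [p["value"] for p in points],
    "host": [p["host"] for p in points],
    "region": [p["region"] for p in points],
}
col_bytes = json.dumps(columns).encode()

row_z = len(zlib.compress(row_bytes))
col_z = len(zlib.compress(col_bytes))
print(f"row:    {len(row_bytes)} raw / {row_z} compressed bytes")
print(f"column: {len(col_bytes)} raw / {col_z} compressed bytes")
```

The ratios OTel Arrow reports are much larger than this toy can show, since it also uses dictionary and delta encodings on top of the columnar layout.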
-
Grafana Phlare, open source database for continuous profiling at scale
https://github.com/open-telemetry/oteps/issues/139
It takes a lot of time and effort to bake a cross-vendor cross-language standard.
-
Faster Protocol Buffers
This. The statelessness of the OTLP is by design. I did consider stateful designs with e.g. shared-state dictionary compression but eventually chose not to, so that the intermediaries can remain stateless.
An extension to OTLP that uses shared state (and columnar encoding) to achieve more compact representation and is suitable for the last network leg in the data delivery path has been proposed and may become a reality in the future: https://github.com/open-telemetry/oteps/pull/171
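As a rough illustration of the trade-off described above (a toy scheme, not the wire format proposed in oteps/pull/171): a sender and receiver that share a growing string dictionary can replace repeated attribute strings with small integer references, at the cost that anyone in the middle must hold that table to make sense of the stream.

```python
# Toy stateful dictionary compression: the first occurrence of a string
# crosses the wire in full; later occurrences are integer back-references.

class DictEncoder:
    def __init__(self):
        self.index = {}  # string -> id (shared state on the sender)

    def encode(self, s):
        if s in self.index:
            return ("ref", self.index[s])       # cheap back-reference
        self.index[s] = len(self.index)
        return ("def", self.index[s], s)        # full string, first time only

class DictDecoder:
    def __init__(self):
        self.table = []  # id -> string (shared state on the receiver)

    def decode(self, item):
        if item[0] == "def":
            self.table.append(item[2])
            return item[2]
        return self.table[item[1]]

enc, dec = DictEncoder(), DictDecoder()
stream = ["service.name", "checkout", "service.name", "service.name"]
wire = [enc.encode(s) for s in stream]
decoded = [dec.decode(i) for i in wire]
print(wire)
```

This is exactly what breaks stateless intermediaries: a proxy that joins mid-stream has no table, so every `("ref", n)` it sees is undecodable — which is why plain OTLP resends the strings.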
protobuf-flatbuffer-benchmark
-
Faster Protocol Buffers
OK, but I just want readers to be aware that the whole idea that it could take five minutes to parse a million protobufs is completely preposterous. I reimplemented their benchmark just now and it runs at roughly 8 million protos per second, orders of magnitude faster than they state, and I didn't even do anything to optimize it.
https://github.com/jwbee/protobuf-flatbuffer-benchmark
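For context on how a messages-per-second figure like that is measured (this toy is not the linked benchmark, and it parses a hand-rolled varint format rather than real protobuf), a minimal sketch:

```python
import time

def encode_varint(n):
    # Standard base-128 varint: 7 payload bits per byte, MSB = continuation.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def decode_varint(buf, pos):
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

# One "message" = two varint fields (field tags omitted for brevity);
# a million of them back to back.
msg = encode_varint(300) + encode_varint(1_700_000_000)
buf = msg * 1_000_000

start = time.perf_counter()
pos = count = 0
n = len(buf)
while pos < n:
    _, pos = decode_varint(buf, pos)
    _, pos = decode_varint(buf, pos)
    count += 1
elapsed = time.perf_counter() - start
print(f"{count} messages in {elapsed:.2f}s "
      f"({count / elapsed / 1e6:.2f} M msgs/s)")
```

Even interpreted Python gets through a million tiny messages in seconds, which is the commenter's point: "five minutes per million" is off by orders of magnitude for any reasonable parser.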
What are some alternatives?
zipkin-api - Zipkin's language independent model and HTTP Api Definitions
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
b3-propagation - Repository that describes and sometimes implements B3 propagation
exp-lazyproto - Experimental fast implementation of Protobufs in Go
odigos - Distributed tracing without code changes. 🚀 Instantly monitor any application using OpenTelemetry and eBPF
community - OpenTelemetry community content
terraform-aws-jaeger - Terraform module for Jaeger
openobserve - 🚀 10x easier, 🚀 140x lower storage cost, 🚀 high performance, 🚀 petabyte scale - Elasticsearch/Splunk/Datadog alternative for 🚀 (logs, metrics, traces, RUM, Error tracking, Session replay).
opentelemetry-lambda - Create your own Lambda Layer in each OTel language using this starter code. Add the Lambda Layer to your Lambda Function to get tracing with OpenTelemetry.
semantic-conventions - Defines standards for generating consistent, accessible telemetry across a variety of domains
tempo - Grafana Tempo is a high volume, minimal dependency distributed tracing backend.