Apache Orc
Protobuf
| | Apache Orc | Protobuf |
| --- | --- | --- |
| Mentions | 4 | 171 |
| Stars | 654 | 63,657 |
| Growth | 0.9% | 1.1% |
| Activity | 9.4 | 10.0 |
| Latest commit | 4 days ago | 2 days ago |
| Language | Java | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Orc
- Java Serialization with Protocol Buffers
The information can be stored in a database or as files, serialized in a standard format and with a schema agreed with your Data Engineering team. Depending on your information and requirements, it can be as simple as CSV, XML or JSON, or Big Data formats such as Parquet, Avro, ORC, Arrow, or message serialization formats like Protocol Buffers, FlatBuffers, MessagePack, Thrift, or Cap'n Proto.
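As a minimal sketch of the "simple" end of that spectrum, here is the same record serialized to JSON and CSV with the Python standard library (the field names are invented for illustration):

```python
import csv
import io
import json

# A hypothetical record to serialize.
record = {"id": 7, "name": "sensor-a", "reading": 21.5}

# JSON: self-describing, schema implicit in the keys.
as_json = json.dumps(record)

# CSV: positional, so the schema (column order) must be agreed on separately.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "reading"])
writer.writeheader()
writer.writerow(record)
as_csv = buf.getvalue()

assert json.loads(as_json)["name"] == "sensor-a"
assert as_csv.splitlines()[0] == "id,name,reading"
```

The Big Data and message formats listed above make the same trade-offs explicit: a declared schema, a compact binary layout, and tooling for schema evolution.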
- Personal data of 120,000 Russian servicemen fighting in Ukraine made public
- AWS EMR Cost Optimization Guide
Data formatting is another place to make gains. When dealing with huge amounts of data, finding the data you need can take up a significant amount of your compute time. Apache Parquet and Apache ORC are columnar data formats optimized for analytics that pre-aggregate metadata about columns. If your EMR queries column intensive data like sum, max, or count, you can see significant speed improvements by reformatting data like CSVs into one of these columnar formats.
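The idea can be illustrated without Parquet or ORC themselves. In the sketch below (plain Python, invented field names), the same records are stored row-wise and column-wise: aggregating one field from the columnar layout touches only that field's values, and per-column metadata such as min/max can be precomputed, which is what lets these formats skip data entirely:

```python
# Hypothetical dataset: 1,000 records with two fields.
rows = [{"user": f"u{i}", "bytes": i * 10} for i in range(1000)]

# Row-oriented: every whole record must be visited to read one field.
row_sum = sum(r["bytes"] for r in rows)

# Column-oriented: each field's values are stored contiguously,
# with per-column statistics (min/max) precomputed, as Parquet/ORC do.
columns = {
    "user": [r["user"] for r in rows],
    "bytes": [r["bytes"] for r in rows],
}
col_stats = {"bytes": {"min": min(columns["bytes"]), "max": max(columns["bytes"])}}

# The aggregation now scans only the "bytes" column.
col_sum = sum(columns["bytes"])

assert row_sum == col_sum == 4995000
assert col_stats["bytes"]["max"] == 9990
```

In a real columnar file the statistics let the engine prune whole row groups before reading them, which is where the large speedups for sum/max/count queries come from.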
- Apache Hudi - The Streaming Data Lake Platform
The following stack captures layers of software components that make up Hudi, with each layer depending on and drawing strength from the layer below. Typically, data lake users write data out once using an open file format like Apache Parquet/ORC stored on top of extremely scalable cloud storage or distributed file systems. Hudi provides a self-managing data plane to ingest, transform and manage this data, in a way that unlocks incremental data processing on them.
Protobuf
- Reverse Engineering Protobuf Definitions from Compiled Binaries
For at least 4 years, protobuf has had decent support for self-describing messages (very similar to Avro) as well as reflection.
https://github.com/protocolbuffers/protobuf/blob/main/src/go...
Ex-Googlers trying to make do on the cheap will just create a union of all their messages and include the message def in a self-describing message pattern. Super-sensitive network I/O can elide the message def (empty buffer), and for any RecordIO clone, file compression takes care of the definition.
Definitely useful to be able to dig out old defs, but the protobuf maintainers have, surprisingly, added useful features so you don't have to.
Bonus points, though, for extracting the protobuf defs that e.g. Apple bakes into their binaries.
- Show HN: AuthWin – Authenticator App for Windows
- Create Production-Ready SDKs With gRPC Gateway
gRPC Gateway is a protoc plugin that reads gRPC service definitions and generates a reverse proxy server that translates a RESTful JSON API into gRPC.
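The REST-to-gRPC mapping is declared with `google.api.http` annotations directly in the service definition. The fragment below is a minimal sketch (the service and message names are invented); gRPC Gateway reads the annotation and generates the reverse-proxy code:

```proto
syntax = "proto3";

import "google/api/annotations.proto";

service EchoService {
  // gRPC Gateway maps POST /v1/echo with a JSON body onto this method;
  // body: "*" means the whole JSON object populates EchoRequest.
  rpc Echo(EchoRequest) returns (EchoResponse) {
    option (google.api.http) = {
      post: "/v1/echo"
      body: "*"
    };
  }
}

message EchoRequest { string message = 1; }
message EchoResponse { string message = 1; }
```

Running protoc with both the gRPC and the gateway plugins then produces the gRPC stubs and the JSON reverse proxy from the same source of truth.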
- Create Production-Ready SDKs with Goa
To use more recent versions of protoc in future applications, you can download them from the Protobuf repository.
- Roll your own auth with Rust and Protobuf
Use the Protobuf CLI protoc and the plugin protoc-gen-tonic.
- Add extra stuff to a “standard” encoding? Sure, why not
> didn’t find any standard for separating protobuf messages
The fact that protobufs are not self-delimiting is an endless source of frustration, but I know of 2 standards:
- SerializeDelimited* is part of the protobuf library: https://github.com/protocolbuffers/protobuf/blob/main/src/go...
- Riegeli is "a file format for storing a sequence of string records, typically serialized protocol buffers. It supports dense compression, fast decoding, seeking, detection and optional skipping of data corruption, filtering of proto message fields for even faster decoding, and parallel encoding": https://github.com/google/riegeli
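The `SerializeDelimited*` convention is just a varint length prefix before each message. The sketch below reimplements that framing in plain Python over arbitrary byte payloads (standing in for serialized protobuf messages, so no protobuf dependency is needed); it is an illustration of the format, not the library's own code:

```python
import io

def write_varint(out, n):
    # Protobuf base-128 varint: 7 bits per byte, MSB set on all but the last.
    while True:
        b = n & 0x7F
        n >>= 7
        out.write(bytes([b | 0x80]) if n else bytes([b]))
        if not n:
            return

def read_varint(inp):
    shift = result = 0
    while True:
        b = inp.read(1)[0]
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result
        shift += 7

def write_delimited(out, payload):
    # Length prefix, then the raw message bytes.
    write_varint(out, len(payload))
    out.write(payload)

def read_delimited(inp):
    return inp.read(read_varint(inp))

# Two messages framed back-to-back in one stream, then read back.
buf = io.BytesIO()
for msg in (b"first message", b"second"):
    write_delimited(buf, msg)
buf.seek(0)
assert read_delimited(buf) == b"first message"
assert read_delimited(buf) == b"second"
```

Riegeli layers much more on top of this (compression, corruption detection, seeking), but the basic problem both solve is the same: bare protobuf messages carry no boundary of their own.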
- Block YouTube Ads on AppleTV by Decrypting and Stripping Ads from Protobuf
It looks like it is in fact universal. Just glancing at the code here, it looks like the tool searches any arbitrary file for bytes that look like encoded protobuf descriptors, specifically looking for bytes that are plausibly the beginning of a FileDescriptorProto message defined here:
https://github.com/protocolbuffers/protobuf/blob/main/src/go...
This takes advantage of the fact that such descriptors are commonly compiled into programs that use protobuf. The descriptors are usually embedded as constant byte arrays. That said, not all protobuf implementations embed the descriptors and those that do often have an option to inhibit such embedding (at the expense of losing some dynamic introspection features).
- How to learn to use protoc in 21 easily infuriating steps
- What's involved in protobuf encoding?
Not much. You can check the source code in https://github.com/protocolbuffers/protobuf. For example, for serializing a boolean in C#: https://github.com/protocolbuffers/protobuf/blob/main/csharp/src/Google.Protobuf/WritingPrimitives.cs#L165. Strings and objects are a bit more complicated, but it is all about turning the data into its byte representation.
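As a sketch of how little is involved, here is the wire encoding of a single varint-typed field (which is how bools are encoded) in plain Python. Each field on the wire is a key, `(field_number << 3) | wire_type`, followed by the value:

```python
WIRETYPE_VARINT = 0  # wire type used for bool, int32, int64, enum, ...

def encode_varint(n):
    # Base-128 varint, least-significant group first.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)
        if not n:
            return bytes(out)

def encode_bool_field(field_number, value):
    key = (field_number << 3) | WIRETYPE_VARINT
    return encode_varint(key) + encode_varint(1 if value else 0)

# A message like `message M { bool flag = 1; }` with flag = true
# encodes field 1 as the two bytes 0x08 0x01.
assert encode_bool_field(1, True) == b"\x08\x01"
```

Strings and nested messages use a length-delimited wire type (key, then a varint length, then the bytes), but the principle is the same throughout.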
- Trying To Solve The Confusion of Choice Between gRPC vs REST🕵
One of the key features of gRPC is the Protobuf .proto file (essentially a contract between the two communicating code components). This file format and the protobuf compiler are mature, and the protoc compiler generates a client implementation directly from it.
What are some alternatives?
Apache Parquet - Apache Parquet
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
Apache Avro - Apache Avro is a data serialization system.
SBE - Simple Binary Encoding (SBE) - High Performance Message Codec
hudi - Upserts, Deletes And Incremental Processing on Big Data.
MessagePack - MessagePack implementation for C and C++ / msgpack.org[C/C++]
Apache Thrift - Apache Thrift
cereal - A C++11 library for serialization
tape - A lightning fast, transactional, file-based FIFO for Android and Java.
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
Bond - Bond is a cross-platform framework for working with schematized data. It supports cross-language de/serialization and powerful generic mechanisms for efficiently manipulating data. Bond is broadly used at Microsoft in high scale services.