beam
gRPC
| | beam | gRPC |
|---|---|---|
| Mentions | 30 | 200 |
| Stars | 7,445 | 40,532 |
| Growth | 1.1% | 1.2% |
| Activity | 10.0 | 9.9 |
| Latest commit | 4 days ago | about 2 hours ago |
| Language | Java | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
beam
-
Ask HN: Does (or why does) anyone use MapReduce anymore?
The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/97814.... It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.).
As for the framework called MapReduce, it isn't used much, but its descendant https://beam.apache.org very much is. Nowadays people often use "map reduce" as a shorthand for whatever batch processing system they're building on top of.
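The MapReduce model the thread refers to can be sketched in plain Python, using only the standard library. This is a toy illustration of the classic map → shuffle → reduce shape, not Beam or Hadoop code:

```python
from functools import reduce
from itertools import groupby

# A toy word count in the classic MapReduce shape:
# map -> shuffle (group by key) -> reduce.
docs = ["the cat", "the dog", "cat and dog and cat"]

# Map: emit (word, 1) pairs
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group the pairs by key
shuffled = {k: [v for _, v in g]
            for k, g in groupby(sorted(mapped), key=lambda kv: kv[0])}

# Reduce: sum the counts for each key
counts = {k: reduce(lambda a, b: a + b, vs) for k, vs in shuffled.items()}
# counts == {'and': 2, 'cat': 3, 'dog': 2, 'the': 2}
```

Frameworks like Beam generalize exactly this pattern: the "shuffle" step is what requires a distributed runtime once the data no longer fits on one machine.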
-
beam VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
Releasing Temporian, a Python library for processing temporal data, built together with Google
Flexible runtime ☁️: Temporian programs can run seamlessly in-process in Python, or on large datasets using Apache Beam.
-
Real Time Data Infra Stack
Apache Beam: Streaming framework that can run on several runners, such as Apache Flink and GCP Dataflow
-
Google Cloud Reference
Apache Beam: Batch/streaming data processing 🔗Link
-
Apache Beam Moved From Jira to GitHub Issues - You Can Too!
The Apache Beam community recently migrated to GitHub Issues after years of using Jira as our issue tracker. This post details why we made the move, how we did it, and how to decide if migrating is appropriate for your project.
-
A trick to have arbitrary infix operators in Python
Apache Beam works this way. Code sample: https://github.com/apache/beam/blob/master/examples/multi-la...
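The trick behind Beam's `pipeline | transform` syntax is overloading the bitwise-or operator. A minimal, hypothetical sketch (the `Pipe` class and transforms below are illustrative, not Beam's API):

```python
class Pipe:
    """Wraps a function so it can be applied with `|`, Beam-style."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, left):
        # Called for `left | self` when `left` doesn't handle `|` itself
        return self.fn(left)

Double = Pipe(lambda xs: [x * 2 for x in xs])
Total = Pipe(sum)

result = [1, 2, 3] | Double | Total  # reads like an infix pipeline
# result == 12
```

Because `list` does not define `__or__` for this operand, Python falls back to the right operand's `__ror__`, which is what makes the left-to-right pipeline reading possible.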
-
Jinja2 not formatting my text correctly. Any advice?
ListItem(name='Apache Beam', website='https://beam.apache.org/', category='Batch Processing', short_description='Apache Beam is an open source unified programming model to define and execute data processing pipelines, including ETL, batch and stream processing'),
-
The Data Engineer Roadmap 🗺
Apache Beam
-
Frameworks of the Future?
I asked a similar question in a different community, and the closest they came up with was the niche Apache Beam and the obligatory vague hand-waving about no-code systems. So, maybe DEV seeming to skew younger and more deliberately technical might get a better view of things? Is anybody using a "Framework of the Future" that we should know about?
gRPC
-
Reverse Engineering Protobuf Definitions from Compiled Binaries
Yes, grpc_cli tool uses essentially the same mechanism except implemented as a grpc service rather than as a stubby service. The basic principle of both is implementing the C++ proto library's DescriptorDatabase interface with cached recursive queries of (usually) the server's compiled in FileDescriptorProtos.
See also https://github.com/grpc/grpc/blob/master/doc/server-reflecti...
The primary difference between what grpc does and what stubby does is that grpc uses a stream to ensure that the reflection requests all go to the same server to avoid incompatible version skew and duplicate proto transmissions. With that said, in practice version skew is rarely a problem for grpc_cli style "issue a single RPC" usecases: even if requests do go to two or more different versions of a binary that might have incompatible proto graphs, it is very common for the request and response and RPC to all be in the same proto file so you only need to make one RPC in the first place unless you're using an extension mechanism like proto2 extensions or google.protobuf.Any.
-
Delving Deeper: Enriching Microservices with Golang with CloudWeGo
While gRPC and Apache Thrift have served the microservice architecture well, CloudWeGo's advanced features and performance metrics set it apart as a promising open source solution for the future.
-
gRPC Name Resolution & Load Balancing on Kubernetes: Everything you need to know (and probably a bit more)
The loadBalancingConfig is what we use to decide which policy to go for (round_robin in this case). This JSON representation is based on a protobuf message, so why does the name resolver return it in JSON format? The main reason is that loadBalancingConfig is a oneof field inside the proto message, so in the proto format it cannot contain values unknown to gRPC. The JSON representation does not have this requirement, so we can use a custom loadBalancingConfig.
-
Dart on the Server: Exploring Server-Side Dart Technologies in 2024
The Dart implementation of gRPC which puts mobile and HTTP/2 first. It's built and maintained by the Dart team. gRPC is a high-performance RPC (remote procedure call) framework that is optimized for efficient data transfer.
- Usando Spring Boot RestClient
-
How to Build & Deploy Scalable Microservices with NodeJS, TypeScript and Docker || A Comprehensive Guide
gRPC is a high-performance, open-source RPC (Remote Procedure Call) framework initially developed by Google. It uses Protocol Buffers for serialization and supports bidirectional streaming.
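Both features mentioned above show up directly in the service definition. A hypothetical `.proto` sketch (service and message names are invented for illustration); the `stream` keyword on both the request and response is what declares a bidirectional stream:

```proto
syntax = "proto3";

// Hypothetical service: one unary call, one bidirectional stream.
service Chat {
  rpc GetUser (UserRequest) returns (UserReply);
  rpc Converse (stream ChatMessage) returns (stream ChatMessage);
}

message UserRequest { string id = 1; }
message UserReply   { string name = 1; }
message ChatMessage { string text = 1; }
```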
-
Actual SSH over HTTPS
In general, tunneling through HTTP2 turns out to be a great choice. There is an RPC protocol built on top of HTTP2: gRPC[1].
This is because HTTP2 is great at exploiting a TCP connection to transmit and receive multiple data structures concurrently - multiplexing.
There may not be a reason to use HTTP3 however, as QUIC already provides multiplexing.
I expect that in the future most communications will be over encrypted HTTP2 and QUIC, simply because middleware creators cannot resist discriminating.
[1] <https://grpc.io>
-
SGSG (Svelte + Go + SQLite + gRPC) - open source application
gRPC
-
Level UP your RDBMS Productivity in GO
I have decided to use gRPC because it's a very simple protocol and it's very easy to use.
-
Create Production-Ready SDKs with Goa
Goa generates gRPC code for you. gRPC is an efficient alternative to plain HTTP over which you can provide your API. It requires the use of Protocol Buffers, Google's serialization format. Our repository already provides the protoc app for you, in completed_app/lib.
What are some alternatives?
ZeroMQ - ZeroMQ core engine in C++, implements ZMTP/3.1
Apache Thrift - Apache Thrift
Cap'n Proto - Cap'n Proto serialization/RPC system - core tools and C++ library
zeroRPC - zerorpc for python
rpclib - rpclib is a modern C++ msgpack-RPC server and client library
nanomsg - nanomsg library
RPyC - RPyC (Remote Python Call) - A transparent and symmetric RPC library for python
asio-grpc - Asynchronous gRPC with Asio/unified executors
bloomrpc - Former GUI client for gRPC services. No longer maintained.
Nameko - Python framework for building microservices
awesome-json-rpc - Curated list of JSON-RPC resources.
eCAL - Please visit the new repository: https://github.com/eclipse-ecal/ecal