| | rules_go | gRPC |
|---|---|---|
| Mentions | 6 | 201 |
| Stars | 1,331 | 40,775 |
| Growth | -0.2% | 0.6% |
| Activity | 9.0 | 9.9 |
| Latest commit | 10 days ago | 2 days ago |
| Language | Go | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rules_go
-
When to Use Bazel?
There’s an issue I reported (along with a proof-of-concept fix) over 4 years ago that has yet to be fixed: building a mixed-source project containing Go, C++, and C++ protocol buffers results in silently broken binaries, because rules_go will happily not forward the linker arguments that the C++ build targets (the protobuf ones, using the built-in C++ rules) declare.
See https://github.com/bazelbuild/rules_go/issues/1486
Not very confidence-inspiring when Google’s build system falls over when you combine three technologies that are used commonly throughout Google’s code base (two of which Google itself created).
If you’re Google, sure, use Bazel. Otherwise, I wouldn’t recommend it. Google will cater to their needs and their needs only — putting the code out in the open means you get the privilege of sharing in their tech debt, and if something isn’t working, you can contribute your labor to them for free.
No thanks :)
-
Calculating Go type sets is harder than you think
Bazel in theory maintains its own directory of generated code that your IDE should refer to. Back when I last used Bazel, there was a bug open to make gopls properly understand this ("go packages driver" is the search term). Nobody touched this bug for a couple years, so I gave up.
Here's the bug: https://github.com/bazelbuild/rules_go/issues/512
I basically wouldn't use Bazel with Go. Go already has a build system; Bazel is best for languages that don't ship one, like C++.
-
Buf raises $93M to deprecate REST/JSON
`proto_library` for building the `.bin` file from protos works great. Generating stubs/messages for "all" languages does not: the individual language rules don't want to implement gRPC rules, and the gRPC team doesn't want to implement rules for each language. It's a sort of deadlock. For example:
- C++: https://github.com/grpc/grpc/blob/master/bazel/cc_grpc_libra...
- Python: https://github.com/grpc/grpc/blob/master/bazel/python_rules....
- ObjC: https://github.com/grpc/grpc/blob/master/bazel/objc_grpc_lib...
- Java: https://github.com/grpc/grpc-java/blob/master/java_grpc_libr...
- Go (different semantics than all of the others): https://github.com/bazelbuild/rules_go/blob/master/proto/def...
But there's also no real cohesion within the community. The biggest effort to date has been in https://github.com/stackb/rules_proto which integrates with gazelle.
tl;dr: Low alignment results in diverging implementations that are complicated for newcomers to understand. Buf's approach is much more appealing: it's "this is the one way to do the right thing", and having it just work by detecting `proto_library` and doing all of the linting/registry stuff automagically in CI would be fantastic.
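To make the divergence concrete, here is roughly what wiring a single proto file up looks like under two of the rule sets above. This is a sketch from memory: the `greeter` target names are made up, and attribute details may have drifted between versions.

```starlark
proto_library(
    name = "greeter_proto",
    srcs = ["greeter.proto"],
)

# Go (rules_go): gRPC support is selected via a "compiler" on the
# message-generation rule itself.
go_proto_library(
    name = "greeter_go_proto",
    proto = ":greeter_proto",
    compilers = ["@io_bazel_rules_go//proto:go_grpc"],
    importpath = "example.com/greeter",
)

# C++ (grpc's own bazel rules): a separate *_grpc_library rule is
# layered on top of the generated message library.
cc_proto_library(
    name = "greeter_cc_proto",
    deps = [":greeter_proto"],
)

cc_grpc_library(
    name = "greeter_cc_grpc",
    srcs = [":greeter_proto"],
    grpc_only = True,
    deps = [":greeter_cc_proto"],
)
```

The Go rule folds gRPC generation into the message rule while the C++ rules split it out into its own target, which is exactly the "different semantics" complaint above.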
-
Why does Bazel not get more love?
This can be ugly in some languages. There's decent Go support in VS Code if you follow the copy-and-paste instructions here: https://github.com/bazelbuild/rules_go/wiki/Editor-setup
- GOPACKAGESDRIVER support for Bazel's rules_go, fixes Bazel + gopls
-
What is the preferred way to package static files (html/css/js) along with your standalone binary in 2020?
Bazel go_embed_data
gRPC
-
Golang: out-of-box backpressure handling with gRPC, proven by a Grafana dashboard
gRPC, built on HTTP/2, inherently supports flow control. The server can push updates, but it must also respect flow-control signals from the client, ensuring that it doesn't send data faster than the client can handle.
-
Reverse Engineering Protobuf Definitions from Compiled Binaries
Yes, the grpc_cli tool uses essentially the same mechanism, except implemented as a gRPC service rather than as a Stubby service. The basic principle of both is implementing the C++ proto library's DescriptorDatabase interface with cached recursive queries of (usually) the server's compiled-in FileDescriptorProtos.
See also https://github.com/grpc/grpc/blob/master/doc/server-reflecti...
The primary difference between what gRPC does and what Stubby does is that gRPC uses a stream to ensure that the reflection requests all go to the same server, avoiding incompatible version skew and duplicate proto transmissions. That said, in practice version skew is rarely a problem for grpc_cli-style "issue a single RPC" use cases: even if requests do go to two or more different versions of a binary that might have incompatible proto graphs, it is very common for the request, the response, and the RPC itself to all be defined in the same proto file, so you only need to make one RPC in the first place unless you're using an extension mechanism like proto2 extensions or google.protobuf.Any.
-
Delving Deeper: Enriching Microservices with Golang with CloudWeGo
While gRPC and Apache Thrift have served the microservice architecture well, CloudWeGo's advanced features and performance metrics set it apart as a promising open source solution for the future.
-
gRPC Name Resolution & Load Balancing on Kubernetes: Everything you need to know (and probably a bit more)
The loadBalancingConfig is what we use to decide which policy to go for (round_robin in this case). This JSON representation is based on a protobuf message, so why does the name resolver return it in JSON format? The main reason is that loadBalancingConfig is a oneof field inside the proto message, so in proto form it cannot contain values unknown to gRPC. The JSON representation does not have this restriction, so we can use a custom loadBalancingConfig.
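For reference, the service-config JSON carrying that choice is tiny. This is the standard gRPC service-config shape, with `loadBalancingConfig` as a list of policies tried in order:

```json
{
  "loadBalancingConfig": [
    { "round_robin": {} }
  ]
}
```

In grpc-go the same JSON string can also be supplied client-side via `grpc.WithDefaultServiceConfig`, which is handy when the resolver doesn't provide one.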
-
Dart on the Server: Exploring Server-Side Dart Technologies in 2024
The Dart implementation of gRPC which puts mobile and HTTP/2 first. It's built and maintained by the Dart team. gRPC is a high-performance RPC (remote procedure call) framework that is optimized for efficient data transfer.
- Using Spring Boot RestClient
-
How to Build & Deploy Scalable Microservices with NodeJS, TypeScript and Docker || A Comprehensive Guide
gRPC is a high-performance, open-source RPC (Remote Procedure Call) framework initially developed by Google. It uses Protocol Buffers for serialization and supports bidirectional streaming.
-
Actual SSH over HTTPS
In general, tunneling through HTTP/2 turns out to be a great choice. There is an RPC protocol built on top of HTTP/2: gRPC[1].
This is because HTTP/2 is great at exploiting a TCP connection to transmit and receive multiple data structures concurrently - multiplexing.
There may not be a reason to use HTTP/3, however, as QUIC already provides multiplexing.
I expect that in the future most communications will be over encrypted HTTP/2 and QUIC, simply because middleware creators cannot resist discriminating against other protocols.
[1] <https://grpc.io>
-
Why gRPC is not natively supported by Browsers
Even the https://grpc.io blog says this
-
SGSG (Svelte + Go + SQLite + gRPC) - open source application
gRPC
What are some alternatives?
go-bindata - A small utility which generates Go code from any file. Useful for embedding binary data in a Go program.
ZeroMQ - ZeroMQ core engine in C++, implements ZMTP/3.1
statik - Embed files into a Go executable
Apache Thrift - Apache Thrift
go - The Go programming language
Cap'n Proto - Cap'n Proto serialization/RPC system - core tools and C++ library
xdotool - xdotool: simulate keyboard input and mouse activity
zeroRPC - zerorpc for python
statics - :file_folder: Embeds static resources into go files for single binary compilation + works with http.FileSystem + symlinks
rpclib - rpclib is a modern C++ msgpack-RPC server and client library
buildtools - A bazel BUILD file formatter and editor
nanomsg - nanomsg library