Top 23 Go Python Projects
-
This, but here are some things I've learned to do:
* Use a .local directory under my home directory instead of ~/bin. That's a great prefix when installing from source or from a tarball at the user level, and it keeps the top level of the home directory from getting cluttered with share, lib, include, etc, and so on.
* Reach for the package manager first when installing new software, unless there is a good reason not to. It makes keeping things up to date easy, and since I use Arch, which is a rolling release, I pretty much always get the latest versions.
* If I can't get what I want from the package manager, I'll look at what is available using asdf-vm (https://asdf-vm.com/), and failing that, build from source or install from tarball.
* I don't use snap or the like.
I gave up on Windows over 20 years ago, and I can't say enough how liberating it has been. One of the nicest things is that there is a distro for almost every need (see https://distrowatch.com/). I use Arch, but your use case may point to a beginner-friendly distro such as Mint or Ubuntu, or a reproducible-install distro such as NixOS or Guix, among many others.
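For concreteness, a minimal sketch of the ~/.local setup described above (the paths are the conventional ones; adjust for your shell):

```shell
# Use ~/.local as a user-level install prefix instead of ~/bin
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin" "$PREFIX/lib" "$PREFIX/share"

# Put its bin directory on PATH (this line belongs in your shell profile)
export PATH="$PREFIX/bin:$PATH"
```

Source and tarball builds can then target the same prefix, e.g. `./configure --prefix="$HOME/.local" && make install`, with no sudo required.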
-
Project mention: Pulumi vs. Terraform: Choosing the Best Infrastructure as Code Solution | dev.to | 2025-02-10
License: Pulumi is released under the Apache 2.0 license, which means you can build products using it and sell those products to customers. Terraform, on the other hand, used to be released under the Mozilla Public License but has since changed to the Business Source License. This license still allows you to use Terraform internally, but if you want to build your own product on top of it, you're going to run into legal issues.
-
Project mention: Show HN: Perforator – cluster-wide profiling tool for large data centers | news.ycombinator.com | 2025-02-01
- Pyroscope symbolizes profiles on the agent, while Perforator symbolizes profiles offline, greatly reducing symbolization costs and agent overhead. It seems Pyroscope is heading toward the same architecture we use: https://github.com/grafana/pyroscope/pull/3799.
-
awesomo
Cool open source projects. Choose your project and get involved in Open Source development now.
-
flyte
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Data orchestration tools are key for managing data pipelines in modern workflows. Apache Airflow, Dagster, and Flyte are popular tools serving this need, but they have different purposes and follow different philosophies. Choosing the right tool for your requirements is essential for scalability and efficiency. In this blog, I will compare Apache Airflow, Dagster, and Flyte, exploring their evolution, features, and unique strengths, while sharing insights from my hands-on experience with these tools in a weather data pipeline project.
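All three orchestrators share the same underlying model: a pipeline is a set of tasks plus a dependency graph, executed in topological order. A library-free Python sketch of that idea (not the Airflow, Dagster, or Flyte API):

```python
# Toy task graph executed in dependency order. Illustrative only;
# real orchestrators add scheduling, retries, and distributed workers.
from graphlib import TopologicalSorter

def fetch() -> str:
    return "raw weather data"

def transform() -> str:
    return "cleaned data"

def load() -> str:
    return "loaded"

# task name -> names of tasks that must run first
dag = {"fetch": set(), "transform": {"fetch"}, "load": {"transform"}}
tasks = {"fetch": fetch, "transform": transform, "load": load}

def run(dag, tasks):
    results = {}
    for name in TopologicalSorter(dag).static_order():
        results[name] = tasks[name]()
    return results

results = run(dag, tasks)
```

The tools differ mainly in what they layer on top of this core: scheduling, typed data passing between tasks, observability, and infrastructure integration.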
-
dbmate – A simple, language-agnostic approach to managing database migrations.
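A dbmate migration is a plain SQL file split into up and down sections by magic comments; a minimal example (the table itself is illustrative):

```sql
-- migrate:up
CREATE TABLE users (
  id BIGSERIAL PRIMARY KEY,
  email TEXT NOT NULL UNIQUE
);

-- migrate:down
DROP TABLE users;
```

`dbmate up` applies pending migrations and `dbmate rollback` reverts the most recent one.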
-
Project mention: Running Durable Workflows in Postgres Using DBOS | news.ycombinator.com | 2024-12-10
Disclaimer: I'm a co-founder of Hatchet (https://github.com/hatchet-dev/hatchet), which is a Postgres-backed task queue that supports durable execution.
> Because a step transition is just a Postgres write (~1ms) versus an async dispatch from an external orchestrator (~100ms), it means DBOS is 25x faster than AWS Step Functions
Durable execution engines deployed as an external orchestrator will always be slower than direct DB writes, but the 1ms-versus-~100ms gap doesn't seem inherent to the orchestrator being external. In the case of Hatchet, pushing work takes ~15ms and invoking the work takes ~1ms if deployed in the same VPC, and 90% of that execution time is on the database. In the best case, an external orchestrator should take about 2x as long to write a step transition (a round-trip network call to the orchestrator plus the database write), so an ideal external orchestrator would add ~2ms of latency here.
There are also some tradeoffs to a library-only mode that aren't discussed. How would work that requires global coordination between workers behave in this model? Let's say, for example, a global rate limit -- you'd ideally want to avoid contention on rate limit rows, assuming they're stored in Postgres, but each worker attempting to acquire a rate limit simultaneously would slow down start time significantly (and place additional load on the DB). Whereas with a single external orchestrator (or leader election), you can significantly increase throughput by acquiring rate limits as part of a push-based assignment process.
The same problem of coordination arises if many workers are competing for the same work -- for example if a machine crashes while doing work, as described in the article. I'm assuming there's some kind of polling happening which uses FOR UPDATE SKIP LOCKED, which concerns me as you start to scale up the number of workers.
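That polling pattern typically looks something like the following (illustrative schema, not Hatchet's or DBOS's actual tables):

```sql
-- Claim one queued task without blocking on rows other workers hold.
UPDATE tasks
SET status = 'running', claimed_at = now()
WHERE id = (
  SELECT id
  FROM tasks
  WHERE status = 'queued'
  ORDER BY created_at
  FOR UPDATE SKIP LOCKED
  LIMIT 1
)
RETURNING id;
```

SKIP LOCKED keeps workers from serializing on the same row, but every claim is still a write plus an index scan against the same table, which is where contention shows up as worker counts grow.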
-
The key to restoring order is to isolate cloud resource details behind an abstraction. Instead of importing AWS S3 or Google Cloud Storage SDKs directly in your application code, you can use a framework like Nitric that exposes common operations—like creating an API route or storing a file—without tying you to a specific cloud provider.
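The general shape of that abstraction, sketched in plain Python (illustrative names, not Nitric's actual API):

```python
# A thin storage interface the application depends on; cloud-specific
# backends live behind it. Names here are illustrative, not Nitric's API.
from typing import Protocol

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test/stand-in backend; an S3- or GCS-backed class would
    satisfy the same interface."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def save_report(store: BlobStore, body: bytes) -> None:
    # Application code sees only the interface, never a cloud SDK.
    store.put("reports/latest", body)

store = InMemoryStore()
save_report(store, b"ok")
```

Swapping providers then means swapping the backend class, not touching application code.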
-
Project mention: RustPython: A Python Interpreter Written in Rust | news.ycombinator.com | 2024-08-02
-
bruin
Build data pipelines with SQL and Python, ingest data from different sources, add quality checks, and build end-to-end flows.
Bruin CLI is an end-to-end data pipeline tool that brings together data ingestion, data transformation with SQL and Python, and data quality in a single framework.
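The quality-check stage of such a pipeline can be pictured as a gate between transformation and publishing; a plain-Python sketch of that idea (not Bruin's actual declarative check format):

```python
# Toy data-quality gate: every check must pass before rows are published.
rows = [
    {"city": "Oslo", "temp_c": 3.5},
    {"city": "Cairo", "temp_c": 21.0},
]

checks = [
    ("not_empty", lambda rs: len(rs) > 0),
    ("no_null_city", lambda rs: all(r["city"] for r in rs)),
    ("temp_in_range", lambda rs: all(-90 <= r["temp_c"] <= 60 for r in rs)),
]

failures = [name for name, check in checks if not check(rows)]
passed = not failures
```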
-
parca-agent
eBPF-based always-on profiler auto-discovering targets in Kubernetes and systemd, with zero code changes or restarts needed!
-
iwf
iWF is a WorkflowAsCode microservice orchestration platform: a coding framework plus a service for building resilient, fault-tolerant, scalable long-running processes.
-
aqueduct
Aqueduct is no longer being maintained. Aqueduct allows you to run LLM and ML workloads on any cloud infrastructure. (by RunLLM)
-
Go Python discussion
Go Python related posts
- Top 16 DevOps Tools for 2025 (Excellent for SREs, Too!)
- Pulumi vs. Terraform: Choosing the Best Infrastructure as Code Solution
- Deploying ML projects with Argo CD
- User authentication in Go
- How I suffered my first burnout as a software developer
- Running Durable Workflows in Postgres Using DBOS
- Why Docker Compose Falls Short as Self-Hosting Scales
Index
What are some of the best open-source Python projects in Go? This list will help you:
# | Project | Stars |
---|---|---|
1 | asdf | 23,114 |
2 | Pulumi | 22,631 |
3 | sqlc | 14,490 |
4 | pyroscope | 10,387 |
5 | awesomo | 9,497 |
6 | flyte | 6,086 |
7 | dbmate | 5,774 |
8 | gaia | 5,203 |
9 | hatchet | 4,616 |
10 | shell-operator | 2,481 |
11 | fibratus | 2,280 |
12 | gopy | 2,125 |
13 | nodebook | 1,643 |
14 | nitric | 1,623 |
15 | dataframe-go | 1,209 |
16 | buildpacks | 1,015 |
17 | gpython | 907 |
18 | bruin | 903 |
19 | hulk | 851 |
20 | faas-cli | 799 |
21 | parca-agent | 597 |
22 | iwf | 541 |
23 | aqueduct | 520 |