Examples for the Dagger Python SDK
> A) Why async in the user code? Is it really necessary?
It's not a requirement, but it's simpler to default to one and mention the other. You can see an example of sync code in https://github.com/helderco/dagger-examples/blob/main/say_sy... and we'll add a guide in the docs website to explain the difference.
It's more inclusive. If you want to run dagger from an async environment (say FastAPI), you don't want to run blocking code. You can run the whole pipeline in a thread, but then you're not really taking advantage of the event loop. Doing the opposite is simpler: if you run in a sync environment (like all our examples, running from the CLI), it's much easier to just spin up an event loop with `anyio.run`.
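To illustrate that last point, here's a minimal sketch of calling an async pipeline from a plain sync script with `anyio.run`. The `pipeline` function is a hypothetical stand-in; a real one would open a Dagger connection and await container operations instead of sleeping.

```python
import anyio


async def pipeline() -> str:
    # Stand-in for an async Dagger pipeline; a real pipeline would
    # open a connection and await container operations here.
    await anyio.sleep(0)
    return "ok"


def main() -> None:
    # From a sync context (e.g. a CLI script), a single call spins up
    # an event loop, runs the coroutine to completion, and tears it down.
    result = anyio.run(pipeline)
    print(result)


if __name__ == "__main__":
    main()
```

The reverse (safely calling blocking code from inside a running event loop) takes more care, which is why defaulting to async is the more flexible starting point.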
It's more powerful. For most examples the difference is probably small unless you're using a lot of async features: just remove the async/await keywords and the event loop. But you can easily reach for concurrency when there's a benefit. While the dagger engine handles most of the parallelism and efficiency for you, some pipelines benefit from adding it at the language level too. See this example where I'm testing a library (FastAPI) against multiple Python versions: https://github.com/helderco/dagger-examples/blob/main/test_c.... It has an obvious performance benefit compared to running "synchronously": https://github.com/helderco/dagger-examples/blob/main/test_m...
Dagger has a client and a server architecture, so you're sending requests through the network. This is an especially common use case for using async.
Async Python is on the rise. More and more libraries are supporting it, more users are getting to know it, and the ecosystem still feels very much in transition. It's very hard to maintain both async and sync code: there's a lot of duplication, because you need blocking and non-blocking versions of many things like network requests, file operations, and running subprocesses. But I've made quite an effort to support both and meet you where you are. In particular, I took great care to hide the sync/async classes and methods behind common names so it's easy to switch from one to the other.
I'm very interested to know the community's adoption or preference of one vs the other. :)
Application Delivery as Code that Runs Anywhere (by dagger)
It's up to you how granular you make your CI configuration. Much of it depends on the context and how your team works.
If you've already found yourself integrating a Makefile in a CI job, and figuring out the best mapping of Make rules to CI job/step/workflow: this is exactly the same. Ultimately you're just executing a tool which happens to depend on the Dagger engine. How and when you execute it is entirely up to you.
For example, here's the GitHub Actions job we use to test the Dagger Python SDK. It executes a custom tool written in Go: https://github.com/dagger/dagger/blob/bd75d17f9625f837d7a2f9...
A fast dependency injector for Android and Java.
Confusing. I initially thought someone ported the Dagger DI framework to Python: https://dagger.dev/
An orchestration platform for the development, production, and observation of data assets.
I wondered how it related to https://dagster.io/