| | aws-sdk-go | metaflow |
|---|---|---|
| Mentions | 34 | 24 |
| Stars | 8,548 | 7,607 |
| Growth | 0.2% | 1.2% |
| Activity | 9.4 | 9.2 |
| Latest commit | 1 day ago | 2 days ago |
| Language | Go | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-sdk-go
- my first go project, a CLI application to store IP addresses
- Go 1.21 will (probably) download newer toolchains on demand by default
I'm... really not sure I agree with this, from a philosophical point of view. It feels like this is making "eh, we'll just upgrade our Go version next quarter" too easy; ultimately some responsibility toward updating your application's Go version to work with what new dependencies require should fall on Us, the application developers. Sure, we're bad at it. Everyone's lived through running years-old versions of some toolchain. But I think this just makes the problem worse, not better.
It's compounded by the problem that, when you're setting up a new library, the `go` directive in the mod file defaults to your current toolchain, most likely a very recent one. It would take a not-insignificant effort on the library author's part to change that to assert the true minimum version of Go required, based on libraries and language features and such. That's an effort most devs won't take on.
I'd also guess that many developers, up to this point if not indefinitely because education is hard, interpreted that `go` directive to mean more of "the version of Go this was built with", not necessarily "the version of Go minimally required". There are really major libraries (kubernetes/client-go [1]) which assert a minimum Go version of 1.20, the latest version (see, for comparison, the aws-sdk, which specifies a more reasonable go1.11 [2]). I haven't, you know, fully audited these libraries, but 1.20 wasn't exactly a major release with huge language and library changes; do they really need 1.20? If devs haven't traditionally operated in a world where keeping this value super-current results in actually significant downstream costs in network bandwidth (go1.20 is 100mb!) and CI runtime, do we have confidence that the community will adapt? There are millions of Go packages out there.
Or will a future version of Go patch a security update, not backport it more than one version or so, and libraries will have to specify the newest `go` directive version, because of manifest security scanning and policy and whatever? Like, yeah, I get the rosy worldview of "your minimum version encodes required language and library features", but it's not obvious to me that this is how this field is, or even will be, used.
Just a LOT of tertiary costs to this change which I hope the team has thought through.
[1] https://github.com/kubernetes/client-go/blob/master/go.mod#L...
[2] https://github.com/aws/aws-sdk-go/blob/main/go.mod
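The `go` directive under discussion lives in the module file. A minimal go.mod sketch (module path and `require` version are made up for illustration) asserting a deliberately low minimum rather than whatever toolchain the author happens to run:

```
module example.com/mylib

// Assert the true minimum Go version the code needs,
// not the toolchain it was last built with.
go 1.11

require github.com/aws/aws-sdk-go v1.44.0
```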
- How to get better on golang
- Send an Email through AWS SES with GoLang
"This email was sent with Amazon SES using the AWS SDK for Go."
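The quoted body comes from the SDK's SES example, where the literal is split across lines with Go string concatenation. A self-contained sketch of just that part (the real example would place the result in a `ses.SendEmailInput` and call `SendEmail`, which needs live AWS credentials, so that step is only described in comments):

```go
package main

import "fmt"

// textBody rebuilds the message body the way the SES example does:
// Go has no implicit literal joining, so long strings are split
// across lines with "+". With aws-sdk-go this string would go into
// Message.Body.Text of a ses.SendEmailInput before calling SendEmail.
func textBody() string {
	return "This email was sent with Amazon SES using the " +
		"AWS SDK for Go."
}

func main() {
	fmt.Println(textBody())
}
```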
- Looking for library recommendations: Django -> Golang port
I figured I'd ask the community for some recommendations for the following capabilities that the Django + Python stack is giving me at the moment:
1. Amazon SES mailing (considering aws-sdk-go)
2. Django Admin (considering go-admin)
3. Django Signals (considering syncsignals)
4. Celery (no contenders here)
- S3 upload with progress
I've been trying to implement some logging of progress when uploading objects to S3. My code builds on this example and can be found here.
- Background process in Lambda using SQS
Now that you have everything you need, let’s install the AWS SDK for Go library.
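The install step is `go get github.com/aws/aws-sdk-go` (plus `github.com/aws/aws-lambda-go` for the Lambda entrypoint). Below is a sketch of the handler shape for an SQS-triggered function; the SQSEvent/SQSMessage structs are local stand-ins for the real types in aws-lambda-go's `events` package so the snippet stays self-contained, and in a real deployment the function would be registered with `lambda.Start(handler)`:

```go
package main

import "fmt"

// Local stand-ins for events.SQSEvent / events.SQSMessage from
// github.com/aws/aws-lambda-go/events, defined here so the sketch
// compiles without the dependency.
type SQSMessage struct {
	MessageId string
	Body      string
}

type SQSEvent struct {
	Records []SQSMessage
}

// handler processes each queued record; Lambda invokes it with a
// batch of messages pulled from the SQS queue that triggers it.
func handler(evt SQSEvent) (int, error) {
	for _, m := range evt.Records {
		fmt.Printf("processing %s: %s\n", m.MessageId, m.Body)
	}
	return len(evt.Records), nil
}

func main() {
	// In Lambda this would be lambda.Start(handler); here we just
	// invoke the handler directly with a fake batch.
	n, _ := handler(SQSEvent{Records: []SQSMessage{
		{MessageId: "1", Body: "resize image"},
	}})
	fmt.Println("processed", n, "message(s)")
}
```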
- Node.js 18 support in Lambda added to Go SDK
- Node.js 18 Runtime support added to Golang SDK
- AWS and its complicated shit needs to die
Counterpoint 2: Amazon is bad and should feel bad for making this an internal and embedding it in the Credentials struct.
metaflow
- FLaNK Stack 05 Feb 2024
- metaflow VS cascade - a user suggested alternative
2 projects | 5 Dec 2023
- In Need of Guidance: Implementing MLOps in a Complex Organization as a Junior Data Engineer
- What are some open-source ML pipeline managers that are easy to use?
I would recommend the following:
- https://www.mage.ai/
- https://dagster.io/
- https://www.prefect.io/
- https://metaflow.org/
- https://zenml.io/home
- Needs advice for choosing tools for my team. We use AWS.
1) I've been looking into [Metaflow](https://metaflow.org/), which connects nicely to AWS, does a lot of heavy lifting for you, including scheduling.
- Selfhosted chatGPT with local content
even for people who don't have an ML background there's now a lot of very fully-featured model deployment environments that allow self-hosting (kubeflow has a good self-hosting option, as do mlflow and metaflow), handle most of the complicated stuff involved in just deploying an individual model, and work pretty well off the shelf.
-
[OC] Gender diversity in Tech companies
They had to figure out video compression that worked at the volume that they wanted to deliver. They had to build and maintain their own CDN to be able to have an always-available and consistent viewing experience. Don't even get me started on the resiliency tools like hystrix that they were kind enough to open source. I mean, they have their own fucking data science framework and they're looking into using neural networks to downscale video. Sound familiar? That's cause that's practically the same thing as Nvidia's DLSS (which upscales instead of downscales).
- Model artifacts mess and how to deal with it?
Check out Metaflow by Netflix
- Going to Production with Github Actions, Metaflow and AWS SageMaker
Github Actions, Metaflow and AWS SageMaker are awesome technologies by themselves; however, they are seldom used together in the same sentence, even less so in the same Machine Learning project.
- Small to Reasonable Scale MLOps - An Approach to Effective and Scalable MLOps when you're not a Giant like Google
It's undeniable that leadership is instrumental in any company and project's success; however, I was intrigued by one of their ML tool choices that helped them reach their goal. I was so curious about this choice that I just had to learn more about it, so in this article I will be talking about a sound strategy for effectively scaling your AI/ML undertaking and a tool that makes this possible - Metaflow.
What are some alternatives?
minio-go - MinIO Go client SDK for S3 compatible object storage
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Moto - A library that allows you to easily mock out tests based on AWS infrastructure.
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
botocore - The low-level, core functionality of boto3 and the AWS CLI.
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
twitter-scraper - Scrape the Twitter frontend API without authentication with Golang.
kedro-great - The easiest way to integrate Kedro and Great Expectations
cachet - Go(lang) client library for Cachet (open source status page system).
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
goamz
dvc - 🦉 ML Experiments and Data Management with Git