ck
aws-deployment-framework
| | ck | aws-deployment-framework |
|---|---|---|
| Mentions | 9 | 4 |
| Stars | 579 | 635 |
| Growth | 2.4% | 2.4% |
| Activity | 10.0 | 7.7 |
| Latest commit | 5 days ago | 9 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ck
- Do you have an idle Nvidia GPU? Please help the community test the beta of an open-source framework for composable benchmarking and design space exploration of ML systems
If you have an idle Nvidia GPU and Linux, please help the community test the beta version of the open-source framework for composable benchmarking and design space exploration of ML systems: https://github.com/mlcommons/ck/blob/master/cm-mlops/project/mlperf-inference-v3.0-submissions/docs/crowd-benchmark-mlperf-bert-inference-cuda.md
- Sharing a tutorial to modularize ML systems
[N] Tutorial to modularize ML Systems benchmarks from the Student Cluster Competition'22
Hi! Just sharing this tutorial from the Student Cluster Competition at SuperComputing'22 on how to modularize and run ML systems benchmarks. Ten international teams had about 30 minutes to run it, and most of them succeeded while sharing their results on the live dashboard. It is part of an ongoing effort to modularize ML systems and automate their benchmarking and optimization. Feedback is very welcome!
- Asking for a favor to test a modular ML benchmark for the Student Cluster Competition
We would like to ask for a favor: we have prepared a tutorial to help students run the MLPerf inference benchmark across different platforms at the Student Cluster Competition at SuperComputing'22 in a few days: https://github.com/mlcommons/ck/blob/master/docs/tutorials/s...
We would like to test it across different machines before the students run it ;) . If you have time, please help us go through this tutorial and run this benchmark on any available system - it should not take more than 20-30 minutes.
If you encounter any issues, please report them at https://github.com/mlcommons/ck/issues so that we can fix them before the competition.
Thank you for supporting this community project!
- MLCommons is creating a new working group to modularize ML systems
[N] Open working group to modularize ML Systems
Just to let you know that we are preparing a new working group at MLCommons to help the community modularize ML/AI systems and automate their benchmarking, optimization, and deployment. It will be based on the MLPerf methodology and the MLCommons "Collective Knowledge" automation meta-framework that was already used to automate recent MLPerf inference benchmark submissions from Qualcomm, HPE, Lenovo, Krai, Dell, and OctoML. Please join the group to provide your feedback and help with this community effort. Thank you!
- [N] Releasing the MLPerf automation framework to plug in real-world ML models, data sets and tools
Hi! Just sharing our open-source project to automate MLPerf benchmarks and make it easier for everyone to plug in their real-world ML models, data sets, frameworks/SDKs, and hardware. Feedback is very welcome!
- Research software code is likely to remain a tangled mess
Their solution: the product at https://cknowledge.io/ and the source code at https://github.com/ctuning/ck
I guess it should be helpful to the research community.
aws-deployment-framework
- Sync AWS CodeCommit repositories
In some scenarios you might need to replicate an AWS CodeCommit repository. I ran into this myself while setting up a test organization with the AWS Deployment Framework (ADF). Because I wanted to test the deployment of my landing zone, I needed a close replica, including the CodeCommit setup, without changing the development workflow. The workflow is straightforward: you create a feature branch to work in; when you are ready, you merge it into a development branch; when it needs to go to production, you merge it into the main branch. So we will use the development branch to deploy to the test organization. But because the test organization is a replica of production, merging to the development branch would have no effect there. For that we need to synchronize the development branch to the test organization.
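The sync step the post describes can be sketched with plain git. This is a minimal, hypothetical illustration: both repositories are local stand-ins (the names `source`, `replica.git`, and `/tmp/sync-demo` are made up), whereas in practice the two remotes would be the CodeCommit HTTPS clone URLs of the production and test-organization repositories:

```shell
set -e
mkdir -p /tmp/sync-demo && cd /tmp/sync-demo

# Stand-in for the replica repo in the test organization.
git init -q --bare replica.git

# Stand-in for the production repo, with a development branch.
git init -q -b main source && cd source
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial"
git checkout -q -b development
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "feature work"

# The actual sync: push the development branch to the replica.
git remote add replica ../replica.git
git push -q replica development

# Show that the replica now carries the branch.
git --git-dir=../replica.git log --oneline development
```

In a real setup the same `git push` would run from a pipeline (for example a CodeBuild step) authenticated against both accounts, so every merge into `development` is mirrored automatically.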
- Testing your Landing Zone when using AWS Deployment Framework
Within AWS Organizations you can apply Service Control Policies (SCPs). All AWS accounts under an OU (Organizational Unit) with an SCP attached are subject to that SCP. What if you need to make a change to an SCP? How can you test that change? SCPs are not the only thing you might want to test. Remember that ADF also bootstraps the accounts? That could be a VPC with subnets for networking. How do you ensure that the change you made works as intended? Merging to your main branch triggers a rollout process, depending on your configuration.
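For context, an SCP is a plain JSON policy document attached to an OU. A minimal, hypothetical example that prevents member accounts from leaving the organization looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

Because a too-strict `Deny` statement can lock real workloads out of the APIs they need, trying a changed policy against a replica OU in a test organization, as the post describes, is safer than editing it in production.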
- Customising AWS Control Tower with CfCT
AWS Deployment Framework (ADF)
- CDK pipelines for managing AWS Organizations
What are some alternatives?
osmnx - OSMnx is a Python package to easily download, model, analyze, and visualize street networks and other geospatial features from OpenStreetMap.
cookiecutter-django-ecs-github - Complete Walkthrough: Blue/Green Deployment to AWS ECS using Cookiecutter-Django using GitHub actions
SmartSim - SmartSim Infrastructure Library.
StackStorm - StackStorm (aka "IFTTT for Ops") is event-driven automation for auto-remediation, incident responses, troubleshooting, deployments, and more for DevOps and SREs. Includes rules engine, workflow, 160 integration packs with 6000+ actions (see https://exchange.stackstorm.org) and ChatOps. Installer at https://docs.stackstorm.com/install/index.html
budgetml - Deploy a ML inference service on a budget in less than 10 lines of code.
superwerker - superwerker can help you get started with the AWS Cloud quickly without investing in consultants or devoting time to extensive research. superwerker is a free, open-source solution that lets you quickly set up an AWS Cloud environment following best practices for security and efficiency so you can focus on your core business.
dslinter - `dslinter` is a pylint plugin for linting data science and machine learning code. We plan to support the following Python libraries: TensorFlow, PyTorch, Scikit-Learn, Pandas and NumPy.
terraform-aws-control_tower_account_factory - AWS Control Tower Account Factory
frontends-team-compass - A repository for team interaction, syncing, and handling meeting notes across the JupyterLab ecosystem.
aws-control-tower-customizations - The Customizations for AWS Control Tower solution combines AWS Control Tower and other highly-available, trusted AWS services to help customers more quickly set up a secure, multi-account AWS environment using AWS best practices.
terraform-tui - Terraform textual UI
aws-lambda-git - This repository demonstrates how you can run the git binary inside an AWS Lambda function.