| | amazon-sagemaker-local-mode | quick-deploy |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 232 | 6 |
| Growth | 1.3% | - |
| Activity | 7.7 | 0.0 |
| Last Commit | about 1 month ago | about 2 years ago |
| Language | Python | Python |
| License | MIT No Attribution | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
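The tracker does not publish its exact formula, but the idea of a recency-weighted activity score can be sketched with a simple exponential decay, where each commit's weight halves after a chosen half-life. The function name and the 30-day half-life below are illustrative assumptions, not the site's actual method:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Illustrative recency-weighted score: each commit contributes a weight
    that halves every `half_life_days` days, so recent commits count more."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three commits in the last week outscore three commits from about a year ago,
# even though the raw commit counts are equal.
recent = activity_score([1, 3, 7])
stale = activity_score([300, 330, 365])
```

Under this model, a project with steady recent commits keeps a high score while a dormant project decays toward zero, matching the behavior described above.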
amazon-sagemaker-local-mode
Debugging Python Code in Amazon SageMaker Locally Using Visual Studio Code and PyCharm: A Step-by-Step Guide
```shell
git clone https://github.com/aws-samples/amazon-sagemaker-local-mode/
cd amazon-sagemaker-local-mode/general_pipeline_local_debug
python3 -m venv .venv
source .venv/bin/activate
pip install jupyter
jupyter lab
```
quick-deploy
[P] Quick-Deploy - Optimize, convert, and deploy machine learning models
github: https://github.com/rodrigobaron/quick-deploy
What are some alternatives?
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
tritony - Tiny configuration for Triton Inference Server
mars - Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
examples - 📝 Examples of how to use Neptune for different use cases and with various MLOps tools
aws-lambda-docker-serverless-inference - Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models with AWS Lambda container images support.
kserve - Standardized Serverless ML Inference Platform on Kubernetes
nebuly - The user analytics platform for LLMs
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.