aws-lambda-docker-serverless-inference
amazon-sagemaker-examples
| | aws-lambda-docker-serverless-inference | amazon-sagemaker-examples |
|---|---|---|
| Mentions | 1 | 17 |
| Stars | 91 | 9,424 |
| Growth | - | 2.1% |
| Activity | 4.0 | 9.3 |
| Latest commit | about 2 months ago | 7 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT No Attribution | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-lambda-docker-serverless-inference
-
AWS - NLP newsletter - August 2021
GitHub: Train a BlazingText text classification algorithm in SageMaker, inference with AWS Lambda
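The linked example trains BlazingText for text classification. BlazingText's supervised mode reads one example per line, with each label prefixed by `__label__` and followed by the tokenized text. A minimal sketch of that input format (the helper and sample data are illustrative, not taken from the repo):

```python
# Hedged sketch, not code from the linked repo: BlazingText's supervised
# (text classification) mode expects one example per line, with each label
# prefixed by "__label__" and followed by the tokenized text.
def to_blazingtext_line(label, text):
    """Format one training example for BlazingText supervised mode."""
    return f"__label__{label} {text.strip().lower()}"

train_lines = [
    to_blazingtext_line("positive", "Great product, works as advertised"),
    to_blazingtext_line("negative", "Stopped working after a week"),
]
```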
amazon-sagemaker-examples
-
Using AWS for Text Classification Part-1
Additionally, you can easily deploy pretrained fastText models on their own to live SageMaker endpoints to compute embedding vectors on the fly for use in relevant word-level tasks. See the following GitHub example for more details.
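As a rough illustration of calling such an endpoint: a BlazingText (fastText-compatible) word-vector endpoint accepts a JSON body of the form `{"instances": [...]}`. The endpoint name and the boto3 invocation below are placeholders for sketching purposes; actually invoking it requires AWS credentials.

```python
import json

# Hedged sketch: build the request body for a SageMaker BlazingText
# (fastText-compatible) word-vector endpoint, which accepts
# {"instances": [...words...]} as JSON.
def embedding_request_body(words):
    return json.dumps({"instances": list(words)})

def invoke_embedding_endpoint(endpoint_name, words):
    # Not executed here: requires boto3 and an AWS session.
    # endpoint_name is a placeholder for a live SageMaker endpoint.
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=embedding_request_body(words),
    )
    return json.loads(resp["Body"].read())
```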
-
Migrate local Data Science workspaces to SageMaker Studio
Amazon SageMaker provides XGBoost as a built-in algorithm, and the data science team decided to use it to re-train the model. Data scientists therefore only need to call the built-in version and provide the path to the data on S3; a more detailed description can be found in the documentation. An example notebook can be found here.
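The built-in XGBoost algorithm reads CSV training data with the label in the first column and no header row. A minimal sketch of that convention plus the training call, assuming placeholder S3 paths, role ARN, and hyperparameters (none of these come from the article):

```python
# Hedged sketch: SageMaker built-in XGBoost expects CSV input with the
# label first and no header row.
def to_sagemaker_csv_row(label, features):
    return ",".join(str(v) for v in [label, *features])

row = to_sagemaker_csv_row(1, [0.5, 3, 7.2])

def retrain_builtin_xgboost():
    # Not executed here: requires the sagemaker SDK and AWS credentials.
    import sagemaker
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()
    image = sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.5-1"
    )
    est = sagemaker.estimator.Estimator(
        image_uri=image,
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-bucket/xgb-output",  # placeholder
    )
    est.set_hyperparameters(objective="binary:logistic", num_round=100)
    est.fit({"train": TrainingInput("s3://my-bucket/train.csv",  # placeholder
                                    content_type="text/csv")})
    return est
```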
-
What's New with AWS: Amazon SageMaker built-in algorithms now provides four new Tabular Data Modeling Algorithms
Amazon SageMaker provides four new tabular data modeling algorithms: LightGBM, CatBoost, AutoGluon-Tabular, and TabTransformer. These popular, state-of-the-art algorithms can be used for both tabular classification and regression tasks. They are available through the SageMaker JumpStart UI inside SageMaker Studio, as well as through Python code using the SageMaker Python SDK. To learn how to use these algorithms, see the SageMaker example notebooks below:
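In the SDK path, each algorithm is addressed by a JumpStart model ID. The IDs below are assumptions based on JumpStart naming conventions, not taken from this page; verify them against the JumpStart catalog before use.

```python
# Hedged sketch: map the four tabular algorithms to assumed JumpStart
# model IDs (verify against the JumpStart catalog; these are guesses).
TABULAR_MODEL_IDS = {
    "LightGBM": "lightgbm-classification-model",                      # assumed ID
    "CatBoost": "catboost-classification-model",                      # assumed ID
    "AutoGluon-Tabular": "autogluon-classification-ensemble",         # assumed ID
    "TabTransformer": "pytorch-tabtransformerclassification-model",   # assumed ID
}

def deploy_jumpstart_tabular(name):
    # Not executed here: requires the sagemaker SDK and AWS credentials.
    from sagemaker.jumpstart.model import JumpStartModel
    model = JumpStartModel(model_id=TABULAR_MODEL_IDS[name])
    return model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```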
-
How InfoJobs (Adevinta) improves NLP model prediction performance with AWS Inferentia and Amazon SageMaker
In this section, we go through an example in which we show you how to compile a BERT model with Neo for AWS Inferentia. We then deploy that model to a SageMaker endpoint. You can find a sample notebook describing the whole process in detail on GitHub.
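The compile-then-deploy flow can be sketched roughly as follows. Neo needs the model's input names and shapes; the names here follow the usual Hugging Face BERT convention, and the sequence length, framework version, paths, and entry-point script are all placeholders rather than values from the linked notebook.

```python
# Hedged sketch: compile a traced BERT model with SageMaker Neo for
# AWS Inferentia (ml_inf1) and deploy the compiled model.
MAX_SEQ_LEN = 128  # placeholder; must match how the model was traced

def bert_input_shapes(batch_size=1, seq_len=MAX_SEQ_LEN):
    # Input names follow the common Hugging Face BERT convention
    # (assumption; check the actual traced model's inputs).
    return {"input_ids": [batch_size, seq_len],
            "attention_mask": [batch_size, seq_len]}

def compile_and_deploy(model_data, role):
    # Not executed here: requires the sagemaker SDK and AWS credentials.
    from sagemaker.pytorch import PyTorchModel

    model = PyTorchModel(model_data=model_data, role=role,
                         framework_version="1.12", py_version="py38",
                         entry_point="inference.py")  # placeholder script
    compiled = model.compile(
        target_instance_family="ml_inf1",
        input_shape=bert_input_shapes(),
        output_path="s3://my-bucket/neo-output",  # placeholder
        role=role,
        framework="pytorch",
        framework_version="1.12",
        job_name="bert-neo-inf1",  # placeholder
    )
    return compiled.deploy(initial_instance_count=1,
                           instance_type="ml.inf1.xlarge")
```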
-
NLP@AWS Newsletter 04/2022
Train EleutherAI GPT-J using SageMaker. EleutherAI released GPT-J 6B as an open-source alternative to OpenAI's GPT-3. EleutherAI's goal was to train a model equivalent in size to GPT-3 and make it available to the public under an open license; it has since gained a lot of interest from researchers, data scientists, and even software developers. This notebook shows you how to easily train and tune GPT-J using Amazon SageMaker distributed training and Hugging Face on NVIDIA GPU instances.
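A rough sketch of launching such a job with the SageMaker Hugging Face estimator and SageMaker distributed data parallelism. The entry-point script, instance types, framework versions, hyperparameters, and S3 channel are placeholders; the referenced notebook is the authoritative source.

```python
# Hedged sketch: fine-tune GPT-J with the SageMaker Hugging Face estimator.
# All concrete values below are placeholders, not from the notebook.
hyperparameters = {
    "model_name_or_path": "EleutherAI/gpt-j-6B",
    "epochs": 1,
    "per_device_train_batch_size": 4,
}

def launch_gptj_training(role):
    # Not executed here: requires the sagemaker SDK and AWS credentials.
    from sagemaker.huggingface import HuggingFace

    est = HuggingFace(
        entry_point="train.py",          # placeholder training script
        source_dir="./scripts",          # placeholder
        instance_type="ml.p3.16xlarge",  # placeholder GPU instance
        instance_count=2,
        role=role,
        transformers_version="4.17",     # placeholder version combo
        pytorch_version="1.10",
        py_version="py38",
        hyperparameters=hyperparameters,
        # SageMaker distributed data parallel, as mentioned in the blurb
        distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    )
    est.fit({"train": "s3://my-bucket/gptj-train"})  # placeholder channel
    return est
```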
-
AWS - NLP newsletter November 2021
Amazon SageMaker Asynchronous Inference with Hugging Face Models. Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. SageMaker currently offers two other inference options for customers deploying machine learning models: 1) a real-time option for low-latency workloads, and 2) batch transform, an offline option for processing inference requests on batches of data available upfront. Real-time inference suits workloads with payload sizes of less than 6 MB that require inference requests to be processed within 60 seconds; batch transform suits offline inference on batches of data. This notebook introduces the SageMaker Asynchronous Inference capability with Hugging Face models and covers the steps required to create an asynchronous inference endpoint and test it with some sample requests.
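The trade-off described above (under 6 MB and under 60 seconds for real-time, async otherwise) can be sketched as a simple routing rule plus the async invocation, which reads its request body from S3 and writes the result back to S3. The endpoint name and S3 URI are placeholders.

```python
# Hedged sketch: choose between real-time and asynchronous inference based
# on the 6 MB real-time payload limit described above, and show the async
# invocation shape. Names are placeholders.
SYNC_PAYLOAD_LIMIT = 6 * 1024 * 1024  # 6 MB real-time payload limit

def choose_inference_mode(payload_bytes):
    return "real-time" if payload_bytes < SYNC_PAYLOAD_LIMIT else "async"

def invoke_async(endpoint_name, input_s3_uri):
    # Not executed here: requires boto3 and AWS credentials. Async inference
    # takes an S3 location as input and returns an S3 output location.
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint_async(
        EndpointName=endpoint_name,
        InputLocation=input_s3_uri,
        ContentType="application/json",
    )
    return resp["OutputLocation"]
```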
- I can't find a way to use pytorch for machine learning
-
Sorting my socks with deep learning - Part 1
A more extensive explanation here
What are some alternatives?
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
ganbert-pytorch - Enhancing the BERT training with Semi-supervised Generative Adversarial Networks in Pytorch/HuggingFace
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
sp-api-sdk - Amazon Selling Partner API - PHP SDKs
keytotext - Keywords to Sentences
jetson-containers - Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
sagemaker-studio-auto-shutdown-extension
Popular-RL-Algorithms - PyTorch implementation of Soft Actor-Critic (SAC), Twin Delayed DDPG (TD3), Actor-Critic (AC/A2C), Proximal Policy Optimization (PPO), QT-Opt, PointNet..
Hello-AWS-Data-Services - AWS Data/MLServices sample code & notes for my LinkedIn Learning courses
AnnA_Anki_neuronal_Appendix - Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
multi-label-sentiment-classifier - How to build a multi-label sentiment classifiers with Tez and PyTorch
pytorch-imagenet-wds