amplify-cli
BentoML
|  | amplify-cli | BentoML |
|---|---|---|
| Mentions | 16 | 16 |
| Stars | 2,786 | 6,537 |
| Growth | 0.3% | 3.0% |
| Activity | 9.3 | 9.8 |
| Latest commit | about 19 hours ago | 4 days ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amplify-cli
-
The Amplify Series, Part 6: Using the power of AI and Machine Learning with Amplify Predictions
Bug: Before continuing, we need to make some manual changes to the generated output, since there is a bug in version 10.8.1 of the Amplify CLI. To fix the issue, open the amplify/backend/predictions/identifyText<>/parameters.json file and add the following three key-value pairs to it:
-
Contribute to AWS Amplify
AWS Amplify CLI - GitHub
-
Add Auth To Your Nuxt 3 App in Minutes With Amplify
If this is the first time using Amplify, you'll need to install the Amplify CLI. This tool will help us set up and add Amplify's services.
-
Amplify UI – Don't just prototype. Connect your UI to the cloud
This disconnection between the initial business cases of DynamoDB and Amplify can even be seen within the AWS teams themselves. [4] We don't believe any of them are to blame. The solo front-end engineer bootstrapping a quick Amplify app for a PoC is a very different use case from a team of highly trained data engineers working on their Single Table Design for their microservice. Amplify rightfully tries to offer an easy way to store data, and so it follows a standard SQL-style design on top of DynamoDB. This, though, leads to bad performance (the original selling point of DynamoDB) or other limitations that are hard to anticipate.
Overall it is pretty clear, and fine, that Amplify focuses on PoC projects rather than production ones (with features like geo-tagging [5] but no way to migrate data). However, once a project starts to get traction, it is a shame that we need to completely eject instead of being able to extend, because of the lack of (boring but necessary) fundamentals.
[1] https://github.com/aws-amplify/amplify-cli/issues/10164
-
Amplify and AWS? Do the work together at all?
Unfortunately, and this is a bad look for AWS, Amplify still hasn't migrated to CDK v2, meaning they're still using the maintenance-only CDK v1, which is incredibly easy to break due to incompatible packages and isn't getting any new features. Further, Amplify's implementation of CDK (I think it's the StackSynthesizer subclass) doesn't even support some of CDK's most productive features, like the ability to create a Lambda image from a Dockerfile. You have to define and push the image separately, which misses the point of having the entire app defined within Amplify.
-
Problem with spaces in name of amplify location
Hi u/recursivebob, can you please submit an issue to the Amplify CLI on GitHub using this link
-
BaaS (Firebase, AWS Amplify) and alternatives
Hi all, for a new project I need to create a web app and a mobile app, and I'd like to hear your opinion on what the right architecture would be. The architecture has two peculiarities; it needs:
- a service for crawling/scraping and writing into a (document) DB
- full-text search support for the crawled/scraped data
Since the running budget is low and time is tight, I was looking into serverless and BaaS architectures. I like the idea of using a BaaS architecture, so I was thinking about whether Firebase or AWS Amplify would be a good fit. Regarding full-text search, Firestore does not support it out of the box, but it is possible to use Elastic, Algolia, or Typesense (https://firebase.google.com/docs/firestore/solutions/search). With AWS Amplify it's apparently even easier; it appears that it can be directly connected to AWS ElasticSearch. Regarding the crawling/scraping, I could use e.g. EC2 instances. From the time perspective that looks good, but not from the budget perspective. As I found here: https://github.com/aws-amplify/amplify-cli/issues/3860, an ElasticSearch instance on AWS can cost $70 for 2 hours! That is around the budget for a whole month... So what would you guys advise me? Are there any alternatives? How can I move fast at minimal cost?
-
Deploy AWS Amplify GraphQL Transformers with AWS CDK
AWS Amplify's code is open source and can be found on GitHub at https://github.com/aws-amplify/amplify-cli. We can use the independently published NPM packages to recreate the GraphQL transformer functionality offered by the AWS Amplify CLI. The entry point for generating the AppSync resolvers can be found here, and the class we are interested in is GraphQLTransform, which takes all the individual transformers as a parameter and iterates over them to generate the GraphQL resolvers and the associated CloudFormation stacks for deploying those resolvers.
-
Login in Amplify CLI with SSO not working using AWS access keys
Can you check that issue: https://github.com/aws-amplify/amplify-cli/issues/6338
-
MLH, Open Source, Mapillary & Me
AWS Amplify - The AWS Amplify CLI is a toolchain which includes a robust feature set for simplifying mobile and web application development. The CLI uses AWS CloudFormation and nested stacks to allow you to add or modify configurations locally before you push them for execution in your account.
BentoML
-
Who's hiring developer advocates? (December 2023)
Link to GitHub -->
-
project ideas/advice for entry-level grad jobs?
there are a few tools you can use as "cheat mode" shortcuts to give you a leg up as you're getting started. here's one: https://github.com/bentoml/BentoML
-
Two high schoolers trying to use Azure/GCP/AWS- need help!
Then you can look into BentoML (https://github.com/bentoml/BentoML), which is used to deploy ML stuff with many more benefits.
- Ask HN: Who is hiring? (November 2022)
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the 'transformer-deploy' version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally), and has some conveniences like ease of use, easy deployment, easy hosting of multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated! Disclaimer - I have no affiliation with, nor am I connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
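The dynamic batching the comment asks for can be sketched independently of any serving framework: requests that arrive within a short window are grouped and run through the model in a single forward pass. This is a minimal stdlib-only sketch; the names `BatchingQueue` and `fake_model` are made up for illustration and do not come from BentoML, Triton, or any other framework mentioned above.

```python
import threading
import queue
import time

def fake_model(batch):
    # Stand-in for a real model forward pass over a list of inputs.
    return [x * 2 for x in batch]

class BatchingQueue:
    """Collects concurrent requests into batches of up to max_batch items,
    waiting at most max_wait seconds for stragglers before running the model."""

    def __init__(self, model, max_batch=8, max_wait=0.01):
        self.model = model
        self.max_batch = max_batch
        self.max_wait = max_wait
        self.q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, item):
        # Each caller gets an Event and a one-slot result holder,
        # then blocks until the worker has processed its batch.
        done, holder = threading.Event(), []
        self.q.put((item, done, holder))
        done.wait()
        return holder[0]

    def _worker(self):
        while True:
            batch = [self.q.get()]  # block for the first request
            deadline = time.monotonic() + self.max_wait
            while len(batch) < self.max_batch:
                timeout = deadline - time.monotonic()
                if timeout <= 0:
                    break
                try:
                    batch.append(self.q.get(timeout=timeout))
                except queue.Empty:
                    break
            inputs = [item for item, _, _ in batch]
            outputs = self.model(inputs)  # one model call per batch
            for (_, done, holder), out in zip(batch, outputs):
                holder.append(out)
                done.set()

bq = BatchingQueue(fake_model)
results = [bq.submit(i) for i in range(4)]
```

Frameworks like Triton and BentoML implement the same idea (with knobs analogous to `max_batch` and `max_wait`) inside the server, so callers see ordinary single-request latency while the GPU sees batched work.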
- PostgresML is 8-40x faster than Python HTTP microservices
- Congratulations on v1.0, BentoML 🍱 ! You are r/mlops OSS of the month!
-
Show HN: Truss – serve any ML model, anywhere, without boilerplate code
In this category I’m a big fan of https://github.com/bentoml/BentoML
What I like about it is their idiomatic developer experience. It reminds me of other Pythonic frameworks like Flask and Django in a good way.
I have no affiliation with them whatsoever, just an admirer.
-
[P] Introducing BentoML 1.0 - A faster way to ship your models to production
Github Page: https://github.com/bentoml/BentoML
- Show HN: BentoML goes 1.0 – A faster way to ship your models to production
What are some alternatives?
awesome-readme - A curated list of awesome READMEs
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
aws-cdk - The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
amplify-js - A declarative JavaScript library for application development using cloud services.
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
aws-amplify-cdk - AWS Amplify GraphQL transformers deployed with AWS CDK
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
amplify-cli-export-construct
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
antlir - ANother Linux Image buildeR
kubeflow - Machine Learning Toolkit for Kubernetes