percol VS aws-cli

Compare percol vs aws-cli and see what their differences are.

percol

adds flavor of interactive filtering to the traditional pipe concept of UNIX shell (by mooz)
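To make the "interactive filtering in a pipe" idea concrete, here is a minimal sketch (assuming percol is installed, e.g. via `pip install percol`). percol sits in the middle of an ordinary pipeline, exactly where a static filter like grep would go, but lets you pick lines interactively; since that needs a TTY, the runnable part below uses a fixed filter as a stand-in:

```shell
# Interactive version (needs a TTY, so shown commented):
#   ps aux | percol | awk '{ print $2 }' | xargs kill
# Same pipe shape with a fixed, non-interactive filter in percol's place:
printf 'alpha\nbeta\ngamma\n' | grep '^a'   # → alpha
```

The point is that percol is a drop-in stage: everything upstream and downstream of it is plain UNIX plumbing.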

aws-cli

Universal Command Line Interface for Amazon Web Services (by aws)
                    percol               aws-cli
Mentions            2                    48
Stars               3,235                14,790
Growth              -                    1.1%
Activity            0.0                  9.8
Last commit         almost 2 years ago   7 days ago
Language            Python               Python
License             MIT License          Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

percol

Posts with mentions or reviews of percol. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-05.

aws-cli

Posts with mentions or reviews of aws-cli. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-14.
  • Top 10 CLI Tools for DevOps Teams
    11 projects | dev.to | 14 Aug 2023
    The AWS CLI is a must-have tool if your team relies on Amazon Web Services. It lets you effortlessly interact with AWS services, orchestrate resource management, and automate tasks from the comfort of your terminal. Once you get used to the tool, you'll notice how convenient and quick it is to fit into your processes – especially compared to going through AWS's web-based user interface.
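As a hedged illustration of the automation that the post describes: most AWS CLI scripting boils down to capturing a command's `--output text` result and looping over it. The `aws` subcommands below are real but need credentials, so a stand-in value drives the runnable part; the instance IDs are made up:

```shell
# Representative calls (require AWS credentials; shown for shape only):
#   aws s3 ls
#   aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text
# The surrounding scripting pattern, with a stand-in for the aws output:
INSTANCE_IDS="i-0abc i-0def"   # stand-in for an --output text result
for ID in $INSTANCE_IDS; do
  echo "processing $ID"
done
# → processing i-0abc
# → processing i-0def
```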
  • My First Impressions of Nix
    33 projects | news.ycombinator.com | 19 Jun 2023
    Just for your consideration, the network effect is very real with package managers, too:

    https://search.nixos.org/packages?channel=23.05&show=awscli2 is 2.11.27 (even on the "unstable" channel), versus https://formulae.brew.sh/formula/awscli#default that is 2.12.1, which correctly is the most current (https://github.com/aws/aws-cli/tags)

  • [Engineering_Stuff] S3FS-FUSE - Lets you mount your S3/Minio bucket link to your local directory
    2 projects | /r/enfrancais | 28 Apr 2023
  • s3fs-fuse - allows to mount your s3/minio bucket link to your local directory
    3 projects | /r/engineering_stuff | 30 Mar 2023
    s3fs allows Linux, macOS, and FreeBSD to mount an S3 bucket via FUSE (Filesystem in Userspace). s3fs lets you operate on files and directories in an S3 bucket as if they were on a local file system. s3fs preserves the native object format for files, allowing the use of other tools like the AWS CLI.
  • AWS Announces Open Source Mountpoint for Amazon S3
    4 projects | news.ycombinator.com | 26 Mar 2023
    AFAIK it's still a Python package: https://github.com/aws/aws-cli/tree/2.11.6
  • Event Based System with Localstack (Elixir Edition): Uploading files to S3 with PresignedURL's
    3 projects | dev.to | 9 Feb 2023
    And this is the init_localstack.sh file content. A nice thing about Localstack is that you can drive it with the same commands as the aws-cli tool; also, the container deletes all of its content and configuration once it stops, so the script file must create all the resources that you need from Localstack.
  • Yq is a portable yq: command-line YAML, JSON, XML, CSV and properties processor
    11 projects | news.ycombinator.com | 4 Feb 2023
    Not to mention that JMESpath appears to be abandoned.

    There is a fork (https://github.com/jmespath-community/jmespath.spec), but it seems unlikely to be used by the aws cli (https://github.com/aws/aws-cli/issues/7396). Although, for that matter jq is semi-abandoned itself.
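For context on why this matters to the AWS CLI: its `--query` flag evaluates a JMESPath expression client-side over the JSON response. A sketch with a real flag and a canned, made-up response; Python's stdlib `json` stands in for a JMESPath evaluator so the extraction part is runnable without credentials:

```shell
# Real usage (needs AWS credentials):
#   aws ec2 describe-subnets --query 'Subnets[*].SubnetId' --output text
# Same extraction over a canned response, stdlib json standing in for JMESPath:
RESPONSE='{"Subnets": [{"SubnetId": "subnet-aaa"}, {"SubnetId": "subnet-bbb"}]}'
echo "$RESPONSE" | python3 -c 'import json, sys; print(" ".join(s["SubnetId"] for s in json.load(sys.stdin)["Subnets"]))'
# → subnet-aaa subnet-bbb
```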

  • Setting up ad hoc development environments for Django applications with AWS ECS, Terraform and GitHub Actions
    4 projects | dev.to | 11 Jun 2022
```shell
#!/bin/bash
# This script will be called to update an ad hoc environment backend
# with a new image tag. It will first run pre-update tasks (such as migrations)
# and then do a rolling update of the backend services.
# It is called from the ad_hock_backend_update.yml GitHub Actions file
# Required environment variables that need to be exported before running this script:
# WORKSPACE - ad hoc environment workspace
# SHARED_RESOURCES_WORKSPACE - shared resources workspace
# BACKEND_IMAGE_TAG - backend image tag to update services to (e.g. v1.2.3)
# AWS_ACCOUNT_ID - AWS account ID is used for the ECR repository URL

echo "Updating backend services..."

# first define a variable containing the new image URI
NEW_BACKEND_IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/backend:$BACKEND_IMAGE_TAG"

# register new task definitions
# https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html#description
for TASK in "migrate" "gunicorn" "default" "beat"
do
  echo "Updating $TASK task definition..."

  # in Terraform we name our tasks based on the ad hoc environment name
  # (also the Terraform workspace name) and the name of the task
  # (e.g. migrate, gunicorn, default, beat)
  TASK_FAMILY=$WORKSPACE-$TASK

  # save the task definition JSON to a variable
  TASK_DESCRIPTION=$(aws ecs describe-task-definition \
    --task-definition $TASK_FAMILY \
  )

  # save container definitions to a file for each task
  echo $TASK_DESCRIPTION | jq -r \
    .taskDefinition.containerDefinitions \
    > /tmp/$TASK_FAMILY.json

  # write new container definition JSON with updated image
  echo "Writing new $TASK_FAMILY container definitions JSON..."

  # replace old image URI with new image URI in a new container definitions JSON
  cat /tmp/$TASK_FAMILY.json \
    | jq \
      --arg IMAGE "$NEW_BACKEND_IMAGE_URI" '.[0].image |= $IMAGE' \
    > /tmp/$TASK_FAMILY-new.json

  # Get the existing configuration for the task definition (memory, cpu, etc.)
  # from the variable that we saved the task definition JSON to earlier
  echo "Getting existing configuration for $TASK_FAMILY..."
  MEMORY=$( echo $TASK_DESCRIPTION | jq -r \
    .taskDefinition.memory \
  )
  CPU=$( echo $TASK_DESCRIPTION | jq -r \
    .taskDefinition.cpu \
  )
  ECS_EXECUTION_ROLE_ARN=$( echo $TASK_DESCRIPTION | jq -r \
    .taskDefinition.executionRoleArn \
  )
  ECS_TASK_ROLE_ARN=$( echo $TASK_DESCRIPTION | jq -r \
    .taskDefinition.taskRoleArn \
  )

  # check the content of the new container definition JSON
  cat /tmp/$TASK_FAMILY-new.json

  # register new task definition using the new container definitions
  # and the values that we read off of the existing task definitions
  echo "Registering new $TASK_FAMILY task definition..."
  aws ecs register-task-definition \
    --family $TASK_FAMILY \
    --container-definitions file:///tmp/$TASK_FAMILY-new.json \
    --memory $MEMORY \
    --cpu $CPU \
    --network-mode awsvpc \
    --execution-role-arn $ECS_EXECUTION_ROLE_ARN \
    --task-role-arn $ECS_TASK_ROLE_ARN \
    --requires-compatibilities "FARGATE"
done

# Now we need to run migrate, collectstatic and any other commands that need to be run
# before doing a rolling update of the backend services
# We will use the new task definitions we just created to run these commands

# get the ARN of the most recent revision of the migrate task definition
TASK_DEFINITION=$( \
  aws ecs describe-task-definition \
    --task-definition $WORKSPACE-migrate \
    | jq -r \
      .taskDefinition.taskDefinitionArn \
)

# get private subnets as space separated string from shared resources VPC
SUBNETS=$( \
  aws ec2 describe-subnets \
    --filters "Name=tag:env,Values=$SHARED_RESOURCES_WORKSPACE" "Name=tag:Name,Values=*private*" \
    --query 'Subnets[*].SubnetId' \
    --output text \
)

# replace spaces with commas using tr
SUBNET_IDS=$(echo $SUBNETS | tr ' ' ',')

# https://github.com/aws/aws-cli/issues/5348
# get ecs_sg_id - just a single value
ECS_SG_ID=$( \
  aws ec2 describe-security-groups \
    --filters "Name=tag:Name,Values=$SHARED_RESOURCES_WORKSPACE-ecs-sg" \
    --query 'SecurityGroups[*].GroupId' \
    --output text \
)

echo "Running database migrations..."

# timestamp used for log retrieval (milliseconds after Jan 1, 1970 00:00:00 UTC)
START_TIME=$(date +%s000)

# run the migration task and capture the taskArn into a variable called TASK_ID
TASK_ID=$( \
  aws ecs run-task \
    --cluster $WORKSPACE-cluster \
    --task-definition $TASK_DEFINITION \
    --network-configuration "awsvpcConfiguration={subnets=[$SUBNET_IDS],securityGroups=[$ECS_SG_ID],assignPublicIp=ENABLED}" \
    | jq -r '.tasks[0].taskArn' \
)
echo "Task ID is $TASK_ID"

# wait for the migrate task to exit
# https://docs.aws.amazon.com/cli/latest/reference/ecs/wait/tasks-stopped.html#description
# > It will poll every 6 seconds until a successful state has been reached.
# > This will exit with a return code of 255 after 100 failed checks.
aws ecs wait tasks-stopped \
  --tasks $TASK_ID \
  --cluster $WORKSPACE-cluster

# timestamp used for log retrieval (milliseconds after Jan 1, 1970 00:00:00 UTC)
END_TIME=$(date +%s000)

# print the CloudWatch log events to STDOUT
aws logs get-log-events \
  --log-group-name "/ecs/$WORKSPACE/migrate" \
  --log-stream-name "migrate/migrate/${TASK_ID##*/}" \
  --start-time $START_TIME \
  --end-time $END_TIME \
  | jq -r '.events[].message'

echo "Migrations complete. Starting rolling update for backend services..."

# update backend services
for TASK in "gunicorn" "default" "beat"
do
  # get taskDefinitionArn for each service to be used in update-service command
  # this will get the most recent revision of each task (the one that was just created)
  # https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html#description
  TASK_DEFINITION=$( \
    aws ecs describe-task-definition \
      --task-definition $WORKSPACE-$TASK \
      | jq -r \
        .taskDefinition.taskDefinitionArn \
  )

  # update each service with new task definition
  aws ecs update-service \
    --cluster $WORKSPACE-cluster \
    --service $WORKSPACE-$TASK \
    --task-definition $TASK_DEFINITION \
    --no-cli-pager
done

echo "Services updated. Waiting for services to become stable..."

# wait for all services to be stable (runningCount == desiredCount for each service)
aws ecs wait services-stable \
  --cluster $WORKSPACE-cluster \
  --services $WORKSPACE-gunicorn $WORKSPACE-default $WORKSPACE-beat

echo "Services are now stable. Backend services are now up to date with $BACKEND_IMAGE_TAG."
echo "Backend update is now complete!"
```
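One detail of the script above is worth isolating, since it trips people up: `run-task`'s `--network-configuration` wants subnet IDs comma-separated, while `describe-subnets --output text` returns them whitespace-separated, hence the unquoted `echo` plus `tr` join. The subnet IDs below are stand-in values:

```shell
SUBNETS="subnet-aaa subnet-bbb subnet-ccc"   # stand-in for the describe-subnets output
SUBNET_IDS=$(echo $SUBNETS | tr ' ' ',')
echo "$SUBNET_IDS"
# → subnet-aaa,subnet-bbb,subnet-ccc
```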
  • AWS linux container image
    2 projects | /r/aws | 5 Mar 2022
    The Dockerfile is here (to confirm that it builds on Amazon Linux 2): https://github.com/aws/aws-cli/blob/v2/docker/Dockerfile
  • AWS CLI releases
    2 projects | /r/aws | 15 Feb 2022
    > AWS CLI 2.0.0dev preview release

What are some alternatives?

When comparing percol and aws-cli you can also consider the following projects:

rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

boto3 - AWS SDK for Python

SAWS - A supercharged AWS command line interface (CLI).

httpie - 🥧 HTTPie CLI — modern, user-friendly command-line HTTP client for the API era. JSON support, colors, sessions, downloads, plugins & more.

thefuck - Magnificent app which corrects your previous console command.

aws-vault - A vault for securely storing and accessing AWS credentials in development environments

pgcli - Postgres CLI with autocompletion and syntax highlighting

cookiecutter - A cross-platform command-line utility that creates projects from cookiecutters (project templates), e.g. Python package projects, C projects.

awesome-aws - A curated list of awesome Amazon Web Services (AWS) libraries, open source repos, guides, blogs, and other resources. Featuring the Fiery Meter of AWSome.

autopexpect - autoexpect for pexpect

mycli - A Terminal Client for MySQL with AutoCompletion and Syntax Highlighting.