Why does running a .sh file pause with a ":" waiting for user input before continuing? [duplicate] - shell

I am attempting to use the AWS CLI with a for loop in bash to iteratively purge multiple SQS message queues. The bash script works almost as intended; the problem I am having is with the return value each time the AWS CLI sends a request. When the request is successful, it returns an empty value and opens an interactive pager in the command line. I then have to manually type q to exit the interactive screen before the for loop continues to the next iteration. This becomes very tedious and time-consuming when purging a large number of queues.
Is there a way to configure the AWS CLI to stop this interactive pager from popping up for every return value? Or a way to pipe the return values into a separate file instead of displaying them?
I have played around with configuring different return value types (text, yaml, JSON) but haven't had any luck. The --no-paginate parameter also doesn't change the behavior.
Here's an example of the bash script I'm trying to run:
for x in 1 2 3; do
  aws sqs purge-queue --queue-url "https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo"
done

Having just run into this issue myself, I was able to disable the behaviour by invoking the AWS CLI as AWS_PAGER="" aws ....
Alternatively you could simply export AWS_PAGER="" at the top of your (bash) script.
Source: https://github.com/aws/aws-cli/pull/4702#issue-344978525
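For example, the asker's loop becomes (a sketch; the region, account id, environment, and queue-name placeholders are from the question):
#!/bin/bash
# Disable the AWS CLI's client-side pager for every command in this script.
export AWS_PAGER=""
for x in 1 2 3; do
  aws sqs purge-queue --queue-url "https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo"
done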

You can also use --no-cli-pager in AWS CLI version 2.
See the "Client-side pager" section here https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html

You can disable the pager either by exporting AWS_PAGER="" or by modifying your AWS CLI config file.
export AWS_PAGER=""
### or update your ~/.aws/config with
[default]
cli_pager=
Alternatively, you can explicitly set the pager to the less program:
export AWS_PAGER="less"
or make the corresponding config change.
Ref: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html#cli-usage-pagination-clientside

You can set the environment variable PAGER to "cat" to force awscli to not start up less:
PAGER=cat aws sqs list-queues
I set it up as a shell alias to make my life easier:
# ~/.zshrc
alias aws="PAGER=cat aws"

I am using AWS CLI v2 via Docker, and passing --env AWS_PAGER="" on the docker run command fixed this issue for me on Windows 10 using Git Bash.
I set it up as an alias as well so things work with jq.
How to set your docker env values:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file
Example alias:
docker run --rm -it -v c:/users/me/.aws:/root/.aws --env AWS_PAGER="" amazon/aws-cli

Inside your ~/.aws/config file, add the following under your profile section:
[default]
cli_pager=

Related

I want to exclude startup messages when assigning docker-compose run results to a variable

I am trying to run a command via docker-compose from a shell script and store the result in a variable.
list_account_aliases=$(docker-compose run --rm aws iam list-account-aliases)
In this case, the variable will also include the logs during container startup.
Creating terraform_aws_run ... done.
...
Any good ideas for removing the startup messages from the variable?
Oddly enough, the startup messages are written to STDERR.
$ list_account_aliases=$(docker-compose run --rm aws iam list-account-aliases 2>/dev/null)
$ echo "$list_account_aliases"
{
"AccountAliases": [
"xxx"
]
}
By discarding STDERR, we get just the output we wanted.
I don't know if this is the best way to do it...
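If you'd rather not throw the startup noise away entirely, a small variant (a sketch; the log path is a hypothetical choice) keeps it in a file for later inspection:
# Capture the JSON on stdout; append container startup noise to a log file.
list_account_aliases=$(docker-compose run --rm aws iam list-account-aliases 2>>/tmp/compose-startup.log)
echo "$list_account_aliases"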

How to execute gcloud command in bash script from crontab -e

I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from the command shell but not from the cron job.
I have tried with:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7; /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Version Ubuntu 18.04.3 LTS
command to execute: gcloud config set project myproject
but nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As explained in this other post, you need to modify your crontab so that it picks up your Cloud SDK configuration for the execution to occur properly; it doesn't seem that you have made this configuration.
Another option I would recommend trying is Cloud Scheduler, which lets you run your gcloud commands as cron jobs in a more integrated and managed way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
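As a rough illustration of the Cloud Scheduler route (a sketch; the job name, schedule, and target URI are hypothetical placeholders, and the right job type depends on what you are triggering):
gcloud scheduler jobs create http my-job --schedule="51 21 30 5 6" --uri="https://example.com/run"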
I found my error. The problem was only with the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think none of them were executing. This command creates a temp file, e.g. transaction.yaml, and that file could not be created in the default path for gcloud (snap/bin), but the log simply didn't record anything. I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
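For reference, the fixed command looks something like this (a sketch; the zone name is a hypothetical placeholder, and --transaction-file is the flag named above):
# Point the transaction file at an explicitly writable path.
gcloud dns record-sets transaction start --zone=my-zone --transaction-file=mytransaction.yaml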
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myprojectecho
echo "Project set to myprojectecho."
I hope this can help others in the future with similar issues, as this also helped me when trying to set GKE nodes from 0-4 on a schedule.
Adding the below line to the shell script fixed my issue
#Execute user profile
source /root/.bash_profile
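Putting it together, the crontab entry can then call the script with absolute paths (a sketch reusing the asker's schedule, paths, and parameters, plus stderr redirected into the same log):
51 21 30 5 6 /bin/bash /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt 2>&1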

How to turn off the pager for AWS CLI return value?


Docker Ubuntu environment variables

During the build stage of my Docker images, I would like to set some environment variables automatically for every subsequent "RUN" command.
However, I would like to set these variables from within the Docker container, because setting them depends on some internal logic.
Using the Dockerfile "ENV" instruction is not good, because it cannot rely on internal logic. (It cannot rely on a command run inside the Docker container.)
Normally (if this were not Docker) I would set my ~/.profile file. However, Docker does not load this file in non-interactive shells.
So at the moment I have to run each Docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting a build ARG as an ENV, like this in your Dockerfile:
ARG my_env
ENV my_env=${my_env}
Then pass my_env=prod in the build args so that the env is set for subsequent RUN commands.
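For example, the build command would look something like this (a sketch; the image tag is a hypothetical placeholder):
docker build --build-arg my_env=prod -t myimage .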
You can also use the env_file: option in a docker-compose YAML file in the case of a stack deploy.
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that would source the variables and then do the operation. I would then rewrite the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables, runs the command given as an argument, and include that script in the Docker image, as sketched below.
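A minimal sketch of that wrapper, assuming the variables live in ~/.profile as in the question (the file name with_env.sh is a hypothetical choice):
#!/bin/sh
# with_env.sh: load the profile variables, then run whatever command was given.
. ~/.profile
exec "$@"
And in the Dockerfile:
COPY with_env.sh /usr/local/bin/with_env
RUN chmod +x /usr/local/bin/with_env
RUN with_env do_something_here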

Managing multiple AWS accounts on the same computer

I have multiple AWS accounts, and depending on which project directory I'm in I want to use a different one when I type commands into the AWS CLI.
I'm aware that the AWS credentials can be passed in via environmental variables, so I was thinking that one solution would be to make it set AWS_CONFIG_FILE based on which directory it's in, but I'm not sure how to do that.
I'm using Mac OS X. The AWS CLI version is aws-cli/1.0.0 Python/2.7.1 Darwin/11.4.2, and I'm doing all this to use AWS in a Rails 4 app.
I recommend using different profiles in the configuration file, and just specify the profile that you want with:
aws --profile <your-profile> <command> <subcommand> [parameters]
If you don't want to type the profile for each command, just run:
export AWS_DEFAULT_PROFILE=<your-profile>
before a group of commands.
If you want to somehow automate the process of setting that environment variable when you change to a directory in your terminal, see Dynamic environment variables in Linux?
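For reference, those profiles live in your credentials file; a minimal sketch of ~/.aws/credentials with hypothetical profile names and placeholder keys:
[default]
aws_access_key_id = <key-id-for-account-1>
aws_secret_access_key = <secret-for-account-1>
[project-b]
aws_access_key_id = <key-id-for-account-2>
aws_secret_access_key = <secret-for-account-2>
You could then run, e.g., aws --profile project-b s3 ls to hit the second account.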
I think you can create an alias for the aws command which will export the AWS_CONFIG_FILE variable depending on the directory you are in. Something like the following (bash) may work.
First create the following shell script, let's call it match.sh, and put it in /home/user/:
export AWS_CONFIG_FILE=$(if [[ "$PWD" =~ "MATCH" ]]; then echo "ABC"; else echo "DEF"; fi)
aws "$@"
Now define an alias in your ~/.bashrc (arguments are passed through automatically):
alias awsdirbased="/home/user/match.sh"
Now whenever you want to run the "aws" command, run "awsdirbased" instead and it should work.
