I have multiple AWS accounts, and depending on which project directory I'm in I want to use a different one when I type commands into the AWS CLI.
I'm aware that the AWS credentials can be passed in via environmental variables, so I was thinking that one solution would be to make it set AWS_CONFIG_FILE based on which directory it's in, but I'm not sure how to do that.
I'm on Mac OS X, the AWS CLI version is aws-cli/1.0.0 Python/2.7.1 Darwin/11.4.2, and I'm doing all of this in order to use AWS in a Rails 4 app.
I recommend defining multiple profiles in the configuration file and specifying the one you want per command with:
aws --profile <your-profile> <command> <subcommand> [parameters]
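For example, the file pointed to by AWS_CONFIG_FILE (typically ~/.aws/config) might define the profiles like this; the profile names and placeholder values here are illustrative, not from the question:
[default]
aws_access_key_id = <default-access-key>
aws_secret_access_key = <default-secret-key>
[profile project-a]
aws_access_key_id = <project-a-access-key>
aws_secret_access_key = <project-a-secret-key>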
If you don't want to type the profile for each command, just run:
export AWS_DEFAULT_PROFILE=<your-profile>
before a group of commands.
If you want to somehow automate the process of setting that environment variable when you change to a directory in your terminal, see Dynamic environment variables in Linux?
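For instance, a minimal bash sketch of that idea, using PROMPT_COMMAND to re-check the directory before each prompt (the directory and profile names are placeholders):
# in ~/.bashrc
__set_aws_profile() {
  case "$PWD" in
    */project-a*) export AWS_DEFAULT_PROFILE=project-a ;;
    */project-b*) export AWS_DEFAULT_PROFILE=project-b ;;
    *) unset AWS_DEFAULT_PROFILE ;;
  esac
}
PROMPT_COMMAND=__set_aws_profile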
I think you can create an alias for the aws command which exports the AWS_CONFIG_FILE variable depending on the directory you are in. Something like the following (bash) may work.
First, create the following shell script; let's call it match.sh and put it in /home/user/:
#!/bin/bash
# Pick the config file based on the current directory, then pass all arguments through.
export AWS_CONFIG_FILE=$(if [[ "$PWD" =~ "MATCH" ]]; then echo "ABC"; else echo "DEF"; fi)
aws "$@"
Now define an alias in your ~/.bashrc (arguments are appended to an alias automatically, so nothing else is needed):
alias awsdirbased="/home/user/match.sh"
Now whenever you want to run the "aws" command, run "awsdirbased" instead and it should work.
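For example (assuming MATCH, ABC, and DEF above were replaced with a real directory substring and two real config-file paths):
cd ~/projects/MATCH-app
awsdirbased s3 ls   # runs "aws s3 ls" with AWS_CONFIG_FILE set to ABC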
Related
I am attempting to use the AWS CLI with a bash for loop to iteratively purge multiple SQS message queues. The script works almost as intended; the problem is the return value each time the AWS CLI sends a request. When the request is successful, it returns an empty value and opens an interactive pager in the terminal. I then have to manually type q to exit the pager before the loop continues to the next iteration. This becomes very tedious and time-consuming when purging a large number of queues.
Is there a way to configure AWS CLI to disable this interactive pager from popping up for every return value? Or a way to pipe the return values into a separate file instead of being displayed?
I have played around with the different output types (text, yaml, JSON) but haven't had any luck. The --no-paginate parameter doesn't change the behavior either.
Here's an example of the bash script I'm trying to run:
for x in 1 2 3; do
  aws sqs purge-queue --queue-url https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo
done
Having just run into this issue myself, I was able to disable the behaviour by invoking the AWS CLI as AWS_PAGER="" aws ....
Alternatively you could simply export AWS_PAGER="" at the top of your (bash) script.
Source: https://github.com/aws/aws-cli/pull/4702#issue-344978525
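For example, applied to the loop from the question (URL placeholders unchanged):
export AWS_PAGER=""
for x in 1 2 3; do
  aws sqs purge-queue --queue-url https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo
done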
You can also use --no-cli-pager in AWS CLI version 2.
See the "Client-side pager" section here https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html
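For example:
aws sqs list-queues --no-cli-pager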
You can disable the pager either by exporting AWS_PAGER="" or by modifying your AWS CLI config file.
export AWS_PAGER=""
# or update your ~/.aws/config with
[default]
cli_pager=
Alternatively, you can explicitly set less as the pager:
export AWS_PAGER="less"
or make the corresponding change in the config file.
Ref: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html#cli-usage-pagination-clientside
You can set the environment variable PAGER to "cat" to stop the AWS CLI from starting less:
PAGER=cat aws sqs list-queues
I set it up as a shell alias to make my life easier:
# ~/.zshrc
alias aws="PAGER=cat aws"
I am using the AWS CLI v2 via Docker, and passing --env AWS_PAGER="" to the docker run command fixed this issue for me on Windows 10 using Git Bash.
I set it up as an alias as well so things work with jq.
How to set your docker env values:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file
Example alias:
alias aws='docker run --rm -it -v c:/users/me/.aws:/root/.aws --env AWS_PAGER="" amazon/aws-cli'
Inside your ~/.aws/config file, add (under your profile's section, e.g. [default]):
cli_pager=
I am running a bash script with sudo and have tried the below, but I am getting the error below when using aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original environment? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`ing the path to the `config`.
You can run the command as the original user, like this:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user) together; though honestly, @Philippe's answer is the better, more correct one.
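On the question's other point, passing the config location explicitly: the AWS CLI reads the AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE environment variables, so a sketch like this should also work (the /home/myuser paths are placeholders for the real user's home):
sudo AWS_CONFIG_FILE=/home/myuser/.aws/config \
     AWS_SHARED_CREDENTIALS_FILE=/home/myuser/.aws/credentials \
     bash /path/to/script.sh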
I have some variables I use quite frequently to configure and tweak my Ubuntu LAMP server stack, but I'm getting tired of having to copy and paste the export commands into my SSH window to register each variable and its value.
Essentially, I would like to keep my variables and their values in a file inside the user's home directory, so that when I type a command into an SSH window or run a bash script the variables are readily available. I don't want to set any system-wide variables, as some of these variables set passwords and the like.
What's the easiest way of doing this?
UPDATE 1
So essentially I could store the variables and values in a file, and then each time I log into an SSH session I source this file once to set up the variables?
cat <<"EOF" >> ~/my_variables
export foo='bar'
export hello="world"
EOF
ssh root@example.com
$ source ~/my_variables
$ echo "$foo"
bar
and then to call the variable from within a script I place source ~/my_variables at the top of the script?
#!/bin/bash
source ~/my_variables
echo "$hello"
Just add your export commands to a file and then run source <the-file> (or . <the-file> for non-bash shells) in your SSH session.
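If you'd rather not source the file by hand on every login, you can have your login shell do it automatically; a minimal sketch, assuming bash reads ~/.bashrc on your SSH logins:
echo 'source ~/my_variables' >> ~/.bashrc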
I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and the su command leaves $HOME and other environment variables intact (note that the sudo is redundant here). "su -" does not help either.
So do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
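Applied to the lines from the question, a minimal sketch of that fix:
THIS_PATH="/some/path"
# ~ and $HOME resolve to /root here, so write to the ubuntu user's .bashrc explicitly
echo "export SOME_PATH=$THIS_PATH" >> /home/ubuntu/.bashrc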
I found out the problem. It seems that source ~/.bashrc isn't enough to restart the shell; the environment variables worked after I referenced them in another bash script.
I'm trying to create a shell script to automate my local dev environment. I need it to start some processes (Redis, MongoDB, etc.), set the environment variables, and then start the local web server. I'm working on OS X El Capitan.
Everything is working so far, except the environment variables. Here is the script:
#!/bin/bash
# Starting the Redis Server
if pgrep "redis-server" > /dev/null
then
    printf "Redis is already running.\n"
else
    brew services start redis
fi
# Starting the Mongo Service
if pgrep "mongod" > /dev/null
then
    printf "MongoDB is already running.\n"
else
    brew services start mongodb
fi
# Starting the API Server
printf "\nStarting API Server...\n"
source path-to-file.env
pm2 start path-to-server.js --name="api" --watch --silent
# Starting the Auth Server
printf "\nStarting Auth Server...\n"
source path-to-file.env
pm2 start path-to-server.js --name="auth" --watch --silent
# Starting the Client Server
printf "\nStarting Local Client...\n"
source path-to-file.env
pm2 start path-to-server.js --name="client" --watch --silent
The .env file is using the format export VARIABLE="value"
The environment variables are just not being set at all. But if I run the exact command source path-to-file.env before running the script, it works. I'm wondering why the command works on its own but not inside the shell script.
Any help would be appreciated.
When you execute a script, it runs in a child shell, and its environment settings are lost when that shell exits. If you want to configure your interactive shell from a script, you must source the script in your interactive shell:
$ source start-local.sh
Now the environment should appear in your interactive shell. If you want that environment to be inherited by subshells, you must also export any variables that will be required. So, for instance, in path-to-file.env, you'd want lines like:
export MY_IMPORTANT_PATH_VAR="/example/blah"
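A quick way to see the difference (the file and variable names here are illustrative):
$ cat test.env
MY_VAR="not exported"
export MY_EXPORTED_VAR="exported"
$ source test.env
$ bash -c 'echo "[$MY_VAR] [$MY_EXPORTED_VAR]"'
[] [exported]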