Terraform `local-exec` to set a local alias - shell

When I create an AWS instance in Terraform, I'm trying to set up an alias to quickly ssh into the newly created host. I do this by running:
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
  command = "alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'"
}
When I see the output of this execution:
aws_instance.ldap_instance (local-exec): Executing: /bin/sh -c "alias sshopenldap='ssh -i ~/.ssh/mykey.pem ubuntu@ec2-IP.compute-1.amazonaws.com'"
It seems to be OK, but the alias is not set. Could it be that the command is run in a new shell scope rather than in the current shell? If I copy-paste the command as-is into the console, the alias is set fine.
Is there a workaround for this?
I'm running Terraform from a terminal on OS X Mountain Lion.

You could try something like:
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
  command = "echo \"alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'\" > script.sh && source script.sh && rm -f script.sh"
}
Not sure how the quote escaping will go...

It is indeed not possible to set an alias for the current shell from a script file, which is what you are trying to do. The only way around this is to not execute the script but to source it instead. So:
source somefile.sh
instead of executing it should do the trick.
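In the Terraform case that means the alias cannot survive the provisioner: local-exec spawns its own /bin/sh, which exits as soon as the command finishes. A minimal workaround sketch, assuming you change the provisioner to write the alias line into a file (the sshopenldap.alias name here is made up), is to source that file from your own shell after the apply:
# Sketch: assumes the local-exec command was changed to something like
#   command = "echo \"alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'\" > sshopenldap.alias"
# Terraform's child /bin/sh is gone once the provisioner finishes, so the
# alias has to be loaded into YOUR interactive shell afterwards:
terraform apply
source ./sshopenldap.alias   # or: . ./sshopenldap.alias
sshopenldap                  # ssh into the new host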

Related

execute aws command in script with sudo

I am running a bash script with sudo and have tried the approach below, but I'm getting the error below from aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original environment? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside this script is an `aws cp` call
Error
The config profile (name) could not be found
I have also tried `export` the name profile and `source` the path to the `config`
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash; source runs the script in the same shell as your open terminal window, which keeps the same environment (such as the user). Honestly, though, @Philippe's answer is the better, more correct one.
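For reference, a rough sketch of both options; the bucket, paths and file names are placeholders, and aws s3 cp stands in for whatever copy command the script actually runs:
# Option 1 (as above): drop back to the invoking user so the CLI finds that
# user's ~/.aws/config and credentials.
sudo -u "$SUDO_USER" aws s3 cp s3://my-bucket/backup.tar.gz /tmp/backup.tar.gz

# Option 2: stay as root but point the CLI at the original user's config via
# its standard environment variables (paths are illustrative).
sudo env AWS_CONFIG_FILE=/home/myuser/.aws/config \
         AWS_SHARED_CREDENTIALS_FILE=/home/myuser/.aws/credentials \
         aws s3 cp s3://my-bucket/backup.tar.gz /tmp/backup.tar.gz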

AWS EC2 User Data: Commands not recognized when using sudo

I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and the su command leaves $HOME and other environment variables intact (note that sudo is redundant there). "su -" does not help either.
So do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
I found the problem. It seems that source ~/.bashrc isn't enough to refresh the shell; the environment variables worked after I referenced them in another bash script.
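For anyone hitting the same thing, here is a minimal sketch combining both points; /some/path and the .project_env file name are made up for illustration:
#!/bin/bash
# User-data runs as root, so ~ and $HOME point at /root. Write the variable to
# the ubuntu user's home by its full path and source that file explicitly from
# whichever script needs it, instead of relying on ~/.bashrc being re-read.
echo 'export SOME_PATH=/some/path' >> /home/ubuntu/.project_env
chown ubuntu:ubuntu /home/ubuntu/.project_env
su ubuntu -c '. /home/ubuntu/.project_env && echo "SOME_PATH is $SOME_PATH"'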

bash script for ssh connect and change folder

There is the following bash script:
#!/bin/bash
set -o errexit
# Common parameters
server="some_server"
login="admin"
default_path="/home/admin/web/"
html_folder="/public_html"
# Project parameters
project_folder="project_name"
go_to_folder() {
  ssh "$login@$server"
  cd "/home/admin/web/"
}
go_to_folder
I get the error "deploy.sh: line 16: cd: /home/admin/web/: No such file or directory", but if I connect manually and change directory with cd, it works. How can I fix my script?
It is quite simple: you are running cd on the local machine, not on the target machine. The commands to be run over ssh must be passed in-line as arguments to it; written on a separate line, your script opens an interactive session on the remote machine (effectively a no-op here) and then runs the cd locally once that session ends.
go_to_folder() {
  ssh "$login@$server" "cd /home/admin/web/"
}
Or, a cleaner way would be to use a here-doc:
go_to_folder() {
  ssh "$login@$server" <<EOF
cd /home/admin/web/
EOF
}
Another way to make ssh read the commands to run from standard input is to use a here-string (<<<); see the sketch below.
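A minimal here-string version of the same function; the ls is just an illustrative command, since a lone cd in a non-interactive ssh session exits with no lasting effect, so chain whatever should run inside that directory with &&:
go_to_folder() {
  ssh "$login@$server" <<< "cd /home/admin/web/ && ls"
}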

Append to a remote environment variable for a command started via ssh on RO filesystem

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser whose accepted answer wants me to create an environment file that gets interpreted by ssh, and another question that can be solved by creating and copying over a script file that gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re the comments, to show that this does what it's supposed to, given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
Works for me here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note the:
- single quotes around the entire command, to avoid expanding it locally
- embedded single quotes, which are therefore escaped with the signature '\'' pattern (another way is '"'"')
- double quotes in the assignment (only required if the value has whitespace, but it's good practice not to depend on that, especially if the value is outside your control)
- avoidance of $VAR in the command: if I typed e.g. echo "$TEST", it would be expanded by the local shell before the remote substitution happens
A convenient way around this is to make the variable replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
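Applied to the original question, the same pattern would look something like this (the host and the script path are placeholders):
# Single quotes around the whole remote command: the local shell passes
# $PYTHONPATH through literally and the remote shell expands it.
ssh -t somehost 'PYTHONPATH="$PYTHONPATH:/my/special/folder" python /path/to/script.py'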

Embedded terminal startup script

I usually use bash scripts to set up my environments (mostly aliases that interact with Docker), e.g.:
#!/bin/bash
# ops-setup.sh
PROJECT_NAME="my_awesome_project"
PROJECT_PATH=`pwd`/${BASH_SOURCE[0]}
WEB_CONTAINER=${PROJECT_NAME}"_web_1"
DB_CONTAINER=${PROJECT_NAME}"_db_1"
alias chroot_project="cd $PROJECT_PATH"
alias compose="chroot_project;COMPOSE_PROJECT_NAME=$PROJECT_NAME docker-compose"
alias up="compose up -d"
alias down="compose stop;compose rm -f --all nginx web python"
alias web_exec="docker exec -ti $WEB_CONTAINER"
alias db="docker exec -ti $DB_CONTAINER su - postgres -c 'psql $PROJECT_NAME'"
# ...
I'd like them to be run when I open the embedded terminal.
I tried Startup Tasks but they are not run in my terminal contexts.
Since I have a dedicated script for each of my projects, I can't just run them from .bashrc or similar.
How can I get my aliases set automatically when a terminal opens?
Today I'm running . ./ops-setup.sh manually each time I open a new embedded terminal.
You can create an alias in your .bashrc file like so:
alias ops-setup='bash --init-file <(echo ". /home/test/ops-setup.sh"; echo ". /home/test/.bashrc")'
If you call ops-setup, it will open a new bash inside your terminal and source your .bashrc like it normally would, as well as your own script.
The only way I see to completely automate this is to modify the source code of your shell, e.g. bash, and recompile it; the files that are sourced at startup are hardcoded in the source code.
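Short of recompiling, one possible workaround, sketched under the assumption that the embedded terminal opens in the project root and every project keeps its setup script there under the same name as in the question, is a small hook in your regular .bashrc:
# Added to ~/.bashrc: if the directory the terminal opens in has an
# ops-setup.sh, source it so the project aliases are available immediately.
if [ -f ./ops-setup.sh ]; then
  . ./ops-setup.sh
fi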
