cloud-init runcmd (using MAAS) - bash

I'm unable to run bash scripts in "runcmd:" that aren't inline.
runcmd:
- [ bash, -c, echo "=========hello world=========" >>foo1.bar ]
- [ bash, -c, echo "=========hello world=========" >>foo2.bar ]
- [ bash, -c, /usr/local/bin/foo.sh ]
The first two lines run successfully on the deployed Ubuntu instance. However, foo.sh doesn't seem to run.
Here is /usr/local/bin/foo.sh:
#!/bin/bash
echo "=========hello world=========" >>foosh.bar
foo.sh has executable permissions for root and resides on the MAAS server.
I've looked at the following but they don't seem to sort out my issue:
Cannot make bash script work from cloud-init
run GO111MODULE=on go install . ./cmd/... in cloud init
https://gist.github.com/aw/40623531057636dd858a9bf0f67234e8
Any help or tips would be greatly appreciated.

Anything you run using runcmd must already exist on the filesystem. There is no provision for automatically fetching something from a remote host.
You have several options for getting files there. Two that come to mind immediately are:
You could embed the script in your cloud-init configuration using the write_files directive:
write_files:
  - path: /usr/local/bin/foo.sh
    permissions: '0755'
    content: |
      #!/bin/bash
      echo "=========hello world=========" >>foosh.bar
runcmd:
  - [bash, /usr/local/bin/foo.sh]
You could fetch the script from a remote location using curl (or similar tool):
runcmd:
- [curl, -o, /usr/local/bin/foo.sh, http://somewhere.example.com/foo.sh]
- [bash, /usr/local/bin/foo.sh]
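A sketch combining the fetch-and-run approach, under the assumption that the placeholder URL above is reachable from the deployed node; marking the file executable lets you drop the explicit bash invocation:
runcmd:
  # fetch the script from a reachable host (URL is a placeholder)
  - [ curl, -fsSL, -o, /usr/local/bin/foo.sh, http://somewhere.example.com/foo.sh ]
  # make it executable, then run it directly
  - [ chmod, '0755', /usr/local/bin/foo.sh ]
  - [ /usr/local/bin/foo.sh ]
Either way, /var/log/cloud-init-output.log on the deployed node shows whether each runcmd entry actually ran.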

Related

run GO111MODULE=on go install . ./cmd/... in cloud init

I have a bash script which is deployed with cloud init, my bash script contains the following part of code
GO111MODULE=on go install . ./cmd/...
when running my bash script directly in the terminal of the deployed server, it works as expected. But when I run it with runcmd in the cloud config, this part of the script:
GO111MODULE=on go install . ./cmd/...
does not get executed. Does anyone know why?
runcmd:
- [ bash, /usr/local/bin/myscript.sh ]
A proper shell execution in runcmd would be (as seen in Cloud config examples):
- [ bash, -c, /usr/local/bin/myscript.sh ]
or:
- [ /usr/local/bin/myscript.sh ]
Assuming your script starts with a shebang (#!/bin/bash).
Plus, you need to set any environment variables inside the script, as the Cloud config examples do not show an obvious way to set them.
#!/bin/bash
export GO111MODULE=on
export ...
Thanks to the tip from VonC, I was able to fix the issue. I added the following to myscript.sh:
GOCACHE=/root/.cache/go-build
export GOCACHE
export GO111MODULE=on
go install . ./cmd/...
runcmd:
- [ bash, -c, /usr/local/bin/myscript.sh ]
The script now deploys and runs from cloud-init.
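For reference, a complete myscript.sh along those lines could look like the following; the project directory is a placeholder, and set -e is just an assumption to make failures visible in the cloud-init logs:
#!/bin/bash
set -e                                 # stop at the first failing command
export GOCACHE=/root/.cache/go-build   # cloud-init runs as root, so give Go an explicit build cache
export GO111MODULE=on
cd /opt/myproject                      # placeholder: wherever the Go module was checked out
go install . ./cmd/...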

Why does running cloud-init script on EC2 change my shell?

Calling all cloud-init and EC2 gurus...
I can't figure this out. I'm using a cloud-init script to bootstrap an EC2 aws-ami instance (through AWS CloudFormation) and when I include the write_files property it changes the command prompt on the instance to -bash-4.2$. If I don't include write_files, I get the regular EC2 shell.
Here is my script so far:
#cloud-config
repo_update: true
repo_upgrade: all
packages:
- gcc
- git
- ruby24
- ruby24-devel
runcmd:
- update-alternatives --set ruby /usr/bin/ruby2.4
write_files:
  - path: /home/ec2-user/some-file.yml
    owner: root:root
    permissions: '0644'
    content: |
      <<--SOME-CONTENT-->
final_message: 'The Build Server is ready!'
Anybody know why this is happening or what I might be doing wrong that's making cloud-init change the shell? Or maybe it's a bug/known-issue with cloud-init? This is driving me nuts.
I've already checked the logs /var/log/cloud-init.log and /var/log/cloud-init-output.log and there are no errors or anything to suggest anything went wrong.
I figured it out: something within cloud-init was not setting the $PS1 variable, so bash's built-in default \s-\v\$ was used.
I fixed it by bootstrapping a modified ~/.bashrc file.
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi

parse_git_branch() {
  if ! git rev-parse --git-dir > /dev/null 2>&1; then
    return 0
  fi
  git_branch=$(git branch 2>/dev/null | sed -n '/^\*/s/^\* //p')
  echo "[$git_branch]"
}

PS1="${debian_chroot:+($debian_chroot)}\[\033[38;5;39m\]\u@\h\[\033[00m\]:\[\033[01;36m\]\w\[\033[00m\]\[\033[38;5;118m\]\$(parse_git_branch)\[\033[00m\]$ "

AWS EC2 User Data: Commands not recognized when using sudo

I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and su leaves $HOME and other environment variables intact (note that sudo is redundant here). "su -" does not help either.
So do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
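A minimal sketch of the corrected snippet inside run_script, reusing the question's placeholder path; the extra export is needed because .bashrc is only read by future interactive shells:
#!/bin/bash
THIS_PATH="/some/path"
# write to the real file, not ~ (su without "-" keeps root's $HOME, so ~ is /root)
echo "export SOME_PATH=$THIS_PATH" >> /home/ubuntu/.bashrc
# make the variable available to the current, non-interactive run as well
export SOME_PATH="$THIS_PATH"
"$SOME_PATH/application"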
I found the problem. It seems that source ~/.bashrc isn't enough to refresh the current shell's environment; the variables worked after I referenced them in another bash script.

Ansible doesn't load ~/.profile

I'm wondering why Ansible doesn't source the ~/.profile file before executing the template module on a host.
Remote host ~/.profile:
export ENV_VAR=/usr/users/toto
A single Ansible task:
- template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1
Ansible fails with:
fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
Ansible does not run remote tasks (command, shell, ...) in an interactive or login shell. It's the same as when you execute a command remotely via ssh user@host "which python".
Sourcing ~/.bashrc often won't work, because the Ansible shell is not interactive and the default ~/.bashrc ignores non-interactive shells (check its first lines).
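For reference, the guard near the top of a stock Debian/Ubuntu ~/.bashrc looks roughly like this, which is why sourcing it from a non-interactive task has no effect:
# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac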
The best solution I found for executing commands as the user, as if after an interactive SSH login, is:
- hosts: all
  tasks:
    - name: source user profile file
      # become: yes
      # become_user: my_user   # in case you want to become a different user (make sure the acl package is installed)
      shell: bash -ilc 'which python'   # example command which prints the python path
      register: which_python

    - debug:
        var: which_python
bash '-i' means interactive shell, so .bashrc won't be ignored.
'-l' means login shell, which sources the full user profile (/etc/profile and ~/.bash_profile or ~/.profile; see the bash manual page for details).
Explanation of my example: my ~/.bashrc sets a specific python from an Anaconda installation under that user.
Ansible is not running tasks in an interactive shell on the remote host. Michael DeHaan has answered this question on github some time ago:
The uber-basic description is ansible isn't really doing things through the shell, it's transferring modules and executing scripts that it transfers, not using a login shell.
i.e. Why does an SSH remote command get fewer environment variables then when run manually?
It's not a continuous shell environment, basically, nor is it logging in and typing commands and things.
You should see the same result (undefined variable) by running this:
ssh <host> 'echo $ENV_VAR'
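One way to make the original template task work, sketched under the assumption that ENV_VAR really is exported from the remote user's profile, is to read it in a login shell first and reuse the registered result:
- name: Read ENV_VAR from the user's login environment
  shell: bash -lc 'echo $ENV_VAR'
  register: env_var

- name: Render the template into that directory
  template:
    src: file1.template
    dest: "{{ env_var.stdout }}/file1"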
In a lot of places I've used the structure below:
- name: Task Name
  shell: ". /path/to/profile;command"
When Ansible escalates privileges with sudo, it doesn't invoke the sudo user's login shell.
We need to change the way sudo is called, invoking it with the -i and -H flags:
"sudo_flags=-H" in your ansible.cfg file.
If you can run as root, you can use runuser.
- shell: runuser -l '{{ install_user }}' -c "{{ cmd }}"
This effectively runs the command as install_user in a fresh login shell, as if you had used su - install_user (which loads the profile, though it might be .bash_profile and not .profile...) and then executed cmd.
I'd try not to run everything as root just so you can run it as someone else, though...
If you can modify the configuration of your target host and don't want to change your Ansible YAML code, you can try this: add the variable ENV_VAR=/usr/users/toto to the /etc/environment file rather than ~/.profile.
shell: "bash -l scala -version"
by using bash -l will allow ansible to load corresponding bash_profile.
bash: '-i' (interactive shell) won't allow the ansible to run other task.
Regarding "add the variable ENV_VAR=/usr/users/toto into the /etc/environment file rather than ~/.profile":
You really can use /etc/environment, but only if the variable has a static value. If the variable gets its value from another variable or a command substitution, it doesn't work. For example, if we put this line into /etc/environment
XDG_RUNTIME_DIR=/run/user/$(id -u)
Ansible sees literally XDG_RUNTIME_DIR=/run/user/$(id -u), not XDG_RUNTIME_DIR=/run/user/1012.
And if we put this line into ~/.bash_profile or ~/.bashrc:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
the user sees XDG_RUNTIME_DIR=/run/user/1012 (if the user's id is 1012) when working interactively, but Ansible doesn't get the XDG_RUNTIME_DIR variable at all.
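A quick way to check what Ansible actually picks up for a given variable (this assumes fact gathering is enabled, since ansible_env comes from facts):
- name: Show what Ansible sees for XDG_RUNTIME_DIR
  debug:
    var: ansible_env.XDG_RUNTIME_DIR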

.zshrc is not loaded in Ansible?

I am experimenting to see whether I can check the version of bundle on localhost using ansible-playbook local.yml, as shown in local.yml below.
local.yml
---
- hosts: local
  remote_user: someuser
  tasks:
    - name: Check bundle version
      shell: "{{ansible_user_shell}} -l -c 'bundle --version'"
      args:
        chdir: "/path/to/rails/dir"
The inventory file is as follows:
hosts
[local]
127.0.0.1
[local:vars]
ansible_ssh_user=someuser
However, I got the following error:
stderr: zsh:1: command not found: bundle
I have no idea why I am getting this error, because I confirmed bundle is installed on localhost. I also found that the shell module does not use a login shell, so environment variables in .zshrc are not loaded; that's why I ran zsh with the -l (login shell) option. But it's still not working. Is there anything I am missing?
I figured out the problem by myself. The problem was my zsh configuration. I thought .zshrc was executed on every login. This is inaccurate: .zshrc is only loaded for interactive shells. In the above case, the command is NOT run in an interactive shell, so .zshrc was not loaded.
To load .zshrc whenever I use a login shell, I created a .zprofile (which is loaded by login shells) as follows:
# include .zshrc if it exists
if [ -f "$HOME/.zshrc" ]; then
  . "$HOME/.zshrc"
fi
Another solution might be to add the -i (interactive shell) option :)
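An equivalent sketch that calls zsh explicitly as a login shell, reusing the question's placeholder path:
- name: Check bundle version via a zsh login shell
  shell: zsh -lc 'bundle --version'
  args:
    chdir: /path/to/rails/dir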
