I am getting familiar with Terraform and Ansible through books. Could someone enlighten me about the following block of code?
provisioner "local-exec" {
command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},' app.yml"
}
The short answer is local-exec is for anything you want to do on your local machine instead of the remote machine.
You can do a bunch of different things:
write an ssh key into your ~/.ssh to access the server
run a sleep 30 or something to make sure the next commands wait a bit for your machine to provision
write logs to your local directory (last run, date completed, etc.)
write some env_vars to your local machine you can use to access the machine
the ansible example you provided
FYI, HashiCorp hates local-exec and remote-exec. If you talk to one of their devs, they will tell you that it is a necessary evil. Other than maybe a sleep or writing this or that, avoid it for anything stateful.
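For example, here are the kinds of commands such a local-exec block might end up running on the machine where Terraform itself executes (a sketch only; the file names and the PUBLIC_IP variable are illustrative, not from the question):
sleep 30                                                       # give the new VM time to finish booting
echo "provisioned ${PUBLIC_IP} on $(date)" >> ./provision.log  # keep a local provisioning log
ssh-keyscan -H "${PUBLIC_IP}" >> ~/.ssh/known_hosts            # trust the new host from this machine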
I would interpret that to mean Terraform should execute a local command on the control node.
Reading the documentation on the local-exec provisioner, it turns out that
The local-exec provisioner invokes a local executable after a (annot.: remote) resource is created. This invokes a process on the machine running Terraform ...
and not on the remote resource.
So after Terraform has, for example, created a virtual machine, it calls an Ansible playbook to continue configuring it.
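Concretely, once the virtual machine exists, Terraform substitutes ${self.public_ip} and runs something like the following on the control node (the IP address here is made up; the trailing comma after the IP tells ansible-playbook to treat the value as an inline inventory list rather than an inventory file):
ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '203.0.113.10,' app.yml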
I need to run a bash script that gathers some parameters from server-1. I run it from my local server with
ssh user@server-1 bash -s < script.sh
I then need to use those parameters in all kinds of commands on my local server, and server-2 is involved as well. But the script is still running on server-1 because of
ssh user@server-1 bash -s < script.sh
Maybe I could use two scripts, but I want them to live only on the local server, and adding more commands to the script after the SSH call doesn't seem to work.
I would place the script on the remote server and execute it remotely via SSH.
If the script will change over time, break the workflow into 2-3 steps (a shell sketch follows below):
1. gather any additional parameters from the remote machine
2. copy the script to the remote machine using scp
3. ssh to the remote machine and "remote execute" the script there
I am not sure which parameters you need from the remote system.
I would try to hand them over as command-line options to the script in step 3.
Otherwise "hack"/patch them in before step 2.
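A shell sketch of those three steps, run from the local server (host names, paths and the way the parameters are read are assumptions, not from the question):
# 1. gather any additional parameters from the remote machine
PARAMS="$(ssh user@server-1 'cat /etc/myapp/params')"   # hypothetical source of the values
# 2. copy the script to the remote machine
scp ./script.sh user@server-1:/tmp/script.sh
# 3. "remote execute" the script, handing the parameters over as command line options
ssh user@server-1 "bash /tmp/script.sh $PARAMS"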
I am wondering: during an Ansible task, is it safe to send credentials (a password, an API key) as part of a command-line task?
No one on the remote server should be able to see the command line (and even less the credentials).
Thank you.
If you do not trust the remote server, you should never expose sensitive credentials to it, since anyone with root access on that server can easily intercept your traffic, files and memory on that server. The easiest way for someone to get your secrets would be to dump the temporary files that Ansible creates to do its job on the remote server, since that requires only the privileges of the user you are connecting as!
There is a special environment variable, ANSIBLE_KEEP_REMOTE_FILES=1, used to troubleshoot problems. It should give you an idea of what information Ansible actually stores on the remote disk, even if only for a brief moment. I executed
ANSIBLE_KEEP_REMOTE_FILES=1 ansible -m command -a "echo 'SUPER_SECRET_INFO'" -i 127.0.0.1, all
on my machine to see the files Ansible creates on the remote machine. After its execution I see a temporary file in my home directory, named ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
So let's grep for the secret info:
grep SUPER_SECRET ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
Result:
ANSIBALLZ_PARAMS = '{"ANSIBLE_MODULE_ARGS": {"_ansible_version": "2.2.2.0", "_ansible_selinux_special_fs": ["fuse", "nfs", "vboxsf", "ramfs"], "_ansible_no_log": false, "_ansible_module_name": "command", "_raw_params": "echo \'SUPER_SECRET_INFO\'", "_ansible_verbosity": 0, "_ansible_syslog_facility": "LOG_USER", "_ansible_diff": false, "_ansible_debug": false, "_ansible_check_mode": false}}'
As you can see, nothing is safe from prying eyes! So if you are really concerned about your secrets, don't use anything critical on suspect hosts; use one-time passwords, keys or revocable tokens to mitigate this issue.
It depends on how paranoid you are about these credentials. In general: no, it is not safe.
The root user on the remote host can see pretty much anything.
For example, run strace -f -p $(pidof -s sshd) on the remote host and try to execute any command via ssh.
By default Ansible writes all module invocations to syslog on the remote host; you can set no_log: true on a task to avoid this.
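To see what actually ends up in that log, here is a quick check on the remote host (a sketch; the log file location varies by distribution, /var/log/syslog is an assumption):
sudo grep 'Invoked with' /var/log/syslog    # Ansible modules log their arguments with this prefix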
There are the following two ways to run a script on the target machine:
1.  - name: run the script from the control machine directly
      script: "{{ path_to_scripts }}/script.sh"
2.  - name: copy the script to the target machine
      copy: src="{{ path_to_scripts }}/script.sh" dest="{{ path_to_scripts }}/script.sh" mode=0777
    - name: execute the script on the target machine
      command: /bin/sh {{ path_to_scripts }}/script.sh
As I am running the playbook against more than 30 target machines, I would like to know which one is the better choice.
Also, what is the performance penalty if I prefer one over the other?
If you are executing the script from the Ansible control machine (option 1), Ansible copies the script to a temporary location on the remote machine and executes it there.
So the better choice is to run the script from the control machine directly, for the reasons below:
you don't need to ssh to all 30 machines yourself to copy the script
a single task does what would otherwise take two steps (copy and execute)
there is no performance difference, as both methods end up doing the same operations
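To compare the two approaches from the shell, here is a sketch using ad-hoc commands (the inventory file name and the /tmp path are assumptions):
# option 1: the script module transfers the script and runs it in a single task
ansible all -i inventory -m script -a "/path/to/scripts/script.sh"
# option 2: copy first, then execute on the targets
ansible all -i inventory -m copy -a "src=/path/to/scripts/script.sh dest=/tmp/script.sh mode=0755"
ansible all -i inventory -m command -a "/bin/sh /tmp/script.sh"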
If the script has to do something on the remote machine, it would be better to copy it and execute it directly on the remote. I don't think you would see a noticeable performance difference in either case.
The only thing is that in case 1 you would have to ssh to the remote and execute the commands you need, which is something Ansible already does for you.
It's possible to forward ports and share files over the network, and there are plug-ins that allow running guest or host [shell] commands during Vagrant's provisioning process.
What I'd like to do is be able to (perhaps through a bash alias) run a command in the Vagrant guest/VM, and have this execute a command on the host, ideally with a variable being passed on the command line.
Example: In my host I run the Atom editor (same applies to TextMate, whatever). If I want to work on a shared file in the VM, I have to manually open that file from over in the host, either by opening it directly in the editor, or running the 'atom filename' shell command.
I want parity, so while inside the VM, I can run 'atom filename', and this will pass the filename to the 'atom $1' script outside of the VM, in the host, and open it in my host editor (Atom).
Note: We use Salt for Vagrant Provisioning, and NFS for mounting, for what it's worth. And of course, ssh with key.
Bonus question: Making this work with .gitconfig as its merge conflict editor (should just work, if the former is possible, right?).
This is a very interesting use case that I haven't heard of before. There isn't a native way of handling this in Vagrant, but the functionality was added to Packer in the form of a local shell provisioner. You could open a GitHub issue on the Vagrant project and propose the same feature. Double-check the current list of issues, though, because it's possible someone has beaten you to it.
In the meantime, though, you do have a workaround if you're determined to do this...
Create an ssh key pair on your host and add the public key to your host user's ~/.ssh/authorized_keys.
Use Salt to place the private key in /home/vagrant/.ssh on the box.
Use a shell provisioner to run remote ssh commands on the host from the guest.
These commands would take the form of...
ssh username@192.168.0.1 "ls -l ~"
In my experience, the 192.168.0.1 IP always points back to the host, but your mileage may vary. I'm not a networking expert by any means.
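From inside the guest, the "parity" part could then be a small shell function in ~/.bashrc along these lines (a sketch; the host user "me", the host project path /Users/me/project and the /vagrant mount point are all assumptions):
atom() {
    # resolve the file inside the guest, then translate the shared-folder
    # path back to the corresponding path on the host
    local guest_path host_path
    guest_path="$(readlink -f "$1")"
    host_path="/Users/me/project${guest_path#/vagrant}"
    # open the editor on the host over the ssh key set up above
    ssh me@192.168.0.1 "atom \"$host_path\""
}
For the .gitconfig bonus question, git won't see a shell function, so you would wrap the same logic in a small script on the guest's PATH and point your merge tool at that.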
I hope this works for you and I think a local shell provisioner for Vagrant would be a reasonable feature.
I have a shell provisioning script that invokes a command that requires user input - but when I run vagrant provision, the process hangs at that point in the script, as the command is waiting for my input, but there is nowhere to give it. Is there any way around this - i.e. to force the script to run in some interactive mode?
The specifics are that I am creating a clean Ubuntu VM and then invoking the Heroku CLI to download a database backup (this is in my provisioning script):
curl -o /tmp/db.backup `heroku pgbackups:url -a myapp`
However, because this is a clean VM, and therefore this is the first time I have run a Heroku CLI command, I am prompted for my login credentials. Because the script is being managed by Vagrant, there is no interactive shell attached, so the script just hangs there.
If you want to pass temporary input or variables to a Vagrant provisioning script, you can supply the credentials as temporary environment variables for that command by placing them first on the same line:
username=x password=x vagrant provision
and access them from within the Vagrantfile as
$u = ENV['username']
$p = ENV['password']
Then you can pass them as arguments to your bash script:
config.vm.provision "shell" do |s|
  s.inline = "echo username: $1, password: $2"
  s.args   = [$u, $p]
end
You can install something like expect in the VM to handle the interactive prompts from that command.
I'm assuming you don't want to hard-code your credentials in plain text, hence trying to force an interactive mode.
The thing is, just like you, I don't see such an option in the vagrant provision docs (http://docs.vagrantup.com/v1/docs/provisioners/shell.html), so one way or another you need to embed the authentication within your script.
Have you thought about getting a token and using the Heroku REST API instead of the CLI?
https://devcenter.heroku.com/articles/authentication
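As a rough sketch of combining the token idea with the argument passing from the first answer (the script and variable names are assumptions; the Heroku CLI reads the HEROKU_API_KEY environment variable, which avoids the interactive login prompt):
#!/bin/bash
# provision.sh -- receives the API token as the first provisioner argument
export HEROKU_API_KEY="$1"    # picked up by the CLI instead of prompting for login
curl -o /tmp/db.backup "$(heroku pgbackups:url -a myapp)"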