I'm attempting to write an Ansible task that utilizes an environment variable on the remote host.
Based on the docs I thought to use either `lookup('env', 'SSH_AUTH_SOCK')` or `ansible_env.SSH_AUTH_SOCK`, but neither returns the correct value. If I use the former, it returns the value from my local host (not the remote host). If I use the latter, it returns nothing.
If I ssh into the machine I'm able to run `echo $SSH_AUTH_SOCK` without issue.
My understanding was that ansible_env was the proper access point for remote host environment variables but that seems to not be the case.
Any help is appreciated.
It is possible the env variable (SSH_AUTH_SOCK) is not in the remote's environment, so it returns nothing. One way to rule this out is to check a variable that is always available, e.g. USER or SSH_CLIENT. If you can get that value, then you can safely assume SSH_AUTH_SOCK is not set in the remote's environment.
- debug: msg={{ ansible_env.USER }}
The reason you see SSH_AUTH_SOCK set when you ssh into the machine could be that your login profile or shell startup script starts ssh-agent, which sets the SSH_AUTH_SOCK variable to its Unix socket so that ssh-add works correctly. Note that Ansible gathers ansible_env over a non-interactive SSH session, so variables exported only by interactive login profiles may not appear there.
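If the variable is only set for interactive logins, one workaround (a sketch, not from the original answer; the task names are illustrative) is to read it with a shell task on the remote host:

```yaml
- name: Read SSH_AUTH_SOCK on the remote host
  shell: echo "$SSH_AUTH_SOCK"
  register: auth_sock

- name: Show the value (empty string if unset for this session)
  debug:
    msg: "{{ auth_sock.stdout }}"
```

Even this runs in a non-interactive shell, so an agent started only by an interactive login profile may still not be visible.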
I am getting familiar with Terraform and Ansible through books. Could someone enlighten me about the following block of code?
provisioner "local-exec" {
command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},' app.yml"
}
The short answer is that local-exec is for anything you want to do on your local machine instead of the remote machine.
You can do a bunch of different things:
write an ssh key into your ~/.ssh to access the server
run a sleep 30 or something to make sure the next commands wait a bit for your machine to provision
write logs to your local directory (last run, date completed, etc.)
write some env_vars to your local machine you can use to access the machine
the ansible example you provided
FYI, HashiCorp discourages local-exec and remote-exec. If you talk to one of their developers, they will tell you it is a necessary evil. Other than maybe a sleep or writing this or that, avoid it for any stateful data.
I would interpret that as: Terraform should execute a local command on the control node.
Reading the documentation about the local-exec provisioner, it turns out that
The local-exec provisioner invokes a local executable after a (annot.: remote) resource is created. This invokes a process on the machine running Terraform ...
and not on the remote resource.
So after Terraform has, for example, created a virtual machine, it calls an Ansible playbook to proceed further on it.
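As a sketch (the null_resource wrapper and the aws_instance.app names are my own assumptions, not from the book), the same command is often attached to a null_resource so the playbook step can be re-run without recreating the VM:

```hcl
resource "null_resource" "run_ansible" {
  # Re-run the playbook whenever the instance is replaced.
  triggers = {
    instance_id = aws_instance.app.id
  }

  provisioner "local-exec" {
    # Executes on the machine running `terraform apply`, not on the VM.
    command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${aws_instance.app.public_ip},' app.yml"
  }
}
```

The trailing comma inside '${...},' is deliberate: it tells ansible-playbook to treat the -i argument as an inline inventory list rather than a path to an inventory file.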
I have an ever-changing IP address that I'd like to write into an env variable $MY_IP and use in my SSH config. But this doesn't work:
Host my_server
ForwardAgent yes
User root
Port 22
HostName $MY_IP
I also read that something like ${MY_IP} could work, but alas, no luck so far.
The only option I see is writing a dedicated shell script that does the work. Or does anybody have another idea?
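For reference, the shell-script route mentioned above could look like this (a sketch; the paths, the snippet name, and the example IP are illustrative). It regenerates a config snippet whenever the IP changes, which ~/.ssh/config can pull in via an Include directive (supported since OpenSSH 7.3):

```shell
#!/bin/sh
set -eu

# Illustrative default (TEST-NET-3); normally MY_IP is exported by
# whatever mechanism tracks the changing address.
MY_IP="${MY_IP:-203.0.113.7}"
SNIPPET="${HOME}/.ssh/config.d/my_server"

mkdir -p "$(dirname "$SNIPPET")"

# Regenerate the snippet; add "Include ~/.ssh/config.d/*" near the top
# of ~/.ssh/config so ssh picks it up.
cat > "$SNIPPET" <<EOF
Host my_server
    ForwardAgent yes
    User root
    Port 22
    HostName $MY_IP
EOF

echo "Wrote $SNIPPET"
```

Re-running the script after $MY_IP changes keeps `ssh my_server` working without touching the main config file.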
I am familiar with the ansible-vault feature.
Our passwords are stored as a call to an external lookup (to be specific, a CyberArk password lookup).
However, a regular user can still see them with a simple debug command:
ansible -m debug -a var=ansible_password <some host>
I am also familiar with the Ansible feature known as "no_log". When you set this attribute on a task, or on a specific variable (in an Ansible argument spec), the output is hidden, even at high verbosity.
Is there a way to set this attribute on the ansible_password variable, so no one can print it?
The only other solution we came up with is to use vault, but the whole CyberArk password lookup came about precisely to cut out the vault feature...
You can set the password to expire or change in CyberArk after each call or execution. Why worry about a user seeing CyberArk's password? It may be useless after Ansible has used it.
I would like to keep the tmp directory on the VM in my test region. There is a known solution for this: setting ANSIBLE_KEEP_REMOTE_FILES to 1 on the Ansible machine.
The issue is that the Ansible machine is a local Docker container, so I need to ensure that this variable is always set; otherwise I'm losing some documents. When I reboot my system and start the Docker container with Ansible, I lose this variable.
Is there a way to set this environment variable somewhere in the Ansible configuration, or in a playbook configuration? I need a permanent solution so I don't forget this variable.
Thank you!
Q: "Is there a way to set this environment variable somewhere in Ansible configuration?"
A: Yes, there is. For example:
$ cat ansible.cfg
[defaults]
keep_remote_files = true
See DEFAULT_KEEP_REMOTE_FILES.
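Since the control node in this case is a Docker container, another option (illustrative, assuming you build the image yourself) is to bake the variable into the image so it survives restarts:

```dockerfile
# In the Dockerfile for the Ansible control-node image (illustrative):
ENV ANSIBLE_KEEP_REMOTE_FILES=1
```

Alternatively, pass it at start-up with `docker run -e ANSIBLE_KEEP_REMOTE_FILES=1 ...` so the image itself stays unchanged.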
I am wondering whether it is safe, during an Ansible task, to send credentials (password, API key) on the command line.
No one on the remote server should see the command line (and even less the credentials).
Thank you.
If you do not trust the remote server, you should never expose sensitive credentials to it, since anyone with root access on that server can easily intercept traffic, files, and memory you allocate there. The easiest way for someone to get your secrets would be to dump the temporary files Ansible creates to do its job on the remote server, since that requires only the privileges of the user you are connecting as!
There is a special environment variable, ANSIBLE_KEEP_REMOTE_FILES=1, used to troubleshoot problems. It should give you an idea of what information Ansible actually stores on remote disks, even if only for a few seconds. To see the files Ansible creates on the remote machine, I executed the following command on my own machine:
ANSIBLE_KEEP_REMOTE_FILES=1 ansible -m command -a "echo 'SUPER_SECRET_INFO'" -i 127.0.0.1, all
After its execution I see a temporary file in my home directory, named ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
So let's grep out the secret info:
grep SUPER_SECRET ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
Result:
ANSIBALLZ_PARAMS = '{"ANSIBLE_MODULE_ARGS": {"_ansible_version": "2.2.2.0", "_ansible_selinux_special_fs": ["fuse", "nfs", "vboxsf", "ramfs"], "_ansible_no_log": false, "_ansible_module_name": "command", "_raw_params": "echo \'SUPER_SECRET_INFO\'", "_ansible_verbosity": 0, "_ansible_syslog_facility": "LOG_USER", "_ansible_diff": false, "_ansible_debug": false, "_ansible_check_mode": false}}'
As you can see, nothing is safe from prying eyes! So if you are really concerned about your secrets, don't use anything critical on suspect hosts; use one-time passwords, keys, or revocable tokens to mitigate this issue.
It depends on how paranoid you are about these credentials. In general: no, it is not safe.
I guess the root user on the remote host can see anything.
For example, run strace -f -p$(pidof -s sshd) on the remote host and try to execute any command via ssh.
By default Ansible writes all module invocations to syslog on the remote host; you can set no_log: true for a task to avoid this.
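A minimal sketch of the no_log setting mentioned above (the task, script path, and api_token variable are illustrative):

```yaml
- name: Call a script with a secret argument
  command: /usr/local/bin/deploy --token {{ api_token }}
  no_log: true   # hides the invocation and its output from logs and -v output
```

Keep in mind that no_log only suppresses Ansible's own logging; as described above, it does not protect the secret from a root user inspecting processes or files on the remote host.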