I'm trying to automate some of my manual tasks on a VM.
As part of that, my VM doesn't have direct root access, so I have to log in as a different user and then escalate to root.
When I try to switch to the root user, the password prompt is different from the default prompt.
The prompt I see is as shown below
==================
[user1@vm-1 tmp]$ su - root
Enter login password:
I wrote a playbook to test the connectivity. The play is shown below
=====================================
- hosts: vm-1
  any_errors_fatal: true
  become: true
  become_method: su
  become_user: root
  gather_facts: no
  vars:
    ansible_become_pass: "r00t"
  tasks:
    - name: Test me
      command: 'echo works'
=====================================
My host file looks as below
localhost ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
vm-1 ansible_ssh_host=1.2.3.4 ansible_connection=ssh ansible_ssh_user=user1 ansible_ssh_pass=password ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
=====================================
With this config, when I try to run the play, I get the error below
fatal: [vm-1]: FAILED! => {"msg": "Timeout (12s) waiting for privilege
escalation prompt: "}
The same playbook works on a different VM, where the prompt when switching to the root user is simply "Password".
Appreciate your help on this.
By the way, I tried this with Ansible 2.4 and 2.5; both releases gave the same error.
Thanks in advance.
Ramu
I had difficulty tracking down an open ticket, but here is a closed one that has some workarounds and solutions that may or may not work for you:
https://github.com/ansible/ansible/issues/14426
I have had at least two machines where none of the listed solutions worked. On those machines a direct SSH connection without Ansible is also slow, and a reboot does not help. I was unable to figure out the issue, so now I just rebuild the machine.
As @AHT said, you could just increase the timeout to 30 seconds in ansible.cfg; however, I think this should only be a temporary fix, since it masks the bigger issue.
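If you do try that, this is roughly what the ansible.cfg change would look like (a minimal sketch, assuming the default [defaults] timeout setting is the value behind the privilege escalation wait):

[defaults]
# raise the connection timeout (default 10s), which also governs how long
# Ansible waits for the privilege escalation prompt
timeout = 30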
At my job there is a playbook, built in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes; it calls the role shown afterwards.
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from fusion role:
- name: "hc fusion"
include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from fusion role:
- name: "FUSION"
shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}@{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run.
It worked previously, but now it throws the following error:
stdout: PLAY [localhost] \nTASK [Validate] fatal: [localhost -> gandalf@10.66.173.14]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf@10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password), \"skip_reason\": \"Host localhost is unreachable\"
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as the awx user, it works.
I also validated the connection with the ssh command from the server where Ansible Tower runs to the host I want to reach, and it lets me connect as the awx user without asking for a password.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh type is used. For localhost this usually brings a number of related problems (SSH keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost, you should set the connection plugin to local, just like in your report.yaml playbook, as sketched below.
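A rough sketch of what that could look like in fusion.yaml (only the connection line is new; the rest is condensed from your play):

- hosts: localhost
  connection: local   # run the play against localhost without going through ssh
  vars:
    ansible_become_user: "{{ fusion_ansible_become_user }}"
    ansible_become_pass: "{{ fusion_ansible_become_pass }}"
  tasks:
    # ... same tasks as before ...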
Additionally, as Zeitounator mentioned, running one Ansible playbook from another via the shell module is really bad practice. Please avoid this. Ansible has a number of mechanisms for code reuse (includes, imports, roles, etc.); see the sketch below.
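As an illustration only (the fusion_targets list and fusion_tasks.yaml names are assumptions, not part of your roles), hc_fusion.yaml could loop over an include instead of shelling out to ansible-playbook:

- name: "FUSION"
  include_tasks: fusion_tasks.yaml   # a task file inside the role, not a separate playbook
  loop: "{{ fusion_targets }}"       # assumed list of dicts with ip/username/password keys
  vars:
    fusion_ip_ha: "{{ item.ip }}"
    fusion_user: "{{ item.username }}"
    fusion_pass: "{{ item.password }}"
    ansible_become_user: "{{ item.ansible_become_user }}"
    ansible_become_pass: "{{ item.ansible_become_pass }}"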
I am using Ansible and want to automate my VPS and homelab setups. I'm running into an issue, which is the initial connection.
If I have a fresh VPS that has never been used or logged into, how can I remotely configure the node from my laptop?
ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
host_key_checking = false
ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
inventory
[homelab]
0.0.0.0 <--- actual IP here
./playbooks/add_pub_keys.yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Install public key on remote node
      authorized_key:
        state: present
        user: root
        key: "{{ lookup('file', '~/.ssh/homelab.pub') }}"
Command
ansible-playbook playbooks/add_public_keys.yaml
Now, this fails with permission denied, which makes sense because there is nothing that would allow connection to the remote node.
I tried adding --ask-pass to the command:
ansible-playbook playbooks/add_public_keys.yaml --ask-pass
and typing in the root password, but that fails and says I need sshpass, which is not recommended and is not readily available to install on macOS for security reasons. How should I think about this initial setup process?
When I get issues like this, I try to replicate the problem using Ansible ad-hoc commands and go back to basics. It helps prove where the issue is located.
Are you able to run ansible ad-hoc commands against your remote server using the password?
ansible -i ip, all -m shell -a 'uptime' -u root -k
If you can't, something is up with the password, or possibly with the ansible.cfg.
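If the ad-hoc command works, the playbook should run with the same flags; a rough sketch (the -c paramiko variant is only a suggestion for when sshpass is not installed locally, as on macOS):

ansible-playbook playbooks/add_pub_keys.yaml -u root -k
# if sshpass is unavailable, the paramiko connection plugin handles
# password authentication without it
ansible-playbook playbooks/add_pub_keys.yaml -u root -k -c paramiko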
I am working on a simple playbook that will ultimately be able to start/stop/restart Windows services, and I ran into an issue:
fatal: [mspdbwn1w01]: FAILED! => {
"msg": "The powershell shell family is incompatible with the sudo become plugin"
}
Below is the playbook:
- name: Add Host
  hosts: localhost
  connection: local
  strategy: linear
  tasks:
    - name: Add Temp Host
      add_host:
        name: "{{ win_client }}"
        group: temp

- name: Target Server
  connection: winrm
  hosts: temp
  tasks:
    - name: Stop a service
      win_service:
        name: "{{ service }}"
        state: stopped
Google hasn't been much help, and I've tried everything I could find, every variation of become*.
I don't know if it matters, but due to the nature of the environment I work in, I have 2 separate users to log into *nix hosts vs. windows hosts.
Any assistance or guidance would be greatly appreciated.
Your system seems to use sudo as the default become method, which is not compatible with PowerShell. For Windows (and PowerShell), you can use runas as the become method. Add:
become_method: runas
to your playbook or task. You can get a list of all available become methods with:
ansible-doc -t become -l
Example:
doas Do As user
dzdo Centrify's Direct Authorize
enable Switch to elevated permissions on a network device
ksu Kerberos substitute user
machinectl Systemd's machinectl privilege escalation
pbrun PowerBroker run
pfexec profile based execution
pmrun Privilege Manager run
runas Run As user
sesu CA Privileged Access Manager
su Substitute User
sudo Substitute User DO
You can view the documentation for a particular become method with:
ansible-doc -t become runas
If you still get errors, pay attention to the error message, as it is most probably a different one. For example, using privilege escalation requires defining a username and a password for that purpose.
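For illustration, a minimal sketch of the Windows play with runas (the Administrator account and the win_admin_pass variable are placeholders, not values from your environment):

- name: Target Server
  hosts: temp
  connection: winrm
  vars:
    ansible_become_user: Administrator           # placeholder account with the needed rights
    ansible_become_pass: "{{ win_admin_pass }}"  # assumed variable holding that account's password
  tasks:
    - name: Stop a service
      win_service:
        name: "{{ service }}"
        state: stopped
      become: yes
      become_method: runas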
My playbook:
- hosts: devops
  tasks:
    - name: Test Connection
      ping:
      register: res
    - name: Print the ping result
      debug:
        msg: "{{ res }}"
My inventory:
[devops]
XX.XX.XX.XX ansible_user=userName ansible_ssh_pass=userPass
The first time, I ran the Ansible playbook with the right SSH password, and it ran successfully. Shortly afterwards, I ran the playbook with a wrong SSH password, and it still ran successfully even though the password was wrong. But after a while, with the same wrong password, the playbook could no longer run successfully.
So my question is: is there something like a cache or session with Ansible playbooks? If so, how can I resolve this issue?
The version of my Ansible is 2.4.3.
For the second run, I need the result to be a failure, not a success.
Remove the ssh_args option in ansible.cfg, or set a different value for it. The ControlMaster/ControlPersist settings in ssh_args keep an authenticated SSH connection open and reuse it, which is why the run with the wrong password still succeeded.
Remember that removing it completely will cause Ansible playbooks to run significantly slower.
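For reference, the kind of ansible.cfg entry being referred to typically looks like this (the 60s value is illustrative):

[ssh_connection]
# ControlMaster/ControlPersist keep an authenticated SSH connection open
# and reuse it for this long, which is why a wrong password can appear to "work"
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s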
I've been banging my head on this one for most of the day; I've tried everything I could without success, even with the help of my sysadmin. (Note that I am not at all an Ansible expert; I only discovered it today.)
Context: I'm trying to implement continuous integration of a Java service via GitLab. On a push, a pipeline will run tests, package the JAR, then run an Ansible playbook to stop the existing service, replace the JAR, and launch the service again. We have that for production in Google Cloud, and it works fine. I'm trying to add an extra step before that, to do the same on localhost.
And I just can't understand why Ansible fails to do a "sudo service XXXX stop|start". All I get is:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=nbjplyhtvodoeqooejtlnhxhqubibbjy] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is the GitLab pipeline stage that I call:
indexer-integration:
  stage: deploy integration
  script:
    - ansible-playbook -i ~/git/ansible/inventory deploy_integration.yml --vault-password-file=/home/gitlab-runner/vault.txt
  when: on_success
vault.txt contains the vault encryption password. Here is the deploy_integration.yml
---
- name: deploy integration saleindexer
  hosts: localhost
  gather_facts: no
  user: test-ccc   # this is the user that I created as a test
  connection: local
  vars_files:
    - /home/gitlab-runner/secret.txt   # holds the sudo password
  tasks:
    - name: Stop indexer
      service: name=indexer state=stopped
      become: true
      become_user: root
    - name: Clean JAR
      become: true
      become_user: root
      file:
        state: absent
        path: '/PATH/indexer-latest.jar'
    - name: Copy JAR
      become: true
      become_user: root
      copy:
        src: 'target/indexer-latest.jar'
        dest: '/PATH/indexer-latest.jar'
    - name: Start indexer
      service: name=indexer state=started
      become: true
      become_user: root
The user 'test-ccc' is another user that I created (part of the root group and in the sudoers file) to make sure it was not an issue related to the gitlab-runner user (and because apparently no one here can remember the sudo password of that user xD).
I've tried a lot of things, including
shell: echo 'password' | sudo -S service indexer stop
which works on the command line. But when executed by Ansible, all I get is a prompt asking me to enter the sudo password.
Thanks
Edit, per comment request: secret.txt contains:
ansible_become_pass: password
When using that user on the command line (su user / sudo service start ....) and entering that password at the prompt, it works fine. The problem, I believe, is that either Ansible always prompts for a password, or the password is not properly passed to the task.
The sshd_config has a line 'PermitRootLogin yes'
OK, thanks to a response (now deleted) from techraf, I noticed that the line
user: test-ccc
is actually useless; everything was still run by the 'gitlab-runner' user. So I:
put all my actions in a script, postbuild.sh
added gitlab-runner to the sudoers file and gave it NOPASSWD for that script:
gitlab-runner ALL=(ALL) NOPASSWD:/home/PATH/postbuild.sh
removed everything about passing the password and the secret from the Ansible task, and used instead:
shell: sudo -S /home/PATH/postbuild.sh
So that works: the script is executed and the service is stopped/started. I'll mark this as answered, even though using service: name=indexer state=started and giving NOPASSWD:ALL for the user still caused an error (the one in my comment on the question). If anyone can shed light on that in the comments ...