I am invoking Ansible through Jenkins, and I have added the below script to my playbook:
- name: HELLO WORLD PLAY
  hosts: webserver
  become: yes
  become_method: sudo
  tasks:
    - debug:
        msg: "HELLO......."
    - shell: echo "HELLO WORLD"
I am getting the below error when I build the job:
TASK [setup] *******************************************************************
fatal: [10.142.0.13]: UNREACHABLE! =>
{
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
When I run this playbook through the CLI it runs successfully, but I am not able to run it through Jenkins (I have already done the setup by pasting the private key into Jenkins).
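For reference, a rough sketch of how the same play could be re-run from the CLI with the SSH debugging the error message suggests; the inventory name, playbook name, remote user, and key path are placeholders for whatever the Jenkins job actually uses:
# all names and paths below are placeholders; adjust to the real Jenkins setup
ansible-playbook -i inventory hello.yml -u remoteuser --private-key /var/lib/jenkins/.ssh/id_rsa -vvvv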
I'm finding it difficult to run a simple playbook. I already pinged the target and it was successful. When I run the playbook I get this error:
PLAY [install httpd and start services] ***********************************
TASK [Gathering Facts] ****************************************************
fatal: [192.168.112.66]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: jay@192.168.112.66: Permission denied (publickey,password).", "unreachable": true}
What's the problem with this?
The remote server is denying you access because your key is protected with a password.
Try this before running the playbook:
$ eval `ssh-agent`
$ ssh-add /path/to/your/private/key
Then run the playbook with the -u and --private-key options, pointing at the user that has access to the remote server and the private key you use.
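A combined example (the playbook name, remote user, and key path are placeholders):
$ eval `ssh-agent`
$ ssh-add ~/.ssh/id_rsa
$ ansible-playbook playbook.yml -u remoteuser --private-key ~/.ssh/id_rsa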
I am guessing you used a password instead of an SSH key. So at the end of your command, add:
--ask-pass
Let's say you're running your playbook. Your command will become:
ansible-playbook playbook.yml --ask-pass
I am trying to run a simple playbook against OpenStack in the admin tenant using Ansible Tower, both running on localhost. Here is the script:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have done the following configuration in Tower: the Credentials, the Template, and the "test" Inventory (screenshots not included).
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
13:35:48
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what this can be? It looks like a credential problem.
Untick Enable Privilege Escalation - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not the user running the Ansible task.
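In playbook terms that means running the play without escalation; here is a rough sketch (the same task as above, with become disabled explicitly):
---
- hosts: localhost
  gather_facts: no
  connection: local
  become: no    # no privilege escalation; authorisation comes from the OpenStack credentials
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example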
I am able to create a user on a Windows server as part of a playbook, but when the playbook is re-run, the create task fails.
I'm trying to work out if I am missing something.
playbook:
---
# vim: set filetype=ansible ff=unix ts=2 sw=2 ai expandtab :
#
# Playbook to configure the environment
- hosts: createuser
  tasks:
    - name: create user
      run_once: true
      win_user:
        name: gary
        password: 'B0bP4ssw0rd123!^'
        password_never_expires: true
        account_disabled: no
        account_locked: no
        password_expired: no
        state: present
        groups:
          - Administrators
          - Users
If I run the playbook when the user does not exist, the create works fine.
When I re-run it, I get:
PLAY [createuser] *******************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************
ok: [dsy-demo-mssql02]
TASK [create user] ******************************************************************************************************************************************************************************************************************
fatal: [dsy-demo-mssql02]: FAILED! => {"changed": false, "failed": true, "msg": "Exception calling \"ValidateCredentials\" with \"2\" argument(s): \"The network path was not found.\r\n\""}
I have verified that I can log on to the server using the created user's credentials.
Has anyone seen this before, or does anyone understand what could be happening?
It looks to me like it might be the
run_once: true
which only tells the task to run once. See the Ansible documentation on that delegation here: https://docs.ansible.com/ansible/playbooks_delegation.html#run-once
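If run_once turns out to be unnecessary for this play, a sketch of the same task without it (everything else unchanged from the question):
- name: create user
  win_user:
    name: gary
    password: 'B0bP4ssw0rd123!^'
    password_never_expires: true
    account_disabled: no
    account_locked: no
    password_expired: no
    state: present
    groups:
      - Administrators
      - Users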
I have the following task to print out the current version of Jenkins that is installed on some servers:
---
- hosts: all
  remote_user: user
  tasks:
    - name: Printing the Jenkins version running on the masters
      yum:
        name: jenkins
      register: version
    - debug: var=version
I am trying to avoid using the -v option when running the playbook, in hopes of keeping the output as clean as possible.
If the playbook is run without the -v option, the output looks like this:
TASK [Printing the jenkins version that is installed on each of the servers]***************
ok: [Server1]
ok: [Server2]
ok: [Server3]
TASK [debug] *******************************************************************
ok: [Server1] => {
"changed": false,
"version": "VARIABLE IS NOT DEFINED!"
}
ok: [Server1] => {
"changed": false,
"version": "VARIABLE IS NOT DEFINED!"
}
ok: [Server1] => {
"changed": false,
"version": "VARIABLE IS NOT DEFINED!"
}
However, it returns that version is not defined. I am confused as to why this is happening, because I have done the printing the same way for a bunch of other tasks without any problems. Any suggestions are greatly appreciated.
You can achieve this using the shell and debug modules:
---
- hosts: all
  remote_user: user
  become: True
  become_method: sudo
  tasks:
    - name: Printing the Jenkins version running on the masters
      shell: cat /var/lib/jenkins/config.xml | grep '<version>'
      register: version
    - debug: var=version.stdout
You can also create an Ansible callback plugin, or use one that is already available, e.g. human_log.
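If you go the callback route, here is a minimal sketch of how a drop-in plugin such as human_log is typically enabled; the paths and option names below are assumptions and can differ between Ansible versions:
# ansible.cfg, placed next to the playbook (a sketch; verify against your Ansible version)
[defaults]
# directory that holds human_log.py
callback_plugins = ./callback_plugins
# enable the plugin by name
callback_whitelist = human_log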
I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command inside the box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
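The same workaround also works ad hoc; a sketch mirroring the earlier command, swapping in the command module and adding privilege escalation (--sudo here is the Ansible 1.9-era flag; newer releases use -b/--become):
> ansible all -i ansible_inventory -u vagrant -k --sudo -m command -a "service ssh restart"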