EDIT: After some research, I wonder if this may be related to the on_become() function as described in this post? https://github.com/Dell-Networking/ansible-dellos-examples/issues/12
I am trying to back up our current configurations on our Dell 2048p switches running OS6. No matter what I set the timeout to (using persistent_connection in ansible.cfg), it still errors out. I have checked the logs on the switch and it receives both the show ver and show running-config commands; the output is just not making it back. I have looked at the Network Debug and Troubleshooting Guide, but am having trouble getting a proper error. Does anyone have this working, or spot anything I can change?
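(For reference, the timeout changes mentioned above were made in the persistent_connection section of ansible.cfg, along these lines; the values shown are just examples.)

[persistent_connection]
connect_timeout = 60
command_timeout = 60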
Version
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
Playbook
- name: Show ver
  hosts: Dell
  connection: network_cli
  gather_facts: yes
  tasks:
    - name: "Get Dell EMC OS6 Show version"
      dellos6_command:
        commands: ['show version']
      register: show_ver

    - name: "Backup config file locally"
      dellos6_config:
        backup: yes
        backup_options:
          dir_path: "/mnt/c/Users/me/Documents/Programming Projects/netBackupPlaybooks"
          filename: "{{ inventory_hostname }}"
        authorize: yes
      register: backup_dellos6_location
      when: ansible_network_os == 'dellos6'

    - debug: var=show_ver
    - debug: var=backup_dellos6_location
Inventory
[Dell]
sw1 ansible_host=10.10.10.10 ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_network_os=dellos6 ansible_connection=network_cli ansible_become_method=enable ansible_become_password=admin ansible_user=admin ansible_password=admin
sw2 ansible_host=10.10.10.11 ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_network_os=dellos6 ansible_connection=network_cli ansible_become_method=enable ansible_become_password=admin ansible_user=admin ansible_password=admin
Command
sudo ansible-playbook -i inventory.ini DellPB.yaml -vvvv
Error
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_dellos6_config_payload_pjEND4/ansible_dellos6_config_payload.zip/ansible/module_utils/network/dellos6/dellos6.py", line 86, in get_config
return _DEVICE_CONFIGS[cmd]
fatal: [sw2]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "after": null,
            "auth_pass": null,
            "authorize": true,
            "backup": true,
            "backup_options": null,
            "before": null,
            "config": null,
            "host": null,
            "lines": null,
            "match": "line",
            "parents": null,
            "password": null,
            "port": null,
            "provider": null,
            "replace": "line",
            "save": false,
            "src": null,
            "ssh_keyfile": null,
            "timeout": null,
            "update": "merge",
            "username": null
        }
    },
    "msg": "unable to retrieve current config",
    "stderr": "command timeout triggered, timeout value is 30 secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.",
    "stderr_lines": [
        "command timeout triggered, timeout value is 30 secs.",
        "See the timeout setting options in the Network Debug and Troubleshooting Guide."
    ]
}
Just wanted to edit for anyone else experiencing this issue. It looks like it was a bug in the module that will be fixed in an upcoming release of Ansible.
https://github.com/ansible/ansible/pull/63272
Related
I have an issue when I run my playbook.yaml with -vvvv: it fails during Gathering Facts with the following error message:
fatal: [HOST.DOMAIN.COM]: FAILED! => {
    "ansible_facts": {},
    "changed": false,
    "failed_modules": {
        "ansible.legacy.setup": {
            "failed": true,
            "module_stderr": "#< CLIXML\r\n",
            "module_stdout": "",
            "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
            "rc": 1
        }
    },
    "msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
I searched the internet and tried different things, like changing the max memory per shell, but nothing changed.
Do you know how to resolve this, or a direction I could explore to solve it? Do I need to change my config?
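(For reference, the max-memory change mentioned above is typically made on the Windows host with something like the following, run from an elevated PowerShell prompt; the value is only an example.)

winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="2048"}'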
Playbook.yaml:
- name: Who am I
  hosts: win
  tasks:
    - name: Check my user name
      ansible.windows.win_whoami:
win.yaml (variables):
ansible_user: admin@DOMAIN.COM
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore
ansible_winrm_port: 5986
My Windows host:
OS: Microsoft Windows Server 2016 Standard
PowerShell version: 5.1.14393.5127
I'm creating a playbook to install Fluent Bit on Windows hosts. Everything works properly, but I'm getting an error when creating the service. It doesn't fail the install, since by that point everything is already in place, but I would like to figure out how I could leverage conditionals. Could you help me with this? :)
My ad-hoc test play, where I've tried to parse the results from the ansible.windows.win_service_info module, is as follows:
---
- name: Check Windows service status
  hosts: win
  gather_facts: True
  tasks:
    - name: Check if a service is installed
      win_service:
        name: fluent-bit
      register: service_info

    - debug: msg="{{ service_info }}"

    - name: Get info for a single service
      ansible.windows.win_service_info:
        name: fluent-bit
      register: service_info

    - debug: msg="{{ service_info }}"

    - name: Get info for a fluent-bit service
      ansible.windows.win_service_info:
        name: logging
      register: service_exists

    - debug: msg="{{ service_exists }}"

    - name: Send message if service exists
      debug:
        msg: "Service is installed"
      when: service_exists.state is not defined or service_exists.name is not defined

    - name: Send message if service exists
      debug:
        msg: "Service is NOT installed"
      when: service_exists.state is not running
I just don't get how I could parse the output so that I can skip the task when the fluent-bit service's "exists" value is true, like here:
TASK [debug] *****************************************************************************************
ok: [win-server-1] => {
    "msg": {
        "can_pause_and_continue": false,
        "changed": false,
        "depended_by": [],
        "dependencies": [],
        "description": "",
        "desktop_interact": false,
        "display_name": "fluent-bit",
        "exists": true,
        "failed": false,
        "name": "fluent-bit",
        "path": "C:\\fluent-bit\\bin\\fluent-bit.exe -c C:\\fluent-bit\\conf\\fluent-bit.conf",
        "start_mode": "manual",
        "state": "stopped",
        "username": "LocalSystem"
    }
}
Cheers :)
So, I got it working as I wanted with service_info.exists != True; now it will skip the task if the service is already present.
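For anyone after the same pattern, here is a minimal sketch of the end-to-end idea (the service creation task and its path argument are illustrative, lifted from the output above, not from the original playbook):

- name: Check if the fluent-bit service is installed
  ansible.windows.win_service:
    name: fluent-bit
  register: service_info

- name: Create the fluent-bit service (skipped when it already exists)
  ansible.windows.win_service:
    name: fluent-bit
    path: C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf
  when: not service_info.exists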
I am trying to set a wallpaper on Debian systems with Ansible on xfce4 desktops. For this I looked up the official documentation: https://docs.ansible.com/ansible/latest/collections/community/general/xfconf_module.html
My Task:
- name: set wallpaper
  become_user: odin
  xfconf:
    channel: "xfce4-desktop"
    property: "/backdrop/screen0/{{ item }}/image-path"
    value_type: "string"
    value: ['/usr/share/backgrounds/xfce/802192.jpg']
  loop:
    - monitor0
    - monitor1
    - monitorDP-1
    - monitoreDP-1
I receive the following error:
XFConfException: xfconf-query failed with error (rc=1): Failed to init libxfconf: Error spawning command line “dbus-launch --autolaunch=2e66f568a1c34fda92dcec58e724b679 --binary-syntax --close-stderr”: Child process exited with code 1.
failed: [localhost] (item=monitoreDP-1) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "channel": "xfce4-desktop",
            "force_array": false,
            "property": "/backdrop/screen0/monitoreDP-1/image-path",
            "state": "present",
            "value": [
                "/usr/share/backgrounds/xfce/802192.jpg"
            ],
            "value_type": [
                "string"
            ]
        }
    },
    "item": "monitoreDP-1",
    "msg": "Module failed with exception: xfconf-query failed with error (rc=1): Failed to init libxfconf: Error spawning command line “dbus-launch --autolaunch=2e66f568a1c34fda92dcec58e724b679 --binary-syntax --close-stderr”: Child process exited with code 1.",
    "output": {
        "ansible_facts": {
            "xfconf": {}
        },
        "cmd_args": [
            "/usr/bin/xfconf-query",
            "--channel",
            "xfce4-desktop",
            "--property",
            "/backdrop/screen0/monitoreDP-1/image-path"
        ],
        "force_lang": "C",
        "rc": 1,
        "stderr": "Failed to init libxfconf: Error spawning command line “dbus-launch --autolaunch=2e66f568a1c34fda92dcec58e724b679 --binary-syntax --close-stderr”: Child process exited with code 1.\n",
        "stdout": ""
    },
    "vars": {
        "cmd_args": [
            "/usr/bin/xfconf-query",
            "--channel",
            "xfce4-desktop",
            "--property",
            "/backdrop/screen0/monitoreDP-1/image-path"
        ]
    }
}
I thought about copying the xml config for xfce4-desktop onto every machine, but not every machine has the same screen "monitor" property.
Got it to work. It seems running the task as root did the trick.
The xfce modification works as root for me as well with the following approach:
- name: Copy wallpaper file
  copy:
    src: files/wallpaper.jpg
    dest: /usr/share/backgrounds/xfce/debian-wallpaper.jpg
    owner: root
    group: root
  when: ansible_distribution == "Debian"

- name: Change wallpaper
  become: true
  xfconf:
    channel: xfce4-desktop
    property: /backdrop/screen0/monitoreDP-1/workspace0/last-image
    value: ["/usr/share/backgrounds/xfce/debian-wallpaper.jpg"]
    value_type: string
  when: ansible_distribution == "Debian"
This will configure the xfce files in /root/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-desktop.xml though.
I was not able to do it for another user USERNAME, except with this workaround:
- name: Copy xfce4 desktop xml files from root to user
  copy:
    src: /root/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-desktop.xml
    dest: /home/USERNAME/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-desktop.xml
    owner: USERNAME
    group: USERNAME
    force: yes
  when: ansible_distribution == "Debian"
If anybody knows how to use the xfconf module in a better way to overcome this workaround, please let me know.
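One avenue that might avoid the root detour (untested here; the username and UID are assumptions) is to point the module at the target user's session bus, since the dbus-launch failure above suggests the become_user run simply has no D-Bus session to talk to:

- name: Change wallpaper as the desktop user
  become: true
  become_user: odin
  environment:
    # Assumes a systemd host where user odin (uid 1000) has an active session bus
    DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
  xfconf:
    channel: xfce4-desktop
    property: /backdrop/screen0/monitoreDP-1/workspace0/last-image
    value_type: string
    value: ["/usr/share/backgrounds/xfce/debian-wallpaper.jpg"]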
I'm trying to create a Docker container which generates a secret key using Ansible, but the docker_container module doesn't seem to return the container output.
If I ssh into the server and run
root@localhost:~# docker run --rm sentry-onpremise config generate-secret-key
I get the desired output, a secret key such as this:
q16w8(5s9_+%4#z8m%c%0uzb&agf0pn+6zfocraponasww&r)f
But if I try to run the same command using an Ansible playbook, the docker container is executed, but no value is returned:
...
- name: Cria secret key para utilizacao em passos seguintes
  docker_container:
    name: sentry-key-generator
    cleanup: True
    image: sentry-onpremise
    command: config generate-secret-key
  register: saida
  tags:
    - debug

- fail:
    msg: "Valor de saida: {{ saida }}"
  tags:
    - debug
...
fatal: [45.56.93.133]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "msg": "Valor de saida: {u'changed': True, u'ansible_facts': {}}"
        },
        "module_name": "fail"
    },
    "msg": "Valor de saida: {u'changed': True, u'ansible_facts': {}}"
}
Is this a limitation of the docker_container module? Do I have to set up any other configuration in Docker or Ansible to get the container output?
This is a bug introduced in Ansible 2.2.x that strips ansible_docker_container from the results.
See:
https://github.com/ansible/ansible/issues/22323
https://github.com/ansible/ansible/issues/20543
Patch:
https://github.com/ansible/ansible/pull/22324/files
The fix is to be released with Ansible 2.3.x
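Until then, one possible workaround (a sketch, not from the linked issues; it simply shells out to the same command the question ran by hand) is:

- name: Cria secret key para utilizacao em passos seguintes (workaround)
  command: docker run --rm sentry-onpremise config generate-secret-key
  register: saida

- debug:
    msg: "Valor de saida: {{ saida.stdout }}"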
I am trying to create a new user deployer on a Vagrant virtual machine that runs ubuntu-16.04 Xenial. The user creation seems to work (the user names are added to /etc/passwd):
$ cat /etc/passwd | grep deployer
deployer1:x:1001:1001::/home/deployer1:/bin/bash
deployer2:x:1002:1003::/home/deployer2:
deployer1000:x:1003:1004::/home/deployer1000:/bin/bash
deployershell:x:1004:1005::/home/deployershell:/bin/bash
However, I'm unable to log in, either directly via ssh:
$ ssh deployer@local-box-2
deployer@local-box-2's password:
Permission denied, please try again.
deployer@local-box-2's password:
Permission denied, please try again.
nor via su deployer after ssh-ing with an existing user vagrant:
vagrant#local-box-2:~$ su deployer
Password:
su: Authentication failure
I've tried to create the user both by using ad-hoc Ansible commands:
$ ansible all -m user -a 'name=deployer group=admin update_password=always password=rawpass2 state=present shell=/bin/bash force=yes' -u vagrant -b -vvvv
local-box-2 | SUCCESS => {
    "append": false,
    "changed": true,
    "comment": "",
    "group": 1001,
    "home": "/home/deployer",
    "invocation": {
        "module_args": {
            "append": false,
            "comment": null,
            "createhome": true,
            "expires": null,
            "force": true,
            "generate_ssh_key": null,
            "group": "admin",
            "groups": null,
            "home": null,
            "login_class": null,
            "move_home": false,
            "name": "deployer1",
            "non_unique": false,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "remove": false,
            "seuser": null,
            "shell": "/bin/bash",
            "skeleton": null,
            "ssh_key_bits": 0,
            "ssh_key_comment": "ansible-generated on local-box-2",
            "ssh_key_file": null,
            "ssh_key_passphrase": null,
            "ssh_key_type": "rsa",
            "state": "present",
            "system": false,
            "uid": null,
            "update_password": "always"
        },
        "module_name": "user"
    },
    "move_home": false,
    "name": "deployer",
    "password": "NOT_LOGGING_PASSWORD",
    "shell": "/bin/bash",
    "state": "present",
    "uid": 1001
}
and by running a playbook
$ ansible-playbook local-box.yml
- name: Add 'deployer' user
  hosts: local-box-2
  gather_facts: false
  remote_user: vagrant
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - remote_user: vagrant
      become: true
      become_method: sudo
      become_user: root
      group:
        name: admin
        state: present

    - remote_user: vagrant
      become: true
      become_method: sudo
      become_user: root
      user:
        name: deployer
        groups: admin
        append: yes
        shell: /bin/bash
        password: rawpass2
        state: present
Again, both create the users, but apparently neither of them sets the password.
What do you think the cause might be?
Later Edit:
Apparently, if I pass the raw password through a hash filter, then I am able to log in using the (unhashed) raw password. I would love an explanation of why that is the case.
password: "{{ 'rawpass2' | password_hash('sha512') }}"
Note:
This answer gave me the idea to try using filters.
The ansible user password should be hashed: https://github.com/ansible/ansible-examples/blob/master/language_features/user_commands.yml
Please follow the document http://docs.ansible.com/ansible/faq.html#how-do-i-generate-crypted-passwords-for-the-user-module for hashed password generation in Ansible:
- remote_user: vagrant
  become: true
  become_method: sudo
  become_user: root
  user:
    name: deployer
    groups: admin
    append: yes
    shell: /bin/bash
    password: hashed_value_of_rawpass2
    state: present
To create the hashed password of rawpass2, for example:
cmadmin@ansible:~$ mkpasswd --method=sha-512
Password: <rawpass2>
$6$dsdwqrc3124JsO9gVJZUWa$sgKxTKz4RbZnIZIvhotWHAHQL1o3/V5LTrrEJCe9DDkTW3Daut.nIpU9Qa0kDWdMZSaxvV1
Ansible is not failing to set the password; the issue is in how passwords work on Linux.
A Linux user password is stored in hashed form; the cleartext is not stored anywhere on the system. When you log in, the password you enter is hashed and compared with the stored value.
When you specify a password during the user creation process in Ansible, it must be in the already-hashed form, as noted in the doc links in your question.
If you look at /etc/shadow on the system where you created the user you will see something like this (the second field is the password value):
deployer:rawpass2:17165:0:99999:7:::
which shows you that the string you supplied was used directly as the hashed password value. Of course, when you try to log in and specify rawpass2, the login code hashes that and compares it to the stored value, and they don't match. This is why you have to pass the already hashed value to Ansible.
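Putting it together with the filter from the question's edit, the working task would look like this (a sketch; the rest of the play is unchanged):

- user:
    name: deployer
    groups: admin
    append: yes
    shell: /bin/bash
    # Hash the cleartext at play time so /etc/shadow gets a valid crypt value
    password: "{{ 'rawpass2' | password_hash('sha512') }}"
    state: present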