AIX user password reset - ansible

I'm able to reset the password of a user on Linux using the task below. On AIX it also executes successfully, but the password does not get changed and I get access denied.
- name: reset pw
  user:
    name: "{{ userid }}"
    update_password: always
    password: "{{ passwd | password_hash('sha512') }}"
  register: resetPwOp
The result is as below.
TASK [debug] *******************************************************************
ok: [110.110.110.115] => {
"msg": "resetpwop: {u'comment': u'94/C/UniqueID_cor1', u'shell': u'/usr/bin/ksh', u'group': 2, u'name': u'user1', u'changed': True, 'failed': False, u'state': u'present', u'home': u'/home/user1', u'move_home': False, u'password': u'NOT_LOGGING_PASSWORD', u'append': False, u'uid': 46}"
}

Please try the same operation with the modules developed for AIX systems.
Installation notes are here:
https://github.com/IBM/ansible-power-aix/blob/dev-collection/docs/source/installation.rst
If you are using AIX 7.1-7.2, you should set up yum first to install Python 2.7 (on AIX 7.3 it is installed by default); after that you will be able to run the ibm.power_aix.user module.
You may find several examples here:
https://github.com/IBM/ansible-power-aix/tree/dev-collection/playbooks
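For instance, a minimal sketch of the password reset with the collection module might look like this. This assumes the module accepts a pre-hashed password the same way the core user module does; check the collection docs for the exact parameter names, and note that the hash format must match the password algorithm configured on the AIX system:

- name: reset pw on AIX
  ibm.power_aix.user:
    name: "{{ userid }}"
    state: present
    # assumption: the collection module accepts a crypted password string
    password: "{{ passwd | password_hash('sha512') }}"
  register: resetPwOp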

Save ansible variable in local file

I am executing a PowerShell script on a Windows host and want to store its stdout in a file on the Ansible control machine. I have a playbook like the following:
---
- name: Check Antivirus software
  hosts: all
  become: false
  gather_facts: no
  tasks:
    - name: Get AV details
      win_shell: |
        echo "script printing data on stdout"
      register: result
    - name: save response
      copy:
        content: '{{ result.stdout }}'
        dest: '{{ response_file }}'
      delegate_to: localhost
From the above playbook, the first task executes without any issues, but the second task gives the following error.
TASK [save response] *******************************************************************************************************************************************
fatal: [20.15.102.192 -> localhost]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPSConnectionPool(host='localhost', port=5986): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f4940760208>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
I also tried local_action, which gives the same error.
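A likely cause, judging from the error (Ansible is trying to reach localhost over WinRM on port 5986): the Windows connection variables follow the delegated task. One common workaround, sketched here as an assumption about your inventory, is to force a local connection on the delegated task:

- name: save response
  copy:
    content: '{{ result.stdout }}'
    dest: '{{ response_file }}'
  delegate_to: localhost
  vars:
    ansible_connection: local  # override the winrm connection inherited from the Windows host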

Unable to grab success or failure i.e rc of long running Ansible task in a loop

Below is my code, which takes about 20 minutes in all. Each task takes about 1 minute and the loop runs 20 times, so 20 x 1 = 20 minutes. I want the playbook to fail if any iteration of the shell task fails, i.e. rc != 0.
- name: Check DB connection with DBPING
  shell: "java utils.dbping ORACLE_THIN {{ item | trim }}"
  with_items: "{{ db_conn_dets }}"
  delegate_to: localhost
In order to speed up the tasks, I decided to run them in parallel and then grab the success or failure of each task, like below.
- name: Check DB connection with DBPING
  shell: "java utils.dbping ORACLE_THIN {{ item | trim }}"
  with_items: "{{ db_conn_dets }}"
  delegate_to: localhost
  async: 600
  poll: 0
  register: dbcheck

- name: Result check
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: _jobs
  until: _jobs.finished
  delay: 10
  retries: 20
  with_items: "{{ dbcheck.results }}"
Now the task Check DB connection with DBPING completes in less than a minute for the entire loop. However, the problem is that ALL of the Result check tasks fail, even when Check DB connection with DBPING would eventually succeed, i.e. rc=0.
Below is one sample failing Result check task, where it should have been successful:
failed: [remotehost] (item={'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_no_log': False, u'ansible_job_id': u'234197058177.18294', '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/home/ansbladm/.ansible_async/234197058177.18294', 'item': u'dbuser mypass dbhost:1521:mysid', '_ansible_ignore_errors': None}) => {
"ansible_job_id": "234197058177.18294",
"attempts": 1,
"changed": false,
"finished": 1,
"invocation": {
"module_args": {
"jid": "234197058177.18294",
"mode": "status"
}
},
"item": {
"ansible_job_id": "234197058177.18294",
"changed": true,
"failed": false,
"finished": 0,
"item": "dbuser mypass dbhost:1521:mysid",
"results_file": "/home/ansbladm/.ansible_async/234197058177.18294",
"started": 1
},
"msg": "could not find job",
"started": 1
}
Can you please let me know what the issue is with my Result check task, and how I can make sure it keeps retrying until the Check DB connection with DBPING task completes, thereby reporting success or failure based on the .rc of the shell module?
It would also be great if I could get the debug module to print all failed Check DB connection with DBPING tasks.
Kindly suggest.
Although this is a partial answer to the set of queries I had, I will accept a complete answer if it comes in the future.
Adding delegate_to: localhost to the Result check task resolves the behavior: the shell tasks were delegated to localhost, so the async job status files live there, and async_status must be delegated to the same host to find them (hence "could not find job").
I'm however still waiting for code that shows all failed tasks in the debug.
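A sketch of the full flow under those assumptions: ignore_errors lets every job finish polling before anything fails, a selectattr('failed') filter then lists the failures, and a final fail task gates the play. Adjust to your setup:

- name: Result check
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: _jobs
  until: _jobs.finished
  delay: 10
  retries: 20
  delegate_to: localhost  # same host the async jobs were started on
  ignore_errors: true     # collect all results before deciding
  with_items: "{{ dbcheck.results }}"

- name: Show all failed DB checks
  debug:
    msg: "Failed connections: {{ _jobs.results | selectattr('failed') | map(attribute='item.item') | list }}"

- name: Fail the play if any DB check returned rc != 0
  fail:
    msg: "One or more DBPING checks failed"
  when: _jobs.results | selectattr('failed') | list | length > 0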

How to install NVS via Ansible

I'm attempting to install NVS via Ansible but not having much luck. Following the instructions on the repo, it seems that I should first make sure to set the NVS_HOME environment variable, then clone the repo, then run the installer.
Thus, here is how I have it set up in my playbook:
- hosts: all
  remote_user: "{{ new_user }}"
  gather_facts: true
  environment:
    NVS_HOME: ~/.nvs
  tasks:
    - name: See if NVS is already installed
      lineinfile:
        path: ~/.zshrc
        line: export NVS_HOME="$HOME/.nvs"
        state: present
      check_mode: true
      register: zshrc
    - name: Clone the NVS repo
      git:
        repo: https://github.com/jasongin/nvs
        dest: "$NVS_HOME"
        version: v1.6.0
      ignore_errors: true
      when: (zshrc is changed) or (zshrc is failed)
    - name: Ensure NVS is installed
      shell: ". \"$NVS_HOME/nvs.sh\" install"
      when: (zshrc is changed) or (zshrc is failed)
With this, it first checks to see if NVS is installed and won't bother with the following two tasks if it is. Cloning is not a problem, so that leaves the final step of installing.
When Ansible gets to the final task, it outputs an error. Here's the result of that task from the Ansible debugger:
[localhost] TASK: Ensure NVS is installed (debug)> p result._result
{'_ansible_no_log': False,
'_ansible_parsed': True,
'changed': True,
'cmd': '. "$NVS_HOME/nvs.sh" install',
'delta': '0:00:00.002482',
'end': '2020-11-24 12:53:52.983307',
'failed': True,
'invocation': {'module_args': {'_raw_params': '. "$NVS_HOME/nvs.sh" install',
'_uses_shell': True,
'argv': None,
'chdir': None,
'creates': None,
'executable': None,
'removes': None,
'stdin': None,
'stdin_add_newline': True,
'strip_empty_ends': True,
'warn': True}},
'msg': 'non-zero return code',
'rc': 127,
'start': '2020-11-24 12:53:52.980825',
'stderr': "/bin/sh: 1: .: Can't open ~/.nvs/nvs.sh",
'stderr_lines': ["/bin/sh: 1: .: Can't open ~/.nvs/nvs.sh"],
'stdout': '',
'stdout_lines': []}
So I don't know what to do at this point. Any ideas?
The issue is with dest: "$NVS_HOME" in your git task: module arguments are not processed by a shell, so the environment variable is never expanded. I suggest declaring a variable and using it both in your environment and in your tasks:
- hosts: all
  remote_user: "{{ new_user }}"
  gather_facts: true
  vars:
    nvs_home: "~/.nvs"
  environment:
    NVS_HOME: "{{ nvs_home }}"
And in your git task:
- name: Clone the NVS repo
  git:
    repo: https://github.com/jasongin/nvs
    dest: "{{ nvs_home }}"
    version: v1.6.0
  ignore_errors: true
  when: (zshrc is changed) or (zshrc is failed)
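Note also that the stderr ("Can't open ~/.nvs/nvs.sh") shows /bin/sh failing to expand the tilde stored in NVS_HOME. One way around that, sketched here as an assumption since it relies on gathered facts, is to build an absolute path instead:

- hosts: all
  remote_user: "{{ new_user }}"
  gather_facts: true
  vars:
    nvs_home: "{{ ansible_env.HOME }}/.nvs"  # absolute path, no '~' left for the shell to expand
  environment:
    NVS_HOME: "{{ nvs_home }}"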

Filter Ansible output from lineinfile

I'm attempting to audit my systems via files copied to a single host. The default output is very verbose. I would like to see just the pertinent fields of the Ansible log output; that way, across 1000+ hosts, I can zero in on my problems more quickly. When my play is successful, I'd like to see just:
ok: u'/mnt/inventory/hostname999'
I have a playbook that looks like this:
- hosts: 'localhost'
  name: Playbook for the Audit for our infrastructure.
  gather_facts: False
  become: no
  connection: local
  roles:
    - { role: network, tags: network }
My network role's main.yml file looks like this:
---
- name: find matching pattern files
  find:
    paths: "/mnt/inventory"
    patterns: "hostname*"
    file_type: directory
  register: subdirs

- name: check files for net.ipv4.ip_forward = 0
  no_log: False
  lineinfile:
    name: "{{ item.path }}/sysctl.conf"
    line: "net.ipv4.ip_forward = 0"
    state: present
  with_items: "{{ subdirs.files }}"
  register: conf
  check_mode: yes
  failed_when: (conf is changed) or (conf is failed)

- debug:
    msg: "CONF OUTPUT: {{ conf }}"
But I get log output like this:
ok: [localhost] => (item={u'uid': 0, u'woth': False, u'mtime': 1546922126.0,
u'inode': 773404, u'isgid': False, u'size': 4096, u'roth': True, u'isuid':
False, u'isreg': False, u'pw_name': u'root', u'gid': 0, u'ischr': False,
u'wusr': False, u'xoth': True, u'rusr': True, u'nlink': 12, u'issock':
False, u'rgrp': True, u'gr_name': u'root', u'path':
u'/mnt/inventory/hostname999', u'xusr': True, u'atime': 1546930801.0,
u'isdir': True, u'ctime': 1546922126.0, u'wgrp': False, u'xgrp': True,
u'dev': 51, u'isblk': False, u'isfifo': False, u'mode': u'0555', u'islnk':
False})
Furthermore, my debug message of CONF OUTPUT never shows, and I have no idea why not.
I have reviewed https://github.com/ansible/ansible/issues/5564 and other articles, but they seem to refer to modules like shell commands that send output to stdout, which lineinfile does not.
But I get log output like this:
Then you likely want to use loop_control: with a label: "{{ item.path }}" child key:
- lineinfile:
    # as before
  with_items: "{{ subdirs.files }}"
  loop_control:
    label: "{{ item.path }}"
which will get you closer to what you want:
ok: [localhost] => (item=/mnt/inventory/hostname999)
Furthermore, my debug message of CONF OUTPUT never shows and I have no idea why not.
The best guess I have for that one is that maybe the verbosity needs to be adjusted:
- debug:
    msg: "CONF OUTPUT: {{ conf }}"
    verbosity: 0
but it works for me, so maybe there is something else special about your ansible setup. I guess try the verbosity: and see if it helps. Given what you are actually doing with that msg:, you may be much happier with just passing conf directly to debug:
- debug:
    var: conf
since it will render much more nicely, because Ansible knows it is a dict, rather than just effectively calling str(conf), which (as you saw above) does not format very nicely.
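Putting both suggestions together, a sketch of the audit task (parameters as in the question):

- name: check files for net.ipv4.ip_forward = 0
  lineinfile:
    name: "{{ item.path }}/sysctl.conf"
    line: "net.ipv4.ip_forward = 0"
    state: present
  check_mode: yes
  with_items: "{{ subdirs.files }}"
  loop_control:
    label: "{{ item.path }}"  # log only the path, not the whole file stat dict
  register: conf
  failed_when: (conf is changed) or (conf is failed)

- debug:
    var: conf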

How can I run nohup command in ansible playbook task?

I am pretty much a beginner with Ansible. I am trying to migrate Linux scripts that perform a remote server startup using the nohup command, and it does not work properly. The Ansible result states changed: true, but when I check whether the process is up, it isn't, so I have to restart it manually.
Thanks for your help.
Here is what I am doing:
- name: "Performing remote startup on server {{ SERVER }}"
command: nohup {{ JAVA_HOME }}/bin/java {{ DYNATRACE_AGENT_REUS }} -jar -Dengine={{ ENGINE_NAME }} name-{{ ENGINE_JAR_VERSION }}.jar --server.port={{ ENGINE_PORT }} >{{ CONSOLE_LOG }} 2>&1 &
args:
chdir: "{{ SOMEVAR_HOME }}"
async: 75
poll: 0
register: out
- debug:
msg: '{{ out }}'
Results:
TASK [Performing remote startup on server] ***********************************************************************************************************************************************************************************
changed: [servername]
TASK [debug] ********************************************************************************************************************************************************************************************************************************
ok: [servername] => {
"msg": {
"ansible_job_id": "160551828781.28405",
"changed": true,
"failed": false,
"finished": 0,
"results_file": "/home/admindirectory/.ansible_async/160551828781.28405",
"started": 1
}
}
Try the shell module instead of command: shell redirects do not work with the command module.
P.S. You are using Ansible in a strange way here. Consider creating a systemd unit (system- or user-level) and using the systemd module to enable/start it.
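A minimal sketch of the same task using shell (the variables are the ones from the question):

- name: "Performing remote startup on server {{ SERVER }}"
  shell: >
    nohup {{ JAVA_HOME }}/bin/java {{ DYNATRACE_AGENT_REUS }}
    -jar -Dengine={{ ENGINE_NAME }} name-{{ ENGINE_JAR_VERSION }}.jar
    --server.port={{ ENGINE_PORT }} > {{ CONSOLE_LOG }} 2>&1 &
  args:
    chdir: "{{ SOMEVAR_HOME }}"
  async: 75
  poll: 0
  register: out

And if you take the systemd route instead, the start task could look like this (the unit name is hypothetical; you would create the unit file beforehand):

- name: Start the engine service
  systemd:
    name: myengine  # hypothetical unit name
    state: started
    enabled: true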
