Can't start adminctl because of ## placeholders in admin.conf from Connections 6.5 IHS

I made a headless installation of Connections 6.5, which itself works, but I can't start adminctl from the IHS bin directory:
# cd /opt/IBM/HTTPServer/bin/
# ./adminctl start
Syntax error on line 7 of /opt/IBM/HTTPServer/conf/admin.conf:
Port must be specified
Line 7 looks like a variable that didn't get parsed properly when the IHS was configured:
# grep Listen ../conf/admin.conf
Listen ##AdminPort##
There are also other such ## variables in the config file:
# grep ## ../conf/admin.conf
Listen ##AdminPort##
User ##SetupadmUser##
Group ##SetupadmGroup##
ServerName cnx65.internal:##AdminPort##
Why are those values not replaced correctly, for example with Listen 8008 (the default IHS admin port)?
How I configured the IHS
The machine was provisioned using Ansible, where the following shell command runs the IHS plugin configuration:
./wctcmd.sh -tool pct -createDefinition -defLocPathname /opt/IBM/WebSphere/Plugins -response /tmp/plugin-response-file.txt -defLocName webserver1
Response file /tmp/plugin-response-file.txt:
configType=remote
enableAdminServerSupport=true
enableUserAndPass=true
enableWinService=false
ihsAdminCreateUserAndGroup=true
ihsAdminPassword=adminihs
ihsAdminPort=8008
ihsAdminUnixUserGroup=ihsadmin
ihsAdminUnixUserID=ihsadmin
mapWebServerToApplications=true
wasMachineHostname=cnx65.internal
webServerConfigFile1=/opt/IBM/HTTPServer/conf/httpd.conf
webServerDefinition=webserver1
webServerHostName=cnx65.internal
webServerOS=Linux
webServerPortNumber=80
webServerSelected=IHS
As you can see, all variables required for the substitution were present, so the tool should be able to replace ##AdminPort## with the value 8008.

wctcmd.sh just creates the WAS definition for the IHS, but doesn't prepare the admin server. We need to do this manually with postinst and setupadm as documented here. This is not required only for zip installations: my installation was done using Installation Manager, and the admin server doesn't work without those steps.
I automated it in Ansible like this:
- name: Check if admin config is properly parsed
  become: yes
  shell: grep '##AdminPort##' {{ http_server.target }}/conf/admin.conf
  register: admin_conf_check
  # rc = 0: found, rc = 1: not found but file exists, rc = 2: file not found
  failed_when: admin_conf_check.rc != 0 and admin_conf_check.rc != 1
  changed_when: False

- set_fact:
    admin_conf_is_configured: "{{ admin_conf_check.rc == 1 }}"

- name: Parse IHS admin config
  become: yes
  # plugin_config_file is defined in http-plugin.yml
  shell: |
    ./bin/postinst -i $PWD -t setupadm -v ADMINPORT={{ http_server.admin_port }} -v SETUPADMUSER=nobody -v SETUPADMGROUP=nobody
    ./bin/setupadm -usr nobody -grp nobody -cfg conf/httpd.conf -plg {{ plugin_config_file }} -adm conf/admin.conf
  args:
    chdir: "{{ http_server.target }}"
  environment:
    LANG: "{{ system_language }}"
  register: ihs_setup
  # setupadm returns 90 if it was successful: "Script Completed RC(90)"
  failed_when: ihs_setup.rc != 90
  when: not admin_conf_is_configured

- name: Create htpasswd for admin config
  become: yes
  shell: ./bin/htpasswd -c conf/admin.passwd adminihs
  args:
    chdir: "{{ http_server.target }}"
    creates: "{{ http_server.target }}/conf/admin.passwd"
  environment:
    LANG: "{{ system_language }}"
http_server.target is the IHS base path, e.g. /opt/IBM/HTTPServer
http_server.admin_port is the IBM default value 8008
plugin_config_file is set to /opt/IBM/WebSphere/Plugins/config/{{ http_server.name }}/plugin-cfg.xml, where http_server.name matches the definition name in WAS (webserver1 in my example)
system_language is set to en_US.utf8 to make sure that we get English error messages for output validation (when required), independent of the configured OS language
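Put together, these variables could live in a group_vars file, for example. This is just a sketch: the file name and structure are my assumption, the values are the ones listed above.
# group_vars/ihs.yml (hypothetical location)
http_server:
  target: /opt/IBM/HTTPServer
  admin_port: 8008
  name: webserver1
plugin_config_file: "/opt/IBM/WebSphere/Plugins/config/{{ http_server.name }}/plugin-cfg.xml"
system_language: en_US.utf8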
After running those configuration tools, we can see that all placeholders were replaced by their corresponding values:
# grep -i listen ../conf/admin.conf
Listen 8008
Running the admin server by executing ./adminctl start in the bin directory now works as expected.
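If you want that last step in Ansible as well, here is a minimal sketch, assuming the same http_server variables as above (the task names are mine):
- name: Start IHS admin server
  become: yes
  shell: ./bin/adminctl start
  args:
    chdir: "{{ http_server.target }}"

- name: Wait until the admin server listens on its port
  wait_for:
    port: "{{ http_server.admin_port }}"
    timeout: 30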

I heard from folks in the lab at IBM that webServerSelected=IHS is not recognized and it must be webServerSelected=ihs (lowercase):
https://www.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/tins_pctcl_using.html
webServerSelected
    Specifies the web server to be configured. Specify only one web server to configure.
    apache22 - Apache Web Server Version 2.2 (64-bit configuration not supported on Windows)
    apache24 - Apache Web Server Version 2.4 (64-bit configuration not supported on Windows)
    ihs - IBM® HTTP Server (64-bit configuration not supported on Windows)
    ...
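So the relevant line in the response file above would become:
webServerSelected=ihs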

Related

Ansible Perform Tasks in localhost on Multiple Hosts

I'm planning an Ansible playbook that queries the version number of a specific .exe file on Windows hosts, generates a diff between the version installed on each Windows host and the latest version stored on the Ansible controller Linux machine ("localhost"), and deploys that diff to the Windows hosts.
My playbook looks something like:
- hosts: winClients
  gather_facts: False
  tasks:
    - name: check exe file version
      win_file_version:
        path: C:\my.exe
      register: exe_file_version

    - name: Set client version
      set_fact:
        winClientExeVersion: "{{ exe_file_version.win_file_version.file_version }}"
Then, on localhost, I have a folder for each version, and I'd like to generate the diff between the latest version and the version on the winClients. The versions are stored on localhost in folders named after the version numbers, e.g. MyVersions/1.0.0.0/abc.exe, abc.dll, aaa.txt, ... and MyVersions/1.0.0.1/abc.exe, abc.dll, aaa.txt, ... etc. And I have a special folder, MyVersions/LatestVersion/, that always contains the latest version. So I need something like:
- hosts: localhost
  gather_facts: False
  tasks:
    - name: check latest version on server using PEV
      shell: peres -v /home/user/MyVersions/LatestVersion/My.exe | awk '{print $3}'
      register: latest_file_version

    - name: Set server version
      set_fact:
        serverLatestVersion: "{{ latest_file_version.stdout }}"

    - name: If versions differ, generate diff between client and latest versions to temp DiffFolder, then delete empty folders
      shell: rsync -rvcm --compare-dest=../{{ hostvars.winClients.winClientExeVersion }} . ../DiffFolder && find ../DiffFolder -depth -type d -empty -delete
      args:
        chdir: /home/user/MyVersions/LatestVersion
      when: serverLatestVersion != hostvars.winClients.winClientExeVersion
Then I copy the generated diff to the Windows clients using win_copy.
Now, all this works OK when winClients represents only one specific client.
My problem is how to do this for a group of clients, i.e. when winClients represents a group instead of one specific computer. How can I generate a diff for each client separately (each client might have a different version installed), based on its previously retrieved version number, when that version differs from the latest version on the server?
The winClients section above already assigns a version number to each of the clients; the problem is with the localhost tasks and their when condition.
It turns out to be pretty simple: with_items does the trick, and the loop item can be referenced later in when as well.
Example:
- hosts: localhost
  gather_facts: False
  tasks:
    - name: Notify if versions differ
      debug:
        msg: "client and server versions differ: Server: {{ serverLatestVersion }}, Client: {{ hostvars[item]['winClientExeVersion'] }}"
      with_items: "{{ groups['clients'] }}"
      when: serverLatestVersion != hostvars[item]['winClientExeVersion']
This will loop through the clients in the clients group and, for each client that has a different version than the server, print a debug message.
The same method can be used for every task (copy, rsync, etc.) that should be executed only against the relevant clients, based on a when condition, as sketched below.
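Applied to the rsync task from the question, that could look like this. A sketch: the per-client DiffFolder-{{ item }} naming is my assumption, so the diffs of different clients don't overwrite each other.
- name: Generate per-client diff when client and server versions differ
  shell: rsync -rvcm --compare-dest=../{{ hostvars[item]['winClientExeVersion'] }} . ../DiffFolder-{{ item }} && find ../DiffFolder-{{ item }} -depth -type d -empty -delete
  args:
    chdir: /home/user/MyVersions/LatestVersion
  with_items: "{{ groups['clients'] }}"
  when: serverLatestVersion != hostvars[item]['winClientExeVersion']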

Restart the apache service on a server based on the varnish health status

I have 3 instances running on CentOS 7:
1. Ansible Server
2. Varnish Server
3. Apache httpd server
I want to restart the Apache service when the Varnish service is up but the Varnish health status shows "Sick" because the Apache service is stopped.
I have already created a playbook and defined both hosts, but it is not working:
- name: Check Backend Nodes
  shell: varnishadm backend.list | awk 'FNR==2{ print $4 }'
  register: status1

- name: print backend status
  debug:
    msg: "{{ status1.stdout_lines }}"

#tasks:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: status1.stdout_lines == 'Sick'
This is most likely because your when condition has a glitch: as you can see in the documentation, stdout_lines is always a list of strings, while your condition compares it to a string.
So your fix could actually be as simple as checking if the string Sick is among the list stdout_lines:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: "'Sick' in status1.stdout_lines"
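For completeness, a sketch of the whole check with that fix applied; changed_when: false is my addition so the read-only check doesn't report a change on every run:
- name: Check Backend Nodes
  shell: varnishadm backend.list | awk 'FNR==2{ print $4 }'
  register: status1
  changed_when: false

- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: "'Sick' in status1.stdout_lines"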

Ansible Azure Dynamic inventory with tags does not work

I am using Ansible version 2.8.1 and trying to identify the servers in a resource group based on tags. Below is my code.
I have 2 VMs in test-rg (testvm1, testvm2); only testvm1 has the tag nginx.
I have set the environment variable AZURE_TAGS=nginx.
azureinv.yml
plugin: azure_rm
include_vm_resource_groups:
  - test-rg
nginx.yml
---
- name: Install and start Nginx on an Azure virtual machine
  hosts: all
  become: yes
  tasks:
    - name: echo test
      shell: "echo test"
ansible-playbook -i ./azureinv.yml nginx.yml -u test
Output: I see it running the echo on both servers (testvm1, testvm2), even though only testvm1 has the nginx tag.
Can someone please help me?
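One possible direction: the azure_rm inventory plugin supports a conditional_groups option that can put tagged VMs into their own group. A sketch, not a verified fix: the group name nginx_hosts is mine, and I'm assuming the plugin exposes tags as a hostvar, as its documentation describes.
plugin: azure_rm
include_vm_resource_groups:
  - test-rg
conditional_groups:
  # hosts whose Azure tags include the key "nginx" land in this group
  nginx_hosts: "'nginx' in (tags | default({}))"
The play would then target hosts: nginx_hosts instead of hosts: all.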

How to instruct Ansible for remote node command to find a text resource file on remote node

I am trying to upgrade some JBoss servers for my application running on remote nodes using Ansible. Through Ansible I can invoke a JBoss server start script which has to upgrade and start my server on the remote node.
The problem is that the script internally takes a configuration property file as an argument, which resides on the remote server on which the upgrade actually runs (there are many such servers, and every server has its own configuration property file on the remote node, so I cannot keep these files locally on the Ansible controller machine). However, Ansible expects the resource file to be available on the Ansible controller (locally) and fails to do the upgrade.
Is there any way I can instruct Ansible to find the particular resource or file directly on the remote node, rather than finding it locally and then copying every resource to the remote node for execution?
Ansible Playbook file contents
---
- name: Upgrade Server
  hosts: remote_host
  connection: ssh
  vars:
    server_version: 188
    server_name: UpgradeTest
  tasks:
    - name: Start server
      shell: "{{ jboss_home }}/bin/startJBossServer.sh {{ server_name }} >/dev/null 2>&1 &"

    - name: Wait for port {{ server_http_port }} to come up
      wait_for: host="localhost" port="{{ server_http_port }}" delay=15 timeout=300 state=started

    - name: Test server is up and running
      action: uri url="http://localhost:{{ server_http_port }}/{{ server_name }}" return_content=yes timeout=90
      register: webpage
      until: webpage.status == 200
      retries: 25
      delay: 5
The file startJBossServer.sh contains the following command:
nohup "${JBOSS_HOME}/bin/standalone.sh" -Djboss.server.base.dir=${JBOSS_HOME}/${i_server_name} -b=0.0.0.0 -c=#fm.config.xml# -P=${start_server_properties_file} </dev/null > "/dev/null" 2>&1 &
As you can see, we need ${start_server_properties_file} in the -P argument, which is actually available on the remote node server; however, Ansible expects the same resource to be available on the local machine and hence fails to run the command.
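One thing worth noting: shell tasks run entirely on the remote node, so any path inside the command refers to the remote filesystem, and nothing has to exist on the controller. A sketch of feeding the script a remote path via its environment; the variable name matches the script's ${start_server_properties_file}, but whether the script would pick it up from its environment, and the path itself, are my assumptions:
- name: Start server using a properties file that exists only on the remote node
  shell: "{{ jboss_home }}/bin/startJBossServer.sh {{ server_name }} >/dev/null 2>&1 &"
  environment:
    # hypothetical: assumes startJBossServer.sh falls back to this environment variable
    start_server_properties_file: "{{ jboss_home }}/{{ server_name }}/start_server.properties"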

How to get the host name of the current machine as defined in the Ansible hosts file?

I'm setting up an Ansible playbook to set up a couple servers. There are a couple of tasks that I only want to run if the current host is my local dev host, named "local" in my hosts file. How can I do this? I can't find it anywhere in the documentation.
I've tried this when statement, but it fails because ansible_hostname resolves to the host name generated when the machine is created, not the one you define in your hosts file.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: ansible_hostname == "local"
The necessary variable is inventory_hostname.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: inventory_hostname == "local"
It is somewhat hidden in the documentation at the bottom of this section.
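For illustration, an inventory entry like the following (hypothetical) is what makes inventory_hostname == "local" true, regardless of the machine's real hostname:
# hosts file (hypothetical entry)
[dev]
local ansible_host=127.0.0.1 ansible_connection=local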
You can limit the scope of a playbook by changing the hosts header in its plays without relying on your special host label ‘local’ in your inventory. Localhost does not need a special line in inventories.
- name: run on all except local
  hosts: all:!local
This is an alternative:
- name: Install this only for local dev machine
  pip: name=pyramid
  delegate_to: localhost
