Apache service stays inactive on start in Chef recipe on Red Hat Linux - ruby

I have a customer who has 500 web sites on a server, and I am trying to automate his deployments. Sometimes when he deploys configurations to the server, Apache won't start even though the syntax is OK (I check it before starting the server again).
This is the workflow of the executed recipes:
Stop the server
Download the configurations
Copy the configurations (all 500)
Check the error_log (apachectl configtest)
(if the syntax is OK) Start the server
The problem starts at the last step, because the service that should start the server is inactive:
Executing /bin/systemctl is-active httpd
---- Begin output of /bin/systemctl is-active httpd ----
DEBUG: STDOUT: inactive
DEBUG: STDERR:
---- End output of /bin/systemctl is-active httpd ----
DEBUG: Ran /bin/systemctl is-active httpd returned 3
DEBUG: Executing /bin/systemctl is-enabled httpd
DEBUG: ---- Begin output of /bin/systemctl is-enabled httpd ----
DEBUG: STDOUT: enabled
DEBUG: STDERR:
DEBUG: ---- End output of /bin/systemctl is-enabled httpd ----
DEBUG: Ran /bin/systemctl is-enabled httpd returned 0
DEBUG: Executing /bin/systemctl start httpd
DEBUG: ---- Begin output of /bin/systemctl start httpd ----
DEBUG: STDOUT:
DEBUG: STDERR: Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
DEBUG: ---- End output of /bin/systemctl start httpd ----
DEBUG: Ran /bin/systemctl start httpd returned 1
Does anyone know how to get the service active again?
service 'httpd' do
  supports :status => true
  action :start
  ignore_failure true
end
I want to emphasize that this does not happen every time.

Related

How to start the apache2 service using Ansible when it is in an inactive state

apache2 is already installed on my target node, and I have to start the apache2 service whenever it is in an inactive state. How do I implement this kind of solution in Ansible?
Command: /sbin/service apache2 status
Only if the output shows inactive do I then have to run the command below:
Command: /sbin/service apache2 start
Using ad hoc Ansible commands, you should be able to run:
ansible <host_name_or_ip> -a "service httpd start"
or as part of a role:
- name: Ensure that apache2 is started
  become: true
  service: name=httpd state=started
Ansible is about describing states in tasks that always produce the same result (idempotence). Your problem then comes down to:
apache must be active on boot
apache must be started
We could also add, in a prior separate task:
the apache package must be installed on the system
A sample playbook would be
---
- name: Install and enable apache
  hosts: my_web_servers
  vars:
    package_name: apache2 # adapt to your distribution
  tasks:
    - name: Make sure apache is installed
      package:
        name: "{{ package_name }}"
        state: present
    - name: Make sure apache is active on boot and started
      service:
        name: "{{ package_name }}"
        enabled: true
        state: started
Running this playbook against your target host (make sure you change the hosts param to a relevant group or machine) will:
For the install task:
Install apache if it is not already installed and report "Changed"
Report "Ok" if apache is already installed
For the service task:
Enable and/or start apache if needed and report "Changed"
Report "Ok" if apache is already enabled and started.
Note: in my example, I used the agnostic package and service modules. If you wish, you can replace them with their specific equivalents (apt, yum ... and systemd, sysvinit ...).
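For instance, a minimal sketch of the distribution-specific variant (my assumption: a Debian/Ubuntu host running systemd; the task names are made up):
# Sketch of the distribution-specific equivalent (assumes Debian/Ubuntu with systemd).
- name: Make sure apache is installed (apt instead of package)
  apt:
    name: apache2
    state: present
- name: Make sure apache is enabled and started (systemd instead of service)
  systemd:
    name: apache2
    enabled: true
    state: started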

Is there an option in Ansible for getting the squid service status?

I'm setting up a new playbook and want to support both service status and systemctl status.
I want to monitor a service called Squid on Ubuntu 14.04, which has only service, and on 16.04, which has both service and systemctl.
I have tried the approach from
How to get service status by Ansible?
but it only gives me an answer for either service or systemctl.
- hosts: servers
  gather_facts: true
  tasks:
    - name: Run the specified shell command
      shell: 'service squid status'
      register: result
    - name: Display the outcome
      debug:
        msg: '{{ result }}'
I'm trying to get the status from both service and systemctl in the same playbook run, but the actual result from that .yml file is only explicit for either systemctl status or service status.
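One possible approach (a sketch of my own, not taken from the thread) is to gate each check on the ansible_service_mgr fact, so a single play covers both the upstart host (14.04) and the systemd host (16.04):
# Sketch: run the matching status command per init system and show whichever
# result was collected; the squid service name is taken from the question.
- hosts: servers
  gather_facts: true
  tasks:
    - name: Check squid via service (non-systemd hosts)
      command: service squid status
      register: squid_service_status
      failed_when: false
      when: ansible_service_mgr != 'systemd'
    - name: Check squid via systemctl (systemd hosts)
      command: systemctl status squid
      register: squid_systemctl_status
      failed_when: false
      when: ansible_service_mgr == 'systemd'
    - name: Display the service result
      debug:
        var: squid_service_status
      when: ansible_service_mgr != 'systemd'
    - name: Display the systemctl result
      debug:
        var: squid_systemctl_status
      when: ansible_service_mgr == 'systemd'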

Ansible Playbook should stop execution if port is open

I am installing karaf on a VM, and during the installation process I want to verify whether a karaf instance is already running: if the port is open, the installation should stop and the Ansible run should exit. Currently it is doing the opposite:
- name: Wait for my-service to start
  wait_for:
    port: "{{ karaf_port }}"
    timeout: 30
    msg: "Port 8101 is not accessible."
You could use this task to check if the application is already running. If running, it will abort the playbook.
- name: Check if service is running by querying the application port
  wait_for:
    port: 22
    timeout: 10
    state: stopped
    msg: "Port 22 is accessible, application is already installed and running"
  register: service_status
Basically, you are using the module with state: stopped, and Ansible expects the port to stop listening within timeout seconds. If the port stays up for 10 seconds (it will stay up, since nobody stops the already installed application), Ansible will exit with an error.
Change the port to 23 and you will see that the playbook continues to the next step:
tasks:
  - name: Check if service is running by querying the application port
    wait_for:
      port: 23
      timeout: 10
      state: stopped
      msg: "Port 23 is accessible, application is already installed and running"
    register: service_status
  - name: print
    debug:
      var: service_status
You don't need the register clause; I just added it for my test.
Hope it helps.
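An alternative sketch (my own variation, not part of the answer above) is to probe the port without failing the task and then abort explicitly with the fail module, which gives a clearer message:
# Sketch: probe the port, then abort with an explicit message if it is open.
# Only karaf_port comes from the question; the task and variable names are mine.
- name: Probe the karaf port
  wait_for:
    port: "{{ karaf_port }}"
    timeout: 5
    state: started
  register: karaf_port_check
  ignore_errors: true
- name: Abort because karaf is already listening
  fail:
    msg: "Port {{ karaf_port }} is open; karaf appears to be installed and running already."
  when: karaf_port_check is not failed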

How to run a command after an error occurs in an Ansible handler

I have a simple playbook that copies some config files for an nginx server and then issues a service nginx reload command:
---
- hosts: mirrors
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx
    - name: Copy vhost file
      copy: src=vhost/test.vhost dest=/etc/nginx/sites-available/ mode=0644
    - name: Symlink vhost file to enabled sites
      file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
      notify:
        - reload nginx
  handlers:
    - name: start nginx
      service: name=nginx state=started
    - name: reload nginx
      service: name=nginx state=reloaded
Now, the problem is that the test.vhost file might contain an error, and nginx won't reload properly. If that happens, I'd like to run the command systemctl status nginx.service and see its output while the Ansible playbook executes. How can I add this "error handler"?
If the handler output tells you about the failure, you can put together a nice bit of error handling with the ignore_errors and register keywords.
Set your handler not to fail and end the play when it gets an error code back, but to register the output as a variable instead:
handlers:
  - name: reload nginx
    service: name=nginx state=reloaded
    ignore_errors: True
    register: nginx_reloaded
Then, back in the tasks section, call flush_handlers to execute all the queued handlers that would otherwise wait until the end of the play. Add a task that asks the server about the status of Nginx when your nginx_reloaded variable has a non-zero return code, and then print that information with the debug module:
- name: Symlink vhost file to enabled sites
  file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
  notify:
    - reload nginx
- meta: flush_handlers
- name: Get the Nginx service status if it failed to reload
  command: "systemctl status nginx.service"
  register: nginx_status
  when: nginx_reloaded.rc != 0
- debug: var=nginx_status
  when: nginx_reloaded.rc != 0
Of course, a better solution would be to fix those errors in your configuration files - and figuring them out through Ansible output has limitations compared to SSHing onto the target host to check it out.
Why not use the Ansible modules' ability to validate configuration?
http://docs.ansible.com/ansible/template_module.html
http://docs.ansible.com/ansible/replace_module.html
http://docs.ansible.com/ansible/lineinfile_module.html
For example:
- template: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
- lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s'
- replace: dest=/etc/apache/ports regexp='^(NameVirtualHost|Listen)\s+80\s*$' replace='\1 127.0.0.1:8080' validate='/usr/sbin/apache2ctl -f %s -t'
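Applied to the nginx question above, a minimal sketch (my assumption: the whole nginx.conf is templated, since nginx -t can only validate a complete configuration, not a lone vhost fragment):
# Sketch: render the config and only install it if nginx's own syntax check passes.
# The template name and paths are assumptions.
- name: Deploy nginx.conf only if it passes the syntax check
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    validate: '/usr/sbin/nginx -t -c %s'
  notify:
    - reload nginx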

Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command in the box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
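For reference, once the underlying bug was fixed (Ansible 2.0+ according to the note above), the native handler form should work again; a minimal sketch using the newer become syntax (my phrasing, not from the answer):
# Sketch: the plain service-module handler as it would normally be written.
- name: restart ssh
  become: true
  service:
    name: ssh
    state: restarted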
