I set up a Vagrant box that runs my Couchbase DB. When creating the box, I want to initialize Couchbase with Puppet. When I run the following command (which initializes the Couchbase cluster) directly, it works.
vagrant@precise64:~$ /opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=Administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u Administrator -p foobar -d
INFO: running command: cluster-init
INFO: servers {'add': {}, 'failover': {}, 'remove': {}}
METHOD: POST
PARAMS: {'username': 'Administrator', 'password': 'foobar', 'port': 'SAME', 'initStatus': 'done'}
ENCODED_PARAMS: username=Administrator&password=foobar&port=SAME&initStatus=done
REST CMD: POST /settings/web
response.status: 200
METHOD: POST
PARAMS: {'memoryQuota': '256'}
ENCODED_PARAMS: memoryQuota=256
REST CMD: POST /pools/default
response.status: 200
SUCCESS: init localhost
vagrant@precise64:~$ echo $?
0
However, when I run the same command via Puppet, Puppet complains about a non-zero return value.
vagrant@precise64:~$ puppet apply --debug -e 'exec { "couchbase-init-cluster": command => "/opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u administrator -p foobar"}'
warning: Could not retrieve fact fqdn
debug: Creating default schedules
debug: Failed to load library 'selinux' for feature 'selinux'
debug: Failed to load library 'shadow' for feature 'libshadow'
debug: Failed to load library 'ldap' for feature 'ldap'
debug: /File[/home/vagrant/.puppet/var/state/state.yaml]: Autorequiring File[/home/vagrant/.puppet/var/state]
debug: /File[/home/vagrant/.puppet/var/log]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/var/state/last_run_report.yaml]: Autorequiring File[/home/vagrant/.puppet/var/state]
debug: /File[/home/vagrant/.puppet/var/state/graphs]: Autorequiring File[/home/vagrant/.puppet/var/state]
debug: /File[/home/vagrant/.puppet/var/run]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/ssl/private]: Autorequiring File[/home/vagrant/.puppet/ssl]
debug: /File[/home/vagrant/.puppet/ssl]: Autorequiring File[/home/vagrant/.puppet]
debug: /File[/home/vagrant/.puppet/var/facts]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/var/clientbucket]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/ssl/certificate_requests]: Autorequiring File[/home/vagrant/.puppet/ssl]
debug: /File[/home/vagrant/.puppet/var/state/last_run_summary.yaml]: Autorequiring File[/home/vagrant/.puppet/var/state]
debug: /File[/home/vagrant/.puppet/var/state]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/var/client_data]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/ssl/public_keys]: Autorequiring File[/home/vagrant/.puppet/ssl]
debug: /File[/home/vagrant/.puppet/var/lib]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/ssl/certs]: Autorequiring File[/home/vagrant/.puppet/ssl]
debug: /File[/home/vagrant/.puppet/var]: Autorequiring File[/home/vagrant/.puppet]
debug: /File[/home/vagrant/.puppet/var/client_yaml]: Autorequiring File[/home/vagrant/.puppet/var]
debug: /File[/home/vagrant/.puppet/ssl/private_keys]: Autorequiring File[/home/vagrant/.puppet/ssl]
debug: Finishing transaction 70097870601760
debug: Loaded state in 0.00 seconds
debug: Loaded state in 0.00 seconds
info: Applying configuration version '1387188181'
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: Exec[couchbase-init-cluster](provider=posix): Executing '/opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u administrator -p foobar'
debug: Executing '/opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u administrator -p foobar'
err: /Stage[main]//Exec[couchbase-init-cluster]/returns: change from notrun to 0 failed: /opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u administrator -p foobar returned 2 instead of one of [0] at line 1
debug: /Schedule[never]: Skipping device resources because running on a host
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Finishing transaction 70097871491620
debug: Storing state
debug: Stored state in 0.01 seconds
notice: Finished catalog run in 0.63 seconds
debug: Finishing transaction 70097871014480
debug: Received report to process from precise64
debug: Processing report from precise64 with processor Puppet::Reports::Store
Any ideas how I can run that command with Puppet?
I believe puppet apply -e takes a Puppet expression, not an arbitrary shell expression. You probably want something like:
puppet apply -e 'exec { "couchbase-init":
  command => "/opt/couchbase/bin/couchbase-cli cluster-init <rest of options>",
}'
I invite you to look at this blog post:
http://blog.couchbase.com/couchbase-cluster-minutes-vagrant-and-puppet
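Two further points, offered as hedged suggestions rather than a confirmed diagnosis. First, the working manual run used -u Administrator while the failing Puppet run used -u administrator; if couchbase-cli treats credentials as case-sensitive, that alone could explain the exit code 2. Second, cluster-init is not idempotent, so even a successful first run would fail on the next puppet apply. A minimal Puppet sketch that guards the exec (the server-list probe is an assumption about a command that succeeds once the cluster is initialized):
exec { 'couchbase-init-cluster':
  command   => '/opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost --cluster-init-username=Administrator --cluster-init-password=foobar --cluster-init-ramsize=256 -u Administrator -p foobar',
  # Assumed probe: exits 0 once the cluster is already set up,
  # so Puppet skips the init command on later runs.
  unless    => '/opt/couchbase/bin/couchbase-cli server-list --cluster=localhost -u Administrator -p foobar',
  logoutput => on_failure,
}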
I want to check if a package is installed, but I get an error. Why are the facts not available inside the role?
TASK [docker : Check if listed package is installed or not on Debian Linux family] *****************************************************************************************************************************************************************
fatal: [10.100.0.52]: FAILED! => {"changed": false, "msg": "Could not detect which package manager to use. Try gathering facts or setting the \"use\" option."}
My playbook:
- hosts: "{{ host }}"
  gather_facts: yes
  sudo: yes
  roles:
    - { role: "docker" }
and the relevant part of my role:
- name: Check if listed package is installed or not on Debian Linux family
  package:
    name: ferm
    state: present
  check_mode: true
  register: package_check

- name: Print execution results
  debug:
    msg: "Package is installed"
  when: package_check is succeeded
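The error message itself points at two ways out: make sure facts are actually gathered before the role runs, or set the module's "use" option. A minimal sketch of the second, assuming a Debian/Ubuntu host (the use parameter pins the package backend so the pkg_mgr fact is not needed):
- name: Check if listed package is installed or not on Debian Linux family
  package:
    name: ferm
    state: present
    use: apt          # bypasses the package-manager fact lookup that failed
  check_mode: true
  register: package_check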
This is my ansible playbook and I'm only running into an issue on the final task for starting and enabling Grafana.
---
- name: Install Grafana
  hosts: hosts
  become: yes
  tasks:
    - name: download apt key
      ansible.builtin.apt_key:
        url: https://packages.grafana.com/gpg.key
        state: present
    - name: Add Grafana repo to sources.list
      ansible.builtin.apt_repository:
        repo: deb https://packages.grafana.com/oss/deb stable main
        filename: grafana
        state: present
    - name: Update apt cache and install Grafana
      ansible.builtin.apt:
        name: grafana
        update_cache: yes
    - name: Ensure Grafana is started and enabled
      ansible.builtin.systemd:
        name: grafana-server
        state: started
        enabled: yes
This is the error I received:
TASK [Ensure Grafana is started and enabled]
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
This is also the configuration of my hosts file just in case:
[hosts]
localhost
[hosts:vars]
ansible_connection=local
ansible_python_interpreter=/usr/bin/python3
I'm pretty much just trying to have it run these two commands from my bash script:
sudo systemctl start grafana-server
sudo systemctl enable grafana-server.service
Got it sorted out: it turns out my system wasn't booted with systemd as the init system, so I changed the Ansible module from ansible.builtin.systemd to ansible.builtin.sysvinit.
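For reference, a minimal sketch of that substitution, assuming the SysV init script carries the same grafana-server name:
- name: Ensure Grafana is started and enabled (non-systemd init)
  ansible.builtin.sysvinit:
    name: grafana-server
    state: started
    enabled: yes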
I want to execute a playbook with tags so that I can run only part of it, but the variable stored in the register is empty.
---
- hosts: node
  vars:
    service_name: apache2
  become: true
  tasks:
    - name: Start if Service Apache is stopped
      shell: service apache2 status | grep Active | awk -v N=2 '{print $N}'
      args:
        warn: false
      register: res
      tags:
        - toto
    - name: Start Apache because service was stopped
      service:
        name: "{{ service_name }}"
        state: started
      when: res.stdout == 'inactive'
      tags:
        - toto
    - name: Check for apache status
      service_facts:
    - debug:
        var: ansible_facts.services.apache2.state
      tags:
        - toto2
$ ansible-playbook status.yaml -i hosts --tags="toto,toto2"
PLAY [nodeOne] ***************************************************************************
TASK [Start if Service Apache is stopped] ************************************************
changed: [nodeOne]
TASK [Start Apache because service was stopped] ******************************************
skipping: [nodeOne]
TASK [debug] *****************************************************************************
ok: [nodeOne] => {
"ansible_facts.services.apache2.state": "VARIABLE IS NOT DEFINED!"
}
At the end of the run, I don't get the output of the Apache status.
Q: "The variable stored in the register is empty."
A: A tag is missing in the task service_facts. As a result, this task is skipped and ansible_facts.services is not defined. Fix it for example
- name: Check for apache status
  service_facts:
  tags: toto2
Notes
1) With a single tag, it's not necessary to declare a list.
2) The concept of idempotency makes the condition redundant.
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
The service module is idempotent. For example, the task below
- name: Start Apache
  service:
    name: "{{ service_name }}"
    state: started
  tags:
    - toto
will make changes only if the service has not been started yet. The result of the task will be a started service. Once the service is started, the task will report [ok] and not touch the service. In this respect, it does not matter what the previous state of the service was, i.e. there is no reason to run the task conditionally.
3) The module service_facts works as expected. For example
- service_facts:

- debug:
    var: ansible_facts.services.apache2.state
gives
"ansible_facts.services.apache2.state": "running"
Goal: check the status of the "filebeat" and "telegraf" services with Ansible on 20 production servers, so that I can get an alert if a service is stopped on any of them.
---
- hosts: ALL
  tasks:
    - name: checking service status
      command: systemctl status "{{ item }}"
      with_items:
        - filebeat
        - telegraf
      register: result
      ignore_errors: yes

    - debug:
        var: result
I got the output below:
ok: [10.5.10.10] => {
"result.results[0].stdout": "* filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.\n Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)\n Active: active (running) since Tue 2019-08-06 11:07:34 IST; 3 weeks 6 days ago\n Docs: https://www.elastic.co/products/beats/filebeat\n Main PID: 102961 (filebeat)\n CGroup: /system.slice/filebeat.service\n `-102961 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat\n\nWarning: Journal has been rotated since unit was started. Log output is incomplete or unavailable."
}
ok: [10.5.10.11] => {
"result.results[0].stdout": "* filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.\n Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)\n Active: inactive (dead)\n Docs: https://www.elastic.co/products/beats/filebeat"
}
How can I store these outputs in a file on my Ansible server, so that I can raise an alert when a service is not running on any host?
Why don't you parse the output and raise a red flag when one of them is not active?
Answering your question:
What I would do is save the output to a file on each host, copy that file back to your Ansible server, and then append all the results.
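A minimal sketch of a simpler variant that skips the intermediate file: write one line per host and service straight to a file on the control node (the log path is an assumption; lineinfile, loop, and delegate_to are standard):
- name: Record service status on the Ansible server
  lineinfile:
    path: /tmp/service_status.log   # hypothetical location on the control node
    line: "{{ inventory_hostname }} {{ item.item }}: {{ 'running' if item.rc == 0 else 'NOT RUNNING' }}"
    create: yes
  loop: "{{ result.results }}"
  delegate_to: localhost
  throttle: 1                       # serialize writes so hosts do not interleave
A grep for "NOT RUNNING" on that file can then drive the alert.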
I'm trying to use Ansible to build a Docker image locally, but I'm running into problems.
- hosts: all
  tasks:
    - name: Build Docker image
      local_action:
        module: docker_image
        path: .
        name: SlothSloth
        state: present
And my /etc/ansible/hosts contains:
localhost ansible_connection=local
But when I try to run it I get:
TASK: [Build Docker image] ****************************************************
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='failed to import python module: No module named docker.client'
FATAL: all hosts have already failed -- aborting
If you are using a virtualenv, you are probably running Ansible with /usr/bin/python by default. To bypass this behavior, you have to define the variable "ansible_python_interpreter".
Try:
- hosts: all
  vars:
    - ansible_python_interpreter: python
  tasks:
    - name: Build Docker image
      local_action:
        module: docker_image
        path: .
        name: SlothSloth
        state: present
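The import error also hints that the docker-py client library is missing from whichever interpreter Ansible ends up using; a sketch that installs it first (the package name docker-py matches that era of the docker_image module, and pip availability is assumed):
- hosts: all
  vars:
    - ansible_python_interpreter: python
  tasks:
    - name: Install the Python Docker client
      pip:
        name: docker-py       # provides the docker.client module
        state: present
    - name: Build Docker image
      local_action:
        module: docker_image
        path: .
        name: SlothSloth
        state: present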