I installed elasticsearch using this role.
I do the following steps in the playbook:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
And then vagrant up es-test-cluster, where es-test-cluster is the name of the VM I define in the Vagrantfile. I have given it the private IP 192.162.12.14. The VM boots perfectly, and after running sudo service elasticsearch status I see that the service is running on 192.162.12.14:9200, which is correct. But if I run vagrant halt es-test-cluster and then vagrant up es-test-cluster, I see that the elasticsearch service is not running any more.
I thought of doing this:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
  tasks:
    - name: Starting elasticsearch service if not running
      service: name=elasticsearch state=started
but even this does not help; it only runs when I boot the VM for the first time.
How can I start the service whenever I run vagrant up?
This is for Ubuntu 14.04.
You need to "enable" the service so that it starts on boot. This can be done with a single flag in your Ansible task:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
  tasks:
    - name: Starting elasticsearch service if not running
      service:
        name: elasticsearch
        state: started
        enabled: true
Also, depending on which ansible-elasticsearch role you are using, it may already have a flag that you can pass to enable the service without the extra task. I know the role I used did.
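If you want the playbook itself to confirm the result, here is a hedged sketch. It assumes Ansible >= 2.5 (for the service_facts module) and that the sysvinit script on Ubuntu 14.04 registers the service under the plain name elasticsearch; check the collected facts on your box before relying on the key.

```yaml
# Sketch: verify the service state after the role runs.
# Assumes Ansible >= 2.5 (service_facts) and that the init script
# registers the service under the name "elasticsearch".
- name: Collect service states
  service_facts:

- name: Assert elasticsearch is running
  assert:
    that:
      - ansible_facts.services['elasticsearch'].state == 'running'
```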
These solutions are for Debian packages.
You can include the following in the Vagrantfile:
config.vm.provision "shell", path: "restart_es.sh"
where restart_es.sh contains:
#!/bin/bash
SERVICE=elasticsearch
sudo update-rc.d $SERVICE defaults 95 10
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    echo "$SERVICE is not running"
    sudo /etc/init.d/$SERVICE start
fi
OR
config.vm.provision "shell", path: "restart_es.sh", run: "always"
where restart_es.sh contains:
#!/bin/bash
SERVICE=elasticsearch
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    echo "$SERVICE is not running"
    sudo service $SERVICE start
fi
OR
config.vm.provision "shell", inline: "sudo service elasticsearch start", run: "always"
All three start the elasticsearch service when you restart the vagrant box.
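If you already provision with Ansible from the Vagrantfile, a fourth option is to mark the Ansible provisioner itself run: "always", so an idempotent playbook (with a service task like the one in the other answer) re-runs on every vagrant up. A sketch, assuming the playbook sits at playbook.yml next to the Vagrantfile:

```ruby
# Sketch: re-run the (idempotent) playbook on every `vagrant up`.
Vagrant.configure("2") do |config|
  config.vm.define "es-test-cluster" do |node|
    node.vm.network "private_network", ip: "192.162.12.14"
    node.vm.provision "ansible", run: "always" do |ansible|
      ansible.playbook = "playbook.yml"
    end
  end
end
```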
Related
I am trying to clear a hands-on exercise on HackerRank, where the task is to stop and start the service named ssh using the service module. I have used the code below.
- name: "Stop ssh"
  service:
    name: ssh
    state: stopped

- name: "start ssh"
  service:
    name: ssh
    state: started
Can you please guide me on how to clear the hands-on?
The service handling ssh on Linux is typically called sshd, where the d stands for daemon, as with a lot of other services (on Debian-based systems the init script may simply be named ssh).
So your correct tasks would be:
- name: Stop ssh
  service:
    name: sshd
    state: stopped

- name: Start ssh
  service:
    name: sshd
    state: started
This said, since Ansible connects through ssh, I am unsure how this will really react when the service is stopped.
Most Linux services also have a restart instruction, which you can use in Ansible with the state value restarted.
- name: Restart ssh
  service:
    name: sshd
    state: restarted
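On the connection concern: one common pattern is to fire the restart in the background so the task does not wait on the very connection it is cutting. A sketch using the standard async/poll task keywords; whether you actually need this depends on how the grader runs the playbook:

```yaml
# Sketch: restart sshd without blocking on the SSH connection Ansible uses.
# async gives the job up to 45 seconds; poll: 0 means fire-and-forget.
- name: Restart ssh in the background
  service:
    name: sshd
    state: restarted
  async: 45
  poll: 0
```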
I want to use Hadoop from CDH docker image. CDH image is already installed on my machine and I can run it.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07a55a9d4cb9 4239cd2958c6 "/usr/bin/docker-quickstart" 18 minutes ago Up 18 minutes 0.0.0.0:32774->7180/tcp, 0.0.0.0:32773->8888/tcp container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container
172.17.0.2
Locally, I am writing an Ansible playbook and I need to set the Hadoop conf dir in CDH, which is /etc/hadoop/conf.
How can I target the running docker container in my ansible playbook?
I tried:
- name: run cloudera
  docker_container:
    name: "container"
    image: quickstart/cloudera
    command: "/usr/bin/docker-quickstart"
    state: started
    ports:
      - 8888:8888
      - 7180:7180
But this task starts another container; I would like to connect to the one that is already running.
inventory.ini
container ansible_connection=docker
Note: for the future, I suggest renaming your container to something more distinctive than container.
example playbook.yml
---
- hosts: container
  tasks:
    - name: I am a dummy task, write your own
      file:
        path: /tmp/helloContainer
        state: touch
Running the playbook
ansible-playbook -i inventory.ini playbook.yml
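With that inventory in place, your actual goal, pointing tools at /etc/hadoop/conf, can be an ordinary task executed inside the running container. A sketch; the /etc/profile.d path and the lineinfile approach are my assumptions about how you want to expose the setting, not something CDH requires:

```yaml
---
- hosts: container
  tasks:
    # Assumption: exporting HADOOP_CONF_DIR via /etc/profile.d is one way
    # to make the CDH config dir visible to login shells in the container.
    - name: Point HADOOP_CONF_DIR at the CDH config
      lineinfile:
        path: /etc/profile.d/hadoop.sh
        line: 'export HADOOP_CONF_DIR=/etc/hadoop/conf'
        create: yes
```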
I want to replace this command with an ansible playbook
docker build -q --network host -t "ubuntu" .
I have been going through docker_image module of ansible but couldn't figure it out. Any idea on how to proceed further?
Thanks in advance.
The nearest you can get to it is:
---
- name: build the image
  docker_image:
    name: ubuntu
    path: "/yourpath"
    state: present
For --network host, there is an open request on GitHub to add it.
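Until that lands, one workaround is to drop to the command module for the build, keeping the exact flags from your original command. A sketch; /yourpath stands in for wherever your build context (the Dockerfile directory) lives:

```yaml
# Sketch: fall back to the raw CLI to keep --network host.
- name: build the image with host networking
  command: docker build -q --network host -t ubuntu .
  args:
    chdir: /yourpath
```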
I am writing a cookbook for installing a LAMP stack with Docker as the driver for Test Kitchen. When I use the kitchen create command, it gets stuck in an SSH loop.
---
driver:
  name: docker
  privileged: true
  use_sudo: false

provisioner:
  name: chef_zero
  # You may wish to disable always updating cookbooks in CI or other testing environments.
  # For example:
  #   always_update_cookbooks: <%= !ENV['CI'] %>
  always_update_cookbooks: true

verifier:
  name: inspec

platforms:
  - name: centos-7.3
    driver_config:
      run_command: /usr/lib/systemd/systemd

suites:
  - name: default
    run_list:
      - recipe[lamp::default]
    verifier:
      inspec_tests:
        - test/smoke/default
    attributes:
The log output is shown below.
---> b052138ee79a
Successfully built b052138ee79a
97a2030283b85c3c915fd7bd318cdd12541cc4cfa9517a293821854b35d6cd2e
0.0.0.0:32770
Waiting for SSH service on localhost:32770, retrying in 3 seconds
Waiting for SSH service on localhost:32770, retrying in 3 seconds
The SSH loop continues until I force-close it.
Can anybody help?
Most likely sshd isn't actually starting correctly. I don't recommend the systemd hack you have in there; it's hard to get right.
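A minimal sketch of what that means in .kitchen.yml: drop the custom run_command and let the docker driver install and start sshd itself (its default behaviour). Whether this suits you depends on whether your recipe genuinely needs systemd running at converge time:

```yaml
# Sketch: let kitchen-docker use its default run_command (which starts sshd)
# instead of booting systemd as PID 1.
driver:
  name: docker
  privileged: true
  use_sudo: false

platforms:
  - name: centos-7.3
```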
I'm trying to run elasticsearch in a Docker container on my laptop (Mac OS) and to run my tests, which connect on TCP port 9300.
First I tried to run it without docker:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip
unzip elasticsearch-5.1.1.zip
cd elasticsearch-5.1.1
echo "cluster.name: test
client.transport.sniff: false
discovery.zen.minimum_master_nodes: 1
network.host:
- _local_
- _site_
network.publish_host: _local_" > config/elasticsearch.yml
./bin/elasticsearch
All works well.
Now if I try in docker:
docker run -p 9300:9300 -ti openjdk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip
unzip elasticsearch-5.1.1.zip
cd elasticsearch-5.1.1
echo "cluster.name: test
client.transport.sniff: false
discovery.zen.minimum_master_nodes: 1
network.host:
- _local_
- _site_
network.publish_host: _local_" > config/elasticsearch.yml
chmod 777 -R .
useradd elastic
su elastic
./bin/elasticsearch
It works for the first suite of tests, but not for the second one, where it throws:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available: [{nlR3i79}{nlR3i797RuKXJqS86GExXQ}{O6ltC6a5R-asNMuvCt3c4w}{127.0.0.1}{127.0.0.1:9300}]
Cheers
Take a look here:
https://hub.docker.com/_/elasticsearch/
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Here is an example, publishing the transport port 9300 that your tests connect to:
docker run -d -p 9300:9300 elasticsearch:5.1.1