I'm trying to use the elk-docker image (https://elk-docker.readthedocs.io/) with Docker Compose. The .yml file looks like this:
elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
When I run the command: sudo docker-compose up, the console shows:
* Starting Elasticsearch Server
sysctl: setting key "vm.max_map_count": Read-only file system
...fail!
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
...
waiting for Elasticsearch to be up (30/30)
Couln't start Elasticsearch. Exiting.
Elasticsearch log follows below.
cat: /var/log/elasticsearch/elasticsearch.log: No such file or directory
docker_elk_1 exited with code 1
How can I resolve the problem?
You can do that in two ways.
Temporarily set max_map_count:
sudo sysctl -w vm.max_map_count=262144
but this will only last until you restart your system.
Permanently
On your host machine, edit
vi /etc/sysctl.conf
and add the entry
vm.max_map_count=262144
then restart
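If you would rather not reboot, reloading the file applies the new entry immediately; a quick check afterwards confirms it took effect:
sudo sysctl -p
cat /proc/sys/vm/max_map_count   # should now print 262144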
You probably need to set vm.max_map_count in /etc/sysctl.conf on the host itself, so that Elasticsearch does not attempt to do that from inside the container. If you don't know the desired value, try doubling the current setting and keep going until Elasticsearch starts successfully. The documentation recommends at least 262144.
In the Docker host terminal (Docker CLI), run these commands:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
exit
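To confirm the change took effect inside the Docker Machine VM, you can run the check non-interactively; this assumes the machine is named default:
docker-machine ssh default "sysctl vm.max_map_count"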
Go inside your Docker container:
# docker exec -it <container_id> /bin/bash
You can view your current max_map_count value:
# more /proc/sys/vm/max_map_count
Temporarily set the max_map_count value (a container/host restart will not persist it):
# sysctl -w vm.max_map_count=262144
Permanently set the value:
1. # vi /etc/sysctl.conf
vm.max_map_count=262144
2. # vi /etc/rc.local
# put the parameter inside the rc.local file
echo 262144 > /proc/sys/vm/max_map_count
You have to set the vm.max_map_count variable in /etc/sysctl.conf to at least 262144.
After that you can reload the settings with sysctl --system.
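For example, a small sketch using a drop-in file instead of editing /etc/sysctl.conf directly (the file name is arbitrary):
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
cat /proc/sys/vm/max_map_count   # should print 262144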
Related
I have an ELK setup on a single instance running Ubuntu 18.04. Every service (Logstash, Kibana, Metricbeat) will auto-start upon reboot except Elasticsearch. I have to issue the sudo service elasticsearch start command after rebooting the instance.
I tried the command sudo update-rc.d elasticsearch enable but it did not help.
What needs to be done so that Elasticsearch restarts automatically?
In Ubuntu 18.04 (and anything above 16.04), systemctl is the control command of systemd.
To have a program run as a service at boot, use the command below:
systemctl enable elasticsearch.service
You can check whether the service is enabled with:
systemctl is-enabled elasticsearch.service
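For example, to enable the unit so it starts at boot, start it right away, and confirm it is running (the --now flag is available on Ubuntu 18.04's systemd):
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch.service
systemctl status elasticsearch.service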
I'm trying to run Elasticsearch 6.x on LXC using Ansible. When I try to start the Elasticsearch service, the Elasticsearch log shows:
[2020-01-04T08:45:58,744][ERROR][o.e.b.Bootstrap ] [4WUODd8] node validation exception
[1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
When I searched for this I found:
sysctl -w vm.max_map_count=262144
but when I run it I get:
sysctl: setting key "vm.max_map_count": Read-only file system
I tried changing the file manually but that didn't work. Is there any way to change this kernel parameter in LXC?
Solved by executing the command:
sysctl -w vm.max_map_count=262144
on the host machine itself (not inside the container).
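Since /proc/sys is shared between the host's kernel and the container, the container then sees the new value. A rough sketch, with the container name as a placeholder:
# on the LXC host
sudo sysctl -w vm.max_map_count=262144
# verify from inside the container
sudo lxc-attach -n <container-name> -- cat /proc/sys/vm/max_map_count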
I installed on CentOS 8:
elasticsearch version 7.3.1
kibana version 7.3.1
curl -I localhost:9200/status is ok
curl -I localhost:5601/status --> kibana is not ready yet
On the machine with CentOS 7 (.226) everything is OK.
This is the Kibana log:
Can somebody help me please?
Elasticsearch 7.x.x requires cluster bootstrapping at first launch and Kibana won't start unless Elasticsearch is ready and each node is running Elasticsearch in version 7.x.x.
I will describe the steps you would normally do on a real machine, so that anybody else can do the same. In Docker it looks similar, except that you are working inside the containers.
Before we kick off, stop kibana and elasticsearch:
service kibana stop
service elasticsearch stop
killall kibana
killall elasticsearch
Make sure it's dead:
service kibana status
service elasticsearch status
Then head into /etc/elasticsearch/ and edit the elasticsearch.yml file. Add at the end of the file:
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
Where master-* will be equal to node.name on each node. Save and exit. Start Elasticsearch and then Kibana. On machines with less memory (~4 GB, and probably in Docker too, as it normally gives containers 4 GB of memory) you may have to start Kibana first, let it "compile", stop it, start Elasticsearch, and then Kibana again.
On machines managed by Puppet, make sure that Puppet or cron is not running, just in case, so that Kibana/Elasticsearch are not started too early.
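Once Elasticsearch is back up, you can check that the cluster actually formed and a master was elected before starting Kibana, for example:
curl -s "localhost:9200/_cat/nodes?v"
curl -s "localhost:9200/_cluster/health?pretty"
The health output should report status green or yellow; Kibana keeps printing that it is not ready until that is the case.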
Here's the source: https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html
I'm trying to get Elasticsearch running with Laradock. ES looks to be supported out of the box with Laradock.
Here's my docker command (run from <project root>/laradock/):
docker-compose up -d nginx postgres redis beanstalkd elasticsearch
However if I run docker ps, the elasticsearch container isn't running.
Neither port 9200 nor 9300 is in use:
lsof -i :9200
Not sure why the elasticsearch container doesn't stay up; it seems to just close itself.
Output of docker ps -a after running docker-compose up ...:
http://pastebin.com/raw/ymfvLPLT
Condensed version:
IMAGE STATUS PORTS
laradock_nginx Up 36 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
laradock_elasticsearch Exited (137) 34 seconds ago
laradock_beanstalkd Up 37 seconds 0.0.0.0:11300->11300/tcp
laradock_php-fpm Up 38 seconds 9000/tcp
laradock_workspace Up 39 seconds 0.0.0.0:2222->22/tcp
tianon/true Exited (0) 41 seconds ago
laradock_postgres Up 41 seconds 0.0.0.0:5432->5432/tcp
laradock_redis Up 40 seconds 0.0.0.0:6379->6379/tcp
Output of docker events after running docker-compose up ...
http://pastebin.com/cE9bjs6i
Try to check logs first:
docker logs laradock_elasticsearch_1
(or whatever your elasticsearch container is named)
In my case it was
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
I found the solution here
Namely, I ran this on my Ubuntu machine:
sudo sysctl -w vm.max_map_count=262144
I don't think the problem is related to Laradock, since Elasticsearch is supposed to run on its own. I would first check the memory:
Open Docker Dashboard -> Settings -> Resources -> Advanced and increase the memory.
Check your machine's memory; Elasticsearch won't run if there is not enough memory on your machine.
or:
open your docker-compose.yml file
increase mem_limit (e.g. mem_limit: 1g), then
docker-compose up -d --build elasticsearch
If it's still not working, remove all the images, update Laradock to the latest version, and set it up fresh.
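A rough sketch of that clean rebuild, run from the laradock/ directory (note that docker system prune -a is destructive and assumes you can re-pull or rebuild everything):
docker-compose down
docker system prune -a
git pull
docker-compose up -d --build elasticsearch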
I have stopped ntpd and restarted it again, and have done an ntpdate pool.ntp.org. The error went away once and the hosts were healthy, but after some time I got the clock offset error again.
I also observed that after doing the ntpdate, the web interface of Cloudera stopped working. It says there is a potential configuration mismatch; fix and restart Hue.
I have the Cloudera QuickStart VM with CentOS set up on VMware.
Check if the /etc/ntp.conf file is the same across all nodes/masters
Restart ntp
Add the daemon with chkconfig and set it to on, as shown below
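On the CentOS-based QuickStart VM (SysV-style init) that would look roughly like:
sudo service ntpd restart
sudo chkconfig ntpd on
chkconfig --list ntpd   # ntpd should now be on for runlevels 2-5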
You can fix it by restarting the NTP service, which synchronizes the time with a central source.
You can do this by logging in as root from the command line and running service ntpd restart.
After about a minute the error in CM should go away.
Host Terminal
sudo su
service ntpd restart
The clock offset error occurs in Cloudera Manager if a host/node's NTP service could not be located or did not respond to a request for the clock offset.
Solution:
1) Identify the NTP server IP, or get the NTP server details for your Hadoop cluster.
2) On your Hadoop cluster nodes, edit /etc/ntp.conf.
3) Add entries to ntp.conf:
server [NTP Server IP]
server xxx.xx.xx.x
4) Restart the service. Execute:
service ntpd restart
5) Restart the cluster from Cloudera Manager.
Note: if the problem still persists, reboot your Hadoop nodes and check the process again.
Check $ cat /etc/ntp.conf and make sure the configuration file is the same as on the other nodes
$ systemctl restart ntpd
$ ntpdc -np
$ ntpdate -u 0.centos.pool.ntp.org
$ hwclock --systohc
$ systemctl restart cloudera-scm-agent
After that, wait a few seconds to let it auto-configure.
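You can then confirm the offset is gone with a quick check:
ntpq -p
The peer marked with an asterisk is the one the host is synchronized to, and its offset column (in milliseconds) should be close to zero.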