Why am I getting "No alive nodes found in your cluster" using Laravel and Docker? - laravel

I'm trying to set up Elasticsearch for a Laravel project using Docker and babenkoivan/scout-elasticsearch-driver.
When I start Docker, all containers are running, including elasticsearch, but when I run
php artisan elastic:create-index "App\TutorialIndexConfigurator"
I get an error:
No alive nodes found in your cluster
Also, when I try to access port 9200 via curl from the Docker workspace container, I get
curl: (7) Failed to connect to localhost port 9200: Connection refused
but when I do the same thing from the host terminal, I get information about the Docker Elasticsearch cluster. I think this may be related.
I've spent three days on this problem and have no solution; please help.
I tried the same setup with Laradock and got the same results.
Here is my docker-compose.yml content:
version: '3.1'
#volumes:
#  elasticsearch:
#    driver: local
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
networks:
  esnet:
  frontend:
  backend:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./hosts:/etc/nginx/conf.d
      - ./www:/var/www
      - ./logs:/var/log/nginx
    links:
      - php
    networks:
      esnet:
      frontend:
        aliases:
          - api.dev
      backend:
        aliases:
          - api.dev
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks:
      - esnet
      - frontend
      - backend
  # postgres:
  #   image: postgres
  #   ports:
  #     - "3306:3306"
  #   environment:
  #     MYSQL_ROOT_PASSWORD: secret
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  php:
    build: ./images/php
    links:
      - mysql
    volumes:
      - ./www:/var/www
    networks:
      - esnet
      - frontend
      - backend
  workspace:
    build: ./images/workspace
    volumes:
      - ./www:/var/www:cached
    extra_hosts:
      - "dockerhost:10.0.75.1"
    ports:
      - "2222:22"
    tty: true
    networks:
      - esnet
      - frontend
      - backend
  redis:
    image: redis:latest
    volumes:
      - ./www/redis:/data
    ports:
      - "6379:6379"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
      - frontend
      - backend
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
      - frontend
      - backend
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch3
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
      - frontend
      - backend
  kibana:
    image: 'docker.elastic.co/kibana/kibana:6.4.2'
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
      - frontend
      - backend
  headPlugin:
    image: 'mobz/elasticsearch-head:5'
    container_name: head
    ports:
      - '9100:9100'
    networks:
      - esnet
      - frontend
      - backend
Here is the scout_elastic config:
<?php

return [
    'client' => [
        'hosts' => [
            env('SCOUT_ELASTIC_HOST', 'localhost:9200'),
        ],
    ],
    'update_mapping' => env('SCOUT_ELASTIC_UPDATE_MAPPING', true),
    'indexer' => env('SCOUT_ELASTIC_INDEXER', 'single'),
    'document_refresh' => env('SCOUT_ELASTIC_DOCUMENT_REFRESH'),
];
And the .env Scout config:
SCOUT_DRIVER=elastic

I believe your problem is a wrong SCOUT_ELASTIC_HOST.
It should be SCOUT_ELASTIC_HOST=http://elasticsearch:9200, like in the kibana service, not http://localhost:9200.
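Putting it together, the .env entries would look like this (elasticsearch is the service name from the compose file above; if you cache your Laravel config, run php artisan config:clear afterwards so the change takes effect):

```env
SCOUT_DRIVER=elastic
SCOUT_ELASTIC_HOST=http://elasticsearch:9200
```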

Related

Laravel in docker - browserSync setup problems

I've been struggling for a few hours now and cannot properly set up the webpack-mix browserSync functionality for my Docker development environment.
I'd be glad to get some help!
Thanks in advance.
My docker-compose.yml:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: vokl
    container_name: vokl-app
    ports:
      - 80:80
      - 3000:3000
      - 3001:3001
    volumes:
      - ./:/var/www/html
    networks:
      - vokl
    depends_on:
      - mysql
  mysql:
    image: 'mariadb:latest'
    container_name: vokl-db
    restart: unless-stopped
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - ./.docker/dbdata:/var/lib/mysql
    networks:
      - vokl
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: vokl-phpmyadmin
    environment:
      - PMA_HOST=mysql
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    depends_on:
      - mysql
    ports:
      - 8765:80
    networks:
      - vokl
networks:
  vokl:
    driver: bridge
webpack.mix.js:
mix.browserSync({
    host: 'vokl-app.test',
    proxy: 'app',
    notify: false,
    open: 'external'
});

How can I run Elasticsearch on Docker Via docker-compose

I could not find any docker-compose file to run Elasticsearch on Docker. I found a few, but they didn't work.
You can use this:
version: '3.1'
services:
  elasticsearch:
    container_name: elasticsearch_compose
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    ports:
      - 9200:9200
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    networks:
      - elastic
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      # use the service name, not localhost, so Kibana can reach ES from its own container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  elasticsearch-data:

How do I have Elasticsearch (with multiple nodes) and Kibana in one docker-compose file?

I have successfully installed a 3-node Elasticsearch cluster with a docker-compose file from the Elasticsearch web site with no problems (elastic search link). I am trying to add Kibana to the .yml file so I can run it all with docker-compose up, and was looking at this Elasticsearch Kibana install site to figure out what to add (Kibana install site). When I start the file I made, I get this error:
kibana_1 | {"type":"log","@timestamp":"2021-02-13T11:20:31Z","tags":["error","elasticsearch","data"],"pid":8,"message":"[ConnectionError]: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
When I open http://localhost:5601, it says "Kibana server not ready yet". Can somebody please help me get this working? I marked the section I added in the .yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  ##############################
  # My attempt at adding Kibana to the docker file. This file works
  # fine if commenting out this whole section.
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
  ##############################
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
Make sure Docker Engine is allotted at least 4GiB of memory.
docker-compose.yml
version: '3.7'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  logstash:
    image: logstash:7.9.2
    ports:
      - '5000:5000'
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
networks:
  elastic:
    driver: bridge
kibana.yml
server.name: kibana
server.host: 0.0.0.0
server.port: 5601
elasticsearch.hosts: [ "http://<ELK server ip>:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: password
Create a directory named logstash_pipeline and within that directory create a file beats.conf.
beats.conf
input {
  beats {
    port => 5044
  }
}

## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "filebeat-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "password"
    ecs_compatibility => disabled
  }
}
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
Set vm.max_map_count to at least 262144. The vm.max_map_count kernel setting must be set to at least 262144 for production use.
How you set vm.max_map_count depends on your platform.
The vm.max_map_count setting should be set permanently in /etc/sysctl.conf:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, run:
sysctl -w vm.max_map_count=262144
Run docker compose to bring up the three-node Elasticsearch cluster and Kibana
docker-compose up
As far as I can see, there are two problems in your docker-compose file:
1. Kibana is not in the elastic network.
2. In the Kibana configuration, you set ELASTICSEARCH_HOSTS=http://elasticsearch:9200. However, none of your Elasticsearch containers is named elasticsearch.
The correct configuration should look something like this:
kibana:
  image: docker.elastic.co/kibana/kibana:7.10.2
  container_name: kibana
  ports:
    - 5601:5601
  environment:
    ELASTICSEARCH_URL: http://es01:9200
    ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
  networks:
    - elastic

Unable to access Elasticsearch instance in Kibana while setting up a multi-node cluster

I'm unable to access the Elasticsearch instance in Kibana while setting up a multi-node cluster with docker-compose.
The Elasticsearch instance nodes are running fine.
Below I have shared the docker-compose.yaml and a screenshot of the issue; can somebody help me with this?
Please let me know if any further information is needed.
docker-compose.yml
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch2,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch2
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
networks:
  esnet:
    driver: bridge
Error Screenshot
Can you confirm the Elasticsearch nodes are available?
curl -XGET http://elasticsearch:9200/_cat/nodes?v
Also check which node is running as master:
curl -XGET http://elasticsearch:9200/_cat/master?v
It may also be worth adding Elasticsearch as a dependency for Kibana in your docker-compose, e.g.:
kibana:
  image: docker.elastic.co/kibana/kibana:7.8.0
  container_name: kibana
  depends_on:
    - elasticsearch
  environment:
    SERVER_NAME: kibana.local
    ELASTICSEARCH_URL: http://elasticsearch:9200
  ports:
    - '5601:5601'
  networks:
    - esnet
The initial heap size [536870912] was not equal to the maximum heap size; after making them equal, it worked.
The initial heap size and the maximum heap size must be equal.
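In compose terms, that means giving -Xms and -Xmx the same value in ES_JAVA_OPTS. The file above starts the nodes with -Xms512m -Xmx1024m, which trips exactly this check; a fixed environment entry would look like:

```yaml
environment:
  # initial (-Xms) and maximum (-Xmx) heap sizes must match
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
```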

Kibana fails to connect to Elasticsearch on docker

I am following https://www.elastic.co/guide/en/elasticsearch/reference/6.5/docker.html
and
https://www.elastic.co/guide/en/kibana/6.5/docker.html
But it does not seem to work well with Kibana; ES works fine.
I tried starting Kibana alone, but finally I added it to one docker-compose file.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
kibana.yml is:
server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.url: http://elasticsearch:9200
I get the following error:
kibana_1 | {"type":"log","@timestamp":"2019-06-11T08:55:30Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The kibana container isn't on the same network as the two elasticsearch containers: it doesn't have a networks: block and so is on an automatically-created default network, but the two elasticsearch containers are on an explicitly-declared esnet network. Since they're not on the same network, inter-container DNS doesn't work.
I'd suggest just deleting all of the networks: blocks and using the default network Docker Compose creates for you. If you want an explicit named network, copy the same networks: [esnet] lines into the kibana: service block.
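Following the second option, the kibana service from the compose file above would gain its own networks: block (a sketch of just that service, not the full file):

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:6.5.4
  volumes:
    - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  ports:
    - 5601:5601
  networks:
    - esnet   # same network as the elasticsearch containers, so DNS works
```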