Slave cannot connect to master in Elasticsearch?

I have two VPSes, on each of which I have installed Elasticsearch. I am trying to combine them into one cluster.
Master node config:
version: '3.9'
services:
  master:
    image: elasticsearch:8.1.1
    container_name: master
    environment:
      - node.name=master
      - cluster.name=elastic-cluster
      - network.host=0.0.0.0
      - discovery.seed_hosts=10.126.107.238:1029, 10.126.107.236:1029
      - cluster.initial_master_nodes=10.126.107.238
      - "node.roles=master"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "bootstrap.memory_lock=true"
      - "xpack.security.enabled=false"
      - "xpack.security.transport.ssl.enabled=false"
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - master-data:/usr/share/elasticsearch/data
    ports:
      - 1028:9200
      - 1029:9300
volumes:
  master-data:
    driver: local
And the node01 config (data node):
version: '3.9'
services:
  node01:
    image: elasticsearch:8.1.1
    container_name: node01
    environment:
      - node.name=node01
      - cluster.name=elastic-cluster
      - network.host=0.0.0.0
      - discovery.seed_hosts=10.126.107.238:1029, 10.126.107.236:1029
      - cluster.initial_master_nodes=10.126.107.238
      - "node.roles=data"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "bootstrap.memory_lock=true"
      - "xpack.security.enabled=false"
      - "xpack.security.transport.ssl.enabled=false"
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - master-data:/usr/share/elasticsearch/data
    ports:
      - 1028:9200
      - 1029:9300
volumes:
  master-data:
    driver: local
When node01 tries to join the cluster, it raises this error:
node01 | {"@timestamp":"2022-08-07T08:15:38.096Z", "log.level": "WARN", "message":"[connectToRemoteMasterNode[10.126.107.238:1029]] completed handshake with [{master}{5_tf1e03S4CYx_v4j4tTyw}{4LhWtmyPTaGKY5x_4Sf10g}{192.168.16.2}{192.168.16.2:9300}{m}{xpack.installed=true}] but followup connection failed", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[node01][generic][T#2]","log.logger":"org.elasticsearch.discovery.HandshakingTransportAddressConnector","elasticsearch.node.name":"node01","elasticsearch.cluster.name":"elastic-cluster","error.type":"org.elasticsearch.transport.ConnectTransportException","error.message":"[master][192.168.16.2:9300] connect_exception","error.stack_trace":"org.elasticsearch.transport.ConnectTransportException: [master][192.168.16.2:9300] connect_exception\n\tat org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1107)\n\tat
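Note what the log says: the handshake with 10.126.107.238:1029 succeeds, but the follow-up connection targets 192.168.16.2:9300, which is the master's internal Docker-bridge address and is unreachable from the other VPS. A hedged sketch of the kind of change that usually addresses this (the IP and port values below are taken from this setup; adjust to yours): have each node advertise the host's public address and mapped transport port via `network.publish_host` and `transport.publish_port`.

```yaml
# Sketch, not a verified fix: make the master advertise an address that is
# reachable from the other VPS instead of its internal Docker-bridge IP.
services:
  master:
    environment:
      # ...existing settings...
      - network.publish_host=10.126.107.238   # public VPS address
      - transport.publish_port=1029           # host port mapped to container port 9300
```

The data node would need the analogous settings with its own VPS address.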

Related

Elastic Search auth is not working when it is started with Docker compose

I was working with Elasticsearch without any auth. However, I wanted to add basic authentication, so I changed my docker-compose to the following:
version: '3.0'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=Hey_Anka
      - xpack.security.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=Hey_Anka
      - xpack.security.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    ports:
      - 9201:9201
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=Hey_Anka
      - xpack.security.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    ports:
      - 9202:9202
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
I updated the file and added the following to environment:
  - ELASTIC_USERNAME=elastic
  - ELASTIC_PASSWORD=Hey_Anka
  - xpack.security.enabled=true
Then I restarted the containers with docker restart es01 es02 es03.
The containers restart, but when I go to localhost:9200 it doesn't ask for a username and password.
Am I doing anything wrong?
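Two things worth noting here (assumptions about the cause, not a confirmed diagnosis). First, docker restart does not pick up changes to docker-compose.yml; a container's environment is fixed when it is created, so the services must be recreated (e.g. with docker-compose up -d) for the new variables to take effect. Second, on 7.x, enabling xpack.security.enabled=true on a multi-node cluster also requires transport TLS, otherwise the nodes refuse to form a cluster. A minimal sketch of the extra per-node settings:

```yaml
# Sketch: security on a 7.x multi-node cluster also needs transport TLS.
# The keystore path is illustrative; the file is generated with
# elasticsearch-certutil and must be mounted into each container.
environment:
  - xpack.security.enabled=true
  - xpack.security.transport.ssl.enabled=true
  - xpack.security.transport.ssl.verification_mode=certificate
  - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
  - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
```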

Does Elasticsearch go to sleep when there are no requests for several hours or days?

I have a 3-node Elasticsearch (7.14.2) cluster in Docker that is used by a web app via the Python Elasticsearch library (elasticsearch==7.15.0).
I connect with:
connection = Elasticsearch(
    [cnt_params],
    max_retries=100,
    retry_on_timeout=True,
    timeout=700,
    request_timeout=800,
)
if connection.ping():
    return connection
else:
    return None
Everything runs normally, but sometimes after the app has been idle for a few hours or days, ping() returns False; if I try again, it returns True. It is as if Elasticsearch goes to sleep.
Here is my docker-compose.yml
version: '3'
services:
  fsmysql_20211019:
    image: mysql:8.0.26
    command: --default-authentication-plugin=mysql_native_password
    container_name: fs_mysql_20211019
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_ROOT_PASSWORD: some_pass
    volumes:
      - /opt/my_app/mysql/data:/var/lib/mysql
      - /opt/my_app/mysql/conf:/etc/mysql
    networks:
      fsnet:
        ipv4_address: 192.169.1.5
  fses20211019n01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: fses20211019n01
    environment:
      - network.host=192.169.1.1
      - node.name=fses20211019n01
      - cluster.name=fs-es-cluster
      - discovery.seed_hosts=fses20211019n02,fses20211019n03
      - cluster.initial_master_nodes=fses20211019n01,fses20211019n02,fses20211019n03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/my_app/elasticsearch/esdata:/usr/share/elasticsearch/data
    networks:
      fsnet:
        ipv4_address: 192.169.1.1
  fses20211019n02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: fses20211019n02
    environment:
      - network.host=192.169.1.2
      - node.name=fses20211019n02
      - cluster.name=fs-es-cluster
      - discovery.seed_hosts=fses20211019n01,fses20211019n03
      - cluster.initial_master_nodes=fses20211019n01,fses20211019n02,fses20211019n03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/my_app/elasticsearch/esdata2:/usr/share/elasticsearch/data
    networks:
      fsnet:
        ipv4_address: 192.169.1.2
  fses20211019n03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: fses20211019n03
    environment:
      - network.host=192.169.1.3
      - node.name=fses20211019n03
      - cluster.name=fs-es-cluster
      - discovery.seed_hosts=fses20211019n01,fses20211019n02
      - cluster.initial_master_nodes=fses20211019n01,fses20211019n02,fses20211019n03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/my_app/elasticsearch/esdata3:/usr/share/elasticsearch/data
    networks:
      fsnet:
        ipv4_address: 192.169.1.3
  fsfluentd_20211019:
    image: qlands/fluentd:v1.14-1
    container_name: fs_fluentd_20211019
    environment:
      WAIT_HOSTS: 192.169.1.1:9200, 192.169.1.4:5900
      WAIT_HOSTS_TIMEOUT: 3200
    volumes:
      - /opt/my_app/log:/opt/formshare_log
      - /opt/my_app/fluentd:/fluentd/etc/
    networks:
      fsnet:
        ipv4_address: 192.169.1.6
  formshare_20211019:
    image: qlands/formshare2:20211019
    container_name: formshare_20211019
    environment:
      MYSQL_HOST_NAME: 192.169.1.5
      MYSQL_USER_NAME: root
      MYSQL_USER_PASSWORD: some_pass
      FORMSHARE_ADMIN_USER: admin
      FORMSHARE_ADMIN_EMAIL: info@me.com
      FORMSHARE_ADMIN_PASSWORD: some_pass
      ELASTIC_SEARCH_HOST: 192.169.1.1
      ELASTIC_SEARCH_PORT: 9200
      FORMSHARE_HOST: 192.169.1.4
      FORMSHARE_PORT: 5900
      FORWARDED_ALLOW_IP: localhost
      CONFIGURE_FLUENT: "true"
      WAIT_HOSTS: 192.169.1.1:9200, 192.169.1.5:3306
      WAIT_HOSTS_TIMEOUT: 1200
    volumes:
      - /opt/my_app/repository:/opt/formshare_repository
      - /opt/my_app/log:/opt/formshare_log
      - /opt/my_app/celery:/opt/formshare_celery
      - /opt/my_app/config:/opt/formshare_config
      - /opt/my_app/fluentd:/opt/formshare_fluentd
      - /opt/my_app/plugins:/opt/formshare_plugins
      - /opt/my_app/mosquitto:/etc/mosquitto/conf.d/
      - /opt/tomcat/webapps:/opt/formshare_odata_webapps
    ports:
      - 5900:5900
      - 9001:9001
    networks:
      fsnet:
        ipv4_address: 192.169.1.4
networks:
  fsnet:
    ipam:
      driver: default
      config:
        - subnet: 192.169.0.0/16
This was reported before here: https://discuss.elastic.co/t/does-elasticsearch-go-to-sleep-when-there-are-no-requests-for-a-couple-of-hours/231632 but there was no indication of what might cause it.
Any idea is appreciated.
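One way to narrow this down (a diagnostic sketch, not a fix): add a Docker healthcheck to the Elasticsearch services so the daemon probes the HTTP endpoint on a schedule. If a node ever reports unhealthy during an idle period, the problem is on the server side; if the nodes stay healthy while ping() fails, the likelier culprit is an idle client connection being dropped somewhere between the app and the cluster.

```yaml
# Sketch: periodic probe of a node's HTTP endpoint from inside the container.
healthcheck:
  test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 5
```

Health status is then visible in docker ps and docker inspect.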

ElasticSearch 7.10.1 with Docker reports java.net.UnknownHostException under Mac but not on Linux

I have the following docker-compose file with several services, including Elasticsearch with two nodes:
version: '3'
services:
  CO_MYSQL:
    image: mysql:8.0.23
    container_name: CO_MYSQL
    environment:
      MYSQL_ROOT_PASSWORD: 72EkBqCs!
    volumes:
      - /opt/cropontology/mysql/data:/var/lib/mysql
    ports:
      - 3306:3306
    networks:
      - CO_Network
  CO_MONGO:
    image: mongo:3.6.8
    container_name: CO_MONGO
    volumes:
      - /opt/cropontology/mongo/data:/data/db
    ports:
      - 27017:27017
    networks:
      - CO_Network
  CO_NEO4J:
    image: neo4j:4.1.2
    container_name: CO_NEO4J
    volumes:
      - /opt/cropontology/neo4j/data:/data
      - /opt/cropontology/neo4j/plugins:/var/lib/neo4j/plugins
    ports:
      - 7474:7474
      - 7687:7687
    networks:
      - CO_Network
  CO_ES_01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: CO_ES_01
    environment:
      - node.name=CO_ES_01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=CO_ES_02
      - cluster.initial_master_nodes=CO_ES_01,CO_ES_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/cropontology/es/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - CO_Network
  CO_ES_02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: CO_ES_02
    environment:
      - node.name=CO_ES_02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=CO_ES_01
      - cluster.initial_master_nodes=CO_ES_01,CO_ES_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/cropontology/es/data2:/usr/share/elasticsearch/data
    networks:
      - CO_Network
networks:
  CO_Network:
    driver: bridge
Everything works well under Linux, but if I try to run the same file under macOS I get:
CO_ES_01 | "stacktrace": ["java.net.UnknownHostException: CO_ES_02",
Do I need specific configuration under Mac for it to work?
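java.net.UnknownHostException: CO_ES_02 usually means that container never registered in Docker's embedded DNS, most often because it exited during startup; docker logs CO_ES_02 should confirm. One assumption worth checking on macOS (this is a guess, not a confirmed diagnosis): Docker Desktop only shares a limited set of host paths by default, so the /opt/cropontology/... bind mounts may fail there while working fine on Linux. A sketch that sidesteps host-path sharing by using a named volume for the second node:

```yaml
# Sketch, assumption: the /opt/... bind mount fails on macOS, so CO_ES_02
# exits before it can register in Docker's DNS. A named volume avoids
# host-path file sharing entirely.
services:
  CO_ES_02:
    volumes:
      - es_data2:/usr/share/elasticsearch/data
volumes:
  es_data2:
    driver: local
```

Alternatively, /opt/cropontology can be added to Docker Desktop's shared file paths in its preferences.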

elasticsearch on docker - snapshot and restore - access_denied_exception

I created an Elasticsearch cluster following the article Running the Elastic Stack on Docker.
After Elasticsearch runs, I need to create snapshots and restore them to back up my data.
I modified my elastic-docker-tls.yml file:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
      interval: 30s
      timeout: 10s
      retries: 5
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es03/es03.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
      - databak:/usr/share/elasticsearch/backup
      - certs:$CERTS_DIR
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:${VERSION}
    container_name: kib01
    depends_on: {"es01": {"condition": "service_healthy"}}
    ports:
      - 5601:5601
    environment:
      SERVERNAME: localhost
      ELASTICSEARCH_URL: https://es01:9200
      ELASTICSEARCH_HOSTS: https://es01:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ELASTIC_PASSWORD
      ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: $CERTS_DIR/ca/ca.crt
      SERVER_SSL_ENABLED: "true"
      SERVER_SSL_KEY: $CERTS_DIR/kib01/kib01.key
      SERVER_SSL_CERTIFICATE: $CERTS_DIR/kib01/kib01.crt
    volumes:
      - certs:$CERTS_DIR
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
  databak:
    driver: local
  certs:
    driver: local
networks:
  elastic:
    driver: bridge
After that, I registered a snapshot repository:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/backup/my_backup"
  }
}
But I get the following error message:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[my_backup] cannot create blob store"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[my_backup] cannot create blob store",
    "caused_by" : {
      "type" : "access_denied_exception",
      "reason" : "/usr/share/elasticsearch/backup/my_backup"
    }
  },
  "status" : 500
}
I have searched for solutions on Google for two days with no luck. Can someone help me? Thank you very much!
You can chown the backup directory inside the Docker volume to the elasticsearch user.
Run
ls -l
to inspect the ownership and permissions of the directories, then run
chown elasticsearch /backup
For Elasticsearch deployed on Kubernetes, the way to do this is by adding init containers in the Helm values.yaml:
extraInitContainers: |
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: create-backup-directory
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/backup']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
extraEnvs:
  - name: path.repo
    value: /usr/share/elasticsearch/data/backup
This will create a folder called backup in the /usr/share/elasticsearch/data directory.
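The same idea can be sketched for the docker-compose setup (an assumption carried over from the Kubernetes answer, not a verified fix: UID 1000 is the elasticsearch user in the official images): run a short-lived service that fixes the ownership of the shared backup volume before the nodes start.

```yaml
# Sketch: one-shot service that chowns the shared backup volume to UID 1000
# (the elasticsearch user in the official images) before the nodes use it.
services:
  fix-backup-perms:
    image: busybox:1.28
    command: ["chown", "-R", "1000:1000", "/usr/share/elasticsearch/backup"]
    volumes:
      - databak:/usr/share/elasticsearch/backup
  es01:
    depends_on:
      - fix-backup-perms
```

Note that plain depends_on only orders startup, it does not wait for the chown to finish; in practice the one-shot container completes almost immediately, but a restart of es01 is a safe follow-up if the first attempt still fails.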

Elasticsearch restore data in case of machine (node) failure

I have a single machine on which I have set up a 3-node Elasticsearch cluster using docker-compose.
Here is my docker-compose file
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms24g -Xmx24g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 5051:9200
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms24g -Xmx24g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es03
    environment:
      - node.name=es03
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms24g -Xmx24g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata03:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.2
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_HOSTS: http://es01:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/ebs/esdata01'
  esdata02:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/ebs/esdata02'
  esdata03:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/ebs/esdata03'
networks:
  esnet:
The mount location is an external location:
/ebs/esdata01
The problem is that the machine has now crashed.
What I want to ask is: if I get a new machine and set up the same docker-compose there, will I be able to see the existing data in the corresponding indexes?
If not, what is the alternative way to do so?
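Since esdata01..03 are bind mounts, the indices live entirely under /ebs/esdata01..03 on the host: if those directories can be recovered (for example by reattaching the EBS volume to a new machine at the same paths) and the same compose file is started, the cluster should come back with its data, because Elasticsearch keeps everything it needs under the data path. The more robust route going forward is a snapshot repository, so a crash of the data disk itself is also survivable. A minimal sketch (the snapshot path and host directory below are illustrative, not from the original file):

```yaml
# Sketch: give every node a shared path.repo so an fs snapshot repository
# can be registered; the same block is needed on es02 and es03.
services:
  es01:
    environment:
      # ...existing settings...
      - path.repo=/usr/share/elasticsearch/snapshots
    volumes:
      - esdata01:/usr/share/elasticsearch/data
      - /ebs/essnapshots:/usr/share/elasticsearch/snapshots   # illustrative host path
```

The repository is then registered with PUT /_snapshot/my_backup using type "fs" and location /usr/share/elasticsearch/snapshots, and snapshots can be restored on any machine that mounts a copy of that directory.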
