I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
  elasticsearch:
    image: "elasticsearch:7.4.1" # (base version)
    hostname: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - transport.host=localhost
    volumes:
      - "./elasticsearch/volumes:/usr/share/elasticsearch/data"
    networks:
      - logger_net
volumes:
  logging:
networks:
  logger_net:
    external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
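(As an aside: the yellow state from unassigned replicas can be cleared on a single node by dropping replicas, since there is no second node to host them. A minimal sketch, assuming ES answers on localhost:9200:

curl -X PUT "localhost:9200/_settings" \
  -H "Content-Type: application/json" \
  -d '{"index.number_of_replicas": 0}'

After this, curl "localhost:9200/_cat/health" should report green.)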
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch Docker container as the transport.host setting, or when I omit transport.host=localhost entirely.
I suspect that using transport.host=localhost is wrong. Is there a proper configuration for Elasticsearch in Docker swarm?
I'm deploying on an AWS Ubuntu instance, on a VM, using this yml:
version: "3.7"
services:
  cassandra:
    container_name: cassandra
    image: cassandra:3.11
    restart: unless-stopped
    hostname: cassandra
    environment:
      - MAX_HEAP_SIZE=1G
      - HEAP_NEWSIZE=1G
      - CASSANDRA_CLUSTER_NAME=thp
    volumes:
      - ./cassandra/data:/var/lib/cassandra/data
    networks:
      - Hive
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:7.11.1
    environment:
      - http.host=0.0.0.0
      - discovery.type=single-node
      - cluster.name=hive
      - script.allowed_types= inline
      - thread_pool.search.queue_size=100000
      - thread_pool.write.queue_size=10000
      - gateway.recover_after_nodes=1
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
    networks:
      - Hive
  cortex:
    container_name: cortex
    image: thehiveproject/cortex:latest
    depends_on:
      - elasticsearch
    environment:
      - 'JOB_DIRECTORY=/opt/cortex/jobs'
    ports:
      - '0.0.0.0:9001:9001'
    volumes:
      - ./cortex/application.conf:/etc/cortex/application.conf
      - '/var/run/docker.sock:/var/run/docker.sock'
      - ./cortex/log/:/var/log/cortex
      - /tmp:/tmp
      #- ./cortex/Cortex-Analyzers:/opt/cortex/analyzers
      #- .cortex/Cortex-Analyzers/analyzers.json:/opt/cortex/analyzers/analyzers.json
    privileged: true
    networks:
      - Hive
  thehive:
    container_name: thehive
    image: 'thehiveproject/thehive4:latest'
    restart: unless-stopped
    depends_on:
      - cassandra
    ports:
      - '0.0.0.0:9000:9000'
    volumes:
      - ./thehive/application.conf:/etc/thehive/application.conf
      - ./thehive/data:/opt/thp/thehive/data
      - ./thehive/index:/opt/thp/thehive/index
    command:
      --cortex-port 9001
      --cortex-keys ${CORTEX_KEY}
    networks:
      - Hive
networks:
  Hive:
    driver: bridge
and two additional application.conf files for TheHive and Cortex. The problem I have is that when I list the containers using docker ps or docker compose ps, I can see that thehive and cortex are published on 0.0.0.0:9000 and 0.0.0.0:9001 respectively, but elasticsearch only shows 9200/tcp, 9300/tcp. How can I get access to the web interface of ES locally? I can't figure this out. Using netstat I can't find port 9200 or 9300 listening anywhere.
Elasticsearch does not natively come with a web interface. It exposes a REST API that third-party interfaces can interact with.
One of the most popular tools for visualizing or viewing data in the Elastic stack is Kibana, which interfaces with Elasticsearch. See this link for more details: https://www.elastic.co/kibana/
ES API Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
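Worth noting: in the compose file above, the elasticsearch service has no ports: mapping, so docker ps shows only the unpublished 9200/tcp, 9300/tcp and nothing listens on the host, which matches the netstat observation. A minimal sketch of publishing the REST API port (same service name and image as above):

elasticsearch:
  container_name: elasticsearch
  image: elasticsearch:7.11.1
  ports:
    - "9200:9200"   # publish the REST API on the host

After redeploying, the API can be queried directly from the host:

curl http://localhost:9200/                     # basic cluster and version info
curl "http://localhost:9200/_cat/indices?v"     # list indices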
The main purpose: I want to use Logstash to collect log files that reside on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
and then I want to install Filebeat on the target host in order to send logs to the ELK host:
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E ELASTIC_PASSWORD="changeme" \
  -E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error still persists.
You should disable basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by adding the environment variable below to the ES Docker image and starting the ES container:
xpack.security.enabled: false
After this, there is no need to pass ES credentials, and you can also remove the line below from your ES environment variables:
ELASTIC_PASSWORD: changeme
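For reference, a minimal sketch of the ES service environment after the change (image and heap settings as in the compose file above):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
  environment:
    ES_JAVA_OPTS: "-Xmx512m -Xms256m"
    discovery.type: single-node
    xpack.security.enabled: "false"   # disables basic auth entirely

The Filebeat setup command can then drop the credential flags (brackets quoted so the shell passes them through unchanged):

docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E 'output.elasticsearch.hosts=["x.x.x.x:9200"]'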
I am experimenting with some JSON that has been formatted in accordance with Elasticsearch, so I have gone directly from Filebeat to Elasticsearch, as opposed to going through Logstash. This is using docker-compose:
version: '2.2'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
However, when I open ElasticHQ, the index name is labeled as filebeat-7.5.2-2020.02.10-000001 with a date stamp. I have specified the index name as sample in my filebeat.yml. Is there something I am missing, or is this behavior normal?
Here is my filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.json
  json.keys_under_root: true
  json.add_error_key: true

#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"

setup.template.name: "sample"
setup.template.pattern: "sample-*"
It would be more practical to have a predefined index name, so that if I use Postman as opposed to ElasticHQ, I can start querying my data without having to look up the index name first.
I think Filebeat's ILM (index lifecycle management) might be taking over instead of the configured index name:
Starting with version 7.0, Filebeat uses index lifecycle management by
default when it connects to a cluster that supports lifecycle
management. Filebeat loads the default policy automatically and
applies it to any indices created by Filebeat.
And when ILM is enabled, the Filebeat Elasticsearch output index settings are ignored:
The index setting is ignored when index lifecycle management is
enabled. If you’re sending events to a cluster that supports index
lifecycle management, see Configure index lifecycle management to
learn how to change the index name.
You might need to disable ILM or, better yet, configure your desired index name using the ILM rollover_alias, as sketched below.
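A minimal filebeat.yml sketch for both options (setting names from the Filebeat reference; the alias mirrors the sample- index used above). To disable ILM and keep the custom index:

setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"

Or to keep ILM but control the index name via the rollover alias:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "sample"
setup.ilm.pattern: "{now/d}-000001"   # indices become sample-<date>-000001, rolled over by the policy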
I have the following Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.4.0
RUN elasticsearch
EXPOSE 80
I think the 3rd line is never reached.
When I try to access the Docker container from my local machine through:
172.17.0.2:9300
I get nothing. What am I missing? I want to access Elasticsearch from the local host machine.
I recommend using docker-compose (which makes a lot of things much easier) with the following configuration.
Configuration (for development)
The configuration starts 3 services: Elasticsearch itself, and extra utilities for development such as Kibana and the head plugin (these can be omitted if you don't need them).
In the same directory you will need three files:
docker-compose.yml
elasticsearch.yml
kibana.yml
With the following contents:
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    container_name: elasticsearch_540
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    container_name: kibana_540
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
  headPlugin:
    image: mobz/elasticsearch-head:5
    container_name: head_540
    ports:
      - 9100:9100
volumes:
  esdata:
    driver: local
elasticsearch.yml
cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"
kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
Running
With the above three files in the same directory, and that directory set as the current working directory, run (this could require sudo, depending on how your docker-compose is set up):
docker-compose up
It will start up and you will see logs from three different services: elasticsearch_540, kibana_540 and head_540.
After the initial start-up you will have your Elastic cluster available over HTTP on port 9200 and for TCP transport on port 9300. Validate that the cluster started up with the following curl:
curl -u elastic:changeme http://localhost:9200/_cat/health
Then you can view and play with your cluster using either kibana (with credentials elastic / changeme):
http://localhost:5601/
or head plugin:
http://localhost:9100/?base_uri=http://localhost:9200&auth_user=elastic&auth_password=changeme
Your container is auto-exiting because of insufficient virtual memory. By default, an Elasticsearch container needs vm.max_map_count to be at least 262144, but if you run sysctl vm.max_map_count you will see it is around 65530. Increase the virtual memory count with sysctl -w vm.max_map_count=262144 and run the container again (docker run <IMAGE ID>); then your container should stay running and you should be able to access Elasticsearch at port 9200 or 9300.
Edit: check this link: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html#vm-max-map-count
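For reference, the full sequence on the Docker host (the sysctl.conf line is a standard way to persist the setting across reboots; adjust for your distro if needed):

sysctl vm.max_map_count                      # check the current value (typically 65530)
sudo sysctl -w vm.max_map_count=262144       # raise it for the running kernel
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf   # persist across reboots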
The best option would be to follow the official Elasticsearch documentation, which has a nice section on running a single-node Elasticsearch cluster, and also on running a multi-node Elasticsearch cluster using docker-compose.
Please refer to the version-specific documentation, which can be accessed via the version drop-down in the official Elasticsearch documentation.
The public Docker image for Elasticsearch is on Docker Hub:
https://hub.docker.com/_/elasticsearch/
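As a starting point, a minimal single-node compose sketch in the spirit of the official docs (the version tag is an example; pick yours from the drop-down mentioned above):

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200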
If I defined my own docker-compose file with Elasticsearch, how would I scale up Elasticsearch so that the ports don't collide?
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
How could I scale this up, similar to the command below?
docker-compose scale elasticsearch=3
I am running Docker beta for Mac, version 1.12.
Thanks,
Shane.
If you just want the ports not to collide, use "automatic port mapping" in docker-compose.yml as below:
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - 9200
      - 9300
Docker will then automatically map ports 9200 and 9300 to random host ports in the 32xxx range.
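To find out which host port each replica received, docker-compose can report the mapping per service instance. A quick sketch (--index selects the replica after scaling):

docker-compose scale elasticsearch=3
docker-compose port --index=2 elasticsearch 9200     # prints e.g. 0.0.0.0:32771
docker ps --format '{{.Names}}\t{{.Ports}}'          # or list all mappings at once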