The public Docker image for Elasticsearch is on Docker Hub:
https://hub.docker.com/_/elasticsearch/
If I define my own docker-compose file with Elasticsearch, how can I scale up Elasticsearch so that the ports don't collide?
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
How could I scale this up, similar to the command below?
docker-compose scale elasticsearch=3
I am running Docker Beta for Mac, version 1.12.
Thanks,
Shane.
If you just want the ports not to collide, you can use automatic port mapping in docker-compose.yml, as below:
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - 9200
      - 9300
Docker will then automatically map container ports 9200 and 9300 to random high host ports (in the 32xxx range).
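For example (a sketch; this assumes the Compose v1-era CLI that shipped around Docker 1.12), you can scale the service and then look up which host port each replica was given:

# Start three Elasticsearch containers; each gets its own random host port
docker-compose scale elasticsearch=3

# Print the host port mapped to container port 9200 for each replica
docker-compose port --index=1 elasticsearch 9200
docker-compose port --index=2 elasticsearch 9200
docker-compose port --index=3 elasticsearch 9200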
Related
I am trying to send my Node app logs through Fluentd to Elasticsearch and on to Kibana, but I am having a problem connecting Fluentd to Elasticsearch with Docker. I want to dockerize this EFK stack.
I have attached the folder structure and shared the relevant files below.
Error
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 172.20.0.2:9200 (Errno::ECONNREFUSED)
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    user elastic
    password pass
  </store>
</match>
Dockerfile:
FROM fluent/fluentd:v1.15-1
USER root
RUN gem install elasticsearch -v 7.6.0
# RUN gem install fluent-plugin-elasticsearch -v 7.6.0
RUN gem install fluent-plugin-elasticsearch -v 4.1.1
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
USER fluent
docker-compose.yml:
version: '3'
services:
  fluentd:
    build: ./fluentd
    container_name: loggingFluent
    volumes:
      - ./fluentd/conf:/fluentd/etc
      # - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch-Logging
    ports:
      - 9200:9200
    expose:
      - 9200
    environment:
      discovery.type: 'single-node'
      ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
      xpack.security.enabled: 'true'
      ELASTIC_PASSWORD: 'pass'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kibana-Logging
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
Maybe I am missing something with Docker networking, since I am using Docker for the first time. I have checked the ports exposed by the Docker containers and they are fine. I have done this without Docker using the same settings, but I am having problems doing it with Docker. Looking forward to your responses. Thank you very much.
Adding a username to the Elasticsearch environment solved the issue:
environment:
  discovery.type: 'single-node'
  ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
  xpack.security.enabled: 'true'
  ELASTIC_PASSWORD: 'pass'
  ELASTIC_USERNAME: 'elastic'
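To confirm the credentials are accepted, a quick check (a sketch; it assumes Elasticsearch is published on localhost:9200 and uses the username/password from the compose file above) is to query the cluster with basic auth:

# Should return cluster health JSON instead of a 401
curl -u elastic:pass http://localhost:9200/_cluster/health?pretty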
I am using Logstash via the logstash:7.9.1 image, and I get this error when I bring up docker-compose; I don't know what to do about it. (I tried deliberately breaking my Logstash config and pointing it at the wrong Elasticsearch port, but Docker still connects to 9200, so I think it isn't reading its settings from my Logstash config.) Please help!
My error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
My docker-compose.yml:
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    container_name: zookeeper
    ports:
      - 2181:2181
    networks:
      - bardz

  kafka:
    image: wurstmeister/kafka:2.11-1.1.0
    container_name: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: logs-topic:1:1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - 9092:9092
    volumes:
      - kofka-volume:/var/run/docker.sock
    networks:
      - bardz

  elasticsearch:
    build:
      context: elk/elasticsearch/
      args:
        ELK_VERSION: "7.9.1"
    volumes:
      - type: bind
        source: ./elk/elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - bardz

  logstash:
    image: logstash:7.9.1
    restart: on-failure
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    volumes:
      - logstash_data:/bitnami
      - ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
    environment:
      LOGSTASH_CONF_FILENAME: logstash-kafka.conf
    networks:
      - bardz
    depends_on:
      - elasticsearch

networks:
  bardz:
    external: true
    driver: bridge

volumes:
  elasticsearch:
  zipkin-volume:
  kofka-volume:
  logstash_data:
My Logstash config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs-topic"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    index => "logs-topic"
    workers => 1
  }
}
You are using the wrong password for the elastic user: in 7.9 it was changed from changeme to password, as shown in the ES contribution doc, but I tried it and this seems to apply only when you are running ES from source code.
In any case, getting a 401 means unauthorized access; you can read more about it here.
As you are not running the ES code from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in Docker, you need to go inside the container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
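For example (a sketch; <cont-id> is a placeholder for your container, and this assumes the tool bundled with ES 7.x), the built-in users' passwords can be set interactively:

# Open a shell inside the running Elasticsearch container
docker exec -it <cont-id> /bin/bash

# Inside the container: interactively set passwords for the built-in users
# (elastic, kibana, logstash_system, ...); requires xpack.security.enabled: true
bin/elasticsearch-setup-passwords interactive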
I'm trying to run Elasticsearch in a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indexes require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch container in the transport.host setting, or when I omit transport.host=localhost entirely.
I think that using the transport.host=localhost setting is wrong. Is there a proper configuration for Elasticsearch in Docker swarm?
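For illustration, this is the kind of variant that fails for me (the address below is an example container IP, not my real one):

environment:
  - cluster.name=elasticsearch
  - transport.host=172.18.0.5 # example container IP; the status page is unreachable with this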
I need to run more than one Elasticsearch node on the same machine, but when I add a second Elasticsearch service to docker-compose.yml, all the Elasticsearch services crash.
My sample YML file:
version: "2"
services:
elasticsearch_master:
image: elasticsearch:latest
command: "elasticsearch -Des.cluster.name=workagram -Des.node.master=true -Des.node.data=false"
ports:
- "9200:9200"
- 9300
elasticsearch1:
image: elasticsearch:latest
command: "elasticsearch -Des.cluster.name=workagram -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
links:
- elasticsearch_master
I'm trying to connect cAdvisor to Elasticsearch with Docker, and I'm getting this error:
cadvisor.go:113] Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: "elasticsearch:2.3.3"
    container_name: "elasticsearch"
    ports:
      - "9200:9200"
  kibana:
    image: "kibana:4.5.1"
    container_name: "kibana"
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  cadvisor:
    image: "google/cadvisor:latest"
    container_name: "cadvisor"
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    links:
      - elasticsearch
    restart: always
    command: -storage_driver="elasticsearch" -storage_driver_es_host="http://elasticsearch:9200"
If I change the command to
command: -storage_driver="elasticsearch" -storage_driver_es_host="http://172.22.0.5:9200"
everything works just fine. Any ideas?
What you are missing is an index in Elasticsearch; unfortunately, this is not well documented.
Go to your Kibana dashboard, open Dev Tools, and send this request:
PUT /.kibana/index-pattern/cadvisor
{"title" : "cadvisor", "timeFieldName": "container_stats.timestamp"}