I'm trying to install Kibana 4 on my machine but it's giving the following errors.
{"#timestamp":"2015-04-15T06:25:50.688Z","level":"error","node_env":"production","error":"Request error, retrying -- connect ECONNREFUSED"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"Unable to revive connection: http://0.0.0.0:9200/","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"No living connections","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.698Z","level":"fatal","message":"No Living connections","node_env":"production","error":{"message":"No Living connections","name":"Error","stack":"Error: No Living connections\n at sendReqWithConnection (/home/kibana-4.0.0-rc1-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:174:15)\n
The ECONNREFUSED error is telling you that Kibana can't connect to Elasticsearch. The http://0.0.0.0:9200/ tells you what it's trying to connect to.
You need to modify the config/kibana.yml and change the elasticsearch_url setting to point to your cluster. If you are running Elasticsearch on the same box, the correct value is http://localhost:9200.
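For example, with Elasticsearch running on the same box, the relevant line in config/kibana.yml (Kibana 4 syntax) would be:
elasticsearch_url: "http://localhost:9200"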
Your Elasticsearch is down.
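A quick way to check whether Elasticsearch is up is to curl its HTTP port (assuming it runs locally on the default 9200):
curl http://localhost:9200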
In my case it was because the environment variable JAVA_HOME was not set correctly. You have to set it manually. These are the guidelines to do it:
Go to your PC's environment variables settings.
Create a new variable with the name JAVA_HOME. The variable value should be the Java installation path.
Make sure your path has no spaces. If your Java is in Program Files (x86) you can use the short name progra~2 instead of Program Files (x86).
As a result you have something like this: C:\Progra~2\Java\jre1.8.0_131
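Alternatively, you can set the variable from a Command Prompt with setx (a sketch; adjust the path to your own Java installation):
setx JAVA_HOME "C:\Progra~2\Java\jre1.8.0_131"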
There is another reason why this might happen in the case you are using AWS Elasticsearch service.
Missing access policies that grant rights on the ES domain, or not loading the right AWS credentials, will be the root cause.
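For illustration, a domain access policy that grants an IAM role access to the AWS Elasticsearch domain could look like this (the account ID, role, region and domain name are hypothetical placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/my-kibana-role" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}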
There is one more possibility: maybe your Elasticsearch does not run properly as you want. Please check this link and try to dockerize Elasticsearch.
For me, this docker-compose.yml file can dockerize Elasticsearch:
services:
  elasticsearch:
    image: "${CREATED_IMAGE_NAME_PREFIX}:1"
    container_name: efk_elastic
    build:
      context: ./elasticsearch
      args:
        EFK_VERSION: $EFK_VERSION
        ELASTIC_PORT1: $ELASTIC_PORT1
        ELASTIC_PORT2: $ELASTIC_PORT2
    environment:
      # node.name: node
      # cluster.name: elasticsearch-default
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      discovery.type: single-node
      ELASTIC_PASSWORD: changeme
      http.cors.enabled: "true"
      http.cors.allow-credentials: "true"
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
    hostname: elasticsearch
    ports:
      - "${ELASTIC_EXPOSED_PORT1}:$ELASTIC_PORT1"
      - "$ELASTIC_EXPOSED_PORT2:${ELASTIC_PORT2}"
    volumes:
      # - type: bind
      #   source: ./elasticsearch/config/elasticsearch.yml
      #   target: /usr/share/elasticsearch/config/elasticsearch.yml
      #   read_only: true
      - type: volume
        source: elasticsearch_data
        target: /usr/share/elasticsearch/data
    networks:
      - efk
Please note that this is not complete. For more details please see my GitHub repository.
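For instance, the top-level network and volume referenced above still have to be declared somewhere in the file; a minimal sketch of that missing part:
networks:
  efk:
volumes:
  elasticsearch_data: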
Related
I'm using a Docker Compose file for an ELK setup with a recent Kibana version (above 7). I set the xpack.encryptedSavedObjects.encryptionKey parameter in kibana.yml so that I can use the alerts and actions feature, but even after that I'm not able to create alerts. Can anyone help me please?
I generated a 32-character encryption key using the Python uuid module.
According to https://github.com/elastic/kibana/issues/57773, the environment variable XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY was missing from the Kibana config. The fix was merged in Feb 2020 and is now working.
The encryption key XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY has to be 32 characters or longer. https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html
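One way to generate a suitable key, as the asker did with the Python uuid module (uuid4().hex yields exactly 32 hex characters):
python3 -c "import uuid; print(uuid.uuid4().hex)"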
A working configuration could look like this:
...
  kibana:
    depends_on:
      - elasticsearch
    image: docker.elastic.co/kibana/kibana:8.0.0-rc2
    container_name: kibana
    environment:
      - ...
      - SERVER_PUBLICBASEURL=https://kibana.stackoverflow.com/
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=a7a6311933d3503b89bc2dbc36572c33a6c10925682e591bffcab6911c06786d
      - ...
...
I have tried using the environment variable in my docker-compose.yml file as follows:
kib01:
  image: docker.elastic.co/kibana/kibana:${VERSION}
  container_name: kib01
  depends_on: {"es01": {"condition": "service_healthy"}}
  ports:
    - 5601:5601
  environment:
    SERVERNAME: localhost
    ELASTICSEARCH_URL: https://es01:9200
    ELASTICSEARCH_HOSTS: https://es01:9200
    XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: "743787217A45432B462D4A614EF35266"
  volumes:
    - /var/elasticsearch/config/certs:$CERTS_DIR
  networks:
    - elastic
We converted the setting name xpack.encryptedSavedObjects.encryptionKey into the environment variable format XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY by replacing . with _ and converting to all caps.
Maybe there is a problem with mounting the file; I opted for environment variables in my docker-compose file instead.
services:
  kibana:
    ...
    environment:
      ...
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: abcd...
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
networks:
  webnet:
volumes:
  mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes with one string (though the forward slashes are cutting the string in this case), but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host to a Linux container, so that the existing contents of the Windows dir are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4, so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer username and password, with the notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - type: bind
      source: c:/path/to/interesting/directory
      target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
I am using docker-compose as in https://github.com/davidefiocco/dockerized-elasticsearch-indexer/blob/master/docker-compose.yml to initialize a containerized elasticsearch index.
Now, I would like to set a larger value for indices.query.bool.max_clause_count than the default setting, using an elasticsearch.yml config file (this is to run some heavy queries as in Elasticsearch - set max_clause_count).
So far I tried to add in the docker-compose.yml a volume with:
services:
  elasticsearch:
    volumes:
      - ./elasticsearch/config/elasticsearch.yml
(and variations thereof), trying to point to an elasticsearch.yml file (that I would like to ship with the rest of the files) with the right max_clause_count setting, but to no avail.
Can someone point me in the right direction?
You can mount the host's directory containing the elasticsearch.yml into the container using
services:
  elasticsearch:
    volumes:
      - path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
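The mounted custom_elasticsearch.yml can then carry the desired setting (using the same value as the workaround below; keep whatever other settings your image needs):
indices.query.bool.max_clause_count: 1000000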
One workaround to perform that (trivial) modification to elasticsearch.yml in the container is to directly modify the relevant Dockerfile with the syntax
USER root
RUN echo "indices.query.bool.max_clause_count: 1000000" >> /usr/share/elasticsearch/config/elasticsearch.yml
so as to append the desired custom value.
I have the following Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.4.0
RUN elasticsearch
EXPOSE 80
I think the 3rd line is never reached.
When I try to access the Docker container from my local machine through:
172.17.0.2:9300
I get nothing. What am I missing? I want to access Elasticsearch from the local host machine.
I recommend using docker-compose (which makes a lot of things much easier) with the following configuration.
Configuration (for development)
The configuration starts 3 services: Elasticsearch itself plus extra utilities for development like Kibana and the head plugin (these can be omitted if you don't need them).
In the same directory you will need three files:
docker-compose.yml
elasticsearch.yml
kibana.yml
With the following contents:
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    container_name: elasticsearch_540
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    container_name: kibana_540
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
  headPlugin:
    image: mobz/elasticsearch-head:5
    container_name: head_540
    ports:
      - 9100:9100
volumes:
  esdata:
    driver: local
elasticsearch.yml
cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"
kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
Running
With the above three files in the same directory, and that directory set as the current working directory, you run the following (it could require sudo, depending on how you have your docker-compose set up):
docker-compose up
It will start up and you will see logs from three different services: elasticsearch_540, kibana_540 and head_540.
After the initial start-up you will have your Elasticsearch cluster available over HTTP on port 9200 and over TCP on port 9300. Validate that the cluster started up with the following curl:
curl -u elastic:changeme http://localhost:9200/_cat/health
Then you can view and play with your cluster using either Kibana (with credentials elastic / changeme):
http://localhost:5601/
or head plugin:
http://localhost:9100/?base_uri=http://localhost:9200&auth_user=elastic&auth_password=changeme
Your container is auto-exiting because of insufficient virtual memory. By default, running an Elasticsearch container requires vm.max_map_count to be at least 262144, but if you run the command sysctl vm.max_map_count you will see it is around 65530. Increase the virtual memory count with sysctl -w vm.max_map_count=262144, then run the container again with docker run <IMAGE_ID>; your container should keep running and you should be able to access Elasticsearch at port 9200 or 9300.
Edit: check this link https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html#vm-max-map-count
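Note that sysctl -w does not survive a reboot; to persist the setting on a typical Linux host you can also append it to /etc/sysctl.conf (a sketch; the sysctl config location can vary by distro):
echo "vm.max_map_count=262144" >> /etc/sysctl.conf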
Best would be to follow the official Elasticsearch documentation, which has a nice section on running a single-node Elasticsearch cluster, as well as on running a multi-node Elasticsearch cluster using docker-compose.
Please refer to the version-specific documentation, which can be accessed via the version drop-down in the official Elasticsearch documentation.
I'm working with docker for the first time.
I successfully installed Elasticsearch and Kibana on Docker, but when I try to connect Kibana with Elasticsearch I get a red status with the following errors:
ui settings Elasticsearch plugin is red
plugin:elasticsearch#5.1.1 Authentication Exception
I'm not sure, but I think the problem is that Kibana doesn't pass the Elasticsearch X-Pack authentication.
Now, I'm trying to disable this authentication via the Elasticsearch yml file, according to the instructions here.
But I can't find the yml file anywhere (I searched /usr/share/elasticsearch but couldn't find either the config directory or the elasticsearch.yml file).
How do I configure Elasticsearch with Docker?
P.S.
I'm working with Ubuntu 16.04
For Debian/Ubuntu/Mint, you can find the config files under the /etc folder.
/etc/elasticsearch/elasticsearch.yml
Take a look at: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-dir-layout.html
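If Elasticsearch runs inside a Docker container instead, the official images keep the config under /usr/share/elasticsearch/config, so you can inspect it from the host (assuming a container named elasticsearch):
docker exec -it elasticsearch cat /usr/share/elasticsearch/config/elasticsearch.yml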
I'm wondering why this is even happening. With the following docker-compose.yml it's working fine for me with security enabled:
---
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    links:
      - elasticsearch
    ports:
      - 5602:5601
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.1.1
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
volumes:
  esdata1:
    driver: local
I successfully ran Elasticsearch and Kibana using the official Elastic Docker images. Somehow, the container version in the official Elastic documentation didn't work for me.
If you prefer to start the containers using docker run and not through a compose file (only use this for dev environments; it is not recommended for prod):
docker network create elastic
docker run --network=elastic --name=elasticsearch docker.elastic.co/elasticsearch/elasticsearch:5.2.2
docker run --network=elastic -p 5601:5601 docker.elastic.co/kibana/kibana:5.2.2
A brief description can be found here:
https://discuss.elastic.co/t/kibana-docker-image-doesnt-connect-to-elasticsearch-image/79511/4