Microservice cannot reach Elasticsearch image - spring-boot

I have a microservice generated with JHipster 5 and an Elasticsearch 2.4.1 image, both running in Vagrant on CentOS 7. The two containers are running, but the save and search operations cannot reach the Elasticsearch container.
docker-compose:
service-app:
  image: "..."
  depends_on:
    - service-mysql
    - service-elasticsearch
    - kafka
    - zookeeper
    - jhipster-registry
  environment:
    - SPRING_PROFILES_ACTIVE=dev,swagger
    - SPRING_CLOUD_CONFIG_URI=http://admin:admin@jhipster-registry:8761/config
    - SPRING_DATASOURCE_URL=jdbc:mysql://service-mysql:3306/service?useUnicode=true&characterEncoding=utf8&useSSL=false
    - SPRING_DATA_CASSANDRA_CONTACTPOINTS=cassandra
    - JHIPSTER_SLEEP=30
    - JHIPSTER_LOGGING_LOGSTASH_HOST=jhipster-logstash
    - JHIPSTER_LOGGING_LOGSTASH_PORT=5000
    - SPRING_DATA_ELASTICSEARCH_CLUSTER-NAME=SERVICE
    - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=service-elasticsearch:9300
    - SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS=kafka
    - SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES=zookeeper
    - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://admin:admin@jhipster-registry:8761/eureka
  ports:
    - 60088:8088
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "10"
service-elasticsearch:
  image: ...
  volumes:
    - service-elasticsearch:/usr/share/elasticsearch/data/
  environment:
    - network.host=0.0.0.0
    - cluster.name=service
    - discovery.type=single-node
    - CLUSTER_NAME=SERVICE
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "10"
application_dev.yml:
data:
  elasticsearch:
    properties:
      path:
        home: target/elasticsearch
application_prod:
data:
  jest:
    uri: http://localhost:9200
domain:

The issue is that one of the ES nodes in your cluster is running low on disk space, hence you are getting this exception.
Please make sure that you clean up disk space on the ES nodes on which you are getting the exception. I have faced this issue 2-3 times, and it does not depend on the Elasticsearch index size: even if you have a very small index on a large disk (say 2 TB), if you have less than about 10% free disk space (almost 200 GB in that case, which is huge) you will still get this exception and need to clean up disk space.
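A quick way to confirm this is to ask Elasticsearch how much disk each data node has left and, if needed, temporarily relax the disk allocation watermarks while you free up space. This is only a sketch; the host below assumes the compose service name and the default HTTP port from the setup above:
# show disk used/free per data node
curl -s 'http://service-elasticsearch:9200/_cat/allocation?v'
# optionally raise the default watermarks (85% low / 90% high) while cleaning up
curl -s -XPUT 'http://service-elasticsearch:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'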

Related

Traefik poor upload performance

Recently I moved to Traefik as my reverse proxy of choice, but I noticed that upload speed to my Synology NAS decreased dramatically when using Traefik with TLS enabled. I did a little investigation and installed a librespeed container to do some speed tests.
The results surprised me. Plain HTTP (directly to the container over VPN) gave 150/300, while through Traefik (over the public IP) the best it could do was 100/20. The VM configuration is 16 CPUs (hardware AES encryption supported / AMD EPYC 7281) and 32 GB of RAM on a 10 Gb network.
Is this the performance I should expect from Traefik? Upload speed decreased more than 10 times. Maybe it is a configuration issue?
services:
  traefik:
    image: traefik:v2.9.6
    container_name: traefik
    restart: unless-stopped
    networks:
      - outbound
      - internal
    command:
      - "--serversTransport.insecureSkipVerify=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker=true"
      - "--providers.docker.watch"
      - "--providers.docker.network=outbound"
      - "--providers.docker.swarmMode=false"
      - "--entrypoints.http.address=:80"
      - "--entrypoints.https.address=:443"
      - "--entryPoints.traefik.address=:8888"
      - "--entrypoints.http.http.redirections.entryPoint.to=https"
      - "--entrypoints.http.http.redirections.entryPoint.scheme=https"
      - "--providers.file.directory=/rules"
      - "--providers.file.watch=true"
      - "--api.insecure=true"
      - "--accessLog=true"
      - "--accessLog.filePath=/traefik.log"
      - "--accessLog.bufferingSize=100"
      - "--accessLog.filters.statusCodes=400-499"
      - "--metrics"
      - "--metrics.prometheus.buckets=0.1,0.3,1.2,5.0"
      #- "--log.level=DEBUG"
      - "--certificatesResolvers.myresolver.acme.caServer=https://acme-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.myresolver.acme.storage=acme.json"
      - "--certificatesResolvers.myresolver.acme.httpChallenge.entryPoint=http"
      - "--certificatesResolvers.myresolver.acme.tlsChallenge=true"
      - "--certificatesResolvers.myresolver.acme.email=asd@asd.me"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./traefik/acme.json:/acme.json
      - ./traefik/traefik.log:/traefik.log
      - ./traefik/rules:/rules
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
      - "8888:8888"
  librespeed:
    image: adolfintel/speedtest
    container_name: librespeed
    environment:
      - MODE=standalone
    networks:
      - outbound
    ports:
      - 8080:80
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.librespeed.rule=Host(`s.mydomain.com`)"
      - "traefik.http.services.librespeed.loadbalancer.server.port=80"
      - "traefik.http.routers.librespeed.entrypoints=https,http"
      - "traefik.http.routers.librespeed.tls=true"
      - "traefik.http.routers.librespeed.tls.certresolver=myresolver"
Maybe a speed decrease of up to 2x would be expected, but not this much.
There could be a few reasons why you are experiencing a decrease in upload speed when using Traefik as your reverse proxy with TLS enabled.
One potential reason is that the overhead of the encryption and decryption process is causing a bottleneck in your system. The CPU usage of your VM may be high when running Traefik, which can cause a decrease in performance.
Another potential reason could be that the configuration of your Traefik container is not optimized for performance. For example, there might be some misconfigured settings that are causing high CPU usage, or there might be some settings that are not properly utilizing the resources available on your system.
You could try some of the following steps to help improve the performance of your Traefik container (a sketch of the relevant options follows the list):
Raise or remove the entry point timeouts, for example via the --entryPoints.https.transport.respondingTimeouts.readTimeout and --entryPoints.https.transport.respondingTimeouts.writeTimeout options. Note that Traefik has no worker-thread or --workers setting; it spreads work across all CPU cores automatically.
To check if the problem is related to the encryption process, you could try disabling the encryption to see if that improves the performance.
Finally, you could try disabling the access log, which could help to reduce the CPU usage.
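As a rough sketch of those last two suggestions (the https entry point name is taken from the compose file above; the timeout values are assumptions to tune, not part of the original configuration):
command:
  # comment out the access log flags to cut per-request logging overhead
  # - "--accessLog=true"
  # - "--accessLog.filePath=/traefik.log"
  # relax the responding timeouts on the TLS entry point (0 disables the timeout)
  - "--entryPoints.https.transport.respondingTimeouts.readTimeout=0"
  - "--entryPoints.https.transport.respondingTimeouts.writeTimeout=0"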

How to communicate between two services in Fargate using docker compose

I am trying to host Elasticsearch and Kibana in AWS ECS (Fargate). I have created a docker-compose.yml file:
version: '2.2'
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        limits:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-$ENV
    environment:
      - node.name=es-$ENV
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      # - discovery.seed_hosts=es02,es03
      # - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ES_DB_PASSWORD
      - xpack.security.enabled=true
    logging:
      driver: awslogs
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: es-node
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana-$ENV
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: $ES_DB_URL
      ELASTICSEARCH_HOSTS: '["http://es-$ENV:9200"]'
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ES_DB_PASSWORD
    networks:
      - elastic
    logging:
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: "kibana-node"
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
networks:
  elastic:
    driver: bridge
and pass in the env variables using a .env.development file:
ENV="development"
ES_DB_URL="localhost"
ES_DB_PORT=9200
ES_DB_USER="elastic"
ES_DB_PASSWORD="****"
and bring up the stack in ECS, after creating a Docker context pointing to ECS, using this command:
docker compose --env-file ./.env.development up
However, after creating the stack, the Kibana node fails to establish communication with the Elasticsearch node. Here are the logs from the Kibana node container:
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: beats_management"
}
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"
}
[BABEL] Note: The code generator has deoptimised the styling of /usr/share/kibana/x-pack/plugins/canvas/server/templates/pitch_presentation.js as it exceeds the max of 500KB.
After doing some research I found that the ECS CLI does not support the service.networks docker-compose field, and the documentation gives this instruction: "Communication between services is implemented by SecurityGroups within the application VPC." I am wondering how to express this in the docker-compose.yml file, because the IP addresses are only assigned after the stack is created.
These containers should be able to communicate with each other via their compose service names. So, for example, the Kibana container should be able to reach the ES node using es-node. I assume this means you need to set ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'?
I am also not sure about ELASTICSEARCH_URL: $ES_DB_URL. I see you set ES_DB_URL="localhost" but that means that the kibana container will be calling localhost to try to reach the ES service (this may work on a laptop where all containers run on a flat network but that's not how it will work on ECS - where each compose service is a separate ECS service).
[UPDATE]
I took a stab at the compose file provided. Note that I have simplified it a bit to remove some variables, such as the env file and the logging entries (why did you need them? Compose/ECS will create the logging infrastructure for you).
This file works for me (with gotchas - see below):
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-node
    environment:
      - node.name=es-node
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=thisisawesome
      - xpack.security.enabled=true
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    container_name: kibana-node
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: es-node
      ELASTICSEARCH_HOSTS: http://es-node:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: thisisawesome
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
There are two major things I had to fix:
1- The Kibana task needed more horsepower (the default 0.5 vCPU and 512 MB of memory was not enough). I set the memory to 8 GB (which set the CPU to 1) and the Kibana container came up.
2- I had to increase the ulimits for the ES container. Some of the error messages in the logs pointed to the max open files limit and vm.max_map_count, both of which pointed to ulimits needing to be adjusted. For Fargate you need a special section in the task definition. I know there is a way to embed CFN code into the compose file via overlays, but I found it easier/quicker to docker compose convert the compose file into a CFN template and tweak that by adding this section right below the image:
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
]
So to recap, you'd need to take my compose above, convert it into a CFN file, add the ulimits snippet, and run it directly in CFN.
You can work backwards from here to re-add your variables etc.
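As a rough sketch of that workflow (the context, stack, and file names here are placeholders, not taken from the original setup):
# render the compose file into a CloudFormation template via the ECS docker context
docker --context myecscontext compose convert > stack.cfn.yaml
# add the "ulimits" section above to the Elasticsearch container definition, then deploy
aws cloudformation deploy --template-file stack.cfn.yaml --stack-name es-kibana --capabilities CAPABILITY_IAM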
HTH

How to run a Beat container that requires authentication to Elasticsearch

The main purpose: I want to use Logstash to collect log files that live on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
and then I want to install Filebeat on the target host in order to send logs to the ELK host.
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
-E setup.kibana.host=x.x.x.x:5601 \
-E ELASTIC_PASSWORD="changeme" \
-E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error still persists.
You should disable basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by adding the environment variable below to the ES Docker image and starting the ES container:
xpack.security.enabled : false
After this, there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
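Applied to the compose file from the question, the Elasticsearch environment block might then look like this (a sketch; only the relevant keys are shown):
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
  environment:
    ES_JAVA_OPTS: "-Xmx512m -Xms256m"
    discovery.type: single-node
    # security disabled, so no ELASTIC_PASSWORD here and no credentials in the Filebeat setup command
    xpack.security.enabled: "false"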

Index Name Not Being Set in Filebeat to Elasticsearch - ELK .NET Docker ElasticHQ

I am experimenting with some JSON that is already formatted for Elasticsearch, so I am going directly from Filebeat to Elasticsearch, as opposed to going through Logstash. This is using docker-compose:
version: '2.2'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
However, when I open ElasticHQ the index name has been labeled as filebeat-7.5.2-2020.02.10-000001 with a date stamp. I have specified the index name as Sample in my filebeat.yml. Is there something I am missing, or is this behavior normal?
Here is my filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.json
    json.keys_under_root: true
    json.add_error_key: true
#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"
It would be more practical to have a predefined name so that, if I use Postman as opposed to ElasticHQ, I can start querying my data without having to look up the index name.
I think Filebeat ILM might be taking over instead of the configured index name.
Starting with version 7.0, Filebeat uses index lifecycle management by
default when it connects to a cluster that supports lifecycle
management. Filebeat loads the default policy automatically and
applies it to any indices created by Filebeat.
And when ILM is enabled, the Filebeat Elasticsearch output index settings are ignored:
The index setting is ignored when index lifecycle management is
enabled. If you’re sending events to a cluster that supports index
lifecycle management, see Configure index lifecycle management to
learn how to change the index name.
You might need to disable ILM or, better yet, configure your desired index name using the ILM rollover_alias setting.
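For example, a minimal filebeat.yml sketch of both options (the setting names come from the Filebeat documentation; the "sample" names mirror the question):
# Option 1: disable ILM so the configured output index is honored
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"

# Option 2: keep ILM and set the rollover alias instead, so indices are created as sample-*
setup.ilm.rollover_alias: "sample"
setup.ilm.pattern: "{now/d}-000001"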

Why elasticsearch on docker swarm requires a transport.host=localhost setting?

I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indexes require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch Docker container in the transport.host setting, or when I omit the transport.host=localhost setting altogether.
I think that using the transport.host=localhost setting is wrong. Is there a proper configuration of Elasticsearch for Docker swarm?
