Not getting the Kibana GUI outside Kubernetes - Elasticsearch

I have created a kops cluster and set up Elasticsearch logging as below.
kops create cluster --zones ap-southeast-1a,ap-southeast-1b,ap-southeast-1c --topology private --networking calico --master-size t2.micro --master-count 3 --node-size t2.micro --node-count 2 --cloud-labels "Project=Kubernetes,Team=Devops" ${NAME} --ssh-public-key /root/.ssh/id_rsa.pub --yes
https://github.com/kubernetes/kops/blob/master/addons/logging-elasticsearch/v1.7.0.yaml
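For reference, that addon manifest can be applied with kubectl once the cluster is up (a sketch; the URL below is simply the raw version of the manifest linked above, and kops addons can also be enabled through the cluster spec):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/logging-elasticsearch/v1.7.0.yaml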
Here is some important cluster information.
root@ubuntu:~# kubectl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-logging ClusterIP 100.67.69.222 <none> 9200/TCP 2m
kibana-logging ClusterIP 100.67.182.172 <none> 5601/TCP 2m
kube-dns ClusterIP 100.64.0.10 <none> 53/UDP,53/TCP 6m
root@ubuntu:~# kubectl cluster-info
Kubernetes master is running at https://${NAME}
Elasticsearch is running at https://${NAME}/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://${NAME}/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Now when I access "https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging"
I get the following in the browser:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kibana-logging",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kibana-logging",
"uid": "8fc914f7-3f65-11e9-a970-0aaac13c99b2",
"resourceVersion": "923",
"creationTimestamp": "2019-03-05T16:41:44Z",
"labels": {
"k8s-addon": "logging-elasticsearch.addons.k8s.io",
"k8s-app": "kibana-logging",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Kibana"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 5601,
"targetPort": "ui"
}
],
"selector": {
"k8s-app": "kibana-logging"
},
"clusterIP": "100.67.182.172",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
When I access "https://${NAME}/api/v1/namespaces/kube-system/services/elasticsearch-logging"
I get the following in the browser:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "elasticsearch-logging",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/elasticsearch-logging",
"uid": "8f7cc654-3f65-11e9-a970-0aaac13c99b2",
"resourceVersion": "902",
"creationTimestamp": "2019-03-05T16:41:44Z",
"labels": {
"k8s-addon": "logging-elasticsearch.addons.k8s.io",
"k8s-app": "elasticsearch-logging",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Elasticsearch"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 9200,
"targetPort": "db"
}
],
"selector": {
"k8s-app": "elasticsearch-logging"
},
"clusterIP": "100.67.69.222",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
When I access "https://${NAME}/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/"
I get the following in the browser:
{
"name" : "elasticsearch-logging-0",
"cluster_name" : "kubernetes-logging",
"cluster_uuid" : "_na_",
"version" : {
"number" : "5.6.4",
"build_hash" : "8bbedf5",
"build_date" : "2017-10-31T18:55:38.105Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
}
When I access "https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging/proxy"
I get the error below:
:Error: 'dial tcp 100.111.147.69:5601: connect: connection refused'
Trying to reach: 'http://100.111.147.69:5601/'
Why am I not getting the Kibana GUI here?
After one hour, I got Kibana logs as below:
{"type":"log","#timestamp":"2019-03-22T12:46:58Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page. This may take a few minutes"}
{"type":"log","#timestamp":"2019-03-22T13:18:19Z","tags":["info","optimize"],"pid":1,"message":"Optimization of bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page complete in 1880.89 seconds"}
{"type":"log","#timestamp":"2019-03-22T13:18:20Z","tags":["status","plugin:kibana#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:21Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:21Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:22Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:30Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
{"type":"log","#timestamp":"2019-03-22T13:18:30Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:55Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["license","info","xpack"],"pid":1,"message":"Imported license information from Elasticsearch for [data] cluster: mode: trial | status: active | expiry date: 2019-04-21T11:47:30+00:00"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:20:37Z","tags":["status","plugin:searchprofiler#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:37Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from green to yellow - Waiting for Elasticsearch","prevState":"green","prevMsg":"Ready"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:tilemap#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:watcher#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:grokdebugger#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:upgrade#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:console#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:39Z","tags":["status","plugin:metrics#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["status","plugin:timelion#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["status","ui settings","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"response","#timestamp":"2019-03-22T13:26:01Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/","method":"get","headers":{"host":"MYHOSTNAME","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9","cache-control":"max-age=0","upgrade-insecure-requests":"1","x-forwarded-for":"172.20.1.246","x-forwarded-uri":"/api/v1/namespaces/kube-system/services/kibana-logging/proxy/"},"remoteAddress":"100.124.142.0","userAgent":"100.124.142.0"},"res":{"statusCode":200,"responseTime":622,"contentLength":9},"message":"GET / 200 622ms - 9.0B"}
Also, curl from the master node:
admin@ip-172-20-51-6:~$ curl 100.66.205.174:5601
<script>var hashRoute = '/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana';
var defaultRoute = '/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana';
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
[browser log screenshot omitted]

This is happening because it takes some time for the ELK stack to finish its initial setup and optimization.
If you check the log of the kibana-logging container, you will see this output:
$ kubectl -n kube-system logs -f kibana-logging-8675b4ffd-75rzd
{"type":"log","#timestamp":"2019-03-21T09:11:30Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page. This may take a few minutes"}
You need to wait around an hour; then the service will become available and you will be able to access the Kibana UI.
For more information, see this GitHub discussion.
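While waiting, you can follow the optimizer's progress and verify the UI locally once Kibana reports it is listening (a sketch; the deployment and service names may differ in your cluster):
# follow the Kibana pod logs until the "Server running at http://0:5601" line appears
kubectl -n kube-system logs -f deployment/kibana-logging
# then port-forward the service and open http://localhost:5601
kubectl -n kube-system port-forward service/kibana-logging 5601:5601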

Related

Bash redirection in docker container failing when ran in ECS task on Amazon Linux 2 instances

I am trying to run an ECS task that contains three containers: postgres, redis, and an image from a private ECR repository. The custom image's container definition has a command that waits until the postgres container can receive traffic, via a bash command:
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
When I run this via docker-compose on my local machine it works, but on the Amazon Linux 2 EC2 machine the following is printed when the while loop runs:
/bin/bash: line 1: postgres: Name or service not known
/bin/bash: line 1: /dev/tcp/postgres/5432: Invalid argument
The postgres container runs without error, and the last log from that container is:
database system is ready to accept connections
I am not sure if this is a Docker network issue or an issue with Amazon Linux 2's bash not being compiled with --enable-net-redirections, which I found explained here.
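To tell those two possibilities apart, it can help to exec into the running web container and check name resolution separately from bash's /dev/tcp support (a sketch; the container name is a placeholder, and getent is assumed to be present in the image):
# does the hostname 'postgres' resolve inside the web container at all?
docker exec -it <web-container> getent hosts postgres
# does this bash build support /dev/tcp? A closed local port should yield
# "Connection refused" rather than "No such file or directory"
docker exec -it <web-container> bash -c '</dev/tcp/127.0.0.1/1'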
Task Definition:
{
"networkMode": "bridge",
"containerDefinitions": [
{
"environment": [
{
"name": "POSTGRES_DB",
"value": "metadeploy"
},
{
"name": "POSTGRES_USER",
"value": "<redacted>"
},
{
"name": "POSTGRES_PASSWORD",
"value": "<redacted>"
}
],
"essential": true,
"image": "postgres:12.9",
"mountPoints": [],
"name": "postgres",
"memory": 1024,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "metadeploy-postgres",
"awslogs-region": "us-east-1",
"awslogs-create-group": "true",
"awslogs-stream-prefix": "mdp"
}
}
},
{
"essential": true,
"image": "redis:6.2",
"name": "redis",
"memory": 1024
},
{
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
"environment": [
{
"name": "DJANGO_SETTINGS_MODULE",
"value": "config.settings.local"
},
{
"name": "DATABASE_URL",
"value": "<redacted-postgres-url>"
},
{
"name": "REDIS_URL",
"value": "redis://redis:6379"
},
{
"name": "REDIS_HOST",
"value": "redis"
}
],
"essential": true,
"image": "the private ecr image uri built from here https://github.com/SFDO-Tooling/MetaDeploy",
"links": [
"redis"
],
"mountPoints": [
{
"containerPath": "/app/node_modules",
"sourceVolume": "AppNode_Modules"
}
],
"name": "web",
"portMappings": [
{
"containerPort": 8080,
"hostPort": 8080
},
{
"containerPort": 8000,
"hostPort": 8000
},
{
"containerPort": 6006,
"hostPort": 6006
}
],
"memory": 1024,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "metadeploy-web",
"awslogs-region": "us-east-1",
"awslogs-create-group": "true",
"awslogs-stream-prefix": "mdw"
}
}
}
],
"family": "MetaDeploy",
"volumes": [
{
"host": {
"sourcePath": "/app/node_modules"
},
"name": "AppNode_Modules"
}
]
}
The corresponding docker-compose.yml contains:
version: '3'
services:
  postgres:
    environment:
      POSTGRES_DB: metadeploy
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: sample_db_password
    volumes:
      - ./postgres:/var/lib/postgresql/data:delegated
    image: postgres:12.9
    restart: always
  redis:
    image: redis:6.2
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: |
      /bin/bash -c 'while !</dev/tcp/postgres/5432; do echo "Waiting for postgres database to start..."; /bin/sleep 1; done; \
      /bin/sh /app/start-server.sh;'
    ports:
      - '8080:8080'
      - '8000:8000'
      # Storybook server
      - '6006:6006'
    stdin_open: true
    tty: true
    depends_on:
      - postgres
      - redis
    links:
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: config.settings.local
      DATABASE_URL: postgres://postgres:sample_db_password@postgres:5432/metadeploy
      REDIS_URL: redis://redis:6379
      REDIS_HOST: redis
    volumes:
      - .:/app:cached
      - /app/node_modules
Do I need to recompile bash to use --enable-net-redirections, and if so how can I do that?
Without bash's net-redirection feature, your best bet is to use something like nc or netcat (if available) to determine whether the port is open. If those aren't available, it may be worth modifying your app logic to better handle database failure cases.
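For example, the container command could poll with nc instead of the /dev/tcp redirection (a sketch, assuming nc is installed in the image):
"command": [
  "/bin/bash",
  "-c",
  "until nc -z postgres 5432; do echo \"Waiting for postgres database to start...\"; sleep 1; done; /bin/sh /app/start-server.sh"
]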
Alternatively, a potentially better approach would be:
Adding a healthcheck to the postgres image.
Modifying the web service's depends_on clause ("long syntax") to add a dependency on postgres being service_healthy instead of the default service_started (see the sketch after this list).
This approach has two key benefits:
The postgres image likely has the tools to detect if the database is up and running.
The web service no longer needs to manually check if the database is ready or not.
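In docker-compose terms that looks roughly like the sketch below, depending on your Compose version (ECS container definitions have an equivalent healthCheck/dependsOn pair); the interval and retry values are placeholders:
services:
  postgres:
    image: postgres:12.9
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d metadeploy"]
      interval: 5s
      timeout: 5s
      retries: 12
  web:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started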

apache/apisix working mocking plugin example

I've installed apisix and apisix-dashboard with helm on my k8s cluster.
I used all defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
  --set-string admin.credentials.viewer="new_api_key" apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin; I've been following the docs.
In the provided example I'm unable to call the API on the route with ID 1, so I've created a custom route and then edited its VIEW JSON, changing the configuration according to the sample provided.
All calls on this route return 502 errors; in the logs I can see the route is sending traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
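One way to confirm that suspicion (a sketch; the namespace, admin service name, port and API key below are placeholders for your deployment) is to ask the APISIX Admin API which plugins are actually loaded:
kubectl -n my-apisix port-forward svc/apisix-admin 9180:9180
curl -s http://127.0.0.1:9180/apisix/admin/plugins/list -H 'X-API-KEY: new_api_key'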
Example of my route:
{
"uri": "/mock-test.html",
"name": "mock-sample-read",
"methods": [
"GET"
],
"plugins": {
"mocking": {
"content_type": "application/json",
"delay": 1,
"disable": false,
"response_schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"properties": {
"a": {
"type": "integer"
},
"b": {
"type": "integer"
}
},
"required": [
"a",
"b"
],
"type": "object"
},
"response_status": 200,
"with_mock_header": true
}
},
"upstream": {
"nodes": [
{
"host": "127.0.0.1",
"port": 1980,
"weight": 1
}
],
"timeout": {
"connect": 6,
"send": 6,
"read": 6
},
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"keepalive_pool": {
"idle_timeout": 60,
"requests": 1000,
"size": 320
}
},
"status": 1
}
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of apache/apisix:2.15.0-alpine, it looks like the mocking plugin is disabled. The docs say: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
Error logs (where I've changed the domain and IP address) suggest that the traffic is being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Globally, plugins are enabled; I've tested this using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of APISIX? There is currently no open issue on the GitHub repo.
Yes, the mocking plugin is not enabled.
You can just add it to the list of enabled plugins here:
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
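For reference, a rough sketch of that change: your override file must keep the full default plugin list from the linked values.yaml and append the extra entry (only the addition is shown here), then upgrade the release:
# custom-values.yaml
plugins:
  # ...all default plugins copied from the linked values.yaml...
  - mocking

helm upgrade apisix apisix/apisix --namespace my-apisix -f custom-values.yaml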

Kubernetes pod name/service name as index name in Kibana using fluentd

Currently we have many services running on k8s, sending logs with fluent-bit to Elasticsearch via fluentd.
In fluentd we have hard-coded logstash_prefix xxx-logstash, so all logs are created under the same index. Now we want to send data to Elasticsearch with an index per pod name/service name.
From the JSON log documents in Kibana we see there is a key PodName, but how do we use this in fluentd.conf? We are using Helm for the Elastic stack deployment.
fluentd.conf
# see more details in https://github.com/uken/fluent-plugin-elasticsearch
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-output
data:
  fluentd.conf: |
    # Configure the logging level to error
    <system>
      log_level error
    </system>
    # Ignore fluentd's own events
    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
      </match>
    </label>
    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>
    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>
    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type null
    </match>
    # Send the logs to Elasticsearch
    <match **>
      @type elasticsearch
      include_tag_key true
      host "{{ .Release.Name }}-es-http"
      port "9200"
      user "elastic"
      password "{{ (.Values.env.secret.password | b64dec) | indent 4 | trim }}"
      logstash_format true
      scheme https
      ssl_verify false
      logstash_prefix xxx-logstash
      logstash_prefix_separator -
      logstash_dateformat %Y.%m.%d
      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
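One direction that may work (a sketch, assuming the deployed fluent-plugin-elasticsearch supports chunk-key placeholders) is to reference the record key in logstash_prefix and list it as a buffer chunk key, so the prefix is substituted per chunk:
<match **>
  @type elasticsearch
  # ...same connection settings as above...
  logstash_format true
  logstash_prefix xxx-logstash-${PodName}
  logstash_prefix_separator -
  logstash_dateformat %Y.%m.%d
  <buffer tag, PodName>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/logs.buffer
    flush_thread_count 2
    flush_interval 5s
  </buffer>
</match>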
Sample log document from Kibana:
{
"_index": "xxx-logstash-2022.08.19",
"_type": "_doc",
"_id": "N34ntYIBvWtHvFBZmz-L",
"_version": 1,
"_score": 1,
"_ignored": [
"message.keyword"
],
"_source": {
"FileName": "/app/logs/app.log",
"#timestamp": "2022-08-19T08:10:46.854Z",
"#version": "1",
"message": "[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}",
"logger_name": "com.couchbase.endpoint",
"thread_name": "cb-events",
"level": "WARN",
"level_value": 30000,
"stack_trace": "com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n",
"PodName": "product-59b7f4b567-r52vn",
"Namespace": "designer-dev",
"tag": "tail.0"
},
"fields": {
"thread_name.keyword": [
"cb-events"
],
"level": [
"WARN"
],
"FileName": [
"/app/logs/app.log"
],
"stack_trace.keyword": [
"com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n"
],
"PodName.keyword": [
"product-59b7f4b567-r52vn"
],
"#version.keyword": [
"1"
],
"message": [
"[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}"
],
"Namespace": [
"designer-dev"
],
"PodName": [
"product-59b7f4b567-r52vn"
],
"#timestamp": [
"2022-08-19T08:10:46.854Z"
],
"level.keyword": [
"WARN"
],
"thread_name": [
"cb-events"
],
"level_value": [
30000
],
"Namespace.keyword": [
"designer-dev"
],
"#version": [
"1"
],
"logger_name": [
"com.couchbase.endpoint"
],
"tag": [
"tail.0"
],
"stack_trace": [
"com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n"
],
"tag.keyword": [
"tail.0"
],
"FileName.keyword": [
"/app/logs/app.log"
],
"logger_name.keyword": [
"com.couchbase.endpoint"
]
},
"ignored_field_values": {
"message.keyword": [
"[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}"
]
}
}

Security error when create new role with field_security

In Elastic, you can create roles. For the same index, I would like to create one role that displays some fields and another role that hides some fields.
For that, I found 'field_security' in the docs:
https://www.elastic.co/guide/en/elastic-stack-overview/7.3/field-level-security.html
Currently I use Elastic + Kibana version 7.3.1 in a Docker container.
My request to create the role is:
POST /_security/role/myNewRole
{
"cluster": ["all"],
"indices": [
{
"names": [ "twitter" ],
"privileges": ["all"],
"field_security" : {
"grant" : [ "user", "password" ]
}
}
]
}
And the response is:
{
"error": {
"root_cause": [
{
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
}
],
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
},
"status": 403
}
I checked the license, and the response is:
{
"license" : {
"status" : "active",
"uid" : "864f625a-fc7a-41de-91f3-c4a64e045a55",
"type" : "basic",
"issue_date" : "2019-09-10T10:04:38.150Z",
"issue_date_in_millis" : 1568109878150,
"max_nodes" : 1000,
"issued_to" : "docker-cluster",
"issuer" : "elasticsearch",
"start_date_in_millis" : -1
}
}
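As a quick cross-check (optional), the X-Pack info API reports which feature categories the current license makes available, in the same console syntax as the role request above:
GET /_xpack?categories=license,features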
My docker-compose file:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - ELASTIC_PASSWORD=toto
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.type=single-node"
      - "xpack.security.enabled=true"
      - "xpack.security.dls_fls.enabled=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    networks:
      - net
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    environment:
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=toto
    ports:
      - "5601:5601"
    networks:
      - net
volumes:
  esdata1:
    driver: local
networks:
  net:
How can I fix this licensing problem?
Thanks
Even though basic security features are free with a BASIC license, "field and document level security" is only available to Platinum-level subscribers... and to Elastic Cloud users.
So the simplest and least costly way of getting this feature is to subscribe to Elastic Cloud.

Can't connect to ElasticSearch server using Java API and shield

I am trying to connect to my Elasticsearch server using the Java API and Shield. I can execute index, get, delete and search operations on the existing cluster using the Sense plugin, for example, and via curl on port 9200. I've seen other threads about this but none of them worked, and none of them were trying to connect to an Elasticsearch web server with Shield.
I used the same API to connect to Elasticsearch on my localhost and it worked fine; however, when I try to connect to my web server I always get the same error:
Error
1342 [main] DEBUG org.elasticsearch.shield.transport.netty - [Benjamin Jacob Grimm] connected to node [{#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}]
1431 [elasticsearch[Benjamin Jacob Grimm][generic][T#1]] DEBUG org.elasticsearch.shield.transport.netty - [Benjamin Jacob Grimm] disconnecting from [{#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}], channel closed event
1463 [main] INFO org.elasticsearch.client.transport - [Benjamin Jacob Grimm] failed to get node info for {#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}, disconnecting...
NodeDisconnectedException[[][HOST/HOST_IP:9300][cluster:monitor/nodes/liveness] disconnected]
Output of ...9200/_nodes:
"cluster_name": "elasticsearch",
"nodes": {
"UYdZbCQKQZavtFYOoUpawg": {
"name": "Desmond Pitt",
"transport_address": "HOST_IP:9300",
"host": "HOST_IP",
"ip": "HOST_IP",
"version": "2.3.3",
"build": "218bdf1",
"http_address": "HOST_IP:9200",
"settings": {
"pidfile": "/var/run/elasticsearch/elasticsearch.pid",
"cluster": {
"name": "elasticsearch"
},
"path": {
"conf": "/etc/elasticsearch",
"data": "/var/lib/elasticsearch",
"logs": "/var/log/elasticsearch",
"home": "/usr/share/elasticsearch"
},
"shield": {
"http": {
"ssl": "true"
},
"https": {
"ssl": "true"
},
"transport": {
"ssl": "true"
}
},
"name": "Desmond Pitt",
"client": {
"type": "node"
},
"http": {
"cors": {
"allow-origin": "*",
"allow-headers": "Authorization, Origin, X-Requested-With, Content-Type, Accept",
"allow-credentials": "true",
"allow-methods": "OPTIONS, HEAD, GET, POST, PUT, DELETE",
"enabled": "true"
}
},
"index": {
"queries": {
"cache": {
"type": "opt_out_cache"
}
}
},
"foreground": "false",
"config": {
"ignore_system_properties": "true"
},
"network": {
"host": "HOST_IP",
"bind_host": "0.0.0.0",
"publish_host": "HOST_IP"
}
}
Java code:
TransportClient client = TransportClient.builder()
        .addPlugin(ShieldPlugin.class)
        .settings(Settings.builder()
                .put("cluster.name", ClusterName)
                .put("shield.user", "USER:PASSWORD")
                .build())
        .build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(HOST), 9300));
As suggested in "Can't connect to ElasticSearch server using Java API", I've tried to keep the Java version of my Java API client and my server in sync; currently I'm using:
Java API:
C:\Program Files\Java\jdk1.8.0_92
Server:
"version": "1.8.0_91",
"vm_name": "OpenJDK 64-Bit Server VM",
I don't know if using ...0_91 and ...0_92 causes any problem, but it doesn't seem to make a difference, because the Java API works well against my localhost server.
If you need more information feel free to ask.
Thanks in advance!
UPDATE:
Changes I made in elasticsearch.yml:
shield.ssl.keystore.path: /usr/share/elasticsearch/bin/shield/elastic.jks
shield.ssl.keystore.password: password
shield.ssl.keystore.key_password: password
shield.transport.ssl: true
shield.http.ssl: true
shield.https.ssl: true
network.host: HOST_IP
network.publish_host: HOST_IP
shield.ssl.hostname_verification.resolve_name: false
Result of https://HOST:9200/_cluster/health?pretty=true
{
"cluster_name": "elasticsearch",
"status": "yellow",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 5,
"active_shards": 5,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 5,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 50
}
UPDATE 2:
I've tried to activate SSL according to the official documentation and I got the following errors:
2082 [elasticsearch[Steel Serpent][transport_client_worker][T#1]{New I/O worker #1}] DEBUG org.elasticsearch.shield.transport.netty - [Steel Serpent] SSL/TLS handshake failed, closing channel: null
java.nio.channels.ClosedChannelException
at org.jboss.netty.handler.ssl.SslHandler.channelDisconnected(SslHandler.java:575)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Temporary solution
After that attempt I did as Vladislav Kysliy suggested and disabled SSL, and it worked, but I'm looking for a real solution, not a temporary one.
As I can see, you enabled SSL encryption, but your Java code doesn't activate SSL. According to the official documentation you should use something like this:
TransportClient client = TransportClient.builder()
        .addPlugin(ShieldPlugin.class)
        .settings(Settings.builder()
                .put("cluster.name", "myClusterName")
                .put("shield.user", "transport_client_user:changeme")
                .put("shield.ssl.keystore.path", "/path/to/client.jks") // (1)
                .put("shield.ssl.keystore.password", "password")
                .put("shield.transport.ssl", "true")
                ...
                .build())
Moreover, I would test my code without any encryption first and then add new features (e.g. SSL) to the config and code step by step.
UPD: To be honest, remotely fixing SSL issues is tricky. These errors often appear when the client sends an invalid SSL certificate. Probably you need to disable client auth.
Because you use SSL + Shield, the main idea is to check your functionality step by step: disable SSL and check with the Java API client, then enable SSL and check again.
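For that step-by-step check, verifying the REST layer with curl first can help separate Shield authentication problems from transport-level SSL problems (a sketch; -k skips certificate verification for a self-signed certificate):
curl -u USER:PASSWORD -k "https://HOST:9200/_cluster/health?pretty"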
