Kubernetes pod name/service name as index name in Kibana using fluentd - elasticsearch

Currently we have many services running on k8s; they ship logs with Fluent Bit to Elasticsearch through Fluentd.
In Fluentd we have hard-coded logstash_prefix xxx-logstash, so all logs end up in the same index. Now we want to send data to Elasticsearch with an index per pod name/service name.
From the JSON log documents in Kibana we see there is a key PodName, but how can we use it in fluentd.conf? We are using Helm for the Elastic Stack deployment.
fluentd.conf
# See more details in https://github.com/uken/fluent-plugin-elasticsearch
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-output
data:
  fluentd.conf: |
    # Configure the logging level to error
    <system>
      log_level error
    </system>
    # Ignore fluentd's own events
    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
      </match>
    </label>
    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>
    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>
    # Discard the healthcheck instead of forwarding it
    <match fluentd.healthcheck>
      @type null
    </match>
    # Send the logs to Elasticsearch
    <match **>
      @type elasticsearch
      include_tag_key true
      host "{{ .Release.Name }}-es-http"
      port "9200"
      user "elastic"
      password "{{ (.Values.env.secret.password | b64dec) | indent 4 | trim }}"
      logstash_format true
      scheme https
      ssl_verify false
      logstash_prefix xxx-logstash
      logstash_prefix_separator -
      logstash_dateformat %Y.%m.%d
      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
Sample log document from Kibana
{
"_index": "xxx-logstash-2022.08.19",
"_type": "_doc",
"_id": "N34ntYIBvWtHvFBZmz-L",
"_version": 1,
"_score": 1,
"_ignored": [
"message.keyword"
],
"_source": {
"FileName": "/app/logs/app.log",
"#timestamp": "2022-08-19T08:10:46.854Z",
"#version": "1",
"message": "[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}",
"logger_name": "com.couchbase.endpoint",
"thread_name": "cb-events",
"level": "WARN",
"level_value": 30000,
"stack_trace": "com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n",
"PodName": "product-59b7f4b567-r52vn",
"Namespace": "designer-dev",
"tag": "tail.0"
},
"fields": {
"thread_name.keyword": [
"cb-events"
],
"level": [
"WARN"
],
"FileName": [
"/app/logs/app.log"
],
"stack_trace.keyword": [
"com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n"
],
"PodName.keyword": [
"product-59b7f4b567-r52vn"
],
"#version.keyword": [
"1"
],
"message": [
"[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}"
],
"Namespace": [
"designer-dev"
],
"PodName": [
"product-59b7f4b567-r52vn"
],
"#timestamp": [
"2022-08-19T08:10:46.854Z"
],
"level.keyword": [
"WARN"
],
"thread_name": [
"cb-events"
],
"level_value": [
30000
],
"Namespace.keyword": [
"designer-dev"
],
"#version": [
"1"
],
"logger_name": [
"com.couchbase.endpoint"
],
"tag": [
"tail.0"
],
"stack_trace": [
"com.couchbase.client.core.endpoint.BaseEndpoint$2: finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting.\n"
],
"tag.keyword": [
"tail.0"
],
"FileName.keyword": [
"/app/logs/app.log"
],
"logger_name.keyword": [
"com.couchbase.endpoint"
]
},
"ignored_field_values": {
"message.keyword": [
"[com.couchbase.endpoint][EndpointConnectionFailedEvent][1485us] Connect attempt 16569 failed because of : finishConnect(..) failed: Connection refused: xxx-couchbase-cluster.couchbase/10.244.27.5:8091 - Check server ports and cluster encryption setting. {\"circuitBreaker\":\"DISABLED\",\"coreId\":\"0x94bd86a800000002\",\"remote\":\"xxx-couchbase-cluster.couchbase:8091\",\"type\":\"MANAGER\"}"
]
}
}
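One common way to do this with fluent-plugin-elasticsearch is to make PodName a buffer chunk key and reference it as a placeholder in logstash_prefix. The block below is only a sketch of the relevant match section, assuming PodName is a top-level key present in every record (as in the sample document above); events without it would need a default value or a separate match rule:
<match **>
  @type elasticsearch
  include_tag_key true
  host "{{ .Release.Name }}-es-http"
  port "9200"
  user "elastic"
  password "{{ (.Values.env.secret.password | b64dec) | indent 4 | trim }}"
  logstash_format true
  scheme https
  ssl_verify false
  # ${PodName} is resolved per buffer chunk because PodName is listed as a chunk key below
  logstash_prefix ${PodName}
  logstash_prefix_separator -
  logstash_dateformat %Y.%m.%d
  <buffer tag, PodName>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/logs.buffer
    flush_thread_count 2
    flush_interval 5s
  </buffer>
</match>
Keep in mind that chunking by PodName gives every pod its own buffer chunks and its own daily index, so many short-lived pods will multiply both buffer files and indices; chunking on a stable service label instead of the raw pod name is often the safer choice.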

Related

Bash redirection in Docker container failing when run in an ECS task on Amazon Linux 2 instances

I am trying to run an ECS task that contains 3 containers: postgres, redis, and an image from a private ECR repository. The custom image's container definition has a command that waits until the postgres container can receive traffic, using a bash loop:
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
When I run this via docker-compose on my local machine it works, but on the Amazon Linux 2 EC2 machine the following is printed when the while loop runs:
/bin/bash: line 1: postgres: Name or service not known
/bin/bash: line 1: /dev/tcp/postgres/5432: Invalid argument
The postgres container runs without error, and the last log line from that container is:
database system is ready to accept connections
I am not sure if this is a Docker network issue or an issue with Amazon Linux 2's bash not being compiled with --enable-net-redirections, which I found explained here.
Task Definition:
{
"networkMode": "bridge",
"containerDefinitions": [
{
"environment": [
{
"name": "POSTGRES_DB",
"value": "metadeploy"
},
{
"name": "POSTGRES_USER",
"value": "<redacted>"
},
{
"name": "POSTGRES_PASSWORD",
"value": "<redacted>"
}
],
"essential": true,
"image": "postgres:12.9",
"mountPoints": [],
"name": "postgres",
"memory": 1024,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "metadeploy-postgres",
"awslogs-region": "us-east-1",
"awslogs-create-group": "true",
"awslogs-stream-prefix": "mdp"
}
}
},
{
"essential": true,
"image": "redis:6.2",
"name": "redis",
"memory": 1024
},
{
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
"environment": [
{
"name": "DJANGO_SETTINGS_MODULE",
"value": "config.settings.local"
},
{
"name": "DATABASE_URL",
"value": "<redacted-postgres-url>"
},
{
"name": "REDIS_URL",
"value": "redis://redis:6379"
},
{
"name": "REDIS_HOST",
"value": "redis"
}
],
"essential": true,
"image": "the private ecr image uri built from here https://github.com/SFDO-Tooling/MetaDeploy",
"links": [
"redis"
],
"mountPoints": [
{
"containerPath": "/app/node_modules",
"sourceVolume": "AppNode_Modules"
}
],
"name": "web",
"portMappings": [
{
"containerPort": 8080,
"hostPort": 8080
},
{
"containerPort": 8000,
"hostPort": 8000
},
{
"containerPort": 6006,
"hostPort": 6006
}
],
"memory": 1024,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "metadeploy-web",
"awslogs-region": "us-east-1",
"awslogs-create-group": "true",
"awslogs-stream-prefix": "mdw"
}
}
}
],
"family": "MetaDeploy",
"volumes": [
{
"host": {
"sourcePath": "/app/node_modules"
},
"name": "AppNode_Modules"
}
]
}
The corresponding docker-compose.yml contains:
version: '3'
services:
  postgres:
    environment:
      POSTGRES_DB: metadeploy
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: sample_db_password
    volumes:
      - ./postgres:/var/lib/postgresql/data:delegated
    image: postgres:12.9
    restart: always
  redis:
    image: redis:6.2
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: |
      /bin/bash -c 'while !</dev/tcp/postgres/5432; do echo "Waiting for postgres database to start..."; /bin/sleep 1; done; \
      /bin/sh /app/start-server.sh;'
    ports:
      - '8080:8080'
      - '8000:8000'
      # Storybook server
      - '6006:6006'
    stdin_open: true
    tty: true
    depends_on:
      - postgres
      - redis
    links:
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: config.settings.local
      DATABASE_URL: postgres://postgres:sample_db_password@postgres:5432/metadeploy
      REDIS_URL: redis://redis:6379
      REDIS_HOST: redis
    volumes:
      - .:/app:cached
      - /app/node_modules
Do I need to recompile bash to use --enable-net-redirections, and if so how can I do that?
Without bash's net redirection feature, your best bet is to use something like nc or netcat (if available) to determine if the port is open. If those aren't available, it may be worth modifying your app logic to better handle database failure cases.
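For example, if the image ships a netcat that supports -z (probe the port without sending data), a rough equivalent of the /dev/tcp loop would be something like this sketch:
/bin/bash -c 'until nc -z postgres 5432; do echo "Waiting for postgres database to start..."; /bin/sleep 1; done; /bin/sh /app/start-server.sh'
This still requires the web container to be able to resolve the postgres hostname, which the "Name or service not known" error suggests is also part of the problem in bridge mode.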
Alternately, a potentially better approach (see the sketch after this list) would be:
Adding a healthcheck to the postgres image.
Modifying the web service's depends_on clause "long syntax" to add a dependency on postgres being service_healthy instead of the default service_started.
This approach has two key benefits:
The postgres image likely has the tools to detect if the database is up and running.
The web service no longer needs to manually check if the database is ready or not.
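In docker-compose terms, a minimal sketch of that approach, assuming the official postgres image (which ships pg_isready), could look like the fragment below; the long depends_on syntax with condition needs a Compose version that supports it, and ECS container definitions offer analogous healthCheck and dependsOn settings:
services:
  postgres:
    image: postgres:12.9
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres -d metadeploy"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
With this in place, the web command can simply start the server, because the orchestrator will not start the container until postgres reports healthy.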

consul deregister_critical_service_after is not working

Hello everyone, I have a health check on my Consul service. My goal is that whenever the service is unhealthy, Consul should remove it from the service catalog.
Below is my config:
{
"service": {
"name": "api",
"tags": [ "api-tag" ],
"port": 80
},
"check": {
"id": "api_up",
"name": "Fetch health check from local nginx",
"http": "http://localhost/HealthCheck",
"interval": "5s",
"timeout": "1s",
"deregister_critical_service_after": "15s"
},
"data_dir": "/consul/data",
"retry_join": [
"192.168.0.1",
"192.168.0.2",
]
}
Thanks for all the help.
The reason the service is not being de-registered is that the check is being specified outside of the service {} block in your JSON. This makes the check a node-level check, not a service-level check.
Here's a pretty-printed version of the config you provided.
{
"service": {
"name": "api",
"tags": [
"api-tag"
],
"port": 80
},
"check": {
"id": "api_up",
"name": "Fetch health check from local nginx",
"http": "http://localhost/HealthCheck",
"interval": "5s",
"timeout": "1s",
"deregister_critical_service_after": "15s"
},
"data_dir": "/consul/data",
"retry_join": [
"192.168.0.1",
"192.168.0.2",
]
}
Below is the configuration you should be using in order to correctly associate the check with the configured service, and de-register the service after the check has been marked as critical for more than 15 seconds.
{
"service": {
"name": "api",
"tags": [
"api-tag"
],
"port": 80,
"check": {
"id": "api_up",
"name": "Fetch health check from local nginx",
"http": "http://localhost/HealthCheck",
"interval": "5s",
"timeout": "1s",
"deregister_critical_service_after": "15s"
}
},
"data_dir": "/consul/data",
"retry_join": [
"192.168.0.1",
"192.168.0.2"
]
}
Note this statement from the docs for DeregisterCriticalServiceAfter.
If a check is in the critical state for more than this configured value, then its associated service (and all of its associated checks) will automatically be deregistered. The minimum timeout is 1 minute, and the process that reaps critical services runs every 30 seconds, so it may take slightly longer than the configured timeout to trigger the deregistration. This should generally be configured with a timeout that's much, much longer than any expected recoverable outage for the given service.

How to set up a single-node Consul server/client?

What configuration is required to achieve this?
It's possible using the "development mode" mentioned at https://learn.hashicorp.com/consul/getting-started/agent (but not recommended for production).
I've tried setting this up, but I'm not sure how to set the client config. What I've tried is a config of:
{
"data_dir": "/tmp2/consul-client",
"log_level": "INFO",
"server": false,
"node_name": "master",
"addresses": {
"https": "127.0.0.1"
},
"bind_addr": "127.0.0.1"
}
Which results in a failure of:
consul agent -config-file=client.json
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Could not set up network transport: failed to obtain an address: Failed to start TCP listener on "127.0.0.1" port 8301: listen tcp 127.0.0.1:8301: bind: address already in use
No "client" agent is required to run for an operational Consul cluster.
I had to set this server / master with the bootstrap_expect set to 1(number of nodes for boostrap process):
{
"retry_join" : ["127.0.0.1"],
"data_dir": "/tmp2/consul",
"log_level": "INFO",
"server": true,
"node_name": "master",
"addresses": {
"https": "127.0.0.1"
},
"bind_addr": "127.0.0.1",
"ui": true,
"bootstrap_expect": 1
}
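A config like this can then be started by pointing the agent at the file (the filename here is illustrative):
consul agent -config-file=server.json
With bootstrap_expect set to 1, the single server elects itself leader, and the same agent serves the HTTP API and UI, so no separate client agent is needed on the node.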

Not getting Kibana GUI outside Kubernetes

I have created a kops cluster and Elasticsearch logging as below.
kops create cluster --zones ap-southeast-1a,ap-southeast-1b,ap-southeast-1c --topology private --networking calico --master-size t2.micro --master-count 3 --node-size t2.micro --node-count 2 --cloud-labels "Project=Kubernetes,Team=Devops" ${NAME} --ssh-public-key /root/.ssh/id_rsa.pub --yes
https://github.com/kubernetes/kops/blob/master/addons/logging-elasticsearch/v1.7.0.yaml
Then some important cluster information:
root@ubuntu:~# kubectl get services -n kube-system
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
elasticsearch-logging   ClusterIP   100.67.69.222    <none>        9200/TCP        2m
kibana-logging          ClusterIP   100.67.182.172   <none>        5601/TCP        2m
kube-dns                ClusterIP   100.64.0.10      <none>        53/UDP,53/TCP   6m
root@ubuntu:~# kubectl cluster-info
Kubernetes master is running at https://${NAME}
Elasticsearch is running at https://${NAME}/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://${NAME}/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Now when I access "https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging",
I get the following in the browser:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kibana-logging",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kibana-logging",
"uid": "8fc914f7-3f65-11e9-a970-0aaac13c99b2",
"resourceVersion": "923",
"creationTimestamp": "2019-03-05T16:41:44Z",
"labels": {
"k8s-addon": "logging-elasticsearch.addons.k8s.io",
"k8s-app": "kibana-logging",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Kibana"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 5601,
"targetPort": "ui"
}
],
"selector": {
"k8s-app": "kibana-logging"
},
"clusterIP": "100.67.182.172",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
when i access "https://${NAME}/api/v1/namespaces/kube-system/services/elasticsearch-logging"
i got as below in browser
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "elasticsearch-logging",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/elasticsearch-logging",
"uid": "8f7cc654-3f65-11e9-a970-0aaac13c99b2",
"resourceVersion": "902",
"creationTimestamp": "2019-03-05T16:41:44Z",
"labels": {
"k8s-addon": "logging-elasticsearch.addons.k8s.io",
"k8s-app": "elasticsearch-logging",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Elasticsearch"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 9200,
"targetPort": "db"
}
],
"selector": {
"k8s-app": "elasticsearch-logging"
},
"clusterIP": "100.67.69.222",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
when i access "https://${NAME}/api/v1/namespaces/kubesystem/services/elasticsearch-logging/proxy/"
i got below in browser
{
"name" : "elasticsearch-logging-0",
"cluster_name" : "kubernetes-logging",
"cluster_uuid" : "_na_",
"version" : {
"number" : "5.6.4",
"build_hash" : "8bbedf5",
"build_date" : "2017-10-31T18:55:38.105Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
when i access "https://${NAME}/api/v1/namespaces/kube-system/services/kibana-logging/proxy"
i got error as below.
:Error: 'dial tcp 100.111.147.69:5601: connect: connection refused'
Trying to reach: 'http://100.111.147.69:5601/'
why i am not getting GUI of kibana here?
After one hour, I got the Kibana logs below:
{"type":"log","#timestamp":"2019-03-22T12:46:58Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page. This may take a few minutes"}
{"type":"log","#timestamp":"2019-03-22T13:18:19Z","tags":["info","optimize"],"pid":1,"message":"Optimization of bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page complete in 1880.89 seconds"}
{"type":"log","#timestamp":"2019-03-22T13:18:20Z","tags":["status","plugin:kibana#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:21Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:21Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:22Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:30Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
{"type":"log","#timestamp":"2019-03-22T13:18:30Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:36Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:18:55Z","tags":["status","plugin:elasticsearch#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["license","info","xpack"],"pid":1,"message":"Imported license information from Elasticsearch for [data] cluster: mode: trial | status: active | expiry date: 2019-04-21T11:47:30+00:00"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:xpack_main#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:graph#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:18:57Z","tags":["status","plugin:reporting#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
{"type":"log","#timestamp":"2019-03-22T13:20:37Z","tags":["status","plugin:searchprofiler#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:37Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"yellow","message":"Status changed from green to yellow - Waiting for Elasticsearch","prevState":"green","prevMsg":"Ready"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:ml#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:tilemap#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:watcher#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:grokdebugger#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:upgrade#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:38Z","tags":["status","plugin:console#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:39Z","tags":["status","plugin:metrics#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["status","plugin:timelion#5.6.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
{"type":"log","#timestamp":"2019-03-22T13:20:41Z","tags":["status","ui settings","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"response","#timestamp":"2019-03-22T13:26:01Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/","method":"get","headers":{"host":"MYHOSTNAME","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9","cache-control":"max-age=0","upgrade-insecure-requests":"1","x-forwarded-for":"172.20.1.246","x-forwarded-uri":"/api/v1/namespaces/kube-system/services/kibana-logging/proxy/"},"remoteAddress":"100.124.142.0","userAgent":"100.124.142.0"},"res":{"statusCode":200,"responseTime":622,"contentLength":9},"message":"GET / 200 622ms - 9.0B"}
Also, curl from the master node:
admin@ip-172-20-51-6:~$ curl 100.66.205.174:5601
<script>var hashRoute = '/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana';
var defaultRoute = '/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana';
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
browser log
This is happening because Kibana takes some time to optimize and cache its bundles the first time it starts.
If you check the logs of the kibana-logging container, you will see this output:
$ kubectl -n kube-system logs -f kibana-logging-8675b4ffd-75rzd
{"type":"log","#timestamp":"2019-03-21T09:11:30Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, ml, kibana, stateSessionStorageRedirect, timelion and status_page. This may take a few minutes"}
You need to wait around an hour; then the service will be available and you will be able to access the Kibana UI.
For more information, see this GitHub discussion.

Restart server on node failure with Consul

Newbie to Microservices here.
I have been looking into developing a microservice with Spring Actuator while using Consul for service discovery and failure recovery.
I have configured a cluster as explained in the Consul documentation.
Now what I'm trying to do is configure a Consul watch to trigger when any of my services is down and execute a shell script to restart the service. The following is my configuration file:
{
"bind_addr": "127.0.0.1",
"datacenter": "dc1",
"encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
"data_dir": "/data",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"enable_script_checks": true,
"ui":true,
"node_name": "SpringConsulClient",
"server": false,
"service": { "name": "Apache", "tags": ["HTTP"], "port": 8080,
"check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}},
"rejoin_after_leave": true,
"watches": [
{
"type": "service",
"handler": "/Consul-Script.sh"
}
]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It has an example of how you can specify it:
{
"type": "service",
"service": "redis",
"args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property but instead takes the path to the script as an argument. And one more thing:
It requires the "service" parameter
It seems that in your case you need to specify it as follows:
"watches": [
{
"type": "service",
"service": "Apache",
"args": ["/fully/qualified/path/to/Consul-Script.sh"]
}
]
