elasticsearch error kubernetes.labels.app

I have a custom Kubernetes cluster and I want to analyze all the logs in it. I found the documentation and set everything up according to it; my filebeat-kubernetes.yaml configuration file turned out as follows:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:my_ip}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: my_ip
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
I apply filebeat-kubernetes.yaml:
kubectl create -f filebeat-kubernetes.yaml
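(To confirm the DaemonSet pods actually come up, something like the following can be used, relying on the k8s-app: filebeat label from the manifest above:)
kubectl --namespace=kube-system get ds/filebeat
kubectl --namespace=kube-system get pods -l k8s-app=filebeat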
I get indices in Elasticsearch:
yellow open filebeat-6.4.2-2018.10.09 9A42qYPRSem4Z6ZBZQ1P7A 5 1 1129 0 457.3kb 457.3kb
yellow open filebeat-6.4.2-2018.10.11 6-8oKQ_RQBCx9D71kHhSiQ 5 1 32 0 56.4kb 56.4kb
yellow open filebeat-6.4.2-2018.10.10 Wc5xG55KRMWJXqJjfhBbUA 5 1 36826 0 29.8mb 29.8mb
but I see errors like this in the Elasticsearch logs:
[DEBUG][o.e.a.b.TransportShardBulkAction] [filebeat-6.4.2-2018.10.11][3] failed to execute bulk item (index) BulkShardRequest [[filebeat-6.4.2-2018.10.11][3]] containing [8] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [kubernetes.labels.app]
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:481) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
...
Kubernetes and Elasticsearch versions:
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
curl -XGET localhost:9200
{
"name" : "el3",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "hmmQcpMdSYCM8P3i9gOENw",
"version" : {
"number" : "6.4.2",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "04711c2",
"build_date" : "2018-09-26T13:34:09.098244Z",
"build_snapshot" : false,
"lucene_version" : "7.4.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
How can I fix the "failed to parse [kubernetes.labels.app]" error? Or how can I remove that label from the Filebeat config?
Update
I added a Filebeat index template in Elasticsearch; my file file-index-template.json is:
{
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "false",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "message": {
          "type": "text",
          "index": "true"
        },
        "offset": {
          "type": "long",
          "doc_values": "true"
        },
        "geoip": {
          "type": "object",
          "dynamic": true,
          "properties": {
            "location": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}
I added the template to Elasticsearch:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d @filebeat.json
check template
curl localhost:9200/_template/filebeat
{"filebeat":{"order":0,"index_patterns":["filebeat-*"],"settings":{"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"template1":{"mapping":{"doc_values":true,"ignore_above":1024,"index":"false","type":"{dynamic_type}"},"match":"*"}}],"properties":{"#timestamp":{"type":"date"},"message":{"type":"text","index":"true"},"offset":{"type":"long","doc_values":"true"},"geoip":{"type":"object","dynamic":true,"properties":{"location":{"type":"geo_point"}}}}}},"aliases":{}}}
check index
curl localhost:9200/_cat/indices
yellow open filebeat-6.4.2-2018.10.17 c9EmKOQ9T7W_pl9tDRDycQ 5 1 13719988 0 13.8gb 13.8gb
yellow open filebeat-6.4.2-2018.10.14 daA_KAT_TYeL5Fn3SrT2Pw 5 1 56400 0 10.5mb 10.5mb
yellow open filebeat-6.4.2-2018.10.16 70uY3kooTjWRNaFCky24jQ 5 1 277731 0 69.3mb 69.3mb
green open .kibana DgMyQx7QSK659uBo1CccJQ 1 0 3 0 34.3kb 34.3kb
yellow open filebeat-6.4.2-2018.10.13 LsC4soOYSEqY3vwv-HOcjg 5 1 135921 0 19.1mb 19.1mb
yellow open filebeat-6.4.2-2018.10.15 hKNvyDl9SFSgw3nEU3faKg 5 1 72960 0 18.7mb 18.7mb
but I still see this in the Elasticsearch logs:
[DEBUG][o.e.a.b.TransportShardBulkAction] [filebeat-6.4.2-2018.10.17][4] failed to execute bulk item (index) BulkShardRequest [[filebeat-6.4.2-2018.10.17][4]] containing [13] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [kubernetes.labels.app]
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:481) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:95) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:263) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:725) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:702) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:682) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.bulk.TransportShardBulkAction.lambda$executeIndexRequestOnPrimary$2(TransportShardBulkAction.java:560) ~[elasticsearch-6.4.2.jar:6.4.2]
...
Update 2
curl localhost:9200/_cat/indices
yellow open filebeat-6.4.2-2018.10.25 0RCTMniqQyucD530dz_eOQ 5 1 511 0 491.1kb 491.1kb
yellow open filebeat-6.4.2-2018.10.27 64b5ThH1TauvwMIo_ueTIg 5 1 487 0 479.4kb 479.4kb
yellow open filebeat-6.4.2-2018.10.28 Lf4UzVzESIGfGvx7VsRzFQ 5 1 283 0 357.4kb 357.4kb
yellow open filebeat-6.4.2-2018.10.24 fCUmzy2UQSy9lsNOMWmkEQ 5 1 2866 0 1.8mb 1.8mb
yellow open filebeat-6.4.2-2018.10.26 t3rPwBS4TYOhJWjtFRYk6g 5 1 323 0 428.9kb 428.9kb
yellow open filebeat-6.4.2-2018.10.22 -Rq7SbeqS_yNX3I4lwsGRg 5 1 92 0 173.2kb 173.2kb
yellow open filebeat-6.4.2-2018.10.29 yAje-vFhQqmavxSO7tlDGA 5 1 4810 0 8.5mb 8.5mb
Check Elasticsearch:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
"took" : 33,
"timed_out" : false,
"_shards" : {
"total" : 35,
"successful" : 35,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 67309,
"max_score" : 1.0,
"hits" : [
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "-m0iwGYBP2-nX77s4y_g",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:36.393Z",
"message" : "2018-10-22 07:32:36.393 [INFO][92] int_dataplane.go 747: Finished applying updates to dataplane. msecToApply=92.064514",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 630130,
"stream" : "stdout"
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "_m0iwGYBP2-nX77s4y_g",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:38.159Z",
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"offset" : 630467,
"stream" : "stdout",
"message" : "2018-10-22 07:32:38.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "n20iwGYBP2-nX77s5jGM",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:41.172Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 631205,
"stream" : "stdout",
"message" : "2018-10-22 07:32:41.172 [INFO][92] table.go 438: Loading current iptables state and checking it is correct. ipVersion=0x4 table=\"raw\"",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "WG0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:45.710Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632166,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.710 [INFO][92] ipsets.go 222: Asked to resync with the dataplane on next update. family=\"inet\"",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "Wm0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:45.710Z",
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632353,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.710 [INFO][92] ipsets.go 253: Resyncing ipsets with dataplane. family=\"inet\"",
"prospector" : {
"type" : "docker"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "XG0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:45.711Z",
"stream" : "stdout",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"message" : "2018-10-22 07:32:45.711 [INFO][92] ipsets.go 295: Finished resync family=\"inet\" numInconsistenciesFound=0 resyncDuration=876.908µs",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632522
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "QG0iwGYBP2-nX77s6TNr",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:45.711Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632726,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.711 [INFO][92] int_dataplane.go 747: Finished applying updates to dataplane. msecToApply=1.061403",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "1W0iwGYBP2-nX77s8zc2",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:32:58.158Z",
"message" : "2018-10-22 07:32:58.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 634199,
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"stream" : "stdout"
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "-G0iwGYBP2-nX77s8zc2",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:33:00.168Z",
"message" : "2018-10-22 07:33:00.167 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 634391,
"stream" : "stdout",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "yW0iwGYBP2-nX77s_j2e",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2018-10-22T07:33:18.158Z",
"offset" : 636780,
"stream" : "stdout",
"message" : "2018-10-22 07:33:18.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log"
}
}
]
}
}

I suppose that you haven't set up an index template for the Filebeat fields, which tells Elasticsearch how those fields should be parsed for further processing. You can find useful information in this article about implementing the Filebeat index template on your cluster.
In addition, a similar issue about parsing kubernetes.labels with the Logstash event collector was reported on GitHub.
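As a rough sketch of that approach (assuming Filebeat 6.4.2 and Elasticsearch on localhost:9200 as in the question; adjust hosts and index names to your setup), you could inspect the mapping of the conflicting field, load the template that ships with Filebeat, and then drop the indices already created with the conflicting kubernetes.labels.app mapping so that new writes pick up the template:
# check what mapping the conflicting field currently has
curl 'localhost:9200/filebeat-*/_mapping/field/kubernetes.labels.app?pretty'
# load the index template bundled with Filebeat
filebeat setup --template -E 'output.elasticsearch.hosts=["localhost:9200"]'
# delete the indices created with the old mapping (this loses the data already in them)
curl -XDELETE 'localhost:9200/filebeat-6.4.2-2018.10.*'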

Related

Csv file load through logstash to elasticsearch not working

I am trying to load a CSV file from a Linux system through Logstash (Docker based) with the below conf file.
./logstash/pipeline/logstash_csv_report.conf
input {
  file {
    path => "/home/user/elk/logstash/report-file.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["start_time", "date", "requester", "full-name", "id", "config", "status"]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "http://elasticsearch:9200"
    index => "project-info"
  }
  stdout {}
}
I do not know why my CSV file is not getting uploaded into Elasticsearch. The last few lines of my Logstash Docker logs are as follows; I don't see any errors in Logstash.
logstash | [2021-01-18T04:12:36,076][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.1}
logstash | [2021-01-18T04:12:36,213][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash | [2021-01-18T04:12:36,280][INFO ][filewatch.observingtail ][main][497c9eb0da97efa19ad20783321e7bf30eb302262f92ac565b074e3ad91ea72d] START, creating Discoverer, Watch with file and sincedb collections
logstash | [2021-01-18T04:12:36,282][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash | [2021-01-18T04:12:36,474][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
My docker-compose file is as follows.
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - './elasticsearch:/usr/share/elasticsearch/data'
    networks:
      - elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.1
    container_name: kibana
    restart: unless-stopped
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - '5601:5601'
    volumes:
      - './kibana:/usr/share/kibana/data'
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.1
    container_name: logstash
    restart: unless-stopped
    environment:
      - 'HEAP_SIZE:1g'
      - 'LS_JAVA_OPTS=-Xms1g -Xmx1g'
      - 'ELASTICSEARCH_HOST:elasticsearch'
      - 'ELASTICSEARCH_PORT:9200'
    command: sh -c "logstash -f /usr/share/logstash/pipeline/logstash_csv_report.conf"
    ports:
      - '5044:5044'
      - '5000:5000/tcp'
      - '5000:5000/udp'
      - '9600:9600'
    volumes:
      - './logstash/pipeline:/usr/share/logstash/pipeline'
    depends_on:
      - elasticsearch
    networks:
      - elk
networks:
  elk:
    driver: bridge
In my ./logstash/pipeline folder I have only the logstash_csv_report.conf file.
The same CSV file can be uploaded successfully using the Kibana GUI import option.
Please help me resolve this problem so that the upload works through Logstash.
Curl output.
# curl -XGET http://51.52.53.54:9600/_node/stats/?pretty
{
"host" : "3c08f83dfc9b",
"version" : "7.10.1",
"http_address" : "0.0.0.0:9600",
"id" : "5f301139-33bf-4e4d-99a0-7b4d7b464675",
"name" : "3c08f83dfc9b",
"ephemeral_id" : "95a0101e-e54d-4f72-aa7a-dd18ccb2814e",
"status" : "green",
"snapshot" : false,
"pipeline" : {
"workers" : 64,
"batch_size" : 125,
"batch_delay" : 50
},
"jvm" : {
"threads" : {
"count" : 157,
"peak_count" : 158
},
"mem" : {
"heap_used_percent" : 16,
"heap_committed_in_bytes" : 4151836672,
"heap_max_in_bytes" : 4151836672,
"heap_used_in_bytes" : 689455928,
"non_heap_used_in_bytes" : 190752760,
"non_heap_committed_in_bytes" : 218345472,
"pools" : {
"survivor" : {
"peak_max_in_bytes" : 143130624,
"max_in_bytes" : 143130624,
"committed_in_bytes" : 143130624,
"peak_used_in_bytes" : 65310304,
"used_in_bytes" : 39570400
},
"old" : {
"peak_max_in_bytes" : 2863333376,
"max_in_bytes" : 2863333376,
"committed_in_bytes" : 2863333376,
"peak_used_in_bytes" : 115589344,
"used_in_bytes" : 115589344
},
"young" : {
"peak_max_in_bytes" : 1145372672,
"max_in_bytes" : 1145372672,
"committed_in_bytes" : 1145372672,
"peak_used_in_bytes" : 1145372672,
"used_in_bytes" : 534296184
}
}
},
"gc" : {
"collectors" : {
"old" : {
"collection_count" : 3,
"collection_time_in_millis" : 1492
},
"young" : {
"collection_count" : 7,
"collection_time_in_millis" : 303
}
}
},
"uptime_in_millis" : 4896504
},
"process" : {
"open_file_descriptors" : 91,
"peak_open_file_descriptors" : 92,
"max_file_descriptors" : 1048576,
"mem" : {
"total_virtual_in_bytes" : 21971415040
},
"cpu" : {
"total_in_millis" : 478180,
"percent" : 0,
"load_average" : {
"1m" : 1.35,
"5m" : 0.7,
"15m" : 0.53
}
}
},
"events" : {
"in" : 0,
"filtered" : 0,
"out" : 0,
"duration_in_millis" : 0,
"queue_push_duration_in_millis" : 0
},
"pipelines" : {
"main" : {
"events" : {
"out" : 0,
"duration_in_millis" : 0,
"queue_push_duration_in_millis" : 0,
"filtered" : 0,
"in" : 0
},
"plugins" : {
"inputs" : [ {
"id" : "497c9eb0da97efa19ad20783321e7bf30eb302262f92ac565b074e3ad91ea72d",
"events" : {
"out" : 0,
"queue_push_duration_in_millis" : 0
},
"name" : "file"
} ],
"codecs" : [ {
"id" : "rubydebug_a060ea28-52ce-4186-a474-272841e0429e",
"decode" : {
"out" : 0,
"writes_in" : 0,
"duration_in_millis" : 0
},
"encode" : {
"writes_in" : 0,
"duration_in_millis" : 2
},
"name" : "rubydebug"
}, {
"id" : "plain_d2037602-bfe9-4eaf-8cc8-0a84665fa186",
"decode" : {
"out" : 0,
"writes_in" : 0,
"duration_in_millis" : 0
},
"encode" : {
"writes_in" : 0,
"duration_in_millis" : 0
},
"name" : "plain"
}, {
"id" : "plain_1c01f964-82e5-45a1-b9f9-a400bc2ac486",
"decode" : {
"out" : 0,
"writes_in" : 0,
"duration_in_millis" : 0
},
"encode" : {
"writes_in" : 0,
"duration_in_millis" : 0
},
"name" : "plain"
} ],
"filters" : [ {
"id" : "3eee98d7d4b500333a2c45a729786d4d2aefb7cee7ae79b066a50a1630312b25",
"events" : {
"out" : 0,
"duration_in_millis" : 39,
"in" : 0
},
"name" : "csv"
} ],
"outputs" : [ {
"id" : "8959d62efd3616a9763067781ec2ff67a7d8150d6773a48fc54f71478a9ef7ab",
"events" : {
"out" : 0,
"duration_in_millis" : 0,
"in" : 0
},
"name" : "elasticsearch"
}, {
"id" : "b457147a2293c2dee97b6ee9a5205de24159b520e86eb89be71fde7ba394a0d2",
"events" : {
"out" : 0,
"duration_in_millis" : 22,
"in" : 0
},
"name" : "stdout"
} ]
},
"reloads" : {
"last_success_timestamp" : null,
"last_error" : null,
"successes" : 0,
"failures" : 0,
"last_failure_timestamp" : null
},
"queue" : {
"type" : "memory",
"events_count" : 0,
"queue_size_in_bytes" : 0,
"max_queue_size_in_bytes" : 0
},
"hash" : "3479b7408213a7b52f36d8ad3dbd5a3174768a004119776e0244ed1971814f72",
"ephemeral_id" : "ffc4d5d6-6f90-4c24-8b2a-e932d027a5f2"
},
".monitoring-logstash" : {
"events" : null,
"plugins" : {
"inputs" : [ ],
"codecs" : [ ],
"filters" : [ ],
"outputs" : [ ]
},
"reloads" : {
"last_success_timestamp" : null,
"last_error" : null,
"successes" : 0,
"failures" : 0,
"last_failure_timestamp" : null
},
"queue" : null
}
},
"reloads" : {
"successes" : 0,
"failures" : 0
},
"os" : {
"cgroup" : {
"cpuacct" : {
"usage_nanos" : 478146261497,
"control_group" : "/"
},
"cpu" : {
"cfs_quota_micros" : -1,
"stat" : {
"number_of_times_throttled" : 0,
"time_throttled_nanos" : 0,
"number_of_elapsed_periods" : 0
},
"control_group" : "/",
"cfs_period_micros" : 100000
}
}
},
"queue" : {
"events_count" : 0
}
You need to make sure that /home/user/elk/logstash/report-file.csv can be read by Logstash. I don't see that file being mapped to a volume accessible to Logstash.
In your docker compose configuration you need to add another volume like this:
logstash:
  ...
  volumes:
    - './logstash/pipeline:/usr/share/logstash/pipeline'
    - '/home/user/elk/logstash:/home/user/elk/logstash'
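After adding the volume, recreate the container (for example with docker-compose up -d logstash) and verify that the file is actually visible from inside it, e.g.:
docker exec -it logstash ls -l /home/user/elk/logstash/report-file.csv
docker exec -it logstash head -3 /home/user/elk/logstash/report-file.csv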

Elasticsearch ILM not rolling

I have configured my ILM policy to roll over when the index size reaches 20GB or after 30 days in the hot node,
but my index passed 20GB and still hasn't moved on to the cold node,
and when I run: GET _cat/indices?v I get:
green open packetbeat-7.9.2-2020.10.22-000001 RRAnRZrrRZiihscJ3bymig 10 1 63833049 0 44.1gb 22gb
Could you tell me how to solve that, please?
Note that in my Packetbeat configuration file, I have only changed the number of shards:
setup.template.settings:
  index.number_of_shards: 10
  index.number_of_replicas: 1
when I run the command GET packetbeat-7.9.2-2020.10.22-000001/_settings I get this output:
{
"packetbeat-7.9.2-2020.10.22-000001" : {
"settings" : {
"index" : {
"lifecycle" : {
"name" : "packetbeat",
"rollover_alias" : "packetbeat-7.9.2"
},
"routing" : {
"allocation" : {
"include" : {
"_tier_preference" : "data_content"
}
}
},
"mapping" : {
"total_fields" : {
"limit" : "10000"
}
},
"refresh_interval" : "5s",
"number_of_shards" : "10",
"provided_name" : "<packetbeat-7.9.2-{now/d}-000001>",
"max_docvalue_fields_search" : "200",
"query" : {
"default_field" : [
"message",
"tags",
"agent.ephemeral_id",
"agent.id",
"agent.name",
"agent.type",
"agent.version",
"as.organization.name",
"client.address",
"client.as.organization.name",
and the output of the command GET /packetbeat-7.9.2-2020.10.22-000001/_ilm/explain is:
{
"indices" : {
"packetbeat-7.9.2-2020.10.22-000001" : {
"index" : "packetbeat-7.9.2-2020.10.22-000001",
"managed" : true,
"policy" : "packetbeat",
"lifecycle_date_millis" : 1603359683835,
"age" : "15.04d",
"phase" : "hot",
"phase_time_millis" : 1603359684332,
"action" : "rollover",
"action_time_millis" : 1603360173138,
"step" : "check-rollover-ready",
"step_time_millis" : 1603360173138,
"phase_execution" : {
"policy" : "packetbeat",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "50gb",
"max_age" : "30d"
}
}
},
"version" : 1,
"modified_date_in_millis" : 1603359683339
}
}
}
}
It's weird that it says 50GB!
Thanks for your help.
So I found the solution to this problem.
After updating the policy, I removed the policy from the indices using it, and then added it back to those indices.
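For reference, a sketch of that cycle with the ILM APIs (the index name and rollover alias are taken from the question; the policy body below is only an example of a 20GB / 30 day hot rollover, not the full packetbeat policy):
PUT _ilm/policy/packetbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "20gb",
            "max_age": "30d"
          }
        }
      }
    }
  }
}
Then detach the policy from the index that is still running the cached phase definition, and attach it again:
POST packetbeat-7.9.2-2020.10.22-000001/_ilm/remove
PUT packetbeat-7.9.2-2020.10.22-000001/_settings
{
  "index.lifecycle.name": "packetbeat",
  "index.lifecycle.rollover_alias": "packetbeat-7.9.2"
}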

elasticsearch - moving from multi servers to one server

I have a cluster of 5 servers for elasticsearch, all with the same version of elasticsearch.
I need to move all data from servers 2, 3, 4, 5 to server 1.
How can I do it?
How can I know which server has data at all?
After changing _cluster/settings with:
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._host" : "server1"
  }
}
I get the following for curl -GET http://localhost:9200/_cat/allocation?v:
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
6 54.5gb 170.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-5
6 50.4gb 167.4gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-3
6 22.6gb 139.8gb 2tb 2.1tb 6 *.*.*.* *.*.*.* node-2
6 49.8gb 166.6gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-4
6 54.8gb 172.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-1
and the following for GET _cluster/settings?include_defaults:
#! Deprecation: [node.max_local_storage_nodes] setting was deprecated in Elasticsearch and will be removed in a future release!
{
"persistent" : {
"cluster" : {
"routing" : {
"allocation" : {
"require" : {
"_host" : "server1"
}
}
}
}
},
"transient" : { },
"defaults" : {
"cluster" : {
"max_voting_config_exclusions" : "10",
"auto_shrink_voting_configuration" : "true",
"election" : {
"duration" : "500ms",
"initial_timeout" : "100ms",
"max_timeout" : "10s",
"back_off_time" : "100ms",
"strategy" : "supports_voting_only"
},
"no_master_block" : "write",
"persistent_tasks" : {
"allocation" : {
"enable" : "all",
"recheck_interval" : "30s"
}
},
"blocks" : {
"read_only_allow_delete" : "false",
"read_only" : "false"
},
"remote" : {
"node" : {
"attr" : ""
},
"initial_connect_timeout" : "30s",
"connect" : "true",
"connections_per_cluster" : "3"
},
"follower_lag" : {
"timeout" : "90000ms"
},
"routing" : {
"use_adaptive_replica_selection" : "true",
"rebalance" : {
"enable" : "all"
},
"allocation" : {
"node_concurrent_incoming_recoveries" : "2",
"node_initial_primaries_recoveries" : "4",
"same_shard" : {
"host" : "false"
},
"total_shards_per_node" : "-1",
"shard_state" : {
"reroute" : {
"priority" : "NORMAL"
}
},
"type" : "balanced",
"disk" : {
"threshold_enabled" : "true",
"watermark" : {
"low" : "85%",
"flood_stage" : "95%",
"high" : "90%"
},
"include_relocations" : "true",
"reroute_interval" : "60s"
},
"awareness" : {
"attributes" : [ ]
},
"balance" : {
"index" : "0.55",
"threshold" : "1.0",
"shard" : "0.45"
},
"enable" : "all",
"node_concurrent_outgoing_recoveries" : "2",
"allow_rebalance" : "indices_all_active",
"cluster_concurrent_rebalance" : "2",
"node_concurrent_recoveries" : "2"
}
},
...
"nodes" : {
"reconnect_interval" : "10s"
},
"service" : {
"slow_master_task_logging_threshold" : "10s",
"slow_task_logging_threshold" : "30s"
},
...
"name" : "cluster01",
...
"max_shards_per_node" : "1000",
"initial_master_nodes" : [ ],
"info" : {
"update" : {
"interval" : "30s",
"timeout" : "15s"
}
}
},
...
You can use shard allocation filtering to move all your data to server 1.
Simply run this:
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.require._name" : "node-1",
    "cluster.routing.allocation.exclude._name" : "node-2,node-3,node-4,node-5"
  }
}
Instead of _name you can also use _ip or _host depending on what is more practical for you.
After running this command, all primary shards will migrate to server1 (the replicas will be unassigned). You just need to make sure that server1 has enough storage space to store all the primary shards.
If you want to get rid of the unassigned replicas (and get back to green state), simply run this:
PUT _all/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
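To watch the relocation and confirm that everything has moved to server 1, you can poll the same cat APIs used in the question:
curl 'localhost:9200/_cat/shards?v'
curl 'localhost:9200/_cat/allocation?v'
curl 'localhost:9200/_cluster/health?pretty'
Once node-1 holds all the shards and the cluster is green again, the remaining nodes no longer hold any data.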

Elasticsearch Cluster On EC2 Aws Fails to join cluster

I installed Oracle JDK 8 and Elasticsearch on an EC2 instance and created an AMI out of it. On the next copy of the EC2 machine I just changed the node name in elasticsearch.yml.
Both nodes run fine if started individually (note that the node id appears to be the same). But if run simultaneously, the one started later fails with the following in the logs:
[2018-08-07T16:35:06,260][INFO ][o.e.d.z.ZenDiscovery ] [node-1]
failed to send join request to master
[{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}], reason
[RemoteTransportException[[node-2][10.127.114.212:9300][internal:discovery/zen/join]];
nested: IllegalArgumentException[can't add node
{node-1}{uQHBhDuxTeWOgmZHsuaZmA}{Ba1r1GoMSZOMeIWVKtPD2Q}{10.127.114.194}{10.127.114.194:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66716696576, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}, found existing node
{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, xpack.installed=true,
ml.max_open_jobs=20, ml.enabled=true} with the same id but is a
different node instance];
My elasticsearch.yml:
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.ElasticSearch: elk-tag
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
Output from _nodes endpoint:
//----Output when node-1 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-1",
"transport_address" : "10.127.114.194:9300",
"host" : "10.127.114.194",
"ip" : "10.127.114.194",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66716696576",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 3110,
"mlockall" : true
}
}
}
}
//----Output when node-2 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-2",
"transport_address" : "10.127.114.212:9300",
"host" : "10.127.114.212",
"ip" : "10.127.114.212",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66718932992",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 4869,
"mlockall" : true
}
}
}
}
Solved this by deleting /var/lib/elasticsearch/nodes/ (rm -rf /var/lib/elasticsearch/nodes/) on every instance and restarting Elasticsearch.
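For reference, the full sequence on each cloned instance would be roughly the following (the data path matches path.data from the elasticsearch.yml above); wiping the nodes folder makes Elasticsearch generate a fresh node id on the next start instead of reusing the one baked into the AMI:
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes/
sudo systemctl start elasticsearch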

Elastic Search 2.3.4 Stops allocating shards with no obvious reason

I am attempting to upgrade our Elastic Search cluster from 1.6 to 2.3.4. The upgrade seems to work, and I can see shard allocation starting to happen within Kopf - but at some point the shard allocation appears to stop with many shards left unallocated, and no errors being reported in the logs. Typically I'm left with 1200 / 3800 shards unallocated.
We have a typical 3 node cluster and I am trialing this standalone on my local machine with all 3 nodes running on my local machine.
I have seen similar symptoms reported - see https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html. The solution there seemed to be to manually allocate the shards, which I've tried (and it works), but I'm at a loss to explain the behaviour of Elasticsearch here. I'd prefer not to go down this route, as I want my cluster to spin up automatically without intervention.
There is also https://github.com/elastic/elasticsearch/pull/14494 which seems to be resolved with the latest ES version, so shouldn't be a problem.
There are no errors in log files - I have upped the root level logging to 'DEBUG' in order to see what I can. What I can see is lines like the below for each unallocated shard (this from the master node logs):
[2016-07-26 09:18:04,859][DEBUG][gateway ] [germany] [index][4] found 0 allocations of [index][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-26T08:05:04.447Z]], highest version: [-1]
[2016-07-26 09:18:04,859][DEBUG][gateway ] [germany] [index][4]: not allocating, number_of_allocated_shards_found [0]
Config file (with comments removed):
cluster.name: elasticsearch-jm-2.3.4
node.name: germany
script.inline: true
script.indexed: true
If I query the cluster health after reallocation has stopped - I get the response below:
http://localhost:9200/_cluster/health?pretty
cluster_name : elasticsearch-jm-2.3.4
status : red
timed_out : False
number_of_nodes : 3
number_of_data_nodes : 3
active_primary_shards : 1289
active_shards : 2578
relocating_shards : 0
initializing_shards : 0
unassigned_shards : 1264
delayed_unassigned_shards : 0
number_of_pending_tasks : 0
number_of_in_flight_fetch : 0
task_max_waiting_in_queue_millis : 0
active_shards_percent_as_number : 67.10046850598647
Further querying for shards - filtered to one index with unallocated shards. As can be seen, shards 0 and 4 are unallocated whereas shards 1, 2 and 3 have been allocated:
http://localhost:9200/_cat/shards
cs-payment-warn-2016.07.20 3 p STARTED 106 92.4kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 3 r STARTED 106 92.4kb 127.0.0.1 switzerland
cs-payment-warn-2016.07.20 4 p UNASSIGNED
cs-payment-warn-2016.07.20 4 r UNASSIGNED
cs-payment-warn-2016.07.20 2 r STARTED 120 74.5kb 127.0.0.1 cyprus
cs-payment-warn-2016.07.20 2 p STARTED 120 74.5kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 1 r STARTED 120 73.8kb 127.0.0.1 cyprus
cs-payment-warn-2016.07.20 1 p STARTED 120 73.8kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 0 p UNASSIGNED
cs-payment-warn-2016.07.20 0 r UNASSIGNED
Manually rerouting an unassigned shard appears to work - (stripped back results set)
http://localhost:9200/_cluster/reroute
POST:
{
  "dry_run": true,
  "commands": [
    {
      "allocate": {
        "index": "cs-payment-warn-2016.07.20",
        "shard": 4,
        "node": "switzerland",
        "allow_primary": true
      }
    }
  ]
}
Response:
{
"acknowledged" : true,
"state" : {
"version" : 722,
"state_uuid" : "Vw2vPoCMQk2ZosjzviD4TQ",
"master_node" : "yhL7XXy-SKu_WAM-C33dzA",
"blocks" : {},
"nodes" : {},
"routing_table" : {
"indices" : {
"cs-payment-warn-2016.07.20" : {
"shards" : {
"3" : [{
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 3,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "x_Iq88hmTqiasrjW09hVuw"
}
}, {
"state" : "STARTED",
"primary" : false,
"node" : "1a8dgBscTUS3c7Pv4mN9CQ",
"relocating_node" : null,
"shard" : 3,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "DF-EUEy_SpeUElnZI6cgsQ"
}
}
],
"4" : [{
"state" : "INITIALIZING",
"primary" : true,
"node" : "1a8dgBscTUS3c7Pv4mN9CQ",
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"allocation_id" : {
"id" : "1tw7C7YPQsWwm_O-8mYHRg"
},
"unassigned_info" : {
"reason" : "INDEX_CREATED",
"at" : "2016-07-26T14:20:15.395Z",
"details" : "force allocation from previous reason CLUSTER_RECOVERED, null"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
],
"2" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "eQ-_vWNbRp27So0iGSitmA"
}
}, {
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "O1PU1_NVS8-uB2yBrG76MA"
}
}
],
"1" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZmxtOvorRVmndR15OJMkMA"
}
}, {
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZNgzePThQxS-iqhRSXzZCw"
}
}
],
"0" : [{
"state" : "UNASSIGNED",
"primary" : true,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
]
}
}
},
"routing_nodes" : {
"unassigned" : [{
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : true,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
]
},
"nodes" : {
"rlRQ2u0XQRqxWld-wSrOug" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "eQ-_vWNbRp27So0iGSitmA"
}
}, {
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZmxtOvorRVmndR15OJMkMA"
}
}
]
}
}
}
}
