Elasticsearch cluster on AWS EC2: node fails to join cluster - elasticsearch

I installed Oracle JDK 8 and Elasticsearch on an EC2 instance and created an AMI out of it. On the next copy of the EC2 machine I just changed the node name in elasticsearch.yml.
Each node runs fine if started individually. [NOTE: the node ID appears to be the same on both.] But if they run simultaneously, the one started later fails with the following in its logs:
[2018-08-07T16:35:06,260][INFO ][o.e.d.z.ZenDiscovery ] [node-1]
failed to send join request to master
[{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}], reason
[RemoteTransportException[[node-2][10.127.114.212:9300][internal:discovery/zen/join]];
nested: IllegalArgumentException[can't add node
{node-1}{uQHBhDuxTeWOgmZHsuaZmA}{Ba1r1GoMSZOMeIWVKtPD2Q}{10.127.114.194}{10.127.114.194:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66716696576, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}, found existing node
{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, xpack.installed=true,
ml.max_open_jobs=20, ml.enabled=true} with the same id but is a
different node instance];
My elasticsearch.yml:
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.ElasticSearch: elk-tag
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
Output from _nodes endpoint:
//----Output when node-1 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-1",
"transport_address" : "10.127.114.194:9300",
"host" : "10.127.114.194",
"ip" : "10.127.114.194",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66716696576",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 3110,
"mlockall" : true
}
}
}
}
//----Output when node-2 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-2",
"transport_address" : "10.127.114.212:9300",
"host" : "10.127.114.212",
"ip" : "10.127.114.212",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66718932992",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 4869,
"mlockall" : true
}
}
}
}

Solved this by running rm -rf /var/lib/elasticsearch/nodes/ on every instance and restarting Elasticsearch. The AMI was created after Elasticsearch had already started once, so every copy inherited the same persistent node ID from the copied data directory.
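The fix can be sketched as a short command sequence (a sketch, assuming the default RPM layout from the question and systemd service management; only run this on freshly cloned instances whose data you can afford to discard):

```shell
# Stop Elasticsearch so the data directory is not in use
sudo systemctl stop elasticsearch

# Remove the node state copied from the AMI; a fresh node ID
# is generated on the next start
sudo rm -rf /var/lib/elasticsearch/nodes/

sudo systemctl start elasticsearch

# Afterwards, verify the nodes report distinct IDs
curl -s 'localhost:9200/_cat/nodes?v&h=id,name,ip'
```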

Related

Cross-cluster Elasticsearch on Kubernetes

I have 2 Elasticsearch clusters on 2 different Kubernetes VMs. I tried to connect them with cross-cluster search, but it's not working. I've added details below; can someone tell me what I did wrong or missed?
I tried to connect from one cluster to the other as below:
GET _cluster/settings
{
"persistent" : {
"cluster" : {
"remote" : {
"cluster_three" : {
"mode" : "proxy",
"proxy_address" : "122.22.111.222:30005"
},
"cluster_two" : {
"mode" : "sniff",
"skip_unavailable" : "false",
"transport" : {
"compress" : "true"
},
"seeds" : [
"122.22.222.182:30005"
]
},
"cluster_one" : {
"seeds" : [
"127.0.0.1:9200"
],
"transport" : {
"ping_schedule" : "30s"
}
}
}
}
},
"transient" : { }
}
I tried to search on cluster_two and I get the following error:
{"statusCode":502,"error":"Bad Gateway","message":"Client request timeout"}
but when I do a curl from one cluster to cluster_two I get this:
curl 122.22.222.182:30005
{
"name" : "elasticsearch-client-7dcc49ddsdsd4-ljwasdpl",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "bOkaIrcFTgetsadaaY114N4a1EQ",
"version" : {
"number" : "7.10.2",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "747e1cc71def077253878a59143c1f785asdasafa92b9",
"build_date" : "2021-01-13T00:42:12.435326Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
This is my svc configured on Kubernetes for cluster_two:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-client NodePort 10.111.11.28 <none> 9200:30005/TCP 27m
elasticsearch-discovery ClusterIP 10.111.11.11 <none> 9300/TCP 27m
Elasticsearch node-to-node (and cross-cluster) communication works on the transport port 9300, not 9200; when you run curl, it goes as a client (HTTP) request over port 30005, so it only proves that HTTP is reachable.
Please check that port 9300 is open for cross-cluster connections. As your elasticsearch-discovery service is running as ClusterIP, you may have to change its type to expose it outside of K8s, using NodePort or LoadBalancer as per your requirement.
For example:
# From cluster 1, we’ll define how the cluster-2 can be accessed
PUT /_cluster/settings
{
"persistent" : {
"cluster" : {
"remote" : {
"us-cluster" : {
"seeds" : [
"127.0.0.1:9300"
]
}
}
}
}
}
You can also look into https://www.elastic.co/blog/cross-datacenter-replication-with-elasticsearch-cross-cluster-replication
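Concretely, exposing the transport port might look like the sketch below. This is a hypothetical NodePort Service: the selector label and the 30006 node port are assumptions and must be adapted to your deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
spec:
  type: NodePort              # was ClusterIP; NodePort exposes 9300 outside the cluster
  selector:
    app: elasticsearch        # assumed label; must match your Elasticsearch pods
  ports:
    - name: transport
      port: 9300
      targetPort: 9300
      nodePort: 30006         # assumed; the remote cluster's seeds/proxy_address
                              # must then point at <node-ip>:30006, not the 9200 NodePort
```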

Elasticsearch - moving from multiple servers to one server

I have an Elasticsearch cluster of 5 servers, all running the same version of Elasticsearch.
I need to move all data from servers 2, 3, 4, 5 to server 1.
How can I do it?
How can I know which server has data at all?
After changing _cluster/settings with:
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.require._host" : "server1"
}
}
For curl -XGET http://localhost:9200/_cat/allocation?v I get the following:
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
6 54.5gb 170.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-5
6 50.4gb 167.4gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-3
6 22.6gb 139.8gb 2tb 2.1tb 6 *.*.*.* *.*.*.* node-2
6 49.8gb 166.6gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-4
6 54.8gb 172.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-1
and for GET _cluster/settings?include_defaults I get the following:
#! Deprecation: [node.max_local_storage_nodes] setting was deprecated in Elasticsearch and will be removed in a future release!
{
"persistent" : {
"cluster" : {
"routing" : {
"allocation" : {
"require" : {
"_host" : "server1"
}
}
}
}
},
"transient" : { },
"defaults" : {
"cluster" : {
"max_voting_config_exclusions" : "10",
"auto_shrink_voting_configuration" : "true",
"election" : {
"duration" : "500ms",
"initial_timeout" : "100ms",
"max_timeout" : "10s",
"back_off_time" : "100ms",
"strategy" : "supports_voting_only"
},
"no_master_block" : "write",
"persistent_tasks" : {
"allocation" : {
"enable" : "all",
"recheck_interval" : "30s"
}
},
"blocks" : {
"read_only_allow_delete" : "false",
"read_only" : "false"
},
"remote" : {
"node" : {
"attr" : ""
},
"initial_connect_timeout" : "30s",
"connect" : "true",
"connections_per_cluster" : "3"
},
"follower_lag" : {
"timeout" : "90000ms"
},
"routing" : {
"use_adaptive_replica_selection" : "true",
"rebalance" : {
"enable" : "all"
},
"allocation" : {
"node_concurrent_incoming_recoveries" : "2",
"node_initial_primaries_recoveries" : "4",
"same_shard" : {
"host" : "false"
},
"total_shards_per_node" : "-1",
"shard_state" : {
"reroute" : {
"priority" : "NORMAL"
}
},
"type" : "balanced",
"disk" : {
"threshold_enabled" : "true",
"watermark" : {
"low" : "85%",
"flood_stage" : "95%",
"high" : "90%"
},
"include_relocations" : "true",
"reroute_interval" : "60s"
},
"awareness" : {
"attributes" : [ ]
},
"balance" : {
"index" : "0.55",
"threshold" : "1.0",
"shard" : "0.45"
},
"enable" : "all",
"node_concurrent_outgoing_recoveries" : "2",
"allow_rebalance" : "indices_all_active",
"cluster_concurrent_rebalance" : "2",
"node_concurrent_recoveries" : "2"
}
},
...
"nodes" : {
"reconnect_interval" : "10s"
},
"service" : {
"slow_master_task_logging_threshold" : "10s",
"slow_task_logging_threshold" : "30s"
},
...
"name" : "cluster01",
...
"max_shards_per_node" : "1000",
"initial_master_nodes" : [ ],
"info" : {
"update" : {
"interval" : "30s",
"timeout" : "15s"
}
}
},
...
You can use shard allocation filtering to move all your data to server 1.
Simply run this:
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.require._name" : "node-1",
"cluster.routing.allocation.exclude._name" : "node-2,node-3,node-4,node-5"
}
}
Instead of _name you can also use _ip or _host depending on what is more practical for you.
After running this command, all primary shards will migrate to server1 (the replicas will be unassigned). You just need to make sure that server1 has enough storage space to store all the primary shards.
If you want to get rid of the unassigned replicas (and get back to green state), simply run this:
PUT _all/_settings
{
"index" : {
"number_of_replicas" : 0
}
}
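To watch the migration and confirm the data ended up on server 1, the same _cat APIs shown in the question can be reused (a sketch, assuming Elasticsearch listens on localhost:9200):

```shell
# Shards currently moving between nodes; empty output means no relocations in flight
curl -s 'http://localhost:9200/_cat/shards?v' | grep RELOCATING

# Per-node allocation; when done, only node-1 should report a non-zero shard count
curl -s 'http://localhost:9200/_cat/allocation?v'

# After setting number_of_replicas to 0, the cluster should go green again
curl -s 'http://localhost:9200/_cluster/health?pretty'
```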

Elasticsearch error: kubernetes.labels.app

I have a customized Kubernetes cluster and I want to analyze all the logs in it. I found the documentation and set everything up according to it; my filebeat-kubernetes.yaml configuration file turned out to be:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:my_ip}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: my_ip
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
Run filebeat-kubernetes.yaml:
kubectl create -f filebeat-kubernetes.yaml
I get indices in Elasticsearch:
yellow open filebeat-6.4.2-2018.10.09 9A42qYPRSem4Z6ZBZQ1P7A 5 1 1129 0 457.3kb 457.3kb
yellow open filebeat-6.4.2-2018.10.11 6-8oKQ_RQBCx9D71kHhSiQ 5 1 32 0 56.4kb 56.4kb
yellow open filebeat-6.4.2-2018.10.10 Wc5xG55KRMWJXqJjfhBbUA 5 1 36826 0 29.8mb 29.8mb
but I see errors like these in the Elasticsearch logs:
[DEBUG][o.e.a.b.TransportShardBulkAction] [filebeat-6.4.2-2018.10.11]
[3] failed to execute bulk item (index) BulkShardRequest [[filebeat-
6.4.2-2018.10.11][3]] containing [8] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [kubernetes.labels.app]
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:481) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
...
Kubernetes and Elasticsearch versions:
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
curl -XGET localhost:9200
{
"name" : "el3",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "hmmQcpMdSYCM8P3i9gOENw",
"version" : {
"number" : "6.4.2",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "04711c2",
"build_date" : "2018-09-26T13:34:09.098244Z",
"build_snapshot" : false,
"lucene_version" : "7.4.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
How do I fix the error failed to parse [kubernetes.labels.app]? Or how can I remove the label from the Filebeat config?
Update
I added a Filebeat index template in Elasticsearch; my file file-index-template.json:
{
"mappings": {
"_default_": {
"dynamic_templates": [
{
"template1": {
"mapping": {
"doc_values": true,
"ignore_above": 1024,
"index": "false",
"type": "{dynamic_type}"
},
"match": "*"
}
}
],
"properties": {
"@timestamp": {
"type": "date"
},
"message": {
"type": "text",
"index": "true"
},
"offset": {
"type": "long",
"doc_values": "true"
},
"geoip": {
"type": "object",
"dynamic": true,
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
},
"settings": {
"index.refresh_interval": "5s"
},
"template": "filebeat-*"
}
Added the template in Elasticsearch:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat.json
Check the template:
curl localhost:9200/_template/filebeat
{"filebeat":{"order":0,"index_patterns":["filebeat-*"],"settings":{"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"template1":{"mapping":{"doc_values":true,"ignore_above":1024,"index":"false","type":"{dynamic_type}"},"match":"*"}}],"properties":{"@timestamp":{"type":"date"},"message":{"type":"text","index":"true"},"offset":{"type":"long","doc_values":"true"},"geoip":{"type":"object","dynamic":true,"properties":{"location":{"type":"geo_point"}}}}}},"aliases":{}}}
Check the index:
curl localhost:9200/_cat/indices
yellow open filebeat-6.4.2-2018.10.17 c9EmKOQ9T7W_pl9tDRDycQ 5 1 13719988 0 13.8gb 13.8gb
yellow open filebeat-6.4.2-2018.10.14 daA_KAT_TYeL5Fn3SrT2Pw 5 1 56400 0 10.5mb 10.5mb
yellow open filebeat-6.4.2-2018.10.16 70uY3kooTjWRNaFCky24jQ 5 1 277731 0 69.3mb 69.3mb
green open .kibana DgMyQx7QSK659uBo1CccJQ 1 0 3 0 34.3kb 34.3kb
yellow open filebeat-6.4.2-2018.10.13 LsC4soOYSEqY3vwv-HOcjg 5 1 135921 0 19.1mb 19.1mb
yellow open filebeat-6.4.2-2018.10.15 hKNvyDl9SFSgw3nEU3faKg 5 1 72960 0 18.7mb 18.7mb
but I still see this in the Elasticsearch logs:
[DEBUG][o.e.a.b.TransportShardBulkAction] [filebeat-6.4.2-2018.10.17][4] failed to execute bulk item (index) BulkShardRequest [[filebeat-6.4.2-2018.10.17][4]] containing [13] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [kubernetes.labels.app]
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:481) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:478) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:501) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:95) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:263) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:725) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:702) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:682) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.bulk.TransportShardBulkAction.lambda$executeIndexRequestOnPrimary$2(TransportShardBulkAction.java:560) ~[elasticsearch-6.4.2.jar:6.4.2]
...
Update 2
curl localhost:9200/_cat/indices
yellow open filebeat-6.4.2-2018.10.25 0RCTMniqQyucD530dz_eOQ 5 1 511 0 491.1kb 491.1kb
yellow open filebeat-6.4.2-2018.10.27 64b5ThH1TauvwMIo_ueTIg 5 1 487 0 479.4kb 479.4kb
yellow open filebeat-6.4.2-2018.10.28 Lf4UzVzESIGfGvx7VsRzFQ 5 1 283 0 357.4kb 357.4kb
yellow open filebeat-6.4.2-2018.10.24 fCUmzy2UQSy9lsNOMWmkEQ 5 1 2866 0 1.8mb 1.8mb
yellow open filebeat-6.4.2-2018.10.26 t3rPwBS4TYOhJWjtFRYk6g 5 1 323 0 428.9kb 428.9kb
yellow open filebeat-6.4.2-2018.10.22 -Rq7SbeqS_yNX3I4lwsGRg 5 1 92 0 173.2kb 173.2kb
yellow open filebeat-6.4.2-2018.10.29 yAje-vFhQqmavxSO7tlDGA 5 1 4810 0 8.5mb 8.5mb
Check Elasticsearch:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
"took" : 33,
"timed_out" : false,
"_shards" : {
"total" : 35,
"successful" : 35,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 67309,
"max_score" : 1.0,
"hits" : [
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "-m0iwGYBP2-nX77s4y_g",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:36.393Z",
"message" : "2018-10-22 07:32:36.393 [INFO][92] int_dataplane.go 747: Finished applying updates to dataplane. msecToApply=92.064514",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 630130,
"stream" : "stdout"
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "_m0iwGYBP2-nX77s4y_g",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:38.159Z",
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"offset" : 630467,
"stream" : "stdout",
"message" : "2018-10-22 07:32:38.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "n20iwGYBP2-nX77s5jGM",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:41.172Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 631205,
"stream" : "stdout",
"message" : "2018-10-22 07:32:41.172 [INFO][92] table.go 438: Loading current iptables state and checking it is correct. ipVersion=0x4 table=\"raw\"",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "WG0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:45.710Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632166,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.710 [INFO][92] ipsets.go 222: Asked to resync with the dataplane on next update. family=\"inet\"",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "Wm0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:45.710Z",
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632353,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.710 [INFO][92] ipsets.go 253: Resyncing ipsets with dataplane. family=\"inet\"",
"prospector" : {
"type" : "docker"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "XG0iwGYBP2-nX77s6DIH",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:45.711Z",
"stream" : "stdout",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"message" : "2018-10-22 07:32:45.711 [INFO][92] ipsets.go 295: Finished resync family=\"inet\" numInconsistenciesFound=0 resyncDuration=876.908µs",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632522
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "QG0iwGYBP2-nX77s6TNr",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:45.711Z",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 632726,
"stream" : "stdout",
"message" : "2018-10-22 07:32:45.711 [INFO][92] int_dataplane.go 747: Finished applying updates to dataplane. msecToApply=1.061403",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "1W0iwGYBP2-nX77s8zc2",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:32:58.158Z",
"message" : "2018-10-22 07:32:58.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 634199,
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2",
"name" : "filebeat-6p7rc"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"stream" : "stdout"
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "-G0iwGYBP2-nX77s8zc2",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:33:00.168Z",
"message" : "2018-10-22 07:33:00.167 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log",
"offset" : 634391,
"stream" : "stdout",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"host" : {
"name" : "filebeat-6p7rc"
}
}
},
{
"_index" : "filebeat-6.4.2-2018.10.22",
"_type" : "doc",
"_id" : "yW0iwGYBP2-nX77s_j2e",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2018-10-22T07:33:18.158Z",
"offset" : 636780,
"stream" : "stdout",
"message" : "2018-10-22 07:33:18.158 [INFO][92] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}",
"prospector" : {
"type" : "docker"
},
"input" : {
"type" : "docker"
},
"host" : {
"name" : "filebeat-6p7rc"
},
"beat" : {
"name" : "filebeat-6p7rc",
"hostname" : "filebeat-6p7rc",
"version" : "6.4.2"
},
"source" : "/var/lib/docker/containers/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659/02ed5c70d5341a7d3f15fecbb24dd94bc43d850fc4fd6c609a771d487518d659-json.log"
}
}
]
}
}
I suppose that you haven't set up an index template for the Filebeat fields, which Elasticsearch needs in order to parse them correctly. You can find useful information in this article about implementing the Filebeat index template on your cluster.
In addition, there was a similar issue reported on GitHub about parsing kubernetes.labels using the Logstash event collector.
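A frequent trigger for this exact error is that kubernetes.labels.app arrives as a plain string from some pods but as an object from others (for example, pods carrying app.kubernetes.io/... labels), so Elasticsearch cannot map the field both ways. As a stopgap until the index template is in place, one option is to drop the conflicting field in Filebeat itself; a sketch for the processors section of filebeat.yml (the field name is taken from the error message, and dropping it loses that label in the indexed documents):

```yaml
processors:
  - add_cloud_metadata:
  # Workaround, not a fix: drop the field whose mapping conflicts
  # between string and object across different pods' documents.
  - drop_fields:
      fields: ["kubernetes.labels.app"]
```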

Elasticsearch 2.3.4 stops allocating shards with no obvious reason

I am attempting to upgrade our Elasticsearch cluster from 1.6 to 2.3.4. The upgrade seems to work, and I can see shard allocation starting to happen within Kopf - but at some point the shard allocation appears to stop, with many shards left unallocated and no errors being reported in the logs. Typically I'm left with 1200 / 3800 shards unallocated.
We have a typical 3-node cluster; I am trialling this standalone with all 3 nodes running on my local machine.
I have seen similar symptoms reported - see https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html. The solution there seemed to be to manually allocate the shards, which I've tried (and it works), but I'm at a loss to explain Elasticsearch's behaviour here. I'd prefer not to go down this route, as I want my cluster to spin up automatically without intervention.
There is also https://github.com/elastic/elasticsearch/pull/14494 which seems to be resolved with the latest ES version, so shouldn't be a problem.
There are no errors in log files - I have upped the root level logging to 'DEBUG' in order to see what I can. What I can see is lines like the below for each unallocated shard (this from the master node logs):
[2016-07-26 09:18:04,859][DEBUG][gateway ] [germany] [index][4] found 0 allocations of [index][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-26T08:05:04.447Z]], highest version: [-1]
[2016-07-26 09:18:04,859][DEBUG][gateway ] [germany] [index][4]: not allocating, number_of_allocated_shards_found [0]
Config file (with comments removed):
cluster.name: elasticsearch-jm-2.3.4
node.name: germany
script.inline: true
script.indexed: true
If I query the cluster health after reallocation has stopped, I get the response below:
http://localhost:9200/_cluster/health?pretty
cluster_name : elasticsearch-jm-2.3.4
status : red
timed_out : False
number_of_nodes : 3
number_of_data_nodes : 3
active_primary_shards : 1289
active_shards : 2578
relocating_shards : 0
initializing_shards : 0
unassigned_shards : 1264
delayed_unassigned_shards : 0
number_of_pending_tasks : 0
number_of_in_flight_fetch : 0
task_max_waiting_in_queue_millis : 0
active_shards_percent_as_number : 67.10046850598647
Further querying for shards, filtered to one index with unassigned shards. As can be seen, shards 0 and 4 are unassigned, whereas shards 1, 2 and 3 have been allocated:
http://localhost:9200/_cat/shards
cs-payment-warn-2016.07.20 3 p STARTED 106 92.4kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 3 r STARTED 106 92.4kb 127.0.0.1 switzerland
cs-payment-warn-2016.07.20 4 p UNASSIGNED
cs-payment-warn-2016.07.20 4 r UNASSIGNED
cs-payment-warn-2016.07.20 2 r STARTED 120 74.5kb 127.0.0.1 cyprus
cs-payment-warn-2016.07.20 2 p STARTED 120 74.5kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 1 r STARTED 120 73.8kb 127.0.0.1 cyprus
cs-payment-warn-2016.07.20 1 p STARTED 120 73.8kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 0 p UNASSIGNED
cs-payment-warn-2016.07.20 0 r UNASSIGNED
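With many indices this gets tedious to eyeball; a small sketch that picks the UNASSIGNED rows out of the plain-text _cat/shards output (using a sample of the lines above):

```python
# Pick the unassigned shards out of _cat/shards text: columns are
# index, shard, prirep (p/r), state, then optional docs/size/ip/node.
cat_shards = """\
cs-payment-warn-2016.07.20 3 p STARTED 106 92.4kb 127.0.0.1 germany
cs-payment-warn-2016.07.20 4 p UNASSIGNED
cs-payment-warn-2016.07.20 4 r UNASSIGNED
cs-payment-warn-2016.07.20 2 r STARTED 120 74.5kb 127.0.0.1 cyprus
cs-payment-warn-2016.07.20 0 p UNASSIGNED
"""

unassigned = []
for line in cat_shards.splitlines():
    index, shard, prirep, state = line.split()[:4]
    if state == "UNASSIGNED":
        unassigned.append((index, int(shard), prirep))

print(unassigned)
```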
Manually rerouting an unassigned shard appears to work (results set stripped back):
http://localhost:9200/_cluster/reroute
POST:
{
"dry_run": true,
"commands": [
{
"allocate": {
"index": "cs-payment-warn-2016.07.20",
"shard": 4,
"node": "switzerland" ,
"allow_primary": true
}
}
]
}
Response:
{
"acknowledged" : true,
"state" : {
"version" : 722,
"state_uuid" : "Vw2vPoCMQk2ZosjzviD4TQ",
"master_node" : "yhL7XXy-SKu_WAM-C33dzA",
"blocks" : {},
"nodes" : {},
"routing_table" : {
"indices" : {
"cs-payment-warn-2016.07.20" : {
"shards" : {
"3" : [{
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 3,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "x_Iq88hmTqiasrjW09hVuw"
}
}, {
"state" : "STARTED",
"primary" : false,
"node" : "1a8dgBscTUS3c7Pv4mN9CQ",
"relocating_node" : null,
"shard" : 3,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "DF-EUEy_SpeUElnZI6cgsQ"
}
}
],
"4" : [{
"state" : "INITIALIZING",
"primary" : true,
"node" : "1a8dgBscTUS3c7Pv4mN9CQ",
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"allocation_id" : {
"id" : "1tw7C7YPQsWwm_O-8mYHRg"
},
"unassigned_info" : {
"reason" : "INDEX_CREATED",
"at" : "2016-07-26T14:20:15.395Z",
"details" : "force allocation from previous reason CLUSTER_RECOVERED, null"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
],
"2" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "eQ-_vWNbRp27So0iGSitmA"
}
}, {
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "O1PU1_NVS8-uB2yBrG76MA"
}
}
],
"1" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZmxtOvorRVmndR15OJMkMA"
}
}, {
"state" : "STARTED",
"primary" : true,
"node" : "yhL7XXy-SKu_WAM-C33dzA",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZNgzePThQxS-iqhRSXzZCw"
}
}
],
"0" : [{
"state" : "UNASSIGNED",
"primary" : true,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
]
}
}
},
"routing_nodes" : {
"unassigned" : [{
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 4,
"index" : "cs-payment-warn-2016.07.20",
"version" : 1,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : true,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 0,
"index" : "cs-payment-warn-2016.07.20",
"version" : 0,
"unassigned_info" : {
"reason" : "CLUSTER_RECOVERED",
"at" : "2016-07-26T11:24:11.868Z"
}
}
]
},
"nodes" : {
"rlRQ2u0XQRqxWld-wSrOug" : [{
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 2,
"index" : "cs-payment-warn-2016.07.20",
"version" : 22,
"allocation_id" : {
"id" : "eQ-_vWNbRp27So0iGSitmA"
}
}, {
"state" : "STARTED",
"primary" : false,
"node" : "rlRQ2u0XQRqxWld-wSrOug",
"relocating_node" : null,
"shard" : 1,
"index" : "cs-payment-warn-2016.07.20",
"version" : 24,
"allocation_id" : {
"id" : "ZmxtOvorRVmndR15OJMkMA"
}
}
]
}
}
}
}
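If you end up rerouting many shards, the same dry-run request can be scripted. A minimal sketch that builds the reroute body used above (the 2.x "allocate" command; index, shard, and node names are taken from the question, and the actual HTTP POST to http://localhost:9200/_cluster/reroute is left out):

```python
import json

# Build a _cluster/reroute body with a single 2.x-style "allocate" command.
# (Later ES versions split this into allocate_stale_primary /
# allocate_empty_primary.)
def make_reroute_body(index, shard, node, allow_primary=False, dry_run=True):
    return {
        "dry_run": dry_run,
        "commands": [{
            "allocate": {
                "index": index,
                "shard": shard,
                "node": node,
                "allow_primary": allow_primary,
            }
        }],
    }

body = make_reroute_body("cs-payment-warn-2016.07.20", 4, "switzerland",
                         allow_primary=True)
print(json.dumps(body, indent=2))
```

Be careful with allow_primary: forcing a primary onto a node that holds no valid copy creates an empty primary, losing whatever data the shard had, so keep dry_run set until you are sure.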

How to fix elasticsearch node version number in a cluster (having 2 different version nodes)

In my Elasticsearch cluster I have 2 nodes: one is version 1.7.3 and the other version 1.7.5 (my Elasticsearch 1.7.3 install got corrupted, so I reinstalled with 1.7.5).
How can I upgrade the node from 1.7.3 to 1.7.5?
I referred to https://www.elastic.co/guide/en/elasticsearch/reference/1.7/setup-upgrade.html#rolling-upgrades but could not work out the procedure for upgrading a node's version.
Kindly help me through this.
My cluster is green, and the nodes are as follows:
{
"cluster_name" : "graylog2",
"nodes" : {
"mC4Osz5IS0OLy2E8QbqZLQ" : {
"name" : "Decay II",
"transport_address" : "inet[/127.0.0.1:9300]",
"host" : "localhost",
"ip" : "127.0.0.1",
"version" : "1.7.5",
"build" : "00f95f4",
"http_address" : "inet[/127.0.0.1:9200]",
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 957,
"max_file_descriptors" : 65535,
"mlockall" : false
}
},
"qCDvg4XCREmj_iGmbt4v4w" : {
"name" : "graylog2-server",
"transport_address" : "inet[/127.0.0.1:9350]",
"host" : "localhost",
"ip" : "127.0.0.1",
"version" : "1.7.3",
"build" : "05d4530",
"attributes" : {
"client" : "true",
"data" : "false",
"master" : "false"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 8937,
"max_file_descriptors" : 64000,
"mlockall" : false
}
}
}
}
I suspect the version difference is the cause of Graylog refusing the connection with the Elasticsearch cluster.
Please help.
Hmmm, I can think of one of two ways to do this:
Add a 3rd node on 1.7.5 and wait for the cluster to go green. Then shut down the 1.7.3 node, upgrade it, and re-introduce it to the cluster.
Shut down the cluster, upgrade the node to 1.7.5, then start node mC4Osz5IS0OLy2E8QbqZLQ first and the other one after.
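Option 1 hinges on waiting for green (and no relocations) before touching the next node. A hypothetical helper sketching that check, using the field names from the _cluster/health response shown earlier (the HTTP fetch itself is left out):

```python
# Decide whether it is safe to take down / upgrade the next node during a
# rolling upgrade: the cluster must be green with no shards relocating.
def safe_to_upgrade_next(health):
    """health: dict parsed from GET /_cluster/health?pretty"""
    return health["status"] == "green" and health["relocating_shards"] == 0

# Examples with the shape of the health response shown earlier:
print(safe_to_upgrade_next({"status": "green", "relocating_shards": 0}))   # True
print(safe_to_upgrade_next({"status": "yellow", "relocating_shards": 0}))  # False
```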

Resources