I have a two-node Elasticsearch cluster running version 1.3.1:
{
"status": 200,
"name": "Blue Streak",
"version": {
"number": "1.3.1",
"build_hash": "2de6dc5268c32fb49b205233c138d93aaf772015",
"build_timestamp": "2014-07-28T14:45:15Z",
"build_snapshot": false,
"lucene_version": "4.9"
},
"tagline": "You Know, for Search"
}
Now I need to add a node to this cluster, but the Elasticsearch version on the new node is 1.5.2:
{
"status": 503,
"name": "Adrian Toomes",
"cluster_name": "tg-elasticsearch",
"version": {
"number": "1.5.2",
"build_hash": "62ff9868b4c8a0c45860bebb259e21980778ab1c",
"build_timestamp": "2015-04-27T09:21:06Z",
"build_snapshot": false,
"lucene_version": "4.10.4"
},
"tagline": "You Know, for Search"
}
Is this possible? When I try to connect, I get the following error:
[2015-08-13 14:41:16,840][WARN ][transport.netty ] [10.33.57.169] Message not fully read (request) for [21602710] and action [discovery/zen/join/validate], resetting
[2015-08-13 14:41:16,859][INFO ][discovery.zen ] [10.33.57.169] failed to send join request to master [[Blue Streak][iNUjaFvqTu6nbzjgOr14rQ][tg-db3][inet[/10.65.40.65:9300]]], reason [RemoteTransportException[[Blue Streak][inet[/10.65.40.65:9300]][discovery/zen/join]]; nested: RemoteTransportException[[10.33.57.169][inet[/10.33.57.169:9300]][discovery/zen/join/validate]]; nested: ElasticsearchIllegalArgumentException[No custom index metadata factory registered for type [rivers]]; ]
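Before digging into the join failure itself, it is worth confirming what each node actually reports. A minimal sketch (the IPs are taken from the log lines above; no cluster is assumed reachable here, so failures are tolerated):

```shell
# Compare the version string each node reports before trying to join them.
# IPs are from the log output above; `|| true` tolerates unreachable hosts.
for host in 10.65.40.65 10.33.57.169; do
  echo "== $host =="
  curl -s --connect-timeout 2 "http://$host:9200" | grep '"number"' || true
done
```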
I'm trying to debug a major performance bottleneck after upgrading Elasticsearch to 7.11.1. I'm experiencing slow PUT inserts/updates (which I do a lot of) and assume it relates to changes in the way indexes are managed.
I found the new realtime parameter and thought I'd give it a shot, but I get "unrecognized parameter: [realtime]" when I try it.
GET http://localhost:9200
{
"name": "myhost",
"cluster_name": "mycluster",
"cluster_uuid": "uc03F4mpq1mO8CzQSzfB1g",
"version": {
"number": "7.11.1",
"build_flavor": "default",
"build_type": "rpm",
"build_hash": "ff17057114c2199c9c1bbecc727003a907c0db7a",
"build_date": "2021-02-15T13:44:09.394032Z",
"build_snapshot": false,
"lucene_version": "8.7.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar
{
"count": 382,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
}
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar&realtime=false
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
}
],
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
},
"status": 400
}
I've double-checked the manual and my version: I have 7.11.1, and the manual page is for 7.11:
https://www.elastic.co/guide/en/elasticsearch/reference/7.11/docs-get.html#realtime
Any help is appreciated.
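Note that the manual page linked above is for the document GET API, which is where realtime is accepted; _count is a search endpoint and only sees documents that have already been refreshed. A hedged sketch of the two alternatives, reusing the index/query from the question (doc id 1 is a placeholder; no cluster is assumed running, so failures are tolerated):

```shell
# Option 1: real-time fetch of a single document by id (GET API honors realtime).
curl -s --connect-timeout 2 \
  'http://localhost:9200/foo/_doc/1?realtime=true' || true

# Option 2: force a refresh so recent writes become visible to _count.
curl -s --connect-timeout 2 -X POST 'http://localhost:9200/foo/_refresh' || true
curl -s --connect-timeout 2 'http://localhost:9200/foo/_count?q=foo:bar' || true
```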
I'm still relatively new to Elasticsearch and, currently, I'm attempting to switch from Solr to Elasticsearch and am seeing a huge increase in CPU usage when ES is on our production website. The site sees anywhere from 10,000 to 30,000 requests to ES per second. Solr handles that load just fine with our current hardware.
The books index mapping: https://pastebin.com/bKM9egPS
A query for a book: https://pastebin.com/AdfZ895X
ES is hosted on AWS on an m4.xlarge.elasticsearch instance.
Our cluster is set up as follows (anything not included is default):
"persistent": {
"cluster": {
"routing": {
"allocation": {
"cluster_concurrent_rebalance": "2",
"node_concurrent_recoveries": "2",
"disk": {
"watermark": {
"low": "15.0gb",
"flood_stage": "5.0gb",
"high": "10.0gb"
}
},
"node_initial_primaries_recoveries": "4"
}
}
},
"indices": {
"recovery": {
"max_bytes_per_sec": "60mb"
}
}
}
Our nodes have the following configuration:
"_nodes": {
"total": 2,
"successful": 2,
"failed": 0
},
"cluster_name": "cluster",
"nodes": {
"####": {
"name": "node1",
"version": "6.3.1",
"build_flavor": "oss",
"build_type": "zip",
"build_hash": "####",
"roles": [
"master",
"data",
"ingest"
]
},
"###": {
"name": "node2",
"version": "6.3.1",
"build_flavor": "oss",
"build_type": "zip",
"build_hash": "###",
"roles": [
"master",
"data",
"ingest"
]
}
}
Can someone please help me figure out what exactly is happening so I can get this deployment finished?
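One diagnostic that usually narrows this kind of problem down is the hot threads API, which samples what the busiest threads are actually doing. A minimal sketch (the endpoint is standard; the host is an assumption, and failures are tolerated since no cluster is assumed reachable here):

```shell
# Sample the 3 busiest threads on every node; repeated calls under load
# reveal whether CPU time goes to search, GC, segment merges, etc.
curl -s --connect-timeout 2 \
  'http://localhost:9200/_nodes/hot_threads?threads=3' || true
```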
I'm trying to log all queries from kibana. So I edited config/kibana.yml and added the following lines:
logging.dest: /tmp/test.log
logging.silent: false
logging.quiet: false
logging.verbose: true
elasticsearch.logQueries: true
Then I restarted Kibana and ran a query.
Logs now start to appear, but only access logs are recorded; there are no ES queries in them.
{
"type": "response",
"#timestamp": "2018-08-21T02:41:03Z",
"tags": [],
"pid": 28701,
"method": "post",
"statusCode": 200,
"req": {
"url": "/elasticsearch/_msearch",
"method": "post",
"headers": {
...
},
"remoteAddress": "xxxxx",
"userAgent": "xxxxx",
"referer": "http://xxxxxxx:8901/app/kibana"
},
"res": {
"statusCode": 200,
"responseTime": 62,
"contentLength": 9
},
"message": "POST /elasticsearch/_msearch 200 62ms - 9.0B"
}
Any ideas? I'm using ELK 6.2.2.
The elasticsearch.logQueries setting was introduced in Kibana 6.3, as can be seen in this pull request.
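So on 6.2.2 the setting is simply ignored. A quick way to confirm the version the running Kibana reports before relying on the setting (the status API path is standard; host/port are assumptions, and failures are tolerated since no Kibana is assumed running here):

```shell
# Kibana's status API includes its own version.number in the JSON response.
curl -s --connect-timeout 2 'http://localhost:5601/api/status' \
  | grep -o '"number":"[^"]*"' || true
```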
My cluster suddenly went red because of a shard allocation failure. When I run
GET /_cluster/allocation/explain
{
"index": "my_index",
"shard": 0,
"primary": true
}
output:
{
"shard": {
"index": "twitter_tracker",
"index_uuid": "mfXc8oplQpq2lWGjC1TxbA",
"id": 0,
"primary": true
},
"assigned": false,
"shard_state_fetch_pending": false,
"unassigned_info": {
"reason": "ALLOCATION_FAILED",
"at": "2018-01-02T08:13:44.513Z",
"failed_attempts": 1,
"delayed": false,
"details": "failed to create shard, failure IOException[failed to obtain in-memory shard lock]; nested: NotSerializableExceptionWrapper[shard_lock_obtain_failed_exception: [twitter_tracker][0]: obtaining shard lock timed out after 5000ms]; ",
"allocation_status": "no_valid_shard_copy"
},
"allocation_delay_in_millis": 60000,
"remaining_delay_in_millis": 0,
"nodes": {
"n91cV7ocTh-Zp58dFr5rug": {
"node_name": "elasticsearch-24-384-node-1",
"node_attributes": {},
"store": {
"shard_copy": "AVAILABLE"
},
"final_decision": "YES",
"final_explanation": "the shard can be assigned and the node contains a valid copy of the shard data",
"weight": 0.45,
"decisions": []
},
"_b-wXdjGRdGLEtvY76PDSA": {
"node_name": "elasticsearch-24-384-node-2",
"node_attributes": {},
"store": {
"shard_copy": "NONE"
},
"final_decision": "NO",
"final_explanation": "there is no copy of the shard available",
"weight": 0,
"decisions": []
}
}
}
What is the solution? This happened on my production cluster. My Elasticsearch version is 5.0, and I have two nodes.
This is an issue that every Elasticsearch cluster operator will bump into eventually :)
A safe way to reroute your red index:
curl -XPOST 'localhost:9200/_cluster/reroute?retry_failed'
This command will take some time, but you won't get allocation errors while the data is transferring.
The issue is explained in more detail here.
I solved my issue with the following command.
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
"commands" : [ {
"allocate_stale_primary" :
{
"index" : "da-prod8-other", "shard" : 3,
"node" : "node-2-data-pod",
"accept_data_loss" : true
}
}
]
}'
Note that with this approach you might lose data. It worked well for me, although running the command was nerve-wracking; luckily it completed cleanly. For more details, check this thread.
I'm trying to parse some JSON log files (which have stack traces inside them) with Logstash, but I think the stack traces are preventing it.
Here is my Logstash conf (one of the many confs I've tried):
input {
# This configuration works but the input file needs to be set in a format like so [{},{}]
file {
path => "/opt/logstash/confFiles/suite1.json"
start_position => beginning
codec => json
sincedb_path => "/opt/logstash/confFiles/suite1incedb"
}
}
filter {
json{
source => "message"
}
}
output {
stdout {codec => rubydebug}
}
This is the type of JSON I'm trying to work with (it's the output of a JSON reporter for Jasmine):
{
"suite1": {
"id": "suite1",
"description": "create process - simulate test 2",
"fullName": "create process - simulate test 2",
"failedExpectations": [],
"status": "finished",
"specs": [{
"id": "spec0",
"description": "should redirect to modeler after create a process",
"fullName": "create process - simulate test 2 should redirect to modeler after create a process",
"failedExpectations": [{
"matcherName": "toMatch",
"message": "Expected 'http://localhost:3000/#!/signin' to match /myDashboard/.",
"stack": "Error: Failed expectation\n at Env.<anonymous> (/home/ls/code/ph/app/tests/e2e/create-process.e2e.test.js:18:35)\n at /home/ls/code/ph/node_modules/protractor/node_modules/jasminewd2/index.js:95:14\n at [object Object].webdriver.promise.ControlFlow.runInNewFrame_ (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1654:20)\n at [object Object].webdriver.promise.ControlFlow.runEventLoop_ (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1518:8)\n at [object Object].wrapper [as _onTimeout] (timers.js:274:14)\n at Timer.listOnTimeout (timers.js:119:15)",
"passed": false,
"expected": {},
"actual": "http://localhost:3000/#!/signin"
}, {
"matcherName": "",
"message": "Failed: No element found using locator: By.id(\"selectedUser\")",
"stack": "Error: Failed: No element found using locator: By.id(\"selectedUser\")\n at stack (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:1441:17)\n at buildExpectationResult (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:1411:14)\n at Spec.Env.expectationResultFactory (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:533:18)\n at Spec.addExpectationResult (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:293:34)\n at Env.fail (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:837:25)\n at Function.next.fail (/home/ls/code/ph/node_modules/protractor/node_modules/jasmine/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:1776:19)\n at /home/ls/code/ph/node_modules/protractor/node_modules/jasminewd2/index.js:104:16\n at /home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/goog/base.js:1582:15\n at [object Object].webdriver.promise.ControlFlow.runInNewFrame_ (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1654:20)\n at notify (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:465:12)",
"passed": false,
"expected": "",
"actual": ""
}],
"passedExpectations": [],
"status": "failed"
}]
},
"suite2": {
"id": "suite2",
"description": "login to PB Modeler",
"fullName": "login to PB Modeler",
"failedExpectations": [],
"status": "finished",
"specs": [{
"id": "spec1",
"description": "should redirect to myDashboard after login",
"fullName": "login to PB Modeler should redirect to myDashboard after login",
"failedExpectations": [{
"matcherName": "toMatch",
"message": "Expected 'http://localhost:3000/#!/signin' to match /myDashboard/.",
"stack": "Error: Failed expectation\n at Env.<anonymous> (/home/ls/code/ph/app/tests/e2e/login.e2e.test.js:18:37)\n at /home/ls/code/ph/node_modules/protractor/node_modules/jasminewd2/index.js:95:14\n at [object Object].webdriver.promise.ControlFlow.runInNewFrame_ (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1654:20)\n at [object Object].webdriver.promise.ControlFlow.runEventLoop_ (/home/ls/code/ph/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1518:8)\n at [object Object].wrapper [as _onTimeout] (timers.js:274:14)\n at Timer.listOnTimeout (timers.js:119:15)",
"passed": false,
"expected": {},
"actual": "http://localhost:3000/#!/signin"
}],
"passedExpectations": [],
"status": "failed"
}]
}
}
I've tried many configurations now.
If I send it as it is above, Logstash will not map it correctly.
So what I did was remove the whitespace, set beautify=false in the reporter, and surround it with "[ ]" to make it look like an array, and then Logstash would "randomly" accept it.
- What would be a good approach to parsing these nested JSON objects, taking the stack traces into account, so that what reaches ES is workable data in Kibana?
- How can I create a mapping for this structure, or model the data, so that ELK understands it?
I'm using
ES 2.1
Logstash 2.1
Kibana 4.3.1
Filebeat 1.0.1
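One possible approach (a sketch, not the only option): pre-flatten the Jasmine report into newline-delimited JSON before Logstash reads it, so the file input with the json codec receives one self-contained event per suite. This assumes jq is available and the report is in suite1.json; `|| true` tolerates the file being absent:

```shell
# Turn {"suite1": {...}, "suite2": {...}} into one JSON object per line,
# carrying the suite name into each event. Stack traces stay as plain
# string fields, which Elasticsearch can index without special handling.
jq -c 'to_entries[] | {suite: .key} + .value' suite1.json > suite1.ndjson || true
```

With NDJSON as the input, the json codec on the file input is enough on its own, and the extra json filter on the message field can be dropped.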