I'm trying to debug a major performance bottleneck after upgrading Elasticsearch to 7.11.1: I'm seeing slow PUT inserts/updates (which I do a lot of) and assume it relates to changes in the way indices are managed.
I found the new realtime parameter and thought I'd give it a shot, but I get unrecognized parameter: [realtime] when trying it.
GET http://localhost:9200
{
"name": "myhost",
"cluster_name": "mycluster",
"cluster_uuid": "uc03F4mpq1mO8CzQSzfB1g",
"version": {
"number": "7.11.1",
"build_flavor": "default",
"build_type": "rpm",
"build_hash": "ff17057114c2199c9c1bbecc727003a907c0db7a",
"build_date": "2021-02-15T13:44:09.394032Z",
"build_snapshot": false,
"lucene_version": "8.7.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar
{
"count": 382,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
}
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar&realtime=false
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
}
],
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
},
"status": 400
}
I've double-checked the manual against my version: I'm on 7.11.1, and the manual page is for 7.11:
https://www.elastic.co/guide/en/elasticsearch/reference/7.11/docs-get.html#realtime
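For what it's worth, that page documents realtime on the single-document GET API; a minimal example of that usage would be (the document id 1 here is made up):
GET http://localhost:9200/foo/_doc/1?realtime=false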
Any help appreciated.
Related
Executing the following
http://172.21.21.151:9200/printer-stats-*/_doc/_count
I get the following response
{
"count": 19299,
"_shards": {
"total": 44,
"successful": 44,
"skipped": 0,
"failed": 0
}
}
How can I modify the query to return only
{
"count": 19299
}
On _search queries we can use filter_path to get only the desired output, but it does not seem to work on _count.
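For example, this works fine on _search (same index, keeping only the total hit count):
http://172.21.21.151:9200/printer-stats-*/_search?filter_path=hits.total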
I also tried adding a body like the following:
{
"_shards": false
}
But it throws the following error
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "request does not support [_shards]",
"line": 2,
"col": 3
}
],
"type": "parsing_exception",
"reason": "request does not support [_shards]",
"line": 2,
"col": 3
},
"status": 400
}
My version is 7.9.2
Probably this has been asked before, but I have not found a relevant question.
filter_path definitely works on the _count endpoint:
http://172.21.21.151:9200/printer-stats-*/_doc/_count?filter_path=count
Response =>
{
"count": 19299
}
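As a side note, filter_path also accepts comma-separated paths and wildcards, so you can keep selected shard fields as well, e.g.:
http://172.21.21.151:9200/printer-stats-*/_doc/_count?filter_path=count,_shards.failed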
I need to integrate the Klarna Checkout module into Magento 2.1.2. I am using version 4.2.2 of the klarna/m2-checkout module.
When choosing a delivery method, I always get an error in the pop-up window:
Sorry, the delivery option you chose cannot be processed. Please select another delivery option.
When I choose a shipping method, I get this response:
{
"shared": {
"customer": {
"type": "person"
},
"user_preferences": {
"remember_me": true
},
"language": "en",
"locale": "en-US",
"customer_details": {
"client_token": "eyJhbGciOiJSUzUxMiJ9.eyJz",
"country": "swe",
"completed": true,
"fields_with_obfuscation": {
"email": "melosicuva#royalhost.info",
"given_name": "Testperson-se",
"family_name": "Approved",
"street_address": "Stårgatan 1",
"postal_code": "123 45",
"city": "Ankeborg",
"country": "SE",
"phone": "076-526 00 00",
"date_of_birth": "1941-03-21",
"national_identification_number": "19410321-9202"
},
"reference": "2f9a445a57a49215175178099002fc7165ee"
},
"shipping_details": {
"client_token": "eyJhbGciOiJSUzUxMiJ9.eyJzZXNzaW9uX"
},
"currency": "SEK",
"obfuscated_fields": []
},
"cart": {
"total_tax_amount": 30000,
"total_price_including_tax": 150000,
"total_price_excluding_tax": 120000,
"total_shipping_amount_excluding_tax": 0,
"total_surcharge_amount_excluding_tax": 0,
"total_discount_amount_excluding_tax": 0,
"total_shipping_amount_including_tax": 0,
"total_surcharge_amount_including_tax": 0,
"total_discount_amount_including_tax": 0,
"subtotal": 120000,
"total_store_credit": 0,
"items": [{
"type": "physical",
"reference": "1201018390010",
"name": "Armour Bib Shorts",
"quantity": 1,
"unit_price": 150000,
"total_tax_amount": 30000,
"tax_rate": 2500,
"total_price_including_tax": 150000,
"total_price_excluding_tax": 120000,
"product_url": "https://local.com/armour-bib-shorts-black.html?___store%5B_data%5D%5Bstore_id%5D=2&___store%5B_data%5D%5Bcode%5D=se&___store%5B_data%5D%5Bwebsite_id%5D=2&___store%5B_data%5D%5Bgroup_id%5D=2&___store%5B_data%5D%5Bname%5D=Sweden+Store&___store%5B_data%5D%5Bsort_order%5D=30&___store%5B_data%5D%5Bis_active%5D=1&___store%5B_data%5D%5Balias%5D=Sweden&___store%5B_data%5D%5Bavailable_currency_codes%5D%5B0%5D=SEK",
"image_url": "https://local.com//media/catalog/product/a/r/armour-bib-shorts-aw18-01.jpg"
}]
},
"errors": {
"generic": ["shipping_service_failed"]
},
"options": {
"allow_separate_shipping_address": false,
"date_of_birth_mandatory": false,
"title_mandatory": false,
"national_identification_number_mandatory": false,
"phone_mandatory": true,
"allowed_customer_types": ["person"],
"payment_selector_on_load": false
},
"preview_payment_methods": [{
"id": "-1",
"type": "invoice",
"locked": false,
"selected": false,
"data": {
"days": 14
}
}, {
"id": "-1",
"type": "direct_debit",
"locked": false,
"selected": false
}, {
"id": "-1",
"type": "credit_card",
"locked": false,
"selected": false,
"data": {
"available_cards": ["VISA", "MASTER"],
"allow_saved_card": false,
"do_save_card": false,
"collect_consent": false,
"consent_given": false
}
}],
"allowed_billing_countries": ["swe"],
"status": {
"prescreened": false
},
"analytics_user_id": "ELmpDn1f600JYxHtagC7FcsOdAXe9-2iwWhIzHSfmhM=",
"merchant": {
"hashed_id": "a9c814c7a780d46a7fb2403e452829b3",
"name": "Your business name"
},
"merchant_urls": {
"checkout": "https://local.com/checkout/klarna",
"confirmation": "https://checkout-eu.playground.klarna.com/yaco/orders/ffc4101d-00cb-5e63-81fc-0f0c15baeac3/redirect?auth_token=0el7mltb89prfz2fz2mw",
"terms": "https://local.com/terms",
"confirmation_page": "https://local.com/checkout/klarna/confirmation/id/ffc4101d-00cb-5e63-81fc-0f0c15baeac3"
}
}
The part that worries me is this block:
"errors": {
"generic": ["shipping_service_failed"]
}
Does anyone know how to fix it?
Delivery error:
This error occurs when you set the address_update callback and it is not handled in the right way. This callback should be set if you need to update the order's addresses, and it should not take more than 10 seconds to respond.
Here's an example: https://developers.klarna.com/api/#checkout-api-callbacks-address-update
And some best practices: https://developers.klarna.com/documentation/klarna-checkout/best-practices/#address-updated
If you run Klarna Checkout on localhost, you need to make the localhost-based application reachable by Klarna over HTTP (e.g., for the address_update callback).
You can do this via services like ngrok.
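For example, a minimal tunnel for a store served locally on port 80 (adjust the port to whatever your web server listens on):
ngrok http 80
ngrok then prints a public forwarding URL that Klarna can reach.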
In case of this error it's good to know the following:
Klarna Checkout calls these shipping-related callbacks on the checkout page:
address_update
shipping_option_update
If Klarna doesn't receive an answer to a callback request within 10 seconds, it closes the connection, and eventually you will see the error message. You can find the corresponding entries in your HTTP server's access logs, for example status 499 in nginx. In the Klarna Merchant Portal, on the other hand, you will see logs with status "???".
The callback endpoint may be unreachable, or may fail to respond within the 10-second limit, for several reasons:
if you work on localhost, configure a tunnel (for example with ngrok) to expose your local environment so Klarna can reach it.
make sure the Magento cache is enabled.
disable Xdebug (unless it's version >= 3).
check your internet connection quality.
check php.ini and HTTP server performance-related settings.
If the error still occurs, you can debug the callback API to find the bottleneck. For example, you can use the logs in the Klarna Merchant Portal to build a Postman request against the callback API, as in the sketch below.
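A minimal sketch of such a check with curl, timing the round trip (the callback URL and payload file here are hypothetical; take the real ones from the Merchant Portal logs):
curl -o /dev/null -s -w 'total: %{time_total}s\n' \
  -X POST 'https://your-store.example/checkout/klarna/api/address_update' \
  -H 'Content-Type: application/json' \
  -d @callback_payload.json
If the reported total regularly approaches 10 seconds, the callback is your bottleneck.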
I'm still relatively new to Elasticsearch. Currently I'm attempting to switch from Solr to Elasticsearch, and I'm seeing a huge increase in CPU usage when ES serves our production website. The site sends anywhere from 10,000 to 30,000 requests per second to ES. Solr handles that load just fine on our current hardware.
The books index mapping: https://pastebin.com/bKM9egPS
A query for a book: https://pastebin.com/AdfZ895X
ES is hosted on AWS on an m4.xlarge.elasticsearch instance.
Our cluster is set up as follows (anything not included is default):
"persistent": {
"cluster": {
"routing": {
"allocation": {
"cluster_concurrent_rebalance": "2",
"node_concurrent_recoveries": "2",
"disk": {
"watermark": {
"low": "15.0gb",
"flood_stage": "5.0gb",
"high": "10.0gb"
}
},
"node_initial_primaries_recoveries": "4"
}
}
},
"indices": {
"recovery": {
"max_bytes_per_sec": "60mb"
}
}
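For reference, persistent settings like these are set through the cluster settings API, e.g.:
PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "60mb"
  }
}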
Our nodes have the following configuration:
"_nodes": {
"total": 2,
"successful": 2,
"failed": 0
},
"cluster_name": "cluster",
"nodes": {
"####": {
"name": "node1",
"version": "6.3.1",
"build_flavor": "oss",
"build_type": "zip",
"build_hash": "####",
"roles": [
"master",
"data",
"ingest"
]
},
"###": {
"name": "node2",
"version": "6.3.1",
"build_flavor": "oss",
"build_type": "zip",
"build_hash": "###",
"roles": [
"master",
"data",
"ingest"
]
}
}
Can someone please help me figure out what exactly is happening so I can get this deployment finished?
I have a managed cluster hosted by elastic.co. Here is the configuration:
Platform: Amazon Web Services
Memory: 4 GB
Storage: 96 GB
SSD: Yes
High availability: Yes (2 data centers)
Each index in this cluster contains log data for exactly one day. The average index size is 15 MB and the average document count is 15,000. The cluster is not under any kind of pressure (JVM heap, indexing and search times, and disk space are all well within comfortable limits).
When I opened a previously closed index, the cluster turned RED. Here are some metrics I found by querying Elasticsearch.
GET /_cluster/allocation/explain
{
"index": "some_index_name", # 1 Primary shard , 1 replica shard
"shard": 0,
"primary": true
}
Response :
"unassigned_info": {
"reason": "ALLOCATION_FAILED"
"failed_allocation_attempts": 3,
"details": "failed recovery, failure RecoveryFailedException[[some_index_name][0]: Recovery failed on {instance-*****}{Hash}{HASH}{IP}{IP}{logical_availability_zone=zone-1, availability_zone=***, region=***}]; nested: IndexShardRecoveryException[failed to fetch index version after copying it over]; nested: IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: []]; nested: IndexNotFoundException[no segments* file found in store(mmapfs(/app/data/nodes/0/indices/MFIFAQO2R_ywstzqrfbY4w/0/index)): files: []]; ",
"last_allocation_status": "no_valid_shard_copy"
},
"can_allocate": "no_valid_shard_copy",
"allocate_explanation": "cannot allocate because all found copies of the shard are either stale or corrupt",
"node_allocation_decisions": [
{
"node_name": "instance-***",
"node_decision": "no",
"store": {
"in_sync": false,
"allocation_id": "RANDOM_HASH",
"store_exception": {
"type": "index_not_found_exception",
"reason": "no segments* file found in SimpleFSDirectory#/app/data/nodes/0/indices/RANDOM_HASH/0/index lockFactory=org.apache.lucene.store.NativeFSLockFactory#346e1b99: files: []"
}
}
},
{
"node_name": "instance-***",
"node_attributes": {
"logical_availability_zone": "zone-0",
},
"node_decision": "no",
"store": {
"found": false
}
}
I've tried rerouting the shard to a node, even with the data-loss flag set to true.
POST _cluster/reroute
{
"commands" : [
{"allocate_stale_primary" : {
"index" : "some_index_name", "shard" : 0,
"node" : "instance-***",
"accept_data_loss" : true
}
}
]
}
Response:
"acknowledged": true,
"state": {
"version": 338190,
"state_uuid": "RANDOM_HASH",
"master_node": "RANDOM_HASH",
"blocks": {
"indices": {
"restored_**: {
"4": {
"description": "index closed",
"retryable": false,
"levels": [
"read",
"write"
]
}
},
"restored_**": {
"4": {
"description": "index closed",
"retryable": false,
"levels": [
"read",
"write"
]
}
}
}
},
"routing_table": {
"indices": {
"SOME_INDEX_NAME": {
"shards": {
"0": [
{
"state": "INITIALIZING",
"primary": true,
"relocating_node": null,
"shard": 0,
"index": "SOME_INDEX_NAME",
"recovery_source": {
"type": "EXISTING_STORE"
},
"allocation_id": {
"id": "HASH"
},
"unassigned_info": {
"reason": "ALLOCATION_FAILED",
"failed_attempts": 4,
"delayed": false,
"details": "same as explanation above ^ ",
"allocation_status": "no_valid_shard_copy"
}
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 0,
"index": "some_index_name",
"recovery_source": {
"type": "PEER"
},
"unassigned_info": {
"reason": "INDEX_REOPENED",
"delayed": false,
"allocation_status": "no_attempt"
}
}
]
}
},
Any suggestions are welcome. Thanks and regards.
This occurs when the master node is brought down abruptly.
Here are the steps I took to resolve the same issue when I encountered it:
Step 1: Check the allocation
curl -XGET http://localhost:9200/_cat/allocation?v
Step 2: Check the shard stores
curl -XGET http://localhost:9200/_shard_stores?pretty
Look for the "index", "shard" and "node" entries that contain the error you posted.
The error should read: "no segments* file found in SimpleFSDirectory#/...."
Step 3: Now reroute that index as shown below. Note that allocate_empty_primary allocates a brand-new empty shard (any data previously in it is lost); unlike allocate_stale_primary, it does not require an existing on-disk copy, which is why it works when no segments files are found.
curl -XPOST 'http://localhost:9200/_cluster/reroute?master_timeout=5m' \
  -H 'Content-Type: application/json' \
  -d '{ "commands": [ { "allocate_empty_primary": { "index": "IndexFromStep2", "shard": ShardFromStep2, "node": "NodeFromStep2", "accept_data_loss": true } } ] }'
Step 4: Repeat Step 2 and Step 3 until you see the following output.
curl -XGET 'http://localhost:9200/_shard_stores?pretty'
{
"indices" : { }
}
Your cluster should go green soon.
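You can watch the recovery and confirm the status with the cluster health API:
curl -XGET 'http://localhost:9200/_cluster/health?pretty'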
I have an Elasticsearch cluster on two nodes, with version 1.3.1:
{
"status": 200,
"name": "Blue Streak",
"version": {
"number": "1.3.1",
"build_hash": "2de6dc5268c32fb49b205233c138d93aaf772015",
"build_timestamp": "2014-07-28T14:45:15Z",
"build_snapshot": false,
"lucene_version": "4.9"
},
"tagline": "You Know, for Search"
}
Now I need to add a node to this cluster, but the version of Elasticsearch on the new node is 1.5.2:
{
"status": 503,
"name": "Adrian Toomes",
"cluster_name": "tg-elasticsearch",
"version": {
"number": "1.5.2",
"build_hash": "62ff9868b4c8a0c45860bebb259e21980778ab1c",
"build_timestamp": "2015-04-27T09:21:06Z",
"build_snapshot": false,
"lucene_version": "4.10.4"
},
"tagline": "You Know, for Search"
}
Is this possible? When I try to connect, I get the following error:
[2015-08-13 14:41:16,840][WARN ][transport.netty ] [10.33.57.169] Message not fully read (request) for [21602710] and action [discovery/zen/join/validate], resetting
[2015-08-13 14:41:16,859][INFO ][discovery.zen ] [10.33.57.169] failed to send join request to master [[Blue Streak][iNUjaFvqTu6nbzjgOr14rQ][tg-db3][inet[/10.65.40.65:9300]]], reason [RemoteTransportException[[Blue Streak][inet[/10.65.40.65:9300]][discovery/zen/join]]; nested: RemoteTransportException[[10.33.57.169][inet[/10.33.57.169:9300]][discovery/zen/join/validate]]; nested: ElasticsearchIllegalArgumentException[No custom index metadata factory registered for type [rivers]]; ]