Elasticsearch: What's the shape/model of the failures object in the delete by query response?

According to the Elasticsearch documentation, the delete by query response looks like this:
{
  "took" : 147,
  "timed_out": false,
  "total": 119,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}
How do I find out what the shape/model of the failures object is so that I can add failure handling logic on the client-side?
Edit: I found the Failure class in the Elasticsearch GitHub repository, which is a good starting point.
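For reference, here is a minimal client-side sketch of consuming the failures array, assuming each entry carries the fields that Failure class serializes (index, id, cause, status); search-phase failures may look different, and the index name and cluster URL are placeholders:

# Hedged sketch: iterate the `failures` array of a delete-by-query response.
# Field names ("index", "id", "cause", "status") are assumed from the Failure
# class linked above; verify them against your Elasticsearch version.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

resp = es.delete_by_query(
    index="my-index",                                  # hypothetical index name
    body={"query": {"match": {"user": "kimchy"}}},
    conflicts="proceed",                               # keep going on version conflicts
)

for failure in resp["failures"]:
    cause = failure.get("cause", {})
    print(f"delete failed for {failure.get('index')}/{failure.get('id')}: "
          f"{cause.get('type')} - {cause.get('reason')} "
          f"(status {failure.get('status')})")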

Related

no shard available action exception on kibana discover

When I wanted to view the logs in Kibana, I received this error:
1 of 37 shards failed. The data you are seeing might be incomplete or wrong.
This is the response:
{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 21,
    "successful": 20,
    "skipped": 20,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": "tourism-2022.12.11",
        "node": null,
        "reason": {
          "type": "no_shard_available_action_exception",
          "reason": null,
          "index_uuid": "j2J6dUvTQ_q7qeyyU56bag",
          "shard": "0",
          "index": "tourism-2022.12.11"
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": 0,
    "hits": []
  }
}
I deleted some indexes and expanded the PVC, but nothing worked.
Check the cluster status via the Kibana console; if you don't have Kibana available, convert the commands to curl requests.
Check the Elasticsearch cluster status:
GET /_cluster/health?wait_for_status=yellow&timeout=50s
Check index status:
GET /_cat/indices/tourism-2022.12.11?v=true&s=index
If all shards are green, do you have documents available in your index?
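If Kibana isn't available, the two checks above can be made against the REST API directly; here is a rough Python sketch using the requests library, with the cluster URL as a placeholder:

# Hedged sketch: the same health and index checks over HTTP.
import requests

base = "http://localhost:9200"  # assumed cluster address

health = requests.get(f"{base}/_cluster/health",
                      params={"wait_for_status": "yellow", "timeout": "50s"})
print(health.json())

indices = requests.get(f"{base}/_cat/indices/tourism-2022.12.11",
                       params={"v": "true", "s": "index", "format": "json"})
print(indices.json())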

Put json value to Hazelcast as json format from nifi

How can I put a JSON flow file into the Hazelcast cache?
When I put the data, it ends up in a binary format.
Flow file content example:
{
  "InsCode": 72055516303318490,
  "IsinCode": "IRO9MKBT6041",
  "Symbol": "ضمخا8009",
  "Company": "اختيارخ اخابر-12000-1401/08/29",
  "LastTradeTime": 61457,
  "FirstPrice": 0,
  "ClosePrice": 1,
  "LastPrice": 1,
  "TransactionCount": 0,
  "Volume": 0,
  "Value": 0,
  "LowPrice": 0,
  "HighPrice": 0,
  "YesyterdayPrice": 1,
  "EPS": null,
  "BaseVolume": 1,
  "UnkownCol1": 200,
  "Flow": 3,
  "SectorID": 64,
  "MaxAllowedPrice": 100000,
  "MinAlloedPrice": 1,
  "NumberOfShare": 1000,
  "YVal": 311,
  "DateKey": "20220516",
  "CreatedAt": "2022-05-16 09:54:01.887Z"
}
But when I get the data back from the Hazelcast API, the content-type in the response header is application/binary.
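One approach worth trying (a sketch, not tested against NiFi): store the payload as a HazelcastJsonValue so Hazelcast keeps it as queryable JSON rather than an opaque binary blob. The cluster address and map name are assumptions, and the NiFi processor side is not shown:

# Hedged sketch: writing the flow file content as JSON into a Hazelcast map.
import hazelcast
from hazelcast.core import HazelcastJsonValue

client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])  # assumed address
prices = client.get_map("prices").blocking()                            # hypothetical map name

doc = '{"IsinCode": "IRO9MKBT6041", "LastPrice": 1}'   # trimmed sample payload
prices.put("IRO9MKBT6041", HazelcastJsonValue(doc))

stored = prices.get("IRO9MKBT6041")
print(stored.to_string())   # JSON text rather than application/binary

client.shutdown()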

Convert Flash Coordinate into GeoJSON

I have a legacy .swf file that my team used to create a custom map.
The .swf file looks like this, in the following format:
{
"Signature": "CWS",
"Version": 8,
"FileLength": 87736,
"FrameSize": {
"Xmin": 0,
"Xmax": 14400,
"Ymin": 0,
"Ymax": 10000
},
"FrameRate": 12,
"FrameCount": 1,
"Tags": [
{
"TagName": "FileAttributes",
"Length": 4,
"Reserved": 0,
"HasMetaData": 0,
"SWFFlagsAS3": 0,
"SWFFlagsNoCrossDomainCache": 0,
"SWFFlagsUseNetwork": 0,
"UNDEFINED": 0
},
{
"TagName": "SetBackgroundColor",
"Length": 3,
"BackgroundColor": [
51,
51,
51
]
},
{
"TagName": "Protect",
"Length": 0
},
{
"TagName": "DefineShape4",
"Length": 309,
"ShapeId": 1,
"ShapeBounds": {
"Xmin": 10629,
"Xmax": 12137,
"Ymin": 4084,
"Ymax": 4748
},
"EdgeBounds": {
"Xmin": 10630,
"Xmax": 12136,
"Ymin": 4085,
"Ymax": 4747
},
"Reserved": 0,
"UsesFillWindingRule": 0,
"UsesNonScalingStrokes": 0,
"UsesScalingStrokes": 1,
"Shapes": {
"FillStyles": [
{
"FillStyleType": 0,
"FillStyleName": "solid fill",
"Color": [
255,
255,
102,
255
]
}
],
"LineStyles": [
{
"Width": 2,
"StartCapStyle": 0,
"JoinStyle": 0,
"HasFillFlag": 0,
"NoHScaleFlag": 0,
"NoVScaleFlag": 0,
"PixelHintingFlag": 0,
"Reserved": 0,
"NoClose": 0,
"EndCapStyle": 0,
"Color": [
255,
255,
255,
255
]
},
{
"Width": 2,
"StartCapStyle": 0,
"JoinStyle": 0,
"HasFillFlag": 0,
"NoHScaleFlag": 0,
"NoVScaleFlag": 0,
"PixelHintingFlag": 0,
"Reserved": 0,
"NoClose": 0,
"EndCapStyle": 0,
"Color": [
255,
255,
102,
255
]
}
],
"FillBits": 1,
"LineBits": 2,
"ShapeRecords": [
{
"RecordType": "stylechange",
"MoveDeltaX": 10630,
"MoveDeltaY": 4306,
"LineStyle": 1
},
{
"RecordType": "straightedge",
"LineType": "General",
"DeltaX": 23,
"DeltaY": -1
},
},
}
What format is this? And is there a way to convert it to GeoJSON format so I can use it with d3.js?
To be specific, this is the data for the US combatant command (COCOM) map. I could not find this map in GeoJSON format anywhere on the Internet, so my only hope is to convert the legacy data into GeoJSON.
I ended up drawing my own COCOM map using geojson.io. I don't think there is a simple way to convert ShapeRecords to GeoJSON since it is a completely different coordinate system.
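For what it's worth, here is a rough sketch of how the dumped ShapeRecords could be accumulated into a GeoJSON-style geometry. It assumes the values are twips (1/20 px), treats stylechange moves as absolute positions and straightedge deltas as relative offsets, and ignores curved edges and any real map projection, which is why hand-drawing in geojson.io may remain the practical route:

# Hedged sketch: accumulate SWF ShapeRecords into a GeoJSON MultiLineString.
import json

TWIPS_PER_PX = 20.0

def shape_records_to_geojson(records):
    rings, current = [], []
    x = y = 0.0
    for rec in records:
        if rec["RecordType"] == "stylechange" and "MoveDeltaX" in rec:
            if current:
                rings.append(current)
            # move records are treated as absolute positions here
            x, y = rec["MoveDeltaX"] / TWIPS_PER_PX, rec["MoveDeltaY"] / TWIPS_PER_PX
            current = [[x, y]]
        elif rec["RecordType"] == "straightedge":
            # straight edges are relative offsets from the current point
            x += rec.get("DeltaX", 0) / TWIPS_PER_PX
            y += rec.get("DeltaY", 0) / TWIPS_PER_PX
            current.append([x, y])
    if current:
        rings.append(current)
    return {"type": "MultiLineString", "coordinates": rings}

sample = [
    {"RecordType": "stylechange", "MoveDeltaX": 10630, "MoveDeltaY": 4306, "LineStyle": 1},
    {"RecordType": "straightedge", "LineType": "General", "DeltaX": 23, "DeltaY": -1},
]
print(json.dumps(shape_records_to_geojson(sample)))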

How can I resolve the increase in index size when using nested objects in elasticsearch?

The total number of documents is 1 billion.
When I configure the index with some fields mapped as nested objects, the document count and the index size both increase.
There are about 20 nested objects in a document.
When I index 1 billion documents, the indexed document count is 20 billion and the index size is about 20 TB.
However, when I remove the nested objects, the indexed document count is 1 billion and the index size is about 5 TB.
Simply removing the nested objects is not an option; I cannot provide the service with that index structure.
I know why nested objects produce a higher document count than a simple object configuration.
But I am asking why the index is four times larger and how to fix it.
Elasticsearch version: 5.1.1
The sample data is as follows. Fields mapped as nested objects: idds, ishs, resources, versions.
{
"fileType": {
"asdFormat": 1
},
"dac": {
"pe": {
"cal": {
"d1": -4634692645508395000,
"d2": -5805223225419042000,
"d3": -1705264433
},
"bytes": "6a7068e0",
"entry": 0,
"count": 7,
"css": {
"idh": 0,
"ish": 0,
"ifh": 0,
"ioh": 0,
"ish": 0,
"ied": 0,
"exp": 0,
"imp": 0,
"sec": 0
},
"ff": {
"field1": 23117,
"field2": 144,
"field3": 3,
"field4": 0,
"field5": 4,
"field6": 0,
"field7": 65535,
"field8": 0,
"field9": 184,
"field10": 0,
"field11": 0,
"field12": 0,
"field13": 64,
"field14": 0,
"field15": 40104,
"field16": 64563,
"field17": 0,
"field18": 0,
"field19": 0,
"field20": 0,
"field21": 0,
"field22": 0,
"field23": 0,
"field24": 0,
"field25": 0,
"field26": 0,
"field27": 0,
"field28": 0,
"field29": 0,
"field30": 0,
"field31": 224
},
"ifh": {
"mc": 332,
"nos": 3,
"time": 1091599505,
"ps": 0,
"ns": 0,
"soh": 224,
"chart": 271
},
"ioh": {
"magic": 267,
"mlv": 7,
"nlv": 10,
"soc": 80384,
"soid": 137216,
"soud": 0,
"aep": 70290,
"boc": 4096,
"bod": 86016,
"aib": "16777216",
"si": 4096,
"fa": 512,
"mosv": 5,
"nosv": 1,
"miv": 5,
"niv": 1,
"msv": 4,
"nsv": 0,
"wv": 0,
"si": 262144,
"sh": 1024,
"cs": 0,
"ss": 2,
"dllchart": 32768,
"ssr": "262144",
"ssc": "262144",
"ssh": "1048576",
"shc": "4096",
"lf": 0,
"nor": 16
},
"idds": [
{
"id": 1,
"address": 77504,
"size": 300
},
{
"id": 2,
"address": 106496,
"size": 134960
},
{
"id": 6,
"address": 5264,
"size": 28
},
{
"id": 11,
"address": 592,
"size": 300
},
{
"id": 12,
"address": 4096,
"size": 1156
}
],
"ishs": [
{
"id": 0,
"name": ".text",
"size": 79920,
"address": 4096,
"srd": 80384,
"ptr": 1024,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3758096480,
"ex1": 60404022,
"ex2": 61903965,
"ex": 61153993.5
},
{
"id": 1,
"name": ".data",
"size": 17884,
"address": 86016,
"srd": 2048,
"ptr": 81408,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3221225536,
"ex1": 27817394,
"ex2": -1,
"ex": 27817394
},
{
"id": 2,
"name": ".rsrc",
"size": 155648,
"address": 106496,
"srd": 135680,
"ptr": 83456,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3758096448,
"ex1": 38215005,
"ex2": 46960547,
"ex": 42587776
}
],
"resources": [
{
"id": 2,
"count": 3,
"hash": 658696779440676200
},
{
"id": 3,
"count": 14,
"hash": 4671329014159995000
},
{
"id": 5,
"count": 30,
"hash": -6413921454731808000
},
{
"id": 6,
"count": 17,
"hash": 8148183923057157000
},
{
"id": 14,
"count": 4,
"hash": 8004262029246967000
},
{
"id": 16,
"count": 1,
"hash": 7310592488525726000
},
{
"id": 2147487240,
"count": 2,
"hash": -7466967570237519000
}
],
"upx": {
"path": "xps",
"d64": 3570326159822345700
},
"versions": [
{
"language": 1042,
"codePage": 1200,
"companyName": "Microsoft Corporation",
"fileDescription": "Files and Settings Transfer Wizard",
"fileVersion": "5.1.2600.2180 (xpsp_sp2_rtm.040803-2158)",
"internalName": "Microsoft",
"legalCopyright": "Copyright (C) Microsoft Corp. 1999-2000",
"originalFileName": "calc.exe",
"productName": "Microsoft(R) Windows (R) 2000 Operating System",
"productVersion": "5.1.2600.2180"
}
],
"import": {
"dll": [
"GDI32.dll",
"KERNEL32.dll",
"USER32.dll",
"ole32.dll",
"ADVAPI32.dll",
"COMCTL32.dll",
"SHELL32.dll",
"msvcrt.dll",
"comdlg32.dll",
"SHLWAPI.dll",
"SETUPAPI.dll",
"Cabinet.dll",
"LOG.dll",
"MIGISM.dll"
],
"count": 14,
"d1": -149422985349905340,
"d2": -5344971616648705000,
"d3": 947564411044974800
},
"ddSec0": {
"d1": -3007779250746558000,
"d4": -2515772085422514700
},
"ddSec2": {
"d2": -4422408392580008000,
"d4": -8199520081862749000
},
"ddSec3": {
"d1": -8199520081862749000
},
"cdp": {
"d1": 787971,
"d2": 39,
"d3": 101980696,
"d4": 3,
"d5": 285349133
},
"cde": {
"d1": 67242500,
"d2": 33687042,
"d3": 218303490,
"d4": 1663632132,
"d5": 0
},
"cdm": {
"d1": 319293444,
"d2": 2819,
"d3": 168364553,
"d4": 50467081,
"d5": 198664
},
"cdb": {
"d1": 0,
"d2": 0,
"d3": 0,
"d4": 0,
"d5": 0
},
"mm": {
"d0": -3545367393134139000,
"d1": 1008464166428372900,
"d2": -6313842304565328000,
"d3": -5015640502060250000
},
"ser": 17744,
"ideal": 0,
"map": 130,
"ol": 0
}
},
"fileSize": 219136
}
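For illustration, here is a minimal sketch of how one of the listed nested fields (idds) might be mapped as nested versus as a plain object. Each element of a nested array is indexed as its own hidden Lucene document, which is where the roughly 20x document count (and much of the extra disk usage) comes from. The index names are placeholders, and the typeless mapping syntax shown is newer than 5.1.1, which also requires a mapping type:

# Hedged sketch: nested vs. plain object mapping for the idds array.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

nested_mapping = {"properties": {"dac": {"properties": {"pe": {"properties": {
    "idds": {
        "type": "nested",                      # one extra Lucene doc per array element
        "properties": {"id": {"type": "integer"},
                       "address": {"type": "long"},
                       "size": {"type": "long"}}}}}}}}}

object_mapping = {"properties": {"dac": {"properties": {"pe": {"properties": {
    "idds": {                                  # flattened: no extra docs, but array
        "properties": {"id": {"type": "integer"},      # elements can no longer be
                       "address": {"type": "long"},    # queried as independent units
                       "size": {"type": "long"}}}}}}}}}

# create-index syntax varies by client/ES version; 5.x also needs a type name
es.indices.create(index="samples-nested", body={"mappings": nested_mapping})
es.indices.create(index="samples-flat", body={"mappings": object_mapping})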

GetStats duration and interval parameters, clarifying API documentation for Jelastic API

https://docs.jelastic.com/api/?class=environment.Control&member=GetStats
At the above link in the Jelastic API documentation for the GetStats method, there are two parameters: duration and interval.
When querying the API, I can't figure out how these two parameters interact with each other.
If I query with the below, I would expect 100 records at a resolution of 1 minute:
/1.0/environment/control/rest/getstats?domain=[myDomiain]&session=[MySession]&duration=6000&interval=60&nodeid=[MyNode]
What I get back is 4 records, one for each hour, so I'm unsure how the parameters work.
Should I be using GetSumStats?
My final question is: what units are the cpu and mem stats in? MHz and bytes?
{
"iops_used": 0,
"duration": 3600,
"cpumhz": 3,
"start": "2016-05-03 08:00:00",
"disk": 2141,
"mem": 194840,
"cpu": 12254,
"capacity": 0,
"net": {
"in_int": 703019,
"out_int": 566947,
"in_ext": 46222,
"out_ext": 367209
}
},
{
"iops_used": 0,
"duration": 3600,
"cpumhz": 3,
"start": "2016-05-03 09:00:00",
"disk": 2141,
"mem": 171992,
"cpu": 10076,
"capacity": 0,
"net": {
"in_int": 156703,
"out_int": 314023,
"in_ext": 12627,
"out_ext": 13535
}
},
{
"iops_used": 0,
"duration": 3580,
"cpumhz": 3,
"start": "2016-05-03 10:00:00",
"disk": 2141,
"mem": 172400,
"cpu": 11198,
"capacity": 0,
"net": {
"in_int": 515521,
"out_int": 551317,
"in_ext": 10329,
"out_ext": 17161
}
},
{
"iops_used": 0,
"duration": 3601,
"cpumhz": 3,
"start": "2016-05-03 11:00:00",
"disk": 2141,
"mem": 172610,
"cpu": 10032,
"capacity": 0,
"net": {
"in_int": 153394,
"out_int": 310694,
"in_ext": 10285,
"out_ext": 11210
}
}
#dlearious, to use interval = 60 you should set the duration value to 3600. This is because Jelastic keeps detailed data hourly.
Also, the minimal interval you can use is 20.
Jelastic reports cpu in milliseconds and mem in bytes.
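Here is a minimal sketch of a GetStats call following that advice (duration=3600 with interval=60). The base URL, domain, session, and node id are placeholders, and the "stats" key used to unwrap the response is an assumption; check the response shape on your platform:

# Hedged sketch: requesting one hour of per-minute stats from GetStats.
import requests

resp = requests.get(
    "https://app.jelastic.com/1.0/environment/control/rest/getstats",  # placeholder host
    params={
        "domain": "my-env",          # hypothetical environment domain
        "session": "SESSION_TOKEN",  # placeholder session
        "nodeid": 12345,             # placeholder node id
        "duration": 3600,            # detailed data is kept per hour
        "interval": 60,              # expect ~60 one-minute records back
    },
)
for rec in resp.json().get("stats", []):   # wrapper key assumed
    # per the answer: cpu is in milliseconds, mem in bytes
    print(rec["start"], rec["cpu"], rec["mem"])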
