In one index I have two mapping types:
"mappings" : {
"deliveries" : {
"properties" : {
"#timestamp": { "type" : "date", "format": "yyyy-MM-dd" },
"receiptName" : { "type" : "text" },
"amountDelivered" : { "type" : "integer" },
"amountSold" : { "type" : "integer" },
"sellingPrice" : { "type" : "float" },
"earned" : { "type" : "float" }
}
},
"expenses" : {
"properties" : {
"#timestamp": { "type" : "date", "format": "yyyy-MM-dd" },
"description": { "type" : "text" },
"amount": { "type": "float" }
}
}
}
Now I want to create a simple pie chart in Kibana that sums up deliveries.earned and expenses.amount.
Is this possible, or do I have to switch to a client application? The number of documents (2 or 3 a month) is really too small to justify starting any development here xD
You can create a simple scripted field through Kibana that maps the amount and earned fields to the same field, called transaction_amount.
Painless script:
if (doc['earned'].size() > 0) { return doc['earned'].value; } else { return doc['amount'].value; }
Then you can create a Pie Chart with "Slice Size" configured as the sum of transaction_amount and "Split Slices" configured as a Terms Aggregation on _type.
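If you want to sanity-check the numbers outside Kibana, the same logic works as a scripted sum aggregation in the console. A minimal sketch, assuming an index named ledger (a placeholder, substitute your own index name) and a mapping-types-era cluster where _type is aggregatable:
GET ledger/_search
{
  "size": 0,
  "aggs": {
    "by_type": {
      "terms": { "field": "_type" },
      "aggs": {
        "transaction_amount": {
          "sum": {
            "script": {
              "lang": "painless",
              "source": "doc['earned'].size() > 0 ? doc['earned'].value : doc['amount'].value"
            }
          }
        }
      }
    }
  }
}
Each _type bucket then carries the same total that the corresponding pie slice would show.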
I want to achieve the following in Kibana: [screenshot of the desired visualization omitted]
I can't figure out how to achieve this. This is my setup:
{
  "mappings" : {
    "properties" : {
      "count_value" : {
        "type" : "integer"
      },
      "date" : {
        "type" : "date",
        "format" : "yyyy-MM-dd"
      },
      "datetime" : {
        "type" : "date",
        "format" : "yyyy-MM-ddHH:mm:ss"
      },
      "time" : {
        "type" : "date",
        "format" : "HH:mm:ss"
      }
    }
  }
}
Are my mappings wrong? I've tried the mapping above but can't get the desired effect.
I have a nested object indexed in Elasticsearch (7.10) and I need to visualize it with a Kibana table. The problem is that Kibana throws the values from the nested fields that share the same name into one column.
Part of the index:
{
  "index" : {
    "mappings" : {
      "properties" : {
        "data1" : {
          "type" : "keyword"
        },
        "Details" : {
          "type" : "nested",
          "properties" : {
            "Amount" : {
              "type" : "float"
            },
            "Currency" : {
              "type" : "text",
              "fields" : {
                "keyword" : {
                  "type" : "keyword",
                  "ignore_above" : 256
                }
              }
            },
            "DetailType" : {
              "type" : "keyword"
            },
            "Price" : {
              "type" : "float"
            },
            "Quantity" : {
              "type" : "float"
            },
            "TotalAmount" : {
              "type" : "float"
              .......
The problem in the table: [screenshot of the Kibana table omitted]
How can I get three rows named Details, each with one split term (e.g. DetailType: "start_fee")?
Update:
I could query the nested object in the console:
GET _search
{
  "query": {
    "nested": {
      "path": "Details",
      "query": {
        "bool": {
          "must": [
            { "match": { "Details.DetailType": "energybased_fee" } }
          ]
        }
      },
      "inner_hits": {}
    }
  }
}
But how can I visualize only the "inner_hits" values in the table?
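For reference, the per-DetailType breakdown the table needs can also be produced in the console with a nested terms aggregation. A sketch, where summing Details.TotalAmount is an assumption (swap in whichever metric the table should actually show):
GET _search
{
  "size": 0,
  "aggs": {
    "details": {
      "nested": { "path": "Details" },
      "aggs": {
        "by_detail_type": {
          "terms": { "field": "Details.DetailType" },
          "aggs": {
            "total_amount": { "sum": { "field": "Details.TotalAmount" } }
          }
        }
      }
    }
  }
}
Each DetailType bucket (e.g. start_fee) then corresponds to one row of the desired table.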
My purpose is to calculate the yield of each benchId, i.e. for each bench, the percentage of teams that have isPassed=true the first time they run the test. I would like a visualization of the yield for each bench.
My Elasticsearch mapping is:
"test-logs" : {
"mappings" : {
"log" : {
"properties" : {
"benchGroup" : {
"type" : "keyword"
},
"benchId" : {
"type" : "keyword"
},
"date" : {
"type" : "date",
"format" : "yyyy/MM/dd HH:mm:ss"
},
"duration" : {
"type" : "float"
},
"finalStatus" : {
"type" : "keyword"
},
"isCss" : {
"type" : "boolean"
},
"isPassed" : {
"type" : "boolean"
},
"machine" : {
"type" : "keyword"
},
"sha1" : {
"type" : "keyword"
},
"uuid" : {
"type" : "keyword"
},
"team" : {
"type" : "keyword"
}
I tried to divide this issue into several sub-issues. I think I need to aggregate the documents by benchId, then sub-aggregate them by team, ordering by date and taking the first document. Then I think I need a script to calculate isPassed=true first attempts divided by all first attempts.
No idea how to visualize the result on Kibana though.
I managed to create aggregations with this search:
GET _search
{
  "size" : 0,
  "aggs" : {
    "benchId" : {
      "terms" : {
        "field" : "benchId"
      },
      "aggs" : {
        "teams" : {
          "terms" : {
            "script" : "doc['uut'].join(' & ')",
            "size" : 10
          }
        }
      }
    }
  }
}
I get the result I want, but I have difficulty ordering by date ascending and limiting the result to one document per uut.
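One way to keep only the earliest document per team bucket is a top_hits sub-aggregation sorted by date ascending. A minimal sketch on top of the search above (the uut script is copied from the original; whether it matches the final mapping is an assumption):
GET _search
{
  "size": 0,
  "aggs": {
    "benchId": {
      "terms": { "field": "benchId" },
      "aggs": {
        "teams": {
          "terms": {
            "script": "doc['uut'].join(' & ')",
            "size": 10
          },
          "aggs": {
            "first_attempt": {
              "top_hits": {
                "size": 1,
                "sort": [ { "date": { "order": "asc" } } ],
                "_source": [ "isPassed", "date" ]
              }
            }
          }
        }
      }
    }
  }
}
The yield per bench can then be computed from the isPassed value of each bucket's single hit.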
I am trying to work with geo data in Elasticsearch. I have an index with two separate fields, latitude and longitude, both stored as doubles. I want to use the copy_to feature of Elasticsearch to copy both field values to a third field of type geo_point. I tried doing that, but it's not working as intended.
{
  "mappings": {
    "properties": {
      "unique_id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "location_data": {
        "properties": {
          "latitude": {
            "type": "float",
            "copy_to": "last_location"
          },
          "longitude": {
            "type": "float",
            "copy_to": "last_location"
          },
          "last_location": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
When I index a sample document such as
{
  "unique_id": "12345_mytest",
  "location_data": {
    "latitude": 37.16,
    "longitude": -124.76
  }
}
you can see in the resulting mapping that the last_location field, which was supposed to be inside the location_data object, is also populated at the root level, with a data type other than geo_point:
{
  "mappings" : {
    "properties" : {
      "last_location" : {
        "type" : "float"
      },
      "location_data" : {
        "properties" : {
          "last_location" : {
            "type" : "geo_point",
            "store" : true
          },
          "latitude" : {
            "type" : "float",
            "copy_to" : [
              "last_location"
            ]
          },
          "longitude" : {
            "type" : "float",
            "copy_to" : [
              "last_location"
            ]
          }
        }
      },
      "unique_id" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword"
          }
        }
      }
    }
  }
}
Furthermore, when I query over the field I don't get the expected results.
This doesn't work; any other ideas or ways to do this? I know I could fix it at the source or alter the data before indexing, but I don't have the luxury of doing that right away. Any other way of altering the mapping is most welcome. Thanks in advance for any pointers.
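If copy_to cannot be made to produce a valid geo_point (in the observed mapping the copied values land at the root as plain floats), one alternative that leaves the source data untouched is an ingest pipeline that assembles the point at index time. A minimal sketch; the pipeline name build-geo-point is hypothetical:
PUT _ingest/pipeline/build-geo-point
{
  "description": "sketch: combine latitude/longitude into a geo_point (pipeline name is hypothetical)",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "ctx.location_data.last_location = ['lat': ctx.location_data.latitude, 'lon': ctx.location_data.longitude]"
      }
    }
  ]
}
Indexing with ?pipeline=build-geo-point then fills location_data.last_location, which can stay mapped as geo_point once the copy_to directives are removed.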
I am using Elasticsearch version 6.3.1.
I am creating a nested field; I created this field to append all documents that share the same ID.
Here is the schema for my index:
curl -XPUT 'localhost:9200/axes_index_test12?pretty' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "axes_type_test12": {
      "properties": {
        "totalData": {
          "type": "nested",
          "properties": {
            "gpsdt": {
              "type": "date",
              "format": "dateOptionalTime"
            },
            "extbatlevel": {
              "type": "integer"
            },
            "intbatlevel": {
              "type": "integer"
            },
            "lastgpsdt": {
              "type": "date",
              "format": "dateOptionalTime"
            },
            "satno": {
              "type": "integer"
            },
            "srtangle": {
              "type": "integer"
            }
          }
        },
        "imei": {
          "type": "long"
        },
        "date": {
          "type": "date",
          "format": "dateOptionalTime"
        },
        "id": {
          "type": "long"
        }
      }
    }
  }
}'
To append to the existing array I call the following API. Here is the document that I append:
data = {
    "script": {
        "source": "ctx._source.totalData.add(params.count)",
        "lang": "painless",
        "params": {
            "count": {
                "gpsdt": gpsdt,
                "analog1": analog1,
                "analog2": analog2,
                "analog3": analog3,
                "analog4": analog4,
                "digital1": digital1,
                "digital2": digital2,
                "digital3": digital3,
                "digital4": digital4,
                "extbatlevel": extbatlevel,
                "intbatlevel": intbatlevel,
                "lastgpsdt": lastgpsdt,
                "latitude": latitude,
                "longitude": longitude,
                "odo": odo,
                "odometer": odometer,
                "satno": satno,
                "srtangle": srtangle,
                "speed": speed
            }
        }
    }
}
Document serialization:
import json

json_data = json.dumps(data)
The API URL is:
API_ENDPOINT = "http://localhost:9200/axes_index_test12/axes_type_test12/"+str(documentId)+"/_update"
And finally I call this API:
import requests

headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
r = requests.post(url=API_ENDPOINT, data=json_data, headers=headers)
Everything works, but performance is poor when I append new entries to the existing array.
So please suggest what changes I should make.
I have a 4-node cluster: 1 master, 2 data nodes, and 1 coordinating node.
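One common way to reduce the per-request overhead of many such updates is to batch them through the _bulk API rather than issuing one HTTP call per document. Each update still rewrites the whole document, so appends get slower as totalData grows, but batching at least cuts round trips. A minimal sketch, with a hypothetical document ID and sample values, using the index and type names above:
POST _bulk
{ "update": { "_index": "axes_index_test12", "_type": "axes_type_test12", "_id": "12345" } }
{ "script": { "source": "ctx._source.totalData.add(params.count)", "lang": "painless", "params": { "count": { "gpsdt": "2018-07-01T10:00:00", "extbatlevel": 12, "intbatlevel": 4, "satno": 9, "srtangle": 45 } } } }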