I want to apply a group by on only one field with Spring Boot and MongoDB. Let's say I have one collection in Mongo whose documents contain the following fields:
{
"_id": {
"$binary": {
"base64": "RZX+2GWyTD2DWyD/01ZPmA==",
"subType": "04"
}
},
"categoryType": "CRICKET",
"description": "I like Cricket",
}
{
"_id": {
"$binary": {
"base64": "RZX+2GWyTD2DWyD/01ZPmA==",
"subType": "04"
}
},
"categoryType": "FOOTBALL",
"description": "I love Footbal",
}
{
"_id": {
"$binary": {
"base64": "RZX+2GWyTD2DWyD/01ZPmA==",
"subType": "04"
}
},
"categoryType": "CRICKET",
"description": "I love Cricket",
}
{
"_id": {
"$binary": {
"base64": "RZX+2GWyTD2DWyD/01ZPmA==",
"subType": "04"
}
},
"categoryType": "Martial Art",
"description": "Martial Art",
}
{
"_id": {
"$binary": {
"base64": "RZX+2GWyTD2DWyD/01ZPmA==",
"subType": "04"
}
},
"categoryType": "FOOTBALL",
"description": "Football",
}
My goal is to display the records grouped by category. So how can I apply a group by with Spring Boot and MongoDB?
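For reference, a minimal sketch of the underlying aggregation, assuming the collection is named sports (a placeholder name): a $group stage on categoryType collects each group's descriptions. In Spring Data MongoDB the equivalent pipeline can be built with Aggregation.newAggregation(Aggregation.group("categoryType").push("description").as("descriptions")) and executed via MongoTemplate.aggregate(...).

```javascript
// Group documents by categoryType and collect their descriptions.
// "sports" is an assumed collection name.
db.sports.aggregate([
  {
    "$group": {
      "_id": "$categoryType",                      // the group key
      "descriptions": { "$push": "$description" }  // all descriptions in this group
    }
  }
])
```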
I have an Opensearch index with a string field message defined as below:
{"name":"message.message","type":"string","esTypes":["text"],"count":0,"scripted":false,"searchable":true,"aggregatable":false,"readFromDocValues":false}
Sample data:
"_source" : {
"message" : {
"message" : "user: AB, from: home, to: /app1"
}
}
I would like to parse the message field into JSON so that I can access the values message.user, message.from, and message.to individually.
How do I go about it?
You can use the JSON processor:
POST /_ingest/pipeline/_simulate
{
"pipeline": {
"description": "convert json to object",
"processors": [
{
"json": {
"field": "foo",
"target_field": "json_target"
}
}
]
},
"docs": [
{
"_index": "index",
"_id": "id",
"_source": {
"foo": "{\"name\":\"message.message\",\"type\":\"string\",\"esTypes\":[\"text\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false}\r\n"
}
}
]
}
Response:
{
"docs": [
{
"doc": {
"_index": "index",
"_id": "id",
"_version": "-3",
"_source": {
"foo": """{"name":"message.message","type":"string","esTypes":["text"],"count":0,"scripted":false,"searchable":true,"aggregatable":false,"readFromDocValues":false}
""",
"json_target": {
"esTypes": [
"text"
],
"readFromDocValues": false,
"name": "message.message",
"count": 0,
"aggregatable": false,
"type": "string",
"scripted": false,
"searchable": true
}
},
"_ingest": {
"timestamp": "2022-11-09T19:38:01.16232Z"
}
}
}
]
}
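Once the simulated output looks right, the pipeline can be stored and referenced at index time. A sketch, where json-parse and my-index are placeholder names:

```
PUT _ingest/pipeline/json-parse
{
  "description": "convert json string to object",
  "processors": [
    {
      "json": {
        "field": "foo",
        "target_field": "json_target"
      }
    }
  ]
}

POST my-index/_doc?pipeline=json-parse
{
  "foo": "{\"user\":\"AB\",\"from\":\"home\"}"
}
```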
I'm trying to return a Discovery Response, but supportedCookingModes only seems to accept standard values, and only in the format ["OFF","BAKE"], not custom values as indicated by the documentation. Any idea how to specify custom values?
{
"event": {
"header": {
"namespace": "Alexa.Discovery",
"name": "Discover.Response",
"payloadVersion": "3",
"messageId": "asdf"
},
"payload": {
"endpoints": [
{
"endpointId": "asdf",
"capabilities": [
{
"type": "AlexaInterface",
"interface": "Alexa.Cooking",
"version": "3",
"properties": {
"supported": [
{
"name": "cookingMode"
}
],
"proactivelyReported": true,
"retrievable": true,
"nonControllable": false
},
"configuration": {
"supportsRemoteStart": true,
"supportedCookingModes": [
{
"value": "OFF"
},
{
"value": "BAKE"
},
{
"value": "CUSTOM",
"customName": "FANCY_NANCY_MODE"
}
]
}
}
]
}
]
}
}
}
Custom cooking modes are brand-specific, and this functionality is not yet publicly available. I recommend choosing one of the existing cooking modes:
https://developer.amazon.com/en-US/docs/alexa/device-apis/cooking-property-schemas.html#cooking-mode-values
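Until then, a configuration limited to the standard values (which, as noted in the question, are accepted in plain string form) would look like:

```json
"configuration": {
  "supportsRemoteStart": true,
  "supportedCookingModes": ["OFF", "BAKE"]
}
```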
My data is indexed in Elasticsearch 7.11. This is the mapping I got when I added documents directly to my index:
{"properties":{"name":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}
I haven't added the keyword part, so I have no idea where it came from.
I am running a wildcard query on this field, but I am unable to get results for keywords containing spaces.
{
"query": {
"bool":{
"should":[
{"wildcard": {"name":"*hello world*"}}
]
}
}
}
I have seen many answers related to not_analyzed, and I have tried updating {"index": "true"} in the mapping, but with no help. How do I make the wildcard search work in this version of Elasticsearch?
I tried adding the wildcard field:
PUT http://localhost:9001/indexname/_mapping
{
"properties": {
"name": {
"type" :"wildcard"
}
}
}
and got the following response:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "mapper [name] cannot be changed from type [text] to [wildcard]"
}
],
"type": "illegal_argument_exception",
"reason": "mapper [name] cannot be changed from type [text] to [wildcard]"
},
"status": 400
}
Here is a sample document that should match:
{
"_index": "accelerators",
"_type": "_doc",
"_id": "602ec047a70f7f30bcf75dec",
"_score": 1.0,
"_source": {
"acc_id": "602ec047a70f7f30bcf75dec",
"name": "hello world example",
"type": "Accelerator",
"description": "khdkhfk ldsjl klsdkl",
"teamMembers": [
{
"userId": "karthik.r#gmail.com",
"name": "Karthik Ganesh R",
"shortName": "KR",
"isOwner": true
},
{
"userId": "anand.sajan#gmail.com",
"name": "Anand Sajan",
"shortName": "AS",
"isOwner": false
}
],
"sectorObj": [
{
"item_id": 14,
"item_text": "Cross-sector"
}
],
"geographyObj": [
{
"item_id": 4,
"item_text": "Global"
}
],
"technologyObj": [
{
"item_id": 1,
"item_text": "Artificial Intelligence"
}
],
"themeColor": 1,
"mainImage": "assets/images/Graphics/Asset 35.svg",
"features": [
{
"name": "Ideation",
"icon": "Asset 1007.svg"
},
{
"name": "Innovation",
"icon": "Asset 1044.svg"
},
{
"name": "Strategy",
"icon": "Asset 1129.svg"
},
{
"name": "Intuitive",
"icon": "Asset 964.svg"
}
],
"logo": {
"actualFileName": "",
"fileExtension": "",
"fileName": "",
"fileSize": 0,
"fileUrl": ""
},
"customLogo": {
"logoColor": "#B9241C",
"logoText": "EC",
"logoTextColor": "#F6F6FA"
},
"collaborators": [
{
"userId": "muhammed.arif#gmail.com",
"name": "muhammed Arif P T",
"shortName": "MA"
},
{
"userId": "anand.sajan#gmail.com",
"name": "Anand Sajan",
"shortName": "AS"
}
],
"created_date": "2021-02-18T19:30:15.238000Z",
"modified_date": "2021-03-11T11:45:49.583000Z"
}
}
You cannot modify a field mapping once created. However, you can create another sub-field of type wildcard, like this:
PUT http://localhost:9001/indexname/_mapping
{
"properties": {
"name": {
"type": "text",
"fields": {
"wildcard": {
"type" :"wildcard"
},
"keyword": {
"type" :"keyword",
"ignore_above":256
}
}
}
}
}
When the mapping is updated, you need to reindex your data so that the new field gets indexed, like this:
POST http://localhost:9001/indexname/_update_by_query
And then when this finishes, you'll be able to query on this new field like this:
{
"query": {
"bool": {
"should": [
{
"wildcard": {
"name.wildcard": "*hello world*"
}
}
]
}
}
}
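As a side note, an alternative that avoids reindexing is possible here: the dynamic mapping already created a name.keyword subfield that was populated when the documents were indexed, and a wildcard query against a keyword field matches the whole stored value, spaces included (case-sensitively):

```json
{
  "query": {
    "wildcard": {
      "name.keyword": "*hello world*"
    }
  }
}
```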
I'm new to the Elastic Stack.
Here, I'm trying to get the value for "pressure" and then convert it from a string to a numeric value using a Kibana scripted field.
I tried a scripted field, but it didn't work for me.
Any ideas? I really appreciate your support in advance.
One of my data records is shown below.
{
"_index": "production",
"_type": "_doc",
"_id": "4570df7a0d4ec1b0e624e868a5861a0f1a9a7f6c35fdsssafafsa734fb152f4bed",
"_version": 1,
"_score": null,
"_source": {
"factorycode": "AM-NY",
"productcode": "STR",
"lastupdatetime": "2020-05-28T04:16:17.590Z",
"#timestamp": "2020-05-28T04:14:48.000Z",
"massproduction": {
"errorcode": null,
"equipment": "P17470110",
"operatorldap": null,
"machinetime": null,
"quantity": "1",
"externalfilename": null,
"errorcomment": null,
"datas": {
"data": [
{
"value": "45.4",
"id": "001",
"name": "pressure"
},
{
"value": "0.45",
"id": "002",
"name": "current"
}
]
},
"ladderver": null,
"eid": null,
"setupid": null,
"model": "0",
"identificationtagid": null,
"workid": "GD606546sf0B002020040800198",
"reuse": {
"num": "0"
},
"registrydate": "2020-05-28T13:14:48",
"product": "GD604564550B00",
"line": "STRS001",
"judge": "1",
"cycletime": null,
"processcode": "OP335",
"registryutcdate": "2020-04-28T04:14:48",
"name": "massproduction"
}
},
"fields": {
"massproduction.registrydate": [
"2020-05-28T13:14:48.000Z"
],
"#timestamp": [
"2020-05-28T04:14:48.000Z"
],
"lastupdatetime": [
"2020-05-28T04:16:17.590Z"
],
"registrydate": [
"2020-05-28T13:14:48.000Z"
],
"massproduction.registryutcdate": [
"2020-05-28T04:14:48.000Z"
],
"registryutcdate": [
"2020-05-28T04:14:48.000Z"
]
},
"sort": [
158806546548000
]
}
This is my "painless" scripted field in Kibana.
for(item in params._source.massproduction.datas.data)
{
if(item.name=='pressure'){
return item.value;
}
}
return 0;
You can use Float.parseFloat(value) to convert the string to a float:
if(params._source.massproduction != null && params._source.massproduction.datas != null && params._source.massproduction.datas.data.size() > 0)
{
def data = params._source.massproduction.datas.data;
if(data instanceof ArrayList)
{
for(item in data)
{
if(item.name=='pressure')
{
return Float.parseFloat(item.value);
}
}
}else
{
if(data.name=='pressure')
{
return Float.parseFloat(data.value);
}
}
}
return 0;
Every document in my index contains the following content:
"universities": {
"number": 1,
"state": [
{
"Name": "michigan",
"country": "us",
"code": 5696
}
]
}
I want to update all the documents in the index like this
"universities": {
"number": 1,
"state": [
{
"Name": "michigan",
"country": "us",
"code": 5696
},
{
"Name": "seatle",
"country": "us",
"code": 5695
}
]
}
Is this possible using update_by_query in Elasticsearch 2.4.1?
I tried the query below:
"script": {
"inline": "for(i in ctx._source.univeristies.state){i.name=Text}",
"params": {
"Text": "seatle"
}
}
}
but is appending the name to existing one rather than creating a new one in a list.
You need to use this script instead:
"script": {
"inline": "ctx._source.universities.state.add(new_state)",
"params": {
"new_state": {
"Text": "Seattle",
"country": "us",
"code": 5695
}
}
}
}
UPDATE:
For later versions of ES (6+), the query looks like this instead:
"script": {
"source": "ctx._source.universities.state.add(params.new_state)",
"params": {
"new_state": {
"Text": "Seattle",
"country": "us",
"code": 5695
}
}
}
}
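For completeness, a full request for the 6+ syntax might look like this, where universities-index is a placeholder index name; with a match_all query the script runs against every document:

```json
POST universities-index/_update_by_query
{
  "query": { "match_all": {} },
  "script": {
    "source": "ctx._source.universities.state.add(params.new_state)",
    "params": {
      "new_state": { "Name": "seatle", "country": "us", "code": 5695 }
    }
  }
}
```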