I try to refrain from asking questions with simple answers but I can't seem to figure out what the issue is here... (Issue in title)
Relevant code:
match := new(Match)
if _, msgB, err = ws.ReadMessage(); err != nil {
    panic(err)
} else {
    println(string(msgB))
    err = json.Unmarshal(msgB, match)
    if err != nil {
        panic(err)
    }
}
type Match struct {
    Teams  [][]Char
    Map    [][]Tile
    ID     string // uuid
    Socket *websocket.Conn `json:"-"`
}
type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  statList
    X      int
    Y      int
    ACList Actions
}
type statList struct {
    Str int
    Vit int
    Int int
    Wis int
    Dex int
    Spd int
}
type Actions struct {
    Actions []Action
    TICKCT  int
}
String to unmarshal (Formatted for visibility):
{
"Teams": [
[
{
"ID": 1,
"HP": 10,
"CT": 0,
"Stats": [
1,
1,
1,
1,
1,
1
],
"X": 0,
"Y": 0,
"ACList": {
"Actions": [],
"TICKCT": 0
}
}
],
[
{
"ID": 2,
"HP": 10,
"CT": 0,
"Stats": [
1,
1,
1,
1,
1,
1
],
"X": 2,
"Y": 2,
"ACList": {
"Actions": [],
"TICKCT": 0
}
}
]
],
"Map": [
[
{
"Depth": 1,
"Type": 1,
"Unit": 1
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
}
],
[
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
}
],
[
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": 2
}
]
],
"ID": "0b055e19-9b96-e492-b816-43297f12cc39"}
Error:
2014/03/28 12:11:41 http: panic serving 127.0.0.1:56436: json: cannot
unmarshal number into Go value of type main.Char
I made a fixed version of the code (playground). This seemed to be the main mistake:
type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  []int // This was statList which won't work
    X      int
    Y      int
    ACList Actions
}
Also note my definition of Tile, which uses a pointer so that the number can be null.
type Tile struct {
    Depth int
    Type  int
    Unit  *int
}
You didn't provide all the structures so I made some up - probably wrong! All together that is:
import (
    "encoding/json"
    "fmt"
)

type Match struct {
    Teams [][]Char
    Map   [][]Tile
    ID    string // uuid
    // Socket *websocket.Conn `json:"-"`
}

type Char struct {
    ID     int
    HP     int
    CT     int
    Stats  []int // This was statList which won't work
    X      int
    Y      int
    ACList Actions
}

type statList struct {
    Str int
    Vit int
    Int int
    Wis int
    Dex int
    Spd int
}

type Action string

type Actions struct {
    Actions []Action
    TICKCT  int
}

type Tile struct {
    Depth int
    Type  int
    Unit  *int
}
var data = `{
"Teams": [
[
{
"ID": 1,
"HP": 10,
"CT": 0,
"Stats": [
1,
1,
1,
1,
1,
1
],
"X": 0,
"Y": 0,
"ACList": {
"Actions": [],
"TICKCT": 0
}
}
],
[
{
"ID": 2,
"HP": 10,
"CT": 0,
"Stats": [
1,
1,
1,
1,
1,
1
],
"X": 2,
"Y": 2,
"ACList": {
"Actions": [],
"TICKCT": 0
}
}
]
],
"Map": [
[
{
"Depth": 1,
"Type": 1,
"Unit": 1
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
}
],
[
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
}
],
[
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": null
},
{
"Depth": 1,
"Type": 1,
"Unit": 2
}
]
],
"ID": "0b055e19-9b96-e492-b816-43297f12cc39"}`
func main() {
    match := new(Match)
    err := json.Unmarshal([]byte(data), match)
    if err != nil {
        panic(err)
    }
    fmt.Printf("match = %#v\n", match)
}
I have the following structure:
{
"list": [
{ "depth": 0, "data": "lorem1" },
{ "depth": 1, "data": "lorem2" },
{ "depth": 2, "data": "lorem3" },
{ "depth": 2, "data": "lorem4" },
{ "depth": 0, "data": "lorem5" },
{ "depth": 1, "data": "lorem6" },
{ "depth": 1, "data": "lorem7" },
{ "depth": 2, "data": "lorem8" }
]
}
I am looking for an algorithm on how to create from that depth a parent-child-like, nested structure.
{
"list": [{
"depth": 0,
"data": "lorem1",
"children": [{
"depth": 1,
"data": "lorem2",
"children": [{
"depth": 2,
"data": "lorem3",
"children": [],
}, {
"depth": 2,
"data": "lorem4",
"children": [],
}]
}]
}, {
"depth": 0,
"data": "lorem5",
"children": [{
"depth": 1,
"data": "lorem6",
"children": [],
}, {
"depth": 1,
"data": "lorem7",
"children": [{
"depth": 2,
"data": "lorem8",
"children": [],
}]
}]
}
]}
The logic is like this:
Assumption: The first item in the list always starts with depth=0
If depth is larger than the last, it must be child of this last one
I cannot get this to work. It should be recursive to allow infinite nesting depth.
Thank you guys for the help!
You can use a stack to keep track of the current path in the tree. When the depth increases from one node to the next, push the new node onto that stack as well. If not, pop items from the stack until the right depth is reached.
Then you always know in which children collection you need to add the new node.
Here is a runnable implementation in JavaScript:
function algo(list) {
    // Create a dummy node to always stay at the bottom of the stack:
    let stack = [
        { "depth": -1, "data": "(root)", "children": [] }
    ];
    for (let node of list) {
        let newNode = { ...node, children: [] }; // Copy and add children property
        if (newNode.depth >= stack.length || newNode.depth < 0) throw "Invalid depth";
        while (newNode.depth < stack.length - 1) stack.pop();
        stack[stack.length - 1].children.push(newNode);
        stack.push(newNode);
    }
    return stack[0].children;
}
// Demo
let data = {
    "list": [
        { "depth": 0, "data": "lorem1" },
        { "depth": 1, "data": "lorem2" },
        { "depth": 2, "data": "lorem3" },
        { "depth": 2, "data": "lorem4" },
        { "depth": 0, "data": "lorem5" },
        { "depth": 1, "data": "lorem6" },
        { "depth": 1, "data": "lorem7" },
        { "depth": 2, "data": "lorem8" }
    ]
};
// Create a new structure, and load the transformed list in its list property:
let result = {
    "list": algo(data.list)
};
// Show result
console.log(result);
To address your request to do this without the dummy node:
function algo(list) {
    let result = [];
    let stack = [];
    for (let node of list) {
        let newNode = { ...node, children: [] }; // Copy and add children property
        if (newNode.depth > stack.length || newNode.depth < 0) throw "Invalid depth";
        while (newNode.depth < stack.length) stack.pop();
        if (!stack.length) result.push(newNode);
        else stack[stack.length - 1].children.push(newNode);
        stack.push(newNode);
    }
    return result;
}
// Demo
let data = {
    "list": [
        { "depth": 0, "data": "lorem1" },
        { "depth": 1, "data": "lorem2" },
        { "depth": 2, "data": "lorem3" },
        { "depth": 2, "data": "lorem4" },
        { "depth": 0, "data": "lorem5" },
        { "depth": 1, "data": "lorem6" },
        { "depth": 1, "data": "lorem7" },
        { "depth": 2, "data": "lorem8" }
    ]
};
// Create a new structure, and load the transformed list in its list property:
let result = {
    "list": algo(data.list)
};
// Show result
console.log(result);
I ran into the following problem:
I specified relationships in the data structures, but some of the data comes back empty.
For example: when I fetch a product, the price and the stocks are populated, but the data on the store where each stock is located is not.
The structures that I use:
type Product struct {
    Id          uint    `json:"id"`
    Code        int     `json:"code" gorm:"index"`
    ProductName string  `json:"name"`
    Brand       string  `json:"brand"`
    Price       Price   `json:"price" gorm:"foreignKey:ProductCode;references:Code"`
    Stocks      []Stock `json:"stock" gorm:"foreignKey:ProductCode;references:Code"`
}
type Stock struct {
    ProductCode int   `json:"product_code"`
    StoreCode   uint  `json:"store_code" gorm:"index;"`
    Quantity    int   `json:"quantity"`
    Store       Store `json:"store" gorm:"foreignKey:StoreId;references:StoreCode;constraint:OnUpdate:CASCADE,OnDelete:CASCADE;"`
    Timestamp   int64 `json:"timestamp"`
}
type Price struct {
    ProductCode    int   `json:"product_code"`
    OldPrice       int   `json:"old_price"`
    CurrentPrice   int   `json:"current_price"`
    WholesalePrice int   `json:"wholesale_price"`
    Timestamp      int64 `json:"timestamp"`
}
type Store struct {
    Id           uint   `json:"id" gorm:"primaryKey"`
    StoreKaspiId string `json:"store_kaspi_id"`
    StoreId      uint   `json:"store_id"`
    CityId       uint   `json:"city_id"`
    City         City   `json:"city" gorm:"foreignKey:CityId"`
    StoreAddress string `json:"store_address"`
}
type City struct {
    Id   uint   `json:"id"`
    Name string `json:"name"`
}
Query result:
{
"id": 79,
"code": 687,
"name": "Электромясорубка Аксион M21.03",
"brand": "Аксион",
"price": {
"product_code": 687,
"old_price": 30990,
"current_price": 23910,
"wholesale_price": 19900,
"timestamp": 1628824966063
},
"stock": [
{
"product_code": 687,
"store_code": 15,
"quantity": 37,
"store": {
"id": 0,
"store_kaspi_id": "",
"store_id": 0,
"city_id": 0,
"city": {
"id": 0,
"name": ""
},
"store_address": ""
},
"timestamp": 1628824966219
},
{
"product_code": 687,
"store_code": 20,
"quantity": 39,
"store": {
"id": 0,
"store_kaspi_id": "",
"store_id": 0,
"city_id": 0,
"city": {
"id": 0,
"name": ""
},
"store_address": ""
},
"timestamp": 1628824966219
},
{
"product_code": 687,
"store_code": 42,
"quantity": 39,
"store": {
"id": 0,
"store_kaspi_id": "",
"store_id": 0,
"city_id": 0,
"city": {
"id": 0,
"name": ""
},
"store_address": ""
},
"timestamp": 1628824966219
},
{
"product_code": 687,
"store_code": 45,
"quantity": 47,
"store": {
"id": 0,
"store_kaspi_id": "",
"store_id": 0,
"city_id": 0,
"city": {
"id": 0,
"name": ""
},
"store_address": ""
},
"timestamp": 1628824966219
}
]
}
Please help me understand!
Thank you in advance!
I'm working on aggregations in NEST. So far everything has worked well, but now when I try to access nested fields through .Children the result is null, even though the debugger shows the data correctly.
If I post this query through postman I get the following results:
{
"took": 50,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 9,
"relation": "eq"
},
"max_score": null,
"hits": []
},
"aggregations": {
"filter#CollarSize": {
"meta": {},
"doc_count": 9,
"nested#VariantsProperties": {
"doc_count": 53,
"sterms#CollarSize": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "CollarSize",
"doc_count": 39,
"sterms#banana": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "15",
"doc_count": 7
},
{
"key": "16",
"doc_count": 7
},
{
"key": "17",
"doc_count": 6
},
{
"key": "18",
"doc_count": 6
},
{
"key": "LAR",
"doc_count": 2
},
{
"key": "MED",
"doc_count": 2
},
{
"key": "SML",
"doc_count": 2
},
{
"key": "X.L",
"doc_count": 2
},
{
"key": "XXL",
"doc_count": 2
},
{
"key": "15.5",
"doc_count": 1
},
{
"key": "16.5",
"doc_count": 1
},
{
"key": "XXXL",
"doc_count": 1
}
]
}
},
{
"key": "Colour",
"doc_count": 14,
"sterms#banana": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Blue",
"doc_count": 7
},
{
"key": "White",
"doc_count": 7
}
]
}
}
]
}
},
"sterms#CollarSize": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": []
}
}
}
}
Is there a way to get inside the child "CollarSize"? I've tried different combinations of .Nested, .Children, .Terms, and .Filter, but none of these seems to work.
You can get "CollarSize" terms and "banana" terms for each with
var response = client.Search<object>(/** your query here **/);
var collarSizeSignificantTermsAgg = response.Aggregations.Filter("CollarSize").Nested("VariantsProperties").Terms("CollarSize");

foreach (var bucket in collarSizeSignificantTermsAgg.Buckets)
{
    Console.WriteLine(bucket.Key);
    var bananaSigTerms = bucket.Terms("banana");
    foreach (var subBucket in bananaSigTerms.Buckets)
    {
        Console.WriteLine($"key: {subBucket.Key}, doc_count: {subBucket.DocCount}");
    }
}
which prints
CollarSize
key: 15, doc_count: 7
key: 16, doc_count: 7
key: 17, doc_count: 6
key: 18, doc_count: 6
key: LAR, doc_count: 2
key: MED, doc_count: 2
key: SML, doc_count: 2
key: X.L, doc_count: 2
key: XXL, doc_count: 2
key: 15.5, doc_count: 1
key: 16.5, doc_count: 1
key: XXXL, doc_count: 1
Colour
key: Blue, doc_count: 7
key: White, doc_count: 7
Here's a full example, using InMemoryConnection to stub the response
private static void Main()
{
    var defaultIndex = "my_index";
    var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));

    var json = @"{
""took"": 50,
""timed_out"": false,
""_shards"": {
""total"": 1,
""successful"": 1,
""skipped"": 0,
""failed"": 0
},
""hits"": {
""total"": {
""value"": 9,
""relation"": ""eq""
},
""max_score"": null,
""hits"": []
},
""aggregations"": {
""filter#CollarSize"": {
""meta"": { },
""doc_count"": 9,
""nested#VariantsProperties"": {
""doc_count"": 53,
""sterms#CollarSize"": {
""doc_count_error_upper_bound"": 0,
""sum_other_doc_count"": 0,
""buckets"": [
{
""key"": ""CollarSize"",
""doc_count"": 39,
""sterms#banana"": {
""doc_count_error_upper_bound"": 0,
""sum_other_doc_count"": 0,
""buckets"": [
{
""key"": ""15"",
""doc_count"": 7
},
{
""key"": ""16"",
""doc_count"": 7
},
{
""key"": ""17"",
""doc_count"": 6
},
{
""key"": ""18"",
""doc_count"": 6
},
{
""key"": ""LAR"",
""doc_count"": 2
},
{
""key"": ""MED"",
""doc_count"": 2
},
{
""key"": ""SML"",
""doc_count"": 2
},
{
""key"": ""X.L"",
""doc_count"": 2
},
{
""key"": ""XXL"",
""doc_count"": 2
},
{
""key"": ""15.5"",
""doc_count"": 1
},
{
""key"": ""16.5"",
""doc_count"": 1
},
{
""key"": ""XXXL"",
""doc_count"": 1
}
]
}
},
{
""key"": ""Colour"",
""doc_count"": 14,
""sterms#banana"": {
""doc_count_error_upper_bound"": 0,
""sum_other_doc_count"": 0,
""buckets"": [
{
""key"": ""Blue"",
""doc_count"": 7
},
{
""key"": ""White"",
""doc_count"": 7
}
]
}
}
]
}
},
""sterms#CollarSize"": {
""doc_count_error_upper_bound"": 0,
""sum_other_doc_count"": 0,
""buckets"": []
}
}
}
}
";
    var settings = new ConnectionSettings(pool, new InMemoryConnection(Encoding.UTF8.GetBytes(json)))
        .DefaultIndex(defaultIndex);

    var client = new ElasticClient(settings);
    var response = client.Search<object>(s => s);

    var collarSizeSignificantTermsAgg = response.Aggregations.Filter("CollarSize").Nested("VariantsProperties").Terms("CollarSize");
    foreach (var bucket in collarSizeSignificantTermsAgg.Buckets)
    {
        Console.WriteLine(bucket.Key);
        var bananaSigTerms = bucket.Terms("banana");
        foreach (var subBucket in bananaSigTerms.Buckets)
        {
            Console.WriteLine($"key: {subBucket.Key}, doc_count: {subBucket.DocCount}");
        }
    }
}
I have an index that contains documents structured as follows:
{
"year": 2020,
"month": 10,
"day": 5,
"some_other_data": { ... }
}
The ID of each document is constructed from the date and some additional data from the some_other_data object, like this: _id: "20201005_some_other_unique_data". There is no explicit _timestamp on the documents.
I can easily get the most recent additions by doing the following query:
{
"query": {
"match_all": {}
},
"sort": [
{"_uid": "desc"}
]
}
Now, the question is: how do I get documents that have essentially a date between day A and day B, where A is, for instance, 2020-07-12 and B is 2020-09-11. You can assume that the input date can be either integers, strings, or anything really as I can manipulate it beforehand.
edit: As requested, I'm including a sample result from the following query:
{
"size": 4,
"query": {
"match": {
"month": 7
}
},
"sort": [
{"_uid": "asc"}
]
}
The response:
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1609,
"max_score": null,
"hits": [
{
"_index": "my_index",
"_type": "nested",
"_id": "20200703_andromeda_cryptic",
"_score": null,
"_source": {
"year": 2020,
"month": 7,
"day": 3,
"yara": {
"strain": "Andromeda",
},
"parent_yara": {
"strain": "CrypticMut",
},
},
"sort": [
"nested#20200703_andromeda_cryptic"
]
},
{
"_index": "my_index",
"_type": "nested",
"_id": "20200703_betabot_boaxxe",
"_score": null,
"_source": {
"year": 2020,
"month": 7,
"day": 3,
"yara": {
"strain": "BetaBot",
},
"parent_yara": {
"strain": "Boaxxe",
},
},
"sort": [
"nested#20200703_betabot_boaxxe"
]
},
{
"_index": "my_index",
"_type": "nested",
"_id": "20200703_darkcomet_zorex",
"_score": null,
"_source": {
"year": 2020,
"month": 7,
"day": 3,
"yara": {
"strain": "DarkComet",
},
"parent_yara": {
"strain": "Zorex",
},
},
"sort": [
"nested#20200703_darkcomet_zorex"
]
},
{
"_index": "my_index",
"_type": "nested",
"_id": "20200703_darktrack_fake_template",
"_score": null,
"_source": {
"year": 2020,
"month": 7,
"day": 3,
"yara": {
"strain": "Darktrack",
},
"parent_yara": {
"strain": "CrypticFakeTempl",
},
},
"sort": [
"nested#20200703_darktrack_fake_template"
]
}
]
}
}
The above-mentioned query will return all documents that have matched the month. So basically anything that was put there in July of any year. What I want to achieve, if at all possible, is getting all documents inserted after a certain date and before another certain date.
Unfortunately, I cannot migrate the data so that it has a timestamp or otherwise nicely sortable fields. Essentially, I need to figure out a logic that will say: give me all documents inserted after july 1st, and before august 2nd. The problem here, is that there are plenty of edge cases, like how to do it when start date and end date are in different years, different months, and so on.
edit: I have solved it using the painless scripting, as suggested by Briomkez, with small changes to the script itself, as follows:
getQueryForRange(dateFrom: String, dateTo: String, querySize: Number) {
    let script = `
        DateTimeFormatter formatter = new DateTimeFormatterBuilder().appendPattern("yyyy-MM-dd")
            .parseDefaulting(ChronoField.NANO_OF_DAY, 0)
            .toFormatter()
            .withZone(ZoneId.of("Z"));
        ZonedDateTime l = ZonedDateTime.parse(params.l, formatter);
        ZonedDateTime h = ZonedDateTime.parse(params.h, formatter);
        ZonedDateTime x = ZonedDateTime.of(doc['year'].value.intValue(), doc['month'].value.intValue(), doc['day'].value.intValue(), 0, 0, 0, 0, ZoneId.of('Z'));
        ZonedDateTime first = l.isAfter(h) ? h : l;
        ZonedDateTime last = first.equals(l) ? h : l;
        return (x.isAfter(first) || x.equals(first)) && (x.equals(last) || x.isBefore(last));
    `;
    return {
        size: querySize,
        query: {
            bool: {
                filter: {
                    script: {
                        script: {
                            source: script,
                            lang: "painless",
                            params: {
                                l: dateFrom,
                                h: dateTo,
                            },
                        },
                    },
                },
            },
        },
        sort: [{ _uid: "asc" }],
    };
}
With these changes, the query works well for my version of Elasticsearch (7.2), and the order of the two dates is not important.
I see (at least) two alternatives here: either use a script query or plain bool queries.
A. USE SCRIPT QUERIES
Basically, the idea is to build a timestamp at query time, by exploiting the datetime support in painless.
{
"query": {
"bool": {
"filter": {
"script": {
"script": {
"source": "<INSERT-THE-SCRIPT-HERE>",
"lang": "painless",
"params": {
"l": "2020-07-12",
"h": "2020-09-11"
}
}
}
}
}
}
}
The script can be the following one:
// Build a ZonedDateTime from params.l
ZonedDateTime l = ZonedDateTime.parse(params.l, DateTimeFormatter.ISO_LOCAL_DATE);
// Build a ZonedDateTime from params.h
ZonedDateTime h = ZonedDateTime.parse(params.h, DateTimeFormatter.ISO_LOCAL_DATE);
// Build a ZonedDateTime from the doc
ZonedDateTime x = ZonedDateTime.of(doc['year'].value, doc['month'].value, doc['day'].value, 0, 0, 0, 0, ZoneId.of('Z'));
return (x.isAfter(l) || x.equals(l)) && (x.equals(h) || x.isBefore(h));
B. ALTERNATIVE: splitting the problem into its building blocks
Let x denote the document you are searching for, and let l and h be the lower and higher dates. Write x.year, x.month, and x.day for the subfields.
So x is contained in the range [l, h] iff
[Condition-1] l <= x AND
[Condition-2] x <= h
The first condition is met if the disjunction of the following conditions holds:
[Condition-1.1] l.year < x.year
[Condition-1.2] l.year == x.year AND l.month < x.month
[Condition-1.3] l.year == x.year AND l.month == x.month AND l.day <= x.day
Similarly, the second condition can be expressed as the disjunction of the following conditions:
[Condition-2.1] h.year > x.year
[Condition-2.2] h.year == x.year AND h.month > x.month
[Condition-2.3] h.year == x.year AND h.month == x.month AND x.day <= h.day
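As a sanity check (a standalone sketch, not part of the Elasticsearch query), the two disjunctions translate directly into a comparison function:

```go
package main

import "fmt"

type date struct{ year, month, day int }

// inRange reports whether x lies in the inclusive range [l, h],
// using only the per-field conditions spelled out above.
func inRange(x, l, h date) bool {
	geLower := l.year < x.year ||
		(l.year == x.year && l.month < x.month) ||
		(l.year == x.year && l.month == x.month && l.day <= x.day)
	leUpper := h.year > x.year ||
		(h.year == x.year && h.month > x.month) ||
		(h.year == x.year && h.month == x.month && x.day <= h.day)
	return geLower && leUpper
}

func main() {
	l := date{2020, 7, 12}
	h := date{2020, 9, 11}
	fmt.Println(inRange(date{2020, 7, 3}, l, h))  // false: before July 12
	fmt.Println(inRange(date{2020, 8, 1}, l, h))  // true
	fmt.Println(inRange(date{2020, 9, 11}, l, h)) // true: inclusive upper bound
}
```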
It remains to express these conditions in Elasticsearch DSL:
B-1. Using script query
Given this idea, we can write a simple script query, substituting the conditions into the source field:
{
"query": {
"bool": {
"filter": {
"script": {
"script": {
"source": "<INSERT SCRIPT HERE>",
"lang": "painless",
"params": {
"l": {
"year": 2020,
"month": 7,
"day": 1
},
"h": {
"year": 2020,
"month": 9,
"day": 1
}
}
}
}
}
}
}
In painless you can express the conditions, considering that:
x.year is doc['year'].value, x.month is doc['month'].value, x.day is doc['day'].value
h.year is params.h.year, etc.
l.year is params.l.year, etc.
B-2. Using boolean query
Now we should transform these conditions into bool queries. The pseudo-code is the following:
{
"query": {
"bool": {
// AND of two conditions
"must": [
{
// Condition 1
},
{
// Condition 2
}
]
}
}
}
Each Condition-X block will look like this:
{
"bool": {
// OR
"should": [
{ // Condition-X.1 },
{ // Condition-X.2 },
{ // Condition-X.3 },
],
"minimum_should_match" : 1
}
}
So, for example, to express [Condition-2.3] with h = 2020-09-11 we can use this range query:
{
"bool": {
"must": [
{
"range": {
"year": {
"gte": 2020,
"lte": 2020
}
}
},
{
"range": {
"month": {
"gte": 9,
"lte": 9
}
}
},
{
"range": {
"day": {
"lte": 11
}
}
}
]
}
}
Writing out the entire query is feasible, but I think it would be very long :)
I want to merge objects whose personId and visitDate are the same, and otherwise keep each object as it is in the array.
Sample Input -
[
{
"personId": 1,
"visitDate": "1453545",
"htn": 1,
"dm": 0
},
{
"personId": 1,
"visitDate": "1453545",
"htn": 1,
"dm": 1
},
{
"personId": 2,
"visitDate": "4453545",
"htn": 1,
"dm": 1
},
{
"personId": 3,
"visitDate": "6453545",
"htn": 1,
"dm": 1
}
]
Sample Output
[
{
"personId": 1,
"visitDate": "1453545",
"htn": 1,
"dm": 1
},
{
"personId": 2,
"visitDate": "4453545",
"htn": 1,
"dm": 1
},
{
"personId": 3,
"visitDate": "6453545",
"htn": 1,
"dm": 1
}
]
See whether the below spec helps: segregate the objects with respect to personId, then use cardinality to remove the duplicates, then shift the objects back into an array.
[
  {
    "operation": "shift",
    "spec": {
      "*": "#personId[]"
    }
  },
  {
    "operation": "cardinality",
    "spec": {
      "*": {
        "#": "ONE"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": {
        "#": "[]"
      }
    }
  }
]
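If JOLT isn't a hard requirement, the same merge can also be sketched with a map keyed on (personId, visitDate). This is a sketch in Go, assuming a later duplicate should overwrite the htn/dm fields of an earlier one (which reproduces the sample output, where dm goes from 0 to 1):

```go
package main

import "fmt"

type Visit struct {
	PersonId  int    `json:"personId"`
	VisitDate string `json:"visitDate"`
	Htn       int    `json:"htn"`
	Dm        int    `json:"dm"`
}

// mergeVisits collapses entries sharing (personId, visitDate),
// letting the later entry's htn/dm win; other entries pass through
// in their original order.
func mergeVisits(in []Visit) []Visit {
	type key struct {
		id   int
		date string
	}
	index := map[key]int{}
	out := []Visit{}
	for _, v := range in {
		k := key{v.PersonId, v.VisitDate}
		if i, ok := index[k]; ok {
			out[i].Htn = v.Htn
			out[i].Dm = v.Dm
		} else {
			index[k] = len(out)
			out = append(out, v)
		}
	}
	return out
}

func main() {
	in := []Visit{
		{1, "1453545", 1, 0},
		{1, "1453545", 1, 1},
		{2, "4453545", 1, 1},
		{3, "6453545", 1, 1},
	}
	fmt.Println(mergeVisits(in)) // [{1 1453545 1 1} {2 4453545 1 1} {3 6453545 1 1}]
}
```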