Combine two FreeMarker hashes with the same keys inside a table - freemarker

My two hashes/maps are shown below. I want to get the values corresponding to the same key from the first and second map and then add them to a table row.
Map 1
[
{
"key": "1",
"value":"Potato"
},
{
"key": "2",
"value":"Chilly"
}
]
Map 2
[
{
"key": "1",
"value":"Apple"
},
{
"key": "2",
"value":"Plum"
}
]
I want to arrange the data so that I can fetch the values for the same key from both maps at the same time:
<#list map1+map2?keys as key>
<tr>
<td>${key}</td>
<td >${map1[key]}</td>
<td >${map2[key]}</td>
</tr>
</#list>
I know I am doing something wrong, but I am not able to get the code working. Can someone help?

You need to parenthesize the map concatenation in order to apply the ?keys built-in to the result, so (map1 + map2)?keys. Also, if the two maps don't have identical key sets, you may want to supply a default for missing values by suffixing the expression with !"default value", or simply ! if you want no default:
<#assign map1 = { "1": "Potato", "2": "Chilly" } >
<#assign map2 = { "1": "Apple", "2": "Plum", "3": "Extra" } >
<#list (map1 + map2)?keys as key>
<tr>
<td>${key}</td>
<td>${map1[key]!}</td>
<td>${map2[key]!}</td>
</tr>
</#list>
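With the sample maps above, the loop produces a row per key; since map1 has no key "3", the ! default leaves that cell empty. The rendered output should look roughly like this:
<tr><td>1</td><td>Potato</td><td>Apple</td></tr>
<tr><td>2</td><td>Chilly</td><td>Plum</td></tr>
<tr><td>3</td><td></td><td>Extra</td></tr>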
See also:
Freemarker - default value for variable that may be missing or blank?

Related

Snaplogic - Expression Language Syntax - unique array value

I am using the following expression language (EL):
jsonPath($, "$array.map({id: value.get('id'), type: value.get('type') })")
which produces the output below, but the key (id) is not kept unique:
[{
"id": "1",
"type": "1"
},
{
"id": "1",
"type": "2"
},
{
"id": "2",
"type": "1"
}]
What can I use in the SnapLogic expression language, or which snap, to get the following array with unique keys:
[{
"id": "1",
"types": ["1", "2"]
},
{
"id": "2",
"types": ["1"]
}]
Any ideas?
Use a Group By Fields snap to group on id, then use a simple Mapper snap to create the desired JSON. Note that you have to sort the incoming documents by id before doing the group-by.
Sample Pipeline
Final Mapper expressions
$groupBy.id mapped to id
jsonPath($, "$groups[*].type") mapped to types
The resulting output (shown as a screenshot in the original post) matches the desired JSON from the question.
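For reference, each document coming out of the Group By Fields snap should look roughly like this (shape inferred from the two Mapper expressions above; the exact field names depend on the snap's settings):
{
"groupBy": { "id": "1" },
"groups": [
{ "id": "1", "type": "1" },
{ "id": "1", "type": "2" }
]
}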

Update a subdocument list object value in RethinkDB

{
"id": 1,
"subdocuments": [
{
"id": "A",
"name": 1
},
{
"id": "B",
"name": 2
},
{
"id": "C",
"name": 3
}
]
}
How do I update subdocument "A"'s "name" to a value of 2 in RethinkDB, in either JavaScript or Python?
If you can rely on the position of your "A" element, you can update like this:
r.db("DB").table("TABLE").get(1)
.update({subdocuments:
r.row("subdocuments").changeAt(0, r.row("subdocuments").nth(0).merge({"name":2}))})
If you cannot rely on the position, you have to find it yourself:
r.db("DB").table("TABLE").get(1).do(function(doc){
return doc("subdocuments").offsetsOf(function(sub){return sub("id").match("A")}).nth(0)
.do(function(index){
return r.db("DB").table("TABLE").update({"subdocuments":
doc("subdocuments").changeAt(index, doc("subdocuments").nth(index).merge({"name":2})) })})
})
As an alternative, you can use the map function to iterate over the array elements and update the one that matches your condition:
r.db("DB").table("TABLE").get(1)
.update({
subdocuments: r.row("subdocuments").map(function(sub){
return r.branch(sub("id").eq("A"), sub.merge({name: 2}), sub)
})
})
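Since the question asks for either JavaScript or Python, here is the same map/branch approach sketched in Python (untested; the connection details are assumptions):
import rethinkdb as r

conn = r.connect("localhost", 28015)

# Rewrite the element whose id is "A"; pass the others through unchanged.
r.db("DB").table("TABLE").get(1).update({
    "subdocuments": r.row["subdocuments"].map(
        lambda sub: r.branch(sub["id"] == "A", sub.merge({"name": 2}), sub)
    )
}).run(conn)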

Dedup elasticsearch results using multiple fields as unique key

Similar questions have been asked before (see Remove duplicate documents from a search in Elasticsearch), but I haven't found a way to dedup using multiple fields as the "unique key". Here's a simple example to illustrate what I'm looking for:
Say this is our raw data:
{ "name": "X", "event": "A", "time": 1 }
{ "name": "X", "event": "B", "time": 2 }
{ "name": "X", "event": "B", "time": 3 }
{ "name": "Y", "event": "A", "time": 4 }
{ "name": "Y", "event": "C", "time": 5 }
I would essentially like to get the distinct event counts based on name and event. I want to avoid double-counting event B, which happened for the same name X twice, so the counts I'm looking for are:
event: A, count: 2
event: B, count: 1
event: C, count: 1
Is there a way to set up an agg query as seen in the related question? Another option I've considered is to index each object with a special key field (i.e. "X_A", "X_B", etc.) and then simply dedup on that field. I'm not sure which approach is preferable, but I'd rather not index the data with extra metadata.
You can specify a script in a terms aggregation in order to build a key out of multiple fields:
POST /test/dedup/_search
{
    "aggs": {
        "dedup": {
            "terms": {
                "script": "[doc.name.value, doc.event.value].join('_')"
            },
            "aggs": {
                "dedup_docs": {
                    "top_hits": {
                        "size": 1
                    }
                }
            }
        }
    }
}
This will provide the following bucket counts:
X_A: 1
X_B: 2
Y_A: 1
Y_C: 1
From these buckets you can derive the per-event distinct counts you asked for: A appears in two buckets (X_A, Y_A) while B and C each appear in one, giving A: 2, B: 1, C: 1.
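If you only need the per-event distinct counts rather than the deduplicated documents themselves, an alternative sketch is a terms aggregation on event with a cardinality sub-aggregation on name (this assumes both fields are indexed as exact values; cardinality is approximate at very high cardinalities, and the index/type names just follow the example above):
POST /test/dedup/_search
{
    "size": 0,
    "aggs": {
        "events": {
            "terms": { "field": "event" },
            "aggs": {
                "distinct_names": {
                    "cardinality": { "field": "name" }
                }
            }
        }
    }
}
For the sample data this returns A: 2, B: 1, C: 1 directly.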

RethinkDB - get range of values inside nested arrays

I am new to RethinkDB and am working with a data set whose rows look like this:
{
"data": {
"items": [
{
"name: "Foo",
"value": 20
},
{
"name: "Bar",
"value": 70
}
]
}
}
I would like to run a query that returns the range of item values across the entire dataset, considering only items whose name is "Foo".
Any help is appreciated.
For a single document you could write [r.row('data')('items')('value').min(), r.row('data')('items')('value').max()], though that does not yet restrict the items to name "Foo" or span the whole table; see the sketch below.
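A fuller sketch (untested; the db and table names are placeholders) that flattens every row's items, keeps only the "Foo" items, and takes the min and max of their values across the whole table:
var fooValues = r.db("DB").table("TABLE")
    .concatMap(function(doc){ return doc("data")("items") })
    .filter({name: "Foo"})
    .map(function(item){ return item("value") });

r.expr([fooValues.min(), fooValues.max()])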

RethinkDB: Equivalent for "select where field not in (items)"

I have a table that looks like this:
[
{ "name": "Alpha", "values": {
"someProperty": 1
}},
{ "name": "Beta", "values": {
"someProperty": 2
}},
{ "name": "Gamma", "values": {
"someProperty": 3
}}
]
I want to select all records where someProperty is not in some array of values (e.g., all records where someProperty not in [1, 2]). I want to get back complete records, not just the values of someProperty.
How should I do this with RethinkDB?
In Python it would be (note the r.not_ spelling, since not is a reserved word in Python, and that someProperty is nested under values in your documents):
table.filter(lambda doc: r.not_(r.expr([1, 2]).contains(doc["values"]["someProperty"])))
If the array comes from a subquery and you don't want to evaluate it multiple times:
subquery.do(lambda array:
    table.filter(lambda doc: r.not_(array.contains(doc["values"]["someProperty"]))))
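For reference, the same filter in JavaScript (a sketch under the same assumptions about the document shape):
table.filter(function(doc){
    return r.expr([1, 2]).contains(doc("values")("someProperty")).not()
})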
