Given this document:
{
"Country": {
"ISO-3166-1-Alpha-2": "AD" ,
"ISO-3166-1-Alpha-3": "AND" ,
"ISO-3166-1-Numeric": 20 ,
"ISO-3166-2": "ISO 3166-2:AD" ,
"LongNames": {
"en-us": "Andorra"
} ,
"ShortNames": {
"en-us": "Andorra"
} ,
"WebName": "Andorra"
} ,
"id": "AD"
}
What would be the right query to return just WebName?
I've tried using map(), but the results aren't what I expect:
r.db("main").table("countries").limit(1).map(function(r) {
return r.WebName;
});
RqlDriverError: Anonymous function returned `undefined`. Did you forget a `return`?
In JavaScript, (...) is the field selector.
r.db("main").table("countries").limit(1)('Country')('WebName')
I'm trying to set an attribute of a document inside an array to uppercase.
Here is an example document:
{
"_id": ObjectId("5e786a078bc3b3333627341e"),
"test": [
{
"itemName": "alpha305102992",
"itemNumber": ""
},
{
"itemName": "beta305102630",
"itemNumber": "P5000"
},
{
"itemName": "gamma305102633 ",
"itemNumber": ""
}]
}
I've already tried a lot of things.
private void NameElementsToUpper() {
AggregationUpdate update = AggregationUpdate.update();
//This one does not work
update.set("test.itemName").toValue(StringOperators.valueOf(test.itemName).toUpper());
//This one also
update.set(SetOperation.set("test.$[].itemName").withValueOfExpression("test.#this.itemName"));
//And every variant in between these two.
// ...
Query query = new Query();
UpdateResult result = mongoTemplate.updateMulti(query, update, aClass.class);
log.info("updated {} records", result.getModifiedCount());
}
I see that the Fields class in Spring Data hooks into the "$" character and behaves specially if you mention it. I can't seem to find the correct documentation.
EDIT: The following update works, but I can't seem to translate it into Spring Data MongoDB code:
db.collection.update({},
[
{
$set: {
"test": {
$map: {
input: "$test",
in: {
$mergeObjects: [
"$$this",
{
itemName: {
$toUpper: "$$this.itemName"
}
}
]
}
}
}
}
}
])
Any solutions?
Thanks!
For now I'm using the following, which does what I need. But a Spring Data way would be cleaner.
mongoTemplate.getDb().getCollection(mongoTemplate.getCollectionName(Application.class)).updateMany(
new BasicDBObject(),
Collections.singletonList(BasicDBObject.parse("""
{
$set: {
"test": {
$map: {
input: "$test",
in: {
$mergeObjects: [
"$$this",
{
itemName: { $toUpper: "$$this.itemName" }
}
]
}
}
}
}
}
"""))
);
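For reference, here is a possible Spring Data translation of the same pipeline, as an untested sketch: it assumes that AggregationUpdate's toValue() renders AggregationExpression values, and that the $map variable named "item" can be referenced as $$item in a hand-built Document expression.
import org.bson.Document;
import org.springframework.data.mongodb.core.aggregation.AggregationUpdate;
import org.springframework.data.mongodb.core.aggregation.VariableOperators;
import org.springframework.data.mongodb.core.query.Query;
import java.util.Arrays;

// $set test = $map(input: $test, as: item,
//                  in: $mergeObjects($$item, { itemName: $toUpper($$item.itemName) }))
AggregationUpdate update = AggregationUpdate.update().set("test").toValue(
    VariableOperators.mapItemsOf("test").as("item").andApply(
        // Hand-built expression for the $mergeObjects/$toUpper part,
        // which is what I could not get the fluent builders to produce.
        context -> new Document("$mergeObjects", Arrays.asList(
            "$$item",
            new Document("itemName", new Document("$toUpper", "$$item.itemName"))))));
mongoTemplate.updateMulti(new Query(), update, aClass.class);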
I have a use case where an API I'm calling to retrieve data to put into Elasticsearch is returning nulls.
I need to write an ingest pipeline that uses processors to remove all null fields before writing into Elasticsearch. The processors may or may not use Painless scripting.
Here is a sample payload that I currently get from the API:
{
"master_desc": "TESTING PART",
"date_added": "2019-10-24T09:30:03",
"master_no": {
"master_no": 18460110,
"barcode": "NLSKYTEST1-1",
"external_key": null,
"umid": null
}
}
The pipeline should ideally insert the document as -
{
"master_desc": "TESTING PART",
"date_added": "2019-10-24T09:30:03",
"master_no": {
"master_no": 18460110,
"barcode": "NLSKYTEST1-1"
}
}
Note, the fields are dynamic, so I can't write a processor that checks for nulls against a defined set of fields.
Thanks!
Null fields are neither indexed nor searchable. I have written the pipeline below to remove such fields; please test it on all of your scenarios before use. After posting documents through this pipeline, you won't be able to search null fields using "exists".
Pipeline:
PUT _ingest/pipeline/remove_null_fields
{
"description": "Remove any null field",
"processors": [
{
"script": {
"source": """
// return list of fields with null values
def loopAllFields(def x){
def ret=[];
if(x instanceof Map){
for (entry in x.entrySet()) {
if (entry.getKey().indexOf("_")==0) {
continue;
}
def val=entry.getValue();
if( val instanceof HashMap ||
val instanceof Map ||
val instanceof ArrayList)
{
def list=[];
if(val instanceof ArrayList)
{
def index=0;
// Call for each object in arraylist
for(v in val)
{
list=loopAllFields(v);
for(item in list)
{
ret.add(entry.getKey()+"["+index+"]."+ item);
}
index++;
}
}
else
{
list =loopAllFields(val);
}
if(list.size()==val.size())
{
ret.add(entry.getKey());
}
else{
for(item in list)
{
ret.add(entry.getKey()+"."+ item);
}
}
}
if(val==null)
{
ret.add(entry.getKey());
}
}
}
return ret;
}
/* remove fields from source; recursively deletes fields that are part of other fields */
def removeField(def ctx, def fieldname)
{
def pos=fieldname.indexOf(".");
if(pos>0)
{
def str=fieldname.substring(0,pos);
if(str.indexOf('[')>0 && str.indexOf(']')>0)
{
def s=str.substring(0,str.indexOf('['));
def i=str.substring(str.indexOf('[')+1,str.length()-1);
removeField(ctx[s][Integer.parseInt(i)],fieldname.substring(pos+1,fieldname.length()));
}
else
{
if(ctx[str] instanceof Map)
{
removeField(ctx[str],fieldname.substring(pos+1,fieldname.length()));
}
}
}else{
ctx.remove(fieldname);
}
return ctx;
}
def list=[];
list=loopAllFields(ctx);
for(item in list)
{
removeField(ctx,item);
}
"""
}
}
]
}
Post Document:
POST index8/_doc?pipeline=remove_null_fields
{
"master_desc": "TESTING PART",
"ddd":null,
"date_added": "2019-10-24T09:30:03",
"master_no": {
"master_no": 18460110,
"barcode": "NLSKYTEST1-1",
"external_key": null,
"umid": null
}
}
Result:
"hits" : [
{
"_index" : "index8",
"_type" : "_doc",
"_id" : "06XAyXEBAWHHnYGOSa_M",
"_score" : 1.0,
"_source" : {
"date_added" : "2019-10-24T09:30:03",
"master_no" : {
"master_no" : 18460110,
"barcode" : "NLSKYTEST1-1"
},
"master_desc" : "TESTING PART"
}
}
]
@Jaspreet, the script almost worked. However, it didn't eliminate empty objects, empty arrays, or empty values. Here is a doc I tried to index -
{
"master_desc": "TESTING PART",
"date_added": "2019-10-24T09:30:03",
"master_no": {
"master_no": 18460110,
"barcode": "NLSKYTEST1-1",
"external_key": null,
"umid": null
},
"remote_sync_state": "",
"lib_title_footage": [],
"prj_no": {
"prj_no": null,
"prj_desc": null,
}
The above returned -
{
"master_desc": "TESTING PART",
"date_added": "2019-10-24T09:30:03",
"master_no": {
"master_no": 18460110,
"barcode": "NLSKYTEST1-1"
},
"remote_sync_state": "",
"lib_title_footage": [ ],
"prj_no": { }
I tried updating the script so the condition also checks for these patterns, but unfortunately got a compile error.
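One way to extend the script, as an untested sketch: replace the if(val==null) check in loopAllFields with a broader emptiness test. The isEmpty helper below is hypothetical and not part of the original answer.
// Hypothetical Painless helper: treats null, blank strings, empty maps
// and empty lists as removable. Note that in Painless, == on object
// types calls equals(), and user functions must be declared at the
// top of the script.
def isEmpty(def val) {
  return val == null
      || (val instanceof String && val.trim().length() == 0)
      || (val instanceof Map && ((Map)val).isEmpty())
      || (val instanceof List && ((List)val).isEmpty());
}
Then use if(isEmpty(val)) wherever the script currently tests if(val==null).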
Are there any data types in GraphQL that can be used to describe a JSON Patch operation?
The structure of a JSON Patch operation is as follows.
{ "op": "add|replace|remove", "path": "/hello", "value": ["world"] }
Where value can be any valid JSON literal or object, such as:
"value": { "name": "michael" }
"value": "hello, world"
"value": 42
"value": ["a", "b", "c"]
op and path are always simple strings; value can be anything.
If you need to return arbitrary JSON, you can declare a custom JSON scalar, which can hold any JSON value you want to return.
Here is the schema:
`
scalar JSON
type Response {
status: Boolean
message: String
data: JSON
}
type Test {
value: JSON
}
type Query {
getTest: Test
}
type Mutation {
# If you want to call the mutation, pass the data as a JSON string (e.g. via JSON.stringify)
updateTest(value: JSON): Response
}
`
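Note that scalar JSON needs a backing implementation in the resolver map. With the graphql-type-json package (an assumption; any equivalent custom scalar works) it can be wired up like this:
const { makeExecutableSchema } = require('graphql-tools');
const GraphQLJSON = require('graphql-type-json');

const schema = makeExecutableSchema({
  typeDefs,            // the schema string above
  resolvers: {
    JSON: GraphQLJSON, // backs the `scalar JSON` declaration
    // ...plus the Query and Mutation resolvers below
  },
});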
In the resolver you can return anything in JSON format under the key "value":
//Query resolver
getTest: async (_, {}, { context }) => {
// return { "value": "hello, world" }
// return { "value": 42 }
// return { "value": ["a", "b", "c"] }
// return anything in json or string
return { "value": { "name": "michael" } }
},
// Mutation resolver
async updateTest(_, { value }, { }) {
// Pass data in JSON.stringify
// value : "\"hello, world\""
// value : "132456"
// value : "[\"a\", \"b\", \"c\"]"
// value : "{ \"name\": \"michael\" }"
console.log( JSON.parse(value) )
// JSON.parse returns the parsed data
return { status: true,
message: 'Test updated successfully!',
data: JSON.parse(value)
}
},
The only thing you need is to specifically return the "value" key, so it can be selected in the query and mutation.
Query
{
getTest {
value
}
}
// Which returns:
{
"data": {
"getTest": {
"value": {
"name": "michael"
}
}
}
}
Mutation
mutation {
updateTest(value: "{ \"name\": \"michael\" }") {
data
status
message
}
}
// Which returns:
{
"data": {
"updateTest": {
"data": null,
"status": true,
"message": "success"
}
}
}
I have to filter payloads like this in an Elasticsearch query:
{
"bestPrices": {
"cia1": {},
"cia2": {}
}
}
I must get only results like:
{
"bestPrices": {
"cia1": {
"gol": {
"price1": 799,
"price2": null,
"miles": 25000
}
},
"cia2": {
"gol": {
"price1": null,
"price2": null,
"miles": null
}
}
}
}
I'm trying the exists query, but it seems that it does not apply to this particular situation:
{
"exists": {
"field": "searchIntention.bestSalePrices.cia1"
}
}
I'm using Elasticsearch 6.1.
The Elasticsearch documentation for the exists query specifies that null, [], and [null] qualify as non-existent values. Therefore, I believe all other values, including an empty object ({}), would be considered non-null. If the gol member of the cia objects is always populated, you could try using exists on that field instead.
{
"exists": {
"field": "searchIntention.bestSalePrices.cia1.go1"
}
}
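If both cia1 and cia2 need to be populated, the two exists clauses can be combined in a bool filter. A sketch, with the field paths taken from the question and assuming the gol leaf is always present when the object is non-empty:
{
  "query": {
    "bool": {
      "filter": [
        { "exists": { "field": "searchIntention.bestSalePrices.cia1.gol" } },
        { "exists": { "field": "searchIntention.bestSalePrices.cia2.gol" } }
      ]
    }
  }
}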
I'm trying to insert the results of a query from one table into another table. However, when I attempt to run the query I am receiving an error.
{
"deleted": 0 ,
"errors": 1 ,
"first_error": "Expected type OBJECT but found ARRAY." ,
"inserted": 0 ,
"replaced": 0 ,
"skipped": 0 ,
"unchanged": 0
}
Here is the insert and query:
r.db('test').table('destination').insert(
r.db('test').table('source').map(function(doc) {
var result = doc('result');
return result('section_list').concatMap(function(section) {
return section('section_content').map(function(item) {
return {
"code": item("code"),
"name": item("name"),
"foo": result("foo"),
"bar": result("bar"),
"baz": section("baz"),
"average": item("average"),
"lowerBound": item("from"),
"upperBound": item("to")
};
});
});
})
);
Is there a special syntax for this, or do I have to retrieve the results and then run a separate insert?
The problem is that your inner query is returning a stream of arrays. You can't insert arrays into a table (only objects), so the query fails. If you change the outermost map into a concatMap it should work.
The problem here was that the result was a sequence of arrays of objects, i.e.
[ [ { a:1, b:2 }, { a:1, b:2 } ], [ { a:2, b:3 } ] ]
Therefore, I had to change the outer map call to a concatMap call. The query then becomes:
r.db('test').table('destination').insert(
r.db('test').table('source').concatMap(function(doc) {
var result = doc('result');
return result('section_list').concatMap(function(section) {
return section('section_content').map(function(item) {
return {
"code": item("code"),
"name": item("name"),
"foo": result("foo"),
"bar": result("bar"),
"baz": section("baz"),
"average": item("average"),
"lowerBound": item("from"),
"upperBound": item("to")
};
});
});
})
);
Thanks go to @AtnNn in #rethinkdb on Freenode for pointing me in the right direction.