JSON-generated object graph query using LINQ?

I'm working with a JSON file containing the following:
{
  "objects": [
    {
      "object": 1,
      "data": {
        "name": "object 1",
        "priority_threshold": "6000",
        "email_threshold": "2000"
      }
    },
    {
      "object": 3,
      "data": {
        "name": "object 3",
        "priority_threshold": "5000",
        "email_threshold": "2000"
      }
    },
    {
      "object": 5,
      "data": {
        "name": "object 5",
        "priority_threshold": "5000",
        "email_threshold": "1000"
      }
    },
    {
      "object": 6,
      "data": {
        "name": "object 6",
        "priority_threshold": "4000",
        "email_threshold": "2000"
      }
    }
  ]
}
The .json file is an embedded resource and is returned as a string.
From that string I deserialize the object graph using System.Web.Script.Serialization.JavaScriptSerializer:
Dictionary<string, object> toConfigGraph = (Dictionary<string, object>)toSerializer.DeserializeObject(psJsonString);
object[] toEventServiceConfig = (object[])toConfigGraph["objects"];
The problem I'm running into is that I only want to return the data for a particular object, looked up by its object ID, but I'm unsure of the best approach. I would like to implement a LINQ solution, but as of now I'm not even sure that will work, since toConfigGraph["objects"] returns an array of objects based on the structure of the JSON.
Any suggestions would be greatly appreciated.
I'd rather NOT have to iterate through the object array.
Thanks.
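One way that works is to filter the object array with LINQ and cast each element back to a dictionary: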

// Find the entry whose "object" id is 1, then pull out its "data" dictionary
Dictionary<string, object> toObj = (Dictionary<string, object>)toEventServiceConfig
    .Where(o => Int32.Parse(((Dictionary<string, object>)o)["object"].ToString()) == 1)
    .First();
Dictionary<string, object> toData = (Dictionary<string, object>)toObj["data"];
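Note that First() throws if nothing matches; using FirstOrDefault() and checking for null is safer when the ID might be absent.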

Related

Defining an array of json docs in Elasticsearch Painless Lab

I'm trying to define some docs in ES Painless Lab so I can test some logic there before running it on the actual index, but I can't figure out how to do it, and the docs are not helping either. There is very little documentation on the actual syntax, and it's not much help for someone with no Java background.
If I try to define a doc like this:
def docs = [{ "id": 1, "name": "Apple" }];
I get an error:
Unhandled Exception illegal_argument_exception
invalid sequence of tokens near ['{'].
Stack:
[
"def docs = [{ \"id\": 1, \"name\": \"Apple ...",
" ^---- HERE"
]
If I want to do it the Java way:
String message;
JSONObject json = new JSONObject();
json.put("test1", "value1");
message = json.toString();
I'm also getting an error:
Unhandled Exception illegal_argument_exception
invalid declaration: cannot resolve type [JSONObject]
Stack:
[
"... ring message;\nJSONObject json = new JSONObject();\n ...",
" ^---- HERE"
]
So what's the proper way to define an array of json objects to play with in Painless Lab?
After more experimenting, I found out that the docs can be passed in the parameters tab as:
{
  "docs": [
    { "id": 1, "name": "Apple" },
    { "id": 2, "name": "Pear" },
    { "id": 3, "name": "Pineapple" }
  ]
}
and then access it from the code as
def doc = params.docs[1];
return doc["name"];
I'd still be interested in how to define an object or array in the code itself.
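For what it's worth, Painless does have list and map initializer literals, so something like this should also work directly in the code panel (a sketch based on the Painless initializer syntax; I haven't verified it in Painless Lab):
def docs = [
    ['id': 1, 'name': 'Apple'],
    ['id': 2, 'name': 'Pear'],
    ['id': 3, 'name': 'Pineapple']
];
return docs[1]['name']; // "Pear"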

Jackson deserialization with Spring Boot: get the field names present in the request along with their field mapping

I have a requirement to throw different errors for different scenarios like the ones below, and there are many such fields, not just one.
e.g.
{
  "id": 1,
  "name": "nameWithSpecialChar$"
}
Here it should throw an error for the special character.
{
  "id": 1,
  "name": null
}
Here it should throw a field-null error.
{
  "id": 1
}
Here it should throw a field-missing error.
Handling the 1st and 2nd scenarios is easy, but for the 3rd one: is there any way, with Jackson, to get a list of the field names that were actually present in the input JSON at deserialization time?
One way I am able to do it is by mapping the request to a JsonNode, checking whether nodes are present for the required fields, then deserializing the JsonNode manually and validating the rest of the members, as below:
public ResponseEntity<?> myGetRequest(@RequestBody JsonNode requestJsonNode) {
    if (!requestJsonNode.has("name")) {
        // "throw some error" in the original; any exception mapped to your error response works
        throw new IllegalArgumentException("name is missing");
    }
    // objectMapper is a shared com.fasterxml.jackson.databind.ObjectMapper instance
    MyRequest request = objectMapper.convertValue(requestJsonNode, MyRequest.class);
    validateIfFieldsAreInvalid(request);
    // ...
}
But I do not like this approach; is there any other way of doing it?
You can define a JSON schema and validate your object against it. In your example, your schema may look like this:
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "id": {
      "description": "The identifier",
      "type": "integer"
    },
    "name": {
      "description": "The item name",
      "type": "string",
      "pattern": "^[a-zA-Z]*$"
    }
  },
  "required": [ "id", "name" ]
}
This schema covers all three scenarios: "required" catches a missing name, "type": "string" rejects null (null has JSON type "null"), and "pattern" rejects special characters. To validate your object, you could use the json-schema-validator library. This library is built on Jackson, and since you're using Spring Boot anyway, you already have Jackson imported.
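For reference, a Maven dependency along these lines pulls it in (a sketch; the coordinates have changed over time, so double-check the current ones):
<dependency>
    <groupId>com.github.java-json-tools</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>2.2.14</version>
</dependency>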
The example code looks more or less like this:
// Imports for json-schema-validator (package names as of version 2.2.x)
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

String schema = "<define your schema here>";
String data = "<put your data here>";

JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
ObjectMapper m = new ObjectMapper();
JsonSchema jsonSchema = factory.getJsonSchema(m.readTree(schema));
JsonNode json = m.readTree(data);
ProcessingReport report = jsonSchema.validate(json);
System.out.println(report);
The report includes detailed errors for different input cases. For example, with this input
{
  "id": 1,
  "name": "nameWithSpecialChar$"
}
this output is printed out
--- BEGIN MESSAGES ---
error: ECMA 262 regex "^[a-zA-Z]*$" does not match input string "nameWithSpecialChar$"
level: "error"
schema: {"loadingURI":"#","pointer":"/properties/name"}
instance: {"pointer":"/name"}
domain: "validation"
keyword: "pattern"
regex: "^[a-zA-Z]*$"
string: "nameWithSpecialChar$"
--- END MESSAGES ---
Or, instead of just printing the report, you can loop through all the errors and apply your specific logic:
for (ProcessingMessage message : report) {
// Add your logic here
}
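For example, something along these lines (a sketch; the accessor names are from json-schema-validator 2.2.x, so verify them against your version):
for (ProcessingMessage message : report) {
    if (message.getLogLevel() == LogLevel.ERROR) {
        // asJson() exposes the structured fields printed above ("keyword", "instance", ...)
        String keyword = message.asJson().path("keyword").asText();
        if ("pattern".equals(keyword)) {
            // special-character case
        } else if ("type".equals(keyword)) {
            // null case: null fails the "type": "string" check
        } else if ("required".equals(keyword)) {
            // missing-field case
        }
    }
}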
You could check the example code to gain more information about how to use the library.

Group by field, sort and get the first (or last, whatever) items of the group in MongoDB (with Spring Data)

I have the following entity (getters, setters and constructor omitted):
public class Event {
    @Id
    private String id;
    private String internalUuid;
    private EventType eventType;
}
EventType is an enum containing arbitrary event types:
public enum EventType {
    ACCEPTED,
    PROCESSED,
    DELIVERED;
}
My problem is that I have a collection with a lot of events, some having the same internalUuid but different statuses. I need to get a list of Events, with each Event representing the 'newest' status (ordering by EventType would suffice). Currently, I'm just fetching everything, grouping into separate lists in code, sorting each list by EventType, and then creating a new list with the first element of each.
An example follows.
Data in the collection:
{ "id": "1", "internalUuid": "1", "eventType": "ACCEPTED" },
{ "id": "2", "internalUuid": "1", "eventType": "PROCESSED" },
{ "id": "3", "internalUuid": "1", "eventType": "DELIVERED" },
{ "id": "4", "internalUuid": "2", "eventType": "ACCEPTED" },
{ "id": "5", "internalUuid": "2", "eventType": "PROCESSED" },
{ "id": "6", "internalUuid": "3", "eventType": "ACCEPTED" }
Output of the query (any order would be ok):
[
{ "id": "3", "internalUuid": "1", "eventType": "DELIVERED" },
{ "id": "5", "internalUuid": "2", "eventType": "PROCESSED" },
{ "id": "6", "internalUuid": "3", "eventType": "ACCEPTED" }
]
It is not guaranteed that a "higher" status also has a "higher" ID.
How do I do that without doing the whole process by hand? I literally have no idea where to start, as I'm very new to MongoDB, and I haven't found anything on Google that helped. I'm using Spring Boot and Spring Data.
Thanks!
Okay, I think I have figured it out (thanks to Joe's comment). I'm not 100% sure the code is correct, but it seems to do what I want, and I'm open to improvements.
(I had to add a priority field to Event and EventType, because sorting by eventType would do string-based (alphabetical) sorting on the enum's name):
private List<Event> findCandidates() {
    // First, 'match' so that all documents are found
    final MatchOperation getAll = Aggregation.match(new Criteria("_id").ne(null));
    // Then sort by priority
    final SortOperation sort = Aggregation.sort(Sort.by(Sort.Direction.DESC, "priority"));
    // After that, group by internalUuid and push the full document ($$ROOT) so it isn't lost in the next step
    final GroupOperation groupByUuid = Aggregation.group("internalUuid").push("$$ROOT").as("events");
    // Get the first element of each sorted, grouped list (I'm not fully sure what the 'internalUuid' parameter does here and whether I could change it)
    final ProjectionOperation getFirst = Aggregation.project("internalUuid").and("events").arrayElementAt(0).as("event");
    // We're nearly done! The only thing left is to map back to Event so .getMappedResults() yields a usable List<Event>
    final ProjectionOperation map = Aggregation.project("internalUuid")
            .and("event._id").as("_id")
            .and("event.internalUuid").as("internalUuid")
            .and("event.eventType").as("eventType")
            .and("event.priority").as("priority");
    final Aggregation aggregation = Aggregation.newAggregation(getAll, sort, groupByUuid, getFirst, map);
    final AggregationResults<Event> aggregationResults =
            mongoTemplate.aggregateAndReturn(Event.class).by(aggregation).all();
    return aggregationResults.getMappedResults();
}
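For reference, the pipeline built above corresponds roughly to these stages in mongo shell syntax (a sketch):
[
  { "$match": { "_id": { "$ne": null } } },
  { "$sort": { "priority": -1 } },
  { "$group": { "_id": "$internalUuid", "events": { "$push": "$$ROOT" } } },
  { "$project": { "event": { "$arrayElemAt": [ "$events", 0 ] } } },
  { "$project": { "_id": "$event._id", "internalUuid": "$event.internalUuid",
                  "eventType": "$event.eventType", "priority": "$event.priority" } }
]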

Trouble Deserializing Object Pulled from SQS using GSON

I have a lambda function that receives an S3Event object when a file is put into an S3 bucket. When the lambda fails, the message goes to a dead-letter queue set up in Amazon SQS.
When I pull these messages, this is the body:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "d",
      "eventName": "d:Put",
      "userIdentity": {
        "principalId": ""
      },
      "requestParameters": {
        "sourceIPAddress": "2"
      },
      "responseElements": {
        "x-amz-request-id": "",
        "x-amz-id-2": "g"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "",
        "bucket": {
          "name": "",
          "ownerIdentity": {
            "principalId": ""
          },
          "arn": ""
        },
        "object": {
          "key": "",
          "size": 12502,
          "eTag": "",
          "sequencer": ""
        }
      }
    }
  ]
}
That looks quite a bit like the S3Event object, which contains a list of S3EventNotification records. I have tried to deserialize it to the S3Event object using the following:
S3Event event = new GsonBuilder().serializeNulls().create().fromJson(s3EventString, S3Event.class);
This results in a null object like so:
{"records":null}
I noticed that in the JSON returned from SQS, the "R" in Records is capitalized. I wasn't sure if that made a difference, so I changed it to a lowercase "r", and then it throws this error:
java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING
I'm really not sure what type of object this actually is.
Any help would be greatly appreciated.
Strange. Using Jackson it works perfectly, so I will use this for now. (Most likely because the AWS event model classes are annotated for Jackson, e.g. @JsonProperty("Records"); Gson ignores those annotations, which would explain why {"records":null} comes back.)
import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.amazonaws.services.sqs.model.Message;

private S3Event extractS3Event(Message message) throws IOException {
    ObjectMapper objectMapper = new ObjectMapper();
    return objectMapper.readValue(message.getBody(), S3Event.class);
}
//Then to get the S3 Details
S3Event event = extractS3Event(message);
S3Entity entity = event.getRecords().get(0).getS3();
String bucketName = entity.getBucket().getName();
String s3Key = entity.getObject().getKey();
Re: "Expected BEGIN_OBJECT but was STRING": this is because AWS uses Joda-Time for eventTime. You can avoid the issue by removing the field from the JSON text (assuming you do not need it).
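If you do need eventTime, one possible workaround is to register a Gson type adapter for it (a sketch; it assumes the model's field is org.joda.time.DateTime, and it only addresses this one error, not the "Records" casing issue above):
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializer;
import org.joda.time.DateTime;

// Teach Gson how to turn the ISO-8601 string into a Joda DateTime
Gson gson = new GsonBuilder()
        .registerTypeAdapter(DateTime.class, (JsonDeserializer<DateTime>)
                (json, type, ctx) -> DateTime.parse(json.getAsString()))
        .create();
S3Event event = gson.fromJson(s3EventString, S3Event.class);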

Error while updating nested field

Hi, I am using the Elasticsearch Java API to update a document with a script, but I am getting the exception below:
Exception in thread "main" MapperParsingException[object mapping for [content] tried to parse field [content] as object, but found a concrete value]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:215)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)
at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:438)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:264)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:580)
at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:559)
at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:211)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Below is the existing document in ES:
{
  "_index": "index1",
  "_type": "type1",
  "_id": "1",
  "_version": 8,
  "found": true,
  "_source": {
    "content": {
      "contentId": 1,
      "metadata": {
        "title": "content one",
        "duration": 4500
      }
    },
    "custom": {
      "field1": "value1"
    }
  }
}
I would like to update the "content" field as below
"content": {
"contentId": 1,
"metadata": {
"duration": 900
}
}
When I update via a REST call (localhost:9200/index1/type1/1/_update), it works fine. I only get the error with the Java API's prepareUpdate.
I have three Java classes:
The DTO class has a Content object.
The Content class has a Metadata object and a contentId (long).
The Metadata class has a title (String) and a duration (long).
Below is the code to update
Map<String, Object> params = new HashMap<>();
params.put("contentScript", dto.toString());
Script s = new Script("ctx._source.content=contentScript", ScriptType.INLINE, null, params);
UpdateResponse resp = client.prepareUpdate("index1", "type1", "1").setScript(s).setScriptedUpsert(true).get();
dto is an instance of the DTO class, with its values set accordingly.
Please help.
params.put("contentScript", dto.toString());
You are passing a string where an object is expected. The code below might help:
String script = "ctx._source.pete = jsonMap";
// Deserialize the JSON string into a Map so the script parameter is an object rather than a string
Map<String, Object> jsonMap = new ObjectMapper().readValue(json, HashMap.class);
Map<String, Object> params = ImmutableMap.of("jsonMap", jsonMap);
return new Script(script, ScriptService.ScriptType.INLINE, null, params);
https://discuss.elastic.co/t/how-to-update-nested-objects-in-elasticsearch-2-2-script-via-java-api/43135/2
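Applied to the question's code, that would look roughly like this (a sketch; it assumes a dto.getContent() getter, Jackson on the classpath, and the same ES 2.x Script API as above):
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptService;

// Serialize the DTO's content to a Map so the script parameter is an object, not a string
Map<String, Object> contentMap = new ObjectMapper().convertValue(dto.getContent(), Map.class);
Map<String, Object> params = ImmutableMap.of("contentScript", contentMap);
Script s = new Script("ctx._source.content = contentScript",
        ScriptService.ScriptType.INLINE, null, params);
client.prepareUpdate("index1", "type1", "1").setScript(s).setScriptedUpsert(true).get();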
