Javers QueryBuilder for anyDomainObject with commit property filter not working

The Javers JQL query to get all domain objects returns an empty list.
I've written a wrapper REST API that exposes the Javers commit and getAllShadows APIs as below.
@PutMapping("/commit")
public <T> CommitEntity<T> commit(@RequestBody CommitEntity<T> committedObject);

@GetMapping("/getEntityShadows")
public List<EntityShadow> getEntityShadows(@RequestParam(name = "entityId") String entityId);
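Behind the commit endpoint, the implementation passes the commit properties through to Javers roughly like this (a sketch; the CommitEntity accessors shown are assumed, not the actual wrapper code):

@PutMapping("/commit")
public <T> CommitEntity<T> commit(@RequestBody CommitEntity<T> committedObject) {
    // javers.commit(author, object, properties) persists the given map as
    // commit properties; that's where the "entityId" key seen in the stored
    // document below comes from.
    javers.commit(committedObject.getAuthor(),
                  committedObject.getEntity(),
                  committedObject.getCommitProperties());
    return committedObject;
}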
Now when I use the commit API (above), I'm able to commit my domain object to the repository (MongoDB).
A sample of the stored document is below:
{
  "_id" : ObjectId("5c5f6fb51ebaa93b96edadc8"),
  "commitMetadata" : {
    "author" : "UserFName UserLname",
    "properties" : [
      {
        "key" : "entityId",
        "value" : "user001/US"
      }
    ],
    "commitDate" : "2019-02-09T16:26:29.543",
    "commitDateInstant" : "2019-02-10T00:26:29.543Z",
    "id" : NumberLong(8440229536252376064)
  },
  "globalId" : {
    "valueObject" : "org.javers.core.graph.LiveGraphFactory$MapWrapper"
  },
  "state" : {
    "map" : {
      "userId" : {
        "id" : "user001",
        "locale" : "US"
      },
      "createdDate" : "2019-02-08T22:16:58",
      "Name" : "User Fname",
      "address" : {
        "state" : "CA",
        "country" : "US"
      },
      "authorName" : "UserFName UserLname",
      "lastModifiedBy" : "2019-02-09T16:26:29"
    }
  },
  "changedProperties" : [
    "map"
  ],
  "type" : "INITIAL",
  "version" : NumberLong(1),
  "globalId_key" : "org.javers.core.graph.LiveGraphFactory$MapWrapper/"
}
Now when I try to get all the shadows as below, I get back an empty list. I expected to get all the shadows from the repo.
JqlQuery jqlQuery = QueryBuilder.anyDomainObject().withCommitProperty("entityId", "user001/US").build();
List<Shadow<Object>> shadows = javers.findShadows(jqlQuery);
Am I missing anything here?
I also tried to get the shadows without any filter, as below, and still got back an empty list:
JqlQuery jqlQuery = QueryBuilder.anyDomainObject().build();
List<Shadow<Object>> shadows = javers.findShadows(jqlQuery);
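For reference, here is a minimal, self-contained Javers round trip for which these queries do return results, using an in-memory repository (the Person entity is illustrative, not my actual domain object):

import org.javers.core.Javers;
import org.javers.core.JaversBuilder;
import org.javers.core.metamodel.annotation.Id;
import org.javers.repository.jql.JqlQuery;
import org.javers.repository.jql.QueryBuilder;
import org.javers.shadow.Shadow;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JaversRoundTrip {

    // Illustrative entity, not the question's actual domain object.
    static class Person {
        @Id String login;
        String name;
        Person(String login, String name) { this.login = login; this.name = name; }
    }

    public static void main(String[] args) {
        Javers javers = JaversBuilder.javers().build(); // in-memory repository

        // Commit together with the commit property the query filters on.
        Map<String, String> props = new HashMap<>();
        props.put("entityId", "user001/US");
        javers.commit("UserFName UserLname", new Person("user001", "User Fname"), props);

        // Same query as in the question; here it returns one shadow.
        JqlQuery jqlQuery = QueryBuilder.anyDomainObject()
                .withCommitProperty("entityId", "user001/US")
                .build();
        List<Shadow<Object>> shadows = javers.findShadows(jqlQuery);
    }
}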

Related

Dynamic document loading with @DocumentReference for Spring Data MongoDB

Is there a way to stop loading the referenced documents marked with @DocumentReference during an API call, since the DB calls get heavy when there are multiple referenced documents? Or is there a better way to do this? I have tried @DBRef, but that won't solve the problem, as the existing database looks something like this:
{
  "_id" : "833f7d",
  "name" : "Del Rey Books",
  "arconym" : "DRB",
  "foundationYear" : 1977,
  "books" : [
    NumberLong(300000),
    NumberLong(395652)
  ]
}
and @DBRef would make the DB entry look something like this if I save new data (therefore it won't solve the problem):
{
  "_id" : "833f7d",
  "name" : "Del Rey Books",
  "arconym" : "DRB",
  "foundationYear" : 1977,
  "books" : [
    {
      "$ref" : "book",
      "$id" : 300000
    },
    {
      "$ref" : "book",
      "$id" : 395652
    }
  ]
}
The domain class looks something like this:
class Publisher {
    // ...
    @DocumentReference(lazy = true)
    List<Book> books;
}
The GET API request fetches the whole referenced document in the response body, but along with it also adds the keys (i) target, containing the same referenced document all over again, and (ii) source, containing the id of the referenced object again. Something like this:
{
  "_id" : "833f7d",
  "name" : "Del Rey Books",
  "arconym" : "DRB",
  "foundationYear" : 1977,
  "books" : [
    {
      "id" : 300000,
      "isbn13" : "978-0345503800",
      "title" : "The Warded Man",
      "pages" : 432,
      "target" : {
        "id" : 300000,
        "isbn13" : "978-0345503800",
        "title" : "The Warded Man",
        "pages" : 432
      },
      "source" : 300000
    },
    {
      "id" : 395652,
      "isbn13" : "978-0345503800",
      "title" : "The Sailor",
      "pages" : 420,
      "target" : {
        "id" : 395652,
        "isbn13" : "908-093830380",
        "title" : "The Sailor",
        "pages" : 420
      },
      "source" : 395652
    }
  ]
}
What is the significance of target and source appearing all over again in the response body?
Also, is there any other way to have references but not load them in the response body during an API call? I was expecting (lazy = true) to do so, but it doesn't.
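One possible direction (a sketch under assumptions, assuming Jackson is the serializer; not a confirmed fix): keep the lazy @DocumentReference for persistence but hide the field from JSON, so serializing the GET response never touches, and therefore never resolves, the lazy proxy:

import java.util.List;
import com.fasterxml.jackson.annotation.JsonIgnore;
import org.springframework.data.mongodb.core.mapping.DocumentReference;

class Publisher {
    // ...

    // Hidden from the JSON response, so serialization never triggers
    // resolution of the lazy proxy; the books are only loaded when
    // service code actually reads this field.
    @JsonIgnore
    @DocumentReference(lazy = true)
    List<Book> books;
}

If the response still needs the raw ids, a separate DTO carrying only the id list would avoid touching the reference at all.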

Enriching documents in ElasticSearch with only matching nested elements by ID

We're creating some packages, but that process is currently rather slow because of the sheer amount of data being sent between microservices. Therefore, I have pruned the information being sent between those microservices and instead want to enrich the documents with the necessary information directly from within ElasticSearch. This gives documents of the following shape:
{
  "_index" : "packages-2022.02.28",
  "_type" : "_doc",
  "_id" : "SG_DH-8019-ao-74783-20220315-12",
  "_score" : 1.0,
  "_source" : {
    "id" : "SG_DH-8019-ao-74783-20220315-12",
    "updatedOn" : "2022-02-28T14:45:57.7511562+01:00",
    "code" : "SG",
    "createdDate" : "2022-02-28T15:17:48.2571391+01:00",
    "content" : {
      "contentId" : "74783",
      "units" : [
        {
          "id" : "HB_DBL.ST_RO_NFP",
          "globalId" : "74783_HB_DBL.ST_RO_NFP",
          "globalIntId" : -592692223,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.ST_BB_NFP",
          "globalId" : "74783_HB_DBL.ST_BB_NFP",
          "globalIntId" : 446952442,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.ST_AI_NFP",
          "globalId" : "74783_HB_DBL.ST_AI_NFP",
          "globalIntId" : -1174348304,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.SU_RO_NFP",
          "globalId" : "74783_HB_DBL.SU_RO_NFP",
          "globalIntId" : -2111509049,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.SU_BB_NFP",
          "globalId" : "74783_HB_DBL.SU_BB_NFP",
          "globalIntId" : 307969427,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.SU_AI_NFP",
          "globalId" : "74783_HB_DBL.SU_AI_NFP",
          "globalIntId" : 1418623211,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.PO-1_RO_NFP",
          "globalId" : "74783_HB_DBL.PO-1_RO_NFP",
          "globalIntId" : 1328251159,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.PO-1_BB_NFP",
          "globalId" : "74783_HB_DBL.PO-1_BB_NFP",
          "globalIntId" : -1228155826,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.PO-1_AI_NFP",
          "globalId" : "74783_HB_DBL.PO-1_AI_NFP",
          "globalIntId" : 749215308,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.OF_RO_NFP",
          "globalId" : "74783_HB_DBL.OF_RO_NFP",
          "globalIntId" : 1981865239,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.OF_BB_NFP",
          "globalId" : "74783_HB_DBL.OF_BB_NFP",
          "globalIntId" : 545563435,
          "forPackaging" : false
        },
        {
          "id" : "HB_DBL.OF_AI_NFP",
          "globalId" : "74783_HB_DBL.OF_AI_NFP",
          "globalIntId" : -481310774,
          "forPackaging" : false
        }
      ],
      "duration" : {
        "value" : 12,
        "durationType" : "Day"
      }
    },
    "generatedInfo" : {
      "productGroupName" : null,
      "subProductGroupName" : "Foo",
      "version" : 0
    }
  }
}
with information from an enrich policy's index of the shape (when queried):
{
  "_index" : ".enrich-package-enrich-1646044129711",
  "_type" : "_doc",
  "_id" : "zt_gP38BZeMUiw0-LxLa",
  "_score" : 1.0,
  "_source" : {
    "contentId" : "365114",
    "name" : "PackageName",
    "board" : [
      "B1",
      "B2"
    ],
    "units" : [
      {
        "price" : [
          {
            "margin" : 0,
            "combination" : 10000,
            "value" : 189030,
            "currency" : "EUR"
          }
        ],
        "id" : "W2M_AX2_SC_NFP",
        "globalId" : "365114_W2M_AX2_SC_NFP",
        "globalIntId" : -988330164,
        "name" : "UnitName",
        "prop1" : "Foo",
        "prop2" : "Bar"
      }
    ]
  }
}
I originally could get this working. However, when enriching, I only want to keep the units with the same global ID as those in the document being saved. To this end, I have also tried enriching each unit with a simple Enrich processor and a ForEach processor referencing the enrich policy, matching on globalId, and have even attempted matching on its hash code globalIntId (although even in the latter case I would often get the error that it 'is not an integer', even though it clearly is one). This separate enrich-policy index has a shape similar to the following:
{
  "_index" : ".enrich-package-unit-enrich-1646044158417",
  "_type" : "_doc",
  "_id" : "dN_gP38BZeMUiw0-t2Io",
  "_score" : 1.0,
  "_source" : {
    "units" : [
      {
        "price" : [
          {
            "margin" : 0,
            "combination" : 10000,
            "value" : 189030,
            "currency" : "EUR"
          }
        ],
        "globalId" : "365114_W2M_AX2_SC_NFP",
        "globalIntId" : -988330164,
        "name" : "UnitName",
        "prop1" : "Foo",
        "prop2" : "Bar",
        "id" : "W2M_AX2_SC_NFP"
      }
    ]
  }
}
I have also tried to use a Painless script, but so far my experience hasn't been exactly painless (pun intended): every way I've tried to access the data has ended in compilation errors. Also, given that I'm working on making this process faster, I'm a bit worried about performance if I do get it to work. I've read that Painless is fast, yet I've also heard it's actually fairly slow (compared to using processors, I think, not necessarily compared to other scripts).
Now, I'm at a loss about how to get this to work. I would prefer to do this without scripting if possible. However, if it is only possible using scripting, that's okay as long as the performance is acceptable. I'm using Elastic 7.12.
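To make the intended result concrete, this is what I want the enrichment to do, expressed in plain Java (the Unit type and its accessors are purely illustrative; packageUnits stands for the document's content.units and enrichedUnits for the units coming from the enrich index):

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Keep only the enriched units whose globalId also occurs in the
// package document's own unit list; drop all the others.
Set<String> wantedIds = packageUnits.stream()
        .map(Unit::getGlobalId)
        .collect(Collectors.toSet());
List<Unit> keptUnits = enrichedUnits.stream()
        .filter(unit -> wantedIds.contains(unit.getGlobalId()))
        .collect(Collectors.toList());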
Update 1:
I'm creating the enrich policy from C# using Nest like so:
var enrichPolicyRequest = new PutEnrichPolicyRequest(enrichPolicyName)
{
    Match = new MyPackageBedEnrichPolicy(index)
};
var putEnrichPolicyResponse = await elasticClient.Enrich.PutPolicyAsync(enrichPolicyRequest);
var executeEnrichPolicyResponse = await elasticClient.Enrich.ExecutePolicyAsync(enrichPolicyName);
...
public class MyPackageBedEnrichPolicy : IEnrichPolicy
{
    public MyPackageBedEnrichPolicy(string index)
    {
        Indices = index;
        MatchField = "contentId";
        EnrichFields = new[] { "name", "board", "units" };
    }

    public Indices Indices { get; set; }
    public Field MatchField { get; set; }
    public Fields EnrichFields { get; set; }
    public string Query { get; set; }
}
and the index for the units very similarly, but with
public class MyPackageUnitEnrichPolicy : IEnrichPolicy
{
    public MyPackageUnitEnrichPolicy(string index)
    {
        Indices = index;
        MatchField = "units.globalId";
        EnrichFields = new[] { "units" };
    }
    ...
For now, I have created the ingest processors in Kibana for easier prototyping, though I will have to take care of that using Nest later as well. This is the definition of the ingest pipeline in JSON:
[
  {
    "enrich": {
      "field": "content.contentId",
      "policy_name": "enrichPolicyName",
      "target_field": "enrichTest"
    }
  },
  {
    "foreach": {
      "field": "content.units.globalId",
      "processor": {
        "enrich": {
          "field": "content.units.globalId",
          "policy_name": "unitEnrichPolicyName",
          "target_field": "enrichTest.units",
          "tag": "enrich-units-on-globalId-processor"
        }
      }
    }
  }
]

Mongoose + GraphQL (Apollo Server) Schema

We have a DB collection which is a little complicated. Many of our keys are JSON objects whose fields aren't fixed and change based on input given by the user on the UI. How should we write a Mongoose and GraphQL schema for such a complex type?
{
  "_id" : ObjectId("5ababb359b3f180012762684"),
  "item_type" : "Sample",
  "title" : "This is sample title",
  "sub_title" : "Sample sub title",
  "byline" : "5c6ed39d6ed6def938b71562",
  "lede" : "Sample description",
  "promoted" : "",
  "slug" : [
    "myurl"
  ],
  "categories" : [
    "Technology"
  ],
  "components" : [
    {
      "type" : "Slide",
      "props" : {
        "description" : {
          "type" : "",
          "props" : {
            "value" : "Sample value"
          }
        },
        "subHeader" : {
          "type" : "",
          "props" : {
            "value" : ""
          }
        },
        "ButtonWorld" : {
          "type" : "a-button",
          "props" : {
            "buttonType" : "product",
            "urlType" : "Internal Link",
            "isServices" : false,
            "title" : "Hello World",
            "authors" : [
              {
                "__dataID__" : "Qm9va0F1dGhvcjo1YWJhYjI0YjllNDIxNDAwMTAxMGNkZmY=",
                "_id" : null,
                "First_Name" : "John",
                "Last_Name" : "Doe",
                "Display_Name" : "John Doe",
                "Slug" : "john-doe",
                "Role" : 1
              }
            ],
            "isbns" : [
              "9781497603424"
            ],
            "image" : "978-cover.jpg",
            "price" : "8.99",
            "bisacs" : [],
            "customCategories" : []
          }
        },
        "salePrice" : {
          "type" : "",
          "props" : {
            "value" : ""
          }
        }
      }
    }
  ],
  "tags" : [
    {
      "id" : "5abab58042e2c90011875801",
      "name" : "Tag Test 1"
    },
    {
      "id" : "5abab5831242260011c248f9",
      "name" : "Tag Test 2"
    },
    {
      "id" : "592450e0b1be5055278eb5c6",
      "name" : "horror"
    },
    {
      "id" : "59244a96b1be5055278e9b0e",
      "name" : "Special Report",
      "_id" : "59244a96b1be5055278e9b0e"
    }
  ],
  "created_at" : ISODate("2018-03-27T21:44:21.412Z"),
  "created_by" : ObjectId("591345bda429e90011c1797e")
}
I believe Mongoose has a Mixed type, but how do I represent such a complex type in the Apollo GraphQL server and in the Mongoose schema? Also, my resolver is currently just models.product.find(), so I need to understand what updates such a complex type requires in my resolver.
It would be great to get a complete solution covering the Apollo GraphQL schema, the Mongoose schema, and the resolver for my data.
Finally found a solution to the problem.
You can declare a new type and reference it in the typeDef for the GraphQL schema.
In the Mongoose model, you can reference it as { type: Array }.

Spring Boot MongoDB find records by value in list

I have a collection with news in my MongoDB:
{
  "_id" : ObjectId("593a97cdb17cc6535522d16a"),
  "title" : "Title",
  "text" : "Test",
  "data" : "9.06.2017, 14:39:33",
  "author" : "Admin",
  "categoryList" : [
    {
      "_id" : null,
      "text" : "category1"
    },
    {
      "_id" : null,
      "text" : "category2"
    },
    {
      "_id" : null,
      "text" : "category3"
    }
  ]
}
Every news record has a list of categories. I would like to find all news that have category1 in categoryList. I tried
newsRepository.findByCategoryList("category1"); but it is not working.
How can I do that?
With your current repository method, the generated query is
{ "categoryList" : "category1" }
What you need is
{ "categoryList.text" : "category1" }
You can create the query in two ways.
Using a derived repository method:
findByCategoryListText(String category)
Using the @Query annotation:
@Query("{'categoryList.text': ?0}")
findByCategoryList(String category)
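Put together, a minimal repository interface offering both variants could look like this (the News entity and repository names are assumed):

import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface NewsRepository extends MongoRepository<News, String> {

    // Derived method: Spring Data generates { "categoryList.text" : ?0 }
    List<News> findByCategoryListText(String category);

    // Same query, declared explicitly; the method name is then free
    List<News> findByCategoryList(@Query("{'categoryList.text': ?0}") String category);
}

Calling newsRepository.findByCategoryListText("category1") then returns every news document with that category.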

Update object in array with new fields mongodb

I have a MongoDB document in which horses is an array of objects with id, name, and type:
{
  "_id" : 33333333333,
  "horses" : [
    {
      "id" : 72029,
      "name" : "Awol",
      "type" : "flat"
    },
    {
      "id" : 822881,
      "name" : "Give Us A Reason",
      "type" : "flat"
    },
    {
      "id" : 826474,
      "name" : "Arabian Revolution",
      "type" : "flat"
    }
  ]
}
I need to add new fields to one of the array elements.
I thought something like this would work, but it didn't:
horse = {
    "place" : 1,
    "body" : 11
}
Card.where({'_id' => 33333333333}).find_and_modify({'$set' => {'horses.' + index.to_s => horse}}, upsert: true)
Instead, all existing fields are removed and only the new ones are inserted. How can I make the new fields get added to the existing ones?
Indeed, this command will overwrite the subdocument:
'$set': {
    'horses.0': {
        "place" : 1,
        "body" : 11
    }
}
You need to set individual fields:
'$set': {
    'horses.0.place': 1,
    'horses.0.body': 11
}
