I committed an object in javers and it created the initial records in the table, except I've noticed one thing:
In the jv_snapshot table, I have the following state of object Person (recorder):
{
"firstName": "test",
"lastName": "test",
"address": {
"valueObject": "x.model.Address",
"ownerId": {
"entity": "x.Place",
"cdoId": 99999
},
"fragment": "recorder/address"
},
"organisation": {
"valueObject": "x.model.Organisation",
"ownerId": {
"entity": "x.Place",
"cdoId": 99999
},
"fragment": "recorder/organisation"
},
...
}
Organisation is a ValueObject, and I made sure it has a value before committing it with javers. However, when I checked jv_global_id, there is no record with the fragment "recorder/organisation". The other objects under Person and the other Organisation objects associated with entity Place were audited properly. When I view the shadow of that commit, the place.recorder.organisation object is null.
When I edit the place -> person (recorder) -> organisation object and commit it in javers, that's when the INITIAL-type record for managed_type "recorder/organisation" is saved, which shouldn't happen because it's an update. So when I view the shadow of the first commit, it displays the value of the second commit.
If I change Organisation to an Entity, the initial value is saved, but I want to avoid that because we only want to maintain one entity (which is Place).
And this doesn't happen to all Place objects; some do store a "recorder/organisation" fragment on the first commit. Am I missing anything? Is there anything I should check?
I use MSSQL, and also Spring JPA.
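For reference, one thing worth ruling out is how JaVers classifies the class. Below is a minimal sketch (the Organisation class and its field are made up; only the JaVers calls are from the library) that registers the class explicitly instead of relying on type inference, and prints the resulting type mapping:

import org.javers.core.Javers;
import org.javers.core.JaversBuilder;
import org.javers.core.metamodel.annotation.ValueObject;

public class JaversTypeMappingCheck {

    // Hypothetical stand-in for the real x.model.Organisation from the snapshot above.
    @ValueObject
    static class Organisation {
        String name;
    }

    public static void main(String[] args) {
        // Registering the class explicitly (or annotating it with @ValueObject)
        // removes any guesswork from JaVers' type inference.
        Javers javers = JaversBuilder.javers()
                .registerValueObject(Organisation.class)
                .build();

        // Prints how JaVers classified the class, which is worth checking first.
        System.out.println(javers.getTypeMapping(Organisation.class));
    }
}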
I'm using Java Spring and MongoDB.
I have two collections: customer and order.
I have a reference from the order to the customer collection.
I have an already existing customer.
I want to create a new order with reference to the existing customer.
My POST body request looks like this:
{
"type": "SaaS",
"units": 5,
"price": 30000,
"customer":{
"$ref": "customer",
"$id": {
"oid": "6230853866f97257c050d330"
}
}
}
However, the Java serialization process can't resolve the customer subdocument. I understand that I need to apply some logic here, but I can't find nor understand how to do it. Basically, in mongosh syntax it looks similar to this:
db.order.updateOne({_id: ObjectId("623070ab3207ac1de9f8351c")}, {$set: {customer: new DBRef('customer', new ObjectId("6230824c942afc6dee673f3b"))}})
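For context, a common-looking workaround is to accept only the customer id in the request body and resolve the reference server-side before saving. This is just a sketch under that assumption (class, field, and endpoint names are made up, not my actual code):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.DBRef;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@Document("order")
class Order {
    @Id String id;
    String type;
    int units;
    long price;
    @DBRef Customer customer;   // persisted as a DBRef, like the mongosh example above
}

@Document("customer")
class Customer {
    @Id String id;
    String name;
}

interface OrderRepository extends MongoRepository<Order, String> {}
interface CustomerRepository extends MongoRepository<Customer, String> {}

// Hypothetical request payload: {"type":"SaaS","units":5,"price":30000,"customerId":"6230853866f97257c050d330"}
record CreateOrderRequest(String type, int units, long price, String customerId) {}

@RestController
class OrderController {
    private final OrderRepository orders;
    private final CustomerRepository customers;

    OrderController(OrderRepository orders, CustomerRepository customers) {
        this.orders = orders;
        this.customers = customers;
    }

    @PostMapping("/orders")
    Order create(@RequestBody CreateOrderRequest req) {
        Order order = new Order();
        order.type = req.type();
        order.units = req.units();
        order.price = req.price();
        // Resolve the existing customer instead of deserializing a $ref subdocument.
        order.customer = customers.findById(req.customerId()).orElseThrow();
        return orders.save(order);
    }
}

With this shape, Spring Data writes the customer field as a DBRef, similar to what the mongosh update above produces.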
I am adding a feature that allows users to select from a list of people of a certain type, Type1 and Type2. A type would be chosen from a dropdown, and the data from the API would look like
{
"id": 1,
"name": "TYPE1",
"desc": "Type 1 Person"
}
I am creating a POST endpoint that allows an admin user to insert more people into the list, but I'm unsure of the best way for the admin to include the person's type. In other languages/frameworks, I would do something like this:
{
"first_name": "John",
"last_name": "Doe",
"type_id": 1
}
then handle adding the entry in my own SQL. In Spring, though, I'm trying to leverage an object being created from the data automatically. For this to be successful, I'd need to send the data as:
{
"first_name": "John",
"last_name": "Doe",
"type": {
"id": 1,
"name": "TYPE1",
"desc": "Type 1 Person"
}
}
My question is in two parts.
In Spring, is there anything I can leverage that would allow me to just pass an identifier for the person type when creating a new person entry? (I've looked into DTOs, but I've never used them, so I don't know if that is the proper solution; a rough sketch of what I mean is below.)
In REST in general, how much data should be required when adding a resource that references another resource?
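In case it clarifies what I mean by the DTO idea in the first question, here is a rough sketch, assuming JPA entities and Spring Data repositories; every class, field, and endpoint name here is made up for illustration:

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@Entity
class PersonType {
    @Id Long id;
    String name;
    String description;
}

@Entity
class Person {
    @Id @GeneratedValue Long id;
    String firstName;
    String lastName;
    @ManyToOne PersonType type;
}

interface PersonRepository extends JpaRepository<Person, Long> {}
interface PersonTypeRepository extends JpaRepository<PersonType, Long> {}

// The admin would only send {"first_name": "John", "last_name": "Doe", "type_id": 1}.
record CreatePersonRequest(String first_name, String last_name, Long type_id) {}

@RestController
class PersonController {
    private final PersonRepository people;
    private final PersonTypeRepository types;

    PersonController(PersonRepository people, PersonTypeRepository types) {
        this.people = people;
        this.types = types;
    }

    @PostMapping("/people")
    Person create(@RequestBody CreatePersonRequest req) {
        Person p = new Person();
        p.firstName = req.first_name();
        p.lastName = req.last_name();
        // Resolve the full type from its id, so the client never sends the whole object.
        p.type = types.findById(req.type_id()).orElseThrow();
        return people.save(p);
    }
}

The point is that only type_id crosses the wire, and the full PersonType is looked up server-side before saving.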
I have 1 million users in a Postgres table. It has around 15 columns of different data types (integer, array of strings, string, etc.). Currently I'm using normal SQL queries to filter the data as per my requirements.
I also have N projects (at most 5) under each user. I have indexed these projects in Elasticsearch and am doing fuzzy search on them. Currently, for each project (a text file) I have created a document in Elasticsearch.
Both the systems are working fine.
Now I need to query the data across both systems. For example: I want all the records which contain the keyword java (in Elasticsearch) and have more than 10 years of experience (available in Postgres).
Since the user count will be increasing drastically, I have moved all the Postgres data into Elasticsearch.
It's possible that filters will be applied only on the fields related to the user (excluding project-related fields).
Now I need to nest the projects under their corresponding users. I tried parent-child types and it didn't work for me.
Could anyone help me with the following things?
What will be the correct way of indexing projects associated with the users?
Since each project document has a field called category, is it possible to get the matched category name in the response?
Is there any other better way to implement this?
From your description, we can tell that the "base document" is based on users.
Now, regarding your questions:
Based on what I said before, you can add all the projects associated with each user as an array, like this:
{
  "user_name": "John W.",
  ..., #More information from this user
  "projects": [
    {
      "project_name": "project_1",
      "role": "Dev",
      "category": "Business Intelligence"
    },
    {
      "project_name": "project_3",
      "role": "QA",
      "category": "Machine Learning"
    }
  ]
},
{
  "user_name": "Diana K.",
  ..., #More information from this user
  "projects": [
    {
      "project_name": "project_1",
      "role": "Project Leader",
      "category": "Business Intelligence"
    },
    {
      "project_name": "project_4",
      "role": "DataBase Manager",
      "category": "Mobile Devices"
    },
    {
      "project_name": "project_5",
      "role": "Project Manager",
      "category": "Web services"
    }
  ]
}
The goal of this structure is to add all of the user's info to each document, even if some of it is repeated. Doing this will allow you to bring back, for example, all the users that work on a specific project with queries like this:
{
  "query": {
    "match": {
      "projects.project_name": "project_1"
    }
  }
}
Yes. Like the query above, you can match all the projects by their "category" field. However, keep in mind that since your base document is the user, it will bring back the whole user document.
For that case, you might want to use the terms aggregation, which brings back the unique values of a certain field. It can be combined with a query (with "size" set to 0, since you want to focus on the aggregation's result), like this:
{
  "size": 0,
  "query": {
    "match": {
      "projects.category": "Mobile Devices"
    }
  },
  "aggs": {
    "unique_projects_names": {
      "terms": { "field": "projects.project_name.keyword" }
    }
  }
}
That last query will bring back, in the aggregation result, all the unique project names with the category "Mobile Devices".
You can create a new index where you store all the information related to your projects. However, the relationship between users and projects won't be easy to keep (remember that ES is NOT intended to be a structured or ER database, like SQL) and the queries will become very complex, even if you decide to name both of your indices (users and projects) in a way that lets you query them with a wildcard.
EDIT: Additionally, you could consider storing all the info related to your projects in Postgres and making the calls separately: first get the project ID (or name) from ES, and then the project's info from Postgres (since I assume that is the info least likely to change).
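A rough sketch of that two-step lookup is below; the index name, the ES field names (content, project_id), and the Postgres schema are all assumptions, and a plain HTTP call stands in for whichever ES client you use:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class TwoStepLookup {

    public static void main(String[] args) throws Exception {
        // Step 1: fuzzy-search the (hypothetical) projects index and collect project ids.
        String query = """
                {"_source": ["project_id"],
                 "query": {"match": {"content": {"query": "java", "fuzziness": "AUTO"}}}}
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/projects/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        List<Long> projectIds = new ArrayList<>();
        JsonNode hits = new ObjectMapper().readTree(response.body()).path("hits").path("hits");
        hits.forEach(hit -> projectIds.add(hit.path("_source").path("project_id").asLong()));

        // Step 2: fetch those projects from Postgres, applying the user-side filter there.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT p.* FROM projects p JOIN users u ON u.id = p.user_id "
                   + "WHERE p.id = ANY (?) AND u.experience_years > 10")) {
            ps.setArray(1, conn.createArrayOf("bigint", projectIds.toArray()));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id"));
                }
            }
        }
    }
}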
Hope this is helpful! :D
I am trying to store collections of objects in cache.
{
"name": "Dep1",
"employees": [{
"id": 1,
"name": "emp1",
"profilePic": "http://test.com/img1.png"
}, {
"id": 2,
"name": "emp2",
"profilePic": "http://test.com/img2.png"
}, {
"id": 3,
"name": "emp3",
"profilePic": "http://test.com/img3.png"
}, {
"id": 4,
"name": "emp4",
"profilePic": "http://test.com/img4.png"
}]
}
In this case if Employee 1 changes his profile picture, I need to invalidate the full cached object in order to maintain data consistency.
This approach undermines caching, since whenever there is an update for an employee I need to clear the complete cached object.
Is there a better approach or design we could follow to optimize this?
Thanks
Is this supposed to be a Redis question? I'll assume it's general purpose.
Retrieve only IDs, then request each entry by its ID.
Store the list of entries per request as merely the list of IDs.
You can now cache the list, as well as cache each entry.
Since you are only retrieving IDs, you can build very simple, efficient indexes on the data (the data will always be fetched directly from the index, since it's just the ID). Updates may or may not need to invalidate the cache of the list; it may be sufficient to invalidate the entry cache. Even if you invalidate the list, you will still have fairly good performance, since you're unlikely to invalidate entry caches all that often.
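Here is a minimal sketch of that idea, with plain in-memory maps standing in for the cache (Redis, Caffeine, or whatever you use would slot in the same way); the Department/Employee names are just for illustration:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Hypothetical types for illustration only.
record Employee(int id, String name, String profilePic) {}

class EmployeeDirectory {

    // Cache of individual employees, keyed by id.
    private final Map<Integer, Employee> employeeCache = new ConcurrentHashMap<>();

    // Cache of department -> member-id list; the list holds only ids, not full objects.
    private final Map<String, List<Integer>> departmentCache = new ConcurrentHashMap<>();

    List<Employee> getDepartment(String name) {
        List<Integer> ids = departmentCache.computeIfAbsent(name, this::loadMemberIdsFromDb);
        return ids.stream()
                .map(id -> employeeCache.computeIfAbsent(id, this::loadEmployeeFromDb))
                .collect(Collectors.toList());
    }

    // When an employee changes (e.g. a new profile picture), only that entry is evicted;
    // the department's id list stays valid.
    void onEmployeeUpdated(int id) {
        employeeCache.remove(id);
    }

    // Stubs standing in for the real data store.
    private List<Integer> loadMemberIdsFromDb(String department) {
        return List.of(1, 2, 3, 4);
    }

    private Employee loadEmployeeFromDb(int id) {
        return new Employee(id, "emp" + id, "http://test.com/img" + id + ".png");
    }
}

The department's ID list only needs invalidating when membership changes; a profile-picture update touches a single entry.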
I have a problem mapping JSON to Core Data and reading it out again. I map from JSON to an Activity entity with a relationship to the last participant entities. last_participants is an array with the most recent participants, ordered most recent first by the API.
{
  "id": 50,
  "type": "Initiative",
  "last_participants": [
    {
      "id": 15,
      "first_name": "Chris"
    },
    {
      "id": 3,
      "first_name": "Mary"
    },
    {
      "id": 213,
      "first_name": "Dany"
    }
  ]
}
I have RestKit logging on and can see that the mapping reads the array elements one by one and keeps the order. However, Core Data saves them as an NSSet of entities and the order gets lost. When I read the data back out, it is mixed up. What options do I have to keep the order in which the array was mapped? Any help would be great.
2 options:
Use an ordered set in Core Data (check the "Ordered" box on the relationship in the properties inspector).
Use the @metadata provided by RestKit to access the collection order during mapping.