How can I update an entity synchronously in Spring JPA?

I have an issue:
I have a Product entity with two columns, id and quantity, and two APIs:
one that updates the entity by decreasing the quantity (quantity = quantity - 1)
one that updates the entity by increasing the quantity (quantity = quantity + 1)
The issue is that when I call the two APIs at the same time, the result is not what I expect.
Can anyone help? Thank you.

Well, for your particular scenario there is a concept called locking, and there are two types of locking:
Optimistic
Pessimistic
The idea is that when one transaction is updating a row of a database table, you should not allow another transaction to update that row until the previous one is committed.
In an application there are several ways to achieve this kind of locking. We can describe this concurrent update as a collision. In a system where collisions are not going to happen very frequently, you can use the optimistic locking approach.
In the optimistic approach you keep a version number in your row, and every time you perform an update you increase the version by 1. Let's analyse your scenario and call your two services I (increase) and D (decrease). You have a product row P in your database table where quantity = 3, version = 0. When I and D are called, both of them fetch P from the database while its state is as below:
quantity = 3, version = 0
Now D executes first, decreases the quantity and saves P.
Your update query should look like this:
UPDATE Product p
SET p.quantity = :newQuantity, p.version = p.version + 1
WHERE p.version = :oldVersion AND p.id = :id
In the case of D, the value of newQuantity is 2 (oldQty - 1) and the value of oldVersion is 0 (we fetched it at the beginning).
Now the current state of P is:
quantity = 2, version = 1
Now when I tries to execute, you generate the same update query, but in this case the value of newQuantity is 4 (oldQty + 1) and the value of oldVersion is still 0 (we fetched it at the beginning).
If you put these values into the update query, the row won't be updated because the version check will fail. From this you can throw a locking exception to notify your client that the request could not be completed and can be retried. This is the core concept of optimistic locking, and frameworks like Hibernate give you much more convenient ways to handle it.
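For example, with JPA/Hibernate you can let the framework maintain the version column for you. A minimal sketch, assuming a Product entity like the one in your scenario (names are illustrative):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Product {

    @Id
    private Long id;

    private int quantity;

    // Maintained by the JPA provider: incremented on every update and
    // checked in the WHERE clause, exactly like the query above.
    @Version
    private int version;

    // getters and setters omitted
}
With this in place, if two transactions load the same Product and both save a change, the second commit fails with an OptimisticLockException (Spring wraps it in ObjectOptimisticLockingFailureException), which you can catch and translate into a "please retry" response for your client.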
Notice that here we have not denied any read requests while updating the row. In the pessimistic locking approach, on the other hand, you deny read requests on the row while another transaction is ongoing. So while D is in the process of decreasing the quantity, I would not be able to read the value, and you can return to your client saying that the request was not completed. This approach gives you tighter data integrity, but it takes a toll on read-heavy tables.
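If you prefer the pessimistic approach with Spring Data JPA, a minimal sketch could look like this (the repository and method names are my own, not from your code); LockModeType.PESSIMISTIC_WRITE makes the provider issue a SELECT ... FOR UPDATE, so the second transaction waits until the first one commits:
// ProductRepository.java
import java.util.Optional;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface ProductRepository extends JpaRepository<Product, Long> {

    // Locks the selected row for the duration of the surrounding transaction.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select p from Product p where p.id = :id")
    Optional<Product> findByIdForUpdate(@Param("id") Long id);
}

// ProductService.java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProductService {

    private final ProductRepository repository;

    public ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void changeQuantity(Long id, int delta) {
        // The row is locked here, so increase and decrease cannot interleave.
        Product product = repository.findByIdForUpdate(id).orElseThrow();
        product.setQuantity(product.getQuantity() + delta);
        // The change is flushed and the lock released when the transaction commits.
    }
}
Your increase and decrease endpoints can then both call changeQuantity with +1 or -1.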

Related

Dynamics CRM plugin code to store a sum formula across an entity collection

I have the below requirement to be implemented in plugin code on an entity, say 'Entity A'.
Below is the data in 'Entity A':
Record 1 with field values
Price = 100
Quantity = 4
Record 2 with field values
Price = 200
Quantity = 2
I need to do 2 things:
Add the values of the fields and store them in a new record
Store the addition formula in a different config entity
An example is shown below:
Record 3
Price
Price Value = 300
Formula Value = 100 + 200
Quantity
Quantity Value = 6
Formula Value = 4 + 2
Entity A has a button named "Perform Addition" and once clicked this will trigger the plugin code.
Below is the code that I have tried.
AttributeList is the list of fields I need to sum over. All fields are decimal.
Entity EntityA = new EntityA();
EntityA.Id = new Guid("Guid String");
var sourceEntityDataList = service.RetrieveMultiple(new FetchExpression(fetchXml)).Entities;
foreach (var value in AttributeList)
{
EntityA[value]= sourceEntityDataList.Sum(e => e.Contains(value) ? e.GetAttributeValue<Decimal>(value) : 0);
}
service.Update(EntityA);
I would like to know if there is a way, using LINQ, to store the formula without looping.
And if not, how can I achieve this?
Any help would be appreciated.
Here are some thoughts:
It's interesting that you're calculating values from multiple records and populating the result onto a sibling record rather than a parent record. This is different from a typical "rollup" calculation.
Dynamics uses the SQL sequential GUID generator to generate its ids. If you're generating GUIDs outside of Dynamics, you might want to look into leveraging the same logic.
Here's an example of how you might refactor your code with LINQ:
var target = new Entity("entitya", new Guid("guid"));
var entities = service.RetrieveMultiple(new FetchExpression(fetchXml)).Entities.ToList();
attributes.ForEach(a => target[a] = entities.Sum(e => e.GetAttributeValue<Decimal>(a)));
service.Update(target);
The GetAttributeValue<Decimal>() method defaults to 0, so we can skip the Contains call.
As far as storing the formula on a config entity goes, if you're looking for the capability to store and use any formula, you'll need a full expression parser, along the lines of this calculator example.
Whether you'll be able to do the Reflection required in a sandboxed plugin is another question.
If, however, you have a few set formulas, you can code them all into the plugin and determine which to use at runtime based on the entities' properties and/or config data.

Linq Query Where Contains

I'm attempting to make a LINQ Where/Contains query quicker.
The data set contains 256,999 clients. Ids is just a simple list of GUIDs and may contain only 3 records.
The query below can take up to a minute to return the 3 records, because the logic goes through all 256,999 records to see whether any of them are within the list of 3 Ids.
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Where(x => ids.Contains(x.ClientId)).ToList();
I would like the query instead to check whether the three records are within the pool of 256,999; done that way it should be much quicker.
I don't want to do a loop, as the 3 records could be far more (thousands); the more loops, the more hits to the db.
I don't want to grab all the db records (256,999) and then do the query, as that would take nearly the same amount of time.
If I grab just the Ids for all the 256,999 from the DB it takes about a second. This is where the Ids come from (a filtered, small and simple list).
Any Ideas?
Thanks
You've said "I don't want to grab all the db records (256,999) and then do the query as it would take nearly the same amount of time," but also "If I grab just the Ids for all the 256,999 from the DB it would take a second." So does this really take "just as long"?
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Select(x => x.ClientId).ToList().Where(x => ids.Contains(x)).ToList();
Unfortunately, even if this is fast, it's not an answer, as you'll still need effectively the original query to actually extract the full records for the Ids matched :-(
So, adding an index is likely your best option.
The reason the Id query is quicker is that only one field is returned and it's only a single-table query.
The main query contains sub queries (below). So I get the Ids from a quick and easy query, then use the Ids to get the more detailed information.
SELECT Clients.Id as ClientId, Clients.ClientRef as ClientRef, Clients.Title + ' ' + Clients.Forename + ' ' + Clients.Surname as FullName,
[Address1], [Address2], [Address3], [Town], [County], [Postcode],
Clients.Consent AS Consent,
CONVERT(nvarchar(10), Clients.Dob, 103) as FormatedDOB,
CASE WHEN Clients.IsMale = 1 THEN 'Male' WHEN Clients.IsMale = 0 THEN 'Female' END As Gender,
CONVERT(nvarchar(10), Max(Assessments.TestDate), 103) as LastVisit,
CASE WHEN Max(Convert(integer, Assessments.Submitted)) = 1 THEN 'true' ELSE 'false' END AS Submitted,
CASE WHEN Max(Convert(integer, Assessments.GPSubmit)) = 1 THEN 'true' ELSE 'false' END AS GPSubmit,
CASE WHEN Max(Convert(integer, Assessments.QualForPay)) = 1 THEN 'true' ELSE 'false' END AS QualForPay,
Clients.UserIds AS LinkedUsers
FROM Clients
LEFT JOIN Assessments ON Clients.Id = Assessments.ClientId
LEFT JOIN Layouts ON Layouts.Id = Assessments.LayoutId
GROUP BY Clients.Id, Clients.ClientRef, Clients.Title, Clients.Forename, Clients.Surname, [Address1], [Address2], [Address3], [Town], [County], [Postcode], Clients.Consent, Clients.Dob, Clients.IsMale, Clients.UserIds -- ,Layouts.LayoutName, Layouts.SubmissionProcess
ORDER BY ClientRef
I was hoping there was an easier way to do the Contains part, as the pool of Ids is smaller than the main pool.
The way I've sped it up for now is to String.Join the list of Ids and add them to a WHERE clause within the main SQL. This has reduced the time down to a second or so.

Pig - need tips after performance tuning gone wrong

I have a Pig script that took around 10 minutes to finish and I thought that there was still room for some performance improvement.
So, I started by putting the JOINs and GROUPs in a nested FOREACH and also putting the previous FILTERs inside the same FOREACH.
I also added using 'replicated'.
The problem now is that instead of taking 10 minutes, it's taking over 30 minutes.
Is there a place that has best practices and performance improvement tips besides Pig's documentation?
So that you can get a better picture, here's some code:
--before
previous_join = JOIN A BY id, B BY id; -- for simplification
filtering = FILTER previous_join BY ((year_min > 1995 ? year_min - 1 : year_min) <= list_year and (year_max > 2015 ? year_max - 1 : year_max) >= list_year);
final_filtered = FOREACH filtering GENERATE user_id as user_id, list_year;
--after
final_filtered = FOREACH (JOIN A by id, B by id) {
tmp = FILTER group BY ((A::year_min > 1995 ? A::year_min - 1 : A::year_min) <= B::list_year and (A::year_max > 2015 ? A::year_max - 1 : A::year_max) >= B::list_year and A::premium == 'true');
GENERATE A::user_id AS user_id, B::list_year AS list_year;
};
Am I doing something wrong or is this the wrong approach?
Thanks.
In the prior case [before] you are performing the filter and projection after the join is performed.
It will be helpful if you calculate time log for each operation and identify the bottleneck operation.
Can you also try splitting your filter statement into multiple relations rather than just one and check the difference in filter timing?
filter_by_min_year = FILTER previous_join BY ((A::year_min > 1995 ? A::year_min - 1 : A::year_min) <= B::list_year);
filter_by_max_year = FILTER filter_by_min_year BY ((A::year_max > 2015 ? A::year_max - 1 : A::year_max) >= B::list_year);
Overall you want to find ids (plus some more columns) with A::year_min <= B::list_year and A::year_max >= B::list_year.
Instead of performing join on raw A & B, you can try using projections on both of them to contain only columns needed for join and later operations.
A_projected = FOREACH A GENERATE id, year_min, year_max;
B_projected = FOREACH B GENERATE id, list_year;
C = JOIN A_projected BY id, B_projected BY id USING 'replicated';
If either A_projected or B_projected is a small set that can be loaded into memory, use a replicated join; I am assuming B_projected to be smaller than A_projected (in a replicated join the relation listed last is the one loaded into memory).
If this doesn't apply to your case, please skip this option.
You can also try setting the number of reducers used for this join with the PARALLEL keyword.
After applying the filter you will get a list of the required ids that you can use to fetch other information from A or B.
Also consider tweaking MapReduce properties like io.sort.mb, mapred.job.shuffle.input.buffer.percent etc.
Hope this helps.

Azure Table Storage - PartitionKey and RowKey selection to use between query

I am a total newbie with Azure! The purpose is to return rows based on the timestamp stored in the RowKey. As there is a transaction cost with each query, I want to minimise the number of transactions/queries whilst maintaining performance.
These are the proposed Partition and Row Keys:
Partition Key: TextCache_(AccountID)_(ParentMessageId)
Row Key: (DateOfMessage)_(MessageId)
Legend:
AccountId - is an integer
ParentMessageId - The parent messageId if there is one, blank if it is the parent
DateOfMessage - Date the message was created - format will be DateTime.Ticks.ToString("d19")
MessageId - the unique Id of the message
I would like to get back from a single query the rows and any child rows that are > or < DateOfMessage_MessageId.
Can this be done via my proposed PartitionKeys and RowKeys?
i.e. (in pseudo code)
var results = ctx.PartitionKey.StartsWith(TextCache_AccountId)
&& ctx.RowKey > (TimeStamp)_MessageId
Secondly, if I have a number of accounts and only want to return the first 10, could it be done via a single query?
i.e. (in pseudo code)
var results = (
    (
        ctx.PartitionKey.StartsWith(TextCache_(AccountId1))
        && ctx.RowKey > (TimeStamp1)_MessageId1
    )
    ||
    (
        ctx.PartitionKey.StartsWith(TextCache_(AccountId2))
        && ctx.RowKey > (TimeStamp2)_MessageId2
    ) ...
)
.Take(10)
The short answer to your questions is yes, but there are some things you need to watch for.
Azure table storage doesn't have a direct equivalent of .StartsWith(). If you're using the storage library in combination with LINQ you can use .CompareTo() instead (> and < don't translate properly). This means that if you run a search for account 1 and ask the query to return 1000 results, but there are only 600 results for account 1, the last 400 results will be for account 10 (the next account number lexically), so you'll need to be a bit smart about how you deal with your results.
If you padded out the account id with leading 0s you could do something like this (pseudo code here as well):
ctx.PartitionKey > "TextCache_0000000001"
&& ctx.PartitionKey < "TextCache_0000000002"
&& ctx.RowKey > "123465798"
Something else to bear in mind is that queries to Azure Tables return their results in PartitionKey then RowKey order. So in your case messages without a ParentMessageId will be returned before messages with a ParentMessageId. If you're never going to query this table by ParentMessageId I'd move this to a property.
If TextCache_ is just a string constant, it's not adding anything by being included in the PartitionKey unless this will actually mean something to your code when it's returned.
While your second query will run, I don't think it will produce what you're after. If you want the first ten rows in DateOfMessage order, it won't work (see my point above about sort order). If you ran this query as it is and account 1 had 11 messages, it would return only the first 10 messages related to account 1, regardless of whether account 2 had an earlier message.
While trying to minimise the number of transactions you use is good practice, don't be too concerned about it. The cost of running your worker/web roles will dwarf your transaction costs. 1,000,000 transactions will cost you $1 which is less than the cost of running one small instance for 9 hours.

Checking to see if a user has filled in a table in Matlab

Can this expression, if (size(cost,1) == 2 && size(limit,1) == 2), be used? I want to take the data from the cost table and the limit table. The cost table is a 4-by-3 table and the limit table is a 4-by-2 table, and I want to take the data (which is input by the user) from the limit table. I have this code:
if P1 < limit(1,1)
P1 = limit(1,1);
lambdanew = P1*2*cost(1,3) + cost(1,2);
I can execute my program only if the user inserts data into the limit table; if the user does not insert the data, I get an error saying:
Index exceeds matrix dimensions.
Error in ==> fyp_editor>Mybutton_Callback at 100
if P1 < limit(1,1)
So my question is: how can I make an if statement for the limit table to handle the case where the user did not enter the data?
Is it limit(0), limit = 0 or limit == 0?
Can you initialize the limit table somehow so you know it exists even if the user didn't enter any information in it? If the limit table is 4 by 2, try limit = zeros(4,2). Hope that helps.
If you want to make sure that limit is an array of size (4,2), you can do the following:
if ~all(size(limit)==[4 2])
h = errordlg('please fill in all values for "limit"');
uiwait(h)
return
end
Thus, the user gets an error message popping up, after which the callback stops executing.