Is Spring JdbcTemplate update atomic? - spring

Is the following jdbcTemplate update script thread-safe? What it does, basically, is:
balance -= amount;
Here is the code:
String sql = "update player.playerbalance b set b.balance = (b.balance - ?) where b.id = ? and b.balance >= ?";
jdbcTemplate = new JdbcTemplate(dataSource);
int i = jdbcTemplate.update(
sql,
new Object[] {wager, playerBalance.getId(), wager});
What happens if two updates of this kind happen at the same time?
Thanks,

This has nothing to do with thread-safety; the call itself is thread-safe.
The DBMS will make sure that one update of a record finishes before another update of the same record starts (unless you have set a very low isolation level). Therefore, if two threads (or processes, etc.) invoke that same method with the same balance ID, the record will be deducted twice.
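Because the statement checks and decrements the balance in a single UPDATE, the b.balance >= ? guard makes the read-modify-write atomic; what the caller should do is inspect the returned row count. A minimal sketch of that, reusing the names from the question (the insufficient-funds handling is my assumption, not the poster's code):

import java.math.BigDecimal;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class BalanceDao {

    private final JdbcTemplate jdbcTemplate;

    public BalanceDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Atomically deducts the wager. The "balance >= ?" guard means the row is
    // only updated when funds are sufficient, so no separate read is needed.
    public boolean deduct(long playerBalanceId, BigDecimal wager) {
        String sql = "update player.playerbalance b set b.balance = (b.balance - ?) "
                + "where b.id = ? and b.balance >= ?";
        int rows = jdbcTemplate.update(sql, wager, playerBalanceId, wager);
        // 1 row updated -> deduction succeeded; 0 rows -> insufficient balance
        return rows == 1;
    }
}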

Related

Oracle Concurrency Problem with competing race condition

I am facing a concurrency problem with Oracle DB. Say I have 3 objects of type A that need to be processed, and only after the final object A has been processed can I move on to processing objects of type B. Additionally, the processing of objects of type A occurs in parallel across multiple deployed instances.
Example:
Desired Behavior:
ObjectA-1 - Update status -> IsLastObjectA -> false
ObjectA-2 - Update status -> IsLastObjectA -> false
ObjectA-3 - Update status -> IsLastObjectA -> true -> Begin processing Objects of type B
Current Behavior (failing):
ObjectA-1 - Update status -> IsLastObjectA -> false
ObjectA-2 - Update status (happens in parallel with ObjectA-3) -> IsLastObjectA (at this point all Object As are in complete status) -> true -> Begin processing Objects of type B (This should only occur once)
ObjectA-3 - Update status (happens in parallel with ObjectA-2) -> IsLastObjectA (at this point all Object As are in complete status)-> true -> Begin processing Objects of type B (This should only occur once)
Ideally I want the transactions to happen in a serialized way (similar to the Serializable isolation level). But not only does this hurt performance, I also don't have permission to increase the INITRANS parameter to the recommended 3. A SELECT FOR UPDATE or locking of that nature can't be used either, because the status is only updated once and each object is processed based on a unique primary key, so one object A never updates another. The only cross-object access is reading the status of all other object As after the current one's status has been updated.
I have tried the different propagation types that Oracle allows, as well as a locking technique, and nothing has worked. Serializable seems to be the best option, but I don't have the permissions to implement it.
The code snippet below is a mocked version of the actual code. The endpoint in the controller gets called from a microservice that listens to a messaging queue and consumes messages off it (that service is not shown).
@Data
public class ObjectA {
    private int status;
    private Long id;
}

@Service
// Let's assume there is a listener before this call that picks up a message off a queue,
// maps the JSON to an ObjectA,
// then calls this method
public boolean processObjectA(final ObjectA objectA) {
    final boolean isLastUpdate;
    isLastUpdate = service.updateObjectAndIsLastObjectToProcess(objectA); // for simplicity, let's assume this calls the method in the controller
    if (isLastUpdate) {
        // Call DB, gather all info related to ObjectBs, and begin to process
    }
    return isLastUpdate;
}
public class Controller {

    @Autowired
    private ObjectService objectService;

    @PutMapping("/updatestatus/islastobject")
    public boolean isLastObjectToUpdate(
            @RequestParam(name = "id") final Long id,
            @RequestParam(name = "status") final int statusCode) {
        final boolean updateStatus;
        final boolean hasLastObjectBeenProcessed;
        try {
            // Update object to complete status
            updateStatus = objectService.updateObject(id, statusCode);
            if (updateStatus) {
                // Verify whether all ObjectA rows are in complete status
                hasLastObjectBeenProcessed = objectService.hasLastObjectBeenProcessed(id);
                return hasLastObjectBeenProcessed;
            } else {
                throw new RuntimeException();
            }
        } catch (RuntimeException e) {
            return false;
        }
    }
}
Oracle queries used:
// Update ObjectA to complete status
updateStatus query = update Object_A o set o.status = 9 where o.id = id
// Verifies whether all ObjectA rows are in complete (9) status
hasLastObjectBeenProcessed query = SELECT o.id FROM Object_A o WHERE o.status = 9
Assuming two possible statuses on each row ("Active" - this row needs to be processed - and "Completed" - this row is done), how about a model like the one below for your "worker" threads (pseudo-code):
work_to_be_done =
select count(*)
from table
where status = 'Active'
and objtype = 'A'
and rownum = 1;
if work_to_be_done = 0
<move onto objtype = B>
else
open cursor for
select *
from table
where status = 'Active'
and objtype = 'A'
for update skip locked;
for each row in cursor
process row
update status to 'Completed';
end;
First we see if there is any work to be done for "A". If this returns zero, then everyone has completed and committed their work and we're good to move on to "B".
If it returns non-zero, then there is active work to do, but some of it might already be being worked on by other threads. So we do a SKIP LOCKED query to find rows that we can work on. It might return zero rows, but that's OK; we just loop around (maybe sleep a little) and start from the top. Eventually either we will find work to do, or we will find that all the work has been done and we can move on.
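In Spring terms, that pattern might look roughly like the sketch below. This is not the poster's code: the table and column names (object_table, status, objtype, id) are assumptions, and the @Transactional boundary matters because the SKIP LOCKED row locks are only held until the transaction commits.

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ObjectAWorker {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Returns true when all type-A work is done and type B can start.
    @Transactional
    public boolean claimAndProcessBatch() {
        // Any 'Active' rows left at all, locked or not?
        Integer remaining = jdbcTemplate.queryForObject(
                "select count(*) from object_table where status = 'Active' and objtype = 'A' and rownum = 1",
                Integer.class);
        if (remaining == null || remaining == 0) {
            return true; // everyone has committed; safe to move on to objtype = 'B'
        }
        // Claim only the rows no other worker currently holds
        List<Long> claimed = jdbcTemplate.queryForList(
                "select id from object_table where status = 'Active' and objtype = 'A' for update skip locked",
                Long.class);
        for (Long id : claimed) {
            // ... process the row, then mark it done ...
            jdbcTemplate.update("update object_table set status = 'Completed' where id = ?", id);
        }
        return false; // possibly zero rows claimed; caller loops, perhaps after a short sleep
    }
}

The caller simply loops on claimAndProcessBatch() until it returns true, sleeping briefly between empty passes.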

Spring Boot + Hibernate - Insert query getting slow down

I am working on a Spring Boot application. I have 100,000 records that are inserted into the DB by different processes, and they are inserted one by one. I can't do a batch insert.
At the start, some of the tasks perform well and don't take too much time, but as the application processes more records and the database grows, the insert time keeps increasing.
How can I speed up the process, or at least keep it from slowing down?
The quickest way to do inserts is to use a prepared statement.
Inject the jdbcTemplate and use its batchUpdate method, and set the batch size. It's lightning fast.
If you think you cannot use batch inserts, which is hard for me to understand, then set the batch size to 1.
However, the most optimal batch size is certainly larger than that and depends on the insert statement. You have to experiment a bit with it.
Here is an example for you with a class called LogEntry. Substitute your own class, table, columns and attributes, and place it into your repository implementation.
Also make sure you set the application properties as mentioned here: https://stackoverflow.com/a/62414315/12918872
Regarding the ID generator, either set a sequence ID generator (also shown in that link) or, as in my case, just generate it on your own by asking for the max ID of your table at the beginning and then counting up.
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

@Autowired
private JdbcTemplate jdbcTemplate;

public void saveAllPreparedStatement2(List<LogEntry> logEntries) {
    int batchSize = 2000;
    int loops = logEntries.size() / batchSize;
    for (int j = 0; j <= loops; j++) {
        final int x = j;
        jdbcTemplate.batchUpdate("INSERT INTO public.logentries(\r\n"
                + " id, col1, col2, col3, col4, col5, col6)\r\n"
                + " VALUES (?, ?, ?, ?, ?, ?, ?);\r\n", new BatchPreparedStatementSetter() {

            public void setValues(PreparedStatement ps, int i) throws SQLException {
                int counter = x * batchSize + i;
                if (counter < logEntries.size()) {
                    LogEntry logEntry = logEntries.get(counter);
                    ps.setLong(1, (long) logEntry.getId());
                    ps.setString(2, (String) logEntry.getAttr1());
                    ps.setInt(3, (int) logEntry.getAttr2());
                    ps.setObject(4, logEntry.getAttr3(), Types.INTEGER);
                    ps.setLong(5, (long) logEntry.getAttr4());
                    ps.setString(6, (String) logEntry.getAttr5());
                    ps.setObject(7, logEntry.getAttr6(), Types.VARCHAR);
                }
            }

            // The final loop iteration gets the remainder; earlier ones get a full batch
            public int getBatchSize() {
                if (x * batchSize == (logEntries.size() / batchSize) * batchSize) {
                    return logEntries.size() - (x * batchSize);
                }
                return batchSize;
            }
        });
    }
}
Some advice for you:
1. It is not normal that insert time keeps increasing as more records are inserted. From my experience, that is most probably due to a logic bug in your program, such that you process more and more unnecessary data as you insert more records. So please review your insert logic first.
2. Hibernate cannot batch-insert entities if the entity uses IDENTITY to generate its ID. You have to change it to use SEQUENCE to generate the ID, with the pooled or pooled-lo algorithm (see the sketch after this list).
3. Make sure you enable the JDBC batching feature in the Hibernate configuration.
4. If you are using PostgreSQL, you can add reWriteBatchedInserts=true to the JDBC connection string, which can provide a 2-3x performance gain.
5. Make sure each transaction inserts a batch of entities and then commits, rather than each transaction inserting only one entity.
For more details about points (2), (3) and (4), you can refer to my previous answers at this.
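To illustrate point (2), here is a minimal sketch of such a mapping; the entity, sequence name and allocation size are made up for the example, and the properties in the comment are the standard Hibernate batching settings from point (3):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

// In application.properties (per point 3):
//   spring.jpa.properties.hibernate.jdbc.batch_size=50
//   spring.jpa.properties.hibernate.order_inserts=true

@Entity
public class LogEntry {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "logentry_seq")
    // With Hibernate's enhanced generators, an allocationSize > 1 selects the
    // pooled optimizer, so a batch of IDs costs a single sequence round trip
    @SequenceGenerator(name = "logentry_seq", sequenceName = "logentry_seq", allocationSize = 50)
    private Long id;

    // ... other columns ...
}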

Can somebody tell me what error I have in this BPM

This code auto-generates a new part number. It is a post-processing BPM for the GetNewPart BO.
int iPartnum = 0;
string cPartid = string.Empty;
Erp.Tables.Company Company;
foreach (var ttpart_xRow in ttPart)
{
    var ttpartRow = ttpart_xRow;
    Company = (from Company_Row in Db.Company
               where Company_Row.Company == Session.CompanyID
               select Company_Row).FirstOrDefault();
    iPartnum = (decimal)Company["AutoGenerate_c"] + 1;
    cPartid = System.Convert.ToString(iPartnum);
    ttpartRow.PartNum = cPartid;
    Services.Lib.UpdateTableBuffer._UpdateTableBuffer(Company, "AutoGenerate_c", iPartnum);
}
Is it just not working or is there an error message?
Services.Lib.UpdateTableBuffer._UpdateTableBuffer(Company,"AutoGenerate_c", iPartnum);
I have personally never used or even seen this Lib item, so I can't vouch for it. I would update the object manually inside a transaction scope, because I doubt GetNewPart ever touches the database and therefore probably doesn't create a transaction.
using (System.Transactions.TransactionScope txScope = IceDataContext.CreateDefaultTransactionScope()) // start the transaction
{
    // Your logic goes here
    Db.Validate();
    txScope.Complete(); // commit the transaction
}
As a side note, I try to keep these sorts of things off the Company record, because nearly every process in the system touches it and I don't want a process to lock it up or cause weird race conditions. I generally reserve a record that will only get touched for this specific purpose, so I use a UDCodeType/UDCode for this sort of thing.

JDBC method to get child tables and all its descendants

Is there a JDBC method to get a table's child tables and all of their descendants?
getExportedKeys returns only the direct children, not all descendants. If I keep calling getExportedKeys() recursively, it takes around 3 minutes for 50 tables or so (which is really slow).
Could someone please help me out with a solution?
The main objective is to avoid raw SQL queries, since I will be dealing with different databases.
Recursive function used:
private static void getChildTables(DatabaseMetaData dbmd, Set<String> dependencies, String tableName, String schemaName) throws SQLException {
    List<String> children = new ArrayList<>();
    try (ResultSet rs = dbmd.getExportedKeys(null, schemaName, tableName)) {
        while (rs.next()) {
            String childTable = rs.getString(7); // column 7 of getExportedKeys is FKTABLE_NAME
            if (dependencies.add(childTable)) {  // add() returns false for tables already seen
                children.add(childTable);
            }
        }
    }
    // Recurse after the ResultSet is closed, so only one is open at a time
    for (String childTable : children) {
        getChildTables(dbmd, dependencies, childTable, null);
    }
}
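For reference, the method would be driven like this (a hypothetical usage sketch; the JDBC URL, credentials, schema and root table name are placeholders):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.LinkedHashSet;
import java.util.Set;

// Placeholders throughout: substitute your own URL, credentials, schema and table
public static void main(String[] args) throws SQLException {
    try (Connection conn = DriverManager.getConnection("jdbc:...", "user", "pass")) {
        DatabaseMetaData dbmd = conn.getMetaData();
        Set<String> dependencies = new LinkedHashSet<>(); // keeps discovery order
        getChildTables(dbmd, dependencies, "PARENT_TABLE", "MYSCHEMA");
        System.out.println(dependencies); // all descendants of PARENT_TABLE
    }
}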

EntityFramework - SaveChanges not saving but SQL seen in Profiler

I have read through almost every post on EF SaveChanges and do not believe my answer lies in any of those posts.
I am using C#, .NET 4, EF 4.3.1, SQL Server 2008R2, VS 2k11 Beta, AutoMapper.
Here is my code:
using (Model.AnimalRescueEntities context = new Model.AnimalRescueEntities())
{
    using (TransactionScope transaction = new TransactionScope())
    {
        context.Connection.Open();

        //Retrieve the event
        eventDB = context.Events.Single(e => e.ID == eventRegVM.EventID);
        eventOrgBaseDB = context.Entity_Base.Single(b => b.ID == eventDB.Entity_Organisation.ID);
        eventRegVM.Event = Mapper.Map<Model.Event, EventsViewModel>(eventDB);
        eventRegVM.Event.Entity_Organisation.Entity_Base = Mapper.Map<Model.Entity_Base, Entity_BaseViewModel>(eventOrgBaseDB);

        //saves Event_Registration
        eventRegDB = Mapper.Map<Event_RegistrationViewModel, Model.Event_Registration>(eventRegVM);
        eventRegDB.Event = eventDB;
        eventRegDB.EventID = eventDB.ID;
        eventRegDB.Event.Entity_Organisation = context.Entity_Organisation.Single(o => o.ID == eventOrgBaseDB.ID);
        eventRegDB.Event.Entity_Organisation.Entity_Base = eventOrgBaseDB;

        //Add the link between EVENT and REGISTRATION
        context.Event_Registration.AddObject(eventRegDB);

        int numChanges = context.SaveChanges();
        var regs = context.Event_Registration.Where(r => r.ID != null).ToList();
    }
}
I have SQL Profiler running in the background and when SaveChanges is called I see this SQL code (numChanges is 1):
exec sp_executesql N'declare @generated_keys table([ID] uniqueidentifier)
insert [dbo].[Event_Registration]([EventID], [DateSubmitted], [HasPaid], [PaymentMethod], [Comments], [AmountPaid])
output inserted.[ID] into @generated_keys
values (@0, @1, null, @2, null, @3)
select t.[ID]
from @generated_keys as g join [dbo].[Event_Registration] as t on g.[ID] = t.[ID]
where @@ROWCOUNT > 0',N'@0 uniqueidentifier,@1 datetime2(7),@2 int,@3 decimal(19,4)',@0='1D841F75-AEA1-4ED1-B3F0-4E3994D7FC0D',@1='2012-07-04 14:59:45.5239309',@2=0,@3=0
regs will contain three existing rows and my new row. However, I cannot run a SELECT statement in SQL Server and see my new row. Running this many times gets me the same result - three existing rows in the database and a new fourth row that never makes it to the database.
eventRegDB also contains the GUID created for the primary key, ID; I assume SQL Server does this, but I am not 100% sure of that.
I have taken the above T-SQL and run it in a query window against my database - I get new rows in my Event_Registration table after that; that is how the three existing rows were created.
I see no exceptions or other errors generated and cannot find any reason this would not save to the database. Any ideas? If you want to see the schema for the SQL, how to recreate the database, or the code (any or all), then ask - this is all hosted on http://animalrescue.codeplex.com/ but I haven't stored this code yet.
You are missing transaction.Complete(), so your transaction is never committed. When the using block for the TransactionScope ends without Complete() having been called, the transaction is rolled back. Call transaction.Complete() after SaveChanges() succeeds, as the last statement inside the using block.
