I have a transactional method in which objects are inserted. The debugger shows that at eventsDAO.save(..) no actual insert takes place; there is only a sequence fetch. The first time I see insert into events_t .. in the debugger is when something references the just-inserted Event.
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class, readOnly = false)
public void insertEvent(..) {
    EventsT eventsT = new EventsT();
    // Fill it out...
    EventsT savedEventsT = eventsDAO.save(eventsT); // No actual save happens here
    // .. Some other HQL fetches or statements ...
    // Actual Save(Insert) only happens after some actual reference to this EventsT (below)
    // This is also HQL
    SomeField someField = eventsDAO.findSomeAttrForEventId(savedEventsT.getId());
}
But I also see that this only holds true if all the statements are HQL (non-native).
As soon as I put a native SQL SELECT anywhere before the first actual reference to this table, it forces an immediate flush, even though the query does not touch the table in any way: I see the statement insert into events_t ... on the console at exactly that point.
If my native SQL SELECT doesn't touch the EventsT table at all, why does the flush happen at that point?
According to the Hibernate documentation:
6.1. AUTO flush
By default, Hibernate uses the AUTO flush mode which triggers a flush in the following circumstances:
prior to committing a Transaction
prior to executing a JPQL/HQL query that overlaps with the queued entity actions
before executing any native SQL query that has no registered synchronization
So, this is expected behaviour. See also this section. It shows how you can use a synchronization.
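For illustration, here is a rough sketch of registering such a synchronization on a native query; the entity and table names SomeOtherEntity / other_table_t are placeholders, not from the question. With the query space declared, AUTO flush only triggers when the query actually overlaps the queued entity actions:
import java.util.List;
import javax.persistence.EntityManager;
import org.hibernate.query.NativeQuery;

// Sketch only: declare which entity this native query is synchronized with,
// so pending inserts for unrelated entities (e.g. EventsT) no longer force a flush here.
public List<?> findSomethingNative(EntityManager em) {
    return em.createNativeQuery("select id from other_table_t")
             .unwrap(NativeQuery.class)
             .addSynchronizedEntityClass(SomeOtherEntity.class)
             .getResultList();
}
Without any registered synchronization, Hibernate has to assume the native SQL could read anything, which is exactly the flush you are seeing.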
Related
I am creating a Spring Batch process (Spring Boot 2) that reads a file and writes it to a database, one record at a time: read from the file, process the record, and write (or update) it to the database.
If a record with the same ID already exists in the DB, the process has to update the end date of the existing record and create a new record with the new start date. Below is the code:
public class Processor implements ItemProcessor<CelebVO, CelebVO> {

    @Autowired
    EndorseTableRepository endorseTableRepository;

    @Override
    @Transactional
    public CelebVO process(CelebVO celebVO) {
        CelebEndorsement celebEndorsement = endorseTableRepository.findAllByCelebIDAndBrandID(celebVO.getCelebID(), celebVO.getBrandID());
        if (celebEndorsement == null) {
            CelebEndorsement newEndorsement = new CelebEndorsement(celebVO);
            endorseTableRepository.save(newEndorsement);
        } else {
            celebEndorsement.setEndDate(celebVO.getEffDt().minusDays(1));
            endorseTableRepository.save(celebEndorsement);
            // create a new row with new start date
            CelebEndorsement newEndorsement = new CelebEndorsement(celebVO);
            newEndorsement.setStartDate(celebVO.getEffDt());
            endorseTableRepository.save(newEndorsement);
        }
        return celebVO;
    }
}
Below is the input txt file (CelebVO):
CelebID BrandID EffDt
J Lo Pepsi 2021-01-05
J Lo Pepsi 2021-05-30
Now, let's suppose we start with an empty EndorseTable. When the process picks up the file and reads the first record, it sees there is no record for CelebID 'J Lo', so it inserts a row into the DB.
Then the process reads and processes the second row. It should see that there is already a record in the table for J Lo, set an end date on that record, and then create a new record.
After this file is processed we should see two records in the table.
But that is not what happens. Even though I call repository.save() for the first record, it is still not committed to the table, so when the process reads the second row it doesn't find any rows in the table. It ends up writing only one record to the table.
I tried a repository.saveAndFlush(). That doesn't help.
My chunk size is 1.
I tried removing @Transactional, but that breaks the code, so I kept it there.
The chunk-oriented processing model of Spring Batch commits a transaction per chunk, not per record. So in your case, if the insert and the update happen to be in the same chunk, the processor won't see the change made for the previous record, because the transaction has not been committed yet at that point.
Adding @Transactional on your processor's method is incorrect, because the processor is already executed within the scope of a transaction driven by Spring Batch. What you are trying to do would work if you set the commit interval to 1, but this would impact the performance of your step.
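For illustration only, here is a sketch of what a step with a commit interval of 1 looks like in Spring Batch 4 / Boot 2; the configuration class and bean names are assumptions, not taken from your job configuration:
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchConfig { // placeholder configuration class

    // Each chunk of one item is read, processed and written in its own
    // transaction, so the second record sees the row committed for the first,
    // at the cost of one commit per record.
    @Bean
    public Step celebStep(StepBuilderFactory steps,
                          ItemReader<CelebVO> reader,
                          Processor processor,
                          ItemWriter<CelebVO> writer) {
        return steps.get("celebStep")
                    .<CelebVO, CelebVO>chunk(1) // commit interval = 1
                    .reader(reader)
                    .processor(processor)
                    .writer(writer)
                    .build();
    }
}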
I had to modify the Entity class. I replaced
@ManyToOne(cascade = CascadeType.ALL)
with
@ManyToOne(cascade = {CascadeType.MERGE, CascadeType.DETACH})
and it worked.
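For context, roughly where that mapping change lives; the target entity Celeb, the field name, and the join column are placeholders, not taken from the actual entity class:
import javax.persistence.*;

@Entity
public class CelebEndorsement {

    @Id
    @GeneratedValue
    private Long id;

    // MERGE/DETACH instead of ALL: saving an endorsement no longer cascades
    // a persist/remove to the associated entity
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.DETACH})
    @JoinColumn(name = "celeb_id")
    private Celeb celeb;

    // ...
}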
I have a requirement to read the first enabled account from a DB2 table and immediately update the column to disable it. While server 1 is reading and updating the column, no other server should be able to read the same row, since I want one account to be used by only one server at a time.
This is what I have so far:
Account.java
public class Account {
    private Long id;
    private Character enabled;
    // .............
}
AccountRepository.java
public interface AccountRepository extends JpaRepository<Account, Long> {
    Account findFirstByEnabled(Character enabled);
}
AccountServiceImpl.java
@Service
public class AccountServiceImpl {

    @Autowired
    private AccountRepository accntRepository;

    @Transactional
    public Account findFirstAvailableAccount() {
        Account account = accntRepository.findFirstByEnabled(new Character('Y'));
        if (account != null) {
            account.setEnabled(new Character('N')); // put debug point here
            accntRepository.save(account);
        }
        return account;
    }
}
But this isn't working. I've put a debug breakpoint on the marked line in findFirstAvailableAccount(). What I expected: while execution is paused at that breakpoint, a SELECT query run directly against the database shouldn't return; it should only complete after I resume execution on the server so that the transaction finishes. Instead, running the SELECT directly against the database returned the complete result set immediately. What am I missing here? I'm using DB2, if it matters.
Answering my own question... I was running an incorrect SELECT statement against the database. If I run the SELECT with "select .. for update" semantics, the execution waits until I hit resume on the server and the transaction completes.
SQL 1 - this executes immediately even though the transaction from the server isn't complete.
select * from MYTABLE where ENABLED = 'Y';
SQL 2 - this waits until the transaction from the server is complete (it will probably time out if I don't hit resume quickly enough).
select * from MYTABLE where ENABLED = 'Y'
fetch first 1 rows only with rs use and keep update locks;
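As a side note, if you want the repository query itself to take that lock (so other transactions block on the row until yours commits), Spring Data JPA can issue the locking select for you. This is only a sketch under that assumption, not part of my original code:
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Runs as a locking select (i.e. "for update" semantics) inside the
    // surrounding @Transactional method
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Account findFirstByEnabled(Character enabled);
}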
I'm trying to write a Groovy/Grails 3 function that looks up a database object, locks it, and then saves it (releasing the lock automatically).
If the function is called multiple times, it should wait until the lock is released, and then run the update. How can I accomplish this?
def updateUser(String name) {
    User u = User.get(1)
    // if locked, wait until released somehow?
    u.lock()
    u.name = name
    u.save()
}
updateUser('bob')
updateUser('fred') // sees lock from previous call, waits until released, then updates
u.save(flush:true)
Flushing the Hibernate session should complete the transaction and release the lock at the database level.
Generally speaking, pessimistic locking only works in a transactional context.
So make sure to put the updateUser method in a service that is annotated with @Transactional.
Calling get() and then lock() results in two SQL statements being executed (one to fetch the object, another to lock it).
Using User.lock(), a single select ... for update query is issued instead.
@Transactional
class UserService {
    def updateUser(String name) {
        User u = User.lock(1) // blocks until lock is free
        u.name = name
        u.save()
    }
}
We have a class in Salesforce that is called from a trigger. When using Apex Data Loader this trigger throws an error: oppafterupdate: System.LimitException: Too many SOQL queries: 101
I commented out the line of code that calls the following static method in a class we wrote, and there are no more errors with respect to the governor limit. So I can verify the method below is the culprit.
I'm new to this, but I know that Apex code should be bulkified, and DML (and SOQL) statements should not be used inside of loops. What you want to do is put objects in a collection and use DML statements against the collection.
So I modified the method below; I declared a list, I added Task objects to the list, and I ran a DML statement on the list. I commented out the update statement inside the loop.
// close all tasks linked to an opty or lead
public static void closeTasks(string sId) {
    List<Task> TasksToUpdate = new List<Task>{}; // added this
    List<Task> t = [SELECT Id, Status, WhatId FROM Task WHERE WhatId = :sId]; // opty
    if (t.isEmpty() == false) {
        for (Task c : t) {
            c.Status = 'Completed';
            TasksToUpdate.add(c); // added this
            // update c;
        }
    }
    update TasksToUpdate; // Added this
}
Why am I still getting the above error when I run the code in our sandbox? I thought I took care of this issue but apparently there is something else here? Please help.. I need to be pointed in the right direction.
Thanks in advance for your assistance
You have "fixed" the update part but the code still fails on the too many SELECTs.
We would need to see your trigger's code but it seems to me you're calling your function in a loop in that trigger. So if say 200 Opportunities are updated, your function is called 200 times and in the function's body you have 1 SOQL... Call it more than 100 times and boom, headshot.
Try to modify the function to pass a collection of Ids:
Set<Id> ids = Trigger.newMap.keySet();
betterCloseTasks(ids);
And the improved function could look like this:
public static void betterCloseTasks(Set<Id> ids){
List<Task> tasksToClose = [SELECT Id
FROM Task
WHERE WhatId IN :ids AND Status != 'Completed'];
if(!tasksToClose.isEmpty()){
for(Task t : tasksToClose){
t.Status = 'Completed';
}
update tasksToClose;
}
}
Now you have 1 SOQL and 1 update operation no matter whether you update 1 or hundreds of opportunities. It can still fail on some other limits like max 10000 updated records in one transaction but that's a battle for another day ;)
Why isn't the exception thrown? Is LINQ's Any() not considering the new entries?
MyContext db = new MyContext();
foreach (string email in new[] { "asdf@gmail.com", "asdf@gmail.com" })
{
    Person person = new Person();
    person.Email = email;
    if (db.Persons.Any(p => p.Email.Equals(email)))
    {
        throw new Exception("Email already used!");
    }
    db.Persons.Add(person);
}
db.SaveChanges();
Shouldn't the exception be triggered on the second iteration?
The previous code is adapted for the question, but the real scenario is the following:
I receive an Excel file of persons and iterate over it, adding every row as a person to db.Persons and checking that its email isn't already used in the db. The problem is when there are repeated emails in the worksheet itself (two rows with the same email).
Yes - queries (by design) are only computed against the data source. If you want to query in-memory items you can also query the Local store:
if (db.Persons.Any(p => p.Email.Equals(email)) ||
    db.Persons.Local.Any(p => p.Email.Equals(email)))
However - since YOU are in control of what's added to the store wouldn't it make sense to check for duplicates in your code instead of in EF? Or is this just a contrived example?
Also, throwing an exception for an already existing item seems like a poor design as well - exceptions can be expensive, and if the client does not know to catch them (and in this case compare the message of the exception) they can cause the entire program to terminate unexpectedly.
A call to db.Persons will always trigger a database query, but those new Persons are not yet persisted to the database.
I imagine if you look at the data in debug, you'll see that the new person isn't there on the second iteration. If you were to set MyContext db = new MyContext() again, it would be, but you wouldn't do that in a real situation.
What is the actual use case you need to solve? This example doesn't seem like it would happen in a real situation.
If you're comparing against the db, your code should work. If you need to prevent dups being entered, it should happen elsewhere - on the client or checking the C# collection before you start writing it to the db.