Spring JdbcTemplate returns empty result when there should be a valid result

I'm using a SimpleJdbcDaoSupport object to access DB resources. I have a query that is frequently executed against the database to locate a record with a specific key. For some reason, after executing the same query several times, I start to get an empty result even though the record exists in the database.
Any ideas what could cause this behavior?
daoSupport.getJdbcTemplate().query(this.getConsumerTokenQueryStatement(), params, this.rowMapper);

public static class TokenServicesRowMapper implements RowMapper<byte[]> {
    @Override
    public byte[] mapRow(ResultSet rs, int rowNum) throws SQLException {
        // Read the first column of the current row as raw BLOB bytes
        DefaultLobHandler lobHandler = new DefaultLobHandler();
        return lobHandler.getBlobAsBytes(rs, 1);
    }
}

If this is not related to your code, one possible cause is that another transaction is doing something (like an update) to the row you are searching for, and due to the isolation between transactions you cannot see it. One transaction may have changed the row without committing yet, while at the same time the other one is searching for it; since it can only see committed rows, it does not find the row.
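If you want to rule isolation in or out, one diagnostic (a sketch only: the method name and the transactional wiring are my assumptions, and READ_UNCOMMITTED is not supported by every database) is to run the same lookup at a lower isolation level and see whether the "missing" row suddenly appears:

// Diagnostic sketch only: READ_UNCOMMITTED permits dirty reads, so if the row
// is visible here but not under the default isolation level, another
// transaction is holding an uncommitted change to it.
@Transactional(isolation = Isolation.READ_UNCOMMITTED, readOnly = true)
public byte[] findConsumerToken(Object[] params) {
    List<byte[]> tokens = daoSupport.getJdbcTemplate()
            .query(getConsumerTokenQueryStatement(), params, new TokenServicesRowMapper());
    return tokens.isEmpty() ? null : tokens.get(0);
}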

Related

@Cacheable duplicates inner list of DTO returned from jOOQ query

I'm new to jOOQ and Spring caching, and I'm using jOOQ version 3.10.6. I'm trying to cache a query result so I don't have to go to the database every time. The initial fetch works fine, but when the query is executed again it hits the cache, which contains duplicate records in the inner lists. Each time the call falls through to the cache, the number of duplicates grows. I could use a Set instead of a List, but I want to know why this duplication occurs.
Here is my JooqRepo method
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    return create.select(Tables.MY_TABLE.ID)
                 .select(Tables.MY_TABLE.NAME)
                 .select(Tables.MY_INNER_TABLE.ID)
                 .select(Tables.MY_INNER_TABLE.ALIAS)
                 .select(Tables.MY_INNER_TABLE.PARENT_ID)
                 .select(Tables.MY_INNER_TABLE.IS_MAIN)
                 .from(Tables.MY_TABLE)
                 .join(Tables.MY_INNER_TABLE)
                 .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
                 .fetch(this::createMyDtoFromRecord);
}
private MyDto createMyDtoFromRecord(Record record) {
    MyInnerDto myInnerDto = new MyInnerDto();
    myInnerDto.setId(record.get(Tables.MY_INNER_TABLE.ID));
    myInnerDto.setAlias(record.get(Tables.MY_INNER_TABLE.ALIAS));
    myInnerDto.setParentId(record.get(Tables.MY_INNER_TABLE.PARENT_ID));
    myInnerDto.setIsMain(record.get(Tables.MY_INNER_TABLE.IS_MAIN) == 1);

    MyDto myDto = new MyDto();
    myDto.setId(record.get(Tables.MY_TABLE.ID));
    myDto.setName(record.get(Tables.MY_TABLE.NAME));
    myDto.setInnerDtos(Collections.singletonList(myInnerDto));
    return myDto;
}
And here are the DTOs:
@Data
public class MyDto {
    private Long id;
    private String name;
    private List<MyInnerDto> innerDtos;
}

@Data
public class MyInnerDto {
    private Long id;
    private String alias;
    private Long parentId;
    private Boolean isMain;
}
On the first call, MyDto1 has an innerDtos list of size 1, and with each call that hits the cache this number goes up by 3; I think that is because there are 3 parent DTOs returned by the query.
I've tried adding @EqualsAndHashCode to these DTOs, but with it the query returns an empty list.
I'm sorry if this was asked before, but I couldn't find it.
I found the problem: it was not related to jOOQ but to @Cacheable and the use of in-memory caches.
I was using an in-memory cache and removing the duplicates inside the service layer by putting the contents of the query result into a Map<Long, MyDto> to collect the MyInnerDtos under the same id. The problem is that an in-memory cache returns the cached object itself, whereas caches like Redis return a copy of it. So when I modified the object after retrieval, it was changed directly inside the cache as well, hence the duplication.
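The effect is easy to reproduce in isolation. Here is a minimal, self-contained sketch (the cache name and values are made up; ConcurrentMapCache is the store behind Spring's default in-memory cache manager):

import java.util.ArrayList;
import java.util.List;
import org.springframework.cache.concurrent.ConcurrentMapCache;

public class CacheMutationDemo {
    public static void main(String[] args) {
        ConcurrentMapCache cache = new ConcurrentMapCache("operators");
        cache.put("key", new ArrayList<>(List.of("first")));

        // An in-memory cache hands back the very instance it stores...
        List<String> fromCache = cache.get("key", List.class);
        fromCache.add("duplicate");

        // ...so the mutation is visible inside the cache: prints [first, duplicate]
        System.out.println(cache.get("key", List.class));
    }
}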
To get rid of this problem, here's the revised version of the query:
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    Map<MyDto, List<MyInnerDto>> result = create.select(Tables.MY_TABLE.ID)
            .select(Tables.MY_TABLE.NAME)
            .select(Tables.MY_INNER_TABLE.ID)
            .select(Tables.MY_INNER_TABLE.ALIAS)
            .select(Tables.MY_INNER_TABLE.PARENT_ID)
            .select(Tables.MY_INNER_TABLE.IS_MAIN)
            .from(Tables.MY_TABLE)
            .join(Tables.MY_INNER_TABLE)
            .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
            .fetchGroups(
                    // group rows by the parent DTO, collecting the joined
                    // child rows into one list per parent
                    r -> r.into(Tables.MY_TABLE).into(MyDto.class),
                    r -> r.into(Tables.MY_INNER_TABLE).into(MyInnerDto.class)
            );
    result.forEach(MyDto::setInnerDtos);
    return new ArrayList<>(result.keySet());
}

How to access PostgreSQL RETURNING value in Spring Boot DAO?

I want to return the auto-generated id of an entity. PostgreSQL can automatically return selected columns of an inserted row via RETURNING, but I'm having a hard time finding out how to retrieve this value in Spring Boot.
I would want something like:
public int createUser(User user) {
    String sql = "INSERT INTO user (name, surname) VALUES (?, ?) RETURNING id";
    return jdbcTemplate.update(sql,
            user.getName(),
            user.getSurname(),
            resultSet -> resultSet.getInt("id")
    );
}
It's straightforward in Hibernate: whether you use a Repository class or an EntityManager, the save method returns the saved entity, so you can just do:
int id = userRepository.save(user).getId();
Or is there a reason you want to persist it the way you do?
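If you'd rather stay on plain JdbcTemplate, here is a minimal sketch of the RETURNING approach (the method shape is borrowed from the question; note that user is a reserved word in PostgreSQL, hence the quoted table name). Since INSERT ... RETURNING produces a one-row result set, it can be executed with queryForObject instead of update:

public int createUser(User user) {
    // The statement behaves like a query that yields exactly one row,
    // so queryForObject can read the generated id directly.
    String sql = "INSERT INTO \"user\" (name, surname) VALUES (?, ?) RETURNING id";
    return jdbcTemplate.queryForObject(sql, Integer.class, user.getName(), user.getSurname());
}

If portability beyond PostgreSQL matters, Spring's GeneratedKeyHolder combined with update() is the more database-neutral alternative.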

Spring transaction not getting rolled back

I am using Spring transactions and Hibernate to insert data into an Oracle database table.
Here is the scenario I am facing a problem with:
I have two tables with a one-to-one mapping in Hibernate, and I am inserting data into these two tables using the method calls below. The transaction propagates from one method to the other, so that the insertion into both tables happens in one transaction.
Problem: while inserting data into the second table, if an exception is thrown (such as a ConstraintViolationException, e.g. "cannot insert null into a particular column"), then ideally no data should be inserted into either table, i.e. the transaction should roll back. But this is not happening: when an exception is thrown while inserting data into the second table, records still get inserted into the first table.
Can you please help me find where I am going wrong with @Transactional, or whether there is some other reason for this (maybe on the database side, though I'm not sure)?
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public void methodA() {
    // inserting data in table 1
    methodB();
}

@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public void methodB() {
    // inserting data in table 2
}
From the documentation of rollbackFor: "Defines zero (0) or more exception classes, which must be subclasses of Throwable, indicating which exception types must cause a transaction rollback." Details here.
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void methodA() throws Exception {
    try {
        // inserting data in table 1
        methodB();
    } catch (Exception ex) {
        // Log/handle as needed, but rethrow: an exception swallowed here
        // never reaches the transactional proxy, so the transaction would
        // still commit.
        throw ex;
    }
}

public void methodB() {
    // inserting data in table 2
}
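If the exception really must be swallowed inside methodA (for logging or a fallback, say), one option, sketched here under that assumption, is to flag the current transaction for rollback explicitly:

import org.springframework.transaction.interceptor.TransactionAspectSupport;

@Transactional(rollbackFor = Exception.class)
public void methodA() {
    try {
        // inserting data in table 1
        methodB();
    } catch (Exception ex) {
        // The proxy never sees a swallowed exception, so mark the
        // transaction for rollback manually.
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}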

Spring boot manually commit transaction

In my Spring Boot app I'm deleting and inserting a large amount of data into my MySQL db in a single transaction. Ideally, I want to commit the results to my database only at the end, so it's all or nothing. I'm running into issues where my deletions are committed before my insertions, so during that period any calls to the db return no data (not good). Is there a way to manually commit the transaction?
My main logic is:
@Transactional
public void saveParents(List<Parent> parents) {
    parentRepo.deleteAllInBatch();
    parentRepo.resetAutoIncrement();

    // I'm setting the id manually beforehand
    String sql = "INSERT INTO parent " +
            "(id, name, address, number) " +
            "VALUES (?, ?, ?, ?)";
    jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Parent parent = parents.get(i);
            ps.setInt(1, parent.getId());
            ps.setString(2, parent.getName());
            ps.setString(3, parent.getAddress());
            ps.setString(4, parent.getNumber());
        }

        @Override
        public int getBatchSize() {
            return parents.size();
        }
    });
}
ParentRepo
@Repository
@Transactional
public interface ParentRepo extends JpaRepository<Parent, Integer> {

    @Modifying
    @Query(
        value = "alter table parent auto_increment = 1",
        nativeQuery = true
    )
    void resetAutoIncrement();
}
EDIT:
So I changed
parentRepo.deleteAllInBatch();
parentRepo.resetAutoIncrement();
to
jdbcTemplate.update("DELETE FROM output_stream");
jdbcTemplate.update("alter table output_stream auto_increment = 1");
in order to try to avoid JPA's transaction, but each operation seems to commit separately no matter what I try. I have tried TransactionTemplate and implementing PlatformTransactionManager (seen here), but I can't seem to get these operations to commit together.
EDIT: It seems the issue I was having was with the ALTER TABLE statement: it is DDL, and MySQL implicitly commits the current transaction when executing DDL, so it will always commit on its own.
I'm running into issues where my deletions will be committed before my insertions, so during that period any calls to the db will return no data
Did you configure JPA and JDBC to share transactions?
If not, then you're basically using two different mechanisms to access the data (EntityManager and JdbcTemplate), each maintaining a separate connection to the database. What likely happens is that only the EntityManager joins the transaction created by @Transactional; the JdbcTemplate operation executes either without a transaction context (read: in AUTOCOMMIT mode) or in a separate transaction altogether.
See this question. It is a little old, but then again, using JPA and JDBC together is not exactly a common use case. Also, have a look at the Javadoc for JpaTransactionManager.
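For the "manually commit" part of the question, a programmatic transaction via TransactionTemplate is one reasonable shape. This is a sketch only: it assumes JdbcTemplate and the transaction manager share the same DataSource, the class and method names are made up, and the ALTER TABLE is deliberately omitted because MySQL commits implicitly on DDL:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class ParentBatchService {

    private final JdbcTemplate jdbcTemplate;
    private final TransactionTemplate transactionTemplate;

    public ParentBatchService(JdbcTemplate jdbcTemplate,
                              PlatformTransactionManager transactionManager) {
        this.jdbcTemplate = jdbcTemplate;
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void replaceParents(List<Parent> parents) {
        // The callback runs inside a single transaction: it commits only if
        // the callback returns normally, and rolls back on any exception.
        transactionTemplate.executeWithoutResult(status -> {
            jdbcTemplate.update("DELETE FROM parent");
            jdbcTemplate.batchUpdate(
                    "INSERT INTO parent (id, name, address, number) VALUES (?, ?, ?, ?)",
                    parents, parents.size(), (ps, parent) -> {
                        ps.setInt(1, parent.getId());
                        ps.setString(2, parent.getName());
                        ps.setString(3, parent.getAddress());
                        ps.setString(4, parent.getNumber());
                    });
        });
    }
}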

Oracle DB queries through JPA/Hibernate from java.util.Date

I am currently working on migrating a Spring Boot application from MariaDB to OracleDB. The Spring/Java backend uses Hibernate/JPA to generate queries for the MariaDB database, so in theory the migration should be fairly painless: change a dialect and you're done. In practice, it turns out that the Hibernate dialect for Oracle 12c makes some odd assumptions when binding Java types to database types. The backend still uses the old java.util.Date type for all of its dates, which Hibernate insists on mapping to either a Long (even more outdated, as far as I could find) or a BLOB type of some sort. BLOBs are great, of course, but it seems much more intuitive to map a Date to a DATE.
Because the column is currently set to expect a DATE, I get the following error whenever I try to access the row:
InvalidDataAccessResourceUsageException: could not extract ResultSet
ORA-00932: inconsistent datatypes: expected - got BLOB
I have tried using the JPA Converter feature to manually cast these Date objects to something Hibernate wouldn't mess up, but this resulted in Hibernate expecting a VARBINARY, as this article describes:
https://dzone.com/articles/leaky-abstractions-or-how-bind
@Converter(autoApply = false)
public class DateDATEAttributeConverter implements AttributeConverter<Date, DATE> {

    @Override
    public DATE convertToDatabaseColumn(Date date) {
        return new DATE(); // conversion to be done later
    }

    @Override
    public Date convertToEntityAttribute(DATE date) {
        return new Date(); // conversion to be done later
    }
}
Using this very minimal converter and running through the code step by step with a debugger shows that everything seems to be properly attached to the PreparedStatement, but it is then refused by Hibernate with an
org.hibernate.exception.SQLGrammarException: could not extract ResultSet
error.
Afterwards I decided to try making a custom UserType as in the above article, described in further detail here:
https://www.baeldung.com/hibernate-custom-types
I currently cast the java.util.Date to an Oracle DATE through this custom type. The @Type annotation is used to make sure the relevant field is converted using this CustomType implementation. Sadly, after implementing all this, the same error as before returns. It seems that somewhere under the hood there is still a conversion/binding going on that I haven't managed to influence.
@Override
public Object nullSafeGet(
        ResultSet rs,
        String[] names,
        SessionImplementor session,
        Object owner
) throws SQLException {
    LOGGER.debug("nullSafeGet: " + names[0]);
    return rs.getTimestamp(names[0]);
}

@Override
public void nullSafeSet(
        PreparedStatement st,
        Object value,
        int index,
        SessionImplementor session
) throws SQLException {
    if (Objects.isNull(value)) {
        st.setNull(index, Types.DATE);
    } else {
        Date dateUtil = (Date) value;
        LocalDate localDate = dateUtil.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
        java.sql.Date dateSQL = java.sql.Date.valueOf(localDate);
        DATE date = new DATE(dateSQL);
        LOGGER.debug("nullSafeSet: " + date);
        st.setObject(index, date);
    }
}
Is there any established method to get around this? I have searched around online quite a bit, but I didn't get much further than these two articles, or being told to stop using old types such as Date. Sadly, with big old projects and new deadlines, that is not a preferable option.
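For completeness, the standard JPA way to pin a java.util.Date attribute to a SQL DATE column is @Temporal(TemporalType.DATE). Whether it helps here depends on the entity mapping, which the question doesn't show, so treat this as a first thing to rule out before converters and UserTypes (the entity and column names are made up):

import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class Appointment { // hypothetical entity, for illustration only

    @Id
    private Long id;

    // Tells the provider to bind this value as a SQL DATE instead of
    // guessing a representation for the legacy java.util.Date type.
    @Temporal(TemporalType.DATE)
    @Column(name = "APPOINTMENT_DATE")
    private Date appointmentDate;
}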
