Why can't I use "truncate" in Spring Boot?

I want to use a "truncate table" statement instead of a "delete" statement in my Spring Boot project because I need to reset the auto-increment id in MySQL. Here is my code:
@PersistenceContext
protected EntityManager em;

@Override
public void removeAllShopeeCategory() {
    StringBuilder query = new StringBuilder("truncate table ShopeeCategoryDto shopeecategory");
    Query q = this.em.createQuery(query.toString());
    q.executeUpdate();
}
but there is an exception like this:
nested exception is java.lang.IllegalArgumentException: org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: truncate near line 1, column 1
Other operations such as insert, update, and select work fine. What is the reason, and what should I change?

TRUNCATE is not part of JPQL, which is why Hibernate's JPQL parser rejects it. Please use EntityManager#createNativeQuery (https://docs.oracle.com/javaee/7/api/javax/persistence/EntityManager.html#createNativeQuery-java.lang.String-) with a native SQL query:

entityManager.createNativeQuery("TRUNCATE TABLE " + tableName + " CASCADE")
             .executeUpdate();
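For reference, a minimal sketch of the asker's method rewritten with a native query (the physical MySQL table name, shown here as shopee_category, and the @Transactional placement are assumptions to adapt to your schema; note that TRUNCATE is DDL in MySQL, so it commits implicitly, and MySQL's TRUNCATE TABLE takes no CASCADE clause):

@PersistenceContext
protected EntityManager em;

@Override
@Transactional
public void removeAllShopeeCategory() {
    // A native query bypasses the JPQL parser, so TRUNCATE reaches MySQL as-is.
    // Use the database table name here, not the entity/DTO name.
    this.em.createNativeQuery("TRUNCATE TABLE shopee_category").executeUpdate();
}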

Related

How to implement ALTER TABLE Query using Spring Data Jpa or Hibernate

I'm inserting a CSV file into a database table using Spring Batch and Spring Data (and Hibernate).
Each time I insert the CSV I have to delete the previous data in the table using the Spring Data JPA deleteAll() method. The problem is that the ids of the table keep incrementing automatically and continuously (@GeneratedValue(strategy = GenerationType.IDENTITY)) across delete/insert cycles.
I want the ids to start at 1 again after each delete. The only way I found to do that is by resetting the auto-increment value with an ALTER TABLE statement (I know it's not the best way, so your suggestions are welcome).
The question is: is there any way to run this SQL statement
ALTER TABLE res AUTO_INCREMENT=1;
from Java using Spring Data or Hibernate?
Thanks
Is it possible to generate the id on the Java side and not use the database's auto-increment feature? If so, the best way would be to generate the id explicitly and set it on the entity.
The other options are:
Truncate Table
TRUNCATE TABLE table_name;
This resets the auto-increment counter on the table as well as deleting all records from it.
Drop and Recreate Table
DROP TABLE table_name;
CREATE TABLE table_name ( ... );
So I think the second is what you are looking for.
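Either statement can be run from Spring Data as a @Modifying native query; here is a minimal sketch, where ResRepository and ResEntity are hypothetical names for a repository and entity mapped to the res table:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.transaction.annotation.Transactional;

public interface ResRepository extends JpaRepository<ResEntity, Long> {

    // Resets the auto-increment counter without deleting rows.
    @Modifying
    @Transactional
    @Query(value = "ALTER TABLE res AUTO_INCREMENT = 1", nativeQuery = true)
    void resetAutoIncrement();

    // Deletes all rows and resets the counter in one statement (MySQL).
    @Modifying
    @Transactional
    @Query(value = "TRUNCATE TABLE res", nativeQuery = true)
    void truncateTable();
}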
Instead of altering the table, I have customized the way Hibernate generates the ids.
Instead of using:
@GeneratedValue(strategy = GenerationType.IDENTITY)
I have implemented a custom id generator:
@GenericGenerator(name = "sequence_id", strategy = "com.xyz.utils.CustomIdGenerator",
    parameters = {
        @org.hibernate.annotations.Parameter(name = "table_name", value = "myTable")
    })
@GeneratedValue(generator = "sequence_id")
The CustomIdGenerator class:
import java.io.Serializable;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.id.Configurable;
import org.hibernate.id.IdentifierGenerator;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.type.Type;

public class CustomIdGenerator implements IdentifierGenerator, Configurable {

    private String table_name;

    @Override
    public Serializable generate(SharedSessionContractImplementor session, Object object)
            throws HibernateException {
        Connection connection = session.connection();
        try {
            Statement statement = connection.createStatement();
            // the next id is derived from count(id) + 1
            ResultSet rs = statement.executeQuery("select count(id) as Id from " + table_name);
            if (rs.next()) {
                int id = rs.getInt(1) + 1;
                Integer generatedId = new Integer(id);
                System.out.println("Generated Id: " + generatedId);
                return generatedId;
            }
        } catch (SQLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        return null;
    }

    @Override
    public void configure(Type type, Properties params, ServiceRegistry serviceRegistry)
            throws MappingException {
        setTable_name(params.getProperty("table_name"));
    }

    // getters and setters
}
The problem with this solution is that it executes a SELECT for every generated id, so it puts load on the DBMS and is slow.
Also, the ResultSet seems to loop twice for the first id.
Any suggestions for optimization are welcome.
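One tweak sometimes suggested for this pattern, sketched here as a hedged drop-in replacement for generate() in the class above (same imports and assumptions), is to derive the next id from MAX(id) instead of COUNT(id): it still costs one query per insert, but it cannot hand out an id that already exists once deleted rows have left gaps.

@Override
public Serializable generate(SharedSessionContractImplementor session, Object object)
        throws HibernateException {
    // MAX(id) + 1 instead of COUNT(id) + 1: still one round trip per insert,
    // but immune to gaps left by deleted rows.
    try (Statement statement = session.connection().createStatement();
         ResultSet rs = statement.executeQuery(
                 "select coalesce(max(id), 0) from " + table_name)) {
        if (rs.next()) {
            return Integer.valueOf(rs.getInt(1) + 1);
        }
    } catch (SQLException e) {
        throw new HibernateException("could not generate id for " + table_name, e);
    }
    return null;
}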

Spring boot manually commit transaction

In my Spring Boot app I'm deleting and inserting a large amount of data into my MySQL db in a single transaction. Ideally, I want to commit the results to my database only at the end, so it's all or nothing. I'm running into issues where my deletions are committed before my insertions, so during that period any calls to the db return no data (not good). Is there a way to manually commit the transaction?
My main logic is:
@Transactional
public void saveParents(List<Parent> parents) {
    parentRepo.deleteAllInBatch();
    parentRepo.resetAutoIncrement();
    // I'm setting the id manually beforehand
    String sql = "INSERT INTO parent " +
            "(id, name, address, number) " +
            "VALUES ( ?, ?, ?, ?)";
    jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Parent parent = parents.get(i);
            ps.setInt(1, parent.getId());
            ps.setString(2, parent.getName());
            ps.setString(3, parent.getAddress());
            ps.setString(4, parent.getNumber());
        }

        @Override
        public int getBatchSize() {
            return parents.size();
        }
    });
}
ParentRepo:
@Repository
@Transactional
public interface ParentRepo extends JpaRepository<Parent, Integer> {

    @Modifying
    @Query(
        value = "alter table parent auto_increment = 1",
        nativeQuery = true
    )
    void resetAutoIncrement();
}
EDIT:
So I changed
parentRepo.deleteAllInBatch();
parentRepo.resetAutoIncrement();
to
jdbcTemplate.update("DELETE FROM output_stream");
jdbcTemplate.update("alter table output_stream auto_increment = 1");
in order to try to avoid JPA's transaction, but each operation seems to commit separately no matter what I try. I have tried TransactionTemplate and implementing PlatformTransactionManager (seen here), but I can't seem to get these operations to commit together.
EDIT: It seems the issue I was having was with the ALTER TABLE, as it always commits.
I'm running into issues where my deletions will be committed before my insertions, so during that period any calls to the db will return no data
Did you configure JPA and JDBC to share transactions?
If not, then you're basically using two different mechanisms to access the data (EntityManager and JdbcTemplate), each of them maintaining a separate connection to the database. What likely happens is that only EntityManager joins the transaction created by @Transactional; the JdbcTemplate operation executes either without a transaction context (read: in AUTOCOMMIT mode) or in a separate transaction altogether.
See this question. It is a little old, but then again, using JPA and Jdbc together is not exactly a common use case. Also, have a look at the Javadoc for JpaTransactionManager.
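Building on that: JpaTransactionManager (see its Javadoc) does expose the JPA transaction to plain JDBC code as well, provided that code obtains its connection from the same DataSource. JdbcTemplate looks its connection up via DataSourceUtils, so a JdbcTemplate built from the DataSource backing the EntityManagerFactory will join the transaction opened by @Transactional. A minimal sketch, assuming the default Spring Boot single-DataSource setup (bean method name is illustrative):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class JdbcConfig {

    // Reuse the auto-configured DataSource so that JdbcTemplate's connection
    // lookup (DataSourceUtils) participates in the JPA-managed transaction.
    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}

Even with shared transactions, note the asker's last edit: MySQL executes ALTER TABLE and TRUNCATE as DDL with an implicit commit, so those statements can never roll back with the rest of the transaction.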

Spring Data JPA @Query - Cannot create an "AS" alias for a result map

I map the result of the following JPQL query directly to a SpecialCustomDto object instead of the javax.persistence entity MyEntity. But I do not know how to access the COUNT(DISTINCT e.attributeB) that is mapped into the SpecialCustomDto.
This is the query:
@Repository
public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {

    @Query("SELECT new com.test.SpecialCustomDto(e.attributeA, COUNT(DISTINCT e.attributeB)) as specialCustomDto "
        + "FROM MyEntity e WHERE 5 = specialCustomDto.count GROUP BY e.attributeA")
    List<SpecialCustomDto> getSpecialCustomDtos();
}
As soon as I start the Spring Boot application, Hibernate throws the following error:
Caused by: java.lang.IllegalArgumentException: org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: as near line 1, column...
I don't know how to access the aggregated COUNT(DISTINCT e.attributeB) element of the newly created SpecialCustomDto. Without the additional WHERE-clause, the mapping works as expected.
Aggregate functions can be used as a condition via HAVING, the same as in native SQL.
SELECT new com.test.SpecialCustomDto(e.attributeA, COUNT(e.attributeB))
FROM MyEntity e
GROUP BY e.attributeA
HAVING COUNT(e.attributeB) = 5
Remove the alias, move the condition into a HAVING clause (since it operates on an aggregate value), and put the count expression there directly:
public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {

    @Query("SELECT new com.test.SpecialCustomDto(e.attributeA, COUNT(DISTINCT e.attributeB)) "
        + "FROM MyEntity e "
        + "GROUP BY e.attributeA "
        + "HAVING COUNT(DISTINCT e.attributeB) = 5")
    List<SpecialCustomDto> getSpecialCustomDtos();
}
Note: The @Repository is superfluous.
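For completeness, the constructor expression requires SpecialCustomDto to have a matching constructor, and COUNT(...) in JPQL yields a Long; a sketch with assumed field types:

public class SpecialCustomDto {

    private final String attributeA; // type assumed for the example
    private final Long count;        // JPQL COUNT(...) returns a Long

    public SpecialCustomDto(String attributeA, Long count) {
        this.attributeA = attributeA;
        this.count = count;
    }

    // getters omitted for brevity
}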

Properly evaluate placeholders in materialised view from JdbcTemplate

Here is my case:
I have the following SQL file (my_view.sql, containing the definition of a materialised view, Oracle dialect) that returns all the products having expire_date > sysdate:
CREATE MATERIALIZED VIEW my_view
BUILD DEFERRED
REFRESH COMPLETE ON DEMAND
AS
SELECT *
FROM product
WHERE expire_date > sysdate
Now in the application code I have a Spring Service using this view:
@Service
public class MyService {

    private final JdbcTemplate jdbcTemplate;

    // the property is injected at runtime by Spring, but how do I pass this string
    // to the SQL script through jdbcTemplate so it is evaluated there?
    @Value("${expire_date}")
    private String expireDate;

    public MyService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void callMaterialisedView() throws SQLException {
        try (Connection zs1DbConnection = jdbcTemplate.getDataSource().getConnection()) {
            jdbcTemplate.execute("BEGIN dbms_mview.refresh('my_view', 'c'); END;");
        }
    }
}
My question: is it possible to make expire_date configurable and pass it from the application code as a placeholder to the SQL script?
Making it configurable is easy - I can use Spring's @Value annotation to inject a concrete value into my application code. What I am missing is how (if it is possible at all) to pass this value from JdbcTemplate to the script so that it is evaluated properly.
In the final variant, I imagine the script looking like this (expire_date being passed in from JdbcTemplate):
CREATE MATERIALIZED VIEW my_view
BUILD DEFERRED
REFRESH COMPLETE ON DEMAND
AS
SELECT *
FROM product
WHERE expire_date > to_date(${expire_date})
The materialized view does not accept a parameter, but you can create a dummy table with one column and insert/update your parameter value in that table.
It's an alternative solution.
In the MV SQL you can then write something like "where expire_date > (select dt_col from dummy_tab)", as below:
CREATE MATERIALIZED VIEW my_view
BUILD DEFERRED
REFRESH COMPLETE ON DEMAND
AS
SELECT *
FROM product
WHERE expire_date > (select dt_col from dummy_tab);
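From the application side, the same idea could look roughly like this inside MyService above: write the injected value into the one-row parameter table, then refresh the view. dummy_tab and dt_col come from the answer; the UPDATE statement and the date format are assumptions to adapt to the real schema.

public void refreshMaterialisedView() {
    // push the @Value-injected expireDate into the parameter table read by the view
    jdbcTemplate.update(
            "UPDATE dummy_tab SET dt_col = TO_DATE(?, 'YYYY-MM-DD')", expireDate);
    // rebuild the materialised view; its WHERE clause now filters on the stored date
    jdbcTemplate.execute("BEGIN dbms_mview.refresh('my_view', 'c'); END;");
}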

Spring Framework + Eclipselink + Exception translation mechanism

This question is similar to the one asked almost two years ago here. I am looking for information on how to handle exceptions thrown by EclipseLink 2.5 when some common database problems arise. What I get is an ugly org.springframework.transaction.TransactionSystemException which does not give me any information about what failed. Let's take a simple entity as an example:
@Entity
class Insect(_name: String) {
  def this() = this(null)

  @Id
  @Column(unique = true)
  var name: String = _name
}
and an equally simple repository:
@Repository
@Transactional
class InsectDaoImpl extends InsectDao {
  @PersistenceContext
  var entityManager: EntityManager = _

  override def addInsect(insect: Insect) = entityManager.persist(insect)
}
Executing the following code:
insectDao.addInsect(new Insect("hornet"))
insectDao.addInsect(new Insect("hornet"))
Gives me:
org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "CONSTRAINT_INDEX_8 ON PUBLIC.INSECT(NAME)"; SQL statement:
INSERT INTO INSECT (NAME) VALUES (?) [23505-172]
Error Code: 23505
Call: INSERT INTO INSECT (NAME) VALUES (?)
bind => [1 parameter bound]
Query: InsertObjectQuery(pl.zientarski.model.Insect#1df202be)
Yack! The exception itself does not say anything constructive about the source of the problem. The internal exception is, first of all, database-specific; secondly, only the exception message explains what is wrong. For comparison, the same setup with Hibernate gives this exception:
org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint ["PRIMARY_KEY_8 ON PUBLIC.INSECT(NAME)"; SQL statement:
insert into Insect (name) values (?) [23505-172]]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement
The exception type instantly gives a hint about what is going on.
The way EclipseLink behaves is not a surprise once you look at the source code of org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect.translateExceptionIfPossible(RuntimeException ex). First of all, it is not even overridden to support EclipseLink-specific exceptions; secondly, the default implementation inherited from DefaultJpaDialect expects exceptions deriving from javax.persistence.PersistenceException.
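(For illustration, here is a rough, untested sketch of the kind of override I imagine for that hook; it walks the cause chain for a SQLException and delegates to Spring's SQLState-based translator, and it would still have to be registered via setJpaDialect on the EntityManagerFactory setup and/or the JpaTransactionManager.)

import java.sql.SQLException;

import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.support.SQLExceptionTranslator;
import org.springframework.jdbc.support.SQLStateSQLExceptionTranslator;
import org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect;

public class TranslatingEclipseLinkJpaDialect extends EclipseLinkJpaDialect {

    private final SQLExceptionTranslator sqlTranslator = new SQLStateSQLExceptionTranslator();

    @Override
    public DataAccessException translateExceptionIfPossible(RuntimeException ex) {
        // Dig out the underlying SQLException (e.g. the H2 unique-key violation)
        // and let the SQLState translator map it to a Spring DataAccessException.
        for (Throwable cause = ex; cause != null; cause = cause.getCause()) {
            if (cause instanceof SQLException) {
                return sqlTranslator.translate("EclipseLink operation", null, (SQLException) cause);
            }
        }
        return super.translateExceptionIfPossible(ex);
    }
}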
I wanted to ask what your best practices are for working around this problem. Or maybe I just can't see a solution that is right there. Please let me know your suggestions.
The tests were made with EclipseLink 2.5 and Spring 3.2.3.
Thanks.
