I have a Spring Boot entity with a "version" field.
class TeamEntity {
    ...
    @Version
    private Long version;
    ...
}
Each time I do a save operation, the version is incremented by 1. For testing purposes I need to execute a save operation without incrementing the version. It would also be helpful to be able to set the version myself and then save.
Is that possible?
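One way this is commonly handled, as a minimal sketch (it assumes a team table with id, name, and version columns behind TeamEntity; those table and column names are illustrative, not taken from the question), is to bypass the JPA provider with a native update via JdbcTemplate, so @Version is neither checked nor incremented and you can write whatever version value you want:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class TeamTestSupport {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Writes the row directly; Hibernate's optimistic locking never sees this statement,
    // so the version column stays at exactly the value you pass in.
    public void saveWithoutVersionBump(long id, String name, long version) {
        jdbcTemplate.update(
            "UPDATE team SET name = ?, version = ? WHERE id = ?",
            name, version, id);
    }
}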
I have a Spring Boot application (based on spring-boot-starter-data-jpa). I have an absolute minimum of configuration going on, and only a single table and entity.
I'm using CrudRepository<MyEntity, Long> with a couple of findBy methods, which all work. I also have a derived deleteBy method, which doesn't work. The signature is simply:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {
    Long deleteBySystemId(String systemId);
    // findBy methods left out
}
The entity is simple, too:
@Entity
@Table(name = "MyEntityTable")
public class MyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MyEntityPID")
    private Long MyEntityPID;

    @Column(name = "SystemId")
    private String systemId;

    @Column(name = "PersonIdentifier")
    private String personIdentifier;

    // Getters and setters here, also hashCode & equals.
}
The reason the deleteBy method isn't working is that it seems to only issue a "select" statement to the database, which selects all the MyEntity rows that have a SystemId with the value I specify. Using my MySQL global log I have captured the actual SQL, issued it manually against the database, and verified that it returns a large number of rows.
So Spring, or rather Hibernate, is trying to select the rows it has to delete, but it never actually issues a DELETE FROM statement.
According to a note on Baeldung this select statement is normal, in the sense that Hibernate will first select all rows that it intends to delete, then issue delete statements for each of them.
Does anyone know why this derived deleteBy method isn't working? I have @EnableTransactionManagement on my @Configuration, and the calling method is @Transactional. The MySQL log shows that Spring sets autocommit=0, so it seems like transactions are properly enabled.
I have worked around this issue by manually annotating the derived delete method this way:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {

    @Modifying
    @Query("DELETE FROM MyEntity m WHERE m.systemId = :systemId")
    Long deleteBySystemId(@Param("systemId") String systemId);

    // findBy methods left out
}
This works, including transactions. But this just shouldn't be necessary; I shouldn't need to add that @Query annotation.
Here is a person who has the exact same problem as I do. However, the Spring developers were quick to wash their hands of it and write it off as a Hibernate problem, so there's no solution or explanation to be found there.
Oh, for reference I'm using Spring Boot 2.2.9.
tl;dr
It's all in the reference documentation. That's the way JPA works. (Me rubbing my hands, washing them.)
Details
The two methods do two different things: Long deleteBySystemId(String systemId); loads the entities matching the given constraints and ends up issuing EntityManager.remove(…), which the persistence provider is allowed to delay until the transaction commits. That is, code following that call is not guaranteed that the changes have already been synced to the database. That, in turn, is because JPA allows its implementations to do exactly that. Unfortunately, that's nothing Spring Data can fix on top of it. (More rubbing, more washing, plus a bit of soap.)
The reference documentation justifies that behavior with the need for the EntityManager (again, a JPA abstraction, not something Spring Data has anything to do with) to trigger lifecycle events like @PreRemove, which users expect to fire.
The second method, which manually declares a modifying query, executes that query directly in the database, which means entity lifecycle callbacks do not fire because the entities never get materialized upfront.
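As a minimal illustration (the @PreRemove callback below is hypothetical, not taken from the question's entity): the derived deleteBySystemId(…) loads each matching MyEntity and removes it through the EntityManager, so the callback fires, whereas the manually declared @Modifying JPQL delete runs straight in the database and skips it.
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.PreRemove;

@Entity
public class MyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long MyEntityPID;

    private String systemId;

    // Fires only when an entity instance is removed via the EntityManager,
    // i.e. for the derived delete method, not for the bulk @Modifying delete query.
    @PreRemove
    void beforeDelete() {
        System.out.println("Removing " + systemId);
    }
}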
However, the Spring developers were quick to wash their hands of it and write it off as a Hibernate problem, so there's no solution or explanation to be found there.
There's a detailed explanation of why it works the way it works in the comments on the ticket. There are even solutions provided: workarounds, and suggestions to bring this up with the part of the stack that has control over this behavior. (Shuts the faucet, reaches for a towel.)
I've run into an issue where, on rare occasions (it might take dozens of restarts), Spring doesn't initialize all properties correctly.
I define a bean of type CbKafkaConsumerConfig (my custom bean) and check its state in a thread created by a method marked with @EventListener(ApplicationReadyEvent.class), so I expect it to be completely initialized by that point. However, this is what I see:
Values that I expected to be filled are left with placeholders.
Here's how they are defined in application.properties file. (And I've checked the spelling - it's correct, otherwise it would fail every time, not occasionally)
config-bean-prefix.msg-topics=${cb.kafka.tc-topic}
config-bean-prefix.unexpected-error-topic=${cb.kafka.unexpected-errors-topic}
These properties are defined in Vault and I expected them to be fetched and set with the power of Spring Cloud Vault. Here you can see that Vault is present as a property source AND that these properties are populated there.
At the same time, there are other beans of the same type CbKafkaConsumerConfig in the context that refer to these properties, and yet they resolve fine for them.
Here's how the bean is defined
#Bean({"myBean"})
#ConfigurationProperties(
prefix = "config-bean-prefix"
)
public CbKafkaConsumerConfig myBeanConsumer() {
return new CbKafkaConsumerConfig();
}
And the bean itself:
@Data
public class CbKafkaConsumerConfig extends CbKafkaBaseConfig {

    @NotNull
    @Size(min = 1)
    private Collection<String> msgTopics;

    @NotNull
    private String unExpectedErrorTopic;
}
We're using Spring Boot 2.2.x; however, this issue is also present with Spring Boot 2.1.x.
It's not specific to this type of bean; others might fail as well while being correctly set in Vault. What could be the reason for such unpredictable behavior, and what should I look into?
It turns out that, by default, Spring Cloud Vault doesn't simply fetch properties on startup; it also refreshes them periodically. While it refreshes, there's a short window during which the properties have already been removed from the property source in the context but not yet replaced with the new ones, and this can actually happen during context initialization (highly questionable behavior in my opinion), leaving some beans with unresolved values.
If you don't want properties to be refreshed at runtime, just set spring.cloud.vault.config.lifecycle.enabled to false.
I'm using Spring Boot (1.4.4.RELEASE) with Spring Data in order to manage a MySQL database. I've got the following case:
We update a revision performed on a piece of equipment using the RevisionService.
RevisionService saves the revision and calls the EquipmentService to update the equipment status.
updateEquipmentStatus calls a DB stored procedure in order to evaluate the equipment together with its revisions and update the status field.
I've tried some options but haven't managed to get the updated status for the equipment. The updateEquipmentStatus method keeps writing the previous status for the equipment (not taking into account the current revision being stored in the transaction). The code is written this way:
RevisionService
@Service
public class RevisionService {

    @org.springframework.transaction.annotation.Transactional
    public Long saveRevision(Revision rev) {
        // save the revision using JPA-Hibernate
        repo.save(rev);
        equipmentService.updateEquipmentStatus(idEquipment);
    }
}
EquipmentService
@Service
public class EquipmentService {

    @org.springframework.transaction.annotation.Transactional
    public Long updateEquipmentStatus(Long idEquipment) {
        repo.updateEquipmentStatus(idEquipment);
    }
}
EquipmentRepo
@Repository
public interface EquipmentRepo extends CrudRepository<Equipment, Long> {

    @Modifying
    @Procedure(name = "pupdate_equipment_status")
    void updateEquipmentStatus(@Param("id_param") Long idEquipment);
}
As far as I understand, since both methods are annotated with Spring's @Transactional, the updateEquipmentStatus method should be executed within the scope of the current transaction. I've also tried different options for the @Transactional annotation on updateEquipmentStatus, such as @Transactional(isolation = Isolation.READ_UNCOMMITTED) (which shouldn't be required, because I'm using the same transaction) and @Transactional(propagation = Propagation.REQUIRES_NEW), but it still doesn't see the current status. This is how my stored procedure is saved in the MySQL DB:
CREATE DEFINER=`root`@`localhost` PROCEDURE `pupdate_equipment_status`(IN `id_param` INT)
LANGUAGE SQL
NOT DETERMINISTIC
MODIFIES SQL DATA
SQL SECURITY DEFINER
COMMENT ''
BEGIN
    /* Performs the update considering tequipment and trevision */
    /* to calculate the equipment status; no transaction is managed here */
END
I also want to clarify that if I make a modification to the equipment itself (which affects only tequipment), the status is updated properly. InnoDB is the engine used for all the tables.
UPDATE
I just changed the repo method to use a native query instead, and the same problem keeps happening, so the stored procedure can be ruled out as the cause:
@Modifying
@Query(nativeQuery = true, value = "update tequipment set equipment_status = (CASE WHEN (...))")
void updateEquipmentStatus(@Param("id_param") Long idEquipment);
UPDATE2
Having done more tests and added logging with TransactionSynchronizationManager.getCurrentTransactionName() in the methods, this is the concrete issue:
Changes made in the equipment service are properly picked up by the updating function (when something in tequipment changes, the status in tequipment is calculated properly).
Changes made in the revision service (trevision) result in an outdated value in tequipment (it doesn't matter whether Spring runs it in a different transaction using REQUIRES_NEW or not). Spring seems to create a new transaction properly when using REQUIRES_NEW on updateEquipmentStatus, because the current transaction name changes, but the native query doesn't see the latest values (because the earlier transaction hasn't been committed yet?). I also tried removing @Transactional from updateEquipmentStatus so the same transaction is used, but the issue keeps happening.
I would like to highlight that the query used to update the equipment status has a CASE expression with multiple subqueries on trevision.
Adding the following code fixes it (programmatically flushing the pending changes to the database):
@Service
public class EquipmentService {

    @PersistenceContext
    private EntityManager entityManager;

    @org.springframework.transaction.annotation.Transactional
    public Long updateEquipmentStatus(Long idEquipment) {
        entityManager.flush();
        repo.updateEquipmentStatus(idEquipment);
    }
}
Still, it would be great to find a declarative way to do it.
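A possibly more declarative option, sketched below as an assumption rather than a verified fix: Spring Data JPA 2.0.4 and later (so, newer than the Boot 1.4.4 stack used here) adds a flushAutomatically attribute to @Modifying, which flushes the persistence context before the modifying query runs, removing the need for the manual entityManager.flush(). The query body simply mirrors the elided native query from the update above:
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

@Repository
public interface EquipmentRepo extends CrudRepository<Equipment, Long> {

    // flushAutomatically = true asks Spring Data to flush pending changes
    // (the just-saved Revision) before executing this modifying query.
    @Modifying(flushAutomatically = true)
    @Query(nativeQuery = true, value = "update tequipment set equipment_status = (CASE WHEN (...))")
    void updateEquipmentStatus(@Param("id_param") Long idEquipment);
}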
Changing to READ UNCOMMITTED is the right idea, but you'd also need to flush the EntityManager before your stored procedure is called. See this thread:
How to make the queries in a stored procedure aware of the Spring Transaction?
Personally, I'd do it all in Spring unless you are absolutely forced to use a stored procedure.
I have started to use SDN 3.0.0.M1 with Neo4j 2.0 (via the REST interface) and I want to use an existing graph.db with existing data.
I have no problem finding nodes created through SDN via hrRepository.save(myObject), but I can't fetch any existing node (not created through SDN) via hrRepository.findAll() or any other method, even though I have manually added a __type__ property to these existing nodes.
I use a very simple repository to test that:
@Component
public interface HrRepository extends GraphRepository<Hr> {

    Hr findByName(String name);

    @Query("match (hr:hr) return hr")
    EndResult<Hr> GetAllHrByLabels();
}
And the annotated query GetAllHrByLabels works perfectly.
Is there a way to use the standard methods (findAll(), findByName()) on existing data without redefining the Cypher query?
I recently ran into the same problem when upgrading from SDN 2.x to 3.0. I was able to get it working by first following the steps in this article: http://maxdemarzi.com/2013/06/26/neo4j-2-0-is-coming/ to create and enable Neo4j Labels on the existing data.
From there, though, I had to get things working for SDN 3. As you encountered, to do this, you need to set the metadata correctly. Here's how to do that:
Consider a @NodeEntity called Person that inherits from AbstractNodeEntity (imports and extraneous code removed for brevity):
AbstractNodeEntity:
@NodeEntity
public abstract class AbstractNodeEntity {
    @GraphId private Long id;
}
Person:
@NodeEntity
@TypeAlias("Person") // <== This line added for SDN 3.0
public class Person extends AbstractNodeEntity {
    public String name;
}
As you know, in SDN 2.x, a __type__ property is created automatically that stores the class name used by SDN to instantiate the node entity when it's read from Neo4j. This is still true, although in SDN 3.0 it's now specified using the #TypeAlias annotation, as seen in the example above. SDN 3.0 also adds new metadata in the form of Neo4j Labels representing the class hierarchy, where the node's class is prepended with an underscore (_).
For existing data, you can add these labels in Cypher (I just used the new web-based browser utility in Neo4j 2.0.1) like this:
MATCH (n {__type__:'Person'}) SET n:`_Person`:`AbstractNodeEntity`;
Just wash/rinse/repeat for the other @NodeEntity types you have.
There is also a Neo4j label that gets created called SDN_LABEL_STRATEGY, but it isn't applied to any nodes, at least in my data. SDN 3 must have created it automatically, as I didn't do so manually.
Hope this helps...
-Chris
Using SDN over REST is probably not the best idea performance-wise. Just so you know.
Data not created with SDN won't have the necessary meta information.
You will have to iterate over the nodes manually and call
template.postEntityCreation(Node, Class);
on each of them to add the type information, where Class is your SDN-annotated entity class.
Something like:
for (Node n : template.query("match (n) where n.__type__ = 'Hr' return n").to(Node.class)) {
    template.postEntityCreation(n, Hr.class);
}
We have an application where we use Struts, Spring, and Hibernate.
Previously, we were using a MySQL database for running test suites with the TestNG framework.
Now we want to use the in-memory mode of HSQLDB.
We have made all the required code changes to use HSQLDB in in-memory mode.
For example:
Datasource URL = jdbc:hsqldb:mem:TEST_DB
Username = sa
Password =
Driver = org.hsqldb.jdbcDriver
Hibernate dialect = org.hibernate.dialect.HSQLDialect
hibernate.hbm2ddl.auto = create
@Autowired
private DriverManagerDataSource dataSource;

private static Connection dbConnection;
private static IDatabaseConnection dbUnitConnection;
private static IDataSet dataSet;

private MockeryHelper mockeryHelper;

public void startUp() throws Exception {
    mockeryHelper = new MockeryHelper();
    if (dbConnection == null) {
        dbConnection = dataSource.getConnection();
        dbUnitConnection = new DatabaseConnection(dbConnection);
        dbUnitConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());
        dataSet = new XmlDataSet(new FileInputStream("src/test/resources/test-data.xml"));
    }
    DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
}
We have made the required changes to our base class, where we do startup and teardown of the database before and after each test.
We use a test-data.xml file from which we insert test data into the created database using the TestNG framework. Now my questions are:
1. When I run a test case, the database gets created and the data is also inserted correctly. However, my DAOs return empty object lists when I try to retrieve the data from Struts interceptors.
2. We use HSQLDB version 1.8.0.10. The same configuration is used in another project. In that project, most of the test cases run successfully, but for some of them the sort order of the data is incorrect.
We discovered that HSQLDB sorts case-sensitively, and that there is a property, sql.ignore_case, which when set to true makes sorting case-insensitive. But this is not working for us.
Can someone please help with this?
Thanks in advance.
I'm afraid sql.ignore_case is not available in your HSQLDB version; it's not even in the latest stable release (2.2.9), contrary to what the docs say. However, the latest snapshots, as stated in this thread, do include it. I'm not using 1.8 myself, but executing SET IGNORECASE TRUE before any table creation may work for you; it does in 2.2.9. If you really need 1.8, a third option may be to pick the relevant code from the latest source, add it to the 1.8 source, and recompile, though I have no idea how hard or easy that would be.
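For what it's worth, here is a minimal sketch of the SET IGNORECASE TRUE option, assuming the jdbc:hsqldb:mem:TEST_DB URL and org.hsqldb.jdbcDriver driver from the configuration above: run the statement over a plain JDBC connection before Hibernate's hbm2ddl creates the tables, and VARCHAR columns created afterwards will compare and sort case-insensitively.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqldbCaseSetup {

    // Call once before the schema is created (e.g. at the very start of the test base class).
    public static void enableIgnoreCase() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:TEST_DB", "sa", "");
             Statement st = con.createStatement()) {
            st.execute("SET IGNORECASE TRUE");
        }
    }
}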