I need to populate data from a text file into a relational DB and use the data from the DB to construct a data structure, a Trie. The DB is an in-memory one.
As the conventional approach, I have the data population code in a CommandLineRunner method of the @SpringBootApplication class and the data retrieval code in a @PostConstruct method of a service class. That, however, doesn't work as I expected, because the @PostConstruct method is executed before the CommandLineRunner.
To solve the problem, I moved the data retrieval code into the CommandLineRunner method as well. That approach creates coupling, however, because the data structure is only used inside the service class. Another approach I can think of is lazy initialization of the data structure, but due to the data size (nearly 110k entries), the first use of the data structure would be very slow.
Any better approach?
You could listen for Spring Boot's application-ready event, which is published only after all CommandLineRunner beans have completed:
// you can put this in any wired bean, or even in the @SpringBootApplication class
@EventListener(ApplicationReadyEvent.class)
public void onApplicationReady() {
    // at this point every CommandLineRunner has finished, so the DB is populated
}
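For the original question, a minimal sketch of building the trie inside the service once that event fires; WordRepository, Word and Trie are hypothetical names standing in for the poster's actual entities and data structure:

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Service;

@Service
public class TrieService {

    private final WordRepository wordRepository; // hypothetical Spring Data repository
    private final Trie trie = new Trie();        // hypothetical trie implementation

    public TrieService(WordRepository wordRepository) {
        this.wordRepository = wordRepository;
    }

    // runs after every CommandLineRunner has finished, i.e. after the in-memory DB is populated
    @EventListener(ApplicationReadyEvent.class)
    public void buildTrie() {
        for (Word word : wordRepository.findAll()) {
            trie.insert(word.getText());
        }
    }
}

This keeps the data retrieval inside the service class while still guaranteeing it runs after the population step.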
Related
I have a Spring Boot application, where I need to get data from a table when the app initializes.
I have a repository with the following code:
@Repository
public interface BookRepository extends JpaRepository<Book, Integer> {

    Book findByName(String name);

    @Cacheable("books")
    List<Book> findAll();
}
Then from my service:
@Service
public class ServiceBooks {

    @Autowired
    private BookRepository booksRepo;

    public List<Book> findAll() {
        return booksRepo.findAll();
    }

    public Book findByName(String name) {
        return booksRepo.findByName(name);
    }
}
And then I have a class that implements CommandLineRunner:
@Component
public class AppRunner implements CommandLineRunner {

    private final BookRepository bookRepository;

    public AppRunner(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Override
    public void run(String... args) throws Exception {
        bookRepository.findAll();
    }
}
So here, when the application initializes, it queries the Books table and caches the result. Inside the application, each time I call findAll(), the cache kicks in and I get the data from the cache.
So here are my 2 questions:
About Redis: I am not using Redis and the database caching works without any problem. So where does Redis fit into this approach? I don't understand why everybody uses Redis when the cache works without any other libraries.
When I call findByName(name), is there any chance that the query runs over the data I already have cached? I know I can put a cache on that method too, but that cache will only store results name by name: if a name is searched for the first time, it will still go to the database for that value. I don't want that; I would like Spring to answer the query from the first cache, where I already have all the Books.
The answers to your questions:
Redis avoids the DB call because it stores your response in memory. You can use @Cacheable even on a controller or service method. If you use @Cacheable on a controller method, the request will not even execute the controller method if the result is already cached.
For findByName, Redis (through Spring's cache abstraction) provides a nice way to store the data based on keys; refer to the linked documentation on cache keys. The first time you request a name, the data is fetched from the DB; the next time you request the same name, it is served from the cache based on that key.
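As a sketch of what that looks like (reusing the Book and BookRepository names from the question; the cache name booksByName is made up), a key-based cache on the name lookup:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CachedBookLookup {

    private final BookRepository booksRepo;

    public CachedBookLookup(BookRepository booksRepo) {
        this.booksRepo = booksRepo;
    }

    // each distinct name gets its own cache entry; repeated lookups of the
    // same name are served from the cache without touching the database
    @Cacheable(value = "booksByName", key = "#name")
    public Book findByName(String name) {
        return booksRepo.findByName(name);
    }
}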
Coming back to your question: no, you should not run searches over your cached data. Caches are volatile, so you cannot rely on the data still being there, and searching through the cached collection yourself can hurt performance and forces you to write unneeded extra code.
Spring Boot manages the cache per application instance. When you run multiple instances of a service or app, you will want to manage the cache centrally, because whatever one instance caches in its own in-process store is not accessible to the other instances. That is where Redis comes in: with Redis, every instance of the service connects to the same cache and gets the same result.
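If you do need a shared cache, a minimal sketch of wiring Redis in as the cache manager (assuming Spring Data Redis 2.x via spring-boot-starter-data-redis is on the classpath and a Redis instance is reachable):

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
public class RedisCacheConfig {

    // every instance of the service talks to the same Redis-backed caches,
    // so @Cacheable results are shared across instances
    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        return RedisCacheManager.builder(connectionFactory).build();
    }
}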
I am using JPA with Spring and saving an entity in a test. While writing a test to validate that an entity's relationship with another entity is correctly set up, I have run into a problem I encounter frequently. I have a test method (set to rollback) that:
Creates entity
Saves entity
Flushes
Retrieves entity
Validates entity
The problem is that when I look at the Hibernate logs, I only see a single insert to the database where I'd expect to see an insert and then a select.
I know this is because Hibernate is trying to save some time and knows that it already has the entity with the ID I'm trying to retrieve, but that bypasses an important step: I want to make sure that the entity actually made it to the database and looks like what I expected. What's the best way to deal with this so I can test that the entity is actually in the database?
Note: I assume this involves somehow detaching the entity or telling Hibernate to clear its cache, but I'm not sure how to do that when all I have access to is a JpaRepository.
Some code:
public interface UserRepository extends JpaRepository<User, Long> {
//...
}
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = JpaConfig.class, // JpaConfig just loads our config stuff
        loader = AnnotationConfigContextLoader.class)
@TransactionConfiguration(defaultRollback = true)
public class UserRepositoryTest {

    @Autowired
    private UserRepository userRepository;

    @Test
    @Transactional
    public void testRoles() {
        User user = new User("name", "email@email.com");
        // eventually more here to test entity-to-entity relationship
        User savedUser = userRepository.save(user);
        userRepository.flush();
        savedUser = userRepository.findOne(savedUser.getId());
        Assert.assertNotNull(savedUser);
        // more validation here
    }
}
You basically want to test Hibernate's functionality instead of your own code. My first suggestion: don't do it! It is already tested and validated many times.
If you really want to test it, there are a couple of options:
Execute a query (rather than a get). The query will be executed (you should see it in the log) and the result interpreted. The object you get back will still be the same object you saved, since that is what is in the session.
You can evict the object from the session and then get it again. If you use SessionFactory.getCurrentSession(), you'll get the same session that the repository is using, and with it you can evict the object.
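A minimal sketch of the eviction approach, meant to drop into the UserRepositoryTest above (it assumes an injected EntityManager, from which the Hibernate Session can be unwrapped; javax.persistence and org.hibernate.Session imports are needed):

@PersistenceContext
private EntityManager entityManager;

@Test
@Transactional
public void testRolesWithEvict() {
    User savedUser = userRepository.save(new User("name", "email@email.com"));
    userRepository.flush();

    // detach just this one entity so the next lookup issues a real SELECT
    entityManager.unwrap(Session.class).evict(savedUser);

    User reloaded = userRepository.findOne(savedUser.getId());
    Assert.assertNotNull(reloaded);
}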
You have two strategies:
issue a native SQL query, thereby bypassing any JPA caching.
ensure the persistence context is cleared before reloading.
For (1) you can change your tests to extend the following Spring class which, in addition to automatically beginning and rolling back a transaction around each test, gives you access to a Spring JdbcTemplate you can use to issue native SQL.
http://docs.spring.io/spring-framework/docs/2.5.6/api/org/springframework/test/context/junit4/AbstractTransactionalJUnit4SpringContextTests.html
http://docs.spring.io/spring-framework/docs/2.5.6/api/org/springframework/jdbc/core/simple/SimpleJdbcTemplate.html
For (2) you can clear the persistence context by doing the following (where the EntityManagerFactory is injected into your test):
EntityManagerFactoryUtils.getTransactionalEntityManager(entityManagerFactory).clear();
See the following base test class which I normally use; it demonstrates the above and also allows the database to be populated with known data before each test (via DBUnit).
https://github.com/alanhay/spring-data-jpa-bootstrap/blob/master/src/test/java/uk/co/certait/spring/data/repository/AbstractBaseDatabaseTest.java
(In fact, in the above I am actually creating a new JdbcTemplate by injecting a DataSource. Can't remember why...)
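As an illustration of option (2), a sketch that flushes and clears the persistence context through an injected EntityManager, again dropping into the UserRepositoryTest from the question:

@PersistenceContext
private EntityManager entityManager;

@Test
@Transactional
public void testRolesWithClear() {
    User savedUser = userRepository.save(new User("name", "email@email.com"));

    // push the INSERT to the database and empty the first-level cache
    entityManager.flush();
    entityManager.clear();

    // this now triggers a real SELECT instead of returning the session-cached instance
    User reloaded = userRepository.findOne(savedUser.getId());
    Assert.assertNotNull(reloaded);
}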
I am testing service methods that are called from controllers in a Spring application. It seems to me that my problem is that one essential service method call is made in a class annotated with @Component, and the call happens inside a method annotated with @PostConstruct.
@Component
public final class Helper {
    @Autowired
    private SomeService service; // service name implied by the question
    private Stuff stuff;         // field implied by the question
    @PostConstruct
    public void initialize() {
        stuff = service.getNessesaryStuff();
    }
}
The contents of stuff come from a database and vary per test, which is why the database is populated with the needed data before each test; without that data the other service methods do not work. I have checked that the right data goes into the database exactly as intended, but it does not take effect right away: I need to run the same test a couple of times before it picks up the right data and passes.
Any fix?
When running the application, the Helper class is created by the container itself. When running tests, I need to declare a bean for it to avoid NullPointerExceptions.
I would like to know if the following is considered safe.
A usual Spring service class that accesses a bunch of DAOs / Hibernate entities:
@Transactional
public class MyService {
    ...
    public SomeObject readStuffFromDB(String key) {
        ...
        // return some records from the DB via Hibernate entities, etc.
    }
}
A class in the application that has the service wired in:
public class ServiceHolder {
private MyService myService;
private SomeOtherObject multiThreadedMethod() {
...
//calls myService.readStuffFromDB() and uses the results
//to return something useful
}
multiThreadedMethod will be called from multiple threadpool threads. I would like to know if the multiThreadedMethod is safe in its calls to myService.
It is NOT making any modifications to the DB - only reading.
What happens if two threads call myService.readStuffFromDB() at exactly the same time? Will a concurrent modification exception be thrown from somewhere?
I've been running it with no issues but I'm not 100% sure it will always work.
Yes, all the threads will call the same object at the same time, as long as your service bean is defined as a singleton (which is the default and the proper scope), but you should not rely on instance state in your services. The methods should be written so that they can work independently (you don't need mutual exclusion here). If you call the DB and only perform read operations, nothing bad will happen, because every thread effectively works with its own EntityManager. If you modified the DB at the same time and some DB exception were thrown, you would get a rollback exception, which is perfectly fine.
Conceptually, entityManager.persist() does more or less "look up the EntityManager bound to the current thread's transaction, then persist": the injected EntityManager is a proxy, not the real object, and it delegates to that thread-bound instance. So you are safe :)
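To illustrate the point about state, a sketch of the same MyService (SomeObject is assumed here, purely for illustration, to be a JPA entity with a String key):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    // the injected EntityManager is a shared proxy that delegates to a
    // transaction-bound instance per thread, so reading through it is safe
    @PersistenceContext
    private EntityManager entityManager;

    // NOT safe: instance state on a singleton bean is shared by every calling thread
    private SomeObject lastResult;

    // safe: only parameters and local variables, and each thread has its own stack
    @Transactional(readOnly = true)
    public SomeObject readStuffFromDB(String key) {
        return entityManager.find(SomeObject.class, key);
    }
}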
I'm a beginner with Hibernate 4 & Spring 3.2.
I have read some tutorials and discussions on Stack Overflow, but I haven't found a clear answer to my questions. And I think the best way to understand is to ask and share knowledge!
Here we go!
So each time you create a POJO, a DAO, and a service class with methods annotated @Transactional. That's OK. I'm using a SessionFactory to handle my transactions, and I'm looking for good practices.
1- If you want to call the delete method and the save method of the same service, how do you make them run in a single transaction? When I look at the log, each method is executed in a different transaction.
This is SampleServiceImpl:

@Transactional
public void save(Sample sample){
    sampleDao.save(sample);
}

@Transactional
public void delete(Sample sample){
    sampleDao.delete(sample);
}

// A solution could be this, but it's not very clean... there should be another way, no?
@Transactional
public void action(Sample sample){
    sampleDao.save(sample);
    sampleDao.delete(sample);
}
2- If you want to call the save methods of different service classes, how do you make them run in a single transaction? Each method in each service class is covered by its own @Transactional annotation. Do you create a global service calling all the sub-services in one method annotated @Transactional?
SampleServiceImpl:

@Transactional
public void save(Sample sample){
    sampleDao.save(sample);
}

ParticipantServiceImpl:

@Transactional
public void save(Participant participant){
    participantDao.save(participant);
}

// A solution could be this, but it's not very clean... there should be another way, no?
GlobalServiceImpl:

@Transactional
public void save(Participant participant, Sample sample){
    participantDao.save(participant);
    sampleDao.save(sample);
}
3- And the last question, but not the least. If you want to use several methods from several services in one global transaction: imagine you want to fill up 5 or more tables in one run of a standalone program. How is that possible, given that each service has its own transactional methods, so each time you call such a method there is a separate transaction?
a- I have successfully managed to fill up two tables in a single transaction using the Mkyong tutorial and the cascade property in the mapping, so I see how to make it work for one table directly joined to one or more other tables.
b- But if you have 3 tables, Participant -> Samples -> Derived Products, how do you fill up the three tables in the same transaction?
I don't know if I'm clear, but I would appreciate some help or an example from advanced users.
Thanks a lot for your time.
Your solution is fine. Maybe this will work if you want to use nested transactional methods (note: I saw this solution a couple of days ago and haven't tested it):
<tx:annotation-driven mode="aspectj"/>
<context:load-time-weaver aspectj-weaving="on"/>
@Transactional
public void action(Sample sample){
    save(sample);
    delete(sample);
}
Transaction should propagate.
GlobalServiceImpl
@Transactional
public void save(Participant participant, Sample sample){
    participantDao.save(participant);
    sampleServiceImpl.save(sample);
}
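For reference, the default propagation is REQUIRED, so the nested @Transactional calls join the caller's transaction instead of starting new ones. Spelled out explicitly (a sketch reusing the names above):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class GlobalServiceImpl {

    private ParticipantDao participantDao;        // as in the question
    private SampleServiceImpl sampleServiceImpl;  // as in the question

    // REQUIRED (the default) means both nested saves run inside this single
    // transaction: either both rows are committed or both are rolled back
    @Transactional(propagation = Propagation.REQUIRED)
    public void save(Participant participant, Sample sample) {
        participantDao.save(participant);
        sampleServiceImpl.save(sample);
    }
}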
The approach you are following is the cleaner approach.
Service objects are meant to contain business logic, so they will always manipulate data through data objects.
What we do in practice is create another layer that works with the data objects and with other calls in that same layer; this whole business layer is then invoked through a service layer annotated with @Transactional.
Can you please explain why you think this approach is dirty?