Understanding Redis inside Spring Boot

I have a Spring Boot application, where I need to get data from a table when the app initializes.
I have a repository with the following code:
@Repository
public interface BookRepository extends JpaRepository<Book, Integer> {

    Book findByName(String name);

    @Cacheable("books")
    List<Book> findAll();
}
Then from my service:
@Service
public class ServiceBooks {

    private final BookRepository booksRepo;

    public ServiceBooks(BookRepository booksRepo) {
        this.booksRepo = booksRepo;
    }

    public void findAll() {
        booksRepo.findAll();
    }

    public void findByName(String name) {
        booksRepo.findByName(name);
    }
}
And then I have a class that implements CommandLineRunner:
@Component
public class AppRunner implements CommandLineRunner {

    private final BookRepository bookRepository;

    public AppRunner(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Override
    public void run(String... args) throws Exception {
        bookRepository.findAll();
    }
}
So here, when the application initializes, it queries the Books table and caches the result. Inside the application, each time I call findAll(), the cache is hit and I get the data from my cache.
So here are my 2 questions:
About Redis: I am not using Redis, and database caching is working without any problem. So where does Redis fit into this approach? I don't understand why everybody uses Redis when caching works without any other libraries.
When I call findByName(name), is there any chance that query is executed over the data I already have cached? I know I can put a cache on that method, but then the cache stores data per searched name: if a name is searched for the first time, it will still go to the database for that value. I don't want that; I would like Spring to answer the query from the first cache, where I already have all Books.

The answers to your questions:
Redis avoids the DB call because it stores your response in memory. You can use @Cacheable even on a controller or on a service method; if you use @Cacheable on a controller, the request will not even execute the controller method when the result is already cached.
For findByName, Redis provides a nice way to store the data based on keys.
Refer to the link Cache Keys.
Once you request by name, the data comes from the DB; the next time you request the same name, it comes from the cache, based on the key.
Coming back to your question: no, you should not run a search over your cached data. Caches are highly volatile, so you cannot fully trust the data in them, and searching through cached data yourself can hurt performance and forces you to write unneeded additional code.
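For illustration, a minimal sketch of that per-name caching on the repository (the cache name "booksByName" is an assumption, not from the original question):

@Repository
public interface BookRepository extends JpaRepository<Book, Integer> {

    // each distinct name becomes its own cache entry, keyed by the method argument
    @Cacheable(value = "booksByName", key = "#name")
    Book findByName(String name);

    @Cacheable("books")
    List<Book> findAll();
}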

Spring boot manages the cache per application or per service. When you are using multiple instance of a service or app then certainly you'll want to manage the cache centrally. Because per service cache is not usable in this case because what one app caches in its own spring boot is logically not accessible by another apps.
So here Redis comes into picture. If you use Redis, then each instance of service will connect to the same Redis cache and get the same result.
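For a rough idea of what that looks like, here is a sketch assuming spring-boot-starter-data-redis is on the classpath and a Redis server is reachable by every instance (class name and TTL are examples, not from the question):

@Configuration
@EnableCaching
public class RedisCacheConfig {

    // Backs Spring's cache abstraction with Redis, so all instances that point to
    // the same Redis server share the same "books" cache entries.
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10)); // example TTL
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(defaults)
                .build();
    }
}

With this in place, the existing @Cacheable("books") annotations stay unchanged; only the cache store moves from the in-memory default to Redis.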

Related

Spring Batch/Data JPA application not persisting data to db

I'm having a really weird issue. My code works perfectly fine locally but does not persist some data in our pod (k8s environment).
I work with different datasources in this batch, and everything else runs fine. The job repository is map-based and uses a ResourcelessTransactionManager. I configured it like this:
@Configuration
@EnableBatchProcessing
public class BatchConfigurer extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // intentionally empty: keeps the job repository map-based
    }
}
I also use a different PlatformTransactionManager than Spring Batch's (issue), so I set Spring's allow-bean-definition-overriding property to true in my properties. The PlatformTransactionManager bound in my configurer is the right one; I debugged it.
I have a custom writer for one of my steps. It updates records in multiple tables across multiple DBs (different datasources, in brief):
public class MyWriter implements ItemWriter<MyDTO> {

    @Autowired
    private MyFirstRepo myfirstRepo; // table in first datasource

    @Autowired
    private MySecondRepo mySecondRepo; // table in second datasource

    @Override
    public void write(List<? extends MyDTO> myDtoList) throws Exception {
        // some logic
        mySecondRepo.delete(deletableEntity);
        // some logic
        mySecondRepo.saveAll(updatableEntities);
        // some logic
        myfirstRepo.saveAll(updatableEntities);
    }
}
Since I have multiple datasources, I defined multiple transaction managers, and to give a transaction manager to my step I defined a chained transaction manager that includes those managers:
@Bean
public Step myStep(@Qualifier("chainedTransactionManager") ChainedTransactionManager chainedTransactionManager) {
    return getCommonStepBuilder("myStep")
            .transactionManager(chainedTransactionManager)
            .<MyDTO, MyDTO>chunk(200)
            .reader(myPaginingReader())
            .writer(myWriter())
            .taskExecutor(myTaskExecutor())
            .throttleLimit(15)
            .build();
}
Chained transaction manager config (both of these transaction managers are JpaTransactionManagers):
@Configuration
public class TransactionManagerConfig {

    @Primary
    @Bean(name = "chainedTransactionManager")
    public ChainedTransactionManager transactionManager(
            @Qualifier("firstTransactionManager") PlatformTransactionManager firstTransactionManager,
            @Qualifier("secondTransactionManager") PlatformTransactionManager secondTransactionManager) {
        return new ChainedTransactionManager(firstTransactionManager, secondTransactionManager);
    }
}
So my first two JPA operations in the writer work just fine (the operations made through MySecondRepo), but the last operation does not persist data to the DB. It doesn't throw any errors and the job completes successfully, but it doesn't update my records in the table.
I must stress again that it does update locally; it just doesn't in our app that lives in the k8s environment (dockerized microservice), which makes it so confusing. Any idea why this is happening?
Edit: I created another writer bean for myfirstRepo.saveAll(updatableEntities) (a JdbcBatchItemWriter executing the same logic) and added both writers to a composite writer. Now it works as expected. But I have a lot of concerns, since I don't know what caused it. Any idea?
Edit 2: I came across this thread. I was using a JdbcPagingItemReader; are entities fetched with that component in the managed state? The entities used in mySecondRepo.delete(deletableEntity) and mySecondRepo.saveAll(updatableEntities) are fetched inside the writer using Hibernate, but the myfirstRepo.saveAll(updatableEntities) entities are the ones that came from the reader.
It would all make sense if that is the case, but even then, why did it work fine locally?

The mySecondRepo.saveAll(updatableEntities) entities are fetched inside the writer using Hibernate, but the myfirstRepo.saveAll(updatableEntities) entities are the ones that came from the reader.
Fetching items in the item writer is the cause of your issue. That is incorrect: an item writer is not expected to read items. That's why the items coming from the reader are saved, but not the ones fetched in the writer.
What you should also know is that all writers in a composite writer run in the scope of a single transaction, driven by the transaction manager of the step. So if you are writing data to multiple datasources, you need to make sure that transaction manager coordinates the transaction across all of those datasources. ChainedTransactionManager is deprecated; you can use a JtaTransactionManager for your case.
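A minimal sketch of the step wiring under that approach, assuming a JTA setup (for example the Atomikos Spring Boot starter) already exposes a JtaTransactionManager bean and both datasources are configured as XA resources; getCommonStepBuilder, myPaginingReader and myWriter are the question's own methods, and the task executor / throttle limit are omitted for brevity:

@Bean
public Step myStep(JtaTransactionManager jtaTransactionManager) {
    // a single JTA transaction now spans both datasources for each chunk
    return getCommonStepBuilder("myStep")
            .transactionManager(jtaTransactionManager)
            .<MyDTO, MyDTO>chunk(200)
            .reader(myPaginingReader())
            .writer(myWriter())
            .build();
}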

Spring service with in-memory list

I want to have a service that keeps a list in memory so I don't need to access the database every time. The service is accessed by a controller. Is this a valid approach, or am I missing something? What about concurrent access here (from the controller)? Is this stateful service an anti-pattern?
@Service
public class ServiceCached {

    private List<SomeObject> someObjects;

    @PostConstruct
    public void initOnce() {
        someObjects = /** longer-running loading method **/
    }

    public List<SomeObject> retrieveObjects() {
        return someObjects;
    }
}
Thanks!
I wouldn't call it an anti-pattern, but in my opinion loading the list from the database in a @PostConstruct method is not a good idea, as it slows down the start-up of your application. I'd rather use a lazy-loading mechanism, but that would potentially introduce some concurrent-access issues that need to be handled (see the sketch below).
In your example, concurrent access from the controller should not be a problem: the list is loaded in a @PostConstruct method, and since the controller depends on this service, the service will be fully constructed (and the list already loaded) before it is injected into the controller.
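As an illustration of that lazy-loading variant, a minimal sketch (names like ServiceLazy and loadFromDatabase() are placeholders, not from the question):

@Service
public class ServiceLazy {

    // AtomicReference gives a simple thread-safe "load once" guard without locking reads
    private final AtomicReference<List<SomeObject>> cache = new AtomicReference<>();

    public List<SomeObject> retrieveObjects() {
        List<SomeObject> current = cache.get();
        if (current == null) {
            // under contention the load may run more than once, but only one result is kept
            cache.compareAndSet(null, loadFromDatabase());
            current = cache.get();
        }
        return current;
    }

    private List<SomeObject> loadFromDatabase() {
        return List.of(); // placeholder for the longer-running loading method
    }
}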
Preferably, though, I'd suggest using Spring caching (see: Caching Data with Spring, Documentation, Useful guide).
Usage example:
#Cacheable("books")
public Book getByIsbn(String isbn) {
simulateSlowService();
return new Book(isbn, "Some book");
}
This way you do not need to take care of loading and evicting the objects. Once set up, the caching framework will take care of this for you.

Flush MyBatis Cache externally (outside of mapper)

I'm using MyBatis with the second-level cache activated via <cache/> in the XML mapper files.
Suppose I want to interact with the underlying DB/DataSource decoupled from MyBatis, for instance via a direct jdbcTemplate.
How can I ensure that the MyBatis cache gets flushed appropriately when I insert/update/delete via jdbcTemplate on a table for which MyBatis holds cached query results?
In other words, how can I force MyBatis to flush its cache for a certain cache namespace from outside of the MyBatis mappers?
I'm aware of the @Options(flushCache=true) annotation, but this does not seem to work outside of mapper interfaces.
You can get the cache from the configuration, look it up by namespace, and clear it:
@Resource
SqlSessionFactory sqlSessionFactory;

public void clearCacheByNamespace() {
    Configuration config = sqlSessionFactory.getConfiguration();
    Cache cache = config.getCache("com.persia.dao.UserInfoMapper");
    if (cache != null) {
        cache.clear();
    }
}
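Tying this back to the jdbcTemplate scenario from the question, the out-of-band write and the flush would sit next to each other; a rough sketch, assuming a JdbcTemplate field is available in the same class as clearCacheByNamespace() (table and column names are made up for illustration):

public void updateUserOutsideMyBatis(long id, String name) {
    // write directly via JdbcTemplate, bypassing MyBatis
    jdbcTemplate.update("UPDATE user_info SET name = ? WHERE id = ?", name, id);
    // then drop the cached query results for that mapper namespace so stale entries are not served
    clearCacheByNamespace();
}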
Hi, I have used another approach because we use Spring: autowire the session implementation and call the appropriate method.
public class SomeServerClass {

    @Autowired
    private org.mybatis.spring.SqlSessionTemplate sqlSessionTemplate;

    private void someClearMethod() {
        sqlSessionTemplate.clearCache();
    }
}
If I use the interface org.apache.ibatis.session.SqlSession instead, it refers to the same instance.

Accessing Spring #Transactional service from multiple threads

I would like to know if the following is considered safe.
A usual Spring service class that accesses a bunch of DAOs / Hibernate entities:
@Transactional
public class MyService {
    ...
    public SomeObject readStuffFromDB(String key) {
        ...
        // return some records from the DB via Hibernate entities etc.
    }
}
A class in the application that has the service wired in:
public class ServiceHolder {

    private MyService myService;

    private SomeOtherObject multiThreadedMethod() {
        ...
        // calls myService.readStuffFromDB() and uses the results
        // to return something useful
    }
}
multiThreadedMethod will be called from multiple thread-pool threads. I would like to know whether multiThreadedMethod is safe in its calls to myService.
It is NOT making any modifications to the DB, only reading.
What happens if two threads call myService.readStuffFromDB() at exactly the same time? Will a concurrent modification exception be thrown from somewhere?
I've been running it with no issues, but I'm not 100% sure it will always work.
Yes, multiple threads will call the same object at the same time as long as your service bean is defined as a singleton (which is the default and the proper setup), but you should not keep mutable state in fields of your service. The methods should be written so that they can work independently (you don't need mutual exclusion here). If you call the DB and perform operations, nothing bad happens, because every thread works with its own transaction-bound entity manager. If you modified the DB at the same time and any kind of DB exception was thrown, you would get a rollback exception, which is perfectly fine.
entityManager.persist() does, more or less, entityManager.getEntityManagerAssignedToCurrentThread().persist().
The injected entity manager is a proxy, not the real object. So you are safe :)
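For illustration, a minimal sketch of what that proxying looks like in practice (field injection assumed, not shown in the original post): the EntityManager Spring injects is a thread-safe shared proxy that delegates each call to the EntityManager bound to the current transaction/thread.

@Transactional
public class MyService {

    // Spring injects a shared, thread-safe proxy here; each method call is routed
    // to the EntityManager associated with the calling thread's transaction.
    @PersistenceContext
    private EntityManager entityManager;

    public SomeObject readStuffFromDB(String key) {
        return entityManager.find(SomeObject.class, key);
    }
}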

Spring caching - auto update cached setter

I am really new to Spring caching.
I saw that Spring caching annotations are mostly applied to methods.
My question is: if I have a DAO class that has the following method:
public User getUserById(long id);
and let's say I cache this method,
and have another DAO method (with no annotation) like:
public void updateUser(User u);
Now imagine this scenario:
1) someone invokes getUserById(user1Id); // the cache of size 1 now holds user1
2) someone else invokes updateUser(user1); // let's say a simple name change
3) someone else invokes getUserById(user1Id);
My question:
Assuming no other actions were taken, will the 3rd invocation receive stale data (with the old name)?
If so, how do I solve this simple use case?
Yes, the third invocation will return stale data.
To overcome this, you should trigger a cache eviction after the update operation by annotating your update method with @CacheEvict:
@CacheEvict(value = "users", key = "#user.id")
void updateUser(User user) {
    ...
}
Where value = "users" is the same cache name you had used for getUserById() method, and User class has an id property of type Long (which is used as the users cache key)
You need to remove the stale items from the cache. The Spring Framework helps with several caching-related annotations (you could annotate the update method with @CacheEvict, for example). Spring has good documentation on caching, by the way.
