I'm using MyBatis with the second-level cache activated via <cache/> in my XML mapper files.
Suppose I want to interact with the underlying DB/DataSource independently of MyBatis, for instance via a direct JdbcTemplate.
How can I ensure that the MyBatis cache gets flushed appropriately when I insert/update/delete via JdbcTemplate on a table for which MyBatis holds cached query results?
In other words, how can I force MyBatis to flush the cache for a certain cache namespace from outside of MyBatis mappers?
I'm aware of the @Options(flushCache=true) annotation, but this does not seem to work outside of mapper interfaces.
You can get the cache from the configuration, look it up by namespace, and clear it:

@Resource
private SqlSessionFactory sqlSessionFactory;

public void clearCacheByNamespace() {
    Configuration config = sqlSessionFactory.getConfiguration();
    // The cache namespace matches the mapper's fully qualified interface name
    Cache cache = config.getCache("com.persia.dao.UserInfoMapper");
    if (cache != null) {
        cache.clear();
    }
}
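A minimal usage sketch (the JdbcTemplate field, the SQL, and the user_info table are hypothetical, purely to illustrate the flow): flush the namespace cache right after the direct JDBC write, so MyBatis drops its stale results.

@Autowired
private JdbcTemplate jdbcTemplate;

public void deactivateUser(long userId) {
    // Modify the table behind MyBatis' back...
    jdbcTemplate.update("UPDATE user_info SET active = 0 WHERE id = ?", userId);
    // ...then invalidate the mapper's namespace cache
    clearCacheByNamespace();
}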
I used another approach because we use Spring: autowire the SqlSession implementation and call the appropriate method.

public class SomeServerClass {

    @Autowired
    private org.mybatis.spring.SqlSessionTemplate sqlSessionTemplate;

    private void someClearMethod() {
        sqlSessionTemplate.clearCache();
    }
}

If I autowire the interface org.apache.ibatis.session.SqlSession instead, it refers to the same instance.
I'm having a really weird issue. I need to say that my code works perfectly fine locally, but it does not persist some data in our pod (Kubernetes environment).
I have different data sources to work with in this batch job, and everything runs fine otherwise. The job repository is map-based and uses a ResourcelessTransactionManager. I configured it like this:
@Configuration
@EnableBatchProcessing
public class BatchConfigurer extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // left empty on purpose so the map-based job repository is used
    }
}
I also use a different PlatformTransactionManager than Spring Batch's default (see this issue), so I set spring.main.allow-bean-definition-overriding=true in my properties. The transaction manager bound in my configurer is the right one; I verified that in the debugger.
I have a custom writer for one of my steps. It updates records in multiple tables that live in multiple DBs (different data sources, in brief):
public class MyWriter implements ItemWriter<MyDTO> {

    @Autowired
    private MyFirstRepo myfirstRepo; // table in first datasource

    @Autowired
    private MySecondRepo mySecondRepo; // table in second datasource

    @Override
    public void write(List<? extends MyDTO> myDtoList) throws Exception {
        // some logic
        mySecondRepo.delete(deletableEntity);
        // some logic
        mySecondRepo.saveAll(updatableEntities);
        // some logic
        myfirstRepo.saveAll(updatableEntities);
    }
}
Since I have multiple data sources, I defined multiple transaction managers, and to supply a transaction manager to my step I defined a chained transaction manager that includes those managers:
@Bean
public Step myStep(@Qualifier("chainedTransactionManager") ChainedTransactionManager chainedTransactionManager) {
    return getCommonStepBuilder("myStep")
            .transactionManager(chainedTransactionManager)
            .<MyDTO, MyDTO>chunk(200)
            .reader(myPaginingReader())
            .writer(myWriter())
            .taskExecutor(myTaskExecutor())
            .throttleLimit(15)
            .build();
}
The chained transaction manager config (both of these transaction managers are JpaTransactionManagers):
@Configuration
public class TransactionManagerConfig {

    @Primary
    @Bean(name = "chainedTransactionManager")
    public ChainedTransactionManager transactionManager(
            @Qualifier("firstTransactionManager") PlatformTransactionManager firstTransactionManager,
            @Qualifier("secondTransactionManager") PlatformTransactionManager secondTransactionManager) {
        return new ChainedTransactionManager(firstTransactionManager, secondTransactionManager);
    }
}
My first two JPA operations in the writer work just fine (the operations made via MySecondRepo), but the last operation does not persist data to the DB. It doesn't throw any errors and the job completes successfully, but it doesn't update my records in the table.
I must mention a second time that it actually does update locally; it just doesn't update in our app that lives in the Kubernetes environment (a dockerized microservice), which makes it so confusing. Any idea why this is happening?
Edit: I created another writer bean for myfirstRepo.saveAll(updatableEntities) (a JdbcBatchItemWriter executing the same logic) and added both writers to a composite one, as sketched below. Now it works as expected, but I have a lot of concerns, since I don't know what caused this. Any idea?
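For reference, a composite setup along these lines might look like this (a minimal sketch; the delegate bean names are assumptions):

import java.util.Arrays;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.support.CompositeItemWriter;
import org.springframework.context.annotation.Bean;

@Bean
public CompositeItemWriter<MyDTO> compositeWriter(ItemWriter<MyDTO> myWriter,
                                                  JdbcBatchItemWriter<MyDTO> myJdbcWriter) {
    // Delegates are invoked in order, inside the step's transaction
    CompositeItemWriter<MyDTO> composite = new CompositeItemWriter<>();
    composite.setDelegates(Arrays.<ItemWriter<? super MyDTO>>asList(myWriter, myJdbcWriter));
    return composite;
}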
Edit 2: I came across this thread. I was using a JdbcPagingItemReader; are entities fetched with this component in a managed state? The entities passed to mySecondRepo.delete(deletableEntity) and mySecondRepo.saveAll(updatableEntities) are fetched inside the writer using Hibernate, but the myfirstRepo.saveAll(updatableEntities) entities are the ones that came from the reader.
It all makes sense if that is the case, but even then, why did it work fine locally?
The entities passed to mySecondRepo.saveAll(updatableEntities) are fetched inside the writer using Hibernate, but the myfirstRepo.saveAll(updatableEntities) entities are the ones that came from the reader.
Fetching items in the item writer is the cause of your issue. This is incorrect: an item writer is not expected to read items. That's why the items coming from the reader are saved, but not the ones fetched in the writer.
What you should know is that all writers in the composite run in the scope of a single transaction, driven by the transaction manager of the step. So if you are writing data to multiple data sources, you need to make sure the transaction manager coordinates the transaction across all of them. ChainedTransactionManager is deprecated; you can use a JtaTransactionManager for your case, along the lines of the sketch below.
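A minimal sketch of such a JTA setup (using Atomikos purely as an example provider; the bean names and timeout are assumptions, and every DataSource involved must be XA-capable):

import javax.transaction.SystemException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;

@Configuration
public class JtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    @Bean
    public UserTransactionImp atomikosUserTransaction() throws SystemException {
        UserTransactionImp ut = new UserTransactionImp();
        ut.setTransactionTimeout(300); // seconds; an arbitrary example value
        return ut;
    }

    @Bean
    public JtaTransactionManager transactionManager(UserTransactionImp userTransaction,
                                                    UserTransactionManager atomikosTransactionManager) {
        // A single coordinator that commits or rolls back across all enlisted XA datasources
        return new JtaTransactionManager(userTransaction, atomikosTransactionManager);
    }
}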
I am facing a strange issue. I have Hazelcast and Redis in my project. Suddenly, all @Cacheable annotations are putting entries only into the Hazelcast cache, even when the particular cache name is configured via the Redis cache builder:
@Bean
fun redisCacheManagerBuilderCustomizer(): RedisCacheManagerBuilderCustomizer? {
    return RedisCacheManagerBuilderCustomizer { builder: RedisCacheManagerBuilder ->
        builder
            .withCacheConfiguration(
                MY_CACHE,
                RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofDays(3))
            )
    }
}
Using the cache:

@Cacheable(cacheNames = [CacheConfig.MY_CACHE])
@Cacheable(value = [CacheConfig.MY_CACHE])

Neither works; requests are forwarded to Hazelcast only. How do I solve this? By using a different cacheManager?
Typically, only one caching provider is used to cache data, such as in the service or data access tier of your Spring [Boot] application, using Spring's Cache Abstraction and infrastructure components such as the CacheManager and the caching annotations.
When multiple caching providers (e.g. Hazelcast and Redis) are on the classpath of your Spring Boot application, it might be necessary to declare which caching provider (e.g. Redis) you want to [solely] use for caching purposes. For this arrangement, Spring Boot lets you declare your intentions with the spring.cache.type property, as explained in the ref doc, here (see the first Tip). Valid values of this property are defined by the enumerated values of the CacheType enum (e.g. spring.cache.type=redis).
However, if you want to cache data using multiple caching providers at once, then you need to explicitly declare your intentions using this approach as well.
DISCLAIMER: It has been a while since I traced through Spring Boot auto-configuration where caching is concerned, and how it specifically handles multiple caching providers on the application classpath, especially when a specific caching provider has not been declared, such as by explicitly setting the spring.cache.type property. However, and again, it may actually be your intention to use multiple caching providers in a single @Cacheable (or @CachePut) service or data access operation. If so, continue reading...
To do so, you typically use one of two approaches. These approaches are loosely described in the core Spring Framework ref doc, here.
One approach is to declare the cacheNames of the caches from each caching provider along with the CacheManager, like so:
@Service
class CustomerService {

    @Cacheable(cacheNames = { "cacheOne", "cacheTwo" }, cacheManager = "compositeCacheManager")
    public Customer findBy(String name) {
        // ...
    }
}
In this case, "cacheOne" would be the name of the Cache managed by caching provider one (e.g. Redis), and "cacheTwo" would be the name of the Cache managed by caching provider two (e.g. Hazelcast).
DISCLAIMER: You'd have to play around, but it might be possible to simply declare a single Cache name here (e.g. "Customers"), where the caches (or cache data structures in each caching provider) are named the same, and it would still work. I am not certain, but it seems logical this would work as well.
The key (no pun intended) to this example, however, is the declaration of the CacheManager via the cacheManager attribute of the @Cacheable annotation. As you know, the CacheManager is the Spring SPI infrastructure component used to find and manage the Cache objects (caches from the caching providers) used for caching in your Spring managed beans (such as CustomerService).
I deliberately named this CacheManager "compositeCacheManager". Spring's Cache Abstraction provides the CompositeCacheManager implementation, which, as the name suggests, composes multiple CacheManagers for use in a single cache operation.
Therefore, you could do the following in your Spring [Boot] application configuration:
@Configuration
class MyCachingConfiguration {

    @Bean
    RedisCacheManager cacheManager() {
        // ...
    }

    @Bean
    HazelcastCacheManager hazelcastCacheManager() {
        // ...
    }

    @Bean
    CompositeCacheManager compositeCacheManager(RedisCacheManager redis, HazelcastCacheManager hazelcast) {
        return new CompositeCacheManager(redis, hazelcast);
    }
}
NOTE: Notice the RedisCacheManager is the "default" CacheManager declaration and cache provider (implementation) used when no cache provider is explicitly declared in a caching operation, since the bean name is "cacheManager".
Alternatively, and perhaps more easily, you can choose to implement the CacheResolver interface instead. The Javadoc is rather self-explanatory; be aware of the thread-safety concerns.
In this case, you would simply declare a CacheResolver implementation in your configuration, like so:
@Configuration
class MyCachingConfiguration {

    @Bean
    CacheResolver customCacheResolver() {
        // return your custom CacheResolver implementation
    }
}
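For illustration, a minimal sketch of what such a CacheResolver implementation might look like (the two-manager resolution strategy here is an assumption, not a prescribed implementation):

import java.util.ArrayList;
import java.util.Collection;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

class CompositeCacheResolver implements CacheResolver {

    private final CacheManager redisCacheManager;
    private final CacheManager hazelcastCacheManager;

    CompositeCacheResolver(CacheManager redisCacheManager, CacheManager hazelcastCacheManager) {
        this.redisCacheManager = redisCacheManager;
        this.hazelcastCacheManager = hazelcastCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        // Resolve each declared cache name against both providers, so one
        // @Cacheable operation reads from and writes to both caches
        Collection<Cache> caches = new ArrayList<>();
        for (String cacheName : context.getOperation().getCacheNames()) {
            Cache redisCache = redisCacheManager.getCache(cacheName);
            if (redisCache != null) {
                caches.add(redisCache);
            }
            Cache hazelcastCache = hazelcastCacheManager.getCache(cacheName);
            if (hazelcastCache != null) {
                caches.add(hazelcastCache);
            }
        }
        return caches;
    }
}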
Then in your application service components (beans), you would do:
@Service
class CustomerService {

    @Cacheable(cacheNames = "Customers", cacheResolver = "customCacheResolver")
    public Customer findBy(String name) {
        // ...
    }
}
DISCLAIMER: I have not tested either approach presented above, but I feel reasonably confident this should work as expected. It may need some slight modifications, but it should generally be the approach(es) you follow.
If you have any troubles, please post back in the comments and I will try to follow up.
What is the proper way to use DSLContext? Is there a performance difference between having a bean in the autoconfiguration and calling the DSL.using() method directly before execution?
@Autowired
DataSource dataSource;

@PostConstruct
@Bean(name = "ExecutorDslContext")
@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public DSLContext executorDslContext() {
    return DSL.using(dataSource, SQLDialect.MEMSQL);
}
Should the scope be prototype or singleton?
What's the impact of using the above-mentioned bean for execution vs:

Result<Record> result = DSL.using(...).select().from(TABLE).fetch();
The correct way is to use org.springframework.boot:spring-boot-starter-jooq and inject the DSLContext.
https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#features.sql.jooq
Regarding:
calling the DSL.using() method directly before execution; do they vary in performance?
Calling DSL.using() will create a new DefaultConfiguration every time. This isn't a big overhead per se, but you won't profit from reflection caching and some other caches that can be shared among sessions that reuse the same Configuration instance. In general, there's no need to create a new Configuration instance for every single query, so ideally just inject the shared, pre-configured DSLContext as suggested by Simon.
Additional information can be found in the manual's sections:
Performance considerations
Thread safety
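A minimal sketch of the injected approach (the BookService class and the BOOK table are hypothetical, purely for illustration):

import static org.jooq.impl.DSL.table;

import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.springframework.stereotype.Service;

@Service
public class BookService {

    // Shared, pre-configured by spring-boot-starter-jooq
    private final DSLContext dsl;

    public BookService(DSLContext dsl) {
        this.dsl = dsl;
    }

    public Result<Record> findAllBooks() {
        // Reuses the same Configuration, so jOOQ's internal caches are shared
        return dsl.select().from(table("BOOK")).fetch();
    }
}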
I have a Spring Boot application where I need to get data from a table when the app initializes.
I have a repository with the following code:
@Repository
public interface BookRepository extends JpaRepository<Book, Integer> {

    Book findByName(String name);

    @Cacheable("books")
    List<Book> findAll();
}
Then from my service:
@Service
public class ServiceBooks {

    @Autowired
    private BookRepository booksRepo;

    public void findAll() {
        booksRepo.findAll();
    }

    public void findByName(String name) {
        booksRepo.findByName(name);
    }
}
And then I have a class that implements CommandLineRunner:
@Component
public class AppRunner implements CommandLineRunner {

    private final BookRepository bookRepository;

    public AppRunner(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Override
    public void run(String... args) throws Exception {
        bookRepository.findAll();
    }
}
So here, when the application initializes, it queries the Books table and caches the result. Inside the application, each time I call findAll(), the cache kicks in and I get the data from my cache.
So here are my 2 questions:
About Redis: I am not using Redis, and database caching works without any problem. So where does Redis fit into this approach? I don't understand why everybody uses Redis when the cache works without any extra libraries.
When I call findByName(name), is there any chance of executing that query over the data I already have cached? I know I can put a cache on that method, but that cache will only save data each time a particular name is searched; if a name is searched for the first time, it will still go to the database for that value. I don't want that; I would like Spring to answer the query using the data from the first cache, where I have all the Books.
The answers to your questions:
Redis avoids the DB call, as it stores your response in memory. You can use @Cacheable even in a controller or a service. If you use @Cacheable in a controller, your request will not even execute the controller method when the result is already cached.
For findByName, Redis provides a nice way to store the data based on keys.
Refer to the link: Cache Keys.
Once you request by name, it will get the data from the DB; the next time you request the same name, it will come from the cache, based on the key (see the sketch below).
Coming back to your question: no, you should not search over your cached data. Caches are highly volatile, so you cannot trust the data from the cache; searching through cached data might also hurt performance and require unneeded additional code.
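For illustration, key-based caching on the service method might look like this (a minimal sketch; the cache name and SpEL key are assumptions):

@Service
public class ServiceBooks {

    @Autowired
    private BookRepository booksRepo;

    // Each distinct name gets its own cache entry: the first call for a name
    // hits the DB, subsequent calls with the same name are served from the cache
    @Cacheable(cacheNames = "booksByName", key = "#name")
    public Book findByName(String name) {
        return booksRepo.findByName(name);
    }
}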
Spring Boot manages the cache per application or per service. When you run multiple instances of a service or app, you will certainly want to manage the cache centrally, because a per-service cache is not usable in this case: what one app caches in its own Spring Boot instance is logically not accessible to the other apps.
This is where Redis comes into the picture. If you use Redis, each instance of the service connects to the same Redis cache and gets the same result.
I'm using Spring + Spring Data JPA with Hibernate, and I need to perform some large and expensive database operations.
How can I use a StatelessSession to perform these kinds of operations?
A solution is to implement a Spring factory bean that creates this StatelessSession and injects it into your custom repository implementations:
public class MyRepositoryImpl implements MyRepositoryCustom {

    @Autowired
    private StatelessSession statelessSession;

    @Override
    @Transactional
    public void myBatchStatements() {
        // Scroll through the results with a cursor instead of loading them all into memory
        Criteria c = statelessSession.createCriteria(User.class);
        ScrollableResults itemCursor = c.scroll();
        while (itemCursor.next()) {
            myUpdate((User) itemCursor.get(0));
        }
        itemCursor.close();
    }
}
Check out the StatelessSessionFactoryBean and the full Gist here. This uses Spring 3.2.2, Spring Data JPA 1.2.0 and Hibernate 4.1.9.
Thanks to this JIRA issue and the guy who attached the StatelessSessionFactoryBean code. Hope this helps somebody; it worked like a charm for me.
To get even better performance you can enable JDBC batch statements on the SessionFactory / EntityManager by setting the hibernate.jdbc.batch_size property on the SessionFactory configuration (i.e. the LocalEntityManagerFactoryBean), as sketched below.
To get the most benefit from JDBC batch inserts/updates, write as many entities of the same type in a row as possible. Hibernate detects when you write a different entity type and flushes the batch automatically, even when it has not reached the configured batch size.
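A minimal sketch of setting that property (shown here on a LocalContainerEntityManagerFactoryBean; the package name and batch size are assumptions):

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
    emf.setDataSource(dataSource);
    emf.setPackagesToScan("com.example.domain"); // hypothetical entity package

    Properties jpaProperties = new Properties();
    // Group up to 50 inserts/updates of the same entity type into one JDBC batch
    jpaProperties.setProperty("hibernate.jdbc.batch_size", "50");
    emf.setJpaProperties(jpaProperties);
    return emf;
}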
Using the StatelessSession behaves basically the same as using something like Spring's JdbcTemplate. The benefit of the StatelessSession is that the mapping and translation to SQL are handled by Hibernate. With my StatelessSessionFactoryBean you can even mix the Session and the StatelessSession in one transaction. But be careful about modifying an entity loaded by the Session and persisting it with the StatelessSession, because that will result in locking problems.