Multitenancy support when using Apache Camel and Hibernate (in a Spring application)

I have a Spring application which uses the Hibernate schema strategy and a TenantContext class to store the tenant identifier (same design shown here: https://vladmihalcea.com/hibernate-database-schema-multitenancy/).
Everything works fine when dealing with synchronous HTTP requests handled by Spring.
Besides that, I have some Camel routes which are triggered by cron jobs. They use the JPA component to read from or write to a datasource. The Exchange object knows the tenant identifier. How do I transfer that information to Hibernate?
I was thinking about using a listener or interceptor to get the tenant id from the Exchange and set the TenantContext object at every step in the route. The TenantContext will then be used by the Hibernate CurrentTenantIdentifierResolver class to resolve the tenant.
What should the TenantContext look like? Is a ThreadLocal a viable option? What about async threads?
In general, do you have any good solution to support Hibernate multitenancy when using Camel?

In general, it depends on your use case ;)
But we have done something similar, using a ThreadLocal in the CurrentTenantIdentifierResolver.
Then in the places where we need to set the tenant (e.g. at the start of the job being executed by the cronjob, in your case), we have an instance of tenantIdentifierResolver used like so:
tenantIdentifierResolver.withTenantId(tenant, () -> {
    try {
        doWhatever(param1, param2, etc);
    } catch (WhateverException we) {
        throw new MyBusinessException(we);
    }
});
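For reference, here is a minimal sketch of what such a ThreadLocal-backed resolver could look like. Only the CurrentTenantIdentifierResolver interface and its two methods come from Hibernate; the withTenantId helper and the fallback tenant name are assumptions, not the exact code we use.
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    // Run a unit of work with the given tenant bound to the current thread.
    public void withTenantId(String tenantId, Runnable work) {
        CURRENT_TENANT.set(tenantId);
        try {
            work.run();
        } finally {
            CURRENT_TENANT.remove(); // avoid leaking the tenant to pooled/reused threads
        }
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = CURRENT_TENANT.get();
        return tenant != null ? tenant : "default"; // fallback schema, adjust as needed
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
Because the tenant is bound to the current thread, anything that hops to another thread (async Camel routes, thread pools) needs to propagate it explicitly, e.g. by calling withTenantId again on the worker thread.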

Related

Is it possible to set Redis key expiration time when using Spring Integration RedisMessageStore

I'd like to auto-expire Redis keys when using the org.springframework.integration.redis.store.RedisMessageStore class in Spring Integration. I see some methods related to an "expiry callback", but I could not find any documentation or examples yet. Any advice will be much appreciated.
@Bean
public MessageStore redisMessageStore(LettuceConnectionFactory redisConnectionFactory) {
    RedisMessageStore store = new RedisMessageStore(redisConnectionFactory, "TEST_");
    return store;
}
Spring Boot 2.6.3, Spring Integration, and spring-boot-starter-data-redis.
The RedisMessageStore does not have any expiration features. And technically it must not: the point of this kind of store is to keep data until it is used. Look at it as persistent storage. The RedisMetadataStore is based on the RedisProperties object, so it also cannot use an expiration feature for a particular entry.
You are probably thinking of a MessageGroupStoreReaper, which does call MessageGroupStore.expireMessageGroups(long timeout), but that is already an artificial, cross-store implementation provided by the framework. The logic relies on group.getTimestamp() and group.getLastModified(). So it is still not the Redis auto-expiration feature.
The MessageGroupStoreReaper is a process that needs to be run in your application: nothing Redis-specific.
See more info in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#reaper
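As a hedged sketch of how that could be wired up, assuming the RedisMessageStore bean from the question (the timeout and schedule values are arbitrary, and RedisMessageStore is also a MessageGroupStore, which is what the reaper needs):
@Configuration
@EnableScheduling
public class MessageStoreReaperConfig {

    private final MessageGroupStoreReaper reaper;

    public MessageStoreReaperConfig(MessageStore redisMessageStore) {
        // RedisMessageStore implements MessageGroupStore as well
        this.reaper = new MessageGroupStoreReaper((MessageGroupStore) redisMessageStore);
        this.reaper.setTimeout(60_000); // treat groups untouched for 60 seconds as expired
    }

    @Bean
    public MessageGroupStoreReaper messageGroupStoreReaper() {
        return this.reaper;
    }

    // The reaper is just a Runnable; scheduling it is up to the application, nothing Redis-specific.
    @Scheduled(fixedDelay = 10_000)
    public void reap() {
        this.reaper.run();
    }
}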

Spring Boot: Retrieve config via rest call upon application startup

I'd like to make a REST call once on application startup to retrieve some configuration parameters.
For example, we need to retrieve an entity called FleetConfiguration from another server. I'd like to do a GET once and keep the data in memory for the rest of the runtime.
What's the best way of doing this in Spring? Using @Bean or @Configuration annotations?
I found this for example: https://stackoverflow.com/a/44923402/494659
I might as well use POJOs and handle the lifecycle myself, but I am sure there's a way to do it in Spring without re-inventing the wheel.
Thanks in advance.
The following method will run once the application starts, call the remote server and return a FleetConfiguration object which will be available throughout your app. The FleetConfiguration object will be a singleton and won't change.
@Bean
@EventListener(ApplicationReadyEvent.class)
public FleetConfiguration getFleetConfiguration() {
    RestTemplate rest = new RestTemplate();
    String url = "http://remoteserver/fleetConfiguration";
    return rest.getForObject(url, FleetConfiguration.class);
}
The method should be declared in a @Configuration class or @Service class.
Ideally the call should test for the response code from the remote server and act accordingly.
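For example, here is a hedged variant of the bean method above that checks the response status and fails fast at startup (the URL and FleetConfiguration type are the same placeholders as in the answer):
@Bean
public FleetConfiguration getFleetConfiguration() {
    RestTemplate rest = new RestTemplate();
    ResponseEntity<FleetConfiguration> response =
            rest.getForEntity("http://remoteserver/fleetConfiguration", FleetConfiguration.class);
    if (!response.getStatusCode().is2xxSuccessful() || response.getBody() == null) {
        // Fail fast rather than let the app run with a missing configuration.
        throw new IllegalStateException("Could not load FleetConfiguration: " + response.getStatusCode());
    }
    return response.getBody();
}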
A better approach is to use Spring Cloud Config to externalize every application's configuration; it can be updated at runtime for any config change, so there is no downtime either.

How to update a GemFire Region based on changes in some other Region

My retail application has various contexts like receive, transfer etc. The requests to these contexts are handled by RESTful microservices developed using Spring Boot. The persistence layer is Cassandra, shared by all services, as we couldn't do vertical scaling for the microservices at the DB level because the services are tightly coupled conceptually.
We want vertical scaling at the GemFire end by creating different Regions for different contexts.
For example, a BOX table in Cassandra will be updated by Region Box-Receive (receive context) and Region Box-Transfer (transfer context) via CacheWriter.
Our problem is: how do we keep the data in these two Regions in sync?
Please also suggest any other approach for separation at the GemFire end.
GemFire version:
<dependency>
    <groupId>com.gemstone.gemfire</groupId>
    <artifactId>gemfire</artifactId>
    <version>8.2.6</version>
</dependency>
One alternative approach, since you are using Spring Boot would be to do the following:
First, annotate your @SpringBootApplication class with @EnableGemfireCacheTransactions...
Example:
@SpringBootApplication
@EnableGemfireCacheTransactions
@EnableGemfireRepositories
class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }
    ...
}
The @EnableGemfireCacheTransactions annotation enables Spring Data GemFire's GemfireTransactionManager, which integrates GemFire's CacheTransactionManager with Spring's transaction management infrastructure, which then allows you to do this...
Now, just annotate your @Service application component's transactional service methods with core Spring's @Transactional annotation, like so...
@Service
class YourBoxReceiverTransferService {

    @Transactional
    public <return-type> update(ReceiveContext receiveContext, TransferContext transferContext) {
        ...
        receiveContextRepository.save(receiveContext);
        transferContextRepository.save(transferContext);
        ...
    }
}
As you can see here, I also used Spring Data (GemFire's) Repository infrastructure to manage the persistence operations (e.g. CRUD), which will be used appropriately inside the transactional scope set up by Spring.
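For completeness, a hedged sketch of what those repositories might look like; the entity and id types are assumptions based on the service method above, not code from the original answer:
// Hypothetical Spring Data repositories for the contexts saved in the service above.
public interface ReceiveContextRepository extends CrudRepository<ReceiveContext, String> {
}

public interface TransferContextRepository extends CrudRepository<TransferContext, String> {
}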
Two advantages of the Spring approach over using GemFire's public API (which unnecessarily couples you to GemFire, a definite code smell, particularly in a Spring context) are:
1. You don't have to put a bunch of boilerplate code into your application components, where it does not belong!
2. Using Spring's transaction management infrastructure, it is extremely easy to change your transaction management strategy, such as switching from GemFire's local-only cache transactions to global, JTA-based transactions if the need ever arises (for example: now I need to send a message over a JMS queue, after the GemFire Regions and the Cassandra BOX table are updated, to notify some downstream process that the receive/transfer context has been updated). With Spring's transaction management infrastructure, you do not need to change a single line of application code to change transaction management strategies (local to global, global to local, etc.).
Hope this helps!
-John
You can use transactions. Something like this should work:
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
boxReceive.put(key, receiveValue);   // key and values are placeholders
...
boxTransfer.put(key, transferValue);
txMgr.commit();
This will work provided you co-locate the box-receive and box-transfer regions and use the same key, or use a PartitionResolver to co-locate the data, as sketched below.
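If you go the PartitionResolver route, here is a hedged sketch against the GemFire 8.x API (com.gemstone.gemfire.cache); the "boxId|context" key format is an assumption for illustration only:
public class BoxPartitionResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
        // Assume keys look like "<boxId>|<context>"; routing on the box id means
        // Box-Receive and Box-Transfer entries for the same box land on the same member.
        return opDetails.getKey().split("\\|")[0];
    }

    @Override
    public String getName() {
        return "BoxPartitionResolver";
    }

    @Override
    public void close() {
        // no resources to release
    }
}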

Mule connector config needs dynamic attributes

I have developed a new connector. This connector needs to be configured with two parameters, let's say:
default_trip_timeout_milis
default_trip_threshold
The challenge is that I want to read ${myValue_a} and ${myValue_b} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before connectors are initialized.
FlowVars aren't an option, since they are initialized with the flows, and this happens earlier in the Mule app life cycle.
My idea is to create a Spring bean implementing Initialisable, so it will be called before connectors are initialized, and there, using any Java-based lib (Spring RestTemplate?), call the API, get the values, and store them somewhere (context? objectStore?), so the connector can access them.
Does that make sense? Any other ideas?
Thanks!
Hmm, you could make a class that creates the properties at startup and, in this class, obtain the API properties via an HTTP request. Example below:
public class PropertyInit implements InitializingBean, FactoryBean<Properties> {

    private Properties props = new Properties();

    @Override
    public void afterPropertiesSet() throws Exception {
        // Called before the connectors are initialized: fetch the values over HTTP here
        // (e.g. with Spring's RestTemplate) and put them into props.
    }

    @Override
    public Properties getObject() throws Exception {
        return props;
    }

    @Override
    public Class<?> getObjectType() {
        return Properties.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
Hope you like this idea. I used this approach in a previous project.
First, a strong warning about doing this: if you go down this path, you risk breaking your application in very strange ways, because any other components that depend on this one will see dynamic values at startup and may break. You should consider whether there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you would create a class which extends CircuitBreaker and encapsulates an instance of CircuitBreaker that is recreated whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and never refresh; keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break.
It's worth taking a look at Spring Cloud Config, which lets you run a properties server so that all your applications can hot-reload those properties at runtime when they change. I'm not sure whether you can take that path in Mule, if Spring Cloud is supported yet, but it's a nice thing to know exists.

Spring: If I fork a new thread, will it be enforced in a transaction by Spring?

We are using Spring's declarative transaction attribute for database integrity. Some of our code calls a web service which does a bunch of stuff in SharePoint. The problem is that when the web service takes a long time, users get a deadlock from Spring, which is holding up the backend.
If I start a new thread inside a function which has the declarative Spring transaction attribute, will that thread be ignored by Spring?
[Transaction(TransactionPropagation.Required, ReadOnly = false)]
public void UploadPDFManual(/*parameters*/)
{
    // Do some database-related things
    if (revisionPDFBytes != null)
    {
        // My SharePoint call which calls the web service;
        // I start a new thread from the ASP.NET worker thread pool.
        Task.Factory.StartNew(() => DocumentRepositoryUtil.CreateSharepointDocument(docInfo));
    }
}
Any other options I should go for?
You don't need to do it in a transaction. A transaction makes the database save an object properly; that's it. Everything else should be done after the transaction commits. In Java, you can do that with Spring's transaction synchronization or JMS. Take a look at the accepted answer over here.
More useful info specific to .NET (see section 17.8).
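For the Java/Spring side mentioned above, here is a minimal sketch of transaction synchronization, assuming a Spring version where TransactionSynchronization has default methods; the repository call mirrors the hypothetical one from the question:
// Inside the transactional method, register a callback that runs only after the commit.
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // The database work is committed and the connection released, so the slow
        // SharePoint/web-service call can no longer hold up the transaction.
        DocumentRepositoryUtil.createSharepointDocument(docInfo);
    }
});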
