How to update a GemFire Region based on changes in some other Region - microservices

My retail application has various contexts, such as receive, transfer, etc. Requests to these contexts are handled by RESTful microservices developed using Spring Boot. The persistence layer is Cassandra, which is shared by all services, since we couldn't separate the data store per microservice at the DB level; the services are conceptually tightly coupled.
We want separation at the GemFire end by creating different Regions for different contexts.
For example, a BOX table in Cassandra will be updated by the Region Box-Receive (receive context) and the Region Box-Transfer (transfer context) via a CacheWriter.
Our problem is how to keep the data in these two Regions in sync.
Please also suggest any other approaches for separation at the GemFire end.
GemFire version:
<dependency>
    <groupId>com.gemstone.gemfire</groupId>
    <artifactId>gemfire</artifactId>
    <version>8.2.6</version>
</dependency>

One alternative approach, since you are using Spring Boot, would be to do the following:
First, annotate your @SpringBootApplication class with @EnableGemfireCacheTransactions...
Example:
@SpringBootApplication
@EnableGemfireCacheTransactions
@EnableGemfireRepositories
class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }
    ...
}
The @EnableGemfireCacheTransactions annotation enables Spring Data GemFire's GemfireTransactionManager, which integrates GemFire's CacheTransactionManager with Spring's Transaction Management infrastructure, and that then allows you to do this...
Now, just annotate your @Service application component's transactional service methods with core Spring's @Transactional annotation, like so...
@Service
class YourBoxReceiverTransferService {

    @Transactional
    public <return-type> update(ReceiveContext receiveContext,
            TransferContext transferContext) {
        ...
        receiveContextRepository.save(receiveContext);
        transferContextRepository.save(transferContext);
        ...
    }
}
As you can see here, I also used Spring Data (GemFire's) Repository infrastructure to manage the persistence operations (e.g., CRUD), which will be used appropriately in the transactional scoped-context set up by Spring.
There are two advantages to the Spring approach over using GemFire's public API directly, which unnecessarily couples you to GemFire (a definite code smell, particularly in a Spring context):
You don't have to put a bunch of boilerplate code into your application components, where it does not belong!
Using Spring's Transaction Management infrastructure, it is extremely easy to change your transaction management strategy, for example switching from GemFire's local-only cache transactions to global, JTA-based transactions if the need ever arises (say you now need to send a message over a JMS queue, after the GemFire Regions and the Cassandra BOX table are updated, to notify some downstream process that the receive/transfer context has been updated). With Spring's Transaction Management infrastructure, you do not need to change a single line of application code to change transaction management strategies (local to global, global to local, etc.).
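For illustration only, a minimal sketch of what that swap might look like (the class name and the JtaTransactionManager setup here are my assumptions, not part of the original example); the @Transactional service above stays exactly as it is:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

// Hypothetical sketch: replacing the local GemfireTransactionManager with a
// global, JTA-based strategy. Only the configuration changes; no service code
// is touched.
@Configuration
class GlobalTransactionConfiguration {

    @Bean
    public JtaTransactionManager transactionManager() {
        // Assumes a JTA provider (e.g., the application server's
        // UserTransaction/TransactionManager) is available; wire it in here
        // as appropriate for your environment.
        return new JtaTransactionManager();
    }
}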
Hope this helps!
-John

You can use transactions. Something like this should work:
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
// all puts between begin() and commit() happen atomically
boxReceive.put(key, receiveValue);
...
boxTransfer.put(key, transferValue);
txMgr.commit();
This will work provided you co-locate the Box-Receive and Box-Transfer Regions and use the same key, or use a PartitionResolver to colocate the data.
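For illustration, a minimal sketch of such a PartitionResolver (the class name and key format are my assumptions): returning the same routing object for a given box's entries in both Regions keeps that box's data on the same member, which GemFire transactions require.

import com.gemstone.gemfire.cache.EntryOperation;
import com.gemstone.gemfire.cache.PartitionResolver;

// Hypothetical resolver: routes entries from both Regions by box id so a
// box's Box-Receive and Box-Transfer entries land on the same member.
public class BoxPartitionResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
        // Assume keys look like "<boxId>|<suffix>"; hash on the box id only.
        return opDetails.getKey().split("\\|")[0];
    }

    @Override
    public String getName() {
        return getClass().getName();
    }

    @Override
    public void close() {
        // no resources to release
    }
}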

Related

Changing custom annotations on the fly from Spring Cloud Config Server: is it possible?

Context: I need to provide a way to change parameter values in production at the lowest possible performance cost.
Goal: I want to change annotation values on the fly and apply the change at once to all microservice instances.
Personal background and limitations: I know I can use Spring Cloud Config to change parameters on the fly, as explained in this article, and I know there are challenges and pitfalls involved in changing annotations on the fly, as discussed in this Stack Overflow question.
I know that Spring Cloud Config can be used to set up a centralized configuration applied to all microservice instances during boot/start. I have used it a bit. I am wondering if I can use it to centralize parameters that affect customized annotations on the fly.
An imagined solution is:
... whenever I need somePropertyValue
@Value("${config.somePropertyValue}")
private String somePropertyValue;

@Bean
public String somePropertyValue() {
    return somePropertyValue;
}
A config client in all microservices exposes an endpoint that must be called not only when the application starts but whenever somePropertyValue, managed in the Spring Cloud Config Server's bootstrap.properties, is updated:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class SpringConfigClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringConfigClientApplication.class, args);
    }
}

@RefreshScope
@RestController
class MessageRestController {

    @Value("${server.somePropertyValue:Unable to connect to config server}")
    private String somePropertyValue;

    @RequestMapping("/server/somePropertyValue")
    String getSomePropertyValue() {
        return this.somePropertyValue;
    }
}
And somehow somePropertyValue is maintained in Spring Cloud Config, and if it changes during production it takes effect on demand everywhere somePropertyValue is annotated, in all microservice instances.
I am currently achieving this behaviour by adding a Kafka consumer to every Spring Boot microservice; each one listens to a topic and, when it receives a new message, changes the parameter value on the fly. It seems odd that I have created a Kafka dependency in all the company's microservices. Since I have used Spring Config for a somewhat similar scenario, I am wondering if there is a better alternative using some out-of-the-box Spring approach. Performance is highly important in my case, but a small delay in synchronizing the parameters isn't an issue; two or three seconds to update the parameters in all microservices is acceptable.
There are two ways to do that:
i- There's a refresh endpoint, and you can call it on a service to make it refresh its configuration without restarting itself, which is pretty neat. For example, if MS-A is listening on 8080, do a POST request to this endpoint:
localhost:8080/refresh
NOTE: Spring Actuator actually adds a RefreshEndpoint to the app automatically when we annotate a controller in MS-A with @RefreshScope.
ii- What you can also do is use Spring Cloud Bus and broadcast an event; every service then listens for it and refreshes itself. That's handy if you have dozens of services all using the Config Server and you don't want to go one by one hitting the /refresh endpoint as we did in the first approach. You just broadcast a message to the bus and have all these services automatically pick it up.
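If you want to observe a refresh landing on an instance (whether triggered via /refresh or a Spring Cloud Bus broadcast), a small sketch like the following can help; the class name and logging are my own, but EnvironmentChangeEvent is the event Spring Cloud publishes when property values change:

import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Illustrative listener: reports which property keys changed whenever a
// configuration refresh reaches this instance.
@Component
class ConfigRefreshLogger {

    @EventListener
    public void onEnvironmentChange(EnvironmentChangeEvent event) {
        // event.getKeys() contains the names of the properties whose values changed
        System.out.println("Refreshed properties: " + event.getKeys());
    }
}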
Reference: I learnt both concepts while taking a course on Pluralsight.

Multitenancy support when using Apache Camel and Hibernate (in a Spring application)

I have a Spring application which uses the Hibernate schema strategy and a TenantContext class to store the tenant identifier (same design shown here: https://vladmihalcea.com/hibernate-database-schema-multitenancy/).
Everything works fine when dealing with synchronous HTTP requests handled by Spring.
Besides that, I have some Camel routes which are triggered by cron jobs. They use the JPA component to read from or write to a datasource. The Exchange object knows the tenant identifier. How do I transfer that information to Hibernate?
I was thinking about using a listener or interceptor to get the tenant id from the Exchange and set the TenantContext object at every step in the route. The TenantContext would then be used by the Hibernate CurrentTenantIdentifierResolver class to resolve the tenant.
What should the TenantContext look like? Is ThreadLocal a viable option? What about async threads?
In general, do you have any good solution to support Hibernate multitenancy when using Camel?
In general, it depends on your use case ;)
But we have done something similar, using a ThreadLocal in the CurrentTenantIdentifierResolver.
Then, in the places where we need to set the tenant (e.g., at the start of the job executed by the cron job, in your case), we have an instance of tenantIdentifierResolver, used like so:
tenantIdentifierResolver.withTenantId(tenant, () -> {
    try {
        doWhatever(param1, param2, etc);
    } catch (WhateverException we) {
        throw new MyBusinessException(we);
    }
});
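To answer the "what should the TenantContext look like" part: here is a minimal sketch of the resolver described above. The withTenantId(..) helper is our own convention, not a Hibernate API; Hibernate only requires the two interface methods. Also note that a plain ThreadLocal does not propagate to async threads, so if a route hands work to another thread, the tenant id has to be captured and re-set on that thread explicitly.

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.springframework.stereotype.Component;

// Sketch of a ThreadLocal-based resolver: withTenantId(..) sets the tenant for
// the current thread, runs the work, and always restores the previous value.
@Component
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    public void withTenantId(String tenantId, Runnable work) {
        String previous = CURRENT_TENANT.get();
        CURRENT_TENANT.set(tenantId);
        try {
            work.run();
        } finally {
            if (previous == null) {
                CURRENT_TENANT.remove();
            } else {
                CURRENT_TENANT.set(previous);
            }
        }
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = CURRENT_TENANT.get();
        return tenant != null ? tenant : "default"; // fallback schema; adjust as needed
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}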

GemFire - Spring Boot Configuration

I am working on a project that has a requirement for Pivotal GemFire.
I am unable to find a proper tutorial on how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I need only the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka-
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster), as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
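For a flavor of what that looks like, here is a rough sketch (the linked file is the authoritative version; the placeholder names and defaults below are my assumptions):

import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: GemFire properties declared in Spring config and resolved via
// property placeholders, so they can be externalized per environment.
@Configuration
class GemFireServerProperties {

    @Bean
    Properties gemfireProperties(@Value("${gemfire.log.level:config}") String logLevel,
            @Value("${gemfire.locator.port:10334}") int locatorPort) {

        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name", "SpringBootGemFireServer");
        gemfireProperties.setProperty("log-level", logLevel);
        // start-locator embeds a Locator in this member; the separate "locators"
        // property would instead point this node at an existing cluster to join
        gemfireProperties.setProperty("start-locator", String.format("localhost[%d]", locatorPort));
        gemfireProperties.setProperty("jmx-manager", "true");
        gemfireProperties.setProperty("jmx-manager-start", "true");
        return gemfireProperties;
    }
}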
This example creates a "Factorials" Region with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
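A rough sketch of what such a loader looks like (the linked repository has the actual definition; this simplified version ignores overflow for large keys):

import com.gemstone.gemfire.cache.CacheLoader;
import com.gemstone.gemfire.cache.CacheLoaderException;
import com.gemstone.gemfire.cache.LoaderHelper;

// On a cache miss, compute the factorial of the numeric key and cache it.
public class FactorialCacheLoader implements CacheLoader<Long, Long> {

    @Override
    public Long load(LoaderHelper<Long, Long> helper) throws CacheLoaderException {
        long number = helper.getKey();
        long result = 1;
        for (long n = 2; n <= number; n++) {
            result *= n;
        }
        return result;
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}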
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem, so let me share what worked for me; in this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire.
Read the 15-minute quick start guide.
Create a locator (let's call it locator1), a server (server1), and a region (region1).
Go to the folder where you started the 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's see the Spring boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation.
In the method (wherever it is) whose result you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
//this is my working class
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.GemfireConstants;
import org.springframework.data.gemfire.client.ClientCacheFactoryBean;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;
import org.springframework.data.gemfire.client.PoolFactoryBean;
import org.springframework.data.gemfire.support.ConnectionEndpoint;

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.cache.client.Pool;

@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(
                new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), that is, the class your cached method returns.
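For example, here is a hypothetical sketch tying the steps together (the class and method names are mine; note the region's generic types in the config above would then need to be ClientRegionFactoryBean<Long, Product> to match):

import java.io.Serializable;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// The cached class must be serializable, since it travels to the GemFire server.
class Product implements Serializable {
    private Long id;
    private String name;
    // getters/setters omitted for brevity
}

@Service
class ProductService {

    // The result is stored in "region1" keyed by id; the method body only
    // runs on a cache miss.
    @Cacheable("region1")
    public Product findById(Long id) {
        return loadFromDatabase(id); // placeholder for the real, expensive lookup
    }

    private Product loadFromDatabase(Long id) {
        return new Product();
    }
}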

Dynamically configuring spring state machine

Some queries on Spring State Machine:
Can we have more than one state machine in a single Spring project, where one state machine serves one workflow (say, a CD player workflow) and the other a turnstile?
Can I dynamically load the configuration in my config class, for instance from a big data source holding JSON-formatted data, where we store our states, events, transitions, etc.?
One of my requirements is that I may have a frequently changing workflow or model, which I need to configure in my Spring project. How can I do that effectively with Spring State Machine?
1) You can have multiple machines. @EnableStateMachine has an id property for the bean name. You can expose the config as an @EnableStateMachineFactory. If you want to work outside of JavaConfig, there is a manual builder model for it.
2/3) There is a public configuration API between JavaConfig and the state machine. One user (outside of JavaConfig) of this config model is UML-based modeling, which uses Eclipse's UML XML file to load the config. UML is your best bet, as we don't have other built-in configuration hooks at this moment. Contributions welcome ;)
You can configure the state machine dynamically using a Builder. The Builder uses the same configuration interfaces behind the scenes that the @Configuration model uses via adapter classes.
Example:
StateMachine<String, String> buildMachine1() throws Exception {
    Builder<String, String> builder = StateMachineBuilder.builder();
    builder.configureStates()
        .withStates()
            .initial("S1")
            .end("SF")
            .states(new HashSet<String>(Arrays.asList("S1", "S2", "S3", "S4")));
    return builder.build();
}
Link to official docs: Dynamic Spring State Machine

JSF-SPRING-HIBERNATE architecture- Backing bean related best practice

I am developing a web project, and after much research I have decided to go with a JSF + PrimeFaces, Spring, and Hibernate approach. While designing the architecture of my project, I have settled on the following approach:
Actor --> JSF+PrimeFaces page --> Backing Bean --> Service Bean --> DAO --> Hibernate
The Service Bean and DAO are Spring beans with dependency injection.
My concern now is with respect to the backing bean:
I plan to use multiple backing beans per UI page, depending on the type of page I need to render.
For example, for a new user registration page I have UserProfile.xhtml, which uses UserBackingBean. UserBackingBean has UserServiceBean injected by Spring. UserServiceBean has UserDao injected by Spring.
Now, in UserBackingBean, when the user enters the form data from UserProfile.xhtml, I will have to populate the User.java domain (ORM) object.
a) What is the best practice for this? Should I initialize User.java in the constructor of UserBackingBean? Is this the proper approach? Please suggest if there is any other way.
b) Also, please comment on the architecture I have decided upon for my project. Is it the proper approach?
The general rule I follow is that transaction boundaries are marked in the service beans; therefore I don't like to modify Hibernate POJOs outside of a service, because I don't know if there is a transaction already running. So from the backing bean I would call the service layer, passing in the parameters that the service layer needs to build up the Hibernate POJO and save it, update it, etc.
Another way to do this would be to have your backing bean implement an interface defined by the service layer and then pass the backing bean to the service layer. For example:
public interface UserInfoRequest {
    String getName();
}

@Service
public class SomeSpringService {

    @Transactional(.....)
    public void registerNewUser(UserInfoRequest request) {
    }
}

public class SomeBackingBean implements UserInfoRequest {

    private SomeSpringService someSpringService;

    private String name; // bound to the JSF form

    public String getName() {
        return this.name;
    }

    public void someMethodBoundToJSF() {
        this.someSpringService.registerNewUser(this);
    }
}
Regarding your last question, I am not a fan of JSF. I think JSF is fundamentally flawed because it is a server-side component-based framework, so my argument against JSF is a generic argument against server-side component-based frameworks.
The primary flaw with server-side component-based frameworks is that you don't control what the component will output, which means you are stuck with the look of the component; if you want something that looks different, you have to write your own component or modify an existing one. Web browsers are currently evolving very quickly, adding new features that can really improve the quality of an application UI, but to use those features you have to write HTML, CSS, and JavaScript directly, and server-side components make that harder.
Client-side component architectures are here and are much better than doing components on the server side. Here is my recommended stack.
Client-Side Architecture:
jquery.js - basic library to make all browsers look the same to JavaScript
backbone.js + underscore.js - high-level client-side component-based architecture
handlebars.js - for the client-side templates
Twitter Bootstrap - to get a decent starter set of CSS and widgets
You write code in HTML, CSS, and JavaScript organized as Backbone views that talk to server-side models using AJAX. You have complete control over the client-side user experience, with enough structure to really make nice reusable code.
Server-Side Architecture:
Annotation-driven Spring MVC, Services, and DAOs (@Controller, @Service, @Repository)
Spring component scanning with autowiring by type (@Autowired, @Inject)
AspectJ load-time weaving or compile-time weaving
Hibernate
Tomcat 7
JSP as the view technology for Spring MVC (yes, it's clunky, but you won't be creating too many JSP pages, mostly just ones using the <%@ include %> directive)
Tooling:
- Spring Tool Suite
- JRebel (so that you don't have to start and stop the server); it really works and is worth the money
- Tomcat 7
