Hazelcast with Spring Boot - spring-boot

I was looking into Hazelcast and found some good integrations with Spring Boot. However, I want to understand whether that is enough, or whether we need dedicated Hazelcast servers for a production-ready implementation. Can someone point me to a resource where I can look at the setup?

You can run Hazelcast in either an embedded mode -- where the Hazelcast cluster nodes are colocated with the application clients -- or in a client-server mode, where the Hazelcast cluster is separate from the application clients. Both can be used for production. Embedded is generally easier to get up and running quickly. Client-Server might be better if you want to be able to tune and scale the cluster independently of the application clients.
See https://support.hazelcast.com/hc/en-us/articles/115004441586-What-s-the-difference-between-client-server-vs-embedded-topologies-
The only change in application code needed to switch between the two architectures is the line that creates the Hazelcast instance:
Hazelcast.newHazelcastInstance(); // starts an embedded member colocated with the application
versus
HazelcastClient.newHazelcastClient(); // creates a client that connects to a separate Hazelcast cluster
I'd recommend the reference manual as the definitive source on configuration options and how to achieve what you need
https://docs.hazelcast.org/docs/latest/manual/html-single/
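For illustration, a minimal sketch of the two topologies as Spring beans might look like this (Hazelcast 3.x API; the class and bean names are only illustrative, and only one of the two beans should be active at a time):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastTopologyConfig {

    // Embedded topology: this application process is itself a cluster member.
    @Bean
    public HazelcastInstance embeddedInstance() {
        return Hazelcast.newHazelcastInstance();
    }

    // Client-server topology: connect to an externally managed cluster instead.
    // Uncomment this bean (and remove the one above) to switch architectures.
    // @Bean
    // public HazelcastInstance clientInstance() {
    //     return HazelcastClient.newHazelcastClient();
    // }
}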

I would recommend going through the reference manual. But I would also like to share how I have deployed Hazelcast instances on production servers and how I have used them.
Step 1: Create an XML config file.
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config https://hazelcast.com/schema/config/hazelcast-config-3.9.xsd">
    <group>
        <name>"your_service_name"</name>
        <password>"service_password_chosen"</password>
    </group>
    <properties>
        <property name="hazelcast.partition.count">83</property>
    </properties>
    <management-center enabled="true" update-interval="3">--url--</management-center>
    <network>
        <join>
            <multicast enabled="false"/>
            <aws enabled="false"/>
            <tcp-ip enabled="true">
                <member>"internal ip of your instance"</member>
                <member>"internal ip of other instance"</member>
            </tcp-ip>
        </join>
    </network>
    <map name="*.ttl3hr">
        <max-size policy="USED_HEAP_PERCENTAGE">3</max-size>
        <eviction-policy>LFU</eviction-policy>
        <statistics-enabled>true</statistics-enabled>
        <backup-count>0</backup-count>
        <async-backup-count>1</async-backup-count>
        <read-backup-data>true</read-backup-data>
        <time-to-live-seconds>10800</time-to-live-seconds> <!-- 3 hours -->
    </map>
</hazelcast>
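Note that if this file is named hazelcast.xml and placed on the classpath (for example under src/main/resources), Spring Boot's Hazelcast auto-configuration will create the HazelcastInstance from it automatically. If you prefer to load it explicitly, a minimal sketch (the file name is an assumption) would be:

import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastMemberConfig {

    // Loads the XML from Step 1 off the classpath and starts an embedded member with it.
    @Bean
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance(new ClasspathXmlConfig("hazelcast.xml"));
    }
}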
Step 2: Add @EnableCaching and add a bean to a class annotated with @Configuration.
@Bean
public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
    return new com.hazelcast.spring.cache.HazelcastCacheManager(hazelcastInstance);
}
Step 3: Then you can annotate your methods with the @Cacheable annotation.
@Cacheable(cacheNames = "your_cache_name")
public POJO foo(Parameter1 parameter1, Parameter2 parameter2) {
    return pojoRepository.findByParameters(parameter1, parameter2);
}
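Since the Hazelcast CacheManager backs each Spring cache with an IMap of the same name, the TTL and eviction settings from Step 1 only apply when the cache name matches the wildcard map config *.ttl3hr. For example (the cache name here is purely illustrative):
@Cacheable(cacheNames = "pojo.ttl3hr") // matches the *.ttl3hr map config, so entries expire after 3 hours
public POJO foo(Parameter1 parameter1, Parameter2 parameter2) {
    return pojoRepository.findByParameters(parameter1, parameter2);
}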

Related

Binding datasource to application when using springBootApplication in Liberty?

When deploying "regular" web apps to Liberty, I was used to binding the global datasource configured in Liberty's server.xml to the individual application by using a child element within the element, like this:
<application context-root="helloApp" location="..." name="helloApp" type="war">
<application-bnd>
<data-source id="db2appDs" name="jdbc/datasource" binding-name="jdbc/theDB"/>
</application-bnd>
...
</application>
<dataSource id="db2ds" jndiName="jdbc/theDB" type="javax.sql.DataSource">
...
</dataSource>
When configuring my first Spring Boot application to deploy to Liberty, I am trying to use the new <springBootApplication> element for it - but I don't seem to be able to add a binding for the datasource I want to use the same way, as this element doesn't seem to support such a child. (It seems to want only <classloader> as a child).
I've seen people suggest I use an @Resource annotation that includes both the application-local JNDI name and the global JNDI name for the datasource - but that defeats the purpose, since I don't want to know the global name until deploy time.
Is there another way to do this, like we used to before? Or are applications deployed through <springBootApplication> expected to know the global JNDI name of the datasource(s) they want?
Application-defined datasources are not supported for <springBootApplication/> applications. While your application may certainly access a Liberty datasource using its global JNDI name, you should configure the spring.datasource.jndi-name property within your Spring Boot application, as described in section 29.1.3 of the Spring Boot features reference. For your example, try spring.datasource.jndi-name=jdbc/theDB.

Using Hazelcast for both Spring Session and second-level cache (L2C) with Hibernate

So I want to use Hazelcast in my web application for both second-level caching (Hibernate layer) and Spring Session. The setup is very simple: I want to be able to use Near Cache configurations with a server running on the network.
I first ran into a compatibility problem: the recent Hazelcast 4.* versions are not yet supported by Spring Session, so I was happy to use the supported version, 3.12.6.
Below is my hazelcast-client.xml, and I have properly configured it to be used via spring.hazelcast.config=file://... When I start my application, the Hazelcast instance is not created, so I decided I should create the ClientConfig and HazelcastInstance beans myself using the code below:
@Bean
ClientConfig clientConfig() throws IOException {
    try (final InputStream stream = Files.newInputStream(Paths.get("proper-path/hazelcast-client.xml"))) {
        com.hazelcast.client.config.XmlClientConfigBuilder builder = new com.hazelcast.client.config.XmlClientConfigBuilder(stream);
        return builder.build();
    }
}

@Bean
HazelcastInstance hazelcastInstance(final ClientConfig clientConfig) {
    return HazelcastClient.newHazelcastClient(clientConfig);
}
Now I have a problem: I can't add the code below, which I need in order to use it with sessions:
final MapAttributeConfig attributeConfig = new MapAttributeConfig()
.setName(HazelcastIndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
.setExtractor(PrincipalNameExtractor.class.getName());
hazelcastInstance.getConfig().getMapConfig(HazelcastIndexedSessionRepository.DEFAULT_SESSION_MAP_NAME)
.addMapAttributeConfig(attributeConfig)
.addMapIndexConfig(new MapIndexConfig(HazelcastIndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
hazelcast-client.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.12.xsd">
<group>
<name>GroupName</name>
<password>pass</password>
</group>
<instance-name>${application.container.name}</instance-name>
<properties>
<property name="hazelcast.client.shuffle.member.list">true</property>
<property name="hazelcast.client.heartbeat.timeout">60000</property>
<property name="hazelcast.client.heartbeat.interval">5000</property>
<property name="hazelcast.client.event.thread.count">5</property>
<property name="hazelcast.client.event.queue.capacity">1000000</property>
<property name="hazelcast.client.invocation.timeout.seconds">120</property>
</properties>
<client-labels>
<label>web-app</label>
<label>${application.container.name}</label>
</client-labels>
<network>
<cluster-members>
<address>127.0.0.1</address>
</cluster-members>
<outbound-ports>
<ports>5801</ports>
</outbound-ports>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-timeout>60000</connection-timeout>
<connection-attempt-period>3000</connection-attempt-period>
<connection-attempt-limit>2</connection-attempt-limit>
<socket-options>
<tcp-no-delay>false</tcp-no-delay>
<keep-alive>true</keep-alive>
<reuse-address>true</reuse-address>
<linger-seconds>3</linger-seconds>
<buffer-size>128</buffer-size>
</socket-options>
</network>
<executor-pool-size>40</executor-pool-size>
<native-memory enabled="false" allocator-type="POOLED">
<size unit="MEGABYTES" value="128"/>
<min-block-size>1</min-block-size>
<page-size>1</page-size>
<metadata-space-percentage>40.5</metadata-space-percentage>
</native-memory>
<load-balancer type="random"/>
<flake-id-generator name="default">
<prefetch-count>100</prefetch-count>
<prefetch-validity-millis>600000</prefetch-validity-millis>
</flake-id-generator>
<connection-strategy async-start="true" reconnect-mode="ASYNC">
<connection-retry enabled="true">
<initial-backoff-millis>2000</initial-backoff-millis>
<max-backoff-millis>60000</max-backoff-millis>
<multiplier>3</multiplier>
<fail-on-max-backoff>true</fail-on-max-backoff>
<jitter>0.5</jitter>
</connection-retry>
</connection-strategy>
</hazelcast-client>
You need to configure the map on the server side, which means you need some of the Spring classes on the members' classpath. But keep in mind that you need this configuration only when HazelcastIndexedSessionRepository#findByIndexNameAndIndexValue is used.
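For illustration, a member-side equivalent of the map setup above could look like this (a sketch assuming Hazelcast 3.12 and spring-session-hazelcast on the member's classpath):

import com.hazelcast.config.Config;
import com.hazelcast.config.MapAttributeConfig;
import com.hazelcast.config.MapIndexConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.session.hazelcast.HazelcastIndexedSessionRepository;
import org.springframework.session.hazelcast.PrincipalNameExtractor;

public class SessionMember {
    public static void main(String[] args) {
        Config config = new Config();
        // Index the principal-name attribute of the session map so that
        // findByIndexNameAndIndexValue works against this member.
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
                .setName(HazelcastIndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
                .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastIndexedSessionRepository.DEFAULT_SESSION_MAP_NAME)
                .addMapAttributeConfig(attributeConfig)
                .addMapIndexConfig(new MapIndexConfig(
                        HazelcastIndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}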
Also, in client mode, do not forget to deploy the necessary classes and enable user code deployment on the members. Otherwise session updates will fail:
// member
config.getUserCodeDeploymentConfig().setEnabled(true)
.setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.ETERNAL);
// client
clientConfig.getUserCodeDeploymentConfig().setEnabled(true).addClass(Session.class)
.addClass(MapSession.class).addClass(SessionUpdateEntryProcessor.class);
But I recommend including spring-session-hazelcast on the members' classpath rather than using user code deployment. This satisfies both of the needs above.
Finally, if hazelcast-client.xml exists in one of the known paths in your project (e.g. under resources/), the client will be created with this configuration; you do not need to create a ClientConfig bean in that case.

Is there a way to pass application specific properties in Websphere?

We have a WebSphere application server where multiple applications are deployed. All applications use a common property (key) but with different values. For example:
spring.profiles.active=test in one application, spring.profiles.active=UAT in some other application.
Is it possible to pass these different values to the applications during start-up in WebSphere?
If we set these values as JVM options in the Generic JVM Arguments text box, they become the same for all the applications, which we don't want.
We want to set these properties at the application level in WebSphere, so that when the applications are started:
For application 1 - spring.profiles.active=test
For application 2 - spring.profiles.active=UAT
This document indicates that you can set the spring.profiles.active property in a WebApplicationInitializer per web application. Each application could then read its own specifically named property from System properties. Alternatively if using Liberty (the question didn't specify between traditional WebSphere vs Liberty), then you could use MicroProfile Config to define a property with a common name that is defined differently per application via appProperties, for example as shown in this knowledge center article. But you would still need the WebApplicationInitializer to read the value from MicroProfile Config.
An example would be something like the following:
Config config = ConfigProvider.getConfig();
servletContext.setInitParameter(
"spring.profiles.active",
config.getValue("ProfilesActive", String.class));
server.xml:
<server>
<featureManager>
<feature>mpConfig-1.3</feature>
.. other features
</featureManager>
<application location="app1.war">
<appProperties>
<property name="ProfilesActive" value="test"/>
</appProperties>
</application>
<application location="app2.war">
<appProperties>
<property name="ProfilesActive" value="UAT"/>
</appProperties>
</application>
</server>
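Putting the snippet above in context, a minimal WebApplicationInitializer could read the per-application value at startup (a sketch; the class name is illustrative and the ProfilesActive key is the one used in the example above):

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;
import org.springframework.web.WebApplicationInitializer;

public class ProfileInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        // Read the per-application value from MicroProfile Config (backed by appProperties)
        // and hand it to Spring before the application context starts.
        Config config = ConfigProvider.getConfig();
        servletContext.setInitParameter(
                "spring.profiles.active",
                config.getValue("ProfilesActive", String.class));
    }
}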

What's the differences between this Infinispan cache factories for Hibernate second level cache?

I'm trying to figure out the difference between these factories, used in the hibernate.cache.region.factory_class property.
Example:
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.JndiInfinispanRegionFactory" />
<property name="hibernate.cache.infinispan.cachemanager" value="java:jboss/infinispan/container/hibernate" />
There are 4 possible options.
The 2 options that I know something about are:
org.hibernate.cache.infinispan.InfinispanRegionFactory: for standalone applications (not in a cluster, I think).
org.hibernate.cache.infinispan.JndiInfinispanRegionFactory: this one is bound to a JNDI name via the hibernate.cache.infinispan.cachemanager property.
And I don't have any idea about these 2:
org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory: ?
org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory: ?
We have a cluster configured on Wildfly 10.1.0 using domain mode. We want to share the entity cache among the nodes and we are having some doubts about how to do that.
If you're using WildFly, you don't have to worry about setting the region factory class, because WildFly uses Infinispan as the second-level cache provider by default. It's all explained here.
All you have to do is enable hibernate.cache.use_second_level_cache and you're good to go. See the examples in the documentation.
I agree with Galder, +1!
Regarding the purpose of [org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory][1] + [org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory][2]: these classes extend the Hibernate ORM [hibernate-infinispan][3] implementation classes, the purpose being to start the internal WildFly Infinispan cache services used for JPA second-level caching. They also deal with configuration. The links below may become outdated over time, as I think the [3] code might eventually move to the Infinispan project.
A little more of the related code is at [HibernateSecondLevelCache.java][4], which backs up what Galder said. You can see that the WildFly JPA container automatically sets the region factory class for you (if caching is enabled) via [HibernatePersistenceProviderAdaptor.java][5].
I'm not sure if the code links are helpful to you, I thought they might be. :)
As a stackoverflow newbie, I am not allowed to post with more than 2 links, which is why [3] - [5] are invalid links.
Scott
[1] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/SharedInfinispanRegionFactory.java
[2] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/InfinispanRegionFactory.java
[3] github.com/hibernate/hibernate-orm/tree/master/hibernate-infinispan
[4] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernateSecondLevelCache.java
[5] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernatePersistenceProviderAdaptor.java#L91
The configuration involves Infinispan, Hibernate and JGroups.
Using domain mode on WildFly 10, you'll need this configuration in your application EAR:
<property name="hibernate.cache.use_second_level_cache" value="true"/>
Your server group needs to use a profile that has the HA (high availability) resources, such as the full-ha or ha profiles. These profiles contain the default Infinispan and JGroups configuration.
Then, you need the private 'Network Interfaces' configured on ALL hosts that share the cache, since JGroups uses the private interface. Edit domain/configuration/host.xml or use the WildFly admin console to add this configuration (200.0.0.171 needs to be replaced by the server's IP):
<interfaces>
...
<interface name="private">
<inet-address value="${jboss.bind.address.private:200.0.0.171}"/>
</interface>
<!-- .... -->
</interfaces>
For example, suppose you have host controller HC1 (with server-1 and server-2) and HC2 (with server-3 and server-4).
After starting all the servers and host controllers, you'll see in your server.log:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000078: Starting JGroups channel hibernate
....
....
Received new cluster view for channel hibernate: [HC1:server-1, HC1:server-2, HC2:server-3, HC2:server-4]

Reconfigure mongodb connection on each request

I'm using an external module in a webapp (meaning I can't change that code) that uses Spring Data. It uses properties like ${mongoURI} and ${mongoDB} to connect to a database and feed its repositories, which you access through a Service wired into your Controllers.
The problem I'm trying to solve is that we have one database per user, so I need a different connection for each user, but I'm not sure how Spring supports that.
I import this service like this:
<!-- This is in the imported jar -->
<import resource="classpath:/service-mongo-context.xml" />
<!-- This is in my folder structure -->
<context:property-placeholder location="classpath:/mongodb-config.properties" />
And then in my Controller I just:
@Autowired private Service service;
I would like to know if it is possible to rebuild the entire service each time a connection is received, passing a different ${mongoURI} and ${mongoDB}, perhaps by changing the variables behind the @Autowired bean.
