Enable Master/Replica operations with spring-boot-starter-data-redis-reactive

I'm using spring-boot-starter-data-redis-reactive and the @SpringBootApplication annotation to auto-configure the Redis connection. I have set up a Redis cluster with 1 master and 2 slaves, and I have the following config in the application.properties file:
spring.redis.cluster.nodes=master-node:6379,slave1-node:6379,slave2-node:6379
I want to configure it so that all writes go to the master and all reads go to the slaves (slave preferred).
I found that it is using the Lettuce driver under the hood. To achieve this, I need to add .readFrom(SLAVE_PREFERRED) to the LettuceClientConfiguration. Looking at org.springframework.boot.autoconfigure.data.redis.LettuceConnectionConfiguration, I don't see a way to add this config. Any idea how to achieve this?

You need to declare a LettuceClientConfigurationBuilderCustomizer bean; Spring Boot's auto-configuration applies it to the LettuceClientConfiguration builder it creates:
@Bean
public LettuceClientConfigurationBuilderCustomizer lettuceClientConfigurationBuilderCustomizer() {
    return builder -> builder.readFrom(ReadFrom.REPLICA);
}
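For the "slave preferred" behaviour asked for in the question, Lettuce also offers ReadFrom.REPLICA_PREFERRED (read from a replica when one is available, otherwise fall back to the master), while ReadFrom.REPLICA sends reads to replicas only. A minimal, self-contained sketch of the configuration class (the class name is illustrative):
import io.lettuce.core.ReadFrom;

import org.springframework.boot.autoconfigure.data.redis.LettuceClientConfigurationBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RedisReadFromConfig {

    @Bean
    public LettuceClientConfigurationBuilderCustomizer lettuceClientConfigurationBuilderCustomizer() {
        // Route reads to replicas when available, fall back to the master otherwise.
        return builder -> builder.readFrom(ReadFrom.REPLICA_PREFERRED);
    }
}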

Related

How to disable Rabbit health check via Configuration

I would like to disable the Rabbit health check in my default RabbitMockConfiguration.
We have a Configuration that is imported via @Import. Unfortunately, the Configuration does not prevent the health check from being added to the health endpoint, as that happens as soon as spring-rabbit is on the classpath.
We have a workaround: we add a properties file to every service using that Configuration, which sets management.health.rabbit.enabled to false, but it would be much nicer for us to be able to disable that health check at configuration level.
For the tests I thought about @TestPropertySource(properties = ["management.health.rabbit.enabled=false"]), but I could not find an equivalent to use for a @Configuration, as @PropertySource expects the location of a properties file and does not accept single properties.
Any idea what we can do?
Spring Boot version: 2.2.4
Spring AMQP version: 2.2.3
Spring version: 5.2.3
If you want to change the behaviour of the health check, I'd rather override it so that it states Rabbit is running in mock mode.
To do so, just create a HealthIndicator bean named rabbitHealthIndicator:
@Bean
public HealthIndicator rabbitHealthIndicator() {
    return () -> Health.up().withDetail("version", "mock").build();
}
This has the effect of replacing the production health indicator and exposes the fact that the app is running with a mock.
I guess you should add an ApplicationListener and register the implementation in src/main/resources/META-INF/spring.factories in the module that holds the MockReddisConfiguration. This is described in more detail here.
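A rough sketch of that approach (class name, package and property-source name are illustrative): the listener injects the property before the application context is refreshed, so every service that pulls in the shared module gets the health check disabled by default.
import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.core.env.MapPropertySource;

public class DisableRabbitHealthCheckListener
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        Map<String, Object> props = new HashMap<>();
        props.put("management.health.rabbit.enabled", "false");
        // addLast keeps this as a low-precedence default, so an individual service
        // can still re-enable the health check in its own application.properties.
        event.getEnvironment().getPropertySources()
                .addLast(new MapPropertySource("rabbitMockDefaults", props));
    }
}
and in src/main/resources/META-INF/spring.factories:
org.springframework.context.ApplicationListener=\
  com.example.config.DisableRabbitHealthCheckListener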

How to test two Infinispan instances with JUnit?

I want to create a test to make sure two instances of the Infinispan cache are communicating correctly.
In the first step I create two application contexts using two different application-test.properties files.
In the logs I can see that two instances of the cache are created.
In debug I can also see two different instances of CacheManager / DefaultCacheManager.
Everything looks fine - but when I add some values to one instance, the second instance of the Cache (Infinispan) is not notified about that.
Any advice?
Currently you can use NoSQLUnit (https://github.com/lordofthejars/nosql-unit#infinispan-engine), which provides support for testing and managing the lifecycle of Infinispan.
In the next weeks we are going to integrate this into Arquillian APE as well.
If you have any questions, don't hesitate to ping me; my Twitter is @alexsotob
If you have problems starting two Infinispan caches on a local machine, try to use the real host name or IP instead of 'localhost' or '127.0.0.1'.
If you have problems with multiple JUnit tests and Infinispan caches, try to stop the transport after each test, like:
@After
public void tearDown() {
    applicationContext.getBean(CacheManager.class).getTransport().stop();
}
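For a self-contained check that two embedded cache managers actually see each other's writes, a sketch along these lines may help (assuming infinispan-core and JUnit 4 on the test classpath, and that the default JGroups stack can form a cluster on your machine, which is exactly where the host-name advice above comes in; cache and cluster names are illustrative):
import static org.junit.Assert.assertEquals;

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.junit.Test;

public class TwoCacheManagersReplicationTest {

    @Test
    public void writesOnOneManagerAreVisibleOnTheOther() throws Exception {
        DefaultCacheManager cm1 = clusteredManager();
        DefaultCacheManager cm2 = clusteredManager();
        try {
            Cache<String, String> c1 = cm1.getCache("test-cache");
            Cache<String, String> c2 = cm2.getCache("test-cache");

            // Wait (bounded) until the two managers have formed a cluster view.
            long deadline = System.currentTimeMillis() + 10_000;
            while ((cm1.getMembers() == null || cm1.getMembers().size() < 2)
                    && System.currentTimeMillis() < deadline) {
                Thread.sleep(100);
            }

            c1.put("key", "value");
            assertEquals("value", c2.get("key"));
        } finally {
            cm2.stop();
            cm1.stop();
        }
    }

    private DefaultCacheManager clusteredManager() {
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().clusterName("junit-cluster");
        DefaultCacheManager manager = new DefaultCacheManager(global.build());
        manager.defineConfiguration("test-cache",
                new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());
        return manager;
    }
}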
Create a file infinispan.xml
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
    xmlns="urn:infinispan:config:5.1">
    <namedCache name="xml-configured-cache">
        <eviction strategy="LIRS" maxEntries="10" />
    </namedCache>
</infinispan>
Initialize the cache with the file configuration:
Cache<Object, Object> c = new DefaultCacheManager("infinispan.xml").getCache("xml-configured-cache");
That's all!

GemFire - Spring Boot Configuration

I am working on a project that has a requirement of Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I need only the server-side configuration, as the client side is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster), as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
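The linked lines boil down to defining the GemFire properties in Spring config and handing them to the peer cache; roughly, the pattern looks like this (a sketch only: bean names, ports and placeholder defaults are illustrative, not the exact code from the repository):
import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.CacheFactoryBean;

@Configuration
public class GemFireServerConfig {

    @Bean
    Properties gemfireProperties(@Value("${gemfire.log.level:config}") String logLevel,
                                 @Value("${gemfire.locator.port:10334}") int locatorPort,
                                 @Value("${gemfire.manager.port:1099}") int managerPort) {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name", "SpringBootGemFireServer");
        gemfireProperties.setProperty("log-level", logLevel);
        // Embedded Locator (start-locator), not to be confused with the "locators" property.
        gemfireProperties.setProperty("start-locator", String.format("localhost[%d]", locatorPort));
        // Make this peer a GemFire Manager so Gfsh can connect over JMX.
        gemfireProperties.setProperty("jmx-manager", "true");
        gemfireProperties.setProperty("jmx-manager-port", String.valueOf(managerPort));
        gemfireProperties.setProperty("jmx-manager-start", "true");
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache(Properties gemfireProperties) {
        CacheFactoryBean gemfireCache = new CacheFactoryBean();
        gemfireCache.setProperties(gemfireProperties);
        return gemfireCache;
    }
}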
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run gets on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem; let me share what worked for me. In this case I'm using Spring Boot and Pivotal GemFire as a cache client.
1. Install and run GemFire.
2. Read the 15-minute quick start guide.
3. Create a locator (let's call it locator1), a server (server1) and a region (region1).
4. Go to the folder where you started gfsh ("Gee Fish"), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's see the Spring Boot side:
5. In your Application class with the main method, add the @EnableGemfireCaching annotation.
6. In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// This is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), i.e. the class your cached method is returning.
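To make steps 5 and 6 concrete, a minimal sketch of the application side could look like the following (class names and the lookup method are illustrative; the @EnableGemfireCaching import assumes a recent Spring Data GemFire version):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.gemfire.cache.config.EnableGemfireCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableGemfireCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class LookupService {

    // The first call for a given key hits this method; subsequent calls are served from region1.
    @Cacheable("region1")
    public String expensiveLookup(Long key) {
        return "value-" + key; // stand-in for an expensive computation or remote call
    }
}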

Programmatically develop a JGroups Channel for Infinispan in a Cluster

I'm working with Infinispan 8.1.0.Final and WildFly 10 in a cluster setup.
Each server is started running
C:\wildfly-10\bin\standalone.bat --server-config=standalone-ha.xml -b 10.09.139.215 -u 230.0.0.4 -Djboss.node.name=MyNode
I want to use Infinispan in distributed mode in order to have a distributed cache. But due to mandatory requirements I need to build a JGroups channel that reads some properties dynamically from a file.
This channel is necessary for me to build a cluster group based on TYPE and NAME (for example Type1-MyCluster). Each server that wants to join a cluster has to use the related channel.
Searching the net, I have found some code like the one below:
public class JGroupsChannelServiceActivator implements ServiceActivator {

    private static final String CHANNEL_NAME = "clusterWatchdog";

    private final Logger log = Logger.getLogger(JGroupsChannelServiceActivator.class.getName());

    private String stackName;
    private ServiceName channelServiceName;

    @Override
    public void activate(ServiceActivatorContext context) {
        stackName = "udp";
        try {
            channelServiceName = ChannelService.getServiceName(CHANNEL_NAME);
            createChannel(context.getServiceTarget());
        } catch (IllegalStateException e) {
            log.log(Level.INFO, "channel seems to already exist, skipping creation and binding.");
        }
    }

    void createChannel(ServiceTarget target) {
        InjectedValue<ChannelFactory> channelFactory = new InjectedValue<>();
        ServiceName serviceName = ChannelFactoryService.getServiceName(stackName);
        ChannelService channelService = new ChannelService(CHANNEL_NAME, channelFactory);
        target.addService(channelServiceName, channelService)
              .addDependency(serviceName, ChannelFactory.class, channelFactory).install();
    }
}
I have created the META-INF/services/....JGroupsChannelServiceActivator file.
When I deploy my war into the server, the operation fails with this error:
"{\"WFLYCTL0180: Services with missing/unavailable dependencies\" => [\"jboss.jgroups.channel.clusterWatchdog is missing [jboss.jgroups.stack.udp]\"]}"
What am I doing wrong?
How can I build a channel the way I need?
How can I tell Infinispan to use that channel for distributed caching?
The proposal you found is implementation dependent and might cause a lot of problems during the upgrade. I wouldn't recommend it.
Let me check if I understand your problem correctly - you need to be able to create a JGroups channel manually because you use some custom properties for it.
If that is the case - you could obtain a JGroups channel as suggested here. But then you obtain a JChannel instance which is already connected (so this might be too late for your case).
Unfortunately, since WildFly manages the JChannel (it is required for clustering sessions, EJBs, etc.), the only way to get full control of the JChannel creation process is to use Infinispan embedded (library) mode. This would require adding infinispan-embedded to your WAR dependencies. After that you can initialize it similarly to this test.
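To make that last option concrete for the requirement in the question, here is a rough sketch of embedded (library) mode where the cluster name is built from the TYPE and NAME read from a properties file and a custom JGroups stack file is passed to the transport (file names, property keys and cache names are illustrative):
import java.io.InputStream;
import java.util.Properties;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ClusterCacheFactory {

    public static DefaultCacheManager create() throws Exception {
        // Read TYPE and NAME dynamically from a file, as required.
        Properties props = new Properties();
        try (InputStream in = ClusterCacheFactory.class.getResourceAsStream("/cluster.properties")) {
            props.load(in);
        }
        String clusterName = props.getProperty("type") + "-" + props.getProperty("name"); // e.g. "Type1-MyCluster"

        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        global.transport()
              .defaultTransport()
              .clusterName(clusterName)
              .addProperty("configurationFile", "custom-jgroups.xml"); // your own JGroups stack

        DefaultCacheManager manager = new DefaultCacheManager(global.build());
        manager.defineConfiguration("distributed-cache",
                new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC).build());
        return manager;
    }
}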

Spring WebSocket multiple broker relay addresses?

I have a cluster of RabbitMQ servers. I want to load balance my StompBrokerRelay requests from my Spring Boot application (with WebSockets) to the nodes across the cluster, but I don't see where I can set a list of addresses with the MessageBrokerRegistry. Right now the configuration looks like this:
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config
        .enableStompBrokerRelay("/exchange")
        .setAutoStartup(true)
        .setVirtualHost(BROKER_VHOST)
        .setRelayHost(BROKER_HOST)
        .setRelayPort(BROKER_PORT)
        .setClientLogin(BROKER_CLIENT_LOGIN)
        .setClientPasscode(BROKER_CLIENT_PASSWORD)
        .setSystemLogin(BROKER_SYSTEM_LOGIN)
        .setSystemPasscode(BROKER_SYSTEM_PASSWORD);
}
Is there some way to .setRelayHosts(), or do I need to look for another framework or, heaven forbid, try to finagle this stuff into working with multiple hosts?
It's not possible right now. Spring websocket is sort of half-baked.
Check https://docs.spring.io/spring-framework/docs/current/reference/html/web.html#websocket-stomp-handle-broker-relay-configure. If you wish to supply multiple addresses, on each attempt to connect, you can configure a supplier of addresses instead of a fixed host and port. A code snippet is also included at the end of that section.
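A rough sketch of that approach, adapted to the configuration in the question (host names, ports and the round-robin logic are illustrative, and the exact Reactor Netty method differs by version: remoteAddress(...) in newer versions, addressSupplier(...) in older ones):
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.messaging.simp.stomp.StompReactorNettyCodec;
import org.springframework.messaging.tcp.reactor.ReactorNettyTcpClient;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    private final List<InetSocketAddress> brokers = Arrays.asList(
            new InetSocketAddress("rabbit1.example.com", 61613),
            new InetSocketAddress("rabbit2.example.com", 61613));

    private final AtomicInteger next = new AtomicInteger();

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("/exchange")
              .setTcpClient(createTcpClient()); // replaces setRelayHost/setRelayPort
    }

    // A new address is picked on every (re)connect attempt, spreading connections
    // across the cluster and skipping a dead node on reconnect.
    private ReactorNettyTcpClient<byte[]> createTcpClient() {
        return new ReactorNettyTcpClient<>(
                client -> client.remoteAddress(
                        () -> brokers.get(next.getAndIncrement() % brokers.size())),
                new StompReactorNettyCodec());
    }
}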
