Spring Boot microservices - dependency - spring-boot

There are two microservices deployed with Docker Compose. A dependency between the services is defined in the docker-compose file with the depends_on property. Is it possible to achieve the same effect implicitly, inside the Spring Boot application?
Let's say microservice 1 depends on microservice 2, meaning microservice 1 doesn't boot up before microservice 2 is healthy or registered on the Eureka server.

After doing some research, I found a solution to the problem.
Spring Retry resolves the dependency on the Spring Cloud Config Server. The spring-retry Maven dependency should be added to the pom.xml, and the properties below to the .properties file:
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-interval=2000
spring.cloud.config.retry.max-attempts=10
The following configuration class is used to resolve the dependency on other microservices.
@Configuration
@ConfigurationProperties(prefix = "depends-on")
@Data
@Log
public class DependsOnConfig {

    private List<String> services;
    private Integer periodMs = 2000;
    private Integer maxAttempts = 20;

    @Autowired
    private EurekaClient eurekaClient;

    @Bean
    public void dependentServicesRegisteredToEureka() throws Exception {
        if (services == null || services.isEmpty()) {
            log.info("No dependent services defined.");
            return;
        }
        log.info("Checking if dependent services are registered to eureka.");
        int attempts = 0;
        while (!services.isEmpty()) {
            services.removeIf(this::checkIfServiceIsRegistered);
            TimeUnit.MILLISECONDS.sleep(periodMs);
            if (maxAttempts.intValue() == ++attempts)
                throw new Exception("Max attempts exceeded.");
        }
    }

    private boolean checkIfServiceIsRegistered(String service) {
        try {
            eurekaClient.getNextServerFromEureka(service, false);
            log.info(service + " - registered.");
            return true;
        } catch (Exception e) {
            log.info(service + " - not registered yet.");
            return false;
        }
    }
}
The list of services that the current microservice depends on is defined in the .properties file:
depends-on.services[0]=service-id-1
depends-on.services[1]=service-id-2
The dependentServicesRegisteredToEureka bean is not initialized until all services from the list are registered with Eureka. If needed, the annotation @DependsOn("dependentServicesRegisteredToEureka") can be added to beans or components to prevent them from being initialized before dependentServicesRegisteredToEureka completes.
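For illustration, a minimal sketch of that @DependsOn usage; RemoteServiceClient is a hypothetical bean used only for the example:
@Configuration
public class ClientConfig {

    // This bean is created only after dependentServicesRegisteredToEureka has
    // completed, i.e. after all listed services are visible in Eureka.
    @Bean
    @DependsOn("dependentServicesRegisteredToEureka")
    public RemoteServiceClient remoteServiceClient() {
        return new RemoteServiceClient(); // hypothetical client class
    }
}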

Related

Leverage Spring boot Redis Auto configure logic for RedisConnectionFactory

Spring Boot auto-configures a RedisConnectionFactory if spring-data-redis is on the classpath, and the RedisConnectionFactory is initialized in LettuceConnectionConfiguration if lettuce-core is available on the classpath.
I have only one Redis store at the moment, so I am leveraging Spring Boot auto-configuration.
Now I am adding a second Redis store: one store is used as the default and the other is used when specified with the parameter cacheManager = "secondayCacheManager" in the @Cacheable annotation, so the application should be able to cache and read from both Redis stores.
To configure both Redis stores, we have to configure both the primary and secondary RedisConnectionFactory and cacheManager in custom configuration (because Spring doesn't auto-configure a RedisConnectionFactory if one already exists in any custom configuration).
This custom configuration, however, is missing a lot of the logic that happens while configuring the RedisConnectionFactory in LettuceConnectionConfiguration.
The auto-configuration logic in LettuceConnectionConfiguration is package-private, so it cannot be called directly from custom configuration.
We would like to leverage the auto-configuration logic in LettuceConnectionConfiguration while configuring the custom RedisConnectionFactory for both the primary and secondary Redis caches.
Is there a way to achieve this?
The reason is that we would like to keep the Redis connection configuration as it is done by Spring Boot auto-configuration.
Currently I am using the code below to configure both the primary and secondary RedisConnectionFactory with pool configuration, with some code copy-pasted from the LettuceConnectionConfiguration class.
public static LettuceConnectionFactory buildLettuceConnectionFactory(RedisProperties properties, ClientResources clientResources) {
    RedisStandaloneConfiguration standaloneConfiguration = new RedisStandaloneConfiguration(properties.getHost(), properties.getPort());
    standaloneConfiguration.setDatabase(properties.getDatabase());
    if (properties.getPassword() != null) {
        standaloneConfiguration.setPassword(RedisPassword.of(properties.getPassword()));
    }
    if (properties.getUsername() != null) {
        standaloneConfiguration.setUsername(properties.getUsername());
    }
    LettucePoolingClientConfiguration poolingClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(buildGenericObjectPoolConfig(properties))
            .shutdownTimeout(properties.getLettuce().getShutdownTimeout())
            .clientOptions(createClientOptions(properties))
            .clientResources(clientResources)
            .build();
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(
            standaloneConfiguration, poolingClientConfiguration);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

private static GenericObjectPoolConfig buildGenericObjectPoolConfig(RedisProperties properties) {
    RedisProperties.Pool pool = properties.getLettuce().getPool();
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    if (Objects.nonNull(pool)) {
        poolConfig.setMaxIdle(pool.getMaxIdle());
        poolConfig.setMinIdle(pool.getMinIdle());
        poolConfig.setMaxTotal(pool.getMaxActive());
        poolConfig.setMaxWaitMillis(pool.getMaxWait().toMillis());
    }
    return poolConfig;
}

private static ClientOptions createClientOptions(RedisProperties properties) {
    ClientOptions.Builder builder = initializeClientOptionsBuilder(properties);
    Duration connectTimeout = properties.getConnectTimeout();
    if (connectTimeout != null) {
        builder.socketOptions(SocketOptions.builder().connectTimeout(connectTimeout).build());
    }
    return builder.timeoutOptions(TimeoutOptions.enabled()).build();
}

private static ClientOptions.Builder initializeClientOptionsBuilder(RedisProperties properties) {
    if (properties.getCluster() != null) {
        ClusterClientOptions.Builder builder = ClusterClientOptions.builder();
        Refresh refreshProperties = properties.getLettuce().getCluster().getRefresh();
        Builder refreshBuilder = ClusterTopologyRefreshOptions.builder()
                .dynamicRefreshSources(refreshProperties.isDynamicRefreshSources());
        if (refreshProperties.getPeriod() != null) {
            refreshBuilder.enablePeriodicRefresh(refreshProperties.getPeriod());
        }
        if (refreshProperties.isAdaptive()) {
            refreshBuilder.enableAllAdaptiveRefreshTriggers();
        }
        return builder.topologyRefreshOptions(refreshBuilder.build());
    }
    return ClientOptions.builder();
}
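For reference, a minimal sketch of how the helper above might be wired into primary and secondary beans; the spring.redis.secondary property prefix, the bean names, and the availability of a ClientResources bean are assumptions made for this illustration, not something Spring Boot provides out of the box:
@Configuration
public class RedisCacheConfig {

    // Secondary store properties bound from a custom prefix (assumption:
    // "spring.redis.secondary.*" entries exist in application.properties).
    @Bean
    @ConfigurationProperties(prefix = "spring.redis.secondary")
    public RedisProperties secondaryRedisProperties() {
        return new RedisProperties();
    }

    // Assumes a ClientResources bean is available; otherwise one can be
    // created with DefaultClientResources.create().
    @Bean
    @Primary
    public LettuceConnectionFactory redisConnectionFactory(RedisProperties properties,
                                                           ClientResources clientResources) {
        return buildLettuceConnectionFactory(properties, clientResources);
    }

    @Bean
    public LettuceConnectionFactory secondaryRedisConnectionFactory(
            @Qualifier("secondaryRedisProperties") RedisProperties properties,
            ClientResources clientResources) {
        return buildLettuceConnectionFactory(properties, clientResources);
    }

    @Bean
    @Primary
    public CacheManager cacheManager(LettuceConnectionFactory redisConnectionFactory) {
        return RedisCacheManager.builder(redisConnectionFactory).build();
    }

    // Matches the cacheManager = "secondayCacheManager" attribute used in
    // @Cacheable as described in the question.
    @Bean("secondayCacheManager")
    public CacheManager secondayCacheManager(
            @Qualifier("secondaryRedisConnectionFactory") LettuceConnectionFactory factory) {
        return RedisCacheManager.builder(factory).build();
    }
}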

Problem with connection to Neo4j test container using Spring boot 2 and JUnit5

There is a problem connecting to the Neo4j test container using Spring Boot 2 and JUnit 5 in the test context. The container starts successfully, but the spring.data.neo4j.uri property keeps the wrong default port 7687; I guess this URI must be the same as the one returned by neo4jContainer.getBoltUrl().
Everything works fine in this case:
@Testcontainers
public class ExampleTest {

    @Container
    private static Neo4jContainer neo4jContainer = new Neo4jContainer()
            .withAdminPassword(null); // Disable password

    @Test
    void testSomethingUsingBolt() {
        // Retrieve the Bolt URL from the container
        String boltUrl = neo4jContainer.getBoltUrl();
        try (
            Driver driver = GraphDatabase.driver(boltUrl, AuthTokens.none());
            Session session = driver.session()
        ) {
            long one = session.run("RETURN 1", Collections.emptyMap()).next().get(0).asLong();
            assertThat(one, is(1L));
        } catch (Exception e) {
            fail(e.getMessage());
        }
    }
}
But a SessionFactory is not created for the application via auto-configuration when following these recommendations - https://www.testcontainers.org/modules/databases/neo4j/
When I try to create my own primary SessionFactory bean in the test context, I get a message like "URI cannot be returned before the container is not loaded".
The application itself runs and works perfectly using auto-configuration with Neo4j started in a container; the same cannot be said about the test context.
You cannot rely 100% on Spring Boot's auto-configuration (intended for production) in this case, because it will read application.properties or use the default values for the connection.
To achieve what you want, the key part is to create a custom (Neo4j-OGM) Configuration bean. The @DataNeo4jTest annotation is provided by the spring-boot-test-autoconfigure module.
@Testcontainers
@DataNeo4jTest
public class TestClass {

    // Container field referenced below; declared here following the
    // Testcontainers pattern shown in the snippet above.
    @Container
    private static Neo4jContainer databaseServer = new Neo4jContainer();

    @TestConfiguration
    static class Config {

        @Bean
        public org.neo4j.ogm.config.Configuration configuration() {
            return new Configuration.Builder()
                    .uri(databaseServer.getBoltUrl())
                    .credentials("neo4j", databaseServer.getAdminPassword())
                    .build();
        }
    }

    // your tests
}
For a broader explanation, have a look at this blog post, especially the section "Using with Neo4j-OGM and SDN".

Feign with RibbonClient and Consul discovery without Spring Cloud

I was trying to set up Feign to work with RibbonClient, something like MyService api = Feign.builder().client(RibbonClient.create()).target(MyService.class, "https://myAppProd");, where myAppProd is an application that I can see in Consul. Now, if I use the Spring annotations for the Feign client (@FeignClient("myAppProd"), @RequestMapping), everything works, as the Spring Cloud module takes care of everything.
If I want to use Feign.builder() and @RequestLine, I get the error:
com.netflix.client.ClientException: Load balancer does not have available server for client: myAppProd.
My first thought was that Feign was built to work with Eureka and only Spring Cloud provides the integration with Consul, but I am unsure about this.
So, is there a way to make Feign work with Consul without Spring Cloud?
Thanks in advance.
In my opinion, it's not Feign working with Consul directly; it's Feign -> Ribbon -> Consul.
RibbonClient needs to find myAppProd's server list through its LoadBalancer.
Without a ServerList, you get the error 'does not have available server for client'.
This job is done by the Spring Cloud Consul and Spring Cloud Ribbon projects; of course you can write another adapter, it's just some glue code. IMHO, you can import the Spring dependency into your project but use it in a non-Spring way. Demo code:
Just write a new feign.ribbon.LBClientFactory that generates an LBClient with ConsulServerList (Spring's class).
public class ConsulLBFactory implements LBClientFactory {

    private ConsulClient client;
    private ConsulDiscoveryProperties properties;

    public ConsulLBFactory(ConsulClient client, ConsulDiscoveryProperties consulDiscoveryProperties) {
        this.client = client;
        this.properties = consulDiscoveryProperties;
    }

    @Override
    public LBClient create(String clientName) {
        IClientConfig config =
                ClientFactory.getNamedConfig(clientName, DisableAutoRetriesByDefaultClientConfig.class);
        ConsulServerList consulServerList = new ConsulServerList(this.client, properties);
        consulServerList.initWithNiwsConfig(config);
        ZoneAwareLoadBalancer<ConsulServer> lb = new ZoneAwareLoadBalancer<>(config);
        lb.setServersList(consulServerList.getInitialListOfServers());
        lb.setServerListImpl(consulServerList);
        return LBClient.create(lb, config);
    }
}
And then use it in Feign:
public class Demo {

    public static void main(String[] args) {
        ConsulLBFactory consulLBFactory = new ConsulLBFactory(
                new ConsulClient(),
                new ConsulDiscoveryProperties(new InetUtils(new InetUtilsProperties()))
        );
        RibbonClient ribbonClient = RibbonClient.builder()
                .lbClientFactory(consulLBFactory)
                .build();
        GitHub github = Feign.builder()
                .client(ribbonClient)
                .decoder(new GsonDecoder())
                .target(GitHub.class, "https://api.github.com");
        List<Contributor> contributors = github.contributors("OpenFeign", "feign");
        for (Contributor contributor : contributors) {
            System.out.println(contributor.login + " (" + contributor.contributions + ")");
        }
    }

    interface GitHub {
        @RequestLine("GET /repos/{owner}/{repo}/contributors")
        List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
    }

    public static class Contributor {
        String login;
        int contributions;
    }
}
You can find this demo code here; add api.github.com to your local Consul before running the demo.

spring reactor and boot dependency

Is Boot a prerequisite to run Spring Reactor?
I am trying to use Spring Reactor in a regular web application environment. I can see that the Reactor configuration is created, consumers are registered, and notifications are called, but events are NOT fired. What should I check, and how?
Configuration
@Configuration
@EnableReactor
public class ReactorConfiguration {

    @Bean
    Environment env() {
        return new Environment();
    }

    @Bean
    Reactor createReactor(Environment env) {
        return Reactors.reactor().env(env).dispatcher(Environment.THREAD_POOL)
                .get();
    }
}
Registering consumers:
@PostConstruct
public void onStartUp() {
    logger.debug("Registering Consumers");
    reactor.on(Selectors.T(Envelope.class), processParentRequest());
    reactor.on(Selectors.T(Bundle.class), processOptimizerRequest());
    reactor.on(Selectors.$(Constants.LOWER_ASG), processLowerAsgsRequest());
    reactor.on(Selectors.$(constants.SET_CONSUMPTION_LEVEL),
            processConsumersRequest());
    reactor.on(Selectors.$(constants.SET_GENERATION_LEVEL),
            processProducersRequest());
    reactor.on(Selectors.$(constants.SET_STORAGE_SUPPLY_LEVEL),
            processStoragesRequest());
}

private Consumer<Event<Envelope>> processParentRequest() {
    return envelope -> optimizerUpdatingService
            .processParentRequest(envelope);
}

private Consumer<Event<Bundle>> processOptimizerRequest() {
    return bundle -> eventProcessingDispenser
            .processOptimizerRequest(bundle);
}

private Consumer<Event<Envelope>> processLowerAsgsRequest() {
    return envelope -> lowerAsgsProcessingService
            .processLowerAsgRequest(envelope);
}

private Consumer<Event<Message>> processConsumersRequest() {
    return message -> consumersProcessingService
            .processConsumersRequest(message);
}

private Consumer<Event<Message>> processProducersRequest() {
    return message -> producersProcessingService
            .processProducersRequest(message);
}

private Consumer<Event<Message>> processStoragesRequest() {
    return message -> storageProcessingService
            .processStoragesRequest(message);
}
Spring Boot is not a prerequisite at all; it simply provides some conveniences when using Spring with Reactor.
There could be any number of things happening. Without any details it's hard to give specific suggestions.
The simple answer is NO.
Spring Boot is not a prerequisite, but it makes bootstrapping easy.
All you need is Project Reactor on your classpath to use Reactor.
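One common thing to check in the situation described above, offered as a hedged sketch against the Reactor 1.x API used in the question: consumers registered with Selectors.$(key) only fire when something calls notify on the same Reactor instance with an equal key and a wrapped Event. The dispatching method and the envelope variable below are hypothetical:
// Hypothetical dispatch code; it must use the same Reactor bean on which
// the consumers above were registered.
@Autowired
private Reactor reactor;

public void dispatchLowerAsgRequest(Envelope envelope) {
    // The key must equal the one passed to Selectors.$(Constants.LOWER_ASG);
    // if no selector matches the key, the event is silently dropped.
    reactor.notify(Constants.LOWER_ASG, Event.wrap(envelope));
}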

JMS application on cloudBees with Amazon EC2 broker

I have a Java EE 7 JMS app that I would like to deploy on CloudBees. I have tried a simple app with CloudAMQP and it works, but CloudAMQP offers a hosted RabbitMQ service, and I have to use a JMS client because my code is written against the JMS API.
Recently I created a GlassFish 4 broker on Amazon EC2, deployed a simple JMS app on the CloudBees GlassFish web profile platform, and expected it to work with my Amazon GlassFish broker. But there is an error.
My code is:
@JMSConnectionFactoryDefinition(
        name = "java:comp/jms/__defaultConnectionFactory",
        maxPoolSize = 30,
        minPoolSize = 20,
        properties = {
            "addressList=mq://ec2-54-194-27-99.eu-west-1.compute.amazonaws.com:7676",
            "reconnectEnabled=true"
        }
)
@JMSDestinationDefinition(
        name = "java:comp/jms/webappQueue",
        interfaceName = "javax.jms.Queue",
        destinationName = "PhysicalWebappQueue")
@Named
@RequestScoped
public class ReceiverBean {

    static final Logger logger = Logger.getLogger("ReceiverBean");

    @Inject
    private JMSContext context;

    @Resource(lookup = "java:comp/jms/webappQueue")
    private Queue queue;

    // ...

    public void getMessage() {
        try {
            JMSConsumer receiver = context.createConsumer(queue);
            String text = receiver.receiveBody(String.class, 1000);
            if (text != null) {
                FacesMessage facesMessage =
                        new FacesMessage("Reading message: " + text);
                FacesContext.getCurrentInstance().addMessage(null, facesMessage);
            } else {
                FacesMessage facesMessage =
                        new FacesMessage("No message received after 1 second");
                FacesContext.getCurrentInstance().addMessage(null, facesMessage);
            }
        } catch (JMSRuntimeException t) {
            logger.log(Level.SEVERE,
                    "ReceiverBean.getMessage: Exception: {0}",
                    t.toString());
        }
    }
The error is:
|2013-11-17T22:34:09.146+0000|SEVERE|glassfish 4.0|javax.enterprise.system.core|_ThreadID=45;_ThreadName=AutoDeployer;_TimeMillis=1384727649146;_LevelValue=1000;|
Exception while loading the app : CDI deployment failure: Exception List with 2 exceptions:
Exception 0 :
org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [JMSContext] with qualifiers [@Default] at injection point [[BackedAnnotatedField] @Inject private javaeetutorial.websimplemessage.SenderBean.context]
Do you have any ideas for my case?
Thanks in advance.
The web profile doesn't include JMS, so this API and the related annotation/injection aren't supported.
For this to work you need a full-profile Java EE container plus adequate configuration in the container to plug in your broker. It is probably simpler to use the vanilla JMS API rather than the annotation-driven integration; see the sketch below.
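As an illustration of that last point, here is a minimal sketch of receiving a message with the plain JMS 1.1 API; the JNDI lookup names are placeholders, and how the ConnectionFactory is actually obtained depends on the broker client library you use:
// Plain JMS receive without @Inject/JMSContext; the JNDI names below are
// placeholders for this sketch, not values from the original application.
public String receiveOne() throws NamingException, JMSException {
    InitialContext ctx = new InitialContext();
    ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
    Queue queue = (Queue) ctx.lookup("jms/webappQueue");

    Connection connection = factory.createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage message = (TextMessage) consumer.receive(1000); // may return null
        return message != null ? message.getText() : null;
    } finally {
        connection.close();
    }
}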
