How to configure the Axon Server client with a remote server host (not localhost) in Spring?

I'm new to Axon Server. I'm trying to use Axon Server with a Spring Boot application.
I installed Axon Server on one of my cloud instances, but when I run the Spring Boot application, it looks for a local Axon Server, and there is no local one in my case.
I couldn't find a way to configure the IP address in the properties file. If you know how to configure the remote host of the Axon Server in a Spring Boot application, please help me to do it.
The error is as follows:
Requesting connection details from localhost:8124
Connecting to AxonServer node [localhost:8124] failed: UNAVAILABLE: io exception
Failed to get connection to AxonServer. Scheduling a reconnect in 2000ms
Thanks.

To configure the location of Axon Server, add the following property to the application.properties file:
axon.axonserver.servers=<hostname/ip address>:<port>
If you are running Axon Server on the default port, you can omit the port number.
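For example, assuming Axon Server runs on a hypothetical host axon.example.com with the default gRPC port 8124:
axon.axonserver.servers=axon.example.com:8124
Since 8124 is the default, axon.axonserver.servers=axon.example.com would work just as well.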

Related

How does a Spring microservice know the Axon Server port

This entire code is available at: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/DiscoveryService
I have the following Spring Boot setup.
A client (Postman) calls the API gateway (which also acts as a load balancer).
The API gateway and Products are Spring Boot applications/microservices which are registered with the Eureka
Discovery Service (also a Spring Boot application).
I run the applications in the following order: Eureka discovery service, Products, API gateway.
In the pom.xml file for Products, I have the following entry:
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>4.4.7</version>
</dependency>
I am using the following Axon Server: AxonServer-4.5.5.jar, and the axonserver.properties entries are as follows:
server.port=8026
axoniq.axonserver.name=My Axon Server
axoniq.axonserver.hostname=localhost
axoniq.axonserver.devmode.enabled=true
The default port for Axon Server is 8024. I have tried running it on 8024, 8025, and 8026 by updating the axonserver.properties file. The Axon Server is running on localhost.
Every time I change the port in axonserver.properties, the Products microservice still finds the Axon Server, even though it is not running on the default port. I do not specify the Axon Server port in the Products microservice.
My question is: even though I am NOT specifying the port in the Products microservice, how does it identify the correct port?
I believe you are misunderstanding the ports here.
Axon Server has 3 ports:
server.port: HTTP port for the Axon Server console. Default is 8024;
port: gRPC port for the Axon Server node. Default is 8124;
internal-port: gRPC port for communication between Axon Server nodes within a cluster (Axon EE only). Default is 8224.
So, a default Axon Framework application will always try to connect to an Axon Server running at 8124, which is the gRPC port. The 8024 port is used for you to access the Axon Server dashboard (and other more specific things, like the REST API endpoints).
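In other words, changing server.port only moves the console, not the gRPC endpoint your application connects to. If you wanted to move the gRPC port as well, you would have to change it on both sides; a sketch, assuming a non-default gRPC port of 8128:
# axonserver.properties (server side)
axoniq.axonserver.port=8128
# application.properties (client side)
axon.axonserver.servers=localhost:8128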
To add a bit more, you can check the ref-guide for the full list of properties and configuration here: https://docs.axoniq.io/reference-guide/axon-server/administration/admin-configuration/configuration#configuration-properties

Spring Cloud Deployer Local is unable to spin up worker remote partitions when server.port property is set in master's application properties file

I am trying to build a batch service in an existing application that has server.port=8080 configured in its application.properties file. When I run the batch process and Spring Batch tries to bring up the remote partitions (separate JVMs), Spring Cloud Deployer Local throws an error saying:
"\r\n\r\n***************************\r\nAPPLICATION FAILED TO START\r\n***************************\r\n\r\nDescription:\r\n\r\nThe Tomcat connector configured to listen on port 8080 failed to start. The port may already be in use or the connector may be misconfigured.\r\n\r\nAction:\r\n\r\nVerify the connector's configuration, identify and stop any process that's listening on port 8080, or configure this application to listen on another port.
Is there a way to make the framework generate random ports for the worker partitions while keeping the server.port property that is already configured in application.properties as is?
Thanks.
A Spring Batch remote partitioning setup requires a message broker for the communication between the manager and the workers, but it does not require any web capabilities. You seem to be deploying all your apps locally (manager and workers) as web applications, hence the port conflict when multiple workers are deployed.
You have at least two options:
Either set a random server port for each app (see how Spring Boot allows you to do that, sketched below),
or, if the number of workers is fixed, set their ports to distinct values statically.
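A minimal sketch of the first option: Spring Boot binds to a random free port when server.port is set to 0, so each worker can override the inherited value, for example in a worker-specific profile (the profile name worker is an assumption):
# application-worker.properties
server.port=0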

Redisson and Spring Boot connecting to Redis hosted on AWS EC2

Hi, I have installed Redis on an AWS EC2 instance. I am able to ping the instance from my local machine.
I have opened all ports on my EC2 instance, so I should be able to connect to and ping the Redis server.
I am able to ping the EC2 server using its public IP.
I want to hit the Redis server from a Spring Boot application (with Redisson) running on my local Windows machine.
In my Spring Boot app I have configured the JSON file with the single-server configuration, as described in the link below:
https://github.com/redisson/redisson/wiki/2.-Configuration
{
    "singleServerConfig": {
        "idleConnectionTimeout": 10000,
        "connectTimeout": 10000,
        "timeout": 3000,
        "retryAttempts": 3,
        "retryInterval": 1500,
        "password": null,
        "subscriptionsPerConnection": 5,
        "clientName": null,
        "address": "redis://<EC2-IP>:6379",
        "subscriptionConnectionMinimumIdleSize": 1,
        "subscriptionConnectionPoolSize": 50,
        "connectionMinimumIdleSize": 24,
        "connectionPoolSize": 64,
        "database": 0,
        "dnsMonitoringInterval": 5000
    },
    "threads": 16,
    "nettyThreads": 32,
    "codec": {
        "class": "org.redisson.codec.FstCodec"
    },
    "transportMode": "NIO"
}
Then I am instantiating the bean in my Spring Boot app:
@Bean(name = "redissonClient", destroyMethod = "shutdown")
public RedissonClient redissonClient() {
    return Redisson.create(config);
}
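For completeness, a configuration class along these lines should work (a sketch: the file name redisson-config.json and its classpath location are assumptions, and Config.fromJSON is used to build the config object referenced above):

import java.io.IOException;

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
public class RedissonConfiguration {

    // Build the RedissonClient from the JSON file shown above,
    // loaded from the classpath.
    @Bean(name = "redissonClient", destroyMethod = "shutdown")
    public RedissonClient redissonClient() throws IOException {
        Config config = Config.fromJSON(
                new ClassPathResource("redisson-config.json").getInputStream());
        return Redisson.create(config);
    }
}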
However, I am getting the exception below:
2019-11-18 14:54:54.432 WARN 81628 --- [isson-netty-1-6] io.netty.channel.DefaultChannelPipeline : An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: CommandDecoder.decode() must consume the inbound data or change its state if it did not decode anything.
at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:379) ~[netty-codec-4.1.27.Final.jar:4.1.27.Final]
at io.netty.handler.codec.ReplayingDecoder.channelInputClosed(ReplayingDecoder.java:329) ~[netty-codec-4.1.27.Final.jar:4.1.27.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:359) ~[netty-codec-4.1.27.Final.jar:4.1.27.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) ~[netty-codec-4.1.27.Final.jar:4.1.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.27.Final.jar:4.1.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.27.Final.jar:4.1.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.27.Final.jar:4.1.27.Final]
The link below suggests that this is fixed: https://github.com/redisson/redisson/issues/1566
and that it occurs when we cannot communicate with the Redis server.
As I have all ports opened on EC2, I don't think communication should be a problem for Redis on port 6379. Still, I am facing the issue. Please help if anyone has any idea or if I am missing something.
Thanks.

How to configure a proxy URL in WebLogic to connect from source to destination via a proxy

I have deployed an application on a WebLogic managed server that internally connects to a cloud network. Since the network where the application is deployed is secured, it has to connect via a proxy, and hence I need to configure the proxy URL settings in WebLogic. I added the settings below to the server start options of the WebLogic managed server; however, the application fails to start.
For example:
Source IP: SourceIp
Destination IP: DestIP (which is configured in the application properties file)
Proxy URL: ProxyIp
Proxy port: 8080
The configuration done in the managed server is as follows:
-Dhttp.proxyHost=ProxyIp -Dhttp.proxyPort=8080 -Dhttps.proxyHost=ProxyIp -Dhttps.proxyPort=8080 -Dhttp.nonProxyHosts=SourceIp
Note: if I deploy the application on a non-secured network, where I do not need to configure any proxy, it works fine and the application starts. I expect that after the proxy configuration in WebLogic my app should be up and running as well. However, I get the error below:
DestIp failed: Connection timed out (Connection timed out)
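For reference, in a standard UNIX WebLogic domain these flags are often set via JAVA_OPTIONS in setDomainEnv.sh rather than in the console's server start arguments; a sketch, with the same placeholder values as above:
# $DOMAIN_HOME/bin/setDomainEnv.sh
JAVA_OPTIONS="${JAVA_OPTIONS} -Dhttp.proxyHost=ProxyIp -Dhttp.proxyPort=8080 -Dhttps.proxyHost=ProxyIp -Dhttps.proxyPort=8080 -Dhttp.nonProxyHosts=SourceIp"
export JAVA_OPTIONS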

Eureka First Discovery & Config Client Retry with Docker Compose

We have three Spring Boot applications:
Eureka Service
Config Server
Simple Web Service making use of Eureka and Config Server
I've set up the services so that we use Eureka First Discovery, i.e. the simple web application finds out about the config server from the Eureka service.
When the services are started separately (either locally or as individual Docker images), everything is OK, i.e. the config server is started after the discovery service is running, and the simple web service is started once the config server is running.
When docker-compose is used to start the services, they obviously start at the same time and essentially race to get up and running. This isn't an issue, as we've added failFast: true and retry values to the simple web service and also have the Docker container restarting, so the simple web service will eventually restart at a time when the discovery service and config server are both running. But this doesn't feel optimal.
The unexpected behaviour we noticed was the following:
The simple web service reattempts a number of times to connect to the discovery service. This is sensible and expected.
At the same time, the simple web service attempts to contact the config server. Because it cannot contact the discovery service, it falls back to retrying a config server on localhost, e.g. the logs show retries going to http://localhost:8888. This wasn't expected.
The simple web service will eventually successfully connect to the discovery service, but the logs show it still tries to establish communication with the config server at http://localhost:8888. Again, this wasn't ideal.
Three questions/observations:
Is it a sensible strategy for the config client to fall back to trying localhost:8888 when it has been configured to use discovery to find the config server?
When the Eureka connection is established, should the retry mechanism not switch to trying the config server endpoint as indicated by Eureka? Essentially, putting higher/longer retry intervals and periods on the config server connection is pointless in this case, as it's never going to connect while it's looking at localhost, so we're better off just failing fast.
Are there any properties that can override this behaviour?
I've created a sample github repo that demonstrates this behaviour:
https://github.com/KramKroc/eurekafirstdiscovery/tree/master
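For reference, the discovery-first setup on the simple web service side boils down to bootstrap properties along these lines (a sketch; the service-id value configserver and the discovery URL are assumptions):
# bootstrap.properties
spring.application.name=simple-web-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=20
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=configserver
eureka.client.serviceUrl.defaultZone=http://discovery:8761/eureka/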
