Syncing Spring Boot Contexts Startup

Our application exposes two ports, 8080 and 8081. This creates two contexts in the same application, which causes problems: for example, port 8081 can already respond to requests while port 8080 is not yet ready. Is there a smart way to sync those ports so that, whichever of 8080 or 8081 responds, I can rely on the application having started successfully? In some situations I also want a ping request to be answered with OK only once my cache has loaded correctly.

We solved this by listening for ApplicationReadyEvent: each endpoint that should not yet report success responds with a 4xx status code until the event has been received, and only then with 200.
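A minimal sketch of that approach, assuming a plain Spring MVC controller (the class name, endpoint path, and the specific 4xx code are illustrative, not from the original post): a listener flips a flag when ApplicationReadyEvent fires, and the ping endpoint refuses to report success until then.

import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

    // Flipped to true once the whole application context (and everything done
    // during startup, e.g. cache loading) is ready.
    private final AtomicBoolean ready = new AtomicBoolean(false);

    @EventListener(ApplicationReadyEvent.class)
    public void onApplicationReady() {
        ready.set(true);
    }

    @GetMapping("/ping")
    public ResponseEntity<String> ping() {
        // The post only says "a 4xx status code"; 425 (Too Early) is one choice.
        return ready.get()
                ? ResponseEntity.ok("OK")
                : ResponseEntity.status(425).body("not ready");
    }
}

Callers on either port then treat anything other than 200 as "not started yet", which gives both contexts the same definition of readiness.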

Related

Eureka client sometimes registers with the wrong host name

I have a question about Eureka like this question, but the solution to that issue was of no help at all. See the similar issue here:
Another similar issue
In my case, I'm trying to build a graceful release module based on Eureka: it pulls a service down in Eureka before actually shutting it down, to ensure there is no load-balancing exception when the specified application is closed.
I have tested setting eureka.instance.preferIpAddress to both false and true.
With eureka.instance.preferIpAddress=false, Ribbon does not recognize the applications registered under a machine name and throws a "no load balancer" exception.
With eureka.instance.preferIpAddress=true, Ribbon does recognize those applications and everything works; that is, Ribbon can get the real IP address of those applications.
My problem is that I need to figure out not only why, in both situations, the instanceId of the applications in Eureka still shows the machine name, but also why the same application can end up with a different instanceId even after a simple restart!
Here is what I observed:
The server IP is 192.168.24.201, and the hosts file maps its name to localhost.
Restarting the same application several times, I can see that the instanceId of this application sometimes changes between localhost:applicationName:8005 and 192.168.24.201:applicationName:8005.
Both instanceIds have the same IP address, so neither of them leads to a load-balancing exception; it only makes manually controlling the Eureka server more difficult, which is still acceptable.
The biggest problem is that sometimes the instanceId of a different server is also localhost:applicationName:8005, and that leads to conflicts! Restarting the application sometimes resolves this, but not always. So if I'm using Eureka as a cluster of several servers, I cannot ensure my application is registered correctly in Eureka!
Here is the eureka client setting of application8005:
eureka:
  instance:
    lease-renewal-interval-in-seconds: ${my-config.eureka.instance.heartbeatInterval:5}
    lease-expiration-duration-in-seconds: ${my-config.eureka.instance.deadInterval:15}
    preferIpAddress: true
  client:
    service-url:
      defaultZone: http://192.168.24.201:8008/eureka/
    registry-fetch-interval-seconds: ${my-config.eureka.client.fetchRegistryInterval:20}
Here is the eureka server setting of EurekaServer:
eureka:
  server:
    eviction-interval-timer-in-ms: ${my-config.eureka.server.refreshInterval:5000}
    enable-self-preservation: false
    responseCacheUpdateIntervalMs: 5000
I don't know why the applications' instanceId sometimes starts with localhost instead of the IP address.
The problem was solved by setting prefer-ip-address together with an explicit instance-id:
eureka:
  instance:
    prefer-ip-address: true
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}:${spring.cloud.nacos.config.group}
I have a rule that each server runs only one instance of the same app, so each instance gets its own unique id this way.
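For the "pull the service down before shutting it down" part, a stable instanceId is exactly what you need, because Eureka's REST interface addresses instances by it. A rough sketch of that call with the JDK HTTP client (the documented Eureka operation is a PUT to /apps/{app}/{instanceId}/status?value=OUT_OF_SERVICE, but treat the exact base path and the example ids below as assumptions to verify against your setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EurekaPullDown {
    public static void main(String[] args) throws Exception {
        // Example values only; the instanceId must match what the Eureka
        // dashboard shows, which is why a predictable, IP-based id matters.
        String eurekaBase = "http://192.168.24.201:8008/eureka";
        String app = "APPLICATIONNAME";
        String instanceId = "192.168.24.201:applicationName:8005";

        // Mark the instance OUT_OF_SERVICE so Ribbon stops routing to it
        // before the process is actually stopped.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(eurekaBase + "/apps/" + app + "/" + instanceId
                        + "/status?value=OUT_OF_SERVICE"))
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status override returned HTTP " + response.statusCode());
    }
}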

How to handle random heroku ports

It appears that when I start my Heroku server remotely, it chooses a random value for the port. The server handles this fine by using the process.env.PORT value, but how does a hard-coded client know which port to use to connect to the Heroku server? There seems to be no way to force the Heroku port value; apparently this is for clean server restarts, container management, and preventing port collisions. That's cool, but how can I use a server whose port changes from time to time?
While your app needs to listen on a random port, from an outside point of view you will always open the connection on port 80 or 443.
Heroku has a router through which every connection goes first.
Whenever a request goes to appname.herokuapp.com or a custom domain you have configured, it is sent to that router.
The router knows about all your running dynos (and the port each app listens on) and will pick one at random to forward the connection to.
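The server-side half of that contract, sketched in Java with the JDK's built-in HttpServer instead of the question's Node code (the handler and fallback port are illustrative): bind to whatever port the platform publishes in the PORT environment variable and never hard-code it; clients keep using the public URL on 80/443 and the router does the rest.

import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class App {
    public static void main(String[] args) throws Exception {
        // Heroku injects the port to listen on via the PORT environment
        // variable; 8080 is only a fallback for running locally.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}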

How to allow Socket IO requests on Amazon EC2 instance?

I have a web app running on an Amazon EC2 instance on port 8080; while starting, the web app starts a Socket.IO server listening on port 9092.
In the client file connecting to the Socket.IO server I have this:
io.connect('http://<IPADDRESS>:9092');
Unfortunately, this request is getting blocked as shown:
I thought the problem was the inbound rules of my EC2 instance, so I allowed traffic for that purpose as shown:
But the requests are still blocked...
NOTE: When my app is hosted locally, everything works fine.
So why is Amazon behaving this way, and what am I supposed to do to get around this issue?
UPDATE:
netstat -a -n | grep 9092 outputs this on the instance:
Also have a look at what Firefox shows me about the request attempt timings:
It turns out that I was binding my server to the localhost address, as if it were only going to be accessed from localhost.
Thanks to robertklep's comment, I bound the server to the EC2 instance address and it's working now.
The easiest way to establish a socket connection with your server from outside of EC2 is to listen on all interfaces for incoming traffic:
server.listen(3000, '0.0.0.0');
This is only recommended for testing and development environments. Do not use it in production.
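The underlying difference, sketched in Java terms rather than the post's Node code (the port is just an example): a socket bound to 127.0.0.1 only accepts connections from the instance itself, while one bound to 0.0.0.0 accepts connections on every interface, so external clients can reach it once the security group allows the port.

import java.net.InetAddress;
import java.net.ServerSocket;

public class BindExample {
    public static void main(String[] args) throws Exception {
        // Reachable only from the instance itself (what the asker had by accident).
        ServerSocket localOnly =
                new ServerSocket(9092, 50, InetAddress.getByName("127.0.0.1"));
        System.out.println("Bound to " + localOnly.getInetAddress());
        localOnly.close();

        // Reachable on every network interface, including the one behind the
        // instance's public IP, provided the EC2 security group opens the port.
        ServerSocket allInterfaces =
                new ServerSocket(9092, 50, InetAddress.getByName("0.0.0.0"));
        System.out.println("Bound to " + allInterfaces.getInetAddress());
        allInterfaces.close();
    }
}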

Exposing Web API in Service Fabric

I'm having trouble accessing my Web API that has been deployed to my Service Fabric cluster. I've followed the new Stateless Web API template and added the HTTP endpoint seen below. I also made the modifications to the OwinCommunication as depicted here.
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" Type="Input" Protocol="http" Port="8080" />
  </Endpoints>
</Resources>
When creating my cluster I added a custom endpoint of 80 to my Node Type.
The client connection endpoint to my cluster is: mycluster.eastus.cloudapp.azure.com:19000
Also, I have a load balancing rule that maps port 80 to backend port 8080 over TCP. The associated probe is on port 80, and I have tried both protocols (HTTP and TCP), but neither seems to work.
Locally, I can access an endpoint on my Web API by calling http://localhost:8080/health/ping and get back "pong". When I attempt to access it in the Service Fabric cluster, a file is downloaded instead. The URL I use to access it in the cloud is http://mycluster.eastus.cloudapp.azure.com:19000/health/ping. I've tried other ports (19080, 80, 8080), but they either hang or give me a 400.
My questions regarding exposing a Web Api in a service fabric cluster are:
Should the probe be http or tcp?
Should the probe backend port be set to the web api port (e.g. 8080)?
Is my URL/port correct for accessing my api?
Why is a binary file being downloaded? This happens in all browsers, and the same content is displayed in Postman and Fiddler.
Found the answer to my question after a fair amount of trial and error. If my Web API endpoint is set to port 8080, then I need the following:
Probe for port 8080 on TCP
A load balancing rule with port 80 and backend port 8080
Access the Web Api over the following URL: http://mycluster.eastus.cloudapp.azure.com/health/ping
As for #4, this is still a mystery.
http://mycluster.eastus.cloudapp.azure.com:19000/health/ping
This is wrong.
It should be http://mycluster.eastus.cloudapp.azure.com:8080/health/ping
At least that's what the documentation says, so it should work without touching the load balancer.
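A quick way to sanity-check whichever URL turns out to be right, sketched with the JDK HTTP client (the URL below is the one from the accepted setup that goes through the load balancer; swap in the :8080 form to test the direct route):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        // Public port 80 is mapped by the Azure load balancer rule to backend port 8080.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://mycluster.eastus.cloudapp.azure.com/health/ping"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expect HTTP 200 with the body "pong" if the probe, the rule, and the
        // service endpoint all line up.
        System.out.println(response.statusCode() + " " + response.body());
    }
}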

WinSock client ports and router port forwarding

I have a server application that binds to a port and listens on it. I've set up the router to forward the data on this port to the server.
Now, on the client side, I don't actually bind() the socket to any port, and I usually end up with a different port every time. In that case, how can I prepare the router to forward that port to the client? Or am I supposed to use bind() with the client socket as well? (I remember reading that you're not supposed to do that.)
Firewalls are usually stateful, meaning that if a TCP connection request into the protected network is allowed, then the packets back to the client are matched (and passed through) automatically. That is to say, you don't need to worry about the client; just set up port forwarding to the server app.
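The client-side behaviour that answer relies on, sketched in Java rather than WinSock (host and port are placeholders): the client never calls bind(), the OS picks an ephemeral local port for the outgoing connection, and the stateful firewall/NAT tracks that connection so the replies find their way back without any extra forwarding rule.

import java.net.Socket;

public class ClientPortDemo {
    public static void main(String[] args) throws Exception {
        // No explicit bind(): the OS assigns an ephemeral local port on connect.
        try (Socket socket = new Socket("example.com", 80)) {
            System.out.println("Connected from local port " + socket.getLocalPort()
                    + " to " + socket.getRemoteSocketAddress());
            // The stateful firewall/NAT remembers this connection, so return
            // packets reach this ephemeral port automatically.
        }
    }
}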
