Deploying web services on multihomed machines - OSGi

I am using Apache CXF through OSGi to expose my OSGi services as web services. I am able to set the web service URI through the "org.apache.cxf.ws.address" property, but this ties me to a single IP address on the server. Some services need to be deployed on servers that are out of my control, so I would like to be able to deploy the services on all addresses on the server.
How can I do this?

Set the address to 0.0.0.0 to listen on all network interfaces. If you need to restrict access on some of them, put a firewall in front.
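For illustration, a minimal sketch of registering an OSGi service so that CXF's distributed OSGi support exposes it on every interface. Only the org.apache.cxf.ws.address property comes from the question; the OrderService interface, its implementation, and the port/path are assumptions, and the other two properties are the standard OSGi Remote Services ones that CXF DOSGi reads.

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Standard Remote Services properties so the service gets exported.
        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "org.apache.cxf.ws");
        // 0.0.0.0 binds the endpoint to all network interfaces; port and path are hypothetical.
        props.put("org.apache.cxf.ws.address", "http://0.0.0.0:9090/orderService");
        context.registerService(OrderService.class, new OrderServiceImpl(), props);
    }

    @Override
    public void stop(BundleContext context) {
        // The registration is released automatically when the bundle stops.
    }
}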

Related

Spring DiscoveryClient in local environment

Assume I have two services: customerService and orderService. Both are Spring Boot applications, each packaged with its own Dockerfile.
In production they will be managed by Kubernetes, so there can be multiple instances of each service. Since the services need to call each other via REST, I want to use service discovery to obtain the destination of a running instance of the other type (e.g. customerService wants to get the host, port, etc. of a running orderService instance).
I want to achieve this by using an injected DiscoveryClient in my Spring Boot services.
I understand how this works in production as there is the Kubernetes cluster the DiscoveryClient communicates with.
But how does this work in a local environment where there is no Kubernetes but only Docker running?
I think you should be looking into Kubernetes services instead of using an injected Discovery client.
For Service discovery and load-balancing, you can use Services in Kubernetes. From Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a
network service. With Kubernetes, you don't need to modify your
application to use an unfamiliar service discovery mechanism.
Kubernetes gives Pods their own IP addresses and a single DNS name for
a set of Pods and can load-balance across them.
By this, you can also avoid the overhead of maintaining discovery server instances and their availability.
How does this work in a local environment where there is no Kubernetes but only Docker running?
You can use Spring profiles to pick this URL based on your environment.
For example, you'd have two Spring configuration files (application-dev and application-prod). In the dev file the URL of the other application is localhost-relative, and in the prod file it is the DNS name you get as part of the Kubernetes Service setup. Use the dev profile when running locally and prod (or an appropriate profile for your Kubernetes environment) in production, for example:
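A sketch of the two profile files; the property key order.service.url, the ports, and the Kubernetes Service name order-service are assumptions, and your client code reads whichever key you choose.

# application-dev.properties - running locally, the other service is on localhost
order.service.url=http://localhost:8082

# application-prod.properties - running in Kubernetes, the Service DNS name resolves in-cluster
order.service.url=http://order-service:8080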

Stop specific instances from registering with Eureka

I have a Eureka server running on a test server, and multiple services on the test server register with it.
The problem is that sometimes developers also connect their local microservice instances to this Eureka server. This results in multiple instances of a service showing up in Eureka, and the load balancer starts routing Feign client requests to the local machines as well. That breaks testing, because the test server cannot reach a developer's local machine for those Feign calls.
I instructed developers to set eureka.client.register-with-eureka=false locally, but if someone still connects, how can I stop that? Is there a way to make the Eureka server accept registrations only from a specific IP (the test server's IP)? Or is there another way to prevent this problem?
For the services that you don't want to register, remove @EnableDiscoveryClient from them. @EnableDiscoveryClient lives in spring-cloud-commons and picks the implementation on the classpath. This will stop your services from being discovered, but then you won't be able to make Feign calls to other services or take advantage of client-side load balancing.
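If removing the annotation is heavier than you want, the property the question already mentions can be confined to a profile that developers activate locally; a minimal sketch, where the profile name local is an assumption:

# application-local.properties - used only on developers' machines
# Neither register this instance with Eureka nor fetch the registry,
# so the local instance never shows up behind the test server's load balancer.
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false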

Spring Boot microservice deployment

I am new to microservices architecture. I want to know whether it is required to deploy all Spring Boot microservices on the same local network. Since Eureka discovery uses the private/internal IP address for registration, I am unable to call a service from another service deployed on a different local network.
Please let me know how microservices deployed across multiple subnets should communicate with each other in this case,
or is there a way to tell Eureka to use only the public IP address?
Not sure if this would help, but by default your Eureka client will try to connect to localhost:8761, where the Eureka discovery server is expected to run.
If your discovery service lives somewhere else, you can point to it in the application.properties file of your client services:
eureka.client.service-url.defaultZone=http://{address}:{port}/eureka
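A concrete example, together with the instance-side properties that control which address Eureka advertises (relevant to the public-IP part of the question); all addresses below are placeholders:

# application.properties of a client service
eureka.client.service-url.defaultZone=http://198.51.100.20:8761/eureka
# Register an address other services can actually reach instead of the private hostname:
eureka.instance.prefer-ip-address=true
eureka.instance.ip-address=203.0.113.15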

Problems setting up Zuul proxy server with Eureka discovery

I am trying to set up a Zuul proxy server which will act as a gateway service for the other APIs in my microservice architecture.
So far all the tutorials that I have come across have the discovery client and zuul proxy set up in different gradle modules while I am trying to set them up in the same gradle module.
I have defined the routes and can see that my services have been successfully registered in the eureka dashboard.
I have also verified that I can ping the services using a DiscoveryClient from my gatekeeper service, but whenever I try to access the services through the URL, I get a "Load balancer does not have available server for client: xyz" exception.
Can somebody please help me setting this up?
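For reference, a minimal sketch of the single-module setup described above, with one Spring Boot application acting as both a Eureka client and a Zuul proxy; the service id xyz is taken from the error message, everything else (class name, route path) is an assumption.

// GatekeeperApplication.java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class GatekeeperApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatekeeperApplication.class, args);
    }
}

// application.properties of the gateway:
//   zuul.routes.xyz.path=/xyz/**
//   zuul.routes.xyz.service-id=xyz   <- must match the name shown in the Eureka dashboard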

Is it possible to deploy multiple Spring Boot applications to a server?

Can we create RESTful APIs in Spring Boot jar files?
1) Can we deploy multiple jar files on an Apache server?
2) If we deploy multiple jar files, how do we identify which jar contains the right REST APIs?
How does a Spring Boot jar file work on the server?
For Development Environment
You can configure ports via application.properties or via system properties.
Or by passing --server.port=8081 on the command line (or -Dserver.port=8081 to the JVM).
So, there is no problem to run a few APIs on single machine with different ports.
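Concretely, any one of these sets the port for a given jar (the jar name and port are just examples):

# application.properties packaged inside the jar
server.port=8081

# or at launch time:
#   java -jar checkout-service.jar --server.port=8081
#   java -Dserver.port=8081 -jar checkout-service.jar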
You don't need an Apache server. Spring Boot ships with its own embedded server, so you can use that directly.
Let's say you have two APIs.
localhost:8081 (Checkout Service)
localhost:8082 (Payment Service)
The hostname and port are what identify each service.
When you search for something on Google, your browser is the client and Google's servers are the server.
The same applies here: Checkout Service delegates some work to Payment Service, so Checkout Service is the client, and this client has to know the address of Payment Service.
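A sketch of that call from the client side, assuming the addresses from the example above and a hypothetical /pay endpoint on Payment Service:

// Inside Checkout Service (the client).
import org.springframework.web.client.RestTemplate;

public class PaymentClient {
    private final RestTemplate restTemplate = new RestTemplate();

    public String pay(String orderId) {
        // Payment Service listens on localhost:8082 in the example above; /pay is hypothetical.
        return restTemplate.postForObject("http://localhost:8082/pay", orderId, String.class);
    }
}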
For Production Environment
You should think carefully about how you will monitor performance, manage scalability, and so on.
