Assume I have two services: customerService and orderService. Both are Spring Boot applications, each packaged in its own Docker image.
In production they are managed by Kubernetes, so there can be multiple instances of each service. Because the services need to call each other via REST, I want to use service discovery to obtain the destination of a service instance of the respective other type (e.g. customerService wants to get the host, port, etc. of a running orderService instance).
I want to achieve this by using an injected DiscoveryClient in my Spring Boot services.
I understand how this works in production as there is the Kubernetes cluster the DiscoveryClient communicates with.
But how does this work in a local environment where there is no Kubernetes but only Docker running?
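To illustrate, a minimal sketch of what I have in mind (the service id order-service is just a placeholder; the concrete instances come from whichever DiscoveryClient implementation, e.g. spring-cloud-kubernetes, is on the classpath):

import java.util.List;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Service;

@Service
public class OrderServiceLocator {

    private final DiscoveryClient discoveryClient;

    public OrderServiceLocator(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Returns host, port, and URI details of one running orderService instance.
    public ServiceInstance findOrderServiceInstance() {
        List<ServiceInstance> instances = discoveryClient.getInstances("order-service");
        if (instances.isEmpty()) {
            throw new IllegalStateException("No order-service instance is currently registered");
        }
        return instances.get(0); // naive pick; a load balancer would choose more cleverly
    }
}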
I think you should be looking into Kubernetes Services instead of using an injected DiscoveryClient.
For service discovery and load balancing, you can use Services in Kubernetes. From the Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a
network service. With Kubernetes, you don't need to modify your
application to use an unfamiliar service discovery mechanism.
Kubernetes gives Pods their own IP addresses and a single DNS name for
a set of Pods and can load-balance across them.
This way, you also avoid the overhead of maintaining discovery-server instances and keeping them available.
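For example, assuming a Kubernetes Service named order-service that exposes port 8080 (both values are assumptions), customerService can reach orderService through the Service DNS name with a plain HTTP client and no discovery code at all:

import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // "order-service" resolves via Kubernetes DNS to the Service, which in turn
    // load-balances across the orderService pods behind it.
    public String fetchOrdersForCustomer(String customerId) {
        return restTemplate.getForObject(
                "http://order-service:8080/orders?customerId={id}", String.class, customerId);
    }
}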
How does this work in a local environment where there is no Kubernetes but only Docker running?
You can use Spring profiles to pick this URL based on your environment.
For example, you would have two Spring configuration files (application-dev and application-prod). In the dev file, the URL for the second application points to localhost; in the prod file, the URL is the DNS name you get as part of the Kubernetes Service setup. Use the dev profile when running locally and the prod profile (or an appropriate profile for your Kubernetes environment) when running in production.
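A rough sketch of that, assuming a hypothetical order-service.url property that each profile sets differently:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class OrderClientConfig {

    // Hypothetical property, defined per profile:
    //   application-dev:  order-service.url=http://localhost:8081
    //   application-prod: order-service.url=http://order-service:8080
    @Bean
    public RestTemplate orderServiceRestTemplate(RestTemplateBuilder builder,
                                                 @Value("${order-service.url}") String orderServiceUrl) {
        // Relative calls made with this RestTemplate go to the configured base URL.
        return builder.rootUri(orderServiceUrl).build();
    }
}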
Related
What are the benefits of using "spring service discovery kubernetes" instead of directly using the Service DNS provided by Kubernetes?
I mean, if I deploy two services in Kubernetes (service-a and service-b), and service-b exposes a REST API,
service-a can easily connect to service-b using the URL "http://service-b/...".
Question #1: In order for service-a to be able to connect to service-b using the Service DNS, does service-b have to be deployed before service-a?
Question #2: What are the pros/cons of using Spring service discovery?
Question #1:
No, the order in which you deploy the services does not matter for using the Kubernetes DNS service to resolve them. The only caveat is that if you deploy serviceA after serviceB, serviceA will have serviceB's IP available as an environment variable, but not the other way around.
Question #2:
Spring service discovery is an alternative to the native Kubernetes service discovery, and it is used by other Spring Cloud projects such as spring-cloud-eureka to perform service discovery. The main advantage I see in this approach is that you can customize the load-balancing algorithm used to spread the load among the different instances.
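As a small illustration of that flexibility (the service name service-b is taken from the question, and the snippet assumes a DiscoveryClient implementation such as spring-cloud-kubernetes is on the classpath), you can apply any selection strategy you like, e.g. a random pick:

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class ServiceBInstancePicker {

    private final DiscoveryClient discoveryClient;

    public ServiceBInstancePicker(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Custom load-balancing decision: choose a random registered service-b instance
    // instead of whatever the Kubernetes Service would do by default.
    public ServiceInstance pickInstance() {
        List<ServiceInstance> instances = discoveryClient.getInstances("service-b");
        if (instances.isEmpty()) {
            throw new IllegalStateException("No service-b instances registered");
        }
        return instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
    }
}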
The microservices are registering with Eureka using the pod name as the hostname, which causes an UnknownHostException when the Zuul API gateway tries to forward a request to a service.
The complete setup works fine with docker-compose; the issues only appear when I try to run it on Kubernetes.
For example, the order microservice runs in a pod named "oc-order-6b64576f4b-7gp85" and registers with Eureka using "oc-order-6b64576f4b-7gp85" as the hostname, which causes
java.net.UnknownHostException: oc-order-6b64576f4b-7gp85: Name does not resolve
I am running just one instance of each microservice as a POC, plus one instance of the Eureka server. How can I fix this so that the microservices register themselves with a resolvable hostname?
I want to use Eureka as the discovery service because it is well integrated with Spring Boot, and I do not want to make code changes for the Kubernetes deployment.
Add the property below to the Spring properties file in each service project:
eureka.instance.hostName=${spring.application.name}
or (if you want a different name than the spring application name)
eureka.instance.hostName=<custom-host-name>
Spring Boot will now register in Eureka with the provided hostname.
I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud, and I have a list of locators with their ports.
I have already found some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't understand which configuration tells the Spring Boot app how to connect.
I am looking for a tutorial or reference showing how to hook a Spring Boot app up to PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name to the app's manifest.yml, something like the below:
applications:
- name: cloudcache-pizza-store # name assumed from the sample jar below; use your own app name
  path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
  services:
  - dev-service-instance
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend it, as it makes connecting to a PCC service instance extremely easy. SBDG has the logic to extract the credentials and host:port pairs needed to connect to a service instance.
As an app developer, you just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames, and IPs is handled by SBDG.
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the endpoint information necessary to connect to the cluster.
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.
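As a minimal sketch (assuming the spring-geode-starter dependency is on the classpath and the app is bound to the PCC service instance), the application code itself declares no locator hosts, ports, or credentials; the SBDG auto-configuration derives them from VCAP_SERVICES:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// No connection settings appear here: when this app is bound to a PCC service
// instance, the spring-geode-starter auto-configuration reads the locator
// endpoints and credentials from VCAP_SERVICES and builds the ClientCache.
@SpringBootApplication
public class PizzaStoreApplication {

    public static void main(String[] args) {
        SpringApplication.run(PizzaStoreApplication.class, args);
    }
}

The class name is only illustrative (borrowed from the linked pizza-store sample); any Spring Boot application class works the same way.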
I am creating a simple microservices project using Spring Boot and Netflix OSS to get my hands dirty. I have created two services:
a config service, which has to register itself with the discovery (Eureka) service.
a discovery service, which requires the config service to be running in order to get its configuration.
Now when I start these services, both fail because of the interdependency. What are the best practices to resolve this issue, and which one should be started first?
PS: I know I am creating a circular dependency, but what is the way to deal with a situation like this, where I also want to keep the Eureka configuration in the config server?
Thanks
I believe you can find the answer to your question in the official Spring Cloud Config Server documentation:
Here: http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_spring_cloud_config_client
Basically you have to choose between a "Config First Bootstrap" or "Discovery First Bootstrap".
From the docs:
"If you are using a `DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service Discovery or Spring Cloud Consul (Spring Cloud Zookeeper does not support this yet), then you can have the Config Server register with the Discovery Service if you want to, but in the default "Config First" mode, clients won’t be able to take advantage of the registration.
If you prefer to use DiscoveryClient to locate the Config Server, you can do that by setting spring.cloud.config.discovery.enabled=true (default "false"). The net result of that is that client apps all need a bootstrap.yml (or an environment variable) with the appropriate discovery configuration. (...)"
I am using a Spring Cloud discovery service in my docker-compose setup. All Spring Boot services that are under my control register themselves on startup via @EnableDiscoveryClient.
But I have other services in my docker-compose that I run from their Docker Hub images. How can I register them as well (database, LDAP, ...)?
Is there a way to configure Eureka to look for defined components (pull) instead of relying on the clients to push their registrations?
You would have to implement a sidecar app, most likely by creating a Docker image that uses the database or LDAP image from Docker Hub as its starting point.
I published a blog post about this exact topic: Microservices Sidecar pattern implementation using Postgres, Spring Cloud Netflix and Docker
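As a rough sketch of that approach (assuming the spring-cloud-netflix-sidecar module and a co-located service whose port and health endpoint you know), the sidecar is itself a tiny Spring Boot app that registers the third-party process with Eureka on its behalf:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

// Registers the accompanying non-JVM service (e.g. the database or LDAP
// container) with Eureka. The sidecar expects sidecar.port and
// sidecar.health-uri to be set in its configuration so it can proxy
// health checks for the service it fronts.
@SpringBootApplication
@EnableSidecar
public class DatabaseSidecarApplication {

    public static void main(String[] args) {
        SpringApplication.run(DatabaseSidecarApplication.class, args);
    }
}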