I have a Spring Boot service that uses Zookeeper discovery for the service registry. Now I have to deploy the whole system in Kubernetes, and I am running into a problem with the Feign client: it tries to access the pod's address, but I want it to use the Kubernetes service name instead. How can I set the metadata for my service name?
I am trying to use the Spring Cloud Kubernetes Discovery Server with the discovery client as described in https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#spring-cloud-kubernetes-discoveryserver
but the client couldn't fetch service information from other namespaces. There is already an issue raised for this in Spring Cloud Kubernetes on GitHub: https://github.com/spring-cloud/spring-cloud-kubernetes/issues/824
I tried the Fabric8 client as well, and encountered the error below:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://xx.xx.xx.xx/api/v1/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:abcd01:xyzssvc" cannot list resource "services" in API group "" at the cluster scope.
Did anyone manage to integrate the Spring Cloud Kubernetes Discovery Server with the Discovery Client? Integrating the Discovery Server with the Discovery Client would avoid having to assign cluster-role permissions to individual services.
Assume I have two services: customerService and orderService. Both are Spring Boot applications, each contained in its own Dockerfile.
In production they should be managed using Kubernetes, so there could be multiple instances of each service. As the services should be able to call each other via REST, I want to use service discovery to get the destination of a service instance of the respective other type (e.g. customerService wants to get the host, port, etc. of a running orderService instance).
I want to achieve this by using an injected DiscoveryClient in my Spring Boot services.
I understand how this works in production, since there is a Kubernetes cluster that the DiscoveryClient communicates with.
But how does this work in a local environment where there is no Kubernetes but only Docker running?
I think you should be looking into Kubernetes services instead of using an injected Discovery client.
For service discovery and load-balancing, you can use Services in Kubernetes. From the Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
By this, you can also avoid the overhead of maintaining discovery server instances and their availability.
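For illustration, a minimal Service manifest for something like the orderService could look like the sketch below (the name, labels, and ports are assumptions, not taken from your setup):
apiVersion: v1
kind: Service
metadata:
  name: order-service            # other pods can then reach it at http://order-service:8080
spec:
  selector:
    app: order-service           # must match the labels on your orderService pods
  ports:
    - port: 8080                 # port exposed by the Service
      targetPort: 8080           # container port of the Spring Boot app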
How does this work in a local environment where there is no Kubernetes but only Docker running?
You can use Spring profiles to pick this URL based on your environment.
For example, you'll have two Spring configuration files (application-dev and application-prod). In the dev file the URL for the second application would point to localhost, and in the prod file it would be the DNS name you get as part of the Kubernetes Service setup. Use the dev profile while running locally and the prod profile while running in production (or an appropriate profile for your Kubernetes environment).
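As a rough sketch (the property name order.service.url, the ports, and the Service name order-service are assumptions):
# application-dev.properties - running locally with Docker
order.service.url=http://localhost:8081

# application-prod.properties - running in Kubernetes
order.service.url=http://order-service:8080
The calling service then reads order.service.url (e.g. via @Value or @ConfigurationProperties) instead of hard-coding a host.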
I am new to microservices architecture. I want to know whether it is required to deploy all Spring Boot microservices in the same local network. As the Eureka discovery service uses private/internal IP addresses for registration, I am unable to call a service from another service deployed in a different local network.
Please let me know how microservices deployed across multiple subnets should communicate with each other in this case, or whether there is a way to tell Eureka to use only public IP addresses.
Not sure if this will help, but by default your Eureka client will try to connect to localhost:8761, which is where the Eureka discovery server is expected to be running.
If your discovery service runs at a different location, you can set the following in the application.properties file of your client services:
eureka.client.service-url.defaultZone=http://{address}:{port}/eureka
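For example, if the Eureka server is reachable as discovery-service on port 8761 (a hypothetical hostname, e.g. a Docker Compose service name or a Kubernetes Service name), this becomes:
eureka.client.service-url.defaultZone=http://discovery-service:8761/eureka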
The microservices are registering with Eureka using the pod name as the hostname, which causes an UnknownHostException when the Zuul API gateway tries to forward a request to the service.
The complete setup works fine with docker-compose; the issues happen when I try to use Kubernetes.
For example, the order microservice runs with the pod name "oc-order-6b64576f4b-7gp85" and registers with Eureka using "oc-order-6b64576f4b-7gp85" as the hostname, which causes
java.net.UnknownHostException: oc-order-6b64576f4b-7gp85: Name does not resolve
I am running just one instance of each microservice as a POC, and one instance of the Eureka server. How can I fix this so that the microservices register themselves with the right hostname?
I want to use Eureka as the discovery service because it is well integrated with Spring Boot, and I do not want to make code changes for the Kubernetes deployment.
Add the property below to the Spring properties file in each service project:
eureka.instance.hostName=${spring.application.name}
or (if you want a different name than the spring application name)
eureka.instance.hostName=<custom-host-name>
Spring Boot will now register in Eureka with the provided hostname.
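For example (a sketch with assumed names; it relies on a Kubernetes Service named after the application existing, so that the registered hostname actually resolves in-cluster):
spring.application.name=oc-order
# Register in Eureka with the application name instead of the pod name.
# Assumes a Kubernetes Service named "oc-order" exists in the same namespace so the name resolves.
eureka.instance.hostName=${spring.application.name}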
I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud, and I have a list of locators with ports.
I have already found some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't figure out from which configuration the Spring Boot app knows how to connect.
I am looking for a tutorial or reference on how to link a Spring Boot app with PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name to the app's manifest.yml, something like the below:
applications:
- name: cloudcache-pizza-store   # illustrative app name
  path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
  services:
  - dev-service-instance
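Alternatively, with the cf CLI (the app name is just a placeholder matching the manifest above):
cf bind-service cloudcache-pizza-store dev-service-instance
cf restage cloudcache-pizza-store   # restage so the app picks up the new binding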
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend you use it, as it makes connecting to a PCC service instance extremely easy. SBDG has the logic to extract the credentials and hostname:port pairs needed to connect to a service instance.
As an app developer, you just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames, and IPs is handled by SBDG.
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the necessary endpoint information in order to connect to the cluster.
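In a Gradle build that would be something like the following (the version is a placeholder, not a specific recommendation):
implementation 'org.springframework.geode:spring-geode-starter:<version>'   // reads VCAP_SERVICES and auto-configures the ClientCache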
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.
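The service key itself can be created with the cf CLI (the instance and key names are placeholders):
cf create-service-key dev-service-instance pizza-store-key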