Spring Boot Admin listing Kubernetes internal URLs. Not able to navigate to the application page - spring-boot

Problem
Trying to use Spring Boot Admin to do deep monitoring of Spring Boot microservices running in Kubernetes.
Spring Boot Admin lists the microservices but points to their internal IPs.
The Spring Boot Admin application listing page shows the internal IPs.
The application details page has almost zero info.
Details
Kubernetes 1.15
Spring Boot applications are discovered by Spring Boot Admin using Spring Cloud Discovery
spring-cloud-kubernetes version 1.1.0.RELEASE
The problem is that the IPs belong to the internal pod network and would not be accessible to users in any real-world scenario.
Any hints on how to approach this scenario? Any alternatives?
Also, I was wondering how Spring Boot Admin would behave with pods that have more than one replica. I think it is close to impossible to point to a unique pod replica through an Ingress or NodePort.
Hack I am working on
If I can start another pod that exposes a Linux desktop to the end user, then from a browser on that desktop the user may be able to access the pod network IPs. It is just a wild thought as a hack.

Spring Boot Admin registers each application/client based on its name, set by the property below.
spring.boot.admin.client.instance.name=${spring.application.name}
If all your pods have the same name, it can register based on individual IPs by enabling the prefer-ip property (which is false by default):
spring.boot.admin.client.instance.prefer-ip=true
In your case, if you want SBA to register based on the Kubernetes load-balanced URL, the service-base-url property should be set to the corresponding application's URL.
spring.boot.admin.client.instance.service-base-url=http://myapp.com
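Putting those together, a minimal sketch of the client-side properties, assuming a hypothetical host myapp.example.com that is exposed via an Ingress or NodePort and routes to the service:
spring.boot.admin.client.instance.name=${spring.application.name}
spring.boot.admin.client.instance.prefer-ip=false
spring.boot.admin.client.instance.service-base-url=http://myapp.example.com
With service-base-url pointing at an externally reachable address, the links in the Spring Boot Admin UI resolve to that address instead of the pod-network IPs.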

Related

Hazelcast + Spring Boot + Kubernetes - How to set up Client-Server topology

I'm trying to see how to configure a Client-Server topology with my Spring Boot application using Hazelcast on Kubernetes, since we want to have the capability of sharing the cache between different Spring Boot applications (I'm already able to set up the embedded distributed cache with Kubernetes, which is not what we need).
In the case of a single Spring Boot application (not on Kubernetes), it's kind of easy: I spin up a server, let's say on 'localhost', and also spin up the client connecting to localhost. I can also have multiple instances (members) of the server, which will form a Hazelcast cluster.
However, in the case of Kubernetes, I know we need two different Spring Boot applications: one will act as the server and the others will be clients accessing the cache, but I want to know how the clients would connect to the server. In Spring we autowire the HazelcastInstance, so how would I connect to the server which is running in its own Kubernetes pod (and container)?
There are a few deployment guides for Kubernetes here, and worked examples here & here.
If your server pods are joining, then you pretty much have it. A client uses the same discovery mechanism.
Add the hazelcast-kubernetes plugin to the client's pod, and set the configuration properties with the same values as you use on the server for namespace, dns, etc.
Thanks Neil. As you indicated, it's the same way I currently configured embedded caching on Kubernetes. I'm using the service name and namespace to discover and connect to the server members from the HazelcastClient instance.
This is from the client Spring Boot application's Hazelcast configuration:
@Bean
public HazelcastInstance hazelcastInstance() {
    final ClientConfig config = new ClientConfig();
    config.setClusterName("cluster-name");
    // enableOnK8s, namespaceValue and serviceName are injected configuration values
    if (enableOnK8s) {
        config.getNetworkConfig().getKubernetesConfig().setEnabled(true)
                .setProperty("namespace", namespaceValue)
                .setProperty("service-name", serviceName);
    }
    return HazelcastClient.newHazelcastClient(config);
}
And on the Hazelcast server Spring Boot application, the configuration stays the same as the embedded Hazelcast configuration.
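For reference, a minimal sketch of what that server-side (embedded) member configuration could look like with the same cluster name; serverServiceName and namespaceValue are placeholders for the Kubernetes service and namespace of the server pods:
@Bean
public Config hazelcastServerConfig() {
    final Config config = new Config();
    config.setClusterName("cluster-name");
    // turn off multicast and let Kubernetes discovery find the other members
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true)
            .setProperty("namespace", namespaceValue)
            .setProperty("service-name", serverServiceName);
    return config;
}
Exposing a com.hazelcast.config.Config bean like this lets Spring Boot's Hazelcast auto-configuration start the embedded member from it.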

Configuration or link required to connect to a Pivotal Cloud Cache cluster in Spring Boot microservices

I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud, and I have a list of locators with ports.
I already have some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't understand which configuration tells the Spring Boot app how to connect.
I am looking for a tutorial or reference on linking a Spring Boot app with PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name in the app's manifest.yml, something like the below:
path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
services:
- dev-service-instance
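For context, that fragment sits under an application entry; a fuller manifest.yml sketch (the application name here is just a placeholder) could look like:
applications:
- name: cloudcache-pizza-store
  path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
  services:
  - dev-service-instance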
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend you use it, as it makes connecting to a PCC service instance extremely easy. SBDG has the logic to extract the credentials and hostname:port pairs needed to connect to a service instance.
As an app developer, you just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames, and IPs is handled by SBDG.
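A minimal sketch of those two steps with the cf CLI, assuming the PCC service shows up in the marketplace as p-cloudcache (check cf marketplace) and using placeholder plan, instance, and app names:
cf create-service p-cloudcache dev-plan dev-service-instance
cf bind-service pizza-store dev-service-instance
cf restage pizza-store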
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the necessary endpoint information in order to connect to the cluster.
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.

Recommended/Alternative ways of starting a Spring Boot app if config server is down?

I was wondering about the recommended way of starting a Spring Boot app if the Spring Cloud Config server is temporarily down or unavailable. What would be the approach? I know of the retry configurations, but I am wondering if there is a way to have a 'replica' config server and use it as a failover (or something along those lines).
Sure, why not?
After all, the Spring Cloud Config server exposes a REST API, and all interaction with Spring Boot microservices is done over HTTP.
From this point of view, you can scale out the Spring Cloud Config server by running more than one instance of it and mapping them to one virtual IP.
If you're running in some kind of orchestrated environment (like Kubernetes), it is a very easy thing to do.
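On the client side you can also list several config server instances and enable retry; a minimal bootstrap.properties sketch, assuming two hypothetical config server hosts (spring-retry and spring-boot-starter-aop need to be on the classpath for the retry to apply):
spring.cloud.config.uri=http://configserver-1:8888,http://configserver-2:8888
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.initial-interval=2000
The client tries the listed URLs in order, and fail-fast plus retry makes it keep retrying at startup instead of silently booting without its external configuration.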

Spring Boot Actuator + Spring Boot Admin - Is there a way to define a custom management url?

Is there a way I can define the port for the management URLs (not management.server.port) so that Spring Boot Admin can identify the actuator URLs of the Spring Boot app for monitoring?
I'm running the Spring Boot app in a Docker container, and it's externally exposed on a different port using a Kubernetes NodePort.
If you are using service discovery for application lookup, you could define the exposed management port in the instance metadata. This metadata is used to build up the management URL.
More details documented here:
http://codecentric.github.io/spring-boot-admin/current/#spring-cloud-discovery-support
Handling is done in de.codecentric.boot.admin.server.cloud.discovery.DefaultServiceInstanceConverter
Example for Eureka:
eureka.instance.metadata-map.management.port=[K8S-EXPOSED-PORT]
If you are using service discovery, take a look at DefaultServiceInstanceConverter and try specifying the management.port property.
If you are not using service discovery, then take a look at de.codecentric.boot.admin.server.domain.values.Registration; you might need to use the builder APIs to register your application correctly (try to set managementUrl properly). Note that you will need to do this in your client application (the one being monitored).
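If the app registers itself via the spring-boot-admin-starter-client instead of being discovered, the same effect can usually be achieved with properties alone; a minimal sketch, with the node IP and NodePorts as placeholders:
spring.boot.admin.client.url=http://admin-server:8080
spring.boot.admin.client.instance.service-base-url=http://<node-ip>:<service-node-port>
spring.boot.admin.client.instance.management-base-url=http://<node-ip>:<management-node-port>
The management-base-url is the base against which the actuator endpoints are resolved, so Spring Boot Admin probes the externally exposed port rather than the container-internal one.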

Microservices Config and eureka service which one to start first?

I am creating a simple microservices project using Spring Boot and Netflix OSS to get my hands dirty. I have created two services:
config service, which has to register itself in the discovery (Eureka) service.
discovery service, which requires the config service to be running to get its configuration.
Now when I start these services, both fail due to the interdependency. What are the best practices to resolve this issue, and which one should start first?
PS: I know I am creating a circular dependency, but what is the way to deal with a situation like this, where I want to keep the Eureka configuration with the config server as well?
Thanks
I believe you can find the answer to your question in the official Spring Cloud Config server documentation:
Here: http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_spring_cloud_config_client
Basically you have to choose between a "Config First Bootstrap" or "Discovery First Bootstrap".
From the docs:
"If you are using a DiscoveryClient implementation, such as Spring Cloud Netflix and Eureka Service Discovery or Spring Cloud Consul (Spring Cloud Zookeeper does not support this yet), then you can have the Config Server register with the Discovery Service if you want to, but in the default "Config First" mode, clients won't be able to take advantage of the registration.
If you prefer to use DiscoveryClient to locate the Config Server, you can do that by setting spring.cloud.config.discovery.enabled=true (default "false"). The net result of that is that client apps all need a bootstrap.yml (or an environment variable) with the appropriate discovery configuration. (...)"
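For the "Discovery First Bootstrap" option, a minimal sketch of a client's bootstrap.properties, assuming the config server registers itself in Eureka under the default service id configserver and a hypothetical Eureka address:
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=configserver
eureka.client.serviceUrl.defaultZone=http://eureka-host:8761/eureka/
With this, clients locate Eureka first, look up the config server there, and then fetch their configuration, so in this mode the discovery service is the one that has to come up before the config server.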
