What is the difference between KubernetesClient and OpenShiftClient in fabric8?

I am learning fabric8. When I use KubernetesClient, it can access pods and return results:
KubernetesClient k8sClient = new DefaultKubernetesClient();
try {
    PodList podList = k8sClient.pods().inNamespace("myns").list();
    logger.info("There are {} pods in myns namespace.", podList.getItems().size());
} catch (KubernetesClientException exception) {
    logger.info("error: {}", exception.getMessage());
}
When I use OpenShiftClient, it fails to return any results, and no errors are thrown:
OpenShiftClient openshiftClient = new DefaultOpenShiftClient();
try {
    PodList podList = openshiftClient.pods().inNamespace("myns").list();
    logger.info("There are {} pods in myns namespace.", podList.getItems().size());
} catch (KubernetesClientException exception) {
    logger.info("error: {}", exception.getMessage());
}
I am curious what the difference is between KubernetesClient and OpenShiftClient. My cluster is OpenShift 4.7.32, and my fabric8 dependencies are:
implementation group: 'io.fabric8', name: 'kubernetes-client', version: '5.8.0'
implementation group: 'io.fabric8', name: 'kubernetes-api', version: '3.0.12'
Any ideas for that?

KubernetesClient is the base of all clients provided by Fabric8. It is the most popular of the provided modules and is used when you only need to access Kubernetes native resources (for example, Deployment, Pod, Service) or custom resources (via the Fabric8 KubernetesClient CustomResource API). In most cases you only need to manage basic Kubernetes resources, so KubernetesClient is enough; it is available via this dependency:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>5.9.0</version>
</dependency>
OpenShiftClient is specific to the Red Hat OpenShift container platform. It is a superset of KubernetesClient, which means you get all the functionality of KubernetesClient along with additional support for OpenShift-specific API groups. Some examples of these are:
DeploymentConfig: openshiftClient.deploymentConfigs()
Route: openshiftClient.routes()
BuildConfig: openshiftClient.buildConfigs()
ImageStream: openshiftClient.imageStreams()
Project: openshiftClient.projects()
Template: openshiftClient.templates()
If you were not using OpenShiftClient, you would have to access these resources via the Fabric8 KubernetesClient generic API. With OpenShiftClient, you get model classes and DSL support for these OpenShift-specific resources. You can have a look at the OpenShiftClient interface to see all the additional DSL endpoints provided. You can access this via the openshift-client module:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>openshift-client</artifactId>
    <version>5.9.0</version>
</dependency>
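With that dependency on the classpath, the OpenShift-specific resources listed above can be accessed directly through the DSL. A minimal sketch that lists Routes (the namespace "myns" is illustrative, matching the question):

// Minimal sketch: list Routes via the OpenShift-specific DSL.
try (OpenShiftClient client = new DefaultOpenShiftClient()) {
    RouteList routes = client.routes().inNamespace("myns").list();
    routes.getItems().forEach(route ->
            logger.info("Route {} -> {}", route.getMetadata().getName(), route.getSpec().getHost()));
}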
I tried reproducing your issue by running your code samples against an OpenShift cluster on the Red Hat OpenShift Developer Sandbox:
KubernetesClient:
public static void main(String[] args) {
    try (KubernetesClient k8sClient = new DefaultKubernetesClient()) {
        PodList podList = k8sClient.pods().inNamespace("rokumar-dev").list();
        logger.info("There are {} pods in rokumar-dev namespace.", podList.getItems().size());
    } catch (KubernetesClientException exception) {
        logger.info("error: {}", exception.getMessage());
    }
}
OpenShiftClient:
public static void main(String[] args) {
    try (OpenShiftClient k8sClient = new DefaultOpenShiftClient()) {
        PodList podList = k8sClient.pods().inNamespace("rokumar-dev").list();
        logger.info("There are {} pods in rokumar-dev namespace.", podList.getItems().size());
    } catch (KubernetesClientException exception) {
        logger.info("error: {}", exception.getMessage());
    }
}
However, both cases give the same result for me. I see you're using kubernetes-api, which pulls in quite an old version of the Fabric8 OpenShift client. That module is marked deprecated and is no longer maintained. I think if you run this code with the openshift-client dependency instead, it should work.
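For Gradle (which the question uses), that means replacing the deprecated kubernetes-api line with the openshift-client artifact, for example:

implementation group: 'io.fabric8', name: 'openshift-client', version: '5.9.0'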

Related

Spring Cloud Config Server resilient SearchPathCompositeEnvironmentRepository when git repository is not found

I have a use case where I need to support multiple git backends. I found that it is possible to use a composite repository, but I realized after some tests that if a repository is not present in one of the git backends, the request to the config server throws a "RepositoryNotFound" exception.
It would be amazing to have the option to choose whether the request to the config server fails or returns empty for the specific git backend, while keeping the responses from the other git backends.
I tried implementing a new repository that inherits from SearchPathCompositeEnvironmentRepository and catches and ignores the exception.
Something like:
@Slf4j
public class ResilientCompositeEnvironmentRepository extends SearchPathCompositeEnvironmentRepository
{
    public ResilientCompositeEnvironmentRepository(List<EnvironmentRepository> environmentRepositories)
    {
        super(environmentRepositories);
    }

    @Override
    public Environment findOne(String application, String profile, String label, boolean includeOrigin)
    {
        Environment env = new Environment(application, new String[] {profile}, label, null, null);
        for (EnvironmentRepository repo : environmentRepositories)
        {
            try
            {
                env.addAll(repo.findOne(application, profile, label, includeOrigin).getPropertySources());
            }
            catch (Exception e)
            {
                log.warn("Could not find repo", e);
            }
        }
        return env;
    }
}
But the problem is that the SearchPathCompositeEnvironmentRepository bean created in EnvironmentRepositoryConfiguration is a @Primary bean, and I'm not able to easily override it.
Thanks
I found that there is a failOnError flag in the newest version. I will update my spring-cloud version.
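If I remember correctly, that flag surfaces as a config server property along these lines; the exact property name and default should be verified against your Spring Cloud Config release, as this is from memory:

# Assumed property name; verify against your Spring Cloud Config version
spring.cloud.config.server.failOnCompositeError=false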

Load balancing problems with Spring Cloud Kubernetes

We have Spring Boot services running in Kubernetes and are using the Spring Cloud Kubernetes load balancer functionality with RestTemplate to make calls to other Spring Boot services. One of the main reasons we have this in place is historical: previously we ran our services in EC2 using Eureka for service discovery, and after the migration we kept the Spring discovery client / client-side load balancing in place (updating dependencies etc. for it to work with the Spring Cloud Kubernetes project).
We have a problem: when one of the target pods goes down, we get multiple request failures for a period of time with java.net.NoRouteToHostException, i.e. the Spring load balancer is still trying to send requests to that pod.
So I have a few questions on this:
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that, the target pod list will be repaired?
Or if not is there some other configuration we need to add to handle this - eg retry / circuit breaker, etc?
A more general question is what benefit does Spring's client-side load balancing bring with Kubernetes? Without it, our service would still be able to call other services using Kubernetes built-in service / load-balancing functionality and this should handle the issue of pods going down automatically. The Spring documentation also talks about being able to switch from POD mode to SERVICE mode (https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#loadbalancer-for-kubernetes). But isn't this service mode just what Kubernetes does automatically? I'm wondering if the simplest solution here isn't to remove the Spring Load Balancer altogether? What would we lose then?
An update on this: we had the spring-retry dependency in place, but the retry was not working because, by default, it only applies to GETs and most of our calls are POSTs (but safe to call again). Adding the configuration spring.cloud.loadbalancer.retry.retryOnAllOperations: true fixed this, so most of these failures should be avoided by the retry using an alternative instance on the second attempt.
We have also added a RetryListener that clears the load balancer cache for the service on certain connection exceptions:
@Configuration
public class RetryConfig {
    private static final Logger logger = LoggerFactory.getLogger(RetryConfig.class);

    // Need to use bean factory here as can't autowire LoadBalancerCacheManager -
    // - it's set to 'autowireCandidate = false' in LoadBalancerCacheAutoConfiguration
    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public CacheClearingLoadBalancedRetryFactory cacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
        return new CacheClearingLoadBalancedRetryFactory(loadBalancerFactory);
    }

    // Extension of the default bean that defines a retry listener
    public class CacheClearingLoadBalancedRetryFactory extends BlockingLoadBalancedRetryFactory {

        public CacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
            super(loadBalancerFactory);
        }

        @Override
        public RetryListener[] createRetryListeners(String service) {
            RetryListener cacheClearingRetryListener = new RetryListener() {
                @Override
                public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) { return true; }

                @Override
                public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {}

                @Override
                public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                    logger.warn("Retry for service {} picked up exception: context {}, throwable class {}", service, context, throwable.getClass());
                    if (throwable instanceof ConnectTimeoutException || throwable instanceof NoRouteToHostException) {
                        try {
                            LoadBalancerCacheManager loadBalancerCacheManager = beanFactory.getBean(LoadBalancerCacheManager.class);
                            Cache loadBalancerCache = loadBalancerCacheManager.getCache(CachingServiceInstanceListSupplier.SERVICE_INSTANCE_CACHE_NAME);
                            if (loadBalancerCache != null) {
                                boolean result = loadBalancerCache.evictIfPresent(service);
                                logger.warn("Load Balancer Cache evictIfPresent result for service {} is {}", service, result);
                            }
                        } catch (Exception e) {
                            logger.error("Failed to clear load balancer cache", e);
                        }
                    }
                }
            };
            return new RetryListener[] { cacheClearingRetryListener };
        }
    }
}
Are there any issues with this approach? Could something like this be added to the built-in functionality?
"Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that the target pod list will be repaired?"
To resolve this issue you have to use readiness and liveness probes in Kubernetes.
A readiness probe checks the health of an endpoint exposed by your application at a configured interval. If the check fails, Kubernetes marks the pod as unready, so no traffic goes to that pod (replica).
A liveness probe restarts the container when the application fails, so the pod comes up again; once the application responds with a 200 again, Kubernetes marks the pod as ready to accept traffic.
You can create a simple endpoint in the application that returns 200 or 204 as needed; a sketch of such probes is shown below.
Read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
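As an illustrative sketch only, such probes look like this in a container spec. The paths assume Spring Boot Actuator health groups are exposed; the port and timings are made up:

livenessProbe:
  httpGet:
    path: /actuator/health/liveness    # assumes the Actuator liveness group is exposed
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness   # assumes the Actuator readiness group is exposed
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5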
Make sure your applications use the Kubernetes Service to talk to each other:
Application 1 > Kubernetes Service of App 2 > Application 2 pods
To enable load balancing based on the Kubernetes Service name, use the following property. The load balancer will then call the application using an address such as service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
The most typical way to use Spring Cloud LoadBalancer on Kubernetes is with service discovery. If you have any DiscoveryClient on your classpath, the default Spring Cloud LoadBalancer configuration uses it to check for service instances. As a result, it only chooses from instances that are up and running. All that is needed is to annotate your Spring Boot application with @EnableDiscoveryClient to enable Kubernetes-native service discovery.
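For example, a minimal sketch (the class name is illustrative, and a Spring Cloud Kubernetes discovery starter is assumed to be on the classpath):

// Minimal sketch: enable Kubernetes-native service discovery.
@SpringBootApplication
@EnableDiscoveryClient
public class MyServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyServiceApplication.class, args);
    }
}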
References: https://stackoverflow.com/a/68536834/5525824

Capturing cassandra cql metrics from spring boot application

I want to capture DB query metrics from a Spring Boot Cassandra application and expose them at a Prometheus endpoint.
I already have an implementation for Spring Boot + Postgres, and it works with r2dbc-proxy. Since r2dbc does not provide support for Cassandra, I am looking for a sample implementation.
Here is the edited code, based on the comment below:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Optional;

import com.codahale.metrics.MetricRegistry;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.dropwizard.DropwizardExports;
import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

// Illustrative wrapper class so the fragment compiles standalone.
public class PrometheusCassandraExample {
    public static void main(String[] args) {
        String contactPoint = System.getProperty("contactPoint", "127.0.0.1");

        // init default prometheus stuff
        DefaultExports.initialize();

        // setup Prometheus HTTP server
        Optional<HTTPServer> prometheusServer = Optional.empty();
        try {
            prometheusServer = Optional.of(new HTTPServer(Integer.getInteger("prometheusPort", 9095)));
        } catch (IOException e) {
            System.out.println("Exception when creating HTTP server for Prometheus: " + e.getMessage());
        }

        Cluster cluster = Cluster.builder()
                .addContactPointsWithPorts(new InetSocketAddress(contactPoint, 9042))
                .withoutJMXReporting()
                .build();

        try (Session session = cluster.connect()) {
            // expose the driver's Dropwizard metrics through the Prometheus default registry
            MetricRegistry myRegistry = new MetricRegistry();
            myRegistry.registerAll(cluster.getMetrics().getRegistry());
            CollectorRegistry.defaultRegistry.register(new DropwizardExports(myRegistry));

            session.execute("create keyspace if not exists test with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};");
            session.execute("create table if not exists test.abc (id int, t1 text, t2 text, primary key (id, t1));");
            session.execute("truncate test.abc;");
        } catch (IllegalStateException ex) {
            System.out.println("metric registry fails to configure!!!!!");
            throw ex;
        }
    }
}
If using Micrometer:
Add the dependency com.datastax.oss:java-driver-metrics-micrometer.
Create a CqlSessionBuilderCustomizer and register the MeterRegistry (io.micrometer.core.instrument.MeterRegistry) bean using the withMetricRegistry method.
Create a DriverConfigLoaderBuilderCustomizer that enables the desired metrics (https://stackoverflow.com/a/62940370/12584290).
A sketch of the last two steps is shown below.
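This is only an illustrative sketch under Spring Boot's Cassandra auto-configuration: the class and bean names are made up, "cql-requests" is just one example metric, and which options to enable (including selecting the Micrometer metrics factory) depends on your driver version; see the linked answer.

import java.util.Collections;

import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.boot.autoconfigure.cassandra.DriverConfigLoaderBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraMetricsConfig {

    // Step 2: hand Micrometer's MeterRegistry to the driver.
    @Bean
    public CqlSessionBuilderCustomizer sessionMetricsCustomizer(MeterRegistry meterRegistry) {
        return builder -> builder.withMetricRegistry(meterRegistry);
    }

    // Step 3: enable the metrics you want; "cql-requests" is an example.
    @Bean
    public DriverConfigLoaderBuilderCustomizer metricsOptionsCustomizer() {
        return loaderBuilder -> loaderBuilder.withStringList(
                DefaultDriverOption.METRICS_SESSION_ENABLED,
                Collections.singletonList("cql-requests"));
    }
}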
The DataStax Java driver exposes metrics via the Dropwizard Metrics library (both driver 3.x and 4.x), and these can be exposed as a Prometheus endpoint using standard Prometheus libraries, such as io.prometheus:simpleclient_dropwizard, which is part of the Prometheus Java client library.
Here is an example for driver version 4.x, but with small modifications it could work with 3.x as well. The main part is the following:
// Bridge the driver's Dropwizard metrics into Prometheus' default registry.
MetricRegistry registry = session.getMetrics()
        .orElseThrow(() -> new IllegalStateException("Metrics are disabled"))
        .getRegistry();
CollectorRegistry.defaultRegistry.register(new DropwizardExports(registry));
The rest is just creating the session, exposing the metrics via HTTP, and so on.

How to configure the ObjectMapper for Unirest in a spring boot project

I am using Unirest in a project, and it is working fine for me. However, I want to post some data and do not want to escape all the JSON, as it looks ugly and is a pain in the neck.
I found a few links on how to configure the ObjectMapper for Unirest, and they give this code:
Unirest.setObjectMapper(new ObjectMapper() {
    com.fasterxml.jackson.databind.ObjectMapper mapper =
            new com.fasterxml.jackson.databind.ObjectMapper();

    public String writeValue(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    public <T> T readValue(String value, Class<T> valueType) {
        try {
            return mapper.readValue(value, valueType);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
});
But no examples show where it is best to do this in a Spring Boot API project.
I tried to set it up in the main class method, but I get an error that 'setObjectMapper' cannot be resolved. I also tried to do this in the controller, but I get the same error.
My Gradle deps for these two libraries are:
// https://mvnrepository.com/artifact/com.mashape.unirest/unirest-java
compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.5'
compile 'com.fasterxml.jackson.core:jackson-databind:2.10.1'
Can anyone show me how to use the Jackson ObjectMapper with Unirest in a Spring Boot API project? I have been googling and reading docs for two days now and would appreciate some help.
Thank you in advance.
You have several issues here:
The version of unirest you're using (1.4.5) does not contain the feature to configure the object mapper; it was added later (see the GitHub PR). You should update to the latest version available on Maven Central, 1.4.9. This alone will fix your compilation problem.
You can keep your Unirest configuration code in the main method. However, if you want to use not the default Jackson ObjectMapper() but the one from the Spring context, it's better to create a small configuration class that injects the ObjectMapper:
@Configuration
public class UnirestConfig {

    @Autowired
    private com.fasterxml.jackson.databind.ObjectMapper mapper;

    @PostConstruct
    public void postConstruct() {
        Unirest.setObjectMapper(new ObjectMapper() {
            public String writeValue(Object value) {
                try {
                    return mapper.writeValueAsString(value);
                } catch (JsonProcessingException e) {
                    throw new RuntimeException(e);
                }
            }

            public <T> T readValue(String value, Class<T> valueType) {
                try {
                    return mapper.readValue(value, valueType);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
    }
}
Other than that, it looks like this library has changed its group ID; it is now com.konghq. You might want to consider updating, but the library API may have changed.
Update: for the latest version
compile group: 'com.konghq', name: 'unirest-java', version: '3.1.04'
the new API is Unirest.config().setObjectMapper(...)
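With the com.konghq version, the same Spring-injected mapper could be plugged in along these lines. This is a sketch against the 3.x kong.unirest.ObjectMapper interface, reusing the mapper field from the configuration class above; method signatures may differ slightly between releases:

// Sketch: adapt the Spring-managed Jackson mapper for the com.konghq 3.x API.
Unirest.config().setObjectMapper(new kong.unirest.ObjectMapper() {
    public String writeValue(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (com.fasterxml.jackson.core.JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    public <T> T readValue(String value, Class<T> valueType) {
        try {
            return mapper.readValue(value, valueType);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
});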

Removing/shutting down Firebase in a Java app (for hot redeploy)

I tried to use org.springframework.boot:spring-boot-devtools to speed up development.
My project uses Firebase to authenticate some requests. Firebase is initialized via:
@PostConstruct
public void instantiateFirebase() throws IOException {
    FirebaseOptions options = new FirebaseOptions.Builder()
            .setDatabaseUrl(String.format("https://%s.firebaseio.com", configuration.getFirebaseDatabase()))
            .setServiceAccount(serviceJson.getInputStream())
            .build();
    FirebaseApp.initializeApp(options);
}
After the context reloads on a .class file change, Spring reports this error:
Caused by: java.lang.IllegalStateException: FirebaseApp name [DEFAULT] already exists!
    at com.google.firebase.internal.Preconditions.checkState(Preconditions.java:173)
    at com.google.firebase.FirebaseApp.initializeApp(FirebaseApp.java:180)
    at com.google.firebase.FirebaseApp.initializeApp(FirebaseApp.java:160)
Which Firebase API allows deregistering/destroying a FirebaseApp, so that I can use it in @PreDestroy?
It looks like it is not possible to disable/shut down/reinitialize a Firebase app.
In my case it is fine to keep that instance in memory unchanged.
Depending on your requirements, you may use something as simple as:
@PostConstruct
public void instantiateFirebase() throws IOException {
    // We use only FirebaseApp.DEFAULT_APP_NAME, so the check is simple.
    if (!FirebaseApp.getApps().isEmpty())
        return;

    Resource serviceJson = applicationContext.getResource(String.format("classpath:firebase/%s", configuration.getFirebaseServiceJson()));
    FirebaseOptions options = new FirebaseOptions.Builder()
            .setDatabaseUrl(String.format("https://%s.firebaseio.com", configuration.getFirebaseDatabase()))
            .setServiceAccount(serviceJson.getInputStream())
            .build();
    FirebaseApp.initializeApp(options);
}
or filter the registered apps, like:
for (FirebaseApp app : FirebaseApp.getApps()) {
    if (app.getName().equals(FirebaseApp.DEFAULT_APP_NAME))
        return;
}
