We have Spring Boot 2.1.9 applications running as pods on our on-premise Kubernetes platform.
The application is integrated with Hazelcast 4.1 for caching.
When we have a single replica of the pod, everything works fine. But if we increase the replica count to > 1, the other replicas go into "CrashLoopBackOff" and fail to start with the below error.
```
Description:

An attempt was made to call a method that does not exist. The attempt was made from the following location:

    org.springframework.boot.actuate.metrics.cache.HazelcastCacheMeterBinderProvider.getMeterBinder(HazelcastCacheMeterBinderProvider.java:34)

The following method did not exist:

    'com.hazelcast.core.IMap com.hazelcast.spring.cache.HazelcastCache.getNativeCache()'

The method's class, com.hazelcast.spring.cache.HazelcastCache, is available from the following locations:

    jar:file:/home/wmconsole/userpreferences.jar!/BOOT-INF/lib/hazelcast-all-4.1.jar!/com/hazelcast/spring/cache/HazelcastCache.class

It was loaded from the following location:

    jar:file:/home/wmconsole/userpreferences.jar!/BOOT-INF/lib/hazelcast-all-4.1.jar!/

Action:

Correct the classpath of your application so that it contains a single, compatible version of com.hazelcast.spring.cache.HazelcastCache
```
We defined the Hazelcast service in the deployment-specific Service manifest and created the ServiceAccount.
Any idea why it works with a single replica and not with 2?
Are any more changes required on the application end or the Kubernetes infrastructure end?
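For what it's worth, the stack trace itself points at a version mismatch rather than anything replica-specific: the Spring Boot 2.1.x actuator binds Hazelcast cache metrics against the Hazelcast 3.x API (`com.hazelcast.core.IMap`), and that method signature no longer exists in Hazelcast 4.1. A minimal sketch of one workaround, assuming you want to keep Hazelcast 4.1 on Boot 2.1.x, is to exclude the cache-metrics auto-configuration so the binder is never invoked (the class name `UserPreferencesApplication` below is hypothetical):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.metrics.cache.CacheMetricsAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Excluding CacheMetricsAutoConfiguration stops the actuator from ever calling
// the Hazelcast-3.x-only HazelcastCache.getNativeCache() signature at startup.
@SpringBootApplication(exclude = CacheMetricsAutoConfiguration.class)
public class UserPreferencesApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserPreferencesApplication.class, args);
    }
}
```

The alternative is to align the versions instead: Hazelcast 3.12.x is the line Boot 2.1 was built against, or upgrade Spring Boot to a release that supports the Hazelcast 4 API.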
Related
Consider a K8s cluster with 3 pods, each running a Docker container of the same Spring Boot app providing web services, with Logback as the logging solution.
The log path for each Spring Boot app is the same, and each application does the same file logging, defined in application.properties. I wonder: without any volume configuration, will this cause a locking situation, with each Spring Boot app trying to write to the same file at the same time?
If the above speculation is true, what is the best practice? Below is what I can think of:
1) Create a different volume for each pod so the log files are written to different physical locations. I think this is overkill for logging.
2) Generate a unique file-name suffix so the log file name for each application is different; it could simply be a cut of the pod name. However, this solution requires passing dynamic parameters to the pod -> container -> application (see the sketch after this list).
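A minimal sketch of option 2) that avoids passing dynamic parameters at all: Kubernetes typically exposes the pod name to the container as the `HOSTNAME` environment variable, and Spring Boot resolves environment variables in property placeholders, so the unique suffix comes for free (the log path below is illustrative):

```properties
# application.properties -- HOSTNAME is the pod name on Kubernetes, so each
# replica writes to its own file. On Spring Boot 2.2+ use logging.file.name.
logging.file=/var/log/myapp/app-${HOSTNAME}.log
```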
I am working on a complex Java Spring Boot microservice-based application that comprises 30 services.
All are containerized, and from ECR the services are deployed into a Kubernetes namespace in AWS.
Every time, the namespace is purged and all services are re-deployed.
How can I update only one service inside a namespace… is it possible to do that kind of deployment?
Could someone please share sample configurations using Helm, or any useful links?
> How can I update only one service inside a namespace… is it possible to do that kind of deployment.
If you are executing helm upgrade, it should only update the resources that have changed.
You also need to understand how your charts pack the resources: if they use Kustomize-style generators, then updating Secrets, ConfigMaps, etc. will generate new names for those resources.
As a side effect, that "changes" the Deployment, and the result is a "full" deploy of all the dependent resources.
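As a hedged sketch of what this looks like in practice (the chart layout, release name, namespace, and image tag are assumptions, not from the question): giving each service its own chart and release means a helm upgrade renders only that one service's manifests:

```shell
# Upgrade just service-a; the other releases in the namespace stay untouched.
helm upgrade --install service-a ./charts/service-a \
  --namespace my-namespace \
  --set image.tag=1.4.2
```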
Let's say I have a service named "FooService" running in a Docker container and a second service named "BarService" running in a second Docker container. Both services register with Eureka (running in another Docker container). Is it possible to have the same application name for both services? E.g., http://localhost/myservice/foo should call the FooService and http://localhost/myservice/bar should call the BarService. The development environment is Spring Boot and the services are implemented as Spring RestControllers. Just putting "spring.application.name=myservice" in both bootstrap.properties files and then putting @RequestMapping("${spring.application.name}") on the RestController will not work, of course. But is it somehow possible to register the services with a unique identifier and still call them via a common URL path?
Yes, I think this is a very common use case. Say you develop service-a, version 1, and you want to deploy service-a, version 2 (canary or blue/green). You can deploy both versions and register both with Eureka, and traffic will be sent to both versions. After you verify version 2, you can shut down version 1.
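As a sketch of how both registrations can share one application name while staying distinguishable (values are illustrative; `eureka.instance.instance-id` and `${random.value}` are standard Spring Cloud / Spring Boot features):

```properties
# bootstrap.properties in both services: one logical name, unique instance ids.
spring.application.name=myservice
eureka.instance.instance-id=${spring.application.name}:${random.value}
```

Note that Eureka will then load-balance across all instances registered under myservice, so routing /foo and /bar to different backends would still be a separate concern.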
Briefing
I'm having some issues while setting up a Continuous Deployment environment for an application built using Spring Boot and Angular IO, using Shippable as CI and Elastic Beanstalk as the production environment.
The Current Scenario
1) The application JAR is correctly generated through Shippable (we use Heroku for staging)
2) The local JAR is correctly generated, and a manual deploy to Elastic Beanstalk works fine
The Problem
My problem is integrating Shippable to automate the deploy to Elastic Beanstalk.
I've followed this tutorial from shippable:
http://blog.shippable.com/how-to-deploy-to-elastic-beanstalk-part-1
and I ended up with a successful deploy from Shippable to Elastic Beanstalk, except that Shippable generated a .zip with the source code (that was actually the purpose of the tutorial and I hadn't noticed it, hehe), whereas I need the executable JAR to be deployed to my Elastic Beanstalk environment.
The Specific Question
So the question: is there a way to deploy my Spring Boot executable JAR to Elastic Beanstalk using the built-in Shippable integration? Or do I have to manually write the steps in shippable.yml and use eb deploy to make it work?
Thanks a lot!
Update 1
In this Amazon link:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html#eb-cli3-artifact
They specify a way to deploy an artifact instead of the source code.
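For reference, the file that page describes looks like this (a sketch: the artifact path is taken from the error message below, and is relative to wherever eb deploy is executed):

```yaml
# .elasticbeanstalk/config.yml
deploy:
  artifact: project/backend/target/myjar.jar
```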
Since Shippable calls eb deploy, creating this configuration file makes the Shippable integration run the deploy on the artifact passed as a parameter to eb deploy. I believe it's just a matter of finding where (in the file hierarchy) Shippable calls eb deploy from. I thought it was the root of the project, but it gives an ERROR:
```
ERROR: Application Version does not exist locally
(project/backend/target/myjar.jar). Try uploading the
Application Version again.
```
Does anyone know where Shippable runs the commands of the deploy section from (configured in shippable.yml; more info in the first link mentioned in this question)?
I am trying to install WebSphere using Puppet configuration management. I have installed the module ("puppet module install joshbeard-websphere") and started working with it.
I am facing 2 issues:
1) After "websphere::profile::dmgr" runs successfully, I still don't get the SOAP port. The documentation says that after a successful run on the client:
> When a DMGR profile is created, this module will use Puppet's exported resources to export a file resource that contains information needed for application servers to federate with it. This includes the SOAP port and the host name (fqdn).
Is there something I am missing?
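One thing worth checking, as an assumption on my part rather than something from the module docs: exported resources only work when PuppetDB is configured, and the DMGR node needs a completed agent run before the application-server node can collect anything. In generic terms the pattern looks like this (resource names and the port are illustrative, not the module's own):

```puppet
# On the DMGR node: export a file resource carrying federation info.
@@file { "/etc/dmgr_info_${::fqdn}":
  content => "soap_port=8879\n",   # 8879 is the usual DMGR SOAP default; assumed
  tag     => 'dmgr_federation',
}

# On each application-server node: collect everything exported with that tag.
File <<| tag == 'dmgr_federation' |>>
```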
2) The documentation also states: "This module supports creating JDBC providers and data sources. At this time, it does not support the removal of JDBC providers or data sources, or changing their configuration after they're created."