How can we run the Hazelcast management center on Kubernetes?
What I have done so far is deploy the hazelcast/management-center Docker image on our Kubernetes cluster, using a 'Deployment' on K8s that points to the hazelcast/management-center image. However, as seen in the management-center Pod log, it has started on localhost:8080 by default. The Hazelcast server is running as a Spring Boot application, and other applications are able to connect to it as Hazelcast clients.
The question is: how can we run/connect the Management Center to our Hazelcast cluster running in the same namespace in Kubernetes (with 2 members in the cluster)?
I suggest using the official Helm chart; it includes Management Center by default.
If that doesn't work for you, I would first check whether there is a k8s Service defined for the Deployment and, if so, whether there is a k8s Ingress exposing that Service.
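As a minimal sketch of such a Service, assuming the Deployment's Pods carry the label app: management-center and listen on port 8080 (both assumptions, adjust to your manifests):

# Hypothetical Service in front of the Management Center Deployment
apiVersion: v1
kind: Service
metadata:
  name: management-center
spec:
  selector:
    app: management-center   # must match the Deployment's Pod labels
  ports:
    - name: http
      port: 8080
      targetPort: 8080

With that in place you can reach the UI with kubectl port-forward svc/management-center 8080:8080, or expose it externally through an Ingress.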
Related
I'm new to Kubernetes and I'm learning about StatefulSets. For stateful applications, where the identity of Pods matters, we use StatefulSets instead of plain Deployments so each Pod can have its own persistent volume. Writes need to be directed to the master Pod, while read operations can be directed to the slaves. So pointing to the ClusterIP Service attached to the StatefulSet won't guarantee the replication; instead we need to use a headless Service that points to the master.
My questions are the following:
How do I edit application.properties in a Spring Boot project to use the slaves for read operations (normal ClusterIP Service) and the master for write/read operations (headless Service)?
In case that is unnecessary and the headless Service does this work for us, how does it work exactly, since it points to the master?
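For what it's worth, Kubernetes itself does not route writes to the master; a headless Service only gives each Pod a stable per-Pod DNS name (e.g. db-0.db-headless), so the application is typically configured with two URLs, one per Service. A minimal application.yml sketch under assumed names (StatefulSet db, headless Service db-headless, read Service db-read, PostgreSQL only as an example; the write/read keys are custom properties you would bind yourself, not standard Spring Boot keys):

# Hypothetical Spring Boot configuration: two datasources, one per Service.
# Which one a repository uses (read vs. write) still has to be decided in
# application code, e.g. via a routing DataSource.
spring:
  datasource:
    write:
      # per-Pod DNS name from the headless Service -> always the master (pod 0)
      url: jdbc:postgresql://db-0.db-headless.default.svc.cluster.local:5432/appdb
    read:
      # regular ClusterIP Service -> load-balanced across all replicas
      url: jdbc:postgresql://db-read.default.svc.cluster.local:5432/appdb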
I want to install Apache JMeter on PKS (Pivotal Container Service) for testing microservices on PCF.
I am not able to find any good resources; has anyone tried this and succeeded?
https://hub.docker.com/r/justb4/jmeter/
https://www.virtuallyghetto.com/2018/03/getting-started-with-vmware-pivotal-container-service-pks-part-1-overview.html
https://network.pivotal.io/products/pcfdev
Looking at the Pivotal Container Service main page:
Production-ready Kubernetes for the enterprise
So Pivotal Container Service seems to be using Kubernetes under the hood.
The process of installing JMeter into Kubernetes and/or Pivotal Container Service would be as easy as creating an image with your own JMeter installation (or using an existing one, such as the justb4/jmeter image linked above) and running that image in PKS, as sketched below.
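For example, a one-off load test could be run as a Kubernetes Job. The ConfigMap name, test-plan file, and image tag below are assumptions to adapt to your own test plan:

# Hypothetical Job that runs a JMeter test plan once and exits
apiVersion: batch/v1
kind: Job
metadata:
  name: jmeter-load-test
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: jmeter
          image: justb4/jmeter:latest          # pick a concrete tag from Docker Hub
          # the image's entrypoint is assumed to forward these args to jmeter
          args: ["-n", "-t", "/tests/test-plan.jmx", "-l", "/tmp/results.jtl"]
          volumeMounts:
            - name: test-plan
              mountPath: /tests
              readOnly: true
      volumes:
        - name: test-plan
          configMap:
            name: jmeter-test-plan             # ConfigMap holding test-plan.jmx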
I have my service (a Spring Boot Java application) running in a K8s cluster with 3 replicas (Pods). My use case requires me to deploy application contexts dynamically.
I need to know, through service discovery, which context is deployed on which of the 3 Pods. Is there a way to register custom metadata for a service in Kubernetes service discovery, like we do in Eureka using eureka.instance.metadata-map?
In terms of Kubernetes, we have Deployments and Services.
A Deployment is a description of the desired state for a ReplicaSet, which creates Pods. A Pod consists of one or more containers.
Service is an abstraction which defines a logical set of Pods and a policy by which to access them.
In Eureka, you can set the configuration of instances dynamically and reconfigure them on the fly, which does not match the Kubernetes design.
In Kubernetes, when you use 1 Deployment with 3 replicas, all 3 Pods are expected to be identical. There is no option to export per-Pod metadata or to split the Pods behind a Service into different groups, because a ReplicaSet, which contains Pods with the same labels, is itself the group.
Therefore, the better idea is to use 3 different Deployments with 1 replica each, all of them with the same configuration but distinguishable by their labels (see the sketch below). Or use Spring Boot features, like its service discovery support, if you want to reload the application context on the fly.
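A minimal sketch of the first approach, with one Deployment/Service pair per application context; all names and the context label are assumptions, and the pair would be repeated for context-b and context-c:

# Hypothetical Deployment for one application context; the context label carries the metadata
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-context-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
      context: context-a
  template:
    metadata:
      labels:
        app: myservice
        context: context-a
    spec:
      containers:
        - name: myservice
          image: registry.example.com/myservice:latest   # assumed image
          ports:
            - containerPort: 8080
---
# Service addressing only the Pods of that context
apiVersion: v1
kind: Service
metadata:
  name: myservice-context-a
spec:
  selector:
    app: myservice
    context: context-a
  ports:
    - port: 8080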
We have a collection of microservices built with Spring Boot, using Spring Cloud Netflix. Up until now, they've been packaged as RPMs and deployed to VMs. Using Eureka has allowed for service registration/discovery (obviously), and our cross-microservice interaction is done using Spring's RestTemplate with a virtual IP (VIP), like the following:
http://foo-service/<PATH_TO_RESOURCE>
Client-side load-balancing was another benefit.
Now we are looking to use Docker and run within Rancher. I'm wondering whether using Eureka still makes sense in this environment.
Within Rancher, if the Service is named 'foo-service', that name is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
Plus, it seems Rancher will know about Containers coming and going sooner than Eureka would.
I'm struggling to find a solid reason to keep Eureka.
I'm not very familiar with Rancher; AFAIK it enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos or Kubernetes to manage their containers.
So it finally comes down to whether your infrastructure platform provides service discovery functionality or not (I know Docker Swarm and Kubernetes provide service discovery; I'm not sure about the others). If you get free service discovery out of the box from your platform and you don't need client-side load balancing, Eureka is overkill.
Here is an answer to this question in the context of Kubernetes:
https://stackoverflow.com/a/40568412/6785908
Quoting the relevant parts:
On the Kubernetes platform, using Eureka (or Consul/ZooKeeper or any other service registry) for service discovery is overkill; you can achieve the same (arguably) functionality with Kubernetes Services (+ the kube-dns add-on), which act as a referable IP address and a load balancer (not client-side) for the ephemeral Pods. Read this article by Christian Posta. If you want to refer to your service by its name instead of its IP address, add KubeDNS (a Kubernetes add-on) to your cluster.
http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/
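As a minimal sketch, the Kubernetes equivalent of the Eureka VIP is simply a Service whose name becomes the DNS name in the URL, assuming the backing Deployment's Pods are labeled app: foo-service and listen on port 8080 (both assumptions):

# Hypothetical Service; with kube-dns/CoreDNS, other Pods can call
# http://foo-service/<PATH_TO_RESOURCE> and traffic is load-balanced
# (server side) across the backing Pods.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  selector:
    app: foo-service
  ports:
    - port: 80
      targetPort: 8080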
Edit
Since you said,
Within Rancher, if the Service is named 'foo-service', it is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
So you are getting both service discovery and (server-side) load balancing from your platform for free. If you don't have a compelling reason to do client-side load balancing, forget about Eureka.
I was able to successfully register a single Node.js app instance in Eureka using the Netflix Sidecar app. Both the Node.js app and the Sidecar bridge app are running in Cloud Foundry.
Result:
SAMPLE-NODEJS n/a (1) (1) UP (1)
When I scale the Node.js app to 3 instances, I cannot see the scaled instances in the Eureka service registry. It still shows 1 instance.
Can someone help me to do this? I want to register all the instances of the Node.js app with the Eureka service registry via the Sidecar bridge app.
Sidecar, like the Eureka Java client, is built to register only one application with the Eureka server at a time. It is not a Eureka proxy for multiple applications. I built a proof-of-concept proxy that will do what you want.
This happens because it is not your Node application that registers with Eureka, but your Sidecar, which still runs as a single instance.
Simple solution
Scale your Sidecars together with your Node apps. This is quite straightforward, in particular when using container-based deployment: you can simply craft a Docker container that starts both a Node instance and a Sidecar, so scaling one scales the other (see the sketch below).
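If your container platform is Kubernetes (as in the rest of this thread), an equivalent way to keep that pairing is to co-locate the two containers in the same Pod, so that scaling the workload scales both together. A minimal sketch with assumed image names, ports, and labels:

# Hypothetical Deployment pairing each Node.js container with its own Sidecar
# container in the same Pod; scaling replicas scales both together.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-nodejs
  template:
    metadata:
      labels:
        app: sample-nodejs
    spec:
      containers:
        - name: nodejs
          image: registry.example.com/sample-nodejs:latest   # assumed Node.js app image
          ports:
            - containerPort: 3000
        - name: sidecar
          image: registry.example.com/netflix-sidecar:latest # assumed Sidecar bridge image
          ports:
            - containerPort: 5678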
Load balancing
You can extend your Sidecar application to load-balance traffic to your scaled Node instances. Your Node app will then still show up as a single instance in Eureka, but requests will still be load-balanced across the scaled Node instances.