If I have two namespaces, A and B, referencing a service in one namespace from the other using the standard Kubernetes DNS convention, <svc-name>.<namespace>.svc.cluster.local, fails.
For example, from a pod deployed in namespace B, referencing the db service deployed in namespace A:
dbservice.A.svc.cluster.local
fails to resolve.
Is there some additional configuration needed for the DNS service to make this work?
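One thing worth checking first: Kubernetes namespace names must be lowercase DNS-1123 labels, so a namespace literally named `A` would be rejected, and the name used in the FQDN must exactly match the actual (lowercase) namespace name. Assuming the namespaces are really `a` and `b` and the service is `dbservice`, a hypothetical one-off debug pod can verify resolution:

```yaml
# Throwaway pod in namespace b that checks whether the service
# name in namespace a resolves from inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
  namespace: b
spec:
  restartPolicy: Never
  containers:
    - name: dns-check
      image: busybox:1.36
      command: ["nslookup", "dbservice.a.svc.cluster.local"]
```

`kubectl logs dns-check -n b` then shows whether resolution succeeds; if it does not, check that the CoreDNS (or kube-dns) pods in kube-system are healthy.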
Related
I am struggling with a "strange" issue: the same Helm chart is installed on the same Kubernetes cluster (4 nodes) in different namespaces (dev vs. int).
It is a Spring Boot application exposing the actuator/health endpoint for liveness and readiness probes. The application is secured, but that endpoint is not (it is accessible without a token).
This works in the dev namespace but fails in the int namespace with Permission Denied.
Readiness probe failed: Get "http://*.*.*.*:8080/actuator/health": context deadline exceeded
What I tried:
ssh'd into a container in the namespace (and network) and checked whether the health endpoint is reachable via the pod IP: in dev it works, in int I get connection refused (which is why the probe fails).
both namespaces have the same NetworkPolicy resource (even though, in my understanding, it should not matter)
both namespaces have the same Ingress resource
the Spring Boot app runs with different profiles (application-dev and application-int), but the logs show that the endpoint is exposed and the application does start.
Any suggestions on what I am missing? Which logs should I check?
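Connection refused on the pod IP usually means nothing is listening on that port in that pod, so comparing the effective probe and port configuration between the two releases is a good first step (`helm get manifest <release> -n <namespace>` shows the rendered values). A sketch of the probe block to diff, where port 8080 and the timing values are assumptions taken from the error message:

```yaml
# Hypothetical probe configuration to compare between the dev and int releases.
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30   # give Spring Boot time to start
  periodSeconds: 10
  timeoutSeconds: 5         # "context deadline exceeded" means this limit was hit
```

It is also worth diffing `management.server.port` between application-dev and application-int: a profile that moves the actuator onto a separate management port would produce exactly this connection-refused symptom on 8080.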
Set up 1 (not working):
I have a task running in an ECS cluster, but it goes down because of a failing health check immediately after it starts.
My service is Spring Boot based and has both a traffic port (for service calls) and a management port (for the health check). I have permitAll() permission for the "*/health" path.
I also configured this by selecting the override-port option in the target group's health check tab.
Set up 2 (working fine):
I have the same setup in my docker-compose file, and I can access the health check endpoint in my local container.
This is how I defined it in my compose file:
service:
image: repo/a:name
container_name: container-1
ports:
- "9904:9904" # traffic port
- "8084:8084" # management port
So, I tried configuring the management port in the container section of the task definition. I then tried updating the corresponding service to this latest revision of the task definition, but when I save the service, I get an error. Is this the right way of handling this?
Error in ECS console:
Failed updating Service : The task definition is configured to use a dynamic host port,
but the target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:{accountId}:targetgroup/ecs-container-tg/{someId} has a health check port specified.
Possible resolutions:
Is there a way I can specify this port mapping in the Dockerfile?
Another way to configure the management port mapping in the container config of the task definition within ECS? (Preferred)
Get rid of Spring Boot's actuator endpoint and implement our own health endpoint? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
The task definition is configured to use a dynamic host port but target has a health check port specified.
Based on the error, it seems you have configured dynamic port mapping in the task definition; you can verify this in the task definition.
understanding-dynamic-port-mapping-in-amazon-ecs
With dynamic ports, the ECS scheduler assigns and publishes a random port on the host, which will be different from the fixed container port, so change the health check setting to "traffic port" accordingly.
That will resolve the health check issue. Now, to your questions:
Is there a way I can specify this port mapping in the Dockerfile?
No. Port mapping happens at run time, not at build time; you specify it in the task definition. (A Dockerfile's EXPOSE instruction is documentation only and does not publish a port.)
Another way to configure the management port mapping in the container config of the task definition within ECS? (Preferred)
You can assign a static port mapping, meaning the published host port and the container port are the same (e.g. 8084:8084); with a static mapping the health check can point at that fixed port and will work.
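As a sketch, a static mapping in the container definition of the task definition might look like the following (the names and ports are taken from the compose file above; adjust them to your setup):

```json
{
  "containerDefinitions": [
    {
      "name": "container-1",
      "image": "repo/a:name",
      "portMappings": [
        { "containerPort": 9904, "hostPort": 9904, "protocol": "tcp" },
        { "containerPort": 8084, "hostPort": 8084, "protocol": "tcp" }
      ]
    }
  ]
}
```

With bridge networking, setting hostPort equal to containerPort gives a static mapping; setting hostPort to 0 (or omitting it) gives the dynamic mapping that conflicts with a fixed health check port.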
Get rid of Spring Boot's actuator endpoint and implement our own health endpoint? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
A health check is a simple HTTP GET call; the ALB just expects a 200 HTTP status code in response, so you could create a simple endpoint that returns 200.
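To illustrate how little a load-balancer health check actually needs (this is not a replacement for the actuator's detail), here is a minimal sketch in plain Java; the class name, port, and path are assumptions:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthServer {

    // Starts a tiny HTTP server whose only job is to answer
    // the load balancer's health check with HTTP 200.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8084); // 8084 = the management port from the compose file above (an assumption)
        System.out.println("Health endpoint listening on :8084/health");
    }
}
```

Anything that answers GET /health with a 200 satisfies the target group; the response body is irrelevant to the ALB.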
So, after two days of trying different things:
In the task definition, the networking mode should be "bridge".
In the task definition, leave the task-level CPU and memory units empty; providing them at the container level is enough.
I have a question about load balancing with Spring and Ribbon.
I have a microservices architecture with several services, say A, B, C, and D. All of the services are deployed in the cloud.
A load balancer sits in front of the services and forwards requests to the corresponding service.
All of the services are implemented in Spring Boot.
A Docker image is created per service; each service is containerised. In my local setup I am able to start all of my services in my local Kubernetes cluster. For example:
kubectl get deployment
will result in:
NAME READY UP-TO-DATE AVAILABLE AGE
A 1/1 1 1 9h
B 2/2 2 2 59m
C 1/1 1 1 9h
....
Running in K8s, service B can access service A, C, or any other service in the namespace with:
public String getResponseFromService() {
    return this.restTemplate.getForObject("http://service-a:8080/deals", String.class);
}
If I have N instances of service A, a round-robin rule is activated by default and a random instance is picked each time service B invokes service A.
Question:
Does this mean that k8s itself acts as a load balancer and redirects the requests coming from service B to one of the instances of service A?
If that is true, why do I need the Ribbon client-side load balancer at all? I know that it uses a discovery client to ask k8s which services are registered in the service registry, but if I do not care about the registry, do I need Ribbon at all?
I need several instances of each service, and communication between services through a single endpoint (as in the example above).
Apologies for the question, but I am pretty new to Spring Cloud Kubernetes. I have read a lot but still cannot get this part.
Thanks in advance!
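For context on why `http://service-a:8080` already spreads load: that name resolves to a ClusterIP Service, and kube-proxy distributes connections across the ready pods behind it. A sketch of such a Service, with the name and labels assumed to match deployment A:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a       # gives the DNS name used in the RestTemplate call
spec:
  selector:
    app: service-a      # assumed label on the pods of deployment A
  ports:
    - port: 8080        # port the ClusterIP listens on
      targetPort: 8080  # container port
```

Note that kube-proxy balances at the connection level; a client-side balancer like Ribbon only adds value for per-request concerns such as custom rules or retries.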
I have my service (a Spring Boot Java application) running in a K8s cluster with 3 replicas (pods). My use case requires me to deploy application contexts dynamically.
I need to know which context is deployed on which of the 3 pods through service discovery. Is there a way to register custom metadata for a service in K8s service discovery, like we do in Eureka using eureka.instance.metadata-map?
In terms of Kubernetes, we have Deployments and Services.
A Deployment is a description of the desired state for a ReplicaSet, which creates Pods. A Pod consists of one or more containers.
A Service is an abstraction which defines a logical set of Pods and a policy by which to access them.
In Eureka, you can set the configuration of instances dynamically and reconfigure them on the fly, which does not match Kubernetes' design.
In Kubernetes, when you use one Deployment with 3 replicas, all 3 Pods should be identical. There is no feature for exporting per-Pod metadata under a Service into different groups, because a ReplicaSet contains Pods with the same labels and is itself the group.
Therefore, the better idea is to use 3 different Deployments with 1 replica each, all with the same configuration. Or use some of Spring Boot's features, like its service discovery, if you want to reload the application context on the fly.
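A sketch of the per-context Deployment idea, where a label plays the role of eureka.instance.metadata-map; the names, label keys, and image are assumptions:

```yaml
# One Deployment per application context; the "context" label can be
# selected on, or read back via the Kubernetes API or a discovery client.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-context-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
      context: context-a
  template:
    metadata:
      labels:
        app: myservice
        context: context-a
    spec:
      containers:
        - name: myservice
          image: repo/myservice:latest   # hypothetical image
          ports:
            - containerPort: 8080
```

Spring Cloud Kubernetes can surface pod labels like these as instance metadata through its DiscoveryClient, which gets you close to the Eureka behaviour.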
I have 2 EARs, say Ear1 and Ear2, for my application, which are deployed in clusters. Ear2 contains an EJB that is called from Ear1. An EJB reference is required for communication between Ear2 and Ear1. I am setting the value below as the
Target Resource JNDI Name: corbaloc::ClusterServer1:2810,:ClusterServer2:2810/cell/clusters/Cluster1/ejb/com/mycompanyName/projectName/ejb/facade/EjbFacadeHome
But I am getting the error below:
Caused by: javax.naming.ServiceUnavailableException: A communication failure occurred while attempting to obtain an initial context with the provider URL: "corbaloc::mums00100251.in.net.intra:2810,:mums00100392.in.net.intra:2810/cell/clusters/Cluster1/ejb/com/bnpparibas/tradefinance/ejb/facade/EjbFacadeHome". Make sure that any bootstrap address information in the URL is correct and that the target name server is running.
Please help.
The correct format for referencing a remote EJB with WebSphere Application Server 6.1 in this case would be:
corbaloc:iiop:mums00100251.in.net.intra:2810/ejb/com/bnpparibas/tradefinance/ejb/facade/EjbFacadeHome,iiop:mums00100392.in.net.intra:2810/ejb/com/bnpparibas/tradefinance/ejb/facade/EjbFacadeHome
Port 2810 suggests you tried to use the bootstrap port of either the node agents or the deployment manager. I would check the ports (you can find the BOOTSTRAP port in the management console under the Ports section of the server preferences), and if they still fail, use the actual application servers' bootstrap ports.
There could also be a scoping issue that would mandate that: if you deployed your application at the Cluster scope, it is possible that only the naming service in the cluster members can actually resolve the EJB.