GKE LoadBalancer External IP address cannot be reached - ansible

I deployed my containerized application to Google Kubernetes Engine using Ansible.
I created a Pod for the application using a Deployment, and I specified containerPort as 8080. This seems to be working fine.
- name: Create k8s pod for nginx
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ app }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: hello-dashboard
                image: "{{ image_name }}"
                # This app listens on port 8080 for web traffic by default.
                ports:
                  - containerPort: 8080
                env:
                  - name: PORT
                    value: "8080"
Tracking the deployment with
kubectl get deployments --namespace=nginx
shows the deployment is READY and AVAILABLE:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           34m
Checking the pods created by the deployment:
kubectl get pods --namespace=nginx
This also shows the pod was created:
NAME                    READY   STATUS    RESTARTS   AGE
nginx-cb894bfc5-trnrk   1/1     Running   0          33m
Now, when I check the LoadBalancer service with
kubectl get services --namespace=nginx
I see the service was also created and assigned an external IP:
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
nginx   LoadBalancer   10.99.240.181   35.242.130.109   80:31005/TCP   33m
But the problem is that I can't access the deployed application using the external IP of the LoadBalancer; the browser tells me the site cannot be reached.

Most likely this is an issue with your Kubernetes Service or Deployment. GKE will automatically provision the firewall rules required for the ports mapped to the Service resource.
Ensure that you have exposed the correct port on your Service and mapped it to a valid port on your Deployment's Pods. Note also that the firewall rule required is for port 31005 (the nodePort), since that is the port accepting traffic from the load balancer.
Ensure you allow incoming traffic as follows:
From the internet to the load balancer on TCP port 80 (the Service port shown above).
From the load balancer to all Kubernetes nodes on TCP port 31005 (the nodePort).
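If the Service wasn't created with the intended mapping, here is a sketch of what it could look like as an Ansible task, reusing the question's variables; the port 80 -> targetPort 8080 split matches the kubectl output above, and the names are assumptions:
- name: Create LoadBalancer service for the app
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: "{{ app }}"
        namespace: "{{ namespace }}"
      spec:
        type: LoadBalancer
        selector:
          app: "{{ app }}"   # must match the pod labels from the Deployment
        ports:
          - protocol: TCP
            port: 80         # port the load balancer accepts traffic on
            targetPort: 8080 # containerPort the app listens on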

I think there is a port mismatch somewhere, e.g. the port the application listens on inside the container doesn't line up with the Service's port/targetPort configuration (service.yaml).
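One way to check for such a mismatch end to end (commands assume the names from the question):
# Inspect the Service's port and targetPort
kubectl get service nginx --namespace=nginx -o yaml
# An empty ENDPOINTS column means the selector matches no pods,
# or the targetPort doesn't correspond to an open containerPort
kubectl get endpoints nginx --namespace=nginx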

Related

Call a rest api from one pod to another pod in same kubernetes cluster

In my k8s cluster I have two pods, podA and podB, in the same cluster. The microservice on podB is a Spring Boot REST API. The microservice on podA has the IP and port of podB in its application.yaml, so every time podB is recreated its IP changes, which forces us to change the IP in podA's application.yml. Please suggest a better way.
My limitation is: I can't change the code of podA.
A Service will provide a consistent DNS name for accessing Pods.
An application should never address a Pod directly unless you have a specific reason to (custom load balancing is one I can think of, or StatefulSets where pods have an identity).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
You will then have a consistent DNS name to access any Pods that match the selector:
my-service.default.svc.cluster.local
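Since the question says podB's address lives in podA's application.yaml rather than in code, pointing that file at the Service name should be enough. A sketch with hypothetical property names:
# application.yaml of podA; the podb.* keys are placeholders for
# whatever properties the file already defines
podb:
  host: my-service.default.svc.cluster.local
  port: 80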
That's what Services are for. Take a postgres Service:
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
You can use postgres-service in other pods instead of referring to the pod's IP address. You also get the advantage that k8s does some load balancing for you.
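For instance, a Spring Boot pod in the same namespace could use the Service name directly in its JDBC URL (the database name mydb is an assumption):
spring.datasource.url=jdbc:postgresql://postgres-service:5432/mydb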

Minikube VM hyperkit: Spring Boot: connect to local machine

I have a minikube cluster running on Mac OS X and a simple Spring Boot REST API that connects to Redis and MongoDB, which I have installed and running locally.
I don't wish to run Redis / MongoDB in a Docker container.
I will probably run them remotely in the cloud, therefore I would probably just connect to an external IP address.
What I don't understand is what IP address I should use to connect to my localhost machine.
I start up my Minikube with VM hyperkit.
Edit:
I also tried to start using virtualbox:
minikube start --vm-driver=virtualbox
In my spring boot application, I've configured:
spring.data.mongodb.host = 10.0.2.2
spring.redis.host = 10.0.2.2
But still, I get connection errors:
The same configuration works when I run the application locally on my host machine.
For the sake of completeness, this is my yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api
  labels:
    app: posts-api
    env: dev
spec:
  type: NodePort
  selector:
    app: posts-api
  ports:
    - protocol: TCP
      port: 8083
      name: http
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: posts-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: posts-api
    spec:
      containers:
        - name: posts-api
          image: kimgysen/posts-api:latest
          ports:
            - containerPort: 8083
          livenessProbe:
            httpGet:
              path: /health
              port: 8083
            initialDelaySeconds: 120
            timeoutSeconds: 3
I'll give you the answer I gave to someone with the same problem (different tech):
Kubernetes pod unable to connect to rabbit mq instance running locally
Replace the IP and port number, and the Service and Endpoints names as appropriate.
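The pattern from that answer is a selector-less Service plus a manually defined Endpoints object pointing at the host machine. Here is a sketch for MongoDB; the name mongo is an assumption, and 192.168.64.1 is typically the host side of the hyperkit network, so verify it on your machine:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo              # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.64.1   # the host as seen from the minikube VM (assumption)
    ports:
      - port: 27017
The pod can then use spring.data.mongodb.host = mongo, and the same pattern applies to Redis on 6379.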

assign static IP to LoadBalancer service using k8s on aws

Objective: create a k8s LoadBalancer service on AWS whose IP is static
I have no problem accomplishing this on GKE by pre-allocating a static IP and passing it in via loadBalancerIP attribute:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dave
spec:
  loadBalancerIP: 17.18.19.20
...etc...
But doing the same in AWS results in the externalIP stuck as <pending> and an error in the Events history.
Removing the loadBalancerIP value allows k8s to spin up a Classic LB:
$ kubectl describe svc dave
Type:                  LoadBalancer
IP:                    100.66.51.123
LoadBalancer Ingress:  ade4d764eb6d511e7b27a06dfab75bc7-1387147973.us-west-2.elb.amazonaws.com
...etc...
but AWS explicitly warns me that the IPs are ephemeral (there are sometimes two), and Classic ELBs don't seem to support attaching static IPs.
Thanks for your time
As noted by @Quentin, AWS Network Load Balancer now supports Kubernetes:
https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/
Network Load Balancing in Kubernetes
Included in the release of Kubernetes 1.9, I added support for using the new Network Load Balancer with Kubernetes services. This is an alpha-level feature, and as of today is not ready for production clusters or workloads, so make sure you also read the documentation on NLB before trying it out. The only requirement to expose a service via NLB is to add the annotation service.beta.kubernetes.io/aws-load-balancer-type with the value of nlb.
A full example looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Eureka and Kubernetes

I am putting together a proof of concept to help identify gotchas using Spring Boot/Netflix OSS and Kubernetes together. This is also to prove out related technologies such as Prometheus and Grafana.
I have a Eureka service set up, which starts with no trouble within my Kubernetes cluster. It is named discovery and was given the name "discovery-1551420162-iyz2c" when added to K8s.
For my config server, I am trying to use Eureka based on a logical URL, so in my bootstrap.yml I have:
server:
  port: 8889
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/xyz/microservice-config
and I am starting this using
kubectl run configserver --image=xyz/config-microservice --replicas=1 --port=8889
This service ends up running as configserver-3481062421-tmv4d. I then see exceptions in the config server logs as it tries, and fails, to locate the eureka instance.
I have the same setup working locally using docker-compose with links, and it starts the various containers with no trouble.
discovery:
  image: xyz/discovery-microservice
  ports:
    - "8761:8761"
configserver:
  image: xyz/config-microservice
  ports:
    - "8888:8888"
  links:
    - discovery
How can I set up something like eureka.client.serviceUri so my microservices can locate their peers without knowing fixed IP addresses within the K8s cluster?
How can I set up something like eureka.client.serviceUri?
You have to have a Kubernetes Service on top of the eureka pods/deployments, which will then provide you a referable IP address and port number. Then use that referable address to look up the Eureka service, instead of a pod's address.
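A minimal sketch of such a Service; the selector assumes the pods created by kubectl run discovery carry the run=discovery label, which was the label kubectl run applied by default at the time:
apiVersion: v1
kind: Service
metadata:
  name: discovery
spec:
  selector:
    run: discovery   # label applied by "kubectl run discovery ..." (assumption)
  ports:
    - protocol: TCP
      port: 8761
      targetPort: 8761
With this in place, the defaultZone URL http://discovery:8761/eureka/ from the bootstrap.yml above resolves through cluster DNS.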
To address further question about HA configuration of Eureka
You shouldn't have more than one pod/replica of Eureka per k8s service (remember, pods are ephemeral, you need a referable IP address/domain name for eureka service registry). To achieve high availability (HA), spin up more k8s services with one pod in each.
Eureka service 1 --> a single pod
Eureka Service 2 --> another single pod
..
..
Eureka Service n --> another single pod
So now you have a referable IP/domain name (the IP of the k8s Service) for each of your Eureka instances, and they can register with each other.
Feeling like it's overkill?
If all your services are in the same Kubernetes namespace, you can achieve everything (well, almost everything, except client-side load balancing) that Eureka offers through a k8s Service + the KubeDNS add-on. Read this article by Christian Posta.
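As a quick illustration of what plain KubeDNS buys you, any pod in the namespace can reach a sibling service by its Service name alone; the service name and path here are assumptions:
# From inside any pod in the same namespace:
curl http://configserver:8889/health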
Edit
Instead of Services with one pod each, you can make use of StatefulSets, as Stefan Ocke pointed out.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Regarding HA configuration of Eureka in Kubernetes:
You can (meanwhile) use a StatefulSet for this instead of creating a service for each instance. The StatefulSet guarantees stable network identity for each instance you create.
For example, the deployment could look like the following yaml (StatefulSet + headless Service).
There are two Eureka instances here, according to the DNS
naming rules for StatefulSets (assuming namespace is "default"):
eureka-0.eureka.default.svc.cluster.local and
eureka-1.eureka.default.svc.cluster.local
As long as your pods are in the same namespace, they can reach Eureka also as:
eureka-0.eureka
eureka-1.eureka
Note: The docker image used in the example is from https://github.com/stefanocke/eureka. You might want to choose or build your own.
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
    - port: 8761
      name: eureka
  clusterIP: None
  selector:
    app: eureka
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
        - name: eureka
          image: stoc/eureka
          ports:
            - containerPort: 8761
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
            - name: _JAVA_OPTIONS
              value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
            - name: EUREKA_CLIENT_REGISTERWITHEUREKA
              value: "true"
            - name: EUREKA_CLIENT_FETCHREGISTRY
              value: "true"
            # The hostnames must match the eureka serviceUrls, otherwise the replicas are reported as unavailable in the eureka dashboard
            - name: EUREKA_INSTANCE_HOSTNAME
              value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order. We just need the stable network identity
  podManagementPolicy: "Parallel"
@Stefan Ocke I'm trying the same setup, but with my own image of the Eureka server, and I keep getting this error:
Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
2019-09-27 06:27:03.363 ERROR 1 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://eureka-1.eureka:8761/eureka/}
Here are configurations:
Eureka Spring Properties:
server.port=${EUREKA_PORT}
spring.security.user.name=${EUREKA_USERNAME}
spring.security.user.password=${EUREKA_PASSWORD}
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.instance.prefer-ip-address=false
eureka.server.wait-time-in-ms-when-sync-empty=0
eureka.server.eviction-interval-timer-in-ms=15000
eureka.instance.leaseRenewalIntervalInSeconds=30
eureka.instance.leaseExpirationDurationInSeconds=30
eureka.instance.hostname=${EUREKA_INSTANCE_HOSTNAME}
eureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
StatefulSet Config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
        - name: eureka
          image: "my-image"
          command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/eureka-service.jar"]
          ports:
            - containerPort: 8761
          env:
            - name: EUREKA_PORT
              value: "8761"
            - name: EUREKA_USERNAME
              value: "theusername"
            - name: EUREKA_PASSWORD
              value: "thepassword"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: EUREKA_INSTANCE_HOSTNAME
              value: ${MY_POD_NAME}.eureka
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  selector:
    app: eureka
  ports:
    - port: 8761
      targetPort: 8761
Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: eureka
              servicePort: 8761
You have to install the Kubernetes kube-dns server to resolve names to their IPs, and then expose your eureka pods as a Service (see the Kubernetes docs for more info on how to create DNS and Services).
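Once kube-dns is running, you can verify that the Service name resolves from inside the cluster; a quick check (busybox:1.28 is used because nslookup is known to work in that image):
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup eureka.default.svc.cluster.local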
@random_dude, what would happen if I create 2 or 3 replicas of Eureka? It turned out that when I mount a microservice 'X', it gets registered in all Eureka replicas, but when it goes down, only one replica gets the update! The others still consider the microservice instance as running.
I got exactly this problem, and it was resolved by adding an environment variable for the pod. This has the answer. A sample env variable for my pod is shown below,
I'm wondering where you put that config: is it on the service registry (on the Eureka side), or on the client side (where we want to connect to Eureka)?
My current case is that all configuration is placed in a repo, the Eureka config as well, which makes the config static.

Kubernetes - Vagrant and NodePort

I'm trying to run the kubernetes guestbook app on Vagrant on my local Mac OS X machine.
I have all the nodes (master and node-1) running, by running the following:
./kubectl.sh create -f ../examples/guestbook/all-in-one/guestbook-all-in-one.yaml
As I'm running locally, I also changed the above yaml file to use NodePort instead of LoadBalancer.
Running the following:
./kubectl.sh describe service frontend returns the following:
Name:              frontend
Labels:            app=guestbook,tier=frontend
Selector:          app=guestbook,tier=frontend
Type:              NodePort
IP:                10.247.127.146
Port:              <unnamed>   80/TCP
NodePort:          <unnamed>   31030/TCP
Endpoints:         10.246.9.12:80,10.246.9.13:80,10.246.9.7:80
Session Affinity:  None
No events.
If I try http://10.247.127.146:31030, it doesn't connect to the guestbook app.
Is there anything I'm doing wrong?
Thanks
nodePort is the port opened on every Kubernetes node. 10.247.127.146 is the cluster IP, which is only reachable from inside the cluster, so combining it with the NodePort won't work from your Mac.
You need to find the IP address of your Kubernetes node, and then http://<nodeIP>:31030 should work.
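For example (a sketch; with the Vagrant setup the node IP comes from the kubectl output):
# List the nodes and their addresses; use the INTERNAL-IP (or EXTERNAL-IP, if set)
kubectl get nodes -o wide
# Then hit the guestbook through the node's IP and the NodePort
curl http://<nodeIP>:31030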
