Not able to call external resources through kubernetes ingress - elasticsearch

I am trying to configure Ingress resources in Kubernetes, and I want to know if I can access external resources through a Kubernetes Ingress (for example, I installed Kibana on a virtual machine and I want to reach it through an Ingress as below):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: service1
          servicePort: 1000
      - path: "/test"
        backend:
          serviceName: service2.test
          servicePort: 2000
      - path: "/kibana"
        backend:
          serviceName: <ip-address>
          servicePort: 9092
Is this the right way of calling external resources, or can we not initiate such a call because the resource lives outside of Kubernetes? I am trying to reach it as test.com/kibana.
Please suggest.

For external resources you should create an Endpoints object.
This is explained in the documentation under Services without selectors:
Services most commonly abstract access to Kubernetes Pods, but they can also abstract other kinds of backends. For example:
You want to have an external database cluster in production, but in your test environment you use your own databases.
You want to point your Service to a Service in a different Namespace or on another cluster.
You are migrating a workload to Kubernetes. Whilst evaluating the approach, you run only a proportion of your backends in Kubernetes.
In any of these scenarios you can define a Service without a Pod selector. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Because this Service has no selector, the corresponding Endpoint object is not created automatically. You can manually map the Service to the network address and port where it’s running, by adding an Endpoint object manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 9376
So once you add the Endpoints object and set up the Service for it, you will be able to use it inside an Ingress.
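For the Kibana case from the question, a minimal sketch could look like the following (the name external-kibana is arbitrary, 192.0.2.42 stands in for the VM's real IP, and port 9092 is taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: external-kibana
spec:
  ports:
    - protocol: TCP
      port: 9092
      targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  # must have the same name as the Service above
  name: external-kibana
subsets:
  - addresses:
      - ip: 192.0.2.42 # IP of the VM running Kibana
    ports:
      - port: 9092
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.com
      http:
        paths:
          - path: /kibana
            backend:
              serviceName: external-kibana
              servicePort: 9092
Depending on how Kibana is configured, you may also need a path rewrite (or Kibana's server.basePath setting) for it to work under /kibana.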

Related

How to expose CockroachDB using Ingress on Google Cloud for external load testing

For a distributed databases project in my studies I have to deploy a CockroachDB cluster on Google Kubernetes Engine and run a YCSB load test against it.
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
Now I need to expose the DB Console on port 8080 as well as the database endpoint on port 26257.
So far I started by changing the cockroachdb-public service to type: NodePort and exposing its ports using an Ingress. My current problem is exposing both ports (if possible on their default ports 8080 and 26257) and having them accessible from YCSB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
  labels:
    app: cockroachdb
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 26257
  rules:
  - http:
      paths:
      - path: /labs/*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 26257
So far I have only managed to route the two ports to different paths. I'm not sure this can work at all, because the JDBC driver used by YCSB speaks plain TCP, not HTTP.
How do I expose two ports of one service using an Ingress for TCP?
Focusing on:
How do I expose two ports of one service using an Ingress for TCP?
In general, an Ingress resource is meant for HTTP/HTTPS traffic.
You cannot expose raw TCP traffic with an Ingress like the one mentioned in your question.
Side note!
There are some options to use an Ingress controller to pass the TCP/UDP traffic (nginx-ingress).
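As a rough sketch (assuming a stock ingress-nginx deployment in the ingress-nginx namespace, started with the --tcp-services-configmap flag), raw TCP is passed through via the tcp-services ConfigMap rather than an Ingress rule:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service name>:<service port>"
  "26257": "default/cockroachdb-public:26257"
The controller's own Service also has to expose port 26257 for the traffic to reach it.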
You could expose your application with a Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  namespace: default
spec:
  ports:
    - name: grpc
      port: 26257
      protocol: TCP
      targetPort: grpc # (containerPort: 26257)
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http # (containerPort: 8080)
  selector:
    selector: INSERT-SELECTOR-FROM-YOUR-APP
  type: LoadBalancer
Disclaimer!
The above example is taken from the cockroachdb Helm chart with one modified value:
service.public.type="LoadBalancer"
With the above definition you will expose your Pods to external traffic on ports 8080 and 26257 through a TCP/UDP LoadBalancer. You can read more about it at the link below:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps: Creating a service of type LoadBalancer
The YCSB Client is going to run on another VM so that the results are comparable to other groups results.
If this VM is located in the GCP infrastructure, you could also take a look at the Internal TCP/UDP LoadBalancer:
Cloud.google.com: Kubernetes Engine: Using an internal TCP/UDP load balancer
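For illustration, a sketch of such an internal load balancer (the annotation below is the legacy GKE one; newer GKE versions document networking.gke.io/load-balancer-type instead, and the selector is a placeholder to replace with your CockroachDB pod labels):
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: grpc
      port: 26257
      protocol: TCP
      targetPort: 26257
  selector:
    app: cockroachdb # placeholder, use the labels of your CockroachDB pods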
Also I'm not sure about the annotations of your Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
In GKE, when you create an Ingress without specifying an ingress.class, the gce controller is used. The ingress.citrix.com annotations are specific to the Citrix controller and will not work with the gce controller.
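If you do stay with the gce controller, a trimmed-down sketch of the Ingress with only annotations that controller understands (the citrix ones removed) might look like the following; note it can only serve the HTTP console on 8080, while the SQL port 26257 still needs a TCP load balancer:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 8080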
Additional resources:
Kubernetes.io: Docs: Ingress
Kubernetes.io: Docs: Service

Spring Boot HTTP made secure (HTTPS) with Kubernetes Ingress

I am new to Kubernetes and GKE. I have some microservices written in Spring Boot 2 and deployed from GitHub to GKE. I would like to make these services secure, and I want to know if it's possible to use an Ingress on my gateway microservice to secure the entry point just like that. I created an Ingress with HTTPS, but it seems all my health checks are failing.
Is it possible to make my architecture secure just by using ingress and not change the spring boot apps?
Yes, it would be possible to use a GKE Ingress in your scenario; there is an official guide on how to do this step by step.
Additionally, here's a step-by-step guide on how to implement Google-managed certificates.
I understand that my response is somewhat general, but I can only help you so much without knowing your GKE infrastructure (such as the DNS name for said certificate, among other things).
Remember that you must implement this directly in your GKE infrastructure and not on the GCP side: if you modify or create something outside GKE that is linked to GKE, you might see your deployment roll back or stop working after a certain time.
Edit:
I will assume several things here, and since I don't have your Spring Boot 2 deployment yaml file, I will replace that with an nginx deployment.
cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: ssl-cert
spec:
  domains:
    - example.com
nginx.yaml
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "nginx"
namespace: "default"
labels:
app: "nginx"
spec:
replicas: 1
selector:
matchLabels:
app: "nginx"
template:
metadata:
labels:
app: "nginx"
spec:
containers:
- name: "nginx-1"
image: "nginx:latest"
nodeport.yaml (please modify "targetPort: 80" to your needs)
apiVersion: "v1"
kind: "Service"
metadata:
name: "nginx-service"
namespace: "default"
labels:
app: "nginx"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
selector:
app: "nginx"
type: "NodePort"
ingress-cert.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    networking.gke.io/managed-certificates: ssl-cert
spec:
  backend:
    serviceName: nginx-service
    servicePort: 80
Keep in mind that, assuming your DNS name "example.com" is pointing to your load balancer's external IP, it can still take a while for the SSL certificate to be created and applied.
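While waiting, you can check the certificate's provisioning state directly, for example:
kubectl describe managedcertificate ssl-cert
# the Certificate Status field should eventually change from Provisioning to Active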

https for eks loadbalancer

I want to secure my web application running on Kubernetes (EKS).
I have one front-end service running on port 80, and I want to serve it on port 443. When I run kubectl get all, I see that my load balancer is listening on port 443, but I am not able to open it in the browser.
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: 123456789.dkr.ecr.us-west-2.amazonaws.com/demoui:demo123
        ports:
        - containerPort: 80
        env:
        - name: MESSAGE
          value: Hello Kubernetes!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
The AWS ALB Ingress Controller is designed to create an Application Load Balancer and the relevant AWS-level resources from an Ingress YAML definition. The controller parses the load balancer configuration from the Ingress definition and then applies Target Groups, one per Kubernetes Service, with the specified instances and NodePorts exposed on particular nodes. On top of that, Listeners expose the connection port for the load balancer and make request-routing decisions according to the defined routing rules, as per the official AWS ALB Ingress Controller workflow documentation.
After that short theory tour, I have a few concerns about your current configuration:
First, I would recommend checking the AWS ALB Ingress Controller setup and inspecting the relevant logs:
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
Then verify in the AWS console whether the Load Balancer has been successfully created.
Inspect the Target Groups for the particular ALB to make sure the health checks for the k8s instances are all good.
Ensure that the Security Groups contain appropriate firewall rules for your instances, so that inbound and outbound traffic across the ALB is allowed.
I encourage you to get familiar with the dedicated chapter about HTTP to HTTPS redirection in the official AWS ALB Ingress Controller documentation.
Here is what I have in my cluster to run on HTTPS.
In my ingress/Load balancer:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: CERT
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
# ports using the ssl certificate
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# which protocol a Pod speaks
In my Ingress controller, configMap of the nginx configuration:
app.kubernetes.io/force-ssl-redirect: "true"
Hope this works for you.
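If you are on the nginx ingress controller, note that the per-Ingress annotation commonly documented for this is nginx.ingress.kubernetes.io/force-ssl-redirect; as a sketch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80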
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/
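A sketch of how that redirect action plugs into the Ingress from the question (following the linked guide: both ports must be in listen-ports, a certificate ARN is attached to the ALB, and the ssl-redirect path must come first; the ARN below is simply the one from the question's Service annotation, reused for illustration):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: hello-kubernetes
          servicePort: 80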

Running socket.io in Google Container Engine with multiple pods fails

I'm trying to run a socket.io app on Google Container Engine. I've set up the Ingress service, which creates a Google load balancer that points to the cluster. If I have one pod in the cluster, all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster, and I suspect that is the problem with all the polling and upgrading socket.io is doing.
I setup the load balancer to use sticky sessions based on IP.
Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?
How can I set it up to ensure session affinity to a particular POD in the cluster?
NOTE: I manually set the sessionAffinity on the cloud load balancer.
Here is my Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
  - port: 80
    targetPort: http-port
  selector:
    app: myApp
First off, you need to set session affinity at the Ingress resource level, not on your cloud load balancer (the latter only gives affinity to a specific node in the target group).
Here is an example Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
Second, you probably need to tune your ingress-controller to allow longer connection times. Everything else, by default, supports websocket proxying.
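For instance, with the nginx ingress controller the relevant proxy timeouts can be raised per Ingress with annotations such as the following (the values are illustrative):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  backend:
    serviceName: my-service
    servicePort: 80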
If you are still having issues, please provide the output of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>.
Hope this helps, good luck!

Eureka and Kubernetes

I am putting together a proof of concept to help identify gotchas using Spring Boot/Netflix OSS and Kubernetes together. This is also to prove out related technologies such as Prometheus and Graphana.
I have a Eureka service setup which is starting with no trouble within my Kubernetes cluster. This is named discovery and has been given the name "discovery-1551420162-iyz2c" when added to K8 using
For my config server, I am trying to use Eureka based on a logical URL so in my bootstrap.yml I have
server:
  port: 8889
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/xyz/microservice-config
and I am starting this using
kubectl run configserver --image=xyz/config-microservice --replicas=1 --port=8889
This service ends up running with the name configserver-3481062421-tmv4d. I then see exceptions in the config server logs as it tries to locate the Eureka instance and cannot.
I have the same setup for this using docker-compose locally with links and it starts the various containers with no trouble.
discovery:
  image: xyz/discovery-microservice
  ports:
    - "8761:8761"
configserver:
  image: xyz/config-microservice
  ports:
    - "8888:8888"
  links:
    - discovery
How can I setup something like eureka.client.serviceUri so my microservices can locate their peers without knowing fixed IP addresses within the K8 cluster?
How can I setup something like eureka.client.serviceUri?
You have to have a Kubernetes Service on top of the Eureka pods/deployments, which will then provide you with a referable IP address and port number. Then use that referable address to look up the Eureka service, instead of "8761".
To address the further question about the HA configuration of Eureka:
You shouldn't have more than one pod/replica of Eureka per k8s Service (remember, pods are ephemeral; you need a referable IP address/domain name for the Eureka service registry). To achieve high availability (HA), spin up more k8s Services with one pod in each:
Eureka service 1 --> a single pod
Eureka Service 2 --> another single pod
..
..
Eureka Service n --> another single pod
So now you have a referable IP/domain name (the IP of the k8s Service) for each of your Eureka instances, and they can register with each other.
Feeling like it's overkill?
If all your services are in the same Kubernetes namespace, you can achieve everything (well, almost everything, except client-side load balancing) that Eureka offers through a k8s Service + the KubeDNS add-on. Read this article by Christian Posta.
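As a rough sketch of that simpler approach, assuming the Eureka pods carry the label app: discovery, a plain Service gives every client a stable DNS name via KubeDNS:
apiVersion: v1
kind: Service
metadata:
  name: discovery
spec:
  selector:
    app: discovery # must match the labels on the Eureka pod(s)
  ports:
    - port: 8761
      targetPort: 8761
Clients in the same namespace can then keep the question's original setting, eureka.client.serviceUrl.defaultZone: http://discovery:8761/eureka/, or use the fully qualified name http://discovery.default.svc.cluster.local:8761/eureka/ from other namespaces.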
Edit
Instead of Services with one pod each, you can make use of StatefulSets as Stefan Ocke pointed out.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Regarding HA configuration of Eureka in Kubernetes:
You can (meanwhile) use a StatefulSet for this instead of creating a service for each instance. The StatefulSet guarantees stable network identity for each instance you create.
For example, the deployment could look like the following yaml (StatefulSet + headless Service).
There are two Eureka instances here, according to the DNS
naming rules for StatefulSets (assuming namespace is "default"):
eureka-0.eureka.default.svc.cluster.local and
eureka-1.eureka.default.svc.cluster.local
As long as your pods are in the same namespace, they can reach Eureka also as:
eureka-0.eureka
eureka-1.eureka
Note: The docker image used in the example is from https://github.com/stefanocke/eureka. You might want to choose or build your own.
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
  - port: 8761
    name: eureka
  clusterIP: None
  selector:
    app: eureka
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: stoc/eureka
        ports:
        - containerPort: 8761
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
        - name: _JAVA_OPTIONS
          value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
        - name: EUREKA_CLIENT_REGISTERWITHEUREKA
          value: "true"
        - name: EUREKA_CLIENT_FETCHREGISTRY
          value: "true"
        # The hostnames must match the eureka serviceUrls, otherwise the replicas are reported as unavailable in the eureka dashboard
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order. We just need the stable network identity
  podManagementPolicy: "Parallel"
@Stefan Ocke I'm trying the same setup, but with my own image of the Eureka server, and I keep getting this error:
Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
2019-09-27 06:27:03.363 ERROR 1 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://eureka-1.eureka:8761/eureka/}
Here are configurations:
Eureka Spring Properties:
server.port=${EUREKA_PORT}
spring.security.user.name=${EUREKA_USERNAME}
spring.security.user.password=${EUREKA_PASSWORD}
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.instance.prefer-ip-address=false
eureka.server.wait-time-in-ms-when-sync-empty=0
eureka.server.eviction-interval-timer-in-ms=15000
eureka.instance.leaseRenewalIntervalInSeconds=30
eureka.instance.leaseExpirationDurationInSeconds=30
eureka.instance.hostname=${EUREKA_INSTANCE_HOSTNAME}
eureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
StatefulSet Config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: "my-image"
        command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/eureka-service.jar"]
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_PORT
          value: "8761"
        - name: EUREKA_USERNAME
          value: "theusername"
        - name: EUREKA_PASSWORD
          value: "thepassword"
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  selector:
    app: eureka
  ports:
  - port: 8761
    targetPort: 8761
Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: eureka
          servicePort: 8761
You have to install the Kubernetes kube-dns add-on so that names resolve to IPs, and then expose your Eureka pods as a Service. See the Kubernetes docs for more info on how to create DNS and Services.
@random_dude, what would happen if I created 2 or 3 replicas of Eureka? It turned out that when I deploy a micro-service 'X', it gets registered in all Eureka replicas, but when it goes down, only one replica gets the update! The others still consider the micro-service instance as running.
I got exactly this problem, and it got resolved by adding an environment variable for the pod. This has the answer. A sample env variable for my pod is shown below.
I'm wondering where you put that config: is it on the service registry (the Eureka side), or on the client side (where we want to connect to Eureka)?
In my current case, all configuration is placed in a repo, the Eureka config as well, which makes the config static.
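One way to keep the repo config generic, as a sketch, is to override the zone per environment with an env var on the client's Deployment; Spring Boot's relaxed binding maps EUREKA_CLIENT_SERVICEURL_DEFAULTZONE onto eureka.client.serviceUrl.defaultZone (the container name and image below are placeholders):
      containers:
      - name: my-client            # placeholder
        image: my-client-image     # placeholder
        env:
        - name: EUREKA_CLIENT_SERVICEURL_DEFAULTZONE
          value: http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/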
