Posting a JSON message to a webserver inside a pod - Go

I am new to Kubernetes.
I have implemented a webserver inside a pod and set up a NodePort service for that pod.
I want to send a POST request with a custom message (in JSON) to the pod after it has been created and is ready to use. I want to use the Go client library for this. Could you please let me know how I can do that?
Which part of the library comes in handy?
Thanks.

Say the Go server runs locally and you would normally use http://localhost:3000 to access it. The pod then has a containerPort of 3000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-deployment
  labels:
    app: GoWeb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: GoWeb
  template:
    metadata:
      labels:
        app: GoWeb
    spec:
      containers:
        - name: go-web
          image: me/go-web:1.0.1
          ports:
            - containerPort: 3000
The Service is then an abstraction of that pod, describing how to access one or many Pods running that app.
The nodePort of the service is 31024.
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: NodePort
  selector:
    app: GoWeb
  ports:
    - port: 3000
      nodePort: 31024
The application is published on http://node-ip:node-port for the public to consume. Kubernetes manages the mappings between the node and the container in the background.
| User | -> | Node:nodePort | -> | Pod:containerPort |
The Kubernetes-internal Service and Pod IPs are usually not reachable from the outside world (unless you specifically set a cluster up that way), whereas the nodes themselves often carry an IP address that is routable/contactable.
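As for the client side, the standard library's net/http package is all you need for the POST itself; the Kubernetes client library (client-go) only comes into play if you want to discover the node IP or wait for Pod readiness programmatically. A minimal sketch, assuming a node reachable at 192.168.1.10 and a hypothetical /message endpoint on your server:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical payload; shape it to whatever your server expects.
	body, err := json.Marshal(map[string]string{"greeting": "hello"})
	if err != nil {
		panic(err)
	}

	// node-ip:nodePort, as exposed by the NodePort Service above.
	resp, err := http.Post("http://192.168.1.10:31024/message",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}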

Related

Unable to expose docker LoadBalancer service

I am trying to deploy a Docker image that is in a public repository. I am trying to create a LoadBalancer service and expose it on my system's IP address, not 127.0.0.1.
I am using Windows 10, and my Docker uses WSL2 instead of Hyper-V.
Below is my .yaml file. The service inside will run on port 4200, so to avoid any kind of confusion I kept all the ports at 4200.
apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  selector:
    app: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
  clusterIP: 10.96.1.3
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.168.0.144
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hoopla
  name: hoopla
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hoopla
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hoopla
    spec:
      containers:
        - image: pubrepo/myimg:latest
          name: hoopla
          ports:
            - containerPort: 4200
Can anybody help me understand what mistake I am making? I basically want to expose this on my system's IP address.
The LoadBalancer Service type requires a cloud provider's load balancer (https://kubernetes.io/docs/concepts/services-networking/service/):
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created
If you want to expose your service to your local machine, use the NodePort Service type, for example. And if you just want to test your web app, you can keep the ClusterIP Service type and do a port-forward, for example with your ClusterIP service:
kubectl port-forward svc/hoopla 4200:4200
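If you go the NodePort route instead, here is a minimal sketch of the hoopla Service (the nodePort value 30420 is an arbitrary pick from the default 30000-32767 range). Note also that the selector must match the Pod label set by the Deployment (app.kubernetes.io/name: hoopla), which the original Service's app: hoopla selector does not:

apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: hoopla   # matches the Deployment's Pod label
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
      nodePort: 30420   # arbitrary choice from the default 30000-32767 range

The app is then reachable at http://<node-ip>:30420.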

Visual Studio application is not exposed using a Kubernetes service

Background
I am using Docker for Windows v20.10.6 (with Kubernetes enabled).
I have created two simple, out-of-the-box .NET 5.0 applications:
1. Web API (reached over HTTP, listening on port 7070)
2. Web App (MVC) that shows a parsed table from the Web API (listening on port 80)
A. ✔️ Created a connection between the applications using Docker Swarm Mode
Created a swarm using docker swarm init
Created an 'overlay' driver network named personal-overlay.
Created the Web API service using docker service create --network personal-overlay --name api webapi
Created the Web App service using docker service create --name web --network personal-overlay -p 30080:80 webapp
B. ✔️ Created a generic NGINX deployment and service
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: nginx
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx
service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30080
  selector:
    app: myapp
I could access the NGINX through http://localhost:30080 without an issue (using the web browser).
❌ The issue I'm currently facing
Tagged the images test/api and test/web
Created the same files using those Visual Studio images:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      name: test-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: api
          image: test/api
          imagePullPolicy: Never
        - name: web
          image: test/web
          imagePullPolicy: Never
service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30080
  selector:
    app: myapp
Yet I cannot access http://localhost:30080.
EDIT [1]:
I am trying to access it through the web browser, and I get an HTTP ERROR 500: "Failed to load resource: the server responded with a status of 500 (Internal Server Error)."
Whenever I am using curl -I http://localhost:30080 I get the following response:
HTTP/1.1 500 Internal Server Error
Date: Thu, 13 May 2021 08:20:25 GMT
Server: Kestrel
Content-Length: 0
EDIT [2]:
I even tried to scale it down into just this one pod (the web application).
pod:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
  labels:
    name: consumer-pod
    app: api-and-consumer
spec:
  containers:
    - name: consumer
      image: test/web
      imagePullPolicy: Never
      ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: consumer-external-svc
  labels:
    name: consumer-external-svc
    app: api-and-consumer
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    name: consumer-pod
    app: api-and-consumer
Yet it does not work (with or without the ports section in the Pod YAML file).
These are the logs I get using the kubectl logs web-pod-<fullname> command (they show it is actually listening on port 80):
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {70ddc140-9846-4052-b869-8bcc5250d39e} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app
I should also mention that using kubectl cluster-info dump I get the following line (for the service though, not the pod itself):
time="2021-05-13T10:56:35Z" level=error msg="Port 30080 for service web-external-svc is already opened by another service"

Spring Boot HTTP made secure (HTTPS) with Kubernetes Ingress

I am new to Kubernetes and GKE. I have some microservices written in Spring Boot 2 and deployed from GitHub to GKE. I would like to make these services secure, and I want to know if it's possible to use an Ingress on my gateway microservice to make the entry point secure just like that. I created an Ingress with HTTPS, but it seems all my health checks are failing.
Is it possible to make my architecture secure just by using an Ingress, without changing the Spring Boot apps?
Yes, it would be possible to use a GKE Ingress in your scenario; there is an official guide on how to do this step by step.
Additionally, here's a step by step guide on how to implement Google Managed certs.
Also, I understand that my response is somewhat general, but I can only help you so much without knowing your GKE infrastructure (like your DNS name for said certificate among other things).
Remember that you must implement this directly on your GKE infrastructure and not on the GCP side; if you modify or create something outside GKE that is linked to GKE, you might find that your deployment rolls back or stops working after a certain time.
Edit:
I will assume several things here, and since I don't have your Spring Boot 2 deployment yaml file, I will replace that with an nginx deployment.
cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: ssl-cert
spec:
  domains:
    - example.com
nginx.yaml
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "nginx"
namespace: "default"
labels:
app: "nginx"
spec:
replicas: 1
selector:
matchLabels:
app: "nginx"
template:
metadata:
labels:
app: "nginx"
spec:
containers:
- name: "nginx-1"
image: "nginx:latest"
nodeport.yaml (please modify "targetPort: 80" to your needs)
apiVersion: "v1"
kind: "Service"
metadata:
name: "nginx-service"
namespace: "default"
labels:
app: "nginx"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
selector:
app: "nginx"
type: "NodePort"
ingress-cert.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    networking.gke.io/managed-certificates: ssl-cert
spec:
  backend:
    serviceName: nginx-service
    servicePort: 80
Keep in mind that, assuming your DNS name "example.com" points to your load balancer's external IP, it can take a while for your SSL certificate to be created and applied.
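On the failing health checks mentioned in the question: a GKE Ingress derives its backend health check from the serving container's readinessProbe when one is defined; otherwise it expects HTTP 200 on /. A hedged sketch to add to the container spec in the Deployment above; the path and port are assumptions (a Spring Boot Actuator endpoint) to adjust to your apps:

readinessProbe:
  httpGet:
    path: /actuator/health   # assumed Spring Boot Actuator path
    port: 8080               # assumed container port
  initialDelaySeconds: 20
  periodSeconds: 10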

How to expose deployment as a service in 2.1-ee?

I created a service and used NodePort, etc., but couldn't access the service.
I created a web-service.yaml file with the following content and used kubectl to create the Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: webserver
and the webserver.yaml file with the following Deployment details:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
In your Deployment, the Pod label is run=webserver, but in your Service the selector is app=webserver. The Service uses app=webserver as its selector, intending to select the three Pods, but none of the Pods carries the label "app", so the Deployment is not successfully exposed as a service. The label names and values in the Deployment and Service must match.
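A minimal sketch of the fix, keeping the Deployment as-is and pointing the Service's selector at the label the Pods actually carry:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: webserver   # matches the Pod template label in webserver.yaml

Equivalently, you could change the Pod template label to app: webserver and leave the Service untouched.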

Running socket.io in Google Container Engine with multiple pods fails

I'm trying to run a socket.io app using Google Container Engine. I've set up the ingress service, which creates a Google load balancer that points to the cluster. If I have one pod in the cluster, all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster, and I suspect that is the problem, given all the polling and upgrading socket.io does.
I set up the load balancer to use sticky sessions based on IP.
Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?
How can I set it up to ensure session affinity to a particular POD in the cluster?
NOTE: I manually set the sessionAffinity on the cloud load balancer.
Here would be my ingress yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
    - port: 80
      targetPort: http-port
  selector:
    app: myApp
First off, you need to set session affinity at the Ingress resource level, not on your load balancer (the latter only gives you affinity to a specific node in the target group).
Here is an example Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
    - host: $HOST
      http:
        paths:
          - path: /
            backend:
              serviceName: $SERVICE_NAME
              servicePort: $SERVICE_PORT
Second, you probably need to tune your ingress controller to allow longer connection times; everything else supports WebSocket proxying by default.
If you are still having issues, please provide the outputs of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>.
Hope this helps, good luck!
