Running a bash script using a Kubernetes Service

I am not sure how dumb or unreasonable this question is, but we are trying to see if we can do this in any way.
I have a .bash file, and I want to run it when I invoke a URL.
Let's say the URL is https://domainname.com/jobapi
When I invoke this in the browser, it should invoke the .bash script on the container.
Is this really possible?
If it is, I'd like to know whether I need to add this script as a deployment or a job.

The first step, before looking at Kubernetes, is to configure a web server to run your script. This could be a generic web server like nginx or Apache, to which you could add your script as a CGI script. There are plenty of tutorials out there that explain how to write CGI scripts.
Depending on the requirements of your application, a simple HTTP hook server might be a better match. Have a look at, for example, https://github.com/adnanh/webhook.
Either way, try this out with just Docker first, before trying to create a pod and potentially a service and an ingress in Kubernetes.
In a second step, to be able to access your service (the server invoking your script), you need to create a pod, probably through a deployment, and potentially a service and an ingress for it.
Kubernetes jobs are for running a script (or other program) once. They are most useful for automating maintenance tasks for your application.
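To make the CGI idea concrete, here is a minimal sketch of the wrapper you would put in the web server's cgi-bin: a CGI script prints a header block, a blank line, then the body. The script path and its contents below are stand-ins invented for the demo, not part of your setup:

```shell
# Create a stand-in for the existing .bash file (hypothetical content).
JOB_SCRIPT=$(mktemp)
cat > "$JOB_SCRIPT" <<'EOF'
#!/bin/bash
echo "job finished"
EOF
chmod +x "$JOB_SCRIPT"

# The CGI wrapper: emit headers, a blank line, then the script output.
# The web server (Apache, etc.) turns this into the HTTP response.
run_cgi() {
  echo "Content-Type: text/plain"
  echo ""
  "$JOB_SCRIPT" 2>&1
}

run_cgi
```

With Apache, for example, this wrapper would live in the cgi-bin directory so that requesting /cgi-bin/jobapi runs the bash script and returns its output.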

What I would try is to run the shell script from a PHP file; otherwise you are going to need some sort of driver to trigger the script.
So you would keep the script as a regular executable, and upon each request PHP would execute it via the shell.
You can even structure it like an API: domain.com/job1 executes job1, domain.com/jobn executes jobn, and so on.
Note that the setup I'm describing works only as a Deployment, as you want the server to be always up and ready to receive requests.
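The path-to-script routing described above can be sketched in shell as well (the jobs directory and script names here are hypothetical; the PHP layer would do the same thing via shell_exec()):

```shell
# Hypothetical jobs directory containing one executable job script.
JOBS_DIR=$(mktemp -d)
printf '#!/bin/bash\necho "running job1"\n' > "$JOBS_DIR/job1.bash"
chmod +x "$JOBS_DIR/job1.bash"

# Map a request path like /job1 to $JOBS_DIR/job1.bash and run it.
dispatch() {
  name=${1#/}                       # strip the leading slash
  script="$JOBS_DIR/$name.bash"
  if [ -x "$script" ]; then
    "$script"
  else
    echo "404: no such job: $name"
  fi
}

dispatch /job1
dispatch /job2
```

The key design point is validating the path against a fixed directory of executables, so a request can only ever run scripts you explicitly placed there.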

1. Create an Ingress (backed by a NodePort service if external facing) that forwards traffic to a Service.
2. The Service selects, via labels, the pod that runs the script. The pod can come from a Deployment or be a standalone pod.
3. The Service exposes that pod/Deployment.
4. The Deployment can start the pods that run the shell script, or a single pod can run the script directly.
Ingress service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "domainname.com"
    http:
      paths:
      - path: /jobapi
        pathType: Prefix
        backend:
          serviceName: my-service
          servicePort: 8080
my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30007
Run your bash script; this can be done by defining a Deployment or a pod.
Pod:
kubectl run myapp --image=nginx --labels=app=MyApp --port=8080 -- /bin/sh -c "echo 'Im up'"
or
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: MyAppdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        command: ["/bin/sh", "-c", "echo 'test'"]
        ports:
        - containerPort: 8080

Related

Job that executes command inside a pod

What I'd like to ask is whether it is possible to create a Kubernetes Job that runs a bash command within another pod.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev
  name: run-cmd
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: run-cmd
        image: <IMG>
        command: ["/bin/bash", "-c"]
        args:
        - <CMD> $POD_NAME
      restartPolicy: Never
  backoffLimit: 4
I considered using:
Environment variable to define the pod name
Using Kubernetes SDK to automate
But if you have better ideas I am open to them, please!
The Job manifest you shared looks like a valid approach, but you need to take the points below into consideration:
Running a command inside another pod (more precisely, inside one of that pod's containers) requires interacting with the Kubernetes API server, so you need a Kubernetes client (e.g. kubectl) to do it. This, in turn, requires the client to be installed in the job's container image.
The job's pod's service account has to have permissions on the pods/exec resource. See docs and this answer.
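For the second point, a minimal RBAC sketch might look like the following (the role name and the use of the default service account are assumptions; adjust them to your setup):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-exec            # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]   # create is needed for exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: run-cmd-pod-exec
subjects:
- kind: ServiceAccount
  name: default             # or a dedicated service account for the job
  namespace: dev
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io
```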

Posting a json message to a webserver inside a pod

I am new to kubernetes.
I have implemented a webserver inside a pod and set a Nodeport service for that pod.
I want to send a POST request with a custom message (in JSON) to a pod after it has been created and is ready to use. I want to use the Go client library for that. Could you please let me know how I can do that?
Which parts of the library would help?
Thanks.
Say the Go server runs locally and you normally use http://localhost:3000 to access it. The pod then has a containerPort of 3000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-deployment
  labels:
    app: GoWeb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: GoWeb
  template:
    metadata:
      labels:
        app: GoWeb
    spec:
      containers:
      - name: go-web
        image: me/go-web:1.0.1
        ports:
        - containerPort: 3000
The Service is then an abstraction of that pod, that describes how to access 1 or many Pods running that service.
The nodePort of the service is 31024.
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: NodePort
  selector:
    app: GoWeb
  ports:
  - port: 3000
    nodePort: 31024
The application is published on http://node-ip:node-port for the public to consume. Kubernetes manages the mappings between the node and the container in the background.
| User | -> | Node:nodePort | -> | Pod:containerPort |
The Kubernetes-internal Service and pod IPs are usually not reachable from the outside world (unless you specifically set up the cluster that way), whereas the nodes themselves will often carry an IP address that is routable and contactable.
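The POST itself is plain HTTP, so any client works (including Go's net/http). Here is a self-contained sketch: a throwaway local server stands in for the pod's web server and simply echoes the request body, so the demo runs anywhere; against a real cluster you would target http://&lt;node-ip&gt;:31024 instead, and the endpoint path and message are made up:

```shell
# Stand-in for the pod's web server: echoes back any POSTed JSON.
SRV=$(mktemp)
cat > "$SRV" <<'EOF'
from http.server import BaseHTTPRequestHandler, HTTPServer
class Echo(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass
HTTPServer(("127.0.0.1", 31024), Echo).serve_forever()
EOF
python3 "$SRV" &
SRV_PID=$!
sleep 1

# The POST as a client sends it to the NodePort (curl here; a Go
# http.Post call produces the same request on the wire).
RESP=$(curl -s -X POST http://127.0.0.1:31024/message \
  -H "Content-Type: application/json" \
  -d '{"message": "hello pod"}')
echo "$RESP"

kill $SRV_PID
```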

Ambassador Edge Stack : Working with sample project but not with my project

I am trying to configure Ambassador as an API gateway in my Kubernetes cluster locally.
Installation:
Installed both the Windows and the Kubernetes parts from https://www.getambassador.io/docs/latest/tutorials/getting-started/
Can log in with edgectl login --namespace=ambassador localhost and see the dashboard
Configured it with the sample project they provide at https://www.getambassador.io/docs/latest/tutorials/quickstart-demo/
Here is the YML file for deployment of demo app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8080
Everything works as expected. Now I am trying to configure it with my own project, but it is not working.
For the simplest possible case, keeping every other configuration the same as the Ambassador demo, I just changed image: docker.io/datawire/quote:0.4.1 to image: angularapp:latest, where the latter is a Docker image of an Angular 10 project.
But I am getting upstream connect error or disconnect/reset before headers. reset reason: connection failure
I spent one day on this problem. I reset my Kubernetes cluster from the Docker Desktop app and reconfigured everything, but no luck.
That error occurs when a mapping is valid but the service it points to cannot be reached for some reason. Is the deployment actually running (kubectl get deploy -A -o wide)? Is your Angular app exposing port 8080? Port 8080 is common in the Kubernetes world, but not so much in frontend development. If you run kubectl exec -it {{AMBASSADOR_POD}} -- sh, does curl http://quote return the expected output?
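For reference, the Mapping for the demo follows this pattern (a sketch based on the quickstart; the important part is that the service port must match what your container actually listens on, and a stock nginx-based Angular image usually serves on port 80, not 8080):

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
  namespace: ambassador
spec:
  prefix: /backend/
  service: quote        # for a port other than 80, use e.g. quote:8080
```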

How to deploy a simple Hello World program to local Kubernetes cluster

I have a very simple spring-boot Hello World program. When I run the application locally, I can navigate to http://localhost:8080/ and see the "Hello World" greeting displayed on the page. I have also created a Dockerfile and can build an image from it.
My next goal is to deploy this to a local Kubernetes cluster. I have used Docker Desktop to create a local kubernetes cluster. I want to create a deployment for my application, host it locally on the cluster, and access it from a browser.
I am not sure where to start with this deployment. I know that I will need to create charts, but I have no idea how to ultimately push this image to my cluster...
You need to create a Kubernetes deployment definition and a service definition, respectively.
These definitions can be in JSON or YAML format. Here are example definitions; you can use them as a template for your deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-very-first-deployment
  labels:
    app: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-deployment
  template:
    metadata:
      labels:
        app: first-deployment
    spec:
      containers:
      - name: your-app
        image: your-image:with-version
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30180
    targetPort: 8080
  selector:
    app: first-deployment
Do not forget to update the image line in the deployment YAML with your image name and version. After that replacement, save the file as, for example, deployment.yaml, and apply it with the kubectl apply -f deployment.yaml command.
Note that you need to use port 30180 to access your application, as stated in the nodePort value of the service definition (http://localhost:30180).
Links:
Kubernetes services: https://kubernetes.io/docs/concepts/services-networking/service/
Kubernetes deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
To start, you need to define the deployment first: specify the Docker image and the required environment in the deployment.

Running socket.io in Google Container Engine with multiple pods fails

I'm trying to run a socket.io app using Google Container Engine. I've set up the ingress service, which creates a Google load balancer that points to the cluster. With one pod in the cluster, all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster, and I suspect that is the problem, given all the polling and upgrading socket.io does.
I set up the load balancer to use sticky sessions based on IP.
Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?
How can I set it up to ensure session affinity to a particular POD in the cluster?
NOTE: I manually set the sessionAffinity on the cloud load balancer.
Here would be my ingress yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
  - port: 80
    targetPort: http-port
  selector:
    app: myApp
First off, you need to set session affinity at the Ingress resource level, not on the cloud load balancer (the load balancer setting only gives you affinity to a particular node in the target group).
Here is an example Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
Second, you probably need to tune your ingress-controller to allow longer connection times. Everything else, by default, supports websocket proxying.
If you are still having issues, please provide the output of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>.
Hope this helps, good luck!
