Docker image to be reread by Kubernetes on Google Cloud

As I am new to this subject, could you please help?
I deploy a Docker image to Google Cloud Kubernetes.
What do I need to do to make the cluster pull the Docker image again when a new one appears?
My code is:
sudo docker build -t gcr.io/${PROJECT_ID}/sf:$ENV .
sudo docker push gcr.io/${PROJECT_ID}/sf:$ENV
sudo gcloud container clusters create sf:$ENV --num-nodes=3
sudo kubectl run sfmill-web$ENV --image=gcr.io/${PROJECT_ID}/sf:$ENV --port 8088
sudo kubectl expose deployment sfmill-web$ENV --type=LoadBalancer --port 8088 --target-port 8088

To point the running deployment at a new image, use kubectl set image:
kubectl set image deployment/sfmill-web$ENV sfmill-web$ENV=gcr.io/${PROJECT_ID}/sf:$ENV
I encourage you to explore using Kubernetes configuration files to define your resources.
You can explore the YAML for your deployment with:
kubectl get deployment/sfmill-web$ENV --output=yaml > ${PWD}/sfmill-web$ENV.yaml
You could then tweak the value of the image property and then reapply this to your cluster using:
kubectl apply --filename=${PWD}/sfmill-web$ENV.yaml
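For reference, the relevant part of such a file would look roughly like the sketch below (the names, labels, and image tag are placeholders for whatever your deployment actually uses):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sfmill-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sfmill-web
  template:
    metadata:
      labels:
        app: sfmill-web
    spec:
      containers:
      - name: sfmill-web
        # change this tag, then run kubectl apply to roll out the new image
        image: gcr.io/my-project/sf:prod
        ports:
        - containerPort: 8088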
The main benefit of the configuration file approach is that you're effectively creating code to manage your infrastructure; each time you change your configuration, you can check it into source control and know exactly what you did at each stage.
Using kubectl imperatively is great, but it makes it more challenging to recreate the cluster from scratch... which kubectl command did I run next? Yes, you could script all your kubectl commands in bash, which would help, but configuration files remain the ideal solution.
HTH

Related

How can I pass the path of the aws executable on my local machine to a Jenkins image running in a Docker container?

I am trying to pass the path of the aws executable on my host machine to Jenkins, which runs in a Docker container. I downloaded the Jenkins image, and I am trying to use AWS CLI commands in a Jenkins pipeline in order to build a Node.js application and then deploy it to an S3 bucket. For that I need the AWS CLI in the Jenkins image that I am running through Docker. As far as I know, once you run an image in a Docker container it is a separate environment in itself, so Jenkins will not know that I have aws installed on my Mac unless I pass it the path of aws on my Mac, which is what I am trying to do with the
-v $(which aws):$(which aws)
option in the command below.
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2
However, after I run this command, it shows the following error response:
docker: Error response from daemon: Mounts denied:
The path /usr/local/bin/aws
is not shared from OS X and is not known to Docker.
According to some of the answers I found on Stack Overflow, I then tried to add the path of aws in Docker's File Sharing panel. When I added the path in Docker, it then showed:
The path /usr is reserved by Docker however it may be possible to export specific subdirectories.
I have not been able to get around this. I tried adding the whole path
/usr/local/bin/aws
in the Docker File Sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do in order to make the aws executable on my local machine available to the Jenkins image that I am running in a Docker container?
You need to install the AWS CLI in your Docker image; then you will be able to use it inside your container.
FROM jenkins/jenkins:2.190.2
USER root
RUN apt-get update && \
apt-get install awscli -y
USER jenkins
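A minimal sketch of building and running that customised image (my-jenkins-aws is just a placeholder tag):
# build the image from the Dockerfile above
docker build -t my-jenkins-aws .
# run it exactly like the stock jenkins image
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home my-jenkins-aws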
-v (volumes) is not designed for binding host executables into a container; it is designed for files and folders used for persistent storage. If you need an executable, you need to add it to your Docker image.
To be able to save (persist) data and also to share data
between containers, Docker came up with the concept of volumes. Quite
simply, volumes are directories (or files) that are outside of the
default Union File System and exist as normal directories and files on
the host filesystem.
understanding-volumes-docker
For this question
I am trying to use aws CLI command in jenkins pipeline in order to
build nodejs application and then deploy it to s3 bucket.
If you are running inside AWS, you can assign an IAM role to the Jenkins server and you will not need to mount host credentials at all.
Or, if you are outside AWS, you just need to bind-mount the host's AWS config and credentials:
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
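Assuming the AWS CLI has been installed in the image as shown above, you can sanity-check from inside the running container that the mounted credentials are picked up (the container ID below is a placeholder; use docker ps to find yours):
# find the container, then check that the CLI sees the mounted credentials
docker ps
docker exec -it <container-id> aws sts get-caller-identity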

Where are images stored in an Azure Kubernetes cluster after being pulled from Azure Container Registry?

I am using Kubernetes, provided by Azure Kubernetes Service, to deploy all my microservices.
Whenever I release an update of a microservice, which has been happening frequently over the last month, it pulls the new image from the Azure Container Registry.
I was trying to figure out where these images reside in the cluster.
Just as Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores the images somewhere too.
But if this is the case, how can I delete the old images from the cluster that are not in use anymore?
Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd as their container runtime (Container runtime configuration).
To manually remove unused images on a node running containerd:
Identify the node names:
kubectl get nodes
Start an interactive debugging container on a node (Connect with SSH to Azure Kubernetes Service):
kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Setup crictl on the debugging container (check for newer releases of crictl):
The host node's filesystem is available at /host, so configure crictl to use the host node's containerd.sock.
curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \
&& export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock
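With the endpoints exported, you can first list the images containerd knows about on that node, which is a useful check before removing anything:
crictl images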
Remove unused images on the node:
crictl rmi --prune
You are correct in guessing that it's mostly up to Docker, or rather to whatever the active CRI plugin is. The kubelet automatically cleans up old images when disk space runs low, so it's rare that you ever need to touch this directly, but if you did (and are using Docker as your runtime) then it would be the same docker image commands as normal.
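For reference, the kubelet's image garbage collection is driven by disk-usage thresholds. In a KubeletConfiguration they look roughly like the sketch below; on AKS the kubelet is managed for you, so treat this as illustration only (85 and 80 are the upstream defaults):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# start garbage-collecting images when disk usage exceeds this percentage
imageGCHighThresholdPercent: 85
# keep freeing space until disk usage is back below this percentage
imageGCLowThresholdPercent: 80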
I was trying to figure out where do these images reside in the
cluster?
Testing this confirms that each node in the AKS cluster runs the Docker daemon, and the images are stored just as with plain Docker, as you say: the image layers live in the directory /var/lib/docker/ on the node.
how can I delete the old images from the cluster that are not in use
anymore?
You can do this through the Docker CLI inside the node. Follow the steps in Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes to connect to the node, then delete the image through the Docker CLI with docker rmi image_name:tag. But be careful with it: make sure the image really is no longer needed.
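A rough sketch of what that looks like once you're on the node (the image name and tag are placeholders; on newer, containerd-based node pools you would use crictl as described in the other answer instead):
# list images on the node, then remove a specific unused one
docker images
docker rmi myregistry.azurecr.io/my-service:old-tag
# or remove everything not referenced by any container
docker image prune -a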

How to use kubernetes helm from a CI/CD pipeline hosted outside the k8s cluster

I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server?
Or is this not the right way to use Helm from a CI/CD server?
Helm talks to your cluster using the same kubeconfig that kubectl uses. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it with the cluster details in the kubeconfig.
You can find more details in Helm's docs, check the quick start guide for more information.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that with Jenkins X and it works great; we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent you have identified for use by your CI/CD server, OR install kubectl on-the-fly in your automation, AND then make sure you have OR are able to generate a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in the default or a custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG environment variable set and pointing to a kubeconfig file, that works; when setting new entries you can also simply pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
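As a rough sketch of what that looks like when building a kubeconfig for a service account (the server address, token, and file names are placeholders; the token and CA certificate come from the service account's secret):
kubectl config set-cluster my-cluster --server=https://<api-server> --certificate-authority=ca.crt --embed-certs=true --kubeconfig=ci-kubeconfig
kubectl config set-credentials ci-deployer --token=<service-account-token> --kubeconfig=ci-kubeconfig
kubectl config set-context ci --cluster=my-cluster --user=ci-deployer --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig
Your CI/CD job can then point the KUBECONFIG environment variable at that file, and both kubectl and helm will use it.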
We created helmsman which provides you with declarative syntax to manage helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a docker container.

DOCKER - LAMP Stack issues - Premade Image

I'm trying to set up a LAMP stack with Docker, and found and tried to use https://hub.docker.com/r/linode/lamp/
But I can't find, and don't know how to access, the files linked to the domain, or how to change the domain name from example.com, and so on.
I think my real question is: how do I change files in, or rebuild, an image made by other people?
First of all, I want to mention that I'm not a big fan of this image and approach, because it bundles multiple services into one container. I would recommend using one container for apache2, one container for mysql, and so on.
But for the setup of LAMP I'm using the documentation provided on the site.
I have a path /xx/test/index.html which contains some HTML. I will publish the container's port on the host and mount my files into the right folder in the container.
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ linode/lamp /bin/bash
I'm using -ti to start a bash session. In that session you start the apache2 and mysql services yourself (this is the approach from the official documentation, not mine; it's a strange approach):
root@35d00285b625:/# service apache2 start
* Starting web server apache2 *
root@35d00285b625:/# service mysql start
* Starting MySQL database server mysqld [ OK ]
* Checking for tables which need an upgrade, are corrupt or were
not closed cleanly.
After starting the services you can detach from the container by pressing Ctrl+P then Ctrl+Q. Now you can open server-ip:80 to check your HTML. If you want to replace example.conf you can mount your own apache2 configuration too.
If you want to change folder names inside the image, I would recommend creating your own Dockerfile which starts with:
FROM linode/lamp
RUN <your changes...>
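You would then build and run your customised image in place of the original, roughly like this (my-lamp is a placeholder tag):
docker build -t my-lamp .
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ my-lamp /bin/bash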
First of all, consider using microservices in separate containers. This will provide advantages like:
Fault Containment
Ease of Upgrades
Eliminates long-term commitment to a single technology stack
Easy to scale
System resilience
...
Docker was created with microservices in mind, so for your LAMP stack I recommend using Apache+PHP in one container and MySQL in another. To make the containers communicate with each other, create a user-defined network and put both containers in it.
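A minimal sketch of that separation using official images (the names, tags, and password here are placeholders):
# one network, two containers that can reach each other by name
docker network create lamp-net
docker run -d --name db --network lamp-net -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -d --name web --network lamp-net -p 80:80 -v /root/test/:/var/www/html php:8-apache
The web container can then reach MySQL at the hostname db, because user-defined networks provide DNS by container name.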
Now back to your question:
You have 3 options for using your custom configuration files:
You can mount your configuration files when creating the container (recommended):
sudo docker run -d --name my-apache -v /path/to/custom/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
Please note this example uses the library (official) apache2 image from Docker Hub; you should consult the image creator's instructions for custom images.
You can manually edit the configuration file inside a running container and commit it as a new image.
sudo docker commit my-apache myrepository/myimagename:tag
sudo docker run -d myrepository/myimagename:tag
Create your own image via a Dockerfile, using the FROM <base image> directive, as sketched below.
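For that third option, a minimal Dockerfile could look like this (based on the official httpd image; the config file name is a placeholder):
FROM httpd:2.4
# overwrite the default Apache configuration with your own
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf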

Source code changes in kubernetes, SaltStack & Docker in production and locally

This is an abstract question and I hope I am able to describe it clearly.
Basically: what is the workflow for distributing source code to Kubernetes running in production? Since you don't run Docker with -v in production, how do you update running pods?
In production:
Do you use SaltStack to update each container in each pod?
Or
Do you rebuild Docker images and restart every pod?
Locally:
With Vagrant you can share a local folder for source code. With Docker you can use -v, but if you have Kubernetes running locally, how would you mirror production as closely as possible?
If you use Vagrant with boot2docker, how can you combine this with Docker's -v option?
Short answer is that you shouldn't "distribute source code", you should rather "build and deploy". In terms of Docker and Kubernetes, you would build by means of building and uploading the container image to the registry and then perform a rolling update with Kubernetes.
It would probably help to take a look at the specific example script, but the gist is in the usage summary in current Kubernetes CLI:
kubecfg [OPTIONS] [-u <time>] [-image <image>] rollingupdate <controller>
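That kubecfg syntax is long gone; on a current cluster the equivalent image-based rolling update against a Deployment would look roughly like this (the deployment, container, and image names are placeholders):
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:v2
kubectl rollout status deployment/my-app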
If you intend to try things out in development and are looking for instant code updates, I'm not sure Kubernetes helps much there. It was designed for production systems, and shadow deploys are not the kind of thing one does sanely.
