GitLab CI/CD with Kubernetes 1.16: Does my production job fail due to the Kubernetes version?

We are using GitLab's CI/CD Auto DevOps on gitlab.com with a connected Kubernetes cluster. Until recently we ran on Azure, but we have now decided to switch over to DigitalOcean. The build/deploy pipeline used to run fine on Azure, but when I run it now on our fresh cluster, I get this error during the "production" job:
$ auto-deploy deploy
secret/production-secret replaced
Deploying new release...
Release "production" does not exist. Installing it now.
Error: validation failed: unable to recognize "": no matches for kind
"Deployment" in version "extensions/v1beta1"
ERROR: Job failed: exit code 1
After doing some googling, I found this release announcement for Kubernetes 1.16, which states that the Deployment resource has been moved up from extensions/v1beta1 to (eventually) apps/v1, and - more importantly - has been dropped from extensions/*:
https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/
The Kubernetes version used on DigitalOcean is indeed 1.16.2. I do not recall the version we used on Azure, but judging from the article's date, the 1.16 release is fairly recent (September 2019).
As far as I can tell, the deployment logic is implemented inside GitLab's "auto-deploy" image, specifically this script, but I fail to see where I can adapt the specific kubectl commands being executed.
My question is this: am I right in assuming that this issue is caused by GitLab's CI/CD using a pre-1.16 API version to automatically create Deployments on Kubernetes clusters? If so, how can I adapt the deployment script to use the apps/v1 API group?

I was getting the same error with CircleCI, and found that the kubectl version in the pipeline was 1.13 against a Kubernetes server version of 1.15.4. Refer to this Stack Overflow post and try pinning the kubectl version in GitLab to the one your current Kubernetes cluster reports.
You can run
kubectl version
to get the client and server versions; just match GitLab's kubectl with the server version.
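If you run kubectl from your own job (rather than through the Auto DevOps deploy image), one way to keep the versions aligned is to pin the job image to your cluster's version. A minimal sketch, assuming the bitnami/kubectl image and that a tag matching your server version exists:

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:1.16.2   # match the server version reported by `kubectl version`
    entrypoint: [""]               # the image's default entrypoint is kubectl itself
  script:
    - kubectl version              # prints client and server versions; they should match closely
    - kubectl apply -f k8s/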

Use apps/v1 for apiVersion; Deployment under extensions/v1beta1 has been deprecated for several releases and is removed entirely in Kubernetes 1.16.
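For reference, the manifest-level change is just the apiVersion (a minimal sketch with placeholder names). Note that apps/v1 also makes spec.selector required, and it must match the pod template labels:

apiVersion: apps/v1        # was: extensions/v1beta1
kind: Deployment
metadata:
  name: production         # placeholder
spec:
  replicas: 2
  selector:                # required in apps/v1
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0.0   # placeholder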

As it turns out, GitLab Auto DevOps indeed does not support Kubernetes 1.16 as of yet. They're working on it.
See also the issue I opened on GitLab.
I assume it's possible to fork the Helm chart project and build your own version, but I'm not willing to go to those lengths.

As it turns out, GitLab Auto DevOps indeed does not support Kubernetes 1.16 as of yet.
That might change with GitLab 13.5 (October 2020)
Incremental rollout via AutoDevOps is compatible with Kubernetes 1.16
Clusters in GKE were automatically upgraded to Kubernetes v1.16 on October 6th, 2020. We updated Auto DevOps to support this version for incremental rollouts to continue working as expected. This upgrade affects users who continuously deploy to production using timed incremental rollout, and those who automatically deploy to staging but manually deploy to production.
See Documentation and Issue.
And that comes with, also in GitLab 13.5:
Upgrade Auto DevOps to Helm 3
Auto DevOps aims to bring outstanding ease of use and security best practices to its users, out of the box. Until now, Auto DevOps in Kubernetes environments required Helm v2 to be installed on the cluster. This presented a security risk given Tiller’s root access rights. With the introduction of Helm v3, Tiller is no longer a requirement.
The current GitLab version finally supports Helm v3, so you can rest assured you're getting the latest and greatest functionality and security updates. In the case of a GitLab Managed Cluster, you can upgrade your Helm installation by following our documentation. Note that Helm 2 support is expected to end around November 2020.
See Documentation and Issue.
As I mention here (May 2021), Helm v2 will no longer be supported starting with GitLab 14.0 (June 2021).

Related

How to migrate from certificate-based Kubernetes integration to the Gitlab agent for Kubernetes

I have had some integration pipelines working for more than a year without any problem, and I realize that this month, February 2023, support for the certificate-based Kubernetes integration ends. So I have to migrate to something called the GitLab agent, which apparently appeared only recently and I had not noticed it. I have created the agent in my Kubernetes cluster without problems, following the GitLab documentation, but I have a small problem.
How can I tell my CI/CD workflow to stop using my old certificate-based integration and start using my new GitLab agent to authenticate/authorize/integrate with my Kubernetes cluster?
I have followed these instructions https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html
But the part I'm not quite clear on is what additional instructions I should add to my .gitlab-ci.yml file to tell it to use the GitLab agent.
I have already created the GitLab agent and its config.yaml, and GitLab says it is connected to my GKE cluster.
I tried adding a section to my config.yaml like this:
ci_access:
  projects:
    - id: path/to/project
Either way, the integration pipeline still worked without problems, but I suspect it is only working because it is still connected the old way. Is there any way to make sure that it is working through the GitLab agent?
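For the .gitlab-ci.yml side, the documented pattern is to select the kubectl context that the agent exposes to CI jobs; a minimal sketch, where path/to/agent-project, my-agent and my-namespace are placeholders:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The agent injects a kubeconfig whose context is named <agent project path>:<agent name>.
    - kubectl config use-context path/to/agent-project:my-agent
    - kubectl get pods --namespace my-namespace

If that context is selected and the old certificate-based cluster integration is disabled (or its variables removed), any kubectl call that still succeeds should be going through the agent.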

Build a Kubernetes Operator For rolling updates

I have created a Kubernetes application (say deployment D1, using Docker image I1) that will run on client clusters.
Requirement 1:
Now, I want to roll out updates whenever I update my Docker image I1, without any effort from the client side
(somehow, the client cluster should automatically pull the latest Docker image).
Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should automatically start using the new ConfigMap.
How should I achieve this?
Using Kubernetes CronJobs?
Kubernetes Operators?
Or something else?
I heard that k8s Operator can be useful
Starting with Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should automatically start using the new ConfigMap.
If the ConfigMap is mounted into the deployment as a volume, it will get auto-updated; however, if it is injected as environment variables, a restart is the only option, unless you are using a sidecar/reloader solution or restarting the process yourself.
For reference: Update configmap without restarting POD
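A minimal sketch of the volume-mount approach (all names are placeholders); keys mounted this way are refreshed inside the running pod after the kubelet sync delay, whereas values injected as environment variables are fixed at container start:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: d1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: d1
  template:
    metadata:
      labels:
        app: d1
    spec:
      containers:
        - name: app
          image: registry.example.com/i1:1.0.0   # placeholder
          volumeMounts:
            - name: app-config
              mountPath: /etc/app   # avoid subPath here: subPath mounts are not refreshed
      volumes:
        - name: app-config
          configMap:
            name: my-configmap      # the ConfigMap to pick up changes from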
How should I achieve this?
imagePullPolicy alone is not a good option as I see it: in that case, manual intervention is still required to restart the deployment so it pulls the latest image on the client side, and it won't happen in a controlled manner.
Using Kubernetes CronJobs?
On which side would you run the CronJobs? If client-side, it's fine to do it that way as well.
Otherwise, you can keep a deployment with an exposed API that runs a Job to update the deployment with the latest tag whenever an image gets pushed to your Docker registry.
Kubernetes Operators?
An operator is a good, native K8s option. You can write one in Go, Python, or your preferred language, with or without the Operator Framework or the client libraries.
Or something else?
If you are just looking to update the deployment, go with running the API in the deployment, or a Job you can schedule in a controlled manner; there is no issue with the operator either, and it would be a more native and solid approach if you can create, manage, and deploy one.
If in the future you have a requirement to manage all the clusters (deployments, services, firewall, network) of multiple clients from a single source of truth, you can explore Anthos:
Config management synced from a Git repo with Anthos
You can build a Kubernetes operator to watch your particular ConfigMap and trigger a restart of the affected deployment. As for the rolling updates, you can configure the deployment according to your requirements. A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Add the rolling update strategy to your Kubernetes Deployment's .spec section:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 3        # the maximum number of pods to be created beyond the desired count during the update
    maxUnavailable: 1  # the maximum number of unavailable pods during an update
(The timeoutSeconds, intervalSeconds and updatePeriodSeconds settings are not part of the core Deployment API; they come from OpenShift's DeploymentConfig rolling strategy.)

Any best way to create Kibana automated snapshot to GCP storage as I am using an older version of Kibana

Is there any good way to create an automated Kibana snapshot to GCP storage? I am using an older version of Kibana, 7.7.1, and I do not have any automated backup currently.
Kibana exposes snapshot lifecycle management (SLM), an Elasticsearch feature that helps you do this; it works with the basic license.
Here is a tutorial; you could also use the SLM API directly to create and automate this process, along with index lifecycle management.
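For reference, SLM is driven through the Elasticsearch snapshot APIs. A minimal sketch of the two pieces involved, assuming the repository-gcs plugin is installed on all nodes and a GCS bucket named my-es-snapshots exists (all names and the schedule are placeholders; each body below is the JSON payload of the corresponding request):

# PUT _snapshot/gcs-repo  -- register a GCS snapshot repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my-es-snapshots",
    "client": "default"
  }
}

# PUT _slm/policy/nightly-snapshots  -- snapshot all indices nightly, keep 30 days
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "gcs-repo",
  "config": { "indices": ["*"] },
  "retention": { "expire_after": "30d" }
}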

Swagger UI not updated when deploying to AKS

I have a Spring Boot application that exposes multiple APIs and uses swagger for documentation. This service is then deployed to AKS using Helm through Azure DevOps.
When running locally, the Swagger documentation looks updated; however, when I deploy it, the documentation goes back to the outdated version. I'm not really sure what is happening during deployment, and I am unable to find any help on the forums.
As far as I know, there is no caching taking place, but again I'm not sure.
It sounds like you suspect an incorrect version of your application is running in the cluster following a build and deployment.
Assuming things like local browser caching have been eliminated from the equation, review the state of deployments and/or pods in your cluster using CLI tools.
Run kubectl describe deployment <deployment-name>; the pod template will be displayed, which defines which image tag the pods should use. This should correlate with the tag your AzDO pipeline is publishing.
List the pods and describe them to see whether the expected image tag is what is actually running in the cluster after a deployment. If not, check the pods for failures: when describing a pod, pay attention to the lastState object if it exists. Use kubectl logs <podname> to troubleshoot in the application layer.
It can take a few minutes for the new pods to become available depending on configuration.
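One common cause of this symptom, worth ruling out, is deploying with a static image tag: if the rendered pod template does not change between releases, Kubernetes performs no rollout at all, and a reused tag combined with imagePullPolicy: IfNotPresent can also serve a stale cached image from the node. A minimal sketch of the relevant fields, assuming a typical values.yaml-driven Helm chart (the value names are placeholders):

# deployment.yaml template excerpt
spec:
  template:
    spec:
      containers:
        - name: app
          # Prefer a unique tag per build (e.g. the pipeline build ID) over a fixed tag,
          # so every deployment changes the pod template and triggers a rollout.
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: Always   # guards against stale node-cached images when a tag is reused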

Deploy service to Kubernetes via Go

I am trying to write a Go script which will redeploy a given deployment to a K8s cluster. Ideally, I would like the script to do something like this bash command:
KUBECONFIG="/kubeconfig" kubectl rollout restart --namespace $k8s_namespace "deployment/${service_name}"
I have been looking into the Kubernetes implementation on GitHub and the client-go code, but so far the only Go API I have found is for creating deployments, as stated here. Instead, I want to be able to reference an existing deployment in a given K8s namespace and do a rollout restart.
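For what it's worth, kubectl rollout restart does not call a dedicated API; it patches the deployment's pod-template annotations, which changes .spec.template and so triggers a new rollout. A minimal sketch of the patch body (the timestamp is a placeholder); from Go you can send it as a strategic merge patch via the Patch method of clientset.AppsV1().Deployments(namespace) in client-go:

spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2023-01-01T00:00:00Z"   # set to the current time when patching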
