How to migrate from certificate-based Kubernetes integration to the GitLab agent for Kubernetes

I have had some integration pipelines working for more than a year without any problems. I just realized that support for the certificate-based Kubernetes integration ends this month, February 2023, so I have to migrate to something called the GitLab agent for Kubernetes, which has apparently been around for a while without me noticing. I have created the agent in my Kubernetes cluster without problems by following the GitLab documentation, but I have a small problem.
How can I tell my CI/CD workflow to stop using my old certificate-based integration and start using my new GitLab agent to authenticate/authorize/integrate with my Kubernetes cluster?
I have followed these instructions https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html
But the part I'm not quite clear on is what additional instructions I should add to my .gitlab-ci.yml file to tell it to use the gitlab agent.
I have already created the GitLab agent and its config.yaml, and GitLab reports that the agent is connected to my GKE cluster.
I tried adding a block to my config.yaml like this:
ci_access:
  projects:
    - id: path/to/project
Either way, the pipeline keeps working without problems, but I am fairly sure it is only working because it is still connected the old way. Is there any way to confirm that it is actually going through the GitLab agent?
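For reference, here is a minimal sketch of what the .gitlab-ci.yml side could look like once ci_access has been granted. The agent project path path/to/agent-project, the agent name my-agent, the namespace, and the bitnami/kubectl image are placeholders and assumptions, not values taken from the question:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # Select the kubectl context that GitLab injects for the agent;
    # the context name has the form <agent project path>:<agent name>.
    - kubectl config use-context path/to/agent-project:my-agent
    # From here on, kubectl talks to the cluster through the agent,
    # not through the old certificate-based integration.
    - kubectl get pods -n my-namespace

One way to confirm the agent is actually being used is to disable or remove the old certificate-based cluster integration and re-run the pipeline: if the kubectl commands still succeed with this context selected, they are going through the agent.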

Related

Swagger UI not updated when deploying to AKS

I have a Spring Boot application that exposes multiple APIs and uses swagger for documentation. This service is then deployed to AKS using Helm through Azure DevOps.
When running locally, the swagger documentation looks up to date; however, when I deploy it, the documentation reverts to the outdated version. I'm not really sure what is happening during deployment and I am unable to find any help on the forums.
As far as I know, there is no caching taking place, but again I'm not sure.
It sounds like you suspect an incorrect version of your application is running in the cluster following a build and deployment.
Assuming things like local browser caching have been eliminated from the equation, review the state of deployments and/or pods in your cluster using CLI tools.
Run kubectl describe deployment <deployment-name>; the output includes the pod template, which defines which image tag the pods should use. This should correlate with the tag your AzDO pipeline is publishing.
List the pods and describe them to see if the expected image tag is what is running in the cluster after a deployment. If not, check the pods for failures - when describing the pod, pay attention to the lastState object if it exists. Use kubectl logs <podname> to troubleshoot in the application layer.
It can take a few minutes for the new pods to become available depending on configuration.
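For orientation, the relevant part of a Deployment manifest looks roughly like the sketch below (resource names, labels, registry, and tag are placeholders). The image: line inside the pod template is what kubectl describe deployment surfaces and what should match the tag the pipeline publishes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          # This tag should change on every deployment; if it does not,
          # the old image keeps running and the docs stay outdated.
          image: myregistry.azurecr.io/my-service:1.4.2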

Set up the jobs of CI in a VM server instead of docker image

GitLab CI is highly integrated with Docker, but in some cases the application needs to interact with another app that cannot be deployed in Docker,
so I want my jobs (in .gitlab-ci.yml) to run on a Linux VM server.
How can I set that up in GitLab? I searched many websites but didn't find the answer.
Thank you.
You can use different executors with GitLab. For your case, you should install GitLab Runner on the VM as a shell executor and register it (providing the registration token obtained from the repository).
https://docs.gitlab.com/runner/install/linux-repository.html
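As a sketch, once the runner on the VM has been registered with a tag (here vm-shell, a placeholder chosen at registration time), jobs can be routed to it from .gitlab-ci.yml like this; the script step is illustrative only:

test-on-vm:
  tags:
    - vm-shell          # must match a tag assigned to the shell-executor runner
  script:
    # These commands run directly on the Linux VM, not inside a Docker
    # container, so they can reach the app that cannot be containerized.
    - ./run-integration-tests.sh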

Gitlab CI/CD with Kubernetes 1.16: Does my production job fail due to the Kubernetes version?

We are using Gitlab's CI/CD Auto DevOps on gitlab.com with a connected Kubernetes cluster. Until recently, we ran on Azure, but have now decided to switch over to digitalocean. The build / deploy pipeline used to run fine on Azure, but when I run it now on our fresh cluster, I’m getting this error during the "production" job:
$ auto-deploy deploy
secret/production-secret replaced
Deploying new release...
Release "production" does not exist. Installing it now.
Error: validation failed: unable to recognize "": no matches for kind
"Deployment" in version "extensions/v1beta1"
ERROR: Job failed: exit code 1
After doing some googling, I found this release announcement for Kubernetes 1.16, which states that the Deployment resource has been moved up from extensions/v1beta1 to (eventually) apps/v1, and - more importantly - has been dropped from extensions/*:
https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/
The Kubernetes version used on digitalocean is indeed 1.16.2. I do not recall the version we used on Azure, but judging from the article’s date, the 1.16 release is somewhat recent (September 2019).
As far as I can tell, the deployment algorithm is implemented inside Gitlab's "auto-deploy" image, specifically this script, but I fail to see where I can adapt the specific kubectl commands being executed.
My question is this: Am I right in assuming that this issue is caused by Gitlab’s CI/CD using a pre-1.16 notation to automatically create Deployments on Kubernetes clusters? If so, how can I adapt the deployment script to use the apps/v1 scope?
I was getting the same error with CircleCI and found that the kubectl client version was 1.13 against a Kubernetes server version of 1.15.4, so refer to this Stack Overflow post and try pinning the kubectl version used in GitLab to match the one your current Kubernetes cluster reports.
You can run
kubectl version
to get the client and server versions, then match the GitLab kubectl with the updated one.
Use 'apps/v1' for 'apiVersion'; 'extensions/v1beta1' has been a deprecated apiVersion for Deployment for several releases and was removed entirely in Kubernetes 1.16.
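Concretely, that change in a Deployment manifest would look something like this (names and image are placeholders; note that under apps/v1 the spec.selector field is mandatory and must match the pod template labels):

# Rejected on Kubernetes 1.16 and later:
# apiVersion: extensions/v1beta1
# kind: Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
spec:
  selector:             # required under apps/v1
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest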
As it turns out, Gitlab Auto DevOps indeed does not support Kubernetes 1.16 as of yet. They're working on it.
See also the issue I opened on gitlab.
I assume it's possible to fork the helm chart project and build your own version, but I'm not willing to go to those lengths.
As it turns out, Gitlab Auto DevOps indeed does not support Kubernetes 1.16 as of yet.
That might change with GitLab 13.5 (October 2020)
Incremental rollout via AutoDevOps is compatible with Kubernetes 1.16
Clusters in GKE were automatically upgraded to Kubernetes v1.16 on October 6th, 2020. We updated Auto DevOps to support this version for incremental rollouts to continue working as expected. This upgrade affects users who continuously deploy to production using timed incremental rollout, and those who automatically deploy to staging but manually deploy to production.
See Documentation and Issue.
And that comes with, still with GitLab 13.5:
Upgrade Auto DevOps to Helm 3
Auto DevOps aims to bring outstanding ease of use and security best practices to its users, out of the box. Until now, Auto DevOps in Kubernetes environments required Helm v2 to be installed on the cluster. This presented a security risk given Tiller’s root access rights. With the introduction of Helm v3, Tiller is no longer a requirement.
The current GitLab version finally supports Helm v3, so you can rest assured you're getting the latest and greatest functionality and security updates. In the case of a GitLab Managed Cluster, you can upgrade your Helm installation by following our documentation. Note that Helm 2 support is expected to end around November 2020.
See Documentation and Issue.
As I mention here (May 2021), Helm V2 will be not supported starting GitLab 14.0 (June 2021).

Advice for Continuous Integration / Development

I've got a Docker based PHP project. PHP framework is Laravel.
The project is setup in Gitlab and I use Jenkins for CI/CD.
When I merge into the master branch, a new build is triggered in Jenkins. I clone the repo, run Unit tests etc etc.
Once completed, I build a new Docker image with the latest codebase inside and push this image up to the Docker registry.
My jenkinsfile then calls a script on the production server that pulls down the latest docker image and stops / starts the running container.
I setup a Nginx proxy/Load balancer so users do not see any down time during the starting and stopping of containers.
This workflow works very well but I have one issue:
The storage folder in Laravel gets wiped when I do a new deployment, so any files uploaded by users are lost.
How do I overcome this?
I've recently started working on a new version of the project that sends all file uploads to Digital Ocean Spaces but I've found this to be very very slow.
I'm assuming S3 will be the same.
All suggestions are welcome.
My solution was to map a volume in the container to the host when I started my Docker container.
I also had to set permissions, but now I have persistence across deployments.
No requirement for S3 or Spaces.
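For illustration, a docker-compose style sketch of that kind of volume mapping (the image name and host path are placeholders, and /var/www/html/storage assumes a typical Laravel layout inside the container):

services:
  app:
    image: registry.example.com/laravel-app:latest
    volumes:
      # Keep Laravel's storage directory on the host so user uploads
      # survive when the container is stopped and replaced on deploy.
      - /srv/laravel/storage:/var/www/html/storage

The host path just needs to be writable by the user the PHP process runs as inside the container.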

TeamCity Build Agent on CloudFoundry?

Is it possible to deploy TeamCity build agents onto CloudFoundry?
Is there an OSS project/buildpack for doing something like this?
I'm very new to cloud foundry; how would someone go about creating one?
I am not sure I understand what you are trying to achieve. If you would like to run TeamCity agents on virtual machines, there is good built-in support for cloud providers in TeamCity; two come out of the box: VMware and AWS. If you need to replicate an environment/blueprint during your build process using Cloud Foundry, there is a programmatic API for this.
