Travis for CI/CD - continuous-integration

We are planning to move from Jenkins to Travis for all our microservices and have a question about CD to different environments.
For example, we have a microservice Git repository with just a master branch and three different environments on AWS: dev, test and production. We can successfully build a Docker image and push it to AWS ECR.
After going through multiple resources, it looks like many of them suggest having different Git branches for deploying to different environments, which in my opinion is overkill.
Is there any alternative with which we can deploy to different environments without having multiple branches?

How about this method?
It'll enable you to deploy via untagged, tagged, or specifically tagged commits. Simply swap the Cloud Foundry example shown there for your AWS deployment.
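For illustration, here is a rough .travis.yml sketch of that tag-based approach, assuming a deploy.sh helper in the repo that takes the target environment; the script name, the prod- tag convention and the ECR_REPO variable are placeholders, not something Travis provides:

# Hypothetical sketch: one master branch, three environments selected by tag.
language: generic
services:
  - docker

script:
  - docker build -t "$ECR_REPO:$TRAVIS_COMMIT" .

deploy:
  # Untagged commits on master -> dev
  - provider: script
    script: ./deploy.sh dev "$TRAVIS_COMMIT"
    skip_cleanup: true
    on:
      branch: master
      tags: false
  # Any tag that is not a production tag -> test
  - provider: script
    script: ./deploy.sh test "$TRAVIS_TAG"
    skip_cleanup: true
    on:
      tags: true
      condition: "! $TRAVIS_TAG =~ ^prod-"
  # Tags like prod-1.2.3 -> production
  - provider: script
    script: ./deploy.sh production "$TRAVIS_TAG"
    skip_cleanup: true
    on:
      tags: true
      condition: "$TRAVIS_TAG =~ ^prod-"

With a layout like this, every push to master goes to dev, any ordinary tag promotes that commit to test, and only tags starting with prod- reach production, all from a single branch.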

Related

How to migrate from certificate-based Kubernetes integration to the Gitlab agent for Kubernetes

I have had some integration pipelines working for more than a year without any problem. Then I realized that support for the certificate-based Kubernetes integration ends this month, February 2023, so I have to migrate to something called the GitLab agent, which apparently appeared only recently and I had not noticed. I have created the agent without problems in my Kubernetes cluster following the GitLab documentation, but I have a small problem.
How can I tell my CI/CD workflow to stop using my old certificate-based integration and start using my new GitLab agent to authenticate/authorize/integrate with my Kubernetes cluster?
I have followed these instructions https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html
But the part I'm not quite clear on is what additional instructions I should add to my .gitlab-ci.yml file to tell it to use the GitLab agent.
I have already created the GitLab agent and its config.yaml, and GitLab shows it as connected to my GKE cluster.
I tried adding a section to my config.yaml like this:
ci_access:
  projects:
    - id: path/to/project
The pipeline still runs without problems, but I suspect it only works because it is still connected the old way. Is there any way to make sure it is actually going through the GitLab agent?
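One way to check is to make the job select the agent's kubecontext explicitly in .gitlab-ci.yml; if that context works (and keeps working once the certificate-based integration is removed), traffic is going through the agent. A rough sketch, where path/to/agent/project and my-agent are placeholders for your own agent project path and agent name:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The agent exposes a context named <agent project path>:<agent name>
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/project:my-agent
    - kubectl get pods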

CI/CD involving EC2

Our code is divided into modules and stored on a local Git server. The various modules are built and uploaded to ECR.
Question: currently we can run the deployment on a certain EC2 instance manually. What would be the preferred way for my local Jenkins server to run the deploy actions on that EC2 instance?
Note: I've worked with SSM in the past and was left with a bad impression!
Thx - Albert

How to execute a script on an EC2 instance from bitbucket-pipelines.yml?

I have a Bitbucket repository, a Bitbucket pipeline there, and an EC2 instance. The EC2 instance has access to the repository (it can perform pull and docker build/run).
So it seems I only need to upload some bash scripts to EC2 and call them from the Bitbucket pipeline. How can I call them? Usually an SSH connection is used to run scripts on EC2; is that applicable from a Bitbucket pipeline, and is it a good solution?
There are two ways to solve this problem; I will leave the choice up to you.
I see you are using AWS, and AWS has a nice service called CodeDeploy. You can use that, create a few deployment scripts, and then integrate it with your pipeline. The problem with it is that it relies on an agent that needs to be installed on the instance, so it will consume some resources; not much, but if you are looking for an agentless design then this solution won't work. You can check the example in the following answer: https://stackoverflow.com/a/68933031/8248700
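For reference, this is roughly the shape of the appspec.yml that CodeDeploy reads from the root of the deployment bundle; the destination path and script names below are placeholders:

version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp
hooks:
  ApplicationStop:
    - location: scripts/stop.sh
      timeout: 60
  AfterInstall:
    - location: scripts/start.sh
      timeout: 120
      runas: root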
Or you can use something like Python Fabric (a small gun) or Ansible (a big cannon) to achieve this. Both are agentless designs that work purely over SSH.
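To give a flavour of the Ansible option, here is a tiny playbook sketch that replaces a running container over SSH; the host group, container name, port mapping and the image variable are assumptions:

- hosts: ec2_app_servers
  become: true
  tasks:
    - name: Pull the latest image from ECR
      command: docker pull "{{ image }}"

    - name: Replace the running container
      shell: |
        docker rm -f myapp || true
        docker run -d --name myapp -p 80:8080 "{{ image }}"

You would run it with something like ansible-playbook -i inventory deploy.yml -e image=<your ECR image>.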
I'm using both approaches for different scenarios: for AWS I use CodeDeploy, and for any other cloud vendor I use Python Fabric. (You can use CodeDeploy outside AWS as well, but then it falls under on-premises pricing, which charges per deployment.)
I hope this brings some clarity.
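As for the plain-SSH route mentioned in the question, it does work from Bitbucket Pipelines: add an SSH key pair under Repository settings > Pipelines > SSH keys and call the script directly. A minimal sketch, where the user, IP and script path are placeholders:

pipelines:
  branches:
    master:
      - step:
          name: Deploy to EC2 over SSH
          script:
            # Runs the script that already lives on the instance
            - ssh ec2-user@203.0.113.10 'bash ~/deploy.sh'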

Set up CI jobs on a VM server instead of a Docker image

GitLab CI is highly integrated with Docker, but in some cases the application needs to interact with another app which cannot be deployed in Docker,
so I want my jobs (in .gitlab-ci.yml) to run on a Linux VM server.
How can I set that up in GitLab? I searched many websites but didn't find the answer.
Thank you.
You can use different executors with GitLab. For your case, you should set up GitLab Runner on the VM with the shell executor and register it (providing it with the registration token obtained from the repo):
https://docs.gitlab.com/runner/install/linux-repository.html
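Once the runner is registered, you route jobs to it from .gitlab-ci.yml with a tag. A minimal sketch, assuming the runner was registered with a tag such as vm-shell (the tag name and script are placeholders):

integration_test:
  stage: test
  tags:
    - vm-shell   # matches the tag given to the shell-executor runner on the VM
  script:
    - ./run_tests_against_local_app.sh   # executes directly on the Linux VM, not in Docker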

Why do you need staging and production on the same K8s instance when using Jenkins X?

So I'm not seeing why you would want Jenkins X to install staging and production on the same Kubernetes server as itself. Does this not mean every team has its own production?
I could understand having Jenkins X and staging on one server, and then having another server for production.
So with Jenkins X, each team has their own Environments like Staging and Production.
When installing Jenkins X via the jx create cluster command, we default to creating the team's environments in different namespaces in the Kubernetes cluster.
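For context, each of those environments is represented by a jenkins.io/v1 Environment resource pointing at its own namespace and Git repository; a rough sketch, where the names, namespace and URL are placeholders:

apiVersion: jenkins.io/v1
kind: Environment
metadata:
  name: staging
  namespace: jx
spec:
  label: Staging
  kind: Permanent
  namespace: jx-staging          # the team's staging namespace
  order: 100
  promotionStrategy: Automatic
  source:
    url: https://github.com/myorg/environment-mycluster-staging.git
    ref: master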
Obviously you could use different clusters for different teams; so each team could use a separate cluster.
There is also a common request to use separate clusters (and cloud service accounts) for a team's different environments, e.g. the Dev environment on one cluster, Staging on another and Production on yet another.
We are working on making multi-cluster configuration easy to set up; for now it's a manual process.
