I am deploying a series of Cloud Build Triggers through Terraform, but I also want Terraform to run each deployed trigger once so that it can do the initial deployment.
The Cloud Build Triggers are used to deploy Cloud Functions (and also Cloud Run, and maybe Workflows). We could deploy the functions from Terraform, but we want to keep the deploy command easy to modify, so we don't want to duplicate it in both Terraform and the Cloud Build config.
For the clarity and the evolvability/maintainability of your pipeline, it's important to clearly separate the concerns of each step.
You have a (set of) step(s) to deploy the infrastructure of your project (here, your Terraform).
You have a (set of) step(s) that runs processes on your project (this can be an Ansible script on a VM, or invoking Cloud Functions, Cloud Run, or a Cloud Build trigger).
I'm pretty sure that you can run this trigger from Terraform, but I strongly recommend against doing this.
Edit 1
I wasn't clear. You have to run your trigger via the API after the Terraform deployment, in your main pipeline. Subsequent runs will then be triggered by pushes to the Git repository.
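As a minimal sketch, assuming your main pipeline is itself a Cloud Build config and the trigger is named deploy-my-function (both names are hypothetical), the step after terraform apply can call the trigger with gcloud. Note that gcloud builds triggers run may still require the beta component depending on your gcloud version:

    # Main pipeline cloudbuild.yaml (sketch; trigger and branch names are hypothetical)
    steps:
      # 1. Deploy the infrastructure, including the Cloud Build triggers
      #    (terraform init omitted for brevity)
      - name: 'hashicorp/terraform'
        args: ['apply', '-auto-approve']
      # 2. Run the freshly created trigger once, via the API, for the initial deployment
      - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
        entrypoint: 'gcloud'
        args: ['builds', 'triggers', 'run', 'deploy-my-function', '--branch=main']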
Related
I have a standalone Cloud Source Repository (not mirrored from GitHub).
I am using this to automate the deployment of ETL pipelines, so I am following Google's recommended guidelines, i.e. committing each ETL pipeline as a .py file.
The Cloud Build trigger associated with the Cloud Source Repository runs the steps defined in the cloudbuild.yaml file and puts the resulting .py file in the Composer DAG bucket.
Composer will pick up this DAG and run it.
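Roughly, the cloudbuild.yaml looks like this (a simplified sketch; the bucket name and file paths are placeholders, not my real values):

    # cloudbuild.yaml (sketch; bucket name and paths are placeholders)
    steps:
      # Run tests on the ETL pipeline code
      - name: 'python:3.9'
        entrypoint: 'python'
        args: ['-m', 'pytest', 'tests/']
      # Copy the resulting DAG file into the Composer DAG bucket
      - name: 'gcr.io/cloud-builders/gsutil'
        args: ['cp', 'dags/etl_pipeline.py', 'gs://my-composer-bucket/dags/']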
Now my question is: how do I orchestrate the CI/CD across dev and prod? I did not find any proper documentation for this, so as of now I am following a manual approach: if my code passes in dev, I commit the same code to the prod repo. Is there a better way to do this?
Cloud Build Triggers allow you to conditionally execute a cloudbuild.yaml file in various ways. Have you tried setting up a trigger that fires only on changes to a dev branch?
Further, you can add substitutions to your trigger and use them in the cloudbuild.yaml file to, for example, name the generated artifacts based on some aspect of the input event.
See: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values and https://cloud.google.com/build/docs/configuring-builds/use-bash-and-bindings-in-substitutions
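For instance, here is a minimal sketch (the _ENV substitution, the bucket naming scheme, and the file names are my assumptions, not from your setup): a dev trigger and a prod trigger can share the same cloudbuild.yaml and differ only in a user-defined substitution:

    # cloudbuild.yaml (sketch; _ENV is a user-defined substitution set per trigger)
    steps:
      - name: 'gcr.io/cloud-builders/gsutil'
        # Name the artifact after the environment and the commit that produced it
        args: ['cp', 'dags/etl_pipeline.py', 'gs://my-dag-bucket-${_ENV}/dags/etl_pipeline_${SHORT_SHA}.py']
    substitutions:
      _ENV: 'dev'  # default; the prod trigger overrides this with 'prod'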
I have an AWS CodePipeline which uses CodeBuild as the build step and deploys Lambda functions. This pipeline is triggered upon any commit to the development branch, which houses multiple Lambda functions. Right now, since all these Lambdas use the same pipeline, they have the same build job as well.
The problem is: what happens if one of my Lambdas has a different requirement in the build step (say, installing a library)? Is there any way to trigger a different build job for a specific Lambda? I am guessing this touches on the age-old issue of CodePipeline being unable to deal with monorepos, but any suggestions are welcome.
You could integrate change detection for your Lambda functions. The only thing you need is to check out the source separately in the job so that you have the .git folder (see: https://forums.aws.amazon.com/thread.jspa?threadID=251732).
Afterwards you can easily check with git which Lambda function actually changed and run your pre-build commands based on the result.
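A minimal sketch of that check in a CodeBuild buildspec (it assumes one top-level directory per Lambda and that the .git folder is available, per the thread linked above; the directory layout is hypothetical):

    # buildspec.yml (sketch; assumes one top-level directory per Lambda function)
    version: 0.2
    phases:
      pre_build:
        commands:
          # List the top-level directories touched by the last commit
          - CHANGED_DIRS=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)
          - echo "Changed directories: $CHANGED_DIRS"
      build:
        commands:
          # Only run the per-Lambda build steps for directories that actually changed
          - |
            for dir in $CHANGED_DIRS; do
              if [ -f "$dir/requirements.txt" ]; then
                pip install -r "$dir/requirements.txt" -t "$dir/"
              fi
            done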
I have a GitLab CI/CD pipeline which builds, tests, and pushes my project's container to the GitLab container registry successfully. But now I am wondering how I can automate the deployment stage too. Currently I am doing it manually: after each successful pipeline, I SSH into my server and run several commands to pull the latest images from the GitLab.com container registry and then run them. I would like to make this step automated as well, yet I don't know how.
Actually, I have seen some examples of opening an SSH session from the CI/CD pipeline, but it doesn't feel secure enough. So I was wondering: is there a better way, or do I actually have to do this?
Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.
There are many ways to achieve this, depending on your setup, other requirements, scale, etc.
I'll just give you two options.
I. Kubernetes
create a cluster (i.e. control plane) somewhere
add your cluster to GitLab (GitLab can now even create a cluster for you in AWS and GCP, check this page)
attach your target machine as a worker node to the cluster
create Kubernetes YAML files / a Helm chart for your application and deploy via the usual ways, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you (see the sketch after this list)
This is quite complex, but it is sort of the "right" way of doing things.
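For illustration, a minimal manifest you would kubectl apply -f (the names and image path are placeholders):

    # deployment.yaml (sketch; names and image path are placeholders)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.gitlab.com/my-group/my-project:latest
              ports:
                - containerPort: 8080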
II. Private GitLab runner
go to Settings > CI/CD > Runners of your GitLab project or group
obtain the registration token
install your own GitLab runner right on the target machine, and register it on the GitLab server using the registration token (see example)
give the runner some specific tag
use that tag in your .gitlab-ci.yml file (see documentation)
then the deployment process is just a local docker pull ... and docker run ... for your image (see the sketch below)
This is a lot simpler, but it is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
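A minimal sketch of such a deploy job (the runner tag, container name, and ports are placeholders; the $CI_REGISTRY_* variables are predefined by GitLab CI):

    # .gitlab-ci.yml deploy job (sketch; tag, container name and ports are placeholders)
    deploy:
      stage: deploy
      tags:
        - my-server   # routes this job to the private runner on the target machine
      script:
        # Pull the freshly built image from the GitLab.com registry and run it locally
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker pull "$CI_REGISTRY_IMAGE:latest"
        - docker stop my-app || true
        - docker rm my-app || true
        - docker run -d --name my-app -p 80:8080 "$CI_REGISTRY_IMAGE:latest"
      only:
        - master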
I have a Golang app running on Google Cloud App Engine that I can update manually with "gcloud app deploy", but I cannot figure out how to schedule automatic redeployments. I'm assuming I have to use cron.yaml, but then I'm confused about what URL to use. Basically it's just a web app with one main index.html page with changing content, and I would like to schedule automatic redeployments. How do I go about that?
If you want to automatically re-deploy your app when the code changes, you need what's called CI/CD (Continuous integration/deployment). What a CI does is, for each new commit to your repository, check out the new code and run a test script. If all the tests pass (or if you don't have any tests at all), the CI server can then deploy your code to App Engine, all automatically.
One free (for open-source projects) CI provider is Travis CI. To configure it, you need to make an account with Travis and add a file called .travis.yml to the root of your repository. To set up App Engine deploys, you can follow this guide to set up a service account and add the encrypted key file to your repo. It will run gcloud app deploy from a container on their servers whenever you push code to a certain branch (master by default) in your repo.
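A minimal sketch of what that .travis.yml might look like (the project ID and key file name are placeholders; the exact deploy options are covered in the guide above):

    # .travis.yml (sketch; project ID and key file name are placeholders)
    language: go
    script: go test ./...
    deploy:
      provider: gae            # Travis's built-in App Engine deploy provider
      project: my-gcp-project
      keyfile: service-account.json  # decrypted from the encrypted file, per the guide
      on:
        branch: master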
Another option, which avoids setting up CI at all, is to simply change your app to generate the dynamic parts of the page when it gets requested. Reading the documentation for html/template would point you in the right direction.
My team and I were setting everything up so that Forge was in charge of deployment exclusively, while a CI cloud service would run unit/integration tests on each push to develop or master (staging or production, respectively).
Given the fact that Forge will trigger a deployment on each push to master (or any other branch), where does the CI server fit into this model? Can I get a quick explanation of the workflow (and, if possible, an example CI cloud service that would work with it)?
In addition to the auto-deploy trigger, Forge provides a deploy hook URL that can be called to trigger the deployment script. Usually the CI cloud service provides a way to customize the test/deployment process with some sort of bash script (curl) or gives an option to call a URL after a successful run.
For example, I used to use Codeship for CI, and it has an option in the settings called Deployment where I could insert a custom script that calls the trigger URL, like curl -X GET https://forge.laravel.com/servers/xxx/sites/xxx/deploy/http?token=xxx
In short:
deactivate the auto-deploy trigger
customize the CI settings and call the Forge hook after a successful run (see the sketch below)
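If your CI service is configured via a file rather than a settings UI, the same idea looks roughly like this (a generic sketch; job and stage naming vary per CI service, and the URL placeholders are kept from the example above):

    # Generic CI config (sketch; job/stage naming varies per CI service)
    deploy:
      stage: deploy
      script:
        # Only reached if the test stages passed; ping Forge to pull and deploy
        - curl -X GET "https://forge.laravel.com/servers/xxx/sites/xxx/deploy/http?token=xxx"
      only:
        - master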