Deploy code from GitLab on EC2 without a .gitlab-ci.yml file - amazon-ec2

I am using GitLab as a repository and want to push my code to EC2 whenever a commit is made on GitLab. The GitLab CI/CD documentation states that I have to add a .gitlab-ci.yml file at the root directory of my repo. This is actually a problem for me because I want the project repo to contain only code, not configuration-related info like build and deploy steps. Also, anybody who clones the repo would then have access to the location where my code is pushed/deployed on EC2. Is there any workaround for this problem?

You'll need to use a .gitlab-ci.yml file to deploy your application. The file provides instructions and a pipeline "infrastructure" which, if properly configured, will build, test and automatically deploy your code.
If you are worried about leaking credentials, you should use the built-in CI/CD variables (which can be masked) to keep your important bits out of the file, like a "$SERVERNAME" or "$DB_PASSWORD" for instance.
Lastly, you can use the power of .gitignore so that your credentials and other sensitive files are never committed, and therefore never published to your projects' servers or instances.
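As a rough illustration, here is a minimal .gitlab-ci.yml sketch that deploys to EC2 over SSH using only masked CI/CD variables; the variable names ($EC2_HOST, $EC2_USER, $SSH_PRIVATE_KEY, $DEPLOY_PATH) and the rsync-based copy are assumptions for the example, not something mandated by GitLab:

# .gitlab-ci.yml - deploy job that reads every deployment detail from CI/CD variables
deploy_to_ec2:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client rsync     # tools needed for the copy
    - eval $(ssh-agent -s)
    # $SSH_PRIVATE_KEY is a masked CI/CD variable holding the deploy key
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  script:
    # $EC2_USER, $EC2_HOST and $DEPLOY_PATH are also CI/CD variables,
    # so none of the deployment details live in the repository itself
    - rsync -az --delete -e "ssh -o StrictHostKeyChecking=no" ./ "$EC2_USER@$EC2_HOST:$DEPLOY_PATH"
  only:
    - main

This way the file in the repo only describes the shape of the pipeline; every server-specific value stays in the project's CI/CD settings.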

Related

Using Cloud source repository in a GCP production project

I have a standalone Cloud Source repository (not cloned from GitHub).
I am using this to automate deploying of ETL pipelines, so I am following Google's recommended guidelines, i.e. committing the ETL pipeline as a .py file.
The Cloud Build trigger associated with the Cloud Source repository will run the code as specified in the cloudbuild.yaml file and put the resultant .py file on the Composer DAG bucket.
Composer will pick up this DAG and run it.
Now my question is, how do I orchestrate the CI/CD across dev and prod? I did not find any proper documentation on this, so as of now I am following a manual approach: if my code passes in dev, I commit the same to the prod repo. Is there a way to do this better?
Cloud Build triggers allow you to conditionally execute a cloudbuild.yaml file in various ways. Have you tried setting up a trigger that fires only on changes to a dev branch?
Further, you can add substitutions to your trigger and use them in the cloudbuild.yaml file to, for example, name the generated artifacts based on some aspect of the input event.
See: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values and https://cloud.google.com/build/docs/configuring-builds/use-bash-and-bindings-in-substitutions
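As an illustration only (the substitution name _COMPOSER_DAG_BUCKET and the pipelines/ path are assumptions about your layout), a dev trigger and a prod trigger could reuse the same cloudbuild.yaml and simply supply different bucket names:

# cloudbuild.yaml - copy the committed DAG file into the Composer bucket
# _COMPOSER_DAG_BUCKET is provided by the Cloud Build trigger as a substitution,
# so the dev trigger and the prod trigger can point at different Composer environments.
steps:
  - name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'pipelines/*.py', 'gs://${_COMPOSER_DAG_BUCKET}/dags/']
substitutions:
  _COMPOSER_DAG_BUCKET: 'my-dev-composer-bucket'   # default, overridden per trigger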

How to prevent GitLab CI/CD from deleting the whole build

I'm currently having a frustrating issue.
I have a setup of GitLab CI on a VPS server, which is working completely fine, I have my pipelines running without a problem.
The issue comes after having to redo a pipeline. Each time, GitLab deletes the whole folder where the build is and builds it again to deploy it. My problem is that I have an "uploads" folder that stores all the content users have uploaded, and each time I redo a pipeline everything in this folder gets deleted; I obviously need this content, because it's the purpose of the app.
I have tried the GitLab CI cache - no luck. I have also tried making a new folder that isn't in the repository; it deletes that too.
Running my first job looks like so: [job log screenshot]
As you can see, there are a lot of lines that say "Removing ..."
In order to persist a folder with local files while running CI pipelines, the best approach is to use Docker data persistency: you can delete everything from the last build while keeping the local files your application uses between builds, and you retain the ability to start from scratch every time a new pipeline starts.
Bind-mount volumes
Volumes managed by Docker
GitLab's CI/CD Documentation provides a short briefing on how to persist storage between jobs when using Docker to build your applications.
I'd also like to point out that if you're using GitLab Runner through SSH, they explicitly state they do not support caching between builds when using this executor. Even when using the standard Shell executor, they highly discourage saving data to the Builds folder, so it can be argued that the best-practice approach is to use a bind-mount volume on your host and isolate the application from the user-uploaded data.
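For illustration, a bind-mount could look like the docker-compose sketch below (the host path /srv/my-app/uploads and the container path /var/www/app/uploads are assumptions about your layout); the uploads then live on the host, outside the folder the pipeline wipes and rebuilds:

# docker-compose.yml - keep user uploads on the host, outside the build directory
services:
  app:
    image: registry.example.com/my-app:latest    # whatever image your pipeline produces
    volumes:
      # bind-mount: host path on the left, container path on the right
      - /srv/my-app/uploads:/var/www/app/uploads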

How should I deploy the result of the CI/CD pipeline on my production server

I have a GitLab CI/CD pipeline which builds, tests and pushes my project's container to the GitLab container registry successfully. But now I am wondering how I can automate the deployment stage too. Currently I do it manually: after each successful pipeline I SSH to my server and run several commands to pull the latest images from the GitLab.com container registry and then run them. I would like to make this step automated as well, yet I don't know how.
Actually, I have seen some examples of opening an SSH session from the CI/CD pipeline, but it doesn't feel secure enough. So I was wondering, is there a better way for this, or do I actually have to do it that way?
Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.
There are many ways to achieve this, depending on your setup, other requirements, scale etc.
I'll just give you two options.
I. Kubernetes
create a cluster (i.e. a control plane) somewhere
add your cluster to GitLab (GitLab can now even create the cluster for you in AWS and GCP, check this page)
attach your target machine as a worker node to the cluster
create Kubernetes YAML files / a Helm chart for your application and deploy via the usual ways, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you
This is quite complex, but it is sort of the "right" way of doing things.
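As a rough sketch of that last step (image name, labels and port are placeholders), the manifest you would hand to kubectl apply -f could look like this:

# deployment.yaml - run the image that the pipeline pushed to the GitLab registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.gitlab.com/my-group/my-app:latest
          ports:
            - containerPort: 8080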
II. Private GitLab runner
go to Settings > CI/CD > Runners of your GitLab project or group
obtain the registration token
install your own GitLab runner right on the target machine, and register it on the GitLab server using the registration token, see example
give runner some specific tag
use that tag in your .gitlab-ci.yml file, see documentation
then the deployment process is just a local docker pull ... and docker run ... of your image
This is a lot simpler, but it is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
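A hedged sketch of what that deploy job might look like (the runner tag, container name and port mapping are assumptions; the CI_REGISTRY* variables are predefined by GitLab):

# .gitlab-ci.yml - deploy job that runs on the private runner installed on the target machine
deploy:
  stage: deploy
  tags:
    - my-target-machine            # the tag you gave your private runner
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    - docker rm -f my-app || true           # stop the previous container if it exists
    - docker run -d --name my-app -p 80:8080 "$CI_REGISTRY_IMAGE:latest"
  only:
    - main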

gcloud automatic redeployment Golang app

I have a Golang app running on Google Cloud App Engine that I can update manually with "gcloud app deploy", but I cannot figure out how to schedule automatic redeployments. I'm assuming I have to use cron.yaml, but then I'm confused about what URL to use. Basically it's just a web app with one main index.html page with changing content, and I would like to schedule automatic redeployments... how do I go about that?
If you want to automatically re-deploy your app when the code changes, you need what's called CI/CD (Continuous integration/deployment). What a CI does is, for each new commit to your repository, check out the new code and run a test script. If all the tests pass (or if you don't have any tests at all), the CI server can then deploy your code to App Engine, all automatically.
One free (for open-source projects) CI provider is Travis CI. To configure it, you need to make an account with Travis and create a file called .travis.yml in the root of your repository. To set up App Engine deploys, you can follow this guide to set up a service account and add the encrypted file to your repo. It will run a gcloud app deploy from a container on their servers whenever you push code to a certain branch (master by default) in your repo.
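For reference, a minimal .travis.yml along those lines might look like the sketch below; the project id is a placeholder and client-secret.json is the encrypted service-account key the linked guide has you create:

# .travis.yml - build the Go app and deploy it to App Engine on pushes to master
language: go
go:
  - "1.x"
deploy:
  provider: gae
  keyfile: client-secret.json    # decrypted from the encrypted file added per the guide
  project: my-gcp-project-id     # placeholder for your GCP project id
  on:
    branch: master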
Another option, which avoids setting up CI at all, is to simply change your app to generate the dynamic parts of the page when it gets requested. Reading the documentation for html/template would point you in the right direction.

Dev->Stage->Prod with Git deployment for Azure Websites

How best should I accomplish the following deployment objectives with Git deployment for Azure?
Easily switch, when working locally, between fake in-memory data and (eventually) a non-production snapshot of real data
Deploy to a staging environment on Azure such that at first I could use fake in-memory data and eventually move to a non-production snapshot of real data
Deploy to production with real data
I currently deploy to a staging Azure website using GitHub and a staging branch. Since I deploy from a public repo, the web.config file is ignored by git. (EDIT: I just learned that ignoring web.config actually causes a deployment error on Azure.)
Any help/suggestion is appreciated.
It's actually supposed to be simpler than that. Please see this page. Basically, the idea is that you set some AppSettings in the Azure portal to override the default values that are committed to your repo.
Well... Here's what I did that works for me right now.
To quickly switch to fake in-memory data locally, I use a compilation symbol LOCAL and a preprocessor directive #if LOCAL.
The same compilation symbol works when you deploy to Azure, so I can work on fake data until I'm ready to switch to the real db. I can also use the app settings if I really want to make switching easier.
The challenge was to keep a web.config with "secrets" (like connection string) locally and not expose it to Github. I added it to .gitignore, but then my deployments started failing on Azure because it could not find the web.config. Just copying it to wwwroot via ftp did not help - Azure was looking for web.config in the repository.
So, to make this work I "slightly" altered the deployment process by first copying the Web.config from wwwroot to the repository before running the default deploy.cmd. This was simple - this is what you do:
Create a .deployment file in the root of your repository with the following:
[config]
command = deploy.my.cmd
Create deploy.my.cmd with the following script:
xcopy %DEPLOYMENT_TARGET%\Web.config %DEPLOYMENT_SOURCE%\ /Y
deploy.cmd
Now, I have web.config with secrets locally. Git ignores this file. I uploaded the correct web.config to Azure via FTP, and it gets used whenever I deploy.
