GitLab CI .yml file and connection with Web server - continuous-integration

I have set up GitLab on a local Ubuntu server. I also managed to set up GitLab CI with runners, and now I am struggling a bit with them. We have a PHP project that a couple of people are working on, and we have set up a .gitlab-ci.yml file to deploy files to a web server (which is on the same local server, just under a different folder).
The main problem is that GitLab CI (the runner, basically) deploys all files every time, not just the ones that were pushed.
We would like an option to deploy only the files that have changed.
The current .yml file looks like this:
pages:
  stage: deploy
  script:
    - mkdir -p /opt/lampp/htdocs/web/wp2/
    - mkdir -p .public
    - yes | cp -rf * .public /opt/lampp/htdocs/web/wp2/
  only:
    - push
So, am I missing something huge here, or is there a way to deploy only the files that were pushed to the repository? The runner is set to react on every push.
Thank you in advance for your kind replies.
cheers!

Now, after some days and invested hours, a few notes did the trick. With this script:
pages:
  stage: deploy
  script:
    - mkdir -p /opt/lampp/htdocs/web/wp2/
    - mkdir -p .public
    - cp -rfu * .public /opt/lampp/htdocs/web/wp2/
I have managed to achieve what I needed. The "-rfu" part was the key: the -u flag makes cp replace a file only if the source is newer than the destination (the web server, in my case) or the destination file is missing.
So, this worked for me in the .yml file, and even though CI Lint reports a syntax error, the runner completes successfully. I hope someone will find this useful :)
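If CI Lint still complains, my guess (not confirmed in the thread) is the only: value: as far as I know, the keyword GitLab recognizes for push events is "pushes", not "push". A minimal sketch of the job written so the linter should accept it:

pages:
  stage: deploy
  script:
    - mkdir -p /opt/lampp/htdocs/web/wp2/
    - mkdir -p .public
    - cp -rfu * .public /opt/lampp/htdocs/web/wp2/
  only:
    - pushes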

Related

GitLab CI pipeline not triggered

I have written this .yml file for GitLab CI/CD. There is a shared runner configured and running. I am doing this for the first time and I am not sure where I am going wrong. The AngularJS project in the repo has a gulp build file and works perfectly on my local machine. This configuration just has to trigger that build on the VM where my runner is present. On commit, the pipeline does not show any job. Let me know what needs to be corrected!
image: docker:latest
cache:
  paths:
    - node_modules/
deploy_stage:
  stage: build
  only:
    - master
  environment: stage
  script:
    - rmdir -rf "build"
    - mkdir "build"
    - cd "build"
    - git init
    - git clone "my url"
    - cd "path of cloned repository"
    - gulp build
What branch are you committing to? Your pipeline is configured to run only for commits on the master branch.
...
only:
  - master
...
If you want jobs to be triggered for other branches as well, remove this restriction from the .gitlab-ci.yml file.
Do not forget to enable shared runners (they may not be enabled by default); the setting can be found on the GitLab project page under Settings -> CI/CD -> Runners.
Update: Did your pipeline triggers ever work for your project?
If not, then I would try configuring a simple pipeline just to test whether triggers work at all:
test_simple_job:
  script:
    - echo I should execute for any pipeline trigger.
I solved the problem by renaming .gitlab-ci.yaml to .gitlab-ci.yml.
I just wanted to add that I ran into a similar issue. I was committing my code and not seeing the pipeline trigger at all. There was also no error message in GitLab or in VS Code, and it had run perfectly before. My problem was that I had made some recent edits to my YAML that were invalid. I reverted to a known-valid YAML configuration, and it worked again and passed.
I also had this issue. I thought I would document the cause, in the hopes it may help someone (although this is not strictly an answer for the original question because my deploy script is more complex).
So in my case, the reason was that I had multiple jobs with the same name in my .gitlab-ci.yml. The later one basically rendered the earlier one invisible.
# This job will never run:
deploy_my_stuff:
  script:
    - do something for job one

# This job overwrites the above.
deploy_my_stuff:
  script:
    - do something for job two
Totally obvious... after I discovered the mistake.
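The fix is simply to give each job a unique name; a minimal sketch (the names here are made up):

deploy_my_stuff_one:
  script:
    - do something for job one

deploy_my_stuff_two:
  script:
    - do something for job two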

CodeBuild workflow with environment variables

I have a monolithic GitHub project containing multiple applications that I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to redeploy the other. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things, but decided that the only viable approach is to perform this deploy step manually within my own 03_build.sh script, because there seems to be no way to branch like this within CodeBuild's deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets built, so that if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working locally by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild, apparently because of permission issues, i.e. I don't seem to be able to update environment variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic, and I'm trying to pass variables between the scripts: install -> prebuild -> build.
To give one example, here's 01_install.sh, where I diff each project's version to determine whether it needs to be updated (excuse any minor errors in the bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Run if the API version was changed in the pre-push (this is just a sample; I'll likely
# end up storing the version & previous version within the package.json).
# Note: diff exits non-zero when the files differ, so "! diff" means "version changed".
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Run if the REACTAPP version number has changed (similar to above):
...
Then, in the next stage (02_prebuild.sh), I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script, I use the BUILD_API variable to build into the dist folder, and then deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export environment variables. I'm wondering whether anyone knows how to update environment variables from within the build process itself, or whether anyone has a better approach to achieving my goal (a conditional/variable build process on CodeBuild).
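One possible explanation, offered as an assumption rather than anything confirmed above: each "sh ./scripts/01_install.sh" command starts a child shell, so variables exported inside the script die with that child process and never reach the later commands. Since buildspec version 0.2 runs the build commands in the same shell instance, sourcing the scripts instead of executing them should let the exports survive into later commands; a sketch:

phases:
  install:
    commands:
      # "." (source) runs the script in the current shell, so its exports persist
      - . ./scripts/01_install.sh
  pre_build:
    commands:
      - . ./scripts/02_prebuild.sh
  build:
    commands:
      - . ./scripts/03_build.sh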
EDIT:
So, an approach that I've managed to get working: instead of using environment variables, I'm creating new files with specific names (using fs) and then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly, with some automatic cleanup.
I won't edit the original question, as it's still an issue and I'd like to know how/if other people have solved this. I'm still playing around with how to actually use the eb deploy and S3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
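For illustration, a minimal sketch of that flag-file idea in plain bash (the file names are made up; the original uses Node's fs, but the principle is the same):

# 01_install.sh -- record the decision in a flag file instead of an env var
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  touch .api_changed          # flag file: the API version changed
fi

# 02_prebuild.sh -- later scripts just check whether the flag file exists
if [ -f .api_changed ]; then
  echo "Running tests on API project..."
  (cd ./api && npm run tests)
fi

# last script: clean up the flag files
rm -f .api_changed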
Source control platforms like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified by the commit, so the Lambda function can process it to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.

AWS CodeStar with Spring Boot, issues with automated WAR deployment

I have a multi-module Maven project I want to build and deploy with AWS CodeStar on EC2. This almost works like a charm now.
The local build works, and the app can be accessed on port 5000.
The CodeStar build is OK, the upload is OK, and the deployment seems to be OK.
But I cannot reach the app on port 80 (404 Not Found). SERVER_PORT is set to 5000, which should be translated to port 80 on AWS.
Now the funny thing about this story: if I deploy the WAR manually (built locally or downloaded from CodeBuild), both can be accessed on AWS on port 80. But the one uploaded and deployed by CodeStar cannot.
I'm pretty much out of ideas. The logs don't show anything usable. I'm willing to provide them, though.
Here is the buildspec.yml. I'm still convinced that there is something wrong in there...
version: 0.2
phases:
  install:
    commands:
      - echo Entering install phase...
      - wget http://mirror.olnevhost.net/pub/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
      - tar xzvf apache-maven-3.3.9-bin.tar.gz -C /opt/
      - export PATH=/opt/apache-maven-3.3.9/bin:$PATH
  build:
    commands:
      - echo Entering BUILD phase...
      - echo Build started on `date`
      - mvn install
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - 'jweb-web/target/*.war'

Bitbucket Pipelines - is it possible to download an additional file to the project via curl?

We have separate builds for the frontend and backend of the application, and we need to pull the frontend's dist build into the backend project during the build. During the build, curl cannot write to the desired location.
In detail, we are using Spring Boot as the backend, serving an Angular 2 frontend, so we need to pull the frontend files into the src/main/resources/static folder.
image: maven:3.3.9
pipelines:
  default:
    - step:
        script:
          - curl -s -L -v --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
          - tar -xf -C src/main/resources/static --directory src/main/resources/static release_latest.tar.gz
          - mvn package -X
As a result, the build fails with the following output from curl:
* Failed writing body (0 != 16360)
Note: I've tried the same with the maven-exec-plugin; the result was the same. The commands work on a local machine, naturally.
I would try running these commands from a local docker run of the image you're specifying (maven:3.3.9). I found that to be the most helpful way to debug things that behave differently in Pipelines vs. in my local environment.
To your specific question: yes, you can download external content during a Pipelines run. I have a pipeline that clones other repos from Bitbucket via HTTP into the running container.
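For example, something along these lines reproduces the Pipelines environment locally (a sketch; the mount path just assumes you run it from the repo root):

# Start a throwaway container from the same image Pipelines uses, with the
# repo mounted at /workdir, then run the pipeline's script commands by hand.
docker run -it --rm -v "$(pwd)":/workdir -w /workdir maven:3.3.9 bash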

Simple docker deployment tactics

Hey guys, I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy for building/deploying an image.
I have a perfect dev setup where I've created a file volume tied to my app:
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a Node.js/Express app, and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for dev and one for dist? If not, B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist', add /dist... etc.?
I'm struggling to figure out how to move this from a local dev environment to a tightened-up, production-ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable ENV: if it is set to "DEV", it starts my development server with debugging turned on; otherwise it launches the production version of the server.
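As a rough sketch of such an entrypoint script (the server commands here are placeholders, not the answerer's actual setup):

#!/bin/sh
# entrypoint.sh -- choose the server mode based on the ENV variable
if [ "$ENV" = "DEV" ]; then
  # development: start the dev server with debugging turned on
  exec npm run dev
else
  # production: run the built application
  exec node dist/server.js
fi

This would be wired up with ENTRYPOINT ["./entrypoint.sh"] in the Dockerfile and -e ENV=DEV passed to docker run during development.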
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then use your CI server to build the image (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
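On the production box that boils down to something like the following (the image name is made up; the port mapping mirrors the docker run example from the question):

# pull the image that CI (or the Docker Hub automated build) pushed
docker pull myregistry/pug_web:latest
# run it detached, without the development volume mount
docker run -d -p 80:3000 --name pug_web myregistry/pug_web:latest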

Resources