GitLab is not deploying Laravel app to AWS EC2 - laravel

I am trying to deploy my Laravel app to an AWS EC2 instance. I'm using GitLab for code management and the pipeline process.
Here is my .gitlab-ci.yml file.
# Node docker image on which this would be run
image: node:8.9.0

# This command is run before the actual stages start running
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - npm install

stages:
  - test
  - deploy
  - production

# Production stage
production:
  stage: production
  before_script:
    # Generate the SSH key
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy.sh
  environment:
    name: production
    url: MY_HOST_IP_ADDRESS
  when: manual

# lint and test are two different jobs in the same stage.
# This allows us to run these two in parallel, making the build faster.
And here is my .gitlab-deploy.sh file:
#!/bin/bash
# Get the server list; DEPLOY_SERVER is a comma-separated list of hosts
set -f
string=$DEPLOY_SERVER
array=(${string//,/ })

# Iterate over the servers, deploy, and pull the latest commit on each
for i in "${!array[@]}"; do
    echo "Deploying information to EC2 and GitLab"
    echo "Deploy project on server ${array[i]}"
    ssh ubuntu@${array[i]} "cd /var/www/html && git pull origin"
done
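For context, the ${string//,/ } expansion splits on commas, so $DEPLOY_SERVER is expected to hold a comma-separated host list. A minimal sketch with hypothetical values:

#!/bin/bash
# hypothetical value for illustration only
export DEPLOY_SERVER="18.203.0.10,18.203.0.11"
string=$DEPLOY_SERVER
array=(${string//,/ })
echo "${array[@]}"   # prints: 18.203.0.10 18.203.0.11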
When I push my code, the pipeline completes successfully, as you can see in the image below.
But after a successful run, when I list the /var/www/html directory it is still empty. I am using Apache2 on Ubuntu, and I want to deploy my code directly to the AWS EC2 instance.
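For reference, the same command the script runs can be executed by hand with verbose output to check whether ssh access and the git pull succeed outside the pipeline (the host IP below is a placeholder):

# run the deploy command manually with verbose ssh output to see
# whether key authentication or the git pull itself is failing
ssh -v ubuntu@<server-ip> "cd /var/www/html && git pull origin"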
Thanks in advance!

Related

gitlab-ci predefined variable is defined in the script step of the deploy stage but undefined inside a bash script run via 'bash -s'

I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll-out.sh script, but the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH

deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are sshing to another machine within your CI script, it won't have access to the variables you are echoing in the lines before, because it's a new session on a different machine.
You could try a few different things to achieve what you want:
Send the variables as arguments (not great, since it means sending information to a machine through ssh).
Install a GitLab runner on the host you are trying to ssh to, and tag this runner so it only runs the specific deployment job; then you'll have the variables available on the host.
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote, but it seems those three variables are some sort of exception.
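In general terms, the fix works because KEY=VALUE assignments placed before a command apply only to that command's environment, so the remote bash -s process sees them. A generic sketch (user, host, and variable names are placeholders):

# VAR1 and VAR2 exist only in the environment of the remote bash -s process,
# which then executes the script streamed over stdin
ssh user@host "VAR1=$VAR1 VAR2=$VAR2 bash -s" < ./local_script.sh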

Copying a war file from GitLab to EC2

I am trying to create a CI/CD pipeline to build a war file and deploy it to EC2 from GitLab.
Once the war file is created, I would like to copy it to some folder on the EC2 instance, and from there copy it to the Tomcat server.
The following is the ".gitlab-ci.yml" file.
stages:
  - build
  - deploy

build:
  stage: build
  image: maven:3-jdk-8
  script:
    - mvn install
  artifacts:
    paths:
      - target/

deploy:
  stage: deploy
  before_script:
    # Generate SSH Key
    - mkdir -p ~/.ssh
    - echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
    - bash .gitlab-deploy-ec2.sh
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
But when the above pipeline runs, the scp command in the deploy stage fails with a "permission denied" error.
Any idea on how to solve this?
Error Message:
Running with gitlab-runner 14.5.2 (e91107dd)
on blue-3.shared.runners-manager.gitlab.com/default zxwgkjAP
Resolving secrets
00:00
Preparing the "docker+machine" executor
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
Preparing environment
00:01
Running on runner-zxwgkjap-project-31676452-concurrent-0 via runner-zxwgkjap-shared-1639429231-955193ca...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/te2122/deploytoaws/.git/
Created fresh repository.
Checking out dc27fd6f as master...
Skipping Git submodules setup
Downloading artifacts
00:02
Downloading artifacts for build (1880203000)...
Downloading artifacts from coordinator... ok id=1880203000 responseStatus=200 OK token=9RSALYus
Executing "step_script" stage of the job script
00:02
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
$ mkdir -p ~/.ssh
$ echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
Warning: Permanently added '54.205.169.131' (ECDSA) to the list of known hosts.
scp: /gitlabec2pipeline.war: Permission denied
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
Thank you.
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
AWS keys don't matter here, since you're using an SSH key for authentication when you use ssh (or scp) to connect to the EC2 instance.
If you are sure you're using the correct key value, then your environment variable $EC2_SSH_PRIVATE_KEY is probably not well-formed. Common issues come from newlines in the variable and CRLF line endings (often introduced when copy/pasting the key in the web UI) instead of LF.
To work around this more reliably, you could:
use a file-type variable, then copy the file into ~/.ssh, OR
base64-encode your key before putting it into the CI variable. This avoids any possible issues with newlines and CRLF. Then decode it in the job, placing the value into the key file.
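For example, a minimal sketch of the base64 approach (SSH_KEY_B64 is a hypothetical variable name):

# locally (GNU coreutils): encode the key and paste the output into a CI variable, e.g. SSH_KEY_B64
base64 -w0 ~/.ssh/id_rsa

# in the job: decode it back into the key file
mkdir -p ~/.ssh
echo "$SSH_KEY_B64" | base64 -d > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa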
Update:
It looks like your user does not have permission to write to the destination directory on the server. You need to either give the ec2-user appropriate permissions, or choose a destination directory where the user is able to write files.
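One common workaround (a sketch; the Tomcat webapps path is a hypothetical example) is to scp into the user's home directory, where ec2-user can write, and then move the file with sudo:

# copy to a directory ec2-user owns, then move it with elevated permissions
scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:~/gitlabec2pipeline.war
ssh ec2-user@$EC2_DEPLOY_SERVER "sudo mv ~/gitlabec2pipeline.war /usr/share/tomcat/webapps/"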

How to pull image from Docker hub to EC2 using bitbucket pipeline

I am trying to implement CICD using bitbucket pipelines.
So far I have been able to create the image and push it to Docker Hub. That part seems straightforward, and the internet is full of tutorials.
But I didn't find anything on pulling the image from an EC2 instance and running it.
I have this bitbucket-pipelines.yml file:
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
And I have this script, but I don't know where to put it:
#!/bin/bash
sudo docker ps
echo 'Logging in to Docker'
docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD # How can I set env variables here?
echo 'Fetching latest image'
sudo docker pull user/vinimayapi:latest
echo 'Stopping current container'
sudo docker stop cont_docker_app_test
echo 'Removing old container'
sudo docker rm cont_docker_app_test_old
echo 'Renaming stopped container'
sudo docker rename cont_docker_app_test cont_docker_app_test_old
echo 'Starting new container'
sudo docker run -d --name cont_docker_app_test -p 443:3333 -p 8001:8001 --link my-mongo-testing:my-mongo-testing user/vinimayapi:latest
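As for the inline question about environment variables: one option is to export them on the EC2 host before the script runs, for example from a profile file. A sketch with hypothetical values (--password-stdin is a standard Docker CLI flag that also keeps the password out of ps output):

# on the EC2 host, e.g. in ~/.profile or a sourced env file (hypothetical values)
export DOCKER_HUB_USERNAME="youruser"
export DOCKER_HUB_PASSWORD="yourpassword"

# then log in without putting the password on the command line
echo "$DOCKER_HUB_PASSWORD" | sudo docker login --username "$DOCKER_HUB_USERNAME" --password-stdin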
Any help would be really appreciated; I've been trying to create this pipeline for days without success.
Add an additional step to your pipeline:
- pipe: "atlassian/ssh-run:0.2.4"
  variables:
    SSH_USER: user
    SERVER: ip_server
    SSH_KEY: sshkey
    MODE: script
    COMMAND: script.sh
It should look like this:
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
          - pipe: "atlassian/ssh-run:0.2.4"
            variables:
              SSH_USER: user
              SERVER: ip_server
              SSH_KEY: sshkey
              MODE: script
              COMMAND: script.sh
In that case, the script.sh file is located in the same directory as bitbucket-pipelines.yml.

./deploy.sh not working on gitlab ci

My problem is that the bash script I created fails with the error "/bin/sh: eval: line 88: ./deploy.sh: not found" on GitLab. Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest

variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true

production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
# commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but unfortunately I get the error above.
I really need your help. I would be thankful if you could solve this GitLab CI bash script error: "/bin/sh: eval: line 88: ./deploy.sh: not found".
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as the script executor, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error. So rewrite the first line of deploy.sh to:
#!/bin/sh
After fixing that, you will run into the problem that the previous answer addresses: ssh is not installed either. You will need to fix that too!
docker:latest is based on Alpine Linux, which is very minimalistic and does not have a lot installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after also adding bash:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
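To confirm both fixes locally before pushing, the job's environment can be reproduced with Docker (a quick check, assuming Docker is installed locally):

# install bash and the ssh client into the same image the job uses,
# then verify both binaries are present
docker run --rm docker:latest sh -c 'apk add --no-cache bash openssh && which bash && which ssh'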

Unknown Command - LFTP

I'm using LFTP on GitLab CI to deploy a set of files. I've got this working nicely on one server that I've set up (a staging server using SFTP). However, on my client's server I can't seem to connect. The server is set up using FTP, and I have to use plain/insecure mode to connect via FileZilla; it does connect and work fine (although I'll be giving some advice to use SFTP in the future).
When I try to do the same using LFTP through the .gitlab-ci.yml file, I get the following error:
Unknown command `ftp.example.com'.
mirror: Not connected
ERROR: Build failed: exit code 1
I suspect that this is because of using plain FTP, but I've tried changing hosts, putting ftp:// in front of the host, and a few other commands using set, with no luck.
Here's (an edited version of) my .gitlab-ci.yml file:
stages:
  - build-staging
  - build-production

variables:
  EXCLUDE: "--exclude '.htaccess' --exclude-glob .git* --exclude '.git/' --exclude 'wp-config.php'"
  SOURCE_DIR: "./"
  # STAGING
  DEST_DIR_STAGING: "/"
  HOST_STAGING: "sftp://123.456.789"
  USERNAME_STAGING: "user"
  PASSWORD_STAGING: "password"
  # PRODUCTION
  DEST_DIR_PROD: "/"
  HOST_PROD: "ftp.example.com"
  USERNAME_PROD: "user"
  PASSWORD_PROD: "password"

job1:
  stage: build-staging
  environment: staging
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; set sftp:auto-confirm yes; open -u $USERNAME_STAGING,$PASSWORD_STAGING $HOST_STAGING; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_STAGING"
  only:
    - staging
  tags:
    - 2gb

job2:
  stage: build-production
  environment: production
  when: manual
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME_PROD,$PASSWORD_PROD $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"
  only:
    - production
  tags:
    - 2gb
Any help would be great, thanks!
This was due to a special character in the password: my password ended with &, which caused lftp to expect a different command. To fix this, I removed the quotes and escaped the & with a backslash, like so:
PASSWORD_PROD: password\&
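lftp's command parser also honors quoting, so another option may be to wrap the expanded password in escaped double quotes inside the command string (a sketch, not tested against every lftp version):

# quote the password inside the command string passed to lftp so that
# characters such as & are not parsed as lftp syntax
lftp -c "set ftp:ssl-allow no; open -u $USERNAME_PROD,\"$PASSWORD_PROD\" $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"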
