Unable to connect to a target server via SSH from a GitLab pipeline?

I have set up a .gitlab-ci.yml, but I am unable to log in to the production server from GitLab. I have set my server's private and public key variables in GitLab, yet the pipeline still fails with a timeout error.
job1:
  stage: build1
  variables:
    SSH_PUBLIC_key: "$SSH_PUBLIC_key"
    SSH_PRIVATE_KEY: "$SSH_PRIVATE_KEY"
  script:
    - mvn package
    - scp "myjar" root@"myIP":/tmp
  artifacts:
    paths:
      - server

A timeout error occurs when the instance (in your case, the production instance) is not reachable from GitLab (whether it is hosted on a VM, Kubernetes, etc.). Check whether you can perform a telnet/ssh connection manually from the VM hosting the GitLab runner.
Replace myIP with the proper value and see if that helps:
telnet <myIP> 22
ssh <myIP>
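If manual access works but the pipeline still times out, the same test can be run from inside the pipeline itself. A minimal sketch of a debug job, assuming a Debian-based job image and a hypothetical PROD_IP CI/CD variable holding the server address:

check_connectivity:
  stage: build1
  script:
    # Debian-based image assumed; install netcat for the port check
    - apt-get update -qq && apt-get install -y -qq netcat-openbsd
    # -z: connect only, -v: verbose, -w 5: five-second timeout; the job fails if port 22 is unreachable
    - nc -zv -w 5 "$PROD_IP" 22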

Related

Spring Boot (non-Docker application) with GitHub Actions - Unable to run jar file

I'm trying to deploy and run a Java Spring Boot application on an AWS EC2 instance using GitHub Actions. The application properties file of the Spring Boot application points to environment variables which are present on the AWS EC2 instance. However, these environment variables are not available when the GitHub Action runs, so the execution of the jar fails with a NullPointerException.
What is the correct way to deploy a Spring Boot (non-Docker) application to a self-hosted EC2 server? Can I do it without AWS CodePipeline or AWS Elastic Beanstalk?
How do we read EC2 instance environment variables while using GitHub Actions?
Thanks.
Sample Workflow file:
jobs:
  build:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: "11"
          distribution: "temurin"
          cache: maven
      - name: Build with Maven
        run: mvn clean -B package
  deploy:
    runs-on: [self-hosted]
    needs: build
    steps:
      - name: Run Script file
        working-directory: ./
        run: |
          chmod +x ./script.sh
          ./script.sh
        shell: bash

And script.sh, which tries to print the env variables inside EC2:
#!/bin/bash
whoami
printenv
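A likely cause, offered as a hedged note: when a self-hosted runner is started as a service, jobs do not run in a login shell, so variables exported in ~/.bashrc or /etc/profile on the EC2 instance are not visible to them. A common workaround is to source the variables explicitly at the top of script.sh; a minimal sketch, assuming they are defined in /etc/environment (adjust the path to wherever they actually live):

#!/bin/bash
set -a                    # auto-export every variable the sourced file defines
source /etc/environment   # assumption: the EC2 env variables live here as KEY=value lines
set +a
whoami
printenv                  # the expected variables should now appear in the output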

GitLab pipeline error with CI/CD for AWS EC2 Debian instance: This job is stuck because you don't have any active runners online

I want to create a CI/CD pipeline between GitLab and an AWS EC2 deployment.
My repository is a Node.js/Express web server project.
And I created this gitlab-ci.yaml:
image: node:latest
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
  - staging
  - openMr
  - production
before_script:
  - apt-get update -qq && apt-get install
Build:
  stage: build
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build
Test:
  stage: test
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install --frozen-lockfile
  script:
    - npm run test
Deploy to Production:
  stage: production
  tags:
    - node
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash ./gitlab-deploy/.gitlab-deploy.prod.sh
  environment:
    name: production
    url: http://ec2-url.compute.amazonaws.com:81
When I push a new commit, the pipeline fails on the build step and I get this warning:
This job is stuck because you don't have any active runners online or
available with any of these tags assigned to them: node
I checked my runner under GitLab Settings > CI/CD, and after that I checked the server:

admin@ip-111.222.222.111:~$ gitlab-runner status
Runtime platform    arch=amd64 os=linux pid=18787 revision=98daeee0 version=14.7.0
FATAL: The --user is not supported for non-root users
You need to remove the tag node from your jobs. Runner tags define which runner should pick up your jobs (https://docs.gitlab.com/ee/ci/runners/configure_runners.html#use-tags-to-control-which-jobs-a-runner-can-run). Since no available runner supports the tag node, your job gets stuck.
It doesn't look like your pipeline has any special requirements, so you can simply remove the tag and let any runner pick up the jobs.
The runner visible in your screenshot supports the tag shop_service_runner, so another option is to change the tag node to shop_service_runner, which would let that runner (and every runner with the same tag) pick up this job.
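For example, a minimal sketch of the adjusted Build job (the other jobs would change the same way), assuming you keep the runner from the screenshot:

Build:
  stage: build
  tags:
    - shop_service_runner   # was: node; must match a tag of an online runner, or be removed entirely
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build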

How to deploy a Maven project on AWS with GitLab CI/CD

I'm trying to deploy a Java Maven project on AWS with GitLab CI/CD.
This is my .gitlab-ci.yml:
image: maven:3-jdk-8
services:
  - docker:dind
stages:
  - test
  - build
  - deploy
maven-test:
  stage: test
  script:
    - echo "Test stage"
    - mvn clean validate compile test -B
maven-build:
  stage: build
  script:
    - echo "Build stage"
    - mvn install -B -DskipTests
  artifacts:
    paths:
      - ./target/*.jar
maven-deploy:
  stage: deploy
  script:
    - echo "Deploy stage"
    - scp -v -o StrictHostKeyChecking=no -i "mykey.pem" ./target/*.jar ubuntu@xxxxxxx.com:*.jar
  when: manual
If I execute the scp command in a terminal on my PC, the jar is uploaded to the AWS EC2 instance, while in GitLab I get errors and the jar is not uploaded.
This is my first approach with GitLab CI and AWS, so can someone explain step by step what I need to do to deploy the project to an AWS EC2 instance with GitLab CI?
Thanks!
Since you have not posted much about your problem, nor the actual error, I will just suggest a few things to look at:
From a GitLab perspective:
Are you sure that "mykey.pem" is available within the repository when the maven-deploy job runs on the gitlab-runner?
Are you sure you are using a Docker gitlab-runner? If not, the image: directive has no effect, and the runner might not have mvn/scp available locally.
You might want to look into the dependencies directive to ensure the artifact is available in the deploy job. This should happen by default!
From an AWS perspective:
Make sure that the ubuntu target machine/server has port 22 open to the EC2 machine running the gitlab-runner.
Edit:
If the error you are receiving concerns the pem file's permissions, take a look at this resolution for the AWS EC2 pem file issue. Another similar resolution is here.
It seems that adding chmod 400 mykey.pem before the scp might fix your problem.
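Putting that together, a minimal sketch of the deploy job with the permission fix applied (the key name and host are the question's placeholders; the remote destination directory ~/ is an assumption):

maven-deploy:
  stage: deploy
  script:
    - echo "Deploy stage"
    - chmod 400 mykey.pem   # ssh/scp refuse private keys whose permissions are too open
    - scp -v -o StrictHostKeyChecking=no -i "mykey.pem" ./target/*.jar ubuntu@xxxxxxx.com:~/
  when: manual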

Remote Postgres connection on CircleCI build to run Laravel phpunit tests

We are using Laravel 5.6, PostgreSQL and CircleCI in our API production environment, and we are still trying to implement some key unit tests to run before a commit is merged to master.
When trying to configure remote PostgreSQL database access on CircleCI, we ran into the following problem:
our .circleci/config.yml is supposed to pull a custom-built image (edunicastro/docker:latest) and run the phpunit tests in the "build" step,
but we are getting the following error message:
PDOException: SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
The problem is that this was supposed to connect to our remote database, but in our production environment the connection is set up through .env and Laravel.
I tried copying the "DB_PGSQL_HOST" key into our config.yml, but nothing changed; it kept trying to connect to 127.0.0.1.
Using the "PGHOST" key instead also had no effect.
This is the relevant "build" part of our config.yml:
version: 2
jobs:
  build:
    docker:
      - image: edunicastro/docker:latest
        environment:
          DB_PGSQL_CONNECTION: <prod_laravel_connection_name>
          DB_PGSQL_HOST: <prod_db_host>
          DB_PGSQL_PORT: 5432
          DB_PGSQL_DATABASE: <prod_db_name>
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "composer.json" }}
            - v1-dependencies-
      - run: composer install -n --prefer-dist
      - run: ./vendor/bin/phpunit
      - save_cache:
          paths:
            - ./vendor
          key: v1-dependencies-{{ checksum "composer.json" }}
Okay, I was missing the command to copy the .env over, right under - checkout:

      - checkout
      - run: cp .env.test .env

Laravel was already configured to use it, so I didn't need to change anything else.
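For reference, a minimal sketch of what the copied file would carry, reusing the keys from the question (the file name comes from the answer; the values are placeholders, not verified):

# .env.test (hypothetical contents)
DB_PGSQL_CONNECTION=<prod_laravel_connection_name>
DB_PGSQL_HOST=<prod_db_host>
DB_PGSQL_PORT=5432
DB_PGSQL_DATABASE=<prod_db_name>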

GitLab CI multiple branches

I have two branches: master and test. When I push to the master branch, my code is deployed to the first server by gitlab-ci. I want to deploy to a different server whenever I push to the test branch. Is this possible using GitLab CI?
master - 10.10.10.1
test - 10.10.10.2
My gitlab-ci.yml:
maven_build:
  script:
    - mvn install
    - /opt/payara41/bin/./asadmin --passwordfile /home/asadminpass --user admin undeploy myApplication-ear-1.0-SNAPSHOT
    - sudo /etc/init.d/glassfish restart
    - /opt/payara41/bin/./asadmin --passwordfile /home/asadminpass --host localhost --user admin deploy --force /home/gitlab-runner/builds/10b25461/0/myapp/myAppPrototype/myApp-ear/target/myApplication-SNAPSHOT.ear
  only:
    - master
You're on the right track with only:.
Simply create two different jobs, one with only: master and one with only: test, and change the script: in each to deploy to the corresponding server.
deploy_master:
  script:
    - <script to deploy to master server>
  only:
    - master
deploy_test:
  script:
    - <script to deploy to test server>
  only:
    - test
only: also accepts a list of several branches at once:

only:
  - dev
  - staging
  - master
If I understand what you are asking, you can do the following.
For master:

Pushing changes:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "master"

For test:

Pushing changes:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "test"

This defines when each job runs based on your branch.
As for how to deploy: in your script section you can add, for master,

- aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
- aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
- aws configure set region $AWS_DEFAULT_REGION

and for test,

- aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_TEST
- aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_TEST
- aws configure set region $AWS_DEFAULT_REGION_TEST

Add all the variables under Settings -> CI/CD -> Variables.
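Put together, a minimal sketch of the two deploy jobs (deploy.sh is a hypothetical helper script taking the target host; the AWS variables are the ones defined in Settings -> CI/CD -> Variables):

deploy_master:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "master"
  script:
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set region $AWS_DEFAULT_REGION
    - ./deploy.sh 10.10.10.1   # hypothetical helper; deploys to the master server

deploy_test:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "test"
  script:
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_TEST
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_TEST
    - aws configure set region $AWS_DEFAULT_REGION_TEST
    - ./deploy.sh 10.10.10.2   # hypothetical helper; deploys to the test server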
