Unknown Command - LFTP - ftp

I'm using LFTP on GitLab CI to deploy a set of files. I've got this working nicely on one server that I've set up (a staging server using SFTP). However, on my client's server, I can't seem to connect. The server is set up using FTP and I have to use plain/insecure mode to connect via FileZilla - it does connect and work fine (although I'll be advising them to switch to SFTP in the future).
When I try to do the same using LFTP through the .gitlab-ci.yml file I get the following error:
Unknown command `ftp.example.com'.
mirror: Not connected
ERROR: Build failed: exit code 1
I suspect that this is because of using plain FTP, but I've tried changing hosts, putting ftp:// in front of the host, and a few other commands using set, but I'm having no luck.
Here's (an edited version of) my .gitlab-ci.yml file:
stages:
  - build-staging
  - build-production

variables:
  EXCLUDE: "--exclude '.htaccess' --exclude-glob .git* --exclude '.git/' --exclude 'wp-config.php'"
  SOURCE_DIR: "./"
  # STAGING
  DEST_DIR_STAGING: "/"
  HOST_STAGING: "sftp://123.456.789"
  USERNAME_STAGING: "user"
  PASSWORD_STAGING: "password"
  # PRODUCTION
  DEST_DIR_PROD: "/"
  HOST_PROD: "ftp.example.com"
  USERNAME_PROD: "user"
  PASSWORD_PROD: "password"

job1:
  stage: build-staging
  environment: staging
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; set sftp:auto-confirm yes; open -u $USERNAME_STAGING,$PASSWORD_STAGING $HOST_STAGING; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_STAGING"
  only:
    - staging
  tags:
    - 2gb

job2:
  stage: build-production
  environment: production
  when: manual
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME_PROD,$PASSWORD_PROD $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"
  only:
    - production
  tags:
    - 2gb
Any help would be great, thanks!

This was due to a special character in the password - my password ended with &, which caused lftp to expect a different command. To fix this, I removed the quotes and escaped the & with a backslash, like so:
PASSWORD_PROD: password\&
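If the password has to stay quoted in the YAML, another option (an untested sketch, reusing the variable names from the file above) is to single-quote the credentials inside the lftp command string, so lftp does not treat the & as a command separator:
lftp -c "set ftp:ssl-allow no; open -u '$USERNAME_PROD','$PASSWORD_PROD' $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"
The shell still expands the variables inside the double quotes; the single quotes survive as literal characters that lftp's own parser treats as quoting (this breaks if the password itself contains a single quote).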

Related

gitlab-ci predefined variable is defined in the script step of the deploy stage but undefined inside a bash script run via 'bash -s'

I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll_out.sh script, but the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH

deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are SSHing to another machine within your CI script, I don't think it would have access to the variables you are echoing in the lines before, because it's a new session on a different machine.
You could try a few different things, though, to achieve what you want:
Send the variables as arguments (not great for sending information to a machine over SSH); see the sketch after this list.
Install a GitLab runner on the host you are trying to SSH to, and tag this runner so it only runs the specific deployment job; then you'll have the variables available on the host.
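For the first option, a minimal sketch (untested, using the names from the question) that passes the branch as a positional argument which roll_out.sh could read as $1:
ssh -o StrictHostKeyChecking=no "$SERVER_USER@$DOMAIN" "bash -s -- '$CI_COMMIT_BRANCH'" < ./script/roll_out.sh
# inside roll_out.sh: branch="$1"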
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote. But it seems those three variables are some sort of exception.
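A related hardening note (my addition, not from the thread): if the values can contain spaces or shell metacharacters, bash's printf %q can quote them before they are embedded in the remote command line:
vars=$(printf 'CI_REGISTRY_IMAGE=%q CI_COMMIT_BRANCH=%q' "$CI_REGISTRY_IMAGE" "$CI_COMMIT_BRANCH")
ssh -o StrictHostKeyChecking=no "$SERVER_USER@$DOMAIN" "$vars bash -s" < ./script/roll_out_tagged.sh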

How to write a Bitbucket pipeline correctly with rsync?

I am writing a Bitbucket pipeline to deploy my Angular project to an EC2 instance. This is my pipeline using rsync.
image: node:12.18.3

pipelines:
  branches:
    dev:
      - step:
          name: Build Test Environment
          caches:
            - node
          script:
            - npm install
            - npm run build-qa
          artifacts:
            - dist/qa/**
          deployment: test
      - step:
          name: Deploy
          trigger: manual
          script:
            - apt-get update && apt-get install -y rsync
            - ssh-keyscan -H $SERVER >> ~/.ssh/known_hosts
            - cd $BITBUCKET_CLONE_DIR/dist/qa
            - ls
            - rsync -v -e ssh . $SSH_USER@$SERVER:/var/www/html/myproject
            - echo "Deployment is done...!"
But this is giving me this error.
+ rsync -v -e ssh . $SSH_USER@$SERVER:/var/www/html/myproject
skipping directory .
rsync: link_stat "/opt/atlassian/pipelines/agent/build/dist/qa/$SSH_USER#myip" failed: No such file or directory (2)
rsync: link_stat "/opt/atlassian/pipelines/agent/build/dist/qa/ecdsa-sha2-nistp256" failed: No such file or directory (2)
rsync: change_dir#3 "/opt/atlassian/pipelines/agent/build/dist/qa//AAAAE2VjZHNhLXNtYTItbmlzdHAyNTwAAqAIbmlzdHAyNsYAAABBBGqKvzLI7IolhgM1ZEfol3VuJX4CX6jzqSyM6AzUgPbpyERywu/7U/SioMc/SLeJyfhYnWAJVApt8oOsqIjLqDg=:/var/www/html/myproject" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(713) [Receiver=3.1.2]
I tried a lot to find a solution to this. I even tried the rsync-deploy pipe, but it gives the same error as above. Can someone help me write this pipeline correctly to achieve my requirement?
Seems similar to the issue here: https://serverfault.com/questions/363555/why-is-rsync-skipping-the-main-directory
You probably have folders inside that you want to recursively include in the transfer, so you need to change the source to ./ so rsync understands it is a folder.
And as a bonus: you can/should just run ls with the path, and the same with rsync, instead of cd-ing into the directory. Remember the trailing /.
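Putting those points together, the deploy step might look like this (a sketch, untested, with the variable names from the question; -a implies a recursive transfer, and the trailing slash on the source copies the directory's contents rather than the directory itself):
rsync -av -e ssh "$BITBUCKET_CLONE_DIR/dist/qa/" "$SSH_USER@$SERVER:/var/www/html/myproject"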

How do you execute commands over SSH on Cloud Build?

I am looking for ways to execute commands on a remote server over SSH, from within Cloud Build.
Below is my current cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - kms
      - decrypt
      - --ciphertext-file=build.pem.encrypted
      - --plaintext-file=build.pem
      - --location=asia-southeast1
      - --keyring=keyring
      - --key=build-key
  - name: 'ubuntu'
    args: ['chmod', '400', './build.pem']
  - name: 'ubuntu'
    args: ['bash', './deploy.bash']
And my deploy.bash looks like this
#! /bin/bash
apt update
apt install -y openssh-client
mkdir ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan -H somedomain.com >> ~/.ssh/known_hosts
ssh -i build.pem -T -v somedomain.com 'bash -s deploy1.bash'
And my deploy1.bash looks like
#! /bin/bash
echo "Hello World!"
echo "It works"
I have been trying out different ways to make it work, but could not.
If anybody could recommend how to make it work, I would really appreciate it.
Currently I am stuck at this step:
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
I managed to resolve my issue.
The issue was actually caused by sshguard, which was blocking the SSH session.
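As an aside (my observation, not part of the fix above): 'bash -s deploy1.bash' only passes the file name as an argument to a bash that then waits for commands on stdin; to run the local deploy1.bash on the remote host, feed it over stdin instead:
ssh -i build.pem -T somedomain.com 'bash -s' < deploy1.bash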

GitLab is not deploying Laravel app to AWS EC2

I am trying to deploy my Laravel app to an AWS EC2 instance. I'm using GitLab for code management and the pipeline process.
Here is my .gitlab-ci.yml file.
# Node docker image on which this would be run
image: node:8.9.0

# This command is run before the actual stages start running
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - npm install

stages:
  - test
  - deploy
  - production

# Production stage
production:
  stage: production
  before_script:
    # generate ssh key
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy.sh
  environment:
    name: production
    url: MY_HOST_IP_ADDRESS
  when: manual

# lint and test are two different jobs in the same stage.
# This allows us to run these two in parallel, making builds faster.
And here is my .gitlab-deploy.sh file
#!/bin/bash
# Get servers list
set -f
string=$DEPLOY_SERVER
array=(${string//,/ })
# Iterate over servers to deploy and pull the last commit
for i in "${!array[@]}"; do
  echo "Deploying information to EC2 and Gitlab"
  echo "Deploy project on server ${array[i]}"
  ssh ubuntu@${array[i]} "cd /var/www/html && git pull origin"
done
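For reference, here is how the comma-split above behaves (an illustration with made-up IP addresses; set -f in the script disables globbing, so the unquoted expansion only word-splits):
DEPLOY_SERVER="203.0.113.10,203.0.113.11"
string=$DEPLOY_SERVER
array=(${string//,/ })   # splits on the commas -> two elements
echo "${#array[@]}"      # prints: 2
echo "${array[0]}"       # prints: 203.0.113.10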
When I push my code, the pipeline processes fine (the screenshot of the passing pipeline is omitted here).
After a successful run, when I list the /var/www/html directory it is still empty. I am using Apache2 on Ubuntu. I want to deploy my code directly to the AWS EC2 instance.
Thanks in advance!

./deploy.sh not working on GitLab CI

My problem is that the bash script I created gets this error on GitLab: "/bin/sh: eval: line 88: ./deploy.sh: not found". Below is my .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest

variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true

production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And here is my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
# commands go here
ENDSSH
All I want is to execute deploy.sh after the docker push, but unfortunately I get that error about /bin/bash.
I really need your help, guys. I will be thankful if you can solve my problem with the "/bin/sh: eval: line 88: ./deploy.sh: not found" error.
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as the script interpreter, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error. So rewrite the first line of deploy.sh to
#!/bin/sh
After fixing that you will run into the problem that the previous answer is addressing: ssh is not installed either. You will need to fix that too!
docker:latest is based on Alpine Linux, which is very minimalistic and does not have a lot installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after adding bash:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
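For completeness, the two fixes combined in one before_script line for the Alpine-based docker image (a sketch; both are standard Alpine package names):
apk update && apk add bash openssh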
