Deploying to FTP server in GitHub Actions does not work - ftp

As the title says, deploying to an FTP server isn't working for me from a GitHub Action. I've tried a couple of actions to accomplish this (FTP-Deploy and ftp-action), but FTP-Deploy just kept running with sporadic
curl: (7) Failed to connect to ftpservername.com port 21: Connection timed out
messages, and ftp-action kept running without any output. Note: the server is available; I connected and transferred some files using FileZilla without any issues.
After that I tried lftp. This is the command I used on a local Ubuntu machine:
lftp -c "open -u username,password ftpservername.com; mirror -R locfolder remote/remotefolder"
The file transfer worked, but the same command used in a GitHub Action produced this output:
---- Connecting to ftpservername.com (123.456.789.123) port 21
mkdir `remote/remotefolder' [Connecting...]
**** Socket error (Connection timed out) - reconnecting
---- Closing control socket
---- Connecting to ftpservername.com (123.456.789.123) port 21
I tried setting both ftp:ssl-allow and ssl:verify-certificate to false, but that did not help. Also, I do not have access to the server, so I can't check the server logs.
This is the workflow file:
name: Test
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pip
        run: python -m pip install --upgrade pip
      - name: Install packages
        run: |
          sudo apt install lftp
          sudo apt install expect
      .
      .
      .
      - name: FTP Deploy
        run: |
          echo Starting...
          unbuffer lftp -c "debug; set ftp:ssl-allow false; set ssl:verify-certificate false; open -u username,${{ secrets.PASSWORD }} ftpservername.com; mirror -R -v locfolder remote/remotefolder"
          echo Done transferring files.
Any help is appreciated, thank you!

Found the issue: the hosting service was blocking the IP address (as it was an IP address outside of the country). After setting up a self-hosted runner and whitelisting the runner's IP, everything works fine.
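For reference, the workflow change after registering a self-hosted runner is just the runs-on key; a minimal sketch (self-hosted is the default label GitHub assigns to such runners, and the rest of the workflow stays as above):

jobs:
  build:
    runs-on: self-hosted   # was: ubuntu-latest; this runner's fixed IP is the one to whitelist
    steps:
      # ...same steps as in the workflow above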

Related

Run Ansible playbook from Cloud-Init

I have been learning Cloud-Init for several days to do an automatic deployment. To achieve this, and to apply certain configurations, I am using Ansible playbooks. The problem I have found is that I am not able to make the playbook run directly on the operating system that is being installed.
Here is the user-data file I am using.
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: hostname
    password: "$6$cOciYeIErEet80Rv$YX8qt6vizXgcUkgIPSKD1qNZNxe77tSWOY3k/0.i8D8EpApaGNuyucxJvONmZiRj4rVM3L6EE4sLKcnzYVcMj/ "
    username: ubuntu
  storage:
    layout:
      name: direct
  locale: es_ES
  timezone: "Europe/Madrid"
  keyboard:
    layout: es
  packages:
    - sshpass
    - ansible
    - git
  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -u ubuntu -e "ansible_password=ubuntu" -e "ansible_become_pass=ubuntu"
PS: I am using Ubuntu Server 22.04, the Ansible command is temporary and only for testing, and I know that I have to change the identity fields.
If you want to configure localhost, it's better to use the local transport (which is -c local on the command line).
Basically, change the ansible-playbook call to:
ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local
This bypasses SSH entirely and runs the playbook locally.
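Applied to the user-data above, the late-commands section would then look something like this (a sketch; the repository path and inventory file are carried over unchanged from the question):

  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    # use the local connection plugin instead of SSH
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local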

Bitbucket fatal: Can't access remote

I tried to set up an SFTP connection between Bitbucket and a RunCloud server. RunCloud only supports SFTP connections. Bitbucket config:
image: php:7.3
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init --user $SFTP_username --passwd $FTP_password sftp://runcloud@1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
The connection always fails with the error fatal: Can't access remote 'sftp://1.111.111.11', exiting...
I tried different SFTP path combinations, but the result is always the same:
sftp://1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
sftp://mywebsite/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
My website
Root Path: /home/runcloud/webapps/mywebsite
Public Path: /home/runcloud/webapps/mywebsite
RunCloud has a different setup than "normal" FTP. For example, to connect with FileZilla the host is my server IP, and to get to my website I have to navigate to /webapps/mywebsite.
Not sure what I'm doing wrong. Is my SFTP path incorrect?

GitLab CI gives curl: (7) Failed to connect to localhost port 8090: Connection refused

The issue is that I get the curl: (7) Failed to connect to localhost port 8090: Connection refused error in GitLab CI, but this does not happen on my laptop, where I get the source HTML of the webpage. The .gitlab-ci.yml below is a simple reproduction of the issue. I have spent numerous hours trying to figure this out - I'm sure someone else has too.
Aside: this isn't the same as a similar question, since that one doesn't offer a solution.
GitLab repo: https://gitlab.com/mudassir-ahmed/wordpress-testing-with-gitlab-ci/tree/another-approach but the only file it contains is the .gitlab-ci.yml shown below...
image: docker:stable
variables:
  # When using dind service we need to instruct docker, to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # Note that if you're using the Kubernetes executor, the variable should be set to
  # tcp://localhost:2375/ because of how the Kubernetes executor connects services
  # to the job container
  # DOCKER_HOST: tcp://localhost:2375/
  #
  # For non-Kubernetes executors, we use tcp://docker:2375/
  DOCKER_HOST: tcp://docker:2375/
  # When using dind, it's wise to use the overlayfs driver for
  # improved performance.
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
before_script:
  - docker info
build:
  stage: build
  script:
    - apk update
    - apk add curl
    #- hostname -i
    - docker container ls
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Referring to the answer from here: gitlab-ci.yml & docker-in-docker (dind) & curl returns connection refused on shared runner
There are two ways to fix this:
Option 1: Replace localhost in curl localhost:8090 with docker, like this: curl docker:8090
Option 2: Give the dind service the alias localhost:
services:
  - name: docker:dind
    alias: localhost
and keep the script lines unchanged:
docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Assuming that is your code, I think you should add some timeout between docker run and curl.
I had similar issues some time ago: after starting a docker container on the GitLab runner machine I wasn't able to access my URL either. When I added a command that checks for about one minute whether the container is running, it resolved my problem.
The check itself is docker inspect -f '{{.State.Running}}' <containerName>, but in order to run it you have to add some additional script.

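A minimal sketch of that kind of check, inserted into the build job's script between docker run and curl (the container name nginx-server comes from the question; the roughly one-minute limit mirrors the answer above):

    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    # poll the container state for up to ~60 seconds before calling curl
    - |
      for i in $(seq 1 60); do
        [ "$(docker inspect -f '{{.State.Running}}' nginx-server 2>/dev/null)" = "true" ] && break
        sleep 1
      done
    - curl docker:8090   # or localhost:8090 when the dind service is aliased as in Option 2
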
How can I install a package on a client machine using Ansible

I am new to Ansible and have set up an Ansible server. I have also set up SSH communication between the server and one client. Now I am able to use a few Ansible modules from the server to make changes on the client; the ping and copy modules are working fine.
But when I try to install a package on the client system from the Ansible server using the yum module, it is not working. I am using the command below to run yum as sudo on my client machine.
Command:
ansible all -m yum -a "name=httpd state=present" -s
This command throws an error that -s is unidentified. Please help me with this.
You need to replace -s with -b.
The sudo option was replaced by the become option, but it does the same thing.
tenhi@somehost:somedir$ ansible localhost -b -m yum -a 'name=mc state=present'
localhost | SUCCESS => {
    "ansible_facts": {
        "pkg_mgr": "apt"
    },
    "cache_update_time": 1557517026,
    "cache_updated": false,
    "changed": false
}
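Applied to the command from the question, that is simply:
ansible all -m yum -a "name=httpd state=present" -b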

Using a VPN for a Travis deploy script

I'm using a corporate install of Travis-CI for CI. So far triggering a build through a commit and using encrypted values work well.
For deployment, however, I need to connect to a server that I can only reach via a VPN tunnel (OpenVPN based).
I'm looking for a sample .travis.yml file that has a VPN connection. So far my file looks like this:
language: java
addons:
  ssh_known_hosts: some.host.in.vpn.org
git:
  depth: 3
before_install:
  - sudo apt-get install -qq rpm
  - openssl aes-256-cbc -K $encrypted_fancynumber_key -iv $encrypted_fancynumber_iv -in supersecret_rsa.enc -out supersecret_rsa -d
before_deploy:
  - eval "$(ssh-agent -s)"
  - chmod 600 $TRAVIS_BUILD_DIR/supersecret_rsa
  - ssh-add $TRAVIS_BUILD_DIR/supersecret_rsa
deploy:
  provider: script
  skip_cleanup: true
  script: rsync -r --delete-after --quiet $TRAVIS_BUILD_DIR/build travisdeploy@some.host.in.vpn.org:/opt/coolapp/war
  on:
    branch: master
The build runs Maven (language: java makes Travis look for a pom.xml) and rsyncs the build directory onto a server. It works nicely without the VPN in between.
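The missing piece the question is asking about would be bringing the tunnel up before the deploy step runs. A minimal sketch, assuming a client.ovpn client configuration is available in the repository (for example decrypted the same way as supersecret_rsa) and that OpenVPN can be installed on the build image; the file name and the 15-second wait are placeholders, not part of the original setup:

before_deploy:
  - sudo apt-get install -qq openvpn
  # client.ovpn is a hypothetical, already-decrypted OpenVPN client config
  - sudo openvpn --config client.ovpn --daemon
  # crude wait for the tunnel to come up before rsync needs it
  - sleep 15
  - eval "$(ssh-agent -s)"
  - chmod 600 $TRAVIS_BUILD_DIR/supersecret_rsa
  - ssh-add $TRAVIS_BUILD_DIR/supersecret_rsa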
