Using a VPN for a Travis deploy script - continuous-integration

I'm using a corporate install of Travis-CI for CI. So far triggering a build through a commit and using encrypted values work well.
For deployment however I need to connect to a server that I only can reach via a VPN tunnel (OpenVPN based).
I'm looking for a sample .travis.yml file that has a VPN connection. So far my file looks like this:
language: java
addons:
  ssh_known_hosts: some.host.in.vpn.org
git:
  depth: 3
before_install:
  - sudo apt-get install -qq rpm
  - openssl aes-256-cbc -K $encrypted_fancynumber_key -iv $encrypted_fancynumber_iv -in supersecret_rsa.enc -out supersecret_rsa -d
before_deploy:
  - eval "$(ssh-agent -s)"
  - chmod 600 $TRAVIS_BUILD_DIR/supersecret_rsa
  - ssh-add $TRAVIS_BUILD_DIR/supersecret_rsa
deploy:
  provider: script
  skip_cleanup: true
  script: rsync -r --delete-after --quiet $TRAVIS_BUILD_DIR/build travisdeploy@some.host.in.vpn.org:/opt/coolapp/war
  on:
    branch: master
The script runs a Maven build (language: java makes Travis look for a pom.xml) and rsyncs the build directory onto a server. It works nicely without the VPN in between.
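As far as I know there is no built-in Travis addon for VPNs, so one approach is to bring the tunnel up in before_install. A minimal sketch, assuming the OpenVPN client profile is committed encrypted as client.ovpn.enc (a hypothetical file name with its own hypothetical key/iv variables, encrypted the same way as the deploy key):

before_install:
  - sudo apt-get install -qq rpm openvpn
  # decrypt the VPN client profile (client.ovpn.enc and the
  # encrypted_vpnnumber_* variables are hypothetical names)
  - openssl aes-256-cbc -K $encrypted_vpnnumber_key -iv $encrypted_vpnnumber_iv -in client.ovpn.enc -out client.ovpn -d
  # bring the tunnel up in the background, then wait until the tun
  # device exists so the deploy does not race the connection
  - sudo openvpn --config client.ovpn --daemon
  - until ip addr show tun0 >/dev/null 2>&1; do sleep 1; done

Note this assumes a sudo-enabled build environment, which a corporate Travis install typically provides.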

Related

Run Ansible playbook from Cloud-Init

I have been learning Cloud-Init for several days to do an automated deployment. To achieve this, and to apply certain configurations, I am using Ansible playbooks. The problem I have found is that I am not able to make the playbook run directly on the operating system that is being installed.
Here is the user-data file I am using.
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: hostname
    password: "$6$cOciYeIErEet80Rv$YX8qt6vizXgcUkgIPSKD1qNZNxe77tSWOY3k/0.i8D8EpApaGNuyucxJvONmZiRj4rVM3L6EE4sLKcnzYVcMj/"
    username: ubuntu
  storage:
    layout:
      name: direct
  locale: es_ES
  timezone: "Europe/Madrid"
  keyboard:
    layout: es
  packages:
    - sshpass
    - ansible
    - git
  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -u ubuntu -e "ansible_password=ubuntu" -e "ansible_become_pass=ubuntu"
PS: I am using Ubuntu Server 22.04, the Ansible command is temporary and only for testing, and I know that I have to change the identity fields.
If you want to configure localhost, it's better to use the local transport (which is -c local on the command line).
Basically, change the ansible call to:
ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local
This will bypass all SSH things and run locally.
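Applied to the autoinstall file above, the late-commands could look like the sketch below. It combines the -c local suggestion with curtin in-target, since autoinstall late-commands run in the installer environment while the cloned files and the ansible package land in the installed system (mounted under /target, which is why the clone path starts with /target):

late-commands:
  - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
  # run inside the installed system, where /target/root/dotfiles is /root/dotfiles;
  # -c local bypasses SSH entirely
  - curtin in-target -- ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local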

Can Cloudinit be used to automate complex configuration such as UFW and Apache

Cloudinit can handle basic configuration like creating users and groups, installing packages, mounting storage points, and more (see Cloud Config Examples). But can it handle more complex tasks like the ones below, and if so, how? A minimal working example would be appreciated.
# KNOWN: Replicating the below user creation with sudo privileges and a home
# directory is possible through cloudinit
sudo adduser johnny
sudo usermod -aG sudo johnny
# KNOWN: Replicating the below public/private key creation is possible through
# cloudinit
ssh johnny@10.0.0.1 "ssh-keygen -t rsa"
# UNKNOWN: Is it possible to update the firewall rules in cloudinit or should
# one simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo ufw enable
sudo ufw allow http
sudo ufw allow https"
# UNKNOWN: Is it possible to deploy LetsEncrypt certificates or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo service apache2 restart
sudo certbot --apache"
# UNKNOWN: Is it possible to clone and install git repositories or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
GIT_NAME=johnny
GIT_EMAIL=johnny.rico@citizen.federation
git config --global user.name $GIT_NAME
git config --global user.email $GIT_EMAIL
git clone git@github.com:Federation/clandathu.git
cd clandathu/install
make --kill-em-all
sudo make install"
If you're referring specifically to the cloud-config, then none of the unknowns you have listed have specific modules. However, you can run arbitrary shell scripts via the runcmd module, or by specifying a script as your user data instead of a cloud config; it just has to start with #! rather than #cloud-config. If you want both a cloud config and a custom shell script, you can build a MIME multi-part archive with a cloud-init helper command.
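For example, the firewall and certificate steps from the question could be expressed through runcmd roughly like this (a sketch, untested; the domain and email are placeholders):

#cloud-config
packages:
  - ufw
  - apache2
  - certbot
  - python3-certbot-apache
runcmd:
  # runcmd entries run as root on first boot, so no sudo is needed
  - ufw allow http
  - ufw allow https
  # --force skips ufw's interactive confirmation prompt
  - ufw --force enable
  # placeholders: replace the email and domain before using
  - certbot --apache --non-interactive --agree-tos -m johnny@example.com -d example.com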

Deploying to FTP server in Github Actions does not work

As the title says, deploying to an FTP server isn't working for me from a GitHub Action. I've tried a couple of actions to accomplish this (FTP-Deploy and ftp-action), but FTP-Deploy just kept running with sporadic
curl: (7) Failed to connect to ftpservername.com port 21: Connection timed out
messages, and ftp-action kept running without any output. Note: the server is available; I connected and transferred some files using FileZilla without any issues.
After that I tried using lftp. This is the command I used on a local Ubuntu machine:
lftp -c "open -u username,password ftpservername.com; mirror -R locfolder remote/remotefolder"
and the file transfer worked, but when used in a GitHub Action it produced this output:
---- Connecting to ftpservername.com (123.456.789.123) port 21
mkdir `remote/remotefolder' [Connecting...]
**** Socket error (Connection timed out) - reconnecting
---- Closing control socket
---- Connecting to ftpservername.com (123.456.789.123) port 21
I tried setting both ftp:ssl-allow and ssl:verify-certificate to false, but this did not produce any results. Also, I do not have access to the server, so I can't check the server logs.
This is the workflow file:
name: Test
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pip
        run: python -m pip install --upgrade pip
      - name: Install packages
        run: |
          sudo apt install lftp
          sudo apt install expect
      .
      .
      .
      - name: FTP Deploy
        run: |
          echo Starting...
          unbuffer lftp -c "debug; set ftp:ssl-allow false; set ssl:verify-certificate false; open -u username,${{ secrets.PASSWORD }} ftpservername.com; mirror -R -v locfolder remote/remotefolder"
          echo Done transferring files.
Any help is appreciated, thank you!
Found the issue: the hosting service was blocking the IP address (as it was an IP address outside of the country). After setting up a self-hosted runner and whitelisting the runner's IP, everything works fine.

Bitbucket fatal: Can't access remote

I tried to set up an sftp connection between Bitbucket and a Runcloud server. Runcloud only supports sftp connections. Bitbucket config:
image: php:7.3
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init --user $SFTP_username --passwd $FTP_password sftp://runcloud@1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
Connection always fails with the error fatal: Can't access remote 'sftp://1.111.111.11', exiting...
I tried different sftp path combinations, but the result is always the same.
sftp://1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
sftp://mywebsite/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
My website
Root Path: /home/runcloud/webapps/mywebsite
Public Path: /home/runcloud/webapps/mywebsite
Runcloud has a different setup than "normal" FTP. For example, to connect with FileZilla the host is my server IP, and to get to my website I have to navigate to /webapps/mywebsite.
Not sure what I'm doing wrong. Is my sftp path incorrect?

CI with Gitlab and Digital Ocean

I have my website hosted on Digital Ocean and my repo on GitLab. I do not have an instance of GitLab installed on my Digital Ocean server. I am just using the .gitlab-ci.yml file.
In my CI script, I ssh into Digital Ocean, cd into my project and attempt to pull the latest code.
I have also generated an ssh key on the Digital Ocean server and added it to my ssh keys on GitLab.
I'm not sure if there is a firewall that I can't get past or something.
Unfortunately, it errors out with this error:
Running with gitlab-ci-multi-runner 1.9.0 (82714ae)
Using Docker executor with image ruby:2.1 ...
Pulling docker image ruby:2.1 ...
Running on runner-4e4528ca-project-1209495-concurrent-0 via runner-4e4528ca-machine-1484021348-29523945-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/Rchampin/ryan_the_developer_django'...
Checking out b3783fbf as master...
$ ssh root@myIP
Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed.
ERROR: Build failed: exit code 1
Here is my CI script.
before_script:
  - ssh root@myIP
  - cd /home/rchampin/ryan_the_developer_django
pull:
  script:
    - git pull
You have some options to try in this question:
ssh -t -t
# or
ssh -T
That should avoid requesting a pseudo terminal.
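Applied to the script in the question, that could look like the following sketch. It also passes the remote commands to ssh directly, since otherwise the cd and git pull after a bare ssh would run on the runner rather than on the Digital Ocean server:

pull:
  script:
    # -T disables pseudo-terminal allocation; the quoted command string
    # runs on the remote host, not on the CI runner
    - ssh -T root@myIP "cd /home/rchampin/ryan_the_developer_django && git pull"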
