Bitbucket fatal: Can't access remote - ftp

I tried to set up an SFTP connection between Bitbucket and a RunCloud server. RunCloud only supports SFTP connections. Bitbucket config:
image: php:7.3

pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init --user $SFTP_username --passwd $FTP_password sftp://runcloud@1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
The connection always fails with the error fatal: Can't access remote 'sftp://1.111.111.11', exiting...
I tried different SFTP path combinations, but the result is always the same:
sftp://1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
sftp://mywebsite/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
My website
Root Path: /home/runcloud/webapps/mywebsite
Public Path: /home/runcloud/webapps/mywebsite
RunCloud has a different setup than "normal" FTP. For example, to connect with FileZilla the HOST is my server IP, and to get to my website I have to navigate to /webapps/mywebsite.
I'm not sure what I'm doing wrong. Is my SFTP path incorrect?
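For comparison, here is one shape the deploy step could take, passing the SSH user only through --user instead of embedding it in the URL. This is a sketch, not a confirmed fix: the user name runcloud and port 22 are assumptions based on the FileZilla description above, and $FTP_password is the repository variable from the question.
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            # "runcloud" user and port 22 are assumptions; :22 is the default SFTP port and can be omitted
            - git ftp init --user runcloud --passwd $FTP_password "sftp://1.111.111.11:22/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme"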

Related

Deploying to an FTP server in GitHub Actions does not work

As the title says, deploying to an FTP server isn't working for me from a GitHub Action. I've tried using a couple of actions to accomplish this (FTP-Deploy and ftp-action), but FTP-Deploy just kept running with sporadic
curl: (7) Failed to connect to ftpservername.com port 21: Connection timed out
messages, and ftp-action kept running without any output. Note: the server is available; I connected and transferred some files using FileZilla without any issues.
After that I tried using lftp. This is the command I used on a local Ubuntu machine:
lftp -c "open -u username,password ftpservername.com; mirror -R locfolder remote/remotefolder"
and the file transfer worked, but when used in a Github Action it produced this output:
---- Connecting to ftpservername.com (123.456.789.123) port 21
mkdir `remote/remotefolder' [Connecting...]
**** Socket error (Connection timed out) - reconnecting
---- Closing control socket
---- Connecting to ftpservername.com (123.456.789.123) port 21
I tried setting both ftp:ssl-allow and ssl:verify-certificate to false, but this did not produce any results. Also, I do not have access to the server, so I can't check the server logs.
This is the workflow file:
name: Test
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pip
        run: python -m pip install --upgrade pip
      - name: Install packages
        run: |
          sudo apt install lftp
          sudo apt install expect
      .
      .
      .
      - name: FTP Deploy
        run: |
          echo Starting...
          unbuffer lftp -c "debug; set ftp:ssl-allow false; set ssl:verify-certificate false; open -u username,${{ secrets.PASSWORD }} ftpservername.com; mirror -R -v locfolder remote/remotefolder"
          echo Done transferring files.
Any help is appreciated, thank you!
Found the issue: the hosting service was blocking the IP address (as it was an IP address outside of the country). After setting up a self-hosted runner and whitelisting the runner's IP, everything works fine.
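For reference, pointing the job at the whitelisted self-hosted runner only requires changing the runs-on key. This is a minimal sketch; any custom labels you gave the runner would replace the default self-hosted label.
jobs:
  build:
    # run on the self-hosted runner whose IP has been whitelisted, instead of a GitHub-hosted VM
    runs-on: self-hosted
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      # ...remaining steps unchanged from the workflow above...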

Bitbucket pipeline with Laravel & shared hosting

I am trying to deploy a Laravel 5.4 app with a Bitbucket pipeline and get the error
"fatal: Could not get last commit. Network down? Wrong URL? Use 'git ftp init' for the inital push., exiting..."
I read an article on this site and created this YAML file:
image: samueldebruyn/debian-git

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://site.com
and got the error:
git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD site_url
fatal: Could not get last commit. Network down? Wrong URL? Use 'git ftp init' for the inital push., exiting...
Some hosting providers don't allow external apps and block all ports, leaving only port 80 open plus a few more for FTP, SSL, and SSH. If you want to deploy your Laravel app on shared hosting, just upload all your data to the root except the public folder, and then upload the contents of the public folder to your index folder (www or public_html).
Here is a screenshot of an example.

CI with GitLab and Digital Ocean

I have my website hosted on Digital Ocean and my repo on GitLab. I do not have an instance of GitLab installed on my Digital Ocean server; I am just using the .gitlab-ci.yml file.
In my CI script, I SSH into Digital Ocean, cd into my project, and attempt to pull the latest code.
I have also generated an SSH key on the Digital Ocean server and added it to my SSH keys on GitLab.
I'm not sure if there is a firewall that I can't get past or something.
Unfortunately, it errors out with this error:
Running with gitlab-ci-multi-runner 1.9.0 (82714ae)
Using Docker executor with image ruby:2.1 ...
Pulling docker image ruby:2.1 ...
Running on runner-4e4528ca-project-1209495-concurrent-0 via runner-4e4528ca-machine-1484021348-29523945-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/Rchampin/ryan_the_developer_django'...
Checking out b3783fbf as master...
$ ssh root@myIP
Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed.
ERROR: Build failed: exit code 1
Here is my CI script.
before_script:
  - ssh root@myIP
  - cd /home/rchampin/ryan_the_developer_django

pull:
  script:
    - git pull
You have some options to try in this question:
ssh -t -t
# or
ssh -T
That should avoid requesting a pseudo terminal.
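Worth noting: with ssh on its own line in before_script, the cd and git pull that follow are not reliably executed on the droplet. Below is a minimal sketch of running the remote commands through a single ssh invocation instead, assuming the runner's key is authorized on the server and the host key issue is resolved (for example by adding the host to known_hosts).
pull:
  script:
    # run everything on the droplet in one ssh call; -T skips pseudo-terminal allocation, as suggested above
    - ssh -T root@myIP "cd /home/rchampin/ryan_the_developer_django && git pull"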

How to clone from a local git repository to a VM using Ansible

I have a local git repository which I am trying to clone onto a Vagrant machine. I'm trying to use Ansible's "git" module to do this; I have the following task:
- name: Clone repository
  git: repo=git://../.git dest=/home/vagrant/source accept_hostkey=True
When I run this task I receive the error,
failed: [webserver] => {"cmd": "/usr/bin/git ls-remote git://../.git -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: fatal: unable to connect to ..:
..[0: 42.185.229.96]: errno=Connection timed out
msg: fatal: unable to connect to ..:
..[0: 42.185.229.96]: errno=Connection timed out
FATAL: all hosts have already failed -- aborting
It looks like it's trying to find the repository on my VM rather than on my local machine. How do I clone from my local repo?
The git module executes completely inside the VM, so you have to give it a path that's reachable by the VM. Either set up a Vagrant NFS shared/synced folder with your host, or expose the repo to the VM over the network via HTTP/SSH. Be aware that non-NFS shared folders in Vagrant with VirtualBox (and possibly other providers) just do dumb copies back and forth, not true "sharing" (i.e., depending on how big your repo is, you might be sorry if it's not NFS).
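As an illustration of the synced-folder route: Vagrant mounts the project directory at /vagrant inside the VM by default, so if the repository lives in that directory (an assumption; adjust the path to wherever your synced folder is mounted), the task can point at the path as seen from the VM:
- name: Clone repository from the synced folder
  # /vagrant is Vagrant's default synced-folder mount point inside the VM
  git: repo=/vagrant/.git dest=/home/vagrant/source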
The git commands will be run from the remote machine, in this case your Vagrant VM, not your local machine.
One way to accomplish this is through SSH remote port forwarding. You can forward connections from a port on the remote (Vagrant VM) to a host+port from your local machine.
Your local machine needs to make the git repository available. This can be done with sshd, but I will use the relatively obscure git-daemon, as it is easier to set up.
In your Ansible inventory file, add the following options to your Vagrant VM host. This will forward requests from your remote machine on port 9418 to your local machine at port 9418 (git-daemon) for the duration of the connection.
# inventory
webserver ansible_ssh_extra_args="-R 9418:localhost:9418"
# *OR* for a group of hosts
[webservers:vars]
ansible_ssh_extra_args="-R 9418:localhost:9418"
For this example, I will assume the GIT_DIR on your local machine is located at /home/you/repos/your-git-repo/.git. Before running your Ansible playbook, start the following command in another terminal (add a --verbose option if you want to see output):
git daemon \
    --listen=127.0.0.1 \
    --export-all \
    --base-path=/home/you/repos \
    /home/you/repos/your-git-repo/.git
Your task would look like this:
- git: repo=git://localhost/your-git-repo dest=/home/vagrant/source
Now when git connects to localhost (relative to your Vagrant VM), requests are forwarded to the git daemon running on your local machine.

Using SSH agent forwarding with Salt states

So I have the following in my Vagrantfile:
config.ssh.forward_agent = true
And the following salt state:
git+ssh://git@bitbucket.org/xxx/repo.git:
  git.latest:
    - rev: rest
    - target: /home/vagrant/src
However, I get a public-key error when this Salt state is executed.
The annoying thing is that if I manually run git clone git+ssh://git@bitbucket.org/xxx/repo.git from within my instance, everything works fine. Any ideas?
Is bitbucket.org in the known_hosts file?
git+ssh://git@bitbucket.org/xxx/repo.git:
  git.latest:
    - rev: rest
    - target: /home/vagrant/src
    - require:
      - ssh_known_hosts: bitbucket.org
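The require above points at an ssh_known_hosts state that isn't shown. A minimal sketch of what it could look like (the vagrant user is an assumption, and in practice you would usually pin the host's fingerprint as well):
# state ID matches the "- ssh_known_hosts: bitbucket.org" require above
bitbucket.org:
  ssh_known_hosts.present:
    - user: vagrant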
I had a similar requirement with Capistrano. I used SSH forwarding to check out a repo from GitHub to the remote server. I had to add the host to the ~/.ssh/config file on my machine as below.
vim ~/.ssh/config
Content
Host <some host or IP>
    ForwardAgent yes
I used * as the host so that it works with any server.
