Adding local ssh option to Capistrano task - ruby

I have a Capistrano deployment script that exports my local database to a remote server and vice versa.
Here is one such task...
desc "Imports the remote database into your local environment"
task :pull do
# Create dump of remote db
invoke 'db:backup'
on roles(:app) do
run_locally do
# Create dump of current local db
execute "mysqldump -u #{fetch(:local_db_user)} -p#{fetch(:local_db_password)} #{fetch(:local_db_name)} > #{fetch(:local_backup_file)}"
# Import remote db into local
execute "mysql -u #{fetch(:local_db_user)} -p#{fetch(:local_db_password)} #{fetch(:local_db_name)} < db_backups/#{fetch(:curr_stage)}/#{fetch(:backup_filename)}.sql"
end
end
Rake::Task['db:cleanup_local'].execute
end
However, I'm running a virtual host (a Vagrant box) locally, so to perform local database tasks I need to SSH into it.
What I want is an option in my Capistrano settings that, if present, prepends something like ssh vagrant@192.168.10.10 to the mysqldump and mysql commands, and adds nothing if it isn't defined.
So I would want my local database connection details to look something like
set :local_db_host, "127.0.0.1"
set :local_db_name, "database_dev"
set :local_db_user, "homestead"
set :local_db_password, "secret"
set :local_ssh, "vagrant@192.168.10.10"
How could I add this option within the Capistrano task in a tidy way? Many thanks for any help.
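One tidy approach would be a small helper that adds the ssh hop only when :local_ssh is set. This is a sketch of my own rather than something from the original post, and the helper name local_cmd is made up for illustration:

require 'shellwords'

# If :local_ssh is set (e.g. "vagrant@192.168.10.10"), run the command inside
# the Vagrant box over SSH; otherwise run it directly on the host machine.
def local_cmd(command)
  ssh_target = fetch(:local_ssh, nil)
  ssh_target ? "ssh #{ssh_target} #{Shellwords.escape(command)}" : command
end

The two execute calls in the task would then become, for example, execute local_cmd("mysqldump -u #{fetch(:local_db_user)} ...") and execute local_cmd("mysql -u #{fetch(:local_db_user)} ..."). Note that with the ssh prefix the redirections and file paths are resolved inside the box, so this assumes the relevant paths (such as the db_backups directory) are shared into the Vagrant box.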

Related

How to run a batch file in a remote machine from the Teamcity

I am building a C# project and deployed the package using Teamcity.
I have added a step in the build configuration that I need to execute a batch file present in the remote machine.
The batch file is copied to the remote machine along with the deployed package.
I am getting the error "The system cannot find the path specified."
If you enter the path of the batch file directly, TeamCity will look for that path on the TeamCity server and not on the remote machine.
You would have to install PsExec on the TeamCity machine. See https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
This command will let you execute the script on the remote machine.
Create a Command Line build step and use the following custom script:
psexec \\10.0.111.111 -d -accepteula -u domainname\userid -p password cmd.exe "Path/To/BatchFile.bat"
Here 10.0.111.111 is the IP address of the remote machine (a hostname works as well),
domainname\userid is the domain name and user ID used to log in to the remote machine,
password is the password used to log in to the remote machine, and
Path/To/BatchFile.bat is the path to the batch file.

Using Jenkins to SSH into EC2 Ubuntu instance and run shell scripts

I have installed Jenkins locally, I have created my own EC2 instance, and I can SSH into the instance and run some shell scripts to shut down the WildFly server installed on it.
This is what I do when I do it manually on my Mac.
I open my Mac terminal and type:
ssh -i /Users/xxx/tools/xxxx.pem ubuntu@10.206.xxx.xx
It will login to my Instance, and then I type:
cd /srv/wildfly-10.1.0.Final/bin
sudo -s
source /etc/profile
./jboss-cli.sh --connect command=:shutdown
The screen will output
{"outcome" => "success"}
Now I want to use Jenkins: when I click the Build button, it should SSH into that instance and run these shell scripts for me, producing the same output as when I run them after SSHing into the instance myself.
My question is: what steps should I follow after I log in to my local Jenkins environment at localhost:8080?
Do I create a New Item, and if so, which kind? Is there a plugin I can use? Where do I put my shell scripts so they run successfully?
A guide would be helpful, thanks a lot!
Addition:
When I try to log in using my ssh command, I get this error:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed.
There are too many questions to answer in one post, but this should get you started.
SSH from Jenkins to your EC2 instance should be passwordless; if you need to set the keys up in Jenkins, use the credentials manager and create a credential by pasting in the private key:
https://www.cloudbees.com/blog/using-ssh-jenkins
Refer to remote command execution over SSH for the rest of the task.
You will find plenty of examples of how to do this, but this should give you an idea: https://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
For the question on job type, at this point just go with a freestyle job; later you can plan for fancier setups.
You need to add the PEM file details where it asks for the private key.

Capistrano error : could not connect to ssh-agent

I'm using Bedrock with Capistrano deploys.
When I use command bundle exec cap staging deploy:check I get an authentication error :
...
D, [2015-05-09T15:39:53.878464 #15636] DEBUG -- net.ssh.authentication.session[1e34a58]: trying publickey
D, [2015-05-09T15:39:53.878464 #15636] DEBUG -- net.ssh.authentication.agent[1e30d2c]: connecting to ssh-agent
E, [2015-05-09T15:39:53.879447 #15636] ERROR -- net.ssh.authentication.agent[1e30d2c]: could not connect to ssh-agent
E, [2015-05-09T15:39:53.879447 #15636] ERROR -- net.ssh.authentication.session[1e34a58]: all authorization methods failed (tried publickey)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@SERVER_IP: Authentication failed for user deploy@SERVER_IP
Tasks: TOP => git:check => git:wrapper
Capistrano could not connect to ssh-agent on my server.
But I can log in to my server via SSH like this: ssh deploy@SERVER_IP, without a password. I followed all the instructions on the Capistrano Authentication & Authorisation docs page, so I can use a command like me@localhost $ ssh deploy@one-of-my-servers.com 'hostname; uptime'.
If I enter the command ssh -A deploy@SERVER_IP 'env | grep SSH_AUTH_SOCK' I get the result
SSH_AUTH_SOCK=/tmp/ssh-UweQkw7578/agent.7578
Here is my deploy.rb file :
set :application, 'APP'
set :repo_url, 'URL'
set :branch, :master
set :tmp_dir, '~/tmp'
set :log_level, :info
set :linked_files, fetch(:linked_files, []).push('.env')
set :linked_dirs, fetch(:linked_dirs, []).push('web/app/uploads')
Here is my staging.rb file :
set :stage, :staging
set :deploy_to, -> { "/var/www/vhosts/project/dev" }
server 'SERVER_IP', user: 'deploy', roles: %w{web app}
set :ssh_options, {
  user: 'deploy',
  keys: %w('/c/Users/alexander/.ssh/id_rsa'),
  forward_agent: true,
  auth_methods: %w(publickey),
  verbose: :debug
}
fetch(:default_env).merge!(wp_env: :staging)
Agent forwarding is enabled in the server's sshd_config file: AllowAgentForwarding yes
What should do with my config files to make my deploy work?
Windows 8.1
Ruby 2.2.0
Capistrano 3.2.1
Git Bash
OK, so I had the same issue, and I spent way too long working out exactly what is happening here. The upshot is:
for Ruby on Windows, you must run Pageant, not ssh-agent, for Capistrano and agent forwarding to work; in fact this applies to pretty much any tool that uses the Ruby net-ssh library on Windows.
And I don't think that will change, at least not for a while.
Agent Forwarding
See An Illustrated Guide to SSH Agent Forwarding for more about agent forwarding and how the key challenge ends up back on our workstation.
Terminology
workstation - the machine (Windows server/desktop/laptop) our SSH client software is running from and, most importantly, our PKI private key is stored on (with or without a passphrase)
deployment node - the target of our Capistrano deployment task, most likely defined in the 'server' key in our config/deploy.rb or config/deploy/.rb file
git repo - where we will pull the code from, first queried via "git ls-remote" - we will access this git repo via SSH, and the deployment node will use agent forwarding to pass the key challenge back to the workstation
SSH client software - how we reach out to sshd on remote servers, and which has access to our private key. Might be PuTTY, an OpenSSH ssh client, or the net-ssh library in Ruby.
Setup
I have a Windows 7 workstation box, with Git-Bash and its OpenSSH ssh client, plus the script from Joe Reagle that sets up the environment variables that say which port and pid the ssh-agent is operating on.
I also have PuTTY and Pageant, but I focused, initially, on just the OpenSSH/Git-Bash tools.
I have set up passwordless ssh from the workstation to the deployment node, I have the ssh-agent running, I have my key added through ssh-add, and I have my public key registered as a read-only access key to the git repo.
Basics
So we are trying to use SSH agent forwarding to have Capistrano pull from our Git repo onto our deployment node.
Now we can test this all ourselves by setting up our public SSH key on the deployment node and using, say, the OpenSSH ssh client to confirm we have passwordless ssh working. Then we can set up ssh-agent by:
starting ssh-agent and setting the SSH_AUTH_SOCK and SSH_AGENT_PID variables as required
adding our private key to the ssh-agent via ssh-add
adding our public key as an authorised key to the git repo
ssh-ing to the deployment node, and from there doing a "git ls-remote git@..." (or an ssh -T git@...)
If everything is set up correctly, this will all work, and so we will think "ok, I can do a 'cap deploy:check'" - and it will fail.
What Went Wrong
We will get an error
"Error reading response length from authentication socket"
Who is telling us this? It isn't immediately clear, but it
isn't the git repo,
isn't the git client on the deployment node, and
isn't the sshd daemon on the deployment node that wants to pass the key challenge back to the workstation.
It's the Ruby ssh client library on the workstation.
How do we know this
In the ssh_options hash in the deploy.rb file, we add the following :
verbose: :debug
When we do this we see this message
Pageant not running.
Why is Capistrano trying to use Pageant instead of ssh-agent
When running via Capistrano, the ssh client is different to the one you used when verifying things by hand.
When verifying by hand, it was an OpenSSH ssh client. Now it is the net-ssh library in Ruby.
And on Windows, net-ssh has these lines
if Net::SSH::Authentication::PLATFORM == :win32
  require 'net/ssh/authentication/pageant'
end
or
case Net::SSH::Authentication::PLATFORM
when :java_win32
  require 'net/ssh/authentication/agent/java_pageant'
else
  require 'net/ssh/authentication/agent/socket'
end
So loading Pageant is hard-coded into net-ssh. It doesn't even try to see if you are running under a unix-like shell (like Git-Bash or Cygwin) and then use the unix-domain ssh-agent socket from SSH_AUTH_SOCK.
At present net-ssh doesn't try to open a unix-domain named socket. In theory I think it could, through the UNIXSocket class in the stdlib, but I haven't experimented with that on a Windows machine yet.
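As an aside, here is a minimal, untested sketch of that idea in plain Ruby, assuming a unix-like environment with a running ssh-agent and SSH_AUTH_SOCK exported; the message type numbers come from the ssh-agent protocol, and none of this is part of net-ssh itself:

require 'socket'

# Talk to the unix-domain ssh-agent socket directly via UNIXSocket from the stdlib.
sock_path = ENV['SSH_AUTH_SOCK'] or abort 'SSH_AUTH_SOCK is not set'
agent = UNIXSocket.new(sock_path)
# Ask the agent to list its keys: SSH_AGENTC_REQUEST_IDENTITIES is message type 11.
agent.write([1, 11].pack('NC'))            # uint32 length, then one type byte
length = agent.read(4).unpack('N').first   # length of the agent's reply
type   = agent.read(1).unpack('C').first   # 12 = SSH_AGENT_IDENTITIES_ANSWER
puts "agent replied with message type #{type} (#{length} bytes)"
agent.close

If that works from Git-Bash's ssh-agent, the missing piece would still be getting net-ssh to use such a socket instead of Pageant.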

Accessing Riak node from a remote machine (riak-admin backup)

While trying to run riak-admin backup riak@ec2-xxx.compute-1.amazonaws.com riak /home/user/backup.dat all on a remote machine (an EC2 instance), I encounter the following error message:
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,'riak#ec2-xxx.compute-1.amazonaws.com'}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
I assume there's a connection/permission error, since the same backup command works if run locally on the instance (with a local node IP, of course). I should note that the server (Node.js) can connect to that IP remotely, so port 8098 is open and accessible. Any advice on how to make the backup work remotely?
It would appear that the riak-admin backup command doesn't work remotely, and certainly it's not something I've ever tried to do. I'd recommend setting up a periodic backup (via cron or similar) and then using rsync to get your backup file down locally.
Alternatively, you could try the following hacky untested idea for a single script.
#!/bin/bash
ssh ec2-xxx.compute-1.amazonaws.com "riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all"
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .

Capistrano, Firewalls and Tunnel

We're using Capistrano to automate pushing new versions of a PHP application to a production server. The production server (we'll call it production) is public, while our repository server (we'll call it repo) sits behind our corporate firewall, along with our own machines.
Capistrano, as configured by default, won't work, as production can't talk to repo.
I was wondering if there is some way I could set up Capistrano to SSH to repo first, then SSH to production, opening a tunnel on a port that I can then use to SSH from production back to repo to pull the changes from SCM.
I just can't figure out how to set this up or figure out a better solution. Ideas?
Edit:
I've tried this:
role :web, "deploy.com"
namespace :deploy do
  task :remote_tunnel do
    run 'Creating SSH tunnel...' do |channel, stream, data|
      ssh = channel.connection
      ssh.forward.remote(22, 'server.com', 10000, '127.0.0.1')
      ssh.loop { !ssh.forward.active_remotes.include?([10000, '127.0.0.1']) }
    end
  end
end
before "deploy:update_code", "deploy:remote_tunnel"
But I keep getting this error:
failed: "sh -c 'Creating SSH tunnel...'" on deploy.com
Here are two ways to accomplish it.
1st way
Not sure if you've seen this thread?
https://groups.google.com/forum/?fromgroups=#!topic/capistrano/RVwMim-qnMg
It makes use of the net-ssh-gateway library, creating copies of the local forwarding methods but geared for remote access.
class Net::SSH::Gateway
  # Opens a SSH tunnel from a port on a remote host to a given host and port
  # on the local side
  # (equivalent to openssh -R parameter)
  def open_remote(port, host, remote_port, remote_host = "127.0.0.1")
    ensure_open!
    @session_mutex.synchronize do
      @session.forward.remote(port, host, remote_port, remote_host)
    end
    if block_given?
      begin
        yield [remote_port, remote_host]
      ensure
        close_remote(remote_port, remote_host)
      end
    else
      return [remote_port, remote_host]
    end
  rescue Errno::EADDRINUSE
    retry
  end

  # Cancels port-forwarding over an open port that was previously opened via
  # open_remote.
  def close_remote(port, host = "127.0.0.1")
    ensure_open!
    @session_mutex.synchronize do
      @session.forward.cancel_remote(port, host)
    end
  end
end
2nd way
Outlined in an answer to this SO question:
Is it possible to do have Capistrano do a checkout over a reverse SSH tunnel?
This technique is very similar to the 1st way. First you need to create 2 paths to the repository:
# deploy.rb
set :local_repository, "ssh://git@serverbehindfirewall/path/to/project.git"
set :repository, "ssh://git@localhost:9000/path/to/project.git"
Then before you deploy you'll need to setup the remote forward:
% ssh -R 9000:serverbehindfirewall:22 deploybot@deployserver.com
# CTRL + A, C (Screen) or ⌘ + T (Terminal.app) to open a new tab
Followed by your deploy:
% cap HOSTFILTER=deployserver.com deploy # HOSTFILTER reduces set to specified host. Only useful if you have multiple servers.
See this answer to that SO question for more details:
https://stackoverflow.com/a/3953351/33204
Using Capistrano 3.x, the following works for me:
namespace :deploy do
  desc "Open SSH Tunnel to GitLab"
  task :open_tunnel do
    on roles(:app) do
      info "Opening SSH Remote Tunnel..."
      self.send(:with_ssh) do |ssh|
        # ssh -R 9000:192.168.1.123:22
        ssh.forward.remote(22, "192.168.1.123", 9000)
      end
    end
  end

  before "deploy:check", "deploy:open_tunnel"
end
Please note that ssh.forward.remote expects parameters in a different order than ssh -R; the above is equivalent to ssh -R 9000:192.168.1.123:22.
This task calls a private method; if anyone knows an official way to get access to Capistrano's SSH connection, please comment or edit.
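To make the parameter-order note concrete, here is a minimal standalone net-ssh sketch, reusing the host names from the answers above purely for illustration; it is not part of the original post:

require 'net/ssh'

# Equivalent to: ssh -R 9000:192.168.1.123:22 deploybot@deployserver.com
Net::SSH.start('deployserver.com', 'deploybot') do |ssh|
  ssh.forward.remote(22, '192.168.1.123', 9000)  # (host_port, host, remote_port)
  ssh.loop { true }  # keep the session, and therefore the tunnel, open
end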
Edit: Also see the section Tunneling and other related SSH themes of SSHKit's README
