How do I get this Capistrano task to run on the server instead of locally?

I want to pull the last modified file from a directory. This Capistrano task works fine locally, but how do I make it run on the server so I can pull the server's data?
namespace :pull do
  desc "Hello Pull data from the server"
  task :hello, roles: :db do
    ## Want this to return what's on the server. Not locally.
    puts "Getting filename of last created database backup"
    db_backups_directory_path = "/home/deployer/backups"
    last_db_backup_archived = Dir.glob(File.join(db_backups_directory_path, '*')).
      select { |f| File.file? f }.
      sort_by { |f| File.mtime f }.
      last
    puts last_db_backup_archived
  end
end

I'd just go with run. Capistrano executes commands in parallel over a bunch of servers, so you'll have to translate your Ruby into shell code. Thankfully, in your case it's more or less a straightforward translation.
task :hello, roles: :db do
  ## Want this to return what's on the server. Not locally.
  puts "Getting filename of last created database backup"
  db_backups_directory_path = "/home/deployer/backups"
  # %T@ prints the modification time as seconds since the epoch
  run <<-CMD
    find #{db_backups_directory_path} -type f -printf '%T@ %p\n' |
      sort -n | tail -n1 | cut -d" " -f2
  CMD
end

The capture command will also run on a remote server. In addition to running a command remotely, it writes the command's stdout into a Ruby variable, so you can manipulate it with Ruby methods and then pass it back in:
some_variable = capture("pwd")
capture("cd #{some_variable}/.. && ls -alh")
This isn't the best example, but you get the general idea. The second capture is obviously not necessary; you could substitute it with run and it wouldn't make a difference.
However, you should know that this will not work if you are running this task against multiple servers.
From the documentation:
Executes the given command on the first server targeted by the
current task, collects its stdout into a string, and returns the
string. The command is invoked via #invoke_command.
http://rdoc.info/github/capistrano/capistrano/Capistrano/Configuration/Actions/Inspect#capture-instance_method
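Applied to the original task, a minimal sketch using capture might look like this (assuming the same backup path; ls -t sorts by modification time, newest first):
task :hello, roles: :db do
  puts "Getting filename of last created database backup"
  db_backups_directory_path = "/home/deployer/backups"
  # capture runs the command on the first :db server and returns its stdout,
  # so the remote filename lands in a local Ruby variable
  last_db_backup_archived = capture("ls -t #{db_backups_directory_path} | head -n1").strip
  puts last_db_backup_archived
end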

Related

Ruby command to download multiple folders from S3 using the AWS CLI

I'm backing up some accounts from my bucket, where the prefix is an id.
When I run it with a single id it works correctly, but with multiple ids my Ruby script quits. I checked, and the error occurs when it runs the exec command. I've been trying to work out why it breaks, but it's taking a while. Can anybody help me understand why?
test.txt with 1 id:
1
test.txt with multiple ids:
1,2,3
My code:
file_names = ["test.txt"]
Dir.mkdir("logs") unless Dir.exist?("logs")
Dir.mkdir("data") unless Dir.exist?("data")
file_names.each do |file|
  out_file = File.new("logs/#{file}", "w")
  out_file.puts("Start read file #{file}")
  member_ids = File.read("#{file}").strip!.split(",")
  member_ids.each do |id|
    Dir.mkdir("data/#{id}") unless Dir.exist?("data/#{id}")
    command = "aws s3 sync s3://mybucket/#{id}/ data/#{id}/"
    exec command
    out_file.puts("#{id}")
  end
  out_file.puts("Finished read file #{file}")
  out_file.close
end
The error is caused by the exec command: exec replaces the current Ruby process with the aws process, so the script never returns to the loop after the first id. Change it to the system command and it will work :D
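For reference, a minimal sketch of the fixed inner loop (same names as in the question):
member_ids.each do |id|
  Dir.mkdir("data/#{id}") unless Dir.exist?("data/#{id}")
  command = "aws s3 sync s3://mybucket/#{id}/ data/#{id}/"
  # system spawns a child process and waits for it to finish,
  # so the loop continues with the next id; exec never returns
  system(command)
  out_file.puts("#{id}")
end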

How to pass command line parameters to capistrano

I am using Capistrano v2.9.0.
I run this command:
cap deploy:tryout -S testvar=thing
and my deploy.rb contains this:
namespace :deploy do
  task :tryout do
    if defined? testvar
      puts "param: #{testvar}\n"
    else
      puts "no branch!\n"
    end
  end
end
The output is "no branch!". How do I pass values from the command line? I tried looking into the code, and I can see options.rb where it adds the passed parameter to options[:pre_vars], but that seems to be an instance variable, and I can't figure out how to access it from my deploy script.
Solution:
The options can be accessed via the #parent.variables hash, so if the command-line string is testvar=thing, then #parent.variables[:testvar] holds the string "thing".
This seems really ugly and hacky, but it works.
Edit:
Turns out it is also available locally via variables[:testvar].
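Putting it together, a minimal sketch of the task using variables (still invoked as cap deploy:tryout -S testvar=thing):
namespace :deploy do
  task :tryout do
    # -S key=value entries land in the configuration's variables hash
    if variables[:testvar]
      puts "param: #{variables[:testvar]}\n"
    else
      puts "no testvar!\n"
    end
  end
end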

How to find most recently modified file in a remote directory (via ssh)?

I found this answer helpful:
How can you find the most recently modified folder in a directory using Ruby?
But what I need is to do the same for a remote directory (via SSH). What is the easiest way to do this in Ruby?
Here's what I have so far:
paths = (IO.popen("ssh -A user@yo.mammas.house.com ls /install/")).read.split("\n")
I only want these folders:
if p =~ /^release-MC-.*$/
I'm currently parsing the result of the ls command, splitting on new lines, matching on the regex and the next step is to build a hash of the date string embedded in the folder name. I really don't want to have to do this last step but it will work.
Is there a better way?
This is less a Net::SSH question as it is "What command can I issue to find the most recently modified file?"
SSH connections can issue a command, so once you know what command to send, or execute, you're done. I'd look at:
ls -Alt path/to/files | sed -n '2p'
Fleshing out something more usable results in:
require 'net/ssh'

HOST = 'hostname.domain'
USER = 'user'
PASSWORD = "password"

output = Net::SSH.start(HOST, USER, :password => PASSWORD) { |ssh|
  ssh.exec!('ls -alt . | grep pattern_to_find')
}
puts output
Which, after filling in the fields with the right values and running it, connected to one of my hosts at work and returned something like:
drwxr-xr-x 11 xxxxxxxxxxxx xxxxxxxxx 4096 Oct 2 16:20 development
If you have multiple hits to retrieve, either expand the pattern after grep, or drop the pipe to grep and parse the resulting output in Ruby once the command returns. You can also drop the t flag from ls if you want to sort locally, though it's a better idea to offload as much of the processing as possible to the far-side host rather than have it return a huge glob of data to process locally. The less you return, the faster your overall code will be.
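For this particular question, a minimal sketch that pushes the sorting to the far side and filters in Ruby (host and user are placeholders) could be:
require 'net/ssh'

# ls -t sorts entries newest-first, so the first name matching the
# release-MC pattern is the most recently modified one
newest = Net::SSH.start('yo.mammas.house.com', 'user') do |ssh|
  ssh.exec!('ls -t /install/')
     .split("\n")
     .find { |name| name =~ /^release-MC-/ }
end
puts newest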

Ruby Dump all cron jobs to text file

I want a Ruby script that will dump all the existing cron jobs to a text file using "crontab -l", or anything else that achieves the same objective. The text file should also be usable with crontab txtfile to create the cron jobs again.
Below is the code I already wrote:
def dump_pre_cron_jobs(file_path)
  begin
    cron_list = %x[crontab -l]
    if cron_list.size > 0
      # each_line iterates the crontab output line by line
      cron_list.each_line do |crl|
        mymethod_that_writes_tofile(file_path, crl) unless crl.chomp.include?("myfilter")
      end
    end
  rescue Exception => e
    raise(e.message)
  end
end
Why does this need to be a Ruby script?
As you say, you can dump the crontab to a file with crontab -l > crontab.txt.
To read them back in again, simply use crontab crontab.txt, or cat crontab.txt | crontab -
I agree with @Vortura that you do not need to create a Ruby script to do this.
If you really want to, here is a probable way:
File.open('crontab.txt', 'w') do |crontab|
  crontab << `crontab -l`
end
NOTE: Running this as root, or using sudo, should capture all the cron jobs on a system, not just a single user's jobs. Run it as yourself, or as that user, and it might capture just those jobs. I haven't tested that aspect of it.
Trying to run crontab -l to capture the crontab files for all users and packages seems an indirect way to do the task, and you could have to deal with password prompts hanging your code. I'd write code to comb through the directories that store the crontabs rather than mess with prompts. Run the code using sudo and you shouldn't have any problems accessing the files.
Take a look at the discussion at: http://www.linuxquestions.org/questions/linux-newbie-8/etc-crontab-vs-etc-cron-d-vs-var-spool-cron-crontabs-853881/ for information on where the actual cron tab files are stored on disk.
Also https://superuser.com/questions/389116/how-to-recover-crontab-jobs-from-filesystem/389137 has similar information.
Mac OS varies a little from Linux in where Apple puts the cron files. Run man cron at the command-line for the definitive details on either OS.
Here's slightly-tested code for how I'd back up the files. How you restore them is for you to figure out, but it shouldn't be hard to figure out:
require 'fileutils'

BACKUP_PATH = '/path/to/some/safe/storage/directory'

# Directories that hold per-user or packaged crontab files
CRONTAB_DIRS = %w[
  /usr/lib/cron/tabs
  /var/spool/cron
  /etc/cron.d
]

# Stand-alone crontab files (/etc/anacrontab is a file, not a directory,
# so it belongs here rather than in CRONTAB_DIRS)
CRONTAB_FILES = %w[
  /etc/anacrontab
  /etc/cron_list
]

def dump_pre_cron_jobs(file_path)
  # Recreate the original directory layout underneath BACKUP_PATH
  full_backup_path = File.join(
    BACKUP_PATH,
    File.dirname(file_path)
  )
  FileUtils.mkdir_p(full_backup_path) unless Dir.exist?(full_backup_path)
  File.write(
    File.join(full_backup_path, File.basename(file_path)),
    File.read(file_path)
  )
rescue Exception => e
  STDERR.puts e.message
end

CRONTAB_DIRS.each do |ct|
  next unless Dir.exist?(ct)
  begin
    # Dir.glob returns full paths; Dir.entries would return bare filenames
    Dir.glob(File.join(ct, '*')).each { |fn| dump_pre_cron_jobs(fn) }
  rescue Errno::EACCES => e
    STDERR.puts e.message
  end
end

CRONTAB_FILES.each do |fn|
  dump_pre_cron_jobs(fn)
end
You'll need to run this as root via sudo to access the directories and files as they're usually locked down from unauthorized prying eyes.
The code creates a repository of crontabs, in BACKUP_PATH, based on their original file paths. No changes are made to the file contents so they can be restored as-is by copying them back via cp or writing code to reverse this process.
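As a starting point, here is a hedged restore sketch (assuming the BACKUP_PATH layout created above): walk the backup tree and copy each file back to its original absolute path.
require 'fileutils'

BACKUP_PATH = '/path/to/some/safe/storage/directory'

Dir.glob(File.join(BACKUP_PATH, '**', '*')).each do |backup|
  next unless File.file?(backup)
  # Stripping the backup prefix yields the file's original absolute path
  original = backup.sub(BACKUP_PATH, '')
  FileUtils.mkdir_p(File.dirname(original))
  FileUtils.cp(backup, original)
end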

Ruby: run Linux commands one by one over SSH and LOG everything

I want to write Ruby code with Net::SSH that runs commands one by one on a remote Linux machine and logs everything (the command invoked, plus its stdout and stderr from the Linux machine).
So I wrote this function:
def rs(ssh, cmds)
  cmds.each do |cmd|
    log.debug "[SSH>] #{cmd}"
    ssh.exec!(cmd) do |ch, stream, data|
      log.debug "[SSH:#{stream}>] #{data}"
    end
  end
end
For example, if I want to create new folders and a file on the remote Linux machine, "./verylongdirname/anotherlongdirname/a.txt", then list the files in that directory and look for firefox there (which is a little stupid :P), I call the above procedure like this:
Net::SSH.start(host, user, :password => pass) do |ssh|
  cmds = ["mkdir verylongdirname",                                  #1
          "cd verylongdirname; mkdir anotherlongdirname",           #2
          "cd verylongdirname/anotherlongdirname; touch a.txt",     #3
          "cd verylongdirname/anotherlongdirname; ls -la",          #4
          "cd verylongdirname/anotherlongdirname; find ./ firefox"] #5 this command sends an error to stderr
  rs(ssh, cmds) # HERE we call our function
  ssh.loop
end
After running the code above I have a full log with information about the commands executed in lines #1, #2, #3, #4 and #5. The problem is that the state on the Linux machine is not preserved between the commands in the cmds array (so I must repeat the "cd" statement before each command), and I'm not satisfied with that.
My goal is to have a cmds table like this:
cmds = ["mkdir verylongdirname",     #1
        "cd verylongdirname",
        "mkdir anotherlongdirname",  #2
        "cd anotherlongdirname",
        "touch a.txt",               #3
        "ls -la",                    #4
        "find ./ firefox"]           #5
As you can see, here the state on the Linux machine is preserved between commands (so we don't need to repeat the appropriate "cd" statement before each command). How do I change the rs(ssh, cmds) procedure to do this and still LOG EVERYTHING (command, stdout, stderr) like before?
Perhaps try an SSH channel instead, to open a remote shell. That should preserve state between your commands, since the connection is kept open:
http://net-ssh.github.com/ssh/v1/chapter-5.html
Here's also an article of doing something similar with a little bit different approach:
http://drnicwilliams.com/2006/09/22/remote-shell-with-ruby/
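For illustration, a minimal sketch of the channel approach with Net::SSH 2 (directory names from the question; logging reduced to puts) might look like:
require 'net/ssh'

Net::SSH.start(host, user, :password => pass) do |ssh|
  ssh.open_channel do |channel|
    # Request an interactive shell so state (cwd, env) persists between commands
    channel.send_channel_request("shell") do |ch, success|
      raise "could not start a remote shell" unless success
      ch.on_data          { |_c, data| puts "[SSH:stdout>] #{data}" }
      ch.on_extended_data { |_c, _type, data| puts "[SSH:stderr>] #{data}" }
      ch.send_data "mkdir verylongdirname\n"
      ch.send_data "cd verylongdirname\n"
      ch.send_data "ls -la\n"
      ch.send_data "exit\n"
    end
  end
  ssh.loop
end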
Edit 1:
Ok. I see what you are saying. SyncShell was removed from Net::SSH 2.0. However I found this, which looks like it does pretty much what SyncShell did:
http://net-ssh-telnet.rubyforge.org/
Example:
s = Net::SSH.start(host, user)
t = Net::SSH::Telnet.new("Session" => s, "Prompt" => %r{^myprompt :})
puts t.cmd("cd /tmp")
puts t.cmd("ls") # <- Lists contents of /tmp
I.e. Net::SSH::Telnet is synchronous, and preserves state, because it runs in a pty with your remote shell environment. Remember to set the correct prompt detection, otherwise Net::SSH::Telnet will appear to hang once you call it (it's trying to find the prompt).
You can use a pipe instead:
require "open3"
SERVER = "..."
BASH_PATH = "/bin/bash"
BASH_REMOTE = lambda do |command|
Open3.popen3("ssh #{SERVER} #{BASH_PATH}") do |stdin, stdout, stderr|
stdin.puts command
stdin.close_write
puts "STDOUT:", stdout.read
puts "STDERR:", stderr.read
end
end
BASH_REMOTE["ls /"]
BASH_REMOTE["ls /no_such_file"]
Ok, finally with the help of @Casper I got the procedure (maybe someone can use it):
# Remote command execution
# t = Net::SSH::Telnet session, c = command string
def cmd(t, c)
  first = true
  d = ''
  # We send the command via SSH and read the output piece by piece (into 'cm')
  t.cmd(c) do |cm|
    # Clean up the output piece (it contains terminal escape sequences)
    d << cm.gsub(/\e\].*?\a/, "").gsub(/\e\[.*?m/, "").gsub(/\r/, "")
    # When we have read an entire line (composed of many pieces), write it to the log
    if d =~ /(^.*?)\n(.*)$/m
      if first
        # Instead of the first line (which echoes the command), we log the command 'c'
        #log.info "[SSH]>" + c
        first = false
      else
        #log.info "[SSH] " + $1
      end
      d = $2
    end
  end
  # Log the lines that were left over at the end (in the last piece)
  d.each_line do |l|
    #log.info "[SSH] " + l.chomp
  end
end
And we call it in code:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'
require 'net/ssh/telnet'
require 'log4r'
...

Net::SSH.start(host, user, :password => pass) do |ssh|
  t = Net::SSH::Telnet.new("Session" => ssh)
  cmd(t, "cd /")
  cmd(t, "ls -la")
  cmd(t, "find ./ firefox")
end
Thanks, bye.
Here's a wrapper around Net::SSH; see the article at http://ruby-lang.info/blog/virtual-file-system-b3g and the source at https://github.com/alexeypetrushin/vfs.
To log all commands, just override the Box.bash method and add logging there.
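For example, a hypothetical monkey-patch (assuming vfs really exposes a Box#bash instance method as described; the signature here is an assumption, not verified against the library):
class Box
  # Keep a handle on the original implementation (assumed name/signature)
  alias_method :bash_without_logging, :bash

  def bash(command, *args)
    puts "[SSH>] #{command}"  # log the command being sent
    result = bash_without_logging(command, *args)
    puts "[SSH<] #{result}"   # log whatever the remote returned
    result
  end
end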
