Ruby: run Linux commands one by one over SSH and log everything

I want to write Ruby code with Net::SSH that runs commands one by one on a remote Linux machine and logs everything (the command called, plus its stdout and stderr on the Linux machine).
So I wrote this function:
def rs(ssh, cmds)
  cmds.each do |cmd|
    log.debug "[SSH>] #{cmd}"
    ssh.exec!(cmd) do |ch, stream, data|
      log.debug "[SSH:#{stream}>] #{data}"
    end
  end
end
For example, if I want to create new folders and a file on the remote Linux machine ("./verylongdirname/anotherlongdirname/a.txt"), list the files in that directory, and look for firefox there (which is a little stupid :P), I call the above procedure like this:
Net::SSH.start(host, user, :password => pass) do |ssh|
  cmds = ["mkdir verylongdirname",                                 #1
          "cd verylongdirname; mkdir anotherlongdirname",          #2
          "cd verylongdirname/anotherlongdirname; touch a.txt",    #3
          "cd verylongdirname/anotherlongdirname; ls -la",         #4
          "cd verylongdirname/anotherlongdirname; find ./ firefox" #5 this command sends an error to stderr
         ]
  rs(ssh, cmds) # HERE we call our function
  ssh.loop
end
After running the code above I have a full log with information about the commands executed in lines #1, #2, #3, #4 and #5. The problem is that state on the Linux machine is not preserved between the commands in the cmds array, so I have to repeat the "cd" statement before each actual command, and I'm not satisfied with that.
What I want is a cmds array like this:
cmds = ["mkdir verylongdirname",    #1
        "cd verylongdirname",
        "mkdir anotherlongdirname", #2
        "cd anotherlongdirname",
        "touch a.txt",              #3
        "ls -la",                   #4
        "find ./ firefox"]          #5
As you can see, the state between commands is preserved on the Linux machine (so we don't need to repeat the appropriate "cd" statement before each command). How do I change the rs(ssh, cmds) procedure to do this and still LOG EVERYTHING (command, stdout and stderr) like before?

Perhaps try it with an SSH channel instead, to open a remote shell. That should preserve state between your commands, as the connection will be kept open:
http://net-ssh.github.com/ssh/v1/chapter-5.html
Here's also an article about doing something similar with a slightly different approach:
http://drnicwilliams.com/2006/09/22/remote-shell-with-ruby/
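
For instance, here is a minimal sketch (untested) of the channel approach, reusing host/user/pass from the question: a single shell is started in a pty and every command is written to it, so state such as the working directory persists between commands. (Note that with a pty most output, including stderr, arrives via on_data.)
require 'net/ssh'

Net::SSH.start(host, user, :password => pass) do |ssh|
  ssh.open_channel do |channel|
    channel.request_pty
    channel.send_channel_request("shell") do |ch, success|
      raise "could not start remote shell" unless success
      # Everything the shell prints arrives through these callbacks:
      ch.on_data          { |c, data|       puts "[SSH:stdout>] #{data}" }
      ch.on_extended_data { |c, type, data| puts "[SSH:stderr>] #{data}" }
      # All commands go to the same shell, so "cd" persists:
      ch.send_data "cd verylongdirname\n"
      ch.send_data "ls -la\n"
      ch.send_data "exit\n"
    end
  end
  ssh.loop
end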
Edit 1:
OK, I see what you are saying. SyncShell was removed from Net::SSH 2.0. However, I found this, which looks like it does pretty much what SyncShell did:
http://net-ssh-telnet.rubyforge.org/
Example:
s = Net::SSH.start(host, user)
t = Net::SSH::Telnet.new("Session" => s, "Prompt" => %r{^myprompt :})
puts t.cmd("cd /tmp")
puts t.cmd("ls") # <- Lists contents of /tmp
I.e. Net::SSH::Telnet is synchronous, and preserves state, because it runs in a pty with your remote shell environment. Remember to set the correct prompt detection, otherwise Net::SSH::Telnet will appear to hang once you call it (it's trying to find the prompt).

You can use a pipe instead:
require "open3"
SERVER = "..."
BASH_PATH = "/bin/bash"
BASH_REMOTE = lambda do |command|
  Open3.popen3("ssh #{SERVER} #{BASH_PATH}") do |stdin, stdout, stderr|
    stdin.puts command
    stdin.close_write
    puts "STDOUT:", stdout.read
    puts "STDERR:", stderr.read
  end
end
BASH_REMOTE["ls /"]
BASH_REMOTE["ls /no_such_file"]
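
Note that each BASH_REMOTE call above opens a fresh SSH connection, so shell state is not kept between calls. A variant sketch (untested) that keeps a single remote bash alive, reusing SERVER and BASH_PATH from above, so state such as the working directory persists:
require "open3"

stdin, stdout, stderr, wait_thr = Open3.popen3("ssh #{SERVER} #{BASH_PATH}")
stdin.puts "cd /tmp"
stdin.puts "pwd"        # prints /tmp -- the cd above persisted
stdin.close_write       # tells bash no more commands are coming
puts "STDOUT:", stdout.read
puts "STDERR:", stderr.read
wait_thr.value          # wait for the ssh process to exit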

OK, finally with the help of @Casper I got the procedure (maybe someone can use it):
# Remote command execution
# t = Net::SSH::Telnet session, c = "command_string"
def cmd(t, c)
  first = true
  d = ''
  # We send the command via SSH and read the output piece by piece (into 'cm')
  t.cmd(c) do |cm|
    # Below we clean up the output piece (because it contains strange escape chars)
    d << cm.gsub(/\e\].*?\a/, "").gsub(/\e\[.*?m/, "").gsub(/\r/, "")
    # When we have read an entire line (composed of many pieces) we write it to the log
    if d =~ /(^.*?)\n(.*)$/m
      if first
        # Instead of the first line (which echoes the command) we log the command 'c' itself
        #log.info "[SSH]>" + c
        first = false
      else
        #log.info "[SSH] " + $1
      end
      d = $2
    end
  end
  # Print any lines left over at the end (from the last piece)
  d.each_line do |l|
    #log.info "[SSH] " + l.chomp
  end
end
And we call it in code:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'
require 'net/ssh/telnet'
require 'log4r'
...
...
...
Net::SSH.start(host, user, :password => pass) do |ssh|
  t = Net::SSH::Telnet.new("Session" => ssh)
  cmd(t, "cd /")
  cmd(t, "ls -la")
  cmd(t, "find ./ firefox")
end
Thanks, bye.

Here's a wrapper around Net::SSH; see this article http://ruby-lang.info/blog/virtual-file-system-b3g
source: https://github.com/alexeypetrushin/vfs
To log all commands, just override the Box.bash method and add logging there.
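A hypothetical sketch of that override (the Box class name and bash signature are taken from the sentence above, not verified against the gem; log is whatever logger you already use):
# Wrap the assumed Box#bash so every command is logged before it runs
class Box
  alias_method :bash_without_logging, :bash
  def bash(command, *args)
    log.debug "[SSH>] #{command}"
    bash_without_logging(command, *args)
  end
end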

Related

Getting Console Input with Ruby's Docopt

I'm using Docopt in Ruby to parse my command options, and later in the script I get console input using gets.chomp. The problem is that all of the args from running the program are still left in ARGF after Docopt does its parsing with options = Docopt::docopt(doc), and a gets call takes from ARGF before it tries gets'ing from STDIN.
I've tried to clear ARGF, but doing ARGF.gets for some reason tries to run the input as a command. I think clearing ARGF or using another input method could both be solutions, but I haven't found anything yet. I have to imagine that I'm not the first to try to get interactive command line input in Ruby with Docopt, so I'm hoping the answer is out there.
Some more code for those who would like it:
#!/usr/bin/ruby
require 'docopt'

doc = <<eos
Usage:
  account_creator.rb --noldap [options]

Options:
  -h, --help      Show help page
  -l, --ldap
  -n, --noldap
  -s SERVER       With or without http[s]://, followed by port
  --ad
  --od
  -d NUM
  -u USER
  -p PASS
  -o OUTPUT-FILE  Default behavior is to append output to file if it exists
eos
options = {}
begin
  options = Docopt::docopt(doc)
rescue Docopt::Exit => e
  puts e.message
  exit 1
end

if options['-u']
  username = options['-u']
else
  while username.eql? '' or username == nil
    puts "Enter Username:"
    username = Kernel.gets.chomp.strip
  end
end
This is unrelated to docopt. Try it on its own:
$ cat test.rb
#!/usr/bin/ruby
puts "Enter Username:"
username = gets
$ ./test.rb something
Enter Username:
./test.rb:4:in `gets': No such file or directory - something (Errno::ENOENT)
from ./test.rb:4:in `gets'
from ./test.rb:4:in `<main>'
Kernel.gets in Ruby uses ARGF.gets. Using STDIN.gets should get you your expected behavior. See this SO question.
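For example, a minimal sketch of the question's prompt loop reading from STDIN, so leftover command-line arguments are ignored:
username = ''
while username.empty?
  puts "Enter Username:"
  username = STDIN.gets.chomp.strip
end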

Make Net::SSH update returned data packets/chunks in exec block more often

I have a Ruby script on a remote server that I'm running via Net::SSH from my local PC.
The remote script takes a few minutes to run and outputs its progress to stdout.
The problem I have is that the block in my exec command only gets called when the packet/chunk is full,
so I get the progress all in one hit about once a minute.
Here are some cut-down examples that illustrate my problem:
Server Script:
(0..999).each do |i|
  puts i
  sleep 1
end
puts 1000
Local Script:
Net::SSH.start('ip.v.4.addr', 'user', :keys => ['my_key']) do |ssh|
  ssh.exec("ruby count_to_1000.rb") do |ch, stream, data|
    puts data if stream == :stdout
  end
  ssh.loop(1)
end
Is there any way from the remote script to force the sending of the packet/chunk?
Or is there a way to set a limit of, say, a second (or n bits) before it's flushed? (within Net::SSH)
Thanks for all your help!
Try flush:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-flush
(0..999).each do |i|
  puts i
  STDOUT.flush
  sleep 1
end
Or sync:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-sync
STDOUT.sync = true
(0..999).each do |i|
  puts i
  sleep 1
end
(Untested, btw. Maybe they need to be used on the client-side instead, or on some other IO stream. But those are the two methods that immediately come to mind.)
In my test setup this works as expected (tested with localhost). However, there might be some issues with the STDOUT flushing.
You can try to write to STDOUT explicitly instead of using puts (I have heard that there is some difference that I don't really understand).
Thus, you can on your server use:
(0..999).each do |i|
  STDOUT.puts i
  sleep 1
end
STDOUT.puts 1000
# You could possibly also use "STDOUT.write 1000", but it will not append a newline like puts does.
If that does not work, then you can try to force-flush the STDOUT by using STDOUT.flush(). I believe the same can be achieved by writing an empty string to STDOUT, but I am not 1000% sure.
It might also happen that the exec command actually waits for the entire process to terminate for some reason (I was not able to figure this out from the docs). In that case you won't be able to achieve what you want, and you could consider setting up websockets, using DRb, or some other means to pass the data.

Ruby Dump all cron jobs to text file

I want a Ruby script that will dump all the existing cron jobs to a text file using "crontab -l" or anything else that achieves the same objective. The text file should also be usable with crontab txtfile to create the cron jobs again.
Below is the code I already wrote:
def dump_pre_cron_jobs(file_path)
  begin
    cron_list = %x[crontab -l]
    if cron_list.size > 0
      cron_list.each_line do |crl|
        mymethod_that_writes_tofile(file_path, crl) unless crl.chomp.include?("myfilter")
      end
    end
  rescue Exception => e
    raise(e.message)
  end
end
Why does this need to be a Ruby script?
As you say, you can dump the crontab to a file with crontab -l > crontab.txt.
To read them back in again, simply use crontab crontab.txt, or cat crontab.txt | crontab -
I agree with @Vortura that you do not need to create a Ruby script to do this.
If you really want to, here is a probable way:
File.open('crontab.txt', 'w') do |crontab|
  crontab << `crontab -l`
end
NOTE: Running this as root, or using sudo, should capture all the cron jobs on a system, not just a single user's jobs. Run it as yourself or as that user and it might capture just those jobs. I haven't tested that aspect of it.
Trying to run crontab -l to capture crontab files for all the users and packages seems the indirect way to do the task and could have the hassle of dealing with password requests hanging your code. I'd write code to comb through the directories that store them, rather than mess with prompts. Run the code using sudo and you shouldn't have any problems accessing the files.
Take a look at the discussion at: http://www.linuxquestions.org/questions/linux-newbie-8/etc-crontab-vs-etc-cron-d-vs-var-spool-cron-crontabs-853881/ for information on where the actual cron tab files are stored on disk.
Also https://superuser.com/questions/389116/how-to-recover-crontab-jobs-from-filesystem/389137 has similar information.
Mac OS varies a little from Linux in where Apple puts the cron files. Run man cron at the command-line for the definitive details on either OS.
Here's slightly-tested code for how I'd back up the files. How you restore them is for you to figure out, but it shouldn't be hard:
require 'fileutils'
BACKUP_PATH = '/path/to/some/safe/storage/directory'
CRONTAB_DIRS = %w[
/usr/lib/cron/tabs
/var/spool/cron
/etc/anacrontab
/etc/cron.d
]
CRONTAB_FILES = %w[
/etc/cron_list
]
def dump_pre_cron_jobs(file_path)
  full_backup_path = File.join(
    BACKUP_PATH,
    File.dirname(file_path)
  )
  FileUtils.mkdir_p(full_backup_path) unless Dir.exist?(full_backup_path)
  File.write(
    File.join(
      full_backup_path,
      File.basename(file_path)
    ),
    File.read(file_path)
  )
rescue Exception => e
  STDERR.puts e.message
end

CRONTAB_DIRS.each do |ct|
  next unless Dir.exist?(ct)
  begin
    Dir.glob(File.join(ct, '*')).each { |fn| dump_pre_cron_jobs(fn) }
  rescue Errno::EACCES => e
    STDERR.puts e.message
  end
end
CRONTAB_FILES.each do |fn|
dump_pre_cron_jobs(fn)
end
You'll need to run this as root via sudo to access the directories and files as they're usually locked down from unauthorized prying eyes.
The code creates a repository of crontabs, in BACKUP_PATH, based on their original file paths. No changes are made to the file contents so they can be restored as-is by copying them back via cp or writing code to reverse this process.
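For completeness, a hypothetical restore sketch under the same assumptions (the backup tree under BACKUP_PATH mirrors the original absolute paths; run it as root via sudo):
require 'fileutils'

Dir.glob(File.join(BACKUP_PATH, '**', '*')).each do |backup|
  next unless File.file?(backup)
  original = backup.sub(BACKUP_PATH, '') # strip the backup prefix to recover the original path
  FileUtils.mkdir_p(File.dirname(original))
  FileUtils.cp(backup, original)
end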

Can I get continuous output from system calls in Ruby?

When you use a system call in a Ruby script, you can get the output of that command like this:
output = `ls`
puts output
That's what this question was about.
But is there a way to show the continuous output of a system call? For example, if you run this secure copy command to get a file from a server over SSH:
scp user@someserver:remoteFile /some/local/folder/
... it shows continuous output with the progress of the download. But this:
output = `scp user@someserver:remoteFile /some/local/folder/`
puts output
... doesn't capture that output.
How can I show the ongoing progress of the download from inside my Ruby script?
Try:
IO.popen("scp -v user@server:remoteFile /local/folder/").each do |line|
  puts line
end
I think you would have better luck using a Ruby library to handle SCP (as opposed to forking a shell process). The Net::SCP library (as well as the entire Net::* family) is full featured and is used by Capistrano to handle remote commands.
Check out http://net-ssh.rubyforge.org/ for a rundown of what is available.
Tokland answered the question as I asked it, but Adam's approach was what I ended up using. Here is my completed script, which shows a running count of bytes downloaded and a percentage complete.
require 'rubygems'
require 'net/scp'
puts "Fetching file"
# Establish the SSH session
ssh = Net::SSH.start("IP Address", "username on server", :password => "user's password on server", :port => 12345)
# Use that session to generate an SCP object
scp = ssh.scp
# Download the file and run the code block each time a new chunk of data is received
scp.download!("path/to/file/on/server/fileName", "/Users/me/Desktop/") do |ch, name, received, total|
  # Calculate percentage complete and format as a two-digit percentage
  percentage = format('%.2f', received.to_f / total.to_f * 100) + '%'
  # Print on top of (replace) the same line in the terminal
  # - Pad with spaces to make sure nothing remains from the previous output
  # - Add a carriage return without a line feed so the line doesn't move down
  print "Saving to #{name}: Received #{received} of #{total} bytes" + " (#{percentage})    \r"
  # Print the output immediately - don't wait until the buffer fills up
  STDOUT.flush
end
puts "Fetch complete!"
Have you tried IO.popen?
You should be able to read the output while the process is still running and parse it accordingly.
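For example, a minimal sketch (using ping rather than scp, since scp only draws its progress meter when attached to a terminal):
IO.popen("ping -c 5 localhost") do |io|
  io.each_line { |line| puts line } # printed as each line arrives
end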
Redirecting stderr to stdout may work for you:
output = `scp user@someserver:remoteFile /some/local/folder/ 2>&1`
puts output
That should capture both stderr and stdout. You can capture stderr only by throwing away stdout:
output = `scp user@someserver:remoteFile /some/local/folder/ 2>&1 >/dev/null`
puts output
You can then use IO.popen.
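Combining both ideas, a short sketch that streams the merged output as it arrives instead of waiting for the command to finish (again, scp may suppress its progress meter when not writing to a terminal):
IO.popen("scp user@someserver:remoteFile /some/local/folder/ 2>&1") do |io|
  io.each_line { |line| puts line }
end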

Exposing console apps to the web with Ruby

I'm looking to expose an interactive command-line program via JSON or another RPC-style service using Ruby. I've found a couple of tricks to do this, but I'm missing something when redirecting the output and input.
One method, at least on Linux, is to redirect stdin and stdout to a file, then read and write to that file asynchronously with file reads and writes. Another method I've been trying, after googling around, is to use open4. Here is the code I wrote so far, but it gets stuck after reading a few lines from standard output.
require "open4"
include Open4

status = popen4("./srcds_run -console -game tf +map ctf_2fort -maxplayers 6") do |pid, stdin, stdout, stderr|
  puts "PID #{pid}"
  lines = ""
  while (line = stdout.gets)
    lines += line
    puts line
  end
  while (line = stderr.gets)
    lines += line
    puts line
  end
end
Any help on this or some insight would be appreciated!
What I would recommend is using xinetd (or similar) to run the command on some socket, and then using Ruby's networking code. One of the problems you've already run into in your code here is that your two while loops are sequential, which can cause problems.
Another trick you might try is to redirect stderr to stdout in your command, so that your program only has to read stdout. Something like this:
popen4("./srcds_run -console -game tf +map ctf_2fort -maxplayers 6 2>&1")
The other benefit of this is that you get all the messages/errors in the order they happen during the program run.
EDIT
You should consider integrating with AnyTerm. You can then either expose AnyTerm directly, e.g. via Apache mod_proxy, or have your Rails controller act as the reverse proxy (handling authentication/session validation, then playing back controller.request minus any cookies to localhost:<AnyTerm-daemon-port>, and sending back as a response whatever AnyTerm replies with.)
class ConsoleController < ApplicationController
  # AnyTerm speaks via HTTP POST only
  def update
    # validate session
    ...
    # forward request to AnyTerm
    response = Net::HTTP.post_form(URI.parse("http://localhost:#{AnyTermPort}/"), request.params)
    headers['Content-Type'] = response['Content-Type']
    render_text response.body, response.status
  end
end
Otherwise, you'd need to use IO.select or IO#read_nonblock to know when data is available to be read (from either the network or the subprocess) so you don't deadlock. See this too. Also check that your Rails is used in a multi-threaded environment, or that your Ruby version is not affected by this IO.select bug.
You can start with something along these lines:
status = POpen4::popen4("ping localhost") do |stdout, stderr, stdin, pid|
  puts "PID #{pid}"
  # our buffers
  stdout_lines = ""
  stderr_lines = ""
  begin
    loop do
      # check whether stdout, stderr or both are
      # ready to be read from without blocking
      IO.select([stdout, stderr]).flatten.compact.each { |io|
        # stdout, if ready, goes to stdout_lines
        stdout_lines += io.readpartial(1024) if io.fileno == stdout.fileno
        # stderr, if ready, goes to stderr_lines
        stderr_lines += io.readpartial(1024) if io.fileno == stderr.fileno
      }
      break if stdout.closed? && stderr.closed?
      # if we accumulated any complete lines (\n-terminated)
      # in either stdout/err_lines, output them now
      stdout_lines.sub!(/.*\n/m) { puts $&; '' }
      stderr_lines.sub!(/.*\n/m) { puts $&; '' }
    end
  rescue EOFError
    puts "Done"
  end
end
To also handle stdin, change to:
IO.select([stdout, stderr], [stdin]).flatten.compact.each { |io|
  # program ready to get stdin? do we have anything for it?
  if io.fileno == stdin.fileno && <got data from client?>
    <write a small chunk from client to stdin>
  end
  # stdout, if ready, goes to stdout_lines
