I am trying to learn how to use the net-ssh gem for Ruby. I want to execute the commands below after I log in; my session starts in the directory /home/james.
cd /
pwd
ls
When I do this with PuTTY, it works and I can see a list of directories. But when I do it with Ruby code, it does not give me the same output.
require 'rubygems'
require 'net/ssh'

host = 'server'
user = 'james'
pass = 'password123'

def get_ssh(host, user, pass)
  ssh = nil
  begin
    ssh = Net::SSH.start(host, user, :password => pass)
    puts "conn successful!"
  rescue
    puts "error - cannot connect to host"
  end
  return ssh
end

conn = get_ssh(host, user, pass)

def exec(linux_code, conn)
  puts linux_code
  result = conn.exec!(linux_code)
  puts result
end

exec('cd /', conn)
exec('pwd', conn)
exec('ls', conn)

conn.close
Output -
conn successful!
cd /
nil
pwd
/home/james
ls
nil
I was expecting pwd to give me / instead of /home/james. That is how it works in PuTTY. What is the mistake in the Ruby code?
It seems like every command runs in its own environment, so the current directory is not carried over from one exec to the next. You can verify this if you do:
exec('cd / && pwd', conn)
It will print /. It is not clear from the documentation how to make all the commands execute in the same environment, or whether this is even possible at all.
This is because each exec! is stateless: net/ssh opens a new channel, and thus a fresh shell on the server, for every command it executes, so nothing like the current directory persists between calls.
You can use the rye gem, which implements a workaround for this, but I do not know if it works with Ruby > 2, since its development is not very active.
Another way is to use a PTY process: you open a pseudo terminal running the ssh command, then use its input and output streams to write commands to the terminal and read the results. To read the results you need to use the select method of the IO class. But you need to learn how to use those utilities, since they are not that obvious for an inexperienced programmer.
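As an illustration only, a minimal sketch of that PTY approach might look like this (the host, user, and key-based authentication are assumptions; a real script would also need to handle password prompts and errors):

require 'pty'

# Spawn a pseudo terminal running ssh (assumes key-based auth, so there is
# no password prompt to answer here).
reader, writer, pid = PTY.spawn('ssh james@server')

# All of these commands run in the same remote shell, so the cd persists.
writer.puts 'cd /'
writer.puts 'pwd'
writer.puts 'ls'
writer.puts 'exit'

# Use IO.select to wait until output is available, then read it.
begin
  loop do
    ready, = IO.select([reader], nil, nil, 5)
    break unless ready
    print reader.readpartial(4096)
  end
rescue EOFError, Errno::EIO
  # The pty closes once the remote shell exits.
end

Process.wait(pid)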
And, yay, I found out how to do that, and in fact it is quite simple. I think I did not get to this solution last time because I was a little new to this whole net-ssh / pty terminal business. But I finally found it, and here is an example.
require 'net/ssh'

shell = {} # this will save the open channel so that we can use it across threads
threads = []

# the shell thread
threads << Thread.new do
  # Connect to the server
  Net::SSH.start('localhost', 'your_user_name', password: 'your_password') do |session|
    # Open an ssh channel
    session.open_channel do |channel|
      # Send a shell request; this will open an interactive shell on the server
      channel.send_channel_request "shell" do |ch, success|
        if success
          # Save the channel to be used in the other thread to send commands
          shell[:ch] = ch
          # Register a data event;
          # this will be triggered whenever there is data (output) from the server
          ch.on_data do |ch, data|
            puts data
          end
        end
      end
    end
    # Keep processing the connection's event loop so the callbacks above fire;
    # this returns once the shell channel is closed
    session.loop
  end
end

# the commands thread
threads << Thread.new do
  # Wait until the shell channel has been opened by the other thread
  sleep 0.1 until shell[:ch]
  loop do
    # This will prompt for a command in the terminal
    print ">"
    cmd = gets
    # Make sure that cmd ends with "\n";
    # since in this example cmd comes from the user, it already ends with a trailing eol
    shell[:ch].send_data cmd
    # exit if the user enters the exit command
    break if cmd == "exit\n"
  end
end

threads.each(&:join)
And here we are: an interactive terminal using the net-ssh Ruby gem.
For more info look here; it's for the previous version 1, but it is very useful for understanding how every piece works. And here.
Related
I would like to set Linux environment variables in a Net::SSH.start session and use them further down in my code, but I am losing the scope of the variables. Can you please advise how that can be achieved?
I am using net-ssh and logging into Linux via an RSA key. I have set an environment variable which I would like to use further down, but I am losing the scope of the variable.
ssh = Net::SSH.start(host, username)
result = ssh.exec!('setenv SYBASE /opt/sybase && printenv') ### Can See environment variable SYBASE
puts result
puts "**********************************************************************************"
result = ssh.exec!('printenv') #### Lost the environment variable SYBASE set above
puts result
puts "&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&"
Each exec creates an environment of its own, so environment variables are lost between calls.
As you already do with && (execute the next command only if the first succeeds) or with ; (execute regardless), you can chain commands.
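For example, a minimal sketch of that idea, reusing host and username from the question (the follow-up command is a placeholder; note that export assumes a POSIX shell, while the setenv in the question is csh syntax):

require 'net/ssh'

Net::SSH.start(host, username) do |ssh|
  # Everything that needs SYBASE goes into the same exec! call,
  # because each exec! starts a fresh shell on the server.
  result = ssh.exec!('export SYBASE=/opt/sybase && printenv SYBASE && ls "$SYBASE"')
  puts result
end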
You can also pass a block like this to do multiple actions:
Net::SSH.start("host", "user") do |ssh|
ssh.exec! "cp /some/file /another/location"
hostname = ssh.exec!("hostname")
ssh.open_channel do |ch|
ch.exec "sudo -p 'sudo password: ' ls" do |ch, success|
abort "could not execute sudo ls" unless success
ch.on_data do |ch, data|
print data
if data =~ /sudo password: /
ch.send_data("password\n")
end
end
end
end
ssh.loop
end
Or use the gem net-ssh-session.
@peter, thank you for suggesting net-ssh-session.
However, net-ssh-session needed a rebuild to make it compatible with net-ssh version 5.2.0.
The example here works great and is what I needed.
I am trying to run multiple Ruby scripts simultaneously on my Mac, and I'm not having any luck. I can see the Ruby processes start up, but then they immediately stop. The script works fine as a single process, with no errors. Here are some examples of things I've tried.
10.times do
system "nohup ruby program.rb \"arg1 arg2\" &"
end
10.times do
`nohup ruby program.rb \"arg1 arg2\" &`
end
10.times do
system "ruby program.rb \"arg1 arg2\""
end
Do you need to start it from Ruby for any specific reason? Why don't you start it 10 times directly from bash? Like:
$ for i in $(seq 1 10); do nohup ruby foo.rb & done
Let me know..
nohup redirects its output to a file nohup.out (or $HOME/nohup.out), unless it is explicitly redirected. You should redirect the output of each invocation to a different file.
Also, to be on the safe side, I would redirect stdin to /dev/null, just in case the called program reads from stdin.
10.times do |i|
system "nohup ruby program.rb 'arg1 arg2' </dev/null >#{ENV['HOME']}/nohup#{i}.out &"
end
BTW (and off topic): are you sure that you want to pass arg1 arg2 as a SINGLE argument to program.rb?
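If they are really meant to be two separate arguments, one possible sketch (the file names are placeholders) passes them individually and uses Kernel#spawn with an :out redirection plus Process.detach instead of shelling out to nohup ... &:

10.times do |i|
  # Each argument is passed separately, so no shell quoting is involved,
  # and each child gets its own output file.
  pid = spawn('ruby', 'program.rb', 'arg1', 'arg2',
              in: '/dev/null', out: "out#{i}.log", err: [:child, :out])
  Process.detach(pid) # we are not waiting for the children, so avoid zombies
end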
You can build a solution with fork, exec and wait of the module Process.
# start child processes
10.times { fork { exec(cmd) } }
# wait for child processes
10.times { Process.wait }
Or a bit longer, to play around with (tested with Ruby 1.8.7 on Ubuntu). rescue nil is added to suppress the error when there are no more children to wait for.
10.times do |i|
  fork do
    ruby_cmd = "sleep(#{10 - i}); puts #{i}"
    exec("ruby -e \"#{ruby_cmd}\"")
  end
end
10.times { Process.wait rescue nil }
puts "Finished!"
I'm using:
- Ruby 1.9.3-p448
- Windows Server 2008
I have a file that contains commands that are used by a program; I'm using it in this way:
C:\> PATH_TO_FOLDER/program.exe file.txt
File.txt has some commands, so program.exe will do the following:
- Executes the commands
- Reads from a DB using an ODBC method used by the program
- Outputs the result in a txt file
Using PowerShell, this command works fine and as expected.
Now I have this in a file (app.rb):
require 'sinatra'
require 'open3'

get '/process' do
  program_path = "path to program.exe"
  file_name = "file.txt"
  Open3.popen3(program_path, file_name) do |i, o, e, w|
    # I have some commands here to execute but just as an example I'm using o.read
    puts o.read
  end
end
Now, when I use this by accessing http://localhost/process, Open3 works by doing this (I'm not 100% sure, but after trying several times I think this is the only option):
- Reads commands and executes them (this is OK)
- Tries to read from the DB by using the ODBC method (here is my problem: I need to receive some output from Open3 so I can show it in a browser, but I guess when it tries to read, it starts another process that Open3 is not aware of, so Open3 goes on and finishes without waiting for it)
- Exits
I've found out about the following:
- Use Thread#join (in this case, w.join) in order to wait for the process to finish, but it doesn't work
- Open4 seems to handle child processes but doesn't work on Windows
- Process.wait(pid), in this case pid = w.pid, but it also doesn't work
- Timeout.timeout(n), but the problem here is that I'm not sure how long it will take
Is there any way of handling this (waiting for the Open3 subprocess so I get proper output)?
We had a similar problem getting the exit status, and this is what we did:
require 'open3'
require 'timeout'

Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
  # print stdout and stderr as it comes in
  threads = [stdout, stderr].collect do |output|
    Thread.new do
      while ((line = output.gets rescue '') != nil) do
        unless line.blank? # String#blank? comes from ActiveSupport
          puts line
        end
      end
    end
  end

  # get exit code as a Process::Status object
  process_status = wait_thr.value #.exitstatus

  # wait for logging threads to finish before continuing
  # so we don't lose any logging output
  threads.each(&:join)

  # wait up to 5 minutes to make sure the process has really exited
  Timeout::timeout(300) do
    while !process_status.exited?
      sleep(1)
    end
  end rescue nil

  process_status.exitstatus.to_i
end
Using Open3.popen3 is easy only for trivial cases. I do not know the real code for handling the input, output, and error channels of your subprocess. Neither do I know the exact behaviour of your subprocess: does it write on stdout? Does it write on stderr? Does it try to read from stdin?
This is why I assume that there are problems in the code that you replaced by puts o.read.
A good summary of the problems you can run into is at http://coldattic.info/shvedsky/pro/blogs/a-foo-walks-into-a-bar/posts/63.
Though I disagree with the author of the article, Pavel Shved, when it comes to finding a solution; he recommends his own solution. I just use one of the wrapper functions for popen3 in my projects: Open3.capture*. They do all the difficult things like waiting for stdout and stderr at the same time.
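Applied to the question above, that could look roughly like this (the program path and file name are placeholders); capture3 waits for the child to exit and hands back its output, so there is nothing extra to join or wait on:

require 'open3'

# capture3 runs the command, waits for it to finish, and returns
# stdout, stderr and the Process::Status in one call.
stdout, stderr, status = Open3.capture3('C:/path/to/program.exe', 'file.txt')

puts stdout
warn stderr unless stderr.empty?
puts "exit status: #{status.exitstatus}"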
Often I find myself needing to write scripts that have to execute some portions as a normal user and other portions as a superuser. I am aware of one similar question on SO where the answer was to run the same script twice and execute it with sudo; however, that is not sufficient for me. Sometimes I need to revert to being a normal user after a sudo operation.
I have written the following in Ruby to do this
#!/usr/bin/ruby
require 'rubygems'
require 'highline/import'
require 'pty'
require 'expect'

def sudorun(command, password)
  `sudo -k`
  # Note: PTY.spawn yields the process's output stream first and its input
  # stream second; the parameter names below keep the original naming.
  PTY.spawn("sleep 1; sudo -u root #{command} 2>&1") { |stdin, stdout, pid|
    begin
      stdin.expect(/password/) {
        stdout.write("#{password}\n")
        puts stdin.read.lstrip
      }
    rescue Errno::EIO
    end
  }
end
Unfortunately, with that code, if the user enters the wrong password the script crashes. Ideally it should give the user three tries to get the sudo password right. How do I fix this?
I am running this on Linux Ubuntu BTW.
In my opinion, running a script that does stuff internally with sudo is wrong. A better approach is to have the user run the whole script with sudo, and have the script fork lesser-privileged children to do stuff:
# Drops privileges to those of the specified user
def drop_priv user
  Process.initgroups(user.username, user.gid)
  Process::Sys.setegid(user.gid)
  Process::Sys.setgid(user.gid)
  Process::Sys.setuid(user.uid)
end

# Execute the provided block in a child process as the specified user.
# The parent blocks until the child finishes.
def do_as_user user
  unless pid = fork
    drop_priv(user)
    yield if block_given?
    exit! 0 # prevent remainder of script from running in the child process
  end
  puts "Child running as PID #{pid} with reduced privs"
  Process.wait(pid)
end

at_exit { puts 'Script finished.' }

User = Struct.new(:username, :uid, :gid)
user = User.new('nobody', 65534, 65534)

do_as_user(user) do
  sleep 1 # do something more useful here
  exit! 2 # optionally provide an exit code
end

puts "Child exited with status #{$?.exitstatus}"

puts 'Running stuff as root'
sleep 1

do_as_user(user) do
  puts 'Doing stuff as a user'
  sleep 1
end
This example script has two helper methods. #drop_priv takes an object with username, uid, and gid defined and properly reduces the permissions of the executing process. The #do_as_user method calls #drop_priv in a child process before yielding to the provided block. Note the use of #exit! to prevent the child from running any part of the script outside of the block while avoiding the at_exit hook.
Often overlooked security concerns to think about:
Inheritance of open file descriptors
Environment variable filtering
Run children in a chroot?
Depending on what the script is doing, any of these may need to be addressed. #drop_priv is an ideal place to handle all of them.
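As a rough sketch only (the whitelisted variables and the chroot directory here are assumptions, not part of the original answer), #drop_priv could be extended along these lines:

def drop_priv(user)
  # Filter the environment down to a known-safe whitelist
  ENV.keep_if { |name, _| %w[PATH LANG TERM].include?(name) }

  # Optionally confine the child to an empty chroot; this must happen
  # while the process is still root:
  # Dir.chroot('/var/empty')
  # Dir.chdir('/')

  Process.initgroups(user.username, user.gid)
  Process::Sys.setegid(user.gid)
  Process::Sys.setgid(user.gid)
  Process::Sys.setuid(user.uid)

  # Close any inherited file descriptors beyond stdin/stdout/stderr
  ObjectSpace.each_object(IO) do |io|
    begin
      io.close unless io.closed? || io.fileno <= 2
    rescue IOError
      # already closed elsewhere; ignore
    end
  end
end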
If it is possible, you could move the stuff you want executed as root to a separate file and use the system() function to run it with sudo, including the sudo prompt, etc.:
system("sudo ruby stufftorunasroot.rb")
The system() function is blocking, so the flow of your program doesn't need to be changed.
I do not know if this is what you want or need, but have you tried sudo -A (search the web or the man page for SUDO_ASKPASS, which might have a value like /usr/lib/openssh/gnome-ssh-askpass or similar)? This is what I use when I need to present a graphical password dialogue to users in GUI environments.
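A minimal sketch of that approach (the askpass helper path is just one common example; use whichever graphical helper is installed):

# sudo -A asks for the password via the program named in SUDO_ASKPASS
ENV['SUDO_ASKPASS'] = '/usr/lib/openssh/gnome-ssh-askpass'
system('sudo', '-A', 'whoami')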
Sorry if this is the wrong answer, maybe you really want to remain on the console.
#!/usr/bin/ruby
# ... blabla, other code
# part which requires sudo:
system "sudo -p 'sudo password: ' #{command}"
# other stuff
# sudo again
system "sudo -p 'sudo password: ' #{command}"
# usually sudo 'remembers' that you just authenticated yourself successfuly and doesn't ask for the PW again...
# some more code...
I have the following script:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'

Net::SSH.start('host1', 'root', :password => "mypassword1") do |ssh|
  stdout = ""
  ssh.exec("cd /var/example/engines/")
  ssh.exec!("pwd") do |channel, stream, data|
    stdout << data if stream == :stdout
  end
  puts stdout
  ssh.loop
end
And I get /root, instead of /var/example/engines/.
ssh.exec("cd /var/example/engines/; pwd")
That will execute the cd command, then the pwd command in the new directory.
I'm not a ruby guy, but I'm going to guess there are probably more elegant solutions.
In Net::SSH, #exec and #exec! are the same, i.e. they execute a command (with the exception that exec! blocks other calls until it's done). The key thing to remember is that Net::SSH essentially runs every command from the user's home directory when using exec/exec!. So, in your code, you are running cd /some/path from the /root directory and then pwd, again from the /root directory.
The simplest way I know to run multiple commands in sequence is to chain them together with && (as mentioned above by other posters). So it would look something like this:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'

Net::SSH.start('host1', 'root', :password => "mypassword1") do |ssh|
  stdout = ""
  ssh.exec!("cd /var/example/engines/ && pwd") do |channel, stream, data|
    stdout << data if stream == :stdout
  end
  puts stdout
  ssh.loop
end
Unfortunately, the Net::SSH shell service was removed in version 2.
You can just give different commands separated by a new line. Something like:
result = ssh.exec!("cd /var/example/engines/
pwd
")
puts result
It's probably easier (and clearer) to assign the command to a variable, then pass the variable into exec. Same principle, though.
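For example, a small sketch of that variation, reusing the ssh session from the snippets above:

# Build the multi-line command first, then hand it to exec! in one call;
# both lines run in the same remote shell.
commands = ["cd /var/example/engines/", "pwd"].join("\n")
result = ssh.exec!(commands)
puts result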
See if there's something analogous to the FileUtils cd block syntax; otherwise just run the commands in the same subshell, e.g. ssh.exec "cd /var/example/engines/; pwd"?
I'm not a Ruby programmer, but you could try concatenating your commands with ; or &&.
There used to be a shell service in net/ssh v1 which allowed stateful commands like you're trying to do, but it has been removed in v2. However, there's a side project by the author of net/ssh that allows you to do that. Have a look here: http://github.com/jamis/net-ssh-shell
The current location of net-ssh-shell has changed.
What I decided to do, though, to run an arbitrary shell script, is to scp a file to the remote machine and source it into the shell. Basically, something like this:
File.write(script_path, script_str)
gear.ssh.scp_to(script_path, File.dirname(script_path))
gear.ssh.exec(". #{script_path}")
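For reference, gear.ssh above is a project-specific wrapper; with plain net-ssh and net-scp the same idea might look roughly like this (host, user, and paths are placeholders):

require 'net/ssh'
require 'net/scp'

script_str  = "cd /var/example/engines\npwd\nls\n"
script_path = '/tmp/setup.sh'

File.write('setup.sh', script_str)

Net::SSH.start('server', 'james', password: 'password123') do |ssh|
  # Copy the script over, then source it in a single remote shell so that
  # all of its commands share the same working directory and environment.
  ssh.scp.upload!('setup.sh', script_path)
  puts ssh.exec!(". #{script_path}")
end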