Ruby: how to pass an arg to a system command during execution

Good day!
It's a very simple question, I think, but I can't figure out how to handle it, so I'm asking for advice or direction.
I make a system call to a Unix command, and during its execution it asks me to input a string description. How can I do that?
Thank you!
Seems like my problem is half solved. To make it absolutely clear, can anybody tell me why this code:
#!/usr/local/bin/ruby19
#Process.daemon(true)
exec "/bin/cp src dst"
works fine, but if the # in front of Process.daemon(true) is removed, it does nothing?

You can use IO.pipe and spawn (in Ruby 1.9.3) to create a pipe to the other process that you can write to. For example:
r, w = IO.pipe
spawn("cat", :in => r)
r.close
# then write to the pipe, which `cat` will read from
w.write("hello\n")


Ruby spawn process, capturing STDOUT/STDERR, while behaving as if it were spawned regularly

What I'm trying to achieve:
From a Ruby process, spawning a subprocess
The subprocess should print to the terminal as normal. By "normal", I mean the process shouldn't lose color output or ignore user input (STDIN).
For that subprocess, capturing STDOUT/STDERR (jointly), e.g. into a String variable that can be accessed after the subprocess is dead. Escape characters and all.
Capturing STDOUT/STDERR is possible by passing a different IO pipe; however, the subprocess can then detect that it's not in a tty. For example, git log will not print the characters that control text color, nor use its pager.
Using a pty to launch the process essentially "tricks" the subprocess into thinking it's being launched by a user. As far as I can tell, this is exactly what I want, and the result ticks all the boxes.
My general test of whether a solution fits my needs is:
Does it run ls -al normally?
Does it run vim normally?
Does it run irb normally?
The following Ruby code is able to check all the above:
to_execute = "vim"
output = ""

require 'pty'
require 'io/console'

# Open a pseudo-terminal; the child writes to the slave, we read from the master.
master, slave = PTY.open
slave.raw!
pid = ::Process.spawn(to_execute, :in => STDIN, [:out, :err] => slave)
slave.close

# Keep the pty's window size in sync with the real terminal.
master.winsize = $stdout.winsize
Signal.trap(:WINCH) { master.winsize = $stdout.winsize }

# Forward CTRL+C to the child.
Signal.trap(:SIGINT) { ::Process.kill("INT", pid) }

# Mirror the child's output to our terminal while capturing it.
master.each_char do |char|
  STDOUT.print char
  output.concat(char)
end

::Process.wait(pid)
master.close
This works for the most part, but it turns out it's not perfect. For some reason, certain applications seem to fail to switch into a raw state. Even though vim works perfectly fine, it turned out neovim did not. At first I thought it was a bug in neovim, but I have since been able to reproduce the problem using the Termion crate for the Rust language.
By setting the tty to raw manually (IO.console.raw!) before executing, applications like neovim behave as expected, but then applications like irb do not.
Oddly, spawning another pty in Python within this pty allows the application to work as expected (using python -c 'import pty; pty.spawn("/usr/local/bin/nvim")'). This obviously isn't a real solution, but it's interesting nonetheless.
For my actual question I guess I'm looking towards any help to resolving the weird raw issue or, say if I've completely misunderstood tty/pty, any different direction to where/how I should look at the problem.
[edited: see the bottom for the amended update]
Figured it out :)
To really understand the problem I read up a lot on how a PTY works. I don't think I really understood it properly until I drew it out. Basically, a PTY could be used for a terminal emulator, and that was the simplest way to think of its data flow:
keyboard -> OS -> terminal -> master pty -> termios -> slave pty -> shell
                                               |
                                               v
 monitor <- OS <- terminal <- master pty <- termios
(note: this might not be 100% correct; I'm definitely no expert on the subject, just posting it in case it helps anybody else understand it)
So the important bit in the diagram that I hadn't really realised was that when you type, the only reason you see your input on screen is because it's passed back (leftwards) to the master.
So first things first: this Ruby script should set the tty to raw (IO.console.raw!) and restore it after execution is finished (IO.console.cooked!). This makes sure the keyboard inputs aren't printed by this parent Ruby script.
Second, the slave itself should not be raw, so the slave.raw! call is removed. To explain this: I originally added it because it removes extra carriage returns from the output (running echo hello results in "hello\r\n"). What I missed was that this carriage return is a key instruction to the terminal emulator (whoops).
Third thing, the process should only be talking to the slave. Passing STDIN felt convenient, but it upsets the flow shown in the diagram.
This brings up a new problem of how to pass user input through, so I tried the following, which basically passes STDIN to the master:
input_thread = Thread.new do
  STDIN.each_char do |char|
    master.putc(char) rescue nil
  end
end
That kind of worked, but it had its own issues: some interactive processes would occasionally miss a keypress. Time will tell, but using IO.copy_stream instead appears to solve that issue (and it reads much more nicely, of course).
input_thread = Thread.new { IO.copy_stream(STDIN, master) }
update 21st Aug:
So the above example mostly worked, but for some reason keys like CTRL+C still wouldn't behave correctly. I even looked at other people's approaches to see what I could be doing wrong, and they took effectively the same approach; IO.copy_stream(STDIN, master) was successfully sending 3 to the master. None of the following seemed to help at all:
master.putc 3
master.putc "\x03"
master.putc "\003"
Before I went and delved into trying to achieve this in a lower-level language, I tried out one more thing: the block syntax. Apparently the block syntax magically fixes this problem.
To prevent this answer getting a bit too verbose, the following appears to work:
require 'pty'
require 'io/console'

def run
  output = ""
  IO.console.raw!              # stop the parent script echoing keystrokes
  input_thread = nil

  PTY.spawn('bash') do |read, write, pid|
    Signal.trap(:WINCH) { write.winsize = STDOUT.winsize }
    input_thread = Thread.new { IO.copy_stream(STDIN, write) }

    # Mirror the child's output to our terminal while capturing it.
    read.each_char do |char|
      STDOUT.print char
      output.concat(char)
    end

    Process.wait(pid)
  end

  input_thread.kill if input_thread
  IO.console.cooked!           # restore the terminal
end

Bundler.send(:with_env, Bundler.clean_env) do
  run
end
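(Aside: Bundler.clean_env and the private with_env used on the last lines were deprecated in Bundler 2.1; on newer versions the public Bundler.with_unbundled_env { run } achieves the same thing.)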

Ruby: 'no implicit conversion of nil into String (TypeError)' from Zed Shaw's LRTHW

Just beginning to learn programming, and starting out with Ruby. I'm doing some copy-typing exercises from Zed Shaw's Learn Ruby the Hard Way.
In doing exercises 15 and 16, covering opening files, I run into the same problem when I try to run this line:
target = open(filename, 'w')
I get the message:
say.rb:10:in `open': no implicit conversion of nil into String (TypeError)
        from say.rb:10:in `<main>'
What does this mean? And how do I correct this?
Thanks in advance!
I think your filename variable is not initialized, i.e. it is nil. Make sure it is set before calling open:
filename = "path/to/file"
target = open(filename, 'w') unless filename.nil?
Everyone: thanks for peering into my question. A friend of mine figured out what the problem was. I had forgotten that the whole point of the exercise was to use ARGV, and that was the missing link, and thus the error. The first line of code is filename = ARGV.first. When I was running this in the Terminal, I ran it as ruby say.rb, without supplying a filename afterwards. I should have run it as ruby say.rb whateverfilename.doc. Without that, the program had nothing to chew on. Thanks again!
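To make the fix concrete, here is a minimal sketch of what the exercise intends (the usage message is my own addition):
filename = ARGV.first                         # nil unless an argument was given
abort("usage: ruby say.rb FILENAME") if filename.nil?
target = open(filename, 'w')                  # open needs a String, hence the TypeError on nil
target.write("hello\n")
target.close
Run it as ruby say.rb whateverfilename.doc and the error goes away.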

Ruby - Running rb file from script [duplicate]

This question already has answers here:
Running another ruby script from a ruby script
(7 answers)
Closed 7 years ago.
I'd like to write a ruby script that then calls another ruby script.
For example, I'd like to run "test1.rb" from my script.
The test1.rb has been simplified to just do this:
print "1"
Then get the result (-> 1).
I tried to solve this with backticks and other execution methods (%x[#{"test1.rb"}], system("test1.rb"), etc.), but it didn't work.
So any idea how I call one script that then calls another script (either relinquishing total control or forking), and get the results?
Thanks
You can simply require the file, which will load the code and execute it:
require_relative "path/test1"
For the sake of having control over the code being run, I would advise placing your script in a method:
# In test1.rb
def exec_my_script
  puts 1
end

# In your main script
require_relative "path/test1"
exec_my_script
EDIT: OK, since this does not seem to work for your use case, you can read the file as a string and eval the string, like so:
result = eval(File.read("path/test1.rb"))
# do something with result
I do NOT like this approach, because it feels hacky and is decidedly insecure, and it will only work if the last expression evaluated in your test1 script returns the result you need...
You may want to use open3:
require 'open3'

cmd = 'ruby test1.rb'
# You may change the contents of cmd as you would on the command line, e.g. ruby [directory]/filename

Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  var = stdout.read
  puts var
end
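If all you need is the output and the exit status, Open3.capture2 is a shorter route; a minimal sketch:
require 'open3'

stdout_str, status = Open3.capture2('ruby', 'test1.rb')
puts stdout_str if status.success?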
system('ruby test1.rb') # should do the trick
You can also make test1 executable (with chmod) and add the shebang line at the top of test1.
You then can call
system("./test1.rb")
OK, the system('ruby test1.rb') and system("ruby", "test1.rb") commands worked.
Now I'd like to assign the returned output ("1") to a variable.
How do I do that? Is it possible with backticks?
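It is; backticks (and the equivalent %x[...]) run the command and return its standard output as a String:
result = `ruby test1.rb`   # => "1"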

How to prevent capistrano replacing newlines?

I want to run some shell scripts remotely as part of my capistrano setup. To test that functionality, I use this code:
execute <<SHELL
cat <<TEST
something
TEST
SHELL
However, that is actually running /usr/bin/env cat <<TEST; something; TEST which is obviously not going to work. How do I tell capistrano to execute the heredoc as I have written it, without converting the newlines into semicolons?
I have Capistrano Version: 3.2.1 (Rake Version: 10.3.2) and do not know ruby particularly well, so there might be something obvious I missed.
I think it might work to just specify the arguments to cat as a second, er, argument to execute:
cat_args = <<SHELL
<<TEST
something
TEST
SHELL
execute "cat", cat_args
From the code @DavidGrayson posted, it looks like only the command (the first argument to execute) is sanitized.
I agree with David, though, that the simpler way might be to put the data in a file, which is what the SSHKit documentation suggests:
Upload a file from a stream
on hosts do |host|
  file = File.open('/config/database.yml')
  io = StringIO.new(....)
  upload! file, '/opt/my_project/shared/database.yml'
  upload! io, '/opt/my_project/shared/io.io.io'
end
The IO streaming is useful for uploading something rather than "cat"ing it, for example
on hosts do |host|
  contents = StringIO.new('ALL ALL = (ALL) NOPASSWD: ALL')
  upload! contents, '/etc/sudoers.d/yolo'
end
This spares one from having to figure out the correct escaping sequences for something like "echo(:cat, '...?...', '> /etc/sudoers.d/yolo')".
This seems like it would work perfectly for your use case.
The code responsible for this sanitization can be found in SSHKit::Command#sanitize_command!, which is called by that class's initialize method. You can see the source code here:
https://github.com/capistrano/sshkit/blob/9ac8298c6a62582455b1b55b5e742fd9e948cefe/lib/sshkit/command.rb#L216-226
You might consider monkeypatching it to do nothing by adding something like this to the top of your Rakefile:
SSHKit::Command # force the class to load so we can re-open it

class SSHKit::Command
  def sanitize_command!
    # no-op: skip the newline-to-semicolon sanitization entirely
  end
end
This is risky and could introduce problems in other places; for example there might be parts of Capistrano that assume that the command has no newlines.
You are probably better off making a shell script that contains the heredoc or putting the heredoc in a file somewhere.
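As a sketch of that last suggestion (the remote path here is my own, hypothetical choice), you could upload the heredoc as a script and then run it:
on hosts do |host|
  script = StringIO.new(<<-SHELL)
cat <<TEST
something
TEST
  SHELL
  upload! script, '/tmp/heredoc_test.sh'
  execute :bash, '/tmp/heredoc_test.sh'
end
The heredoc body stays at column zero so that the inner TEST terminator is unindented in the uploaded shell script.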
Ok, so this is the solution I figured out myself, in case it's useful for someone else:
str = %x(
base64 <<TEST
some
thing
TEST
).delete("\n")
execute "echo #{str} | base64 -d | cat -"
As you can see, I'm base64-encoding my command, sending it through, then decoding it on the server side, where it can be evaluated intact. This works, but it's a really ugly hack; I hope someone can come up with a better solution.

How do I ensure only one instance of a Ruby script is running at a time?

I have a process that runs on cron every five minutes. Usually, it takes only a few seconds to run, but sometimes it takes several minutes. I want to ensure that only one version of this is running at a time.
I tried an obvious way...
File.open("/tmp/indexer_lock.tmp",'w') do |f|
exit unless f.flock(File::LOCK_EX)
end
...but it's not testing to see if it can get the lock, it's blocking until the lock is released.
Any idea what I'm missing? I'd rather not hack something using ps, but that's an alternative.
I know this is old, but for anyone interested, there's a non-blocking constant that you can pass to flock so that it returns instead of blocking.
File.new("/tmp/foo.lock").flock( File::LOCK_NB | File::LOCK_EX )
Update for slhck
flock returns a truthy value (0) if this process obtained the lock, and false otherwise. So to ensure just one process is running at a time, you just want to try to get the lock and exit if you weren't able to. It's as simple as putting an exit unless in front of the line of code I have above:
exit unless File.new("/tmp/foo.lock").flock( File::LOCK_NB | File::LOCK_EX )
Depending on your needs, this should work just fine and doesn't require creating another file anywhere.
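One wrinkle: File.new raises Errno::ENOENT if the lock file doesn't exist yet, so in practice you may want to create it on first use; a minimal sketch:
lockfile = File.open("/tmp/foo.lock", File::RDWR | File::CREAT, 0644)
exit unless lockfile.flock(File::LOCK_NB | File::LOCK_EX)
# ... do the work; the lock is released when the process exits ...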
exit unless DATA.flock(File::LOCK_NB | File::LOCK_EX)
# your script here
__END__
DO NOT REMOVE: required for the DATA object above.
Although this isn't directly answering your question, if I were you I'd probably write a daemon script (you could use http://daemons.rubyforge.org/).
You could have your indexer (assuming it's indexer.rb) run through a wrapper script named script/index, for example:
require 'rubygems'
require 'daemons'
Daemons.run('indexer.rb')
And your indexer can do almost the same thing, except you specify a sleep interval:
loop do
  # code executing your indexing
  sleep INDEXING_INTERVAL
end
This is how job processors in tandem with a queue server usually function.
You could create and delete a temporary file and check for the existence of this file.
Please check the answer to this question:
one instance shell script
There's a lockfile gem for exactly this situation. I've used it before and it's dead simple.
If you're using cron, it might be easier to do something like this in the shell script that cron calls:
#!/usr/local/bin/bash
#
if ps -C $PROGRAM_NAME &> /dev/null ; then
    : # Program is already running; appropriate action can be performed here (kill it?)
else
    # Program is not running; launch it.
    $PROGRAM_NAME
fi
Here's a one-liner that should work at the top of any Ruby script:
exit unless File.new(__FILE__).tap { |f| f.autoclose = false }.flock(File::LOCK_NB | File::LOCK_EX)
There are two issues with the original code.
First, the reason it's blocking is that the call to #flock is missing File::LOCK_NB:
Don't block when locking. May be combined
with other lock options using logical or.
Second, if a File object is closed (whether at the end of an #open block as in the code above, via an explicit #close, or implicitly auto-closed when the File is garbage-collected), the underlying file descriptor is closed and the lock is released. To prevent this you can set #autoclose = false.
OK, working off notes from @shodanex's pointer, here's what I have. I rubied it up a little bit (though I don't know of a touch analogue in Ruby).
tmp_file = File.expand_path(File.dirname(__FILE__)) + "/indexer.lock"

if File.exists?(tmp_file)
  puts "quitting"
  exit
else
  `touch #{tmp_file}`
end

# .. do stuff ..

File.delete(tmp_file)
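(For what it's worth, the standard library does have a touch analogue, which avoids shelling out:)
require 'fileutils'
FileUtils.touch(tmp_file)   # stdlib equivalent of `touch #{tmp_file}`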
Can you not add File::LOCK_NB to your lock, to make it non-blocking (i.e. it fails if it can't get the lock)?
That would work in C, Perl, etc.
At a higher level, you might find the lock_method gem useful:
def the_method_my_cron_job_calls
  # something really expensive
end

lock_method :the_method_my_cron_job_calls
It uses lockfiles stored on the local filesystem (what was being discussed above) by default, but you can also configure remote lock storage:
LockMethod.config.storage = Redis.new([...]) # a remote RedisToGo instance, perhaps?
Also...
def the_method_my_cron_job_calls
  # something really expensive
end

lock_method :the_method_my_cron_job_calls, (60*60) # automatically expire lock after an hour
