Ruby file output from fork

I have a simple launcher script:
fork do
  STDOUT.reopen(File.open('/tmp/log', 'w+'))
  STDOUT.sync = true
  exec 'bundle exec ruby script.rb'
end
script.rb:
loop do
  sleep 1
  puts "MESSAGE"
end
When it runs, all output is buffered and written to /tmp/log in big chunks.
It only works as expected if I modify the script to flush explicitly:
$stdout.puts "MESSAGE"
$stdout.flush
How can I get the same behaviour without modifying script.rb?
Thanks.

When you call exec, you start a new program in place of the forked process. Although that program inherits the file you reopened as standard out, it doesn't inherit Ruby-level IO settings, in particular the sync setting.
In order to get unbuffered output in the new process, you need to set sync in that process. If you don't want to modify script.rb, one workaround is to create another file, named something like sync.rb, containing just:
STDOUT.sync = true
which you can then require when running your command:
exec 'bundle exec ruby -r./sync script.rb'
The new Ruby process will now require sync.rb, which simply sets sync mode on STDOUT to true before executing your script.
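Putting it together, the launcher from the question stays the same apart from the command line (a minimal sketch; the STDOUT.sync line can be dropped there since, as noted above, it has no effect on the process started by exec):
fork do
  STDOUT.reopen(File.open('/tmp/log', 'w+'))
  exec 'bundle exec ruby -r./sync script.rb'
end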

Related

Running several 'exec' in a ruby loop

I'm scanning a folder for audio files and converting them to mp3.
This works great in Ruby.
However, once the first transcoding is done, it stops the whole loop.
Here's a breakdown of my code:
def scanFolder
  # lots of code above to get folder list, check for incorrect files etc..
  audioFileList.each { |getFile|
    exec_command = "ffmpeg #{getFile} #{newFileName}"
    exec exec_command
  }
end
What's happening is that it transcodes the first file it finds, then stops the whole function. Is there a way to force it to continue?
ffmpeg does run and finish correctly at the moment, so it's not breaking anything.
exec replaces the current process by running the given command. Example:
2.0.0-p598 :001 > exec 'echo "hello"'
hello
shivam#bluegene:$
You can see how exec replaces the irb process with the echo command, which then exits automatically.
Therefore, try using system instead. Here's the same example using system:
2.0.0-p598 :003 > system 'echo "hello"'
hello
=> true
2.0.0-p598 :004 >
You can see I am still in irb and it has not exited after executing the command.
This makes your code as follows:
def scanFolder
  # lots of code above to get folder list, check for incorrect files etc..
  audioFileList.each { |getFile|
    exec_command = "ffmpeg #{getFile} #{newFileName}"
    system exec_command
  }
end
Along with shivam's answer about using system, the spawn method may also be useful here:
http://ruby-doc.org//core-2.1.5/Process.html#method-c-spawn
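For example, a rough sketch of the loop using Process.spawn, reusing the question's variable names:
audioFileList.each do |getFile|
  # spawn starts ffmpeg in a child process and returns its pid right away,
  # so the Ruby loop itself is never replaced
  pid = Process.spawn("ffmpeg", getFile, newFileName)
  # wait for this conversion to finish before starting the next one
  Process.wait(pid)
end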

How is the exit command properly executed in Ruby?

I have been trying to create a Ruby gem that simply exits my terminal whenever "x" is entered. Here is my main project file:
module SwissKnife
  VERSION = '0.0.1'

  class ConsoleUtility
    def exit
      `exit`
    end
  end
end
and my executable:
#!/usr/bin/env ruby
require 'swissknife'
util = SwissKnife::ConsoleUtility.new
util.exit
For some reason, whenever I run this, nothing appears to happen. I debugged it by adding a simple puts 'Hello World!', and it would print "Hello World!" but not exit. What am I doing wrong? Any help is greatly appreciated!
Backticks execute the command in a new shell.
exit isn't an executable that your shell runs; it's a builtin understood by the shell itself, telling it to exit.
So when you do
`exit`
it starts a shell which immediately exits. Not very useful. To exit your own shell, you can instead signal Ruby's parent process.
Process.kill 'HUP', Process.ppid
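Applied to the gem from the question, the method could look like this (just a sketch, assuming the interactive shell is the direct parent of the Ruby process):
module SwissKnife
  class ConsoleUtility
    def exit
      # signal the parent process (the shell that launched this script) to hang up
      Process.kill 'HUP', Process.ppid
    end
  end
end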

How do I run a Linux command from a Ruby script?

Let's say I have some terminal commands like:
sudo mycommand1
mycommand2
#.....
How do I run them via a Ruby script (not Bash) in Ubuntu?
UPDATE:
I have a ruby script:
def my_method1()
  # calculating something.....
end

def method2(var1, var2)
  # how do I run sudo mycommand1 and any other Linux command from here?
end

def method3(var4)
  # calculating something2....
end
You can use system, exec, or place the command in backticks.
exec("mycommand") will replace the current process, so it's really only practical at the end of your Ruby script.
system("mycommand") will create a new process and return true if the command succeeded, false if it exited with a non-zero status, and nil if it could not be executed.
If you need to use the output of your command in your Ruby script, use backticks:
response = `mycommand`
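As a rough sketch, method2 from the question could combine both; the command names are the question's placeholders, and passing var1 and var2 to the second command is only an assumption:
def method2(var1, var2)
  ok = system("sudo mycommand1")          # true if the command exited with status 0
  output = `mycommand2 #{var1} #{var2}`   # standard output captured as a String
  [ok, output]
end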
There are many questions on SO that answer this. You can run a command in many ways: system, exec, backticks, %x{}, or Open3. I prefer Open3:
require 'open3'

log = File.new("#{your_log_dir}/script.log", "w+")
command = "ls -altr ${HOME}"

Open3.popen3(command) do |stdin, stdout, stderr|
  log.puts "[OUTPUT]:\n#{stdout.read}\n"
  unless (err = stderr.read).empty?
    log.puts "[ERROR]:\n#{err}\n"
  end
end
If you want to know more about the other options, you can refer to "Ruby, Difference between exec, system and %x() or Backticks" for links to relevant documentation.
You can try these approaches:
%x[command]
Kernel.system "command"
run "command" (run is not core Ruby; it comes from frameworks that provide it, such as Thor or Capistrano)
Make a file.rb with:
#!/path/to/ruby
system %{sudo mycommand1}
system %{mycommand2}
and then chmod the file with execute permissions (e.g. 755).
If you need to pass variables between the two commands, run them together in one shell invocation:
system %{sudo mycommand1; \
mycommand2}
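For instance, a shell variable set by the first command only survives as long as both commands run in the same shell invocation (FOO here is just a hypothetical example):
system %{FOO=$(sudo mycommand1); mycommand2 "$FOO"}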

Log process output

I used the system method to start a process. The pid of that process is stored in a file, worker.pid.
However, I need a log of this process. How can I store its output?
The process is created with this command:
system "bundle exec rake resque:work >> ./resque.log QUEUE=* PIDFILE=#{pid_file} &"
P.S.: I am using Ruby 1.8; BACKGROUND=yes won't work.
P.S.2: The platform is Linux.
Maybe what you're looking for is IO.popen.
This lets you fork off a subprocess and access its output via an IO object.
# fork off a one-off task
# and return the output as a string
ls = IO.popen("ls")
ls.read
# or return an array of lines
IO.popen("ls").readlines
# or create a continuing task
tail = IO.popen("tail -f /some/log/file.log")
loop do
  puts tail.gets
end
I suggest you read the documentation,
but you can also write to the stream, and do all sorts of clever stuff.
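Applied to the worker command from the question, a sketch might look like this (pid_file is the question's own variable; note that the parent process stays busy copying the output):
worker = IO.popen("bundle exec rake resque:work QUEUE=* PIDFILE=#{pid_file}")
File.open('./resque.log', 'a') do |log|
  # copy the worker's stdout to the log file line by line
  while (line = worker.gets)
    log.write(line)
    log.flush
  end
end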
If I'm understanding what you are trying to achieve correctly, you are looking for the Open3 class. http://www.ruby-doc.org/stdlib-1.8.7/libdoc/open3/rdoc/Open3.html

How can I execute 2 or more commands in the same SSH session?

I have the following script:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'

Net::SSH.start('host1', 'root', :password => "mypassword1") do |ssh|
  stdout = ""
  ssh.exec("cd /var/example/engines/")
  ssh.exec!("pwd") do |channel, stream, data|
    stdout << data if stream == :stdout
  end
  puts stdout
  ssh.loop
end
and I get /root instead of /var/example/engines/.
ssh.exec("cd /var/example/engines/; pwd")
That will execute the cd command, then the pwd command in the new directory.
I'm not a Ruby guy, but I'm going to guess there are probably more elegant solutions.
In Net::SSH, #exec and #exec! are essentially the same, i.e. they execute a command (with the exception that exec! blocks other calls until it's done). The key thing to remember is that Net::SSH runs every command from the user's home directory when using exec/exec!. So, in your code, you are running cd /some/path from the /root directory and then pwd, again from the /root directory.
The simplest way I know how to run multiple commands in sequence is to chain them together with && (as mentioned above by other posters). So, it would look something like this:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'

Net::SSH.start('host1', 'root', :password => "mypassword1") do |ssh|
  stdout = ""
  ssh.exec!("cd /var/example/engines/ && pwd") do |channel, stream, data|
    stdout << data if stream == :stdout
  end
  puts stdout
  ssh.loop
end
Unfortunately, the Net::SSH shell service was removed in version 2.
You can just give different commands separated by a new line. Something like:
result = ssh.exec!("cd /var/example/engines/
  pwd
")
puts result
It's probably easier (and clearer) to assign the command string to a variable, then pass the variable into exec. Same principle though.
See if there's something analogous to the FileUtils cd block syntax; otherwise just run the commands in the same subshell, e.g. ssh.exec "cd /var/example/engines/; pwd"?
I'm not a Ruby programmer, but you could try concatenating your commands with ; or &&.
There used to be a shell service in net/ssh v1 which allowed stateful commands like you're trying to do, but it has been removed in v2. However, there's a side project by the author of net/ssh that allows you to do that. Have a look here: http://github.com/jamis/net-ssh-shell
The current location of net-ssh-shell has changed.
What I decided to use, though, to call an arbitrary shell script, is to scp a file to the remote machine and source it into the shell. Basically doing this:
File.write(script_path, script_str)
gear.ssh.scp_to(script_path, File.dirname(script_path))
gear.ssh.exec(". #{script_path}")
