Ruby script equivalent to the bash `set` builtin's `-x` flag? - ruby

When a bash script is running, the set -x command can be used to print all commands and output as they're being executed.
I'm writing a Ruby script. Is there a way to also have this print the commands and output as they're being executed?

TL;DR
Although Ruby is interpreted, it parses and runs code very differently from languages like Bash or Tcl, so there's no built-in way to do exactly what you want. You'll have to use a debugger or REPL to get something that approximates what you're trying to do, but it won't really be the same as using flags like -x or -v in Bash. An external or IDE-based debugger will probably come closest, though.
A Couple of Options
There is no built-in way to do this, as Ruby is not really a line-by-line interpreted language in the same way as Bash or Tcl. While Ruby is generally considered an interpreted language, it actually uses a tokenizer and parser to generate code that runs on a virtual machine such as YARV (or, with TruffleRuby, GraalVM). You do have a couple of options, though:
Use the -d flag or set $DEBUG to a truthy value in your code, and then do some level of introspection based on whether the debug flag is enabled. For example:
# 1 is printed because $DEBUG is truthy
$ ruby -e 'BEGIN { $DEBUG = true }; puts 1 if $DEBUG'
1
# nothing is printed because $DEBUG is falsey
$ ruby -e 'puts 1 if $DEBUG'
Please note that Ruby 3.0.3 and 3.1.0 seem to have an issue with the -d flag, so the first example uses a BEGIN statement to set the value of the flag inside the program.
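If what you're really after is a rough approximation of set -x, one sketch (gated on $DEBUG and using the TracePoint API) is to print each line's location as it executes. Note that this shows file and line numbers rather than the source text itself:
if $DEBUG
  trace = TracePoint.new(:line) do |tp|
    warn "+ #{tp.path}:#{tp.lineno}"
  end
  trace.enable
end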
Use the debug gem (now standard with Ruby 3). You can either step through the code with rdbg and use the list command liberally, or (if you're clever) script a series of list commands on specific lines using the ~/.rdbgrc file.
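As a rough sketch of the ~/.rdbgrc approach (commands in that file run whenever rdbg starts; the file name and line numbers here are made up), something like this will stop at the chosen lines and list the surrounding source each time:
config set show_src_lines 20
break my_script.rb:10 do: list
break my_script.rb:42 do: list
Then run the script with rdbg my_script.rb.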
Use an external debugger, with or without rdbg. Note that the new debugger supports IDE-based debugging (e.g. with RubyMine or VS Code) and remote debugging, but setting up IDE or remote debugging is likely a topic outside the scope of a reasonable SO answer.
Use irb or pry with the debugger of your choice, which usually gives you a number of ways to inspect source code, frames, expressions, variables, and so on, although you need to run from an on-disk file rather than a REPL to access some of the functionality you may be looking for.
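For example, with Pry installed you can drop a breakpoint into a file on disk and inspect from there (the method below is purely illustrative):
require 'pry'

def build_total
  values = [1, 2, 3]
  binding.pry # a session opens here; try whereami, show-source build_total, or ls
  values.sum
end

build_total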
For the most part, if you're not using an IDE or a debugger, you will need to rely on return values in a REPL or on Kernel#pp statements in your code to inspect values as you go along. However, short of a debugger or REPL that can list methods or lines of code on request, you'll either need to reach for external tools or find another way to solve whatever problem you're trying to solve with this approach.
Other Options
If you use pry, the pry-rescue gem along with pry-stack_explorer will automatically open a REPL session when an exception is raised and let you move up and down the stack, without requiring you to start your session in the REPL or explicitly call binding.pry. On supported versions of Ruby this can be very useful, especially since Pry supports a show-source -l command that does something similar to what you want (at least interactively), although the line numbers may not be what you expect if the code was entered directly in the REPL rather than loaded from a Ruby program on disk.
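A minimal sketch of that workflow, assuming the pry-rescue and pry-stack_explorer gems are installed (the failing method is made up):
require 'pry-rescue'
require 'pry-stack_explorer'

def risky_step
  raise 'boom' # when this raises, a Pry session opens at the failure point
end

Pry::rescue {
  risky_step
}
Inside the session you can move around with up and down (from pry-stack_explorer) and use show-source -l to look at the surrounding code.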

Related

How to more accurately detect whether a file is a Ruby script file in Linux

I know that checking whether the extension is .rb works.
I know the Linux file command works too in some cases.
But neither is accurate enough to decide whether a file is a Ruby script.
EDIT:
What I want to do is count the Ruby lines I wrote more accurately,
with a bash shell script like the following:
find -name '*.rb' |xargs -n100 cat |grep -v '\s*#' |wc -l
but, in fact, I also wrote some executable Ruby scripts and other files without the .rb extension, e.g.
.rake files, Gemfile, Capfile, jbuilder templates, etc.
Thanks
Use ruby -c. From the man page:
-c Causes Ruby to check the syntax of the script and exit
This will tell you if the file is a valid Ruby script without executing it. If it is, it will print "Syntax OK" to STDOUT and exit with status code 0; otherwise it will print a syntax error to STDERR and exit with a nonzero code. (You can of course suppress the messages using I/O redirection, e.g. &>/dev/null.)
Of course, false positives are possible (the fact that a file is valid Ruby doesn't necessarily mean it was intended to be a Ruby script), but unlikely except with very short files.
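For example, a quick shell test along those lines (the file name is just a placeholder):
$ ruby -c some_file &>/dev/null && echo "parses as Ruby" || echo "does not parse as Ruby"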
What you want is impossible. For example, the following is a valid and semantically identical program in at least Ruby, PHP, Scala, and Perl:
print("Hello");
It is also valid in Python, although semantically slightly different: it prints a newline (i.e. it prints the string "Hello\n") while the others don't (the others print "Hello" without a newline).
It is also at least syntactically valid in ECMAScript, and may be semantically equivalent assuming a suitable print function exists in the standard library.
It is probably valid in a lot more languages than that; some that I can think of are AmbientTalk, Atomy, CoffeeScript, Converge, Dart, Dylan, E, Elixir, Falcon, Fancy, Groovy, Hack, Io, Ioke, Julia, Lua, Monte, Neko, Pico, Pike, and Seph. It is also a valid fragment, although not a complete program, in at least Perl6, C, C++, Objective-C, Objective-C++, D, Java, C♯, Spec♯, Sing♯, M♯, Cω, X♯, Kotlin, Ceylon, and Rust.
There is no way of knowing whether this is a Ruby program except asking the person who wrote it.

Interpretation of additional arguments to Ruby's Kernel::system method

Why does the first excerpt succeed and the second fail?
system 'emacs', '--batch', '--quick', '--eval="(require \'package)"'
system 'emacs --batch --quick --eval="(require \'package)"'
(If it matters, I'm executing the code on Mac OS X Mountain Lion with Ruby version 1.8.7 and Emacs version 22.1.1.)
First of all, those two system calls are different in ways that you may not expect. A quick example will probably explain the difference better than a bunch of words and hand waving. Start with a simple shell script:
#!/bin/sh
echo $1
I'll call that pancakes.sh because I like pancakes more than foo. Then we can step into irb and see what's going on:
>> system('./pancakes.sh --where-is="house?"')
--where-is=house?
>> system('./pancakes.sh', '--where-is="house?"')
--where-is="house?"
Do you see the significant difference? The single argument form of system hands the whole string to /bin/sh for processing and /bin/sh will deal with the double quotes in its own way so the program being called will never see them. The multi-argument form of system doesn't invoke /bin/sh to process the command line so the arguments are passed as-is with double quotes intact.
Back to your system calls. The first one will send this exact argument to emacs (note that Ruby will take care of converting \' to just '):
--eval="(require 'package)"
and emacs will try to evaluate "(require 'package)"; that looks more like a string than an elisp snippet to me and evaluating a string literal doesn't do much of anything. Your second will send this to emacs:
--eval=(require 'package)
and emacs will complain that it
Cannot open load file: package
Note that my elisp knowledge is buried under about 20 years of rust and forgetfulness so some of the emacs details may be a bit off.
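Putting that together, if the goal is to hand emacs the bare s-expression using the multi-argument form, one sketch (following the explanation above) is to drop the embedded double quotes yourself:
system 'emacs', '--batch', '--quick', "--eval=(require 'package)"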

what is the use of "#!/usr/local/bin/ruby -w" at the start of a ruby program

What is the use of writing the following line at the start of a Ruby program?
#!/usr/local/bin/ruby -w
Is it an OS-specific command? Is it valid for Ruby on Windows? If not, what is the equivalent command on Windows?
It is called a Shebang. It tells the program loader what command to use to execute the file. So when you run ./myscript.rb, it actually translates to /usr/local/bin/ruby -w ./myscript.rb.
Windows uses file associations for the same purpose; the shebang line has no effect (edit: see FMc's answer) but causes no harm either.
A portable way (working, say, under Cygwin and RVM) would be:
#!/usr/bin/env ruby
This will use the env command to figure out where the Ruby interpreter is, and run it.
Edit: apparently Cygwin in particular will misbehave with /usr/bin/env ruby -w and try to look up a program called ruby -w instead of ruby. You might want to put the effect of -w into the script itself.
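One way to get the effect of -w from inside the script itself is to set $VERBOSE early on; note that this only affects warnings emitted after the assignment runs, not parse-time warnings for code loaded before it:
#!/usr/bin/env ruby
$VERBOSE = true # roughly what -w would have done for the rest of the script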
The shebang line is optional, and if you run the Ruby interpreter and pass the script to it as a command line argument, then the flags you set on the command line are the flags Ruby runs with.
A shebang line is not Ruby at all (unless you want to call it a Ruby comment). It's really shell scripting. Most Linux and Unix users are running the Bash shell (which stands for Bourne Again SHell), but pretty much every OS has a command interpreter that will honor the shebang.
"#!/usr/local/bin/ruby -w"
The "she" part is the octothorpe (#), aka pound sign, number sign, hash mark, and now hash tag (I still call it tic-tac-toe just cuz).
The "bang" part is the exclamation mark (!), and it's like banging your fist on the table to exclaim the command.
On Windows, the "shell" is the command prompt, but even without a black DOS window, the command interpreter will run the script based on file associations. It doesn't really matter whether the command interpreter or the programming language is reading the shebang and making sure the flags are honored; the important point is that they are honored.
The "-w" is a flag. Basically it's an instruction for ruby to follow when it runs the script. In this case "-w" turns on warnings, so you'll get extra warnings (script keeps running) or errors (script stops running) during the execution of the script. Warnings and exceptions can be caught and acted upon during the program. These help programmers find problems that lead to unexpected behavior.
I'm a fan of quick and dirty scripts to get a job done, so no -w. I'm also a fan of high-quality reusable code, so definitely use -w. The right tool for the right job. If you're learning, then always use -w. When you know what you're doing and stop using -w on quick tasks, you'll start to figure out when it would have helped to use -w instead of spending hours troubleshooting. (Hint: when the cause of a problem isn't pretty obvious, just add -w and run it to see what you get.)
"-w" requires some extra coding to make it clear to Ruby what you mean, so it doesn't immediately solve things, but if you already write code with -w, then you won't have much trouble adding the necessary bits to make a small script run with warnings. In fact, if you're used to using -w, you're probably already writing code that way, and -w won't change anything unless you've forgotten something. Ruby requires far less "plumbing code" than most (maybe all) compiled languages like C++, so choosing not to use -w doesn't save you much typing; it just lets you think less before you try running the script (IMHO).
-v is often conflated with -w; several sites and discussions call -w verbose mode. In practice the two are nearly identical: -v prints the Ruby version and then turns on verbose mode, which enables the same warnings as -w, while -w enables those warnings without printing the version first.
Although the execution behavior of a shebang line does not translate directly to the Windows world, the flags included on that line (for example the -w in your question) do affect the running Ruby script.
Example 1 on a Windows machine:
#!/usr/local/bin/ruby -w
puts $VERBOSE # true
Example 2 on a Windows machine:
#!/usr/local/bin/ruby
puts $VERBOSE # false

Ruby popen3 and ANSI colour

I am attempting to get watchr running tests automatically as files change, and got most of what I need working except for the fact that all ANSI colours from RSpec are being disregarded. The offending code is as follows:
stdin, stdout, stderr = Open3.popen3(cmd)
stdout.each_line do |line|
  last_output = line
  puts line
end
When cmd is equal to something like rspec spec/**/*.rb then the above code runs RSpec fine, except that all output is in monochrome. I've looked at using Kernel.system instead; however, system does not return the output, which I need to determine whether a test failed or succeeded. How can I get the output from a script that is executed from within Ruby, including the ANSI colour, and print it to the console?
I would guess that rspec is examining the stream to which it is writing output to see if it is a tty (ie the console) or not, and if it's not, disabling colour. Many commands do this - GNU ls and grep, for example. Since the stream from the child process to your script is not a tty, colour will be disabled.
Hopefully, rspec has a flag which will force colour to be used, regardless of the stream type. If it doesn't, you will have to resort to some truly weird stty shenanigans.
Chances are good that the rspec tool is checking to see if it is running interactively on a terminal or being run automatically from a script before deciding to use color output or not.
The documentation says you can force color with --color command line option.
Try: rspec --color spec/**/*.rb.
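If you keep the Open3 approach from the question, passing the flag through should be enough (a sketch; adjust the spec path to whatever you actually run, and note that depending on your RSpec version you may also need --tty or --force-color):
require 'open3'

Open3.popen3('rspec --color spec') do |stdin, stdout, stderr, wait_thr|
  stdout.each_line { |line| puts line } # the colour escape codes pass through untouched
end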
It's possible to run commands in a pseudo terminal via the PTY module in order to preserve a user facing terminal-like behaviour. Credits go to the creator of the tty-command gem (see this issue) who implemented this behaviour in his gem:
require 'tty-command'
cmd = TTY::Command.new(pty: true)
cmd.run('rspec', 'spec/**/*.rb')
Keep in mind that using a pseudo terminal may have unwanted side effects, such as certain git commands using a pager which will essentially cause commands to hang. So introducing the functionality might be a breaking change.
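If you'd rather not add a dependency, a similar sketch using Ruby's standard-library PTY module looks like this (the command and spec path are placeholders):
require 'pty'

PTY.spawn('rspec --color spec') do |stdout, stdin, pid|
  begin
    stdout.each_line { |line| puts line }
  rescue Errno::EIO
    # the child closed its side of the pty; nothing left to read
  end
  Process.wait(pid)
end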

For ruby/webrick, I need windows to recognize shebang (#!) notation

(Bear with me, I promise this gets to shebang and Windows.)
I have about the simplest of WEBrick servers put together:
require 'webrick'
include WEBrick
s = HTTPServer.new(:Port=>2000, :DocumentRoot=>Dir::pwd)
s.start
Couldn't be simpler. This basic server does accept HTTP connections (Firefox, Internet Exploder, wget, TELNET) and deals with them appropriately, as long as I'm just fetching static documents. If, however, I set one of the files in the directory to have a .cgi extension, I get a 500 back and the following on the server's terminal:
ERROR CGIHandler: c:/rubyCGI/test.cgi:
C:/...[snip]...webrick/httpservlet/cgi_runner.rb:45: in 'exec': Exec format error - ...[snip]...
I've done a few things on the command line to mimic what is going on in line 45 of cgi_runner.rb
c:\>ruby
exec "c:/rubyCGI/test.cgi"
^Z
(same error erupts)
c:\>ruby
exec "ruby c:/rubyCGI/test.cgi"
^Z
Content-type: text/html
Mares eat oats and does eat oats and I'll be home for Christmas.
Clearly, WEBrick hasn't been cleared for landing on Windows. Your usual headaches of corporate paranoia prevent me from modifying WEBrick, so can I get the shebang notation in c:/rubyCGI/test.cgi recognized by the OS (Windows) so I don't have to explicitly tell it each time which interpreter to use? I could associate all .cgi files with Ruby, but that would be limiting in the long run.
UPDATE:
Since posting this, it has occurred to me that it may not be possible at all to run a CGI web server from Ruby on Windows, because Ruby has no forking support there. With no ability to fork a process, a CGI server would have to execute each CGI script one at a time, neglecting all concurrent requests while the first one completed. While this may be acceptable for some, it would not work for my application. Nevertheless, I would still be very interested in an answer to my original question: that of getting shebang working under Windows.
I think what you want is to associate the file extension with Ruby. I don't think it's possible to get the !# notation to work on Windows but it is possible to get Windows to automatically launch a script with a particular interpreter (as in your second example). A good step by step discussion of what you'd want to do is here. You specifically want the section headed: "To create file associations for unassociated file types". I think that will accomplish what you're trying to do.
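For reference, the command-prompt version of that association step looks roughly like this (run from an elevated prompt; the Ruby path is whatever your installation uses):
c:\>assoc .cgi=RubyCGIScript
c:\>ftype RubyCGIScript="C:\ruby\bin\ruby.exe" "%1" %*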
A generic solution that works for both Ruby 1.8.6.pxxx and 1.9.1.p0 on Windows is the following:
Edit the file: c:\ruby\lib\ruby\1.9.1\webrick\httpservlet\cgi_runner.rb
Add the following lines at the top of the file:
if "1.9.1" == RUBY_VERSION
require 'rbconfig' #constants telling where Ruby runs from
end
Now, locate the last line, where it says: exec ENV["SCRIPT_FILENAME"]
Comment that line out and add the following code:
# --- from here ---
if "1.9.1" == RUBY_VERSION # use RbConfig
  Ruby = File::join(RbConfig::CONFIG['bindir'],
                    RbConfig::CONFIG['ruby_install_name'])
  Ruby << RbConfig::CONFIG['EXEEXT']
else # use ::Config
  Ruby = File::join(::Config::CONFIG['bindir'],
                    ::Config::CONFIG['ruby_install_name'])
  Ruby << ::Config::CONFIG['EXEEXT']
end
if /mswin|bccwin|mingw/ =~ RUBY_PLATFORM
  exec "#{Ruby}", ENV["SCRIPT_FILENAME"]
else
  exec ENV["SCRIPT_FILENAME"]
end
# --- to here ---
Save the file and restart the webrick server.
Explanation:
This code just builds a variable 'Ruby' with the full path to "ruby.exe", and (if you're running on Windows) it passes the additional parameter "c:\ruby\bin\ruby.exe" to the Kernel.exec() method, so that your script can be executed.
Not really to argue... but why bother with WEBrick when Mongrel is much faster and comes natively compiled for Windows? And of course, that means no shebang is needed.
Actually, it is possible to get Windows to recognize shebang notation in script files. It can be done in a relatively short script in, say, Ruby or AutoIt. Only a rather simple parser for the first line of a script file is required, along with some file manipulation. I have done this a couple of times when either cross-compatibility of script files was required or when Windows file extensions did not suffice.
