using ruby popen wrapped in a shell script - ruby

I finished my short Ruby file for a homework assignment, which uses IO.popen("command").readlines to grab the STDOUT of that command. However, I need to write a shell script to wrap my Ruby file in. No problem, but somehow putting it in the shell script makes readlines hang.
ruby script.rb foo example > example.out
This works.
script.sh foo example > example.out
This hangs on readlines. ruby script.rb is all that script.sh contains.

Looks like you forgot to pass your arguments to the ruby command. You may also be failing to specify an interpreter. Since script.sh runs ruby script.rb with no arguments at all, the command handed to IO.popen likely ends up waiting on standard input, which is why readlines appears to hang.
script.sh
#!/bin/sh
ruby script.rb "$@"
Alternatively, you could just add #!/usr/bin/ruby to the top of script.rb and make it executable (chmod +x script.rb). It's not a shell script, but it's generally the preferred way of executing a script in an interpreted language.
Once that's done you can run it with
./script.rb
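For reference, a minimal script.rb along these lines would then see the forwarded arguments. This is only a sketch with made-up argument handling, since the original file isn't shown; the point is that if ARGV comes through empty, the command handed to IO.popen can end up reading standard input and appear to hang:
#!/usr/bin/env ruby
# Hypothetical sketch of script.rb: build a command from the forwarded
# arguments and capture its standard output with IO.popen.
cmd, target = ARGV
abort "usage: script.rb COMMAND TARGET" if cmd.nil?
output = IO.popen("#{cmd} #{target}").readlines
puts output
With the "$@" fix in place, script.sh foo example > example.out and ruby script.rb foo example > example.out should behave identically.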

Related

run python script in current shell

When I need to run a bash script that runs cd somedir to affect the current shell, I run it with . scriptname. However, if scriptname is a python script, even with #!/usr/bin/env python3 in the first line, it doesn't work; it seems the shell expects the script to be a bash script. How can I make it work with python scripts (or any other language with the appropriate shebang)?
It is not possible.
The only thing that can affect the current process's environment is the process itself. Because the current shell is bash, the only thing that could affect bash's environment is something bash itself can run, namely statements interpreted by bash. Because bash doesn't support interpreting and running python statements, it is not possible.
The usual way around this is to have your python script output properly escaped assignment statements for the environment variables you want to set; that output is then evaluated by bash. This is, for example, how eval "$(docker-machine ...)" works.
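The pattern is language-agnostic; since this page is about Ruby, here is a sketch of the same idea in Ruby rather than python, with made-up variable names:
# env_exporter.rb -- hypothetical sketch: print shell-escaped export
# statements for the parent shell to eval.
require "shellwords"
vars = { "PROJECT_ROOT" => Dir.pwd, "APP_ENV" => "development" }
vars.each do |name, value|
  puts "export #{name}=#{Shellwords.escape(value)}"
end
From bash you would then run eval "$(ruby env_exporter.rb)"; the variables land in the current shell because bash itself executes the assignments that the script printed.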

Passing Arguments to a Ruby Script Via Traveling Ruby

I'm trying to pass command line arguments into a Ruby script that's being run with Traveling Ruby, and I'm having trouble making it work. I'm just using their standard wrapper.sh file for scripts with gems:
#!/bin/bash
set -e
# Figure out where this script is located.
SELFDIR="`dirname \"$0\"`"
SELFDIR="`cd \"$SELFDIR\" && pwd`"
# Tell Bundler where the Gemfile and gems are.
export BUNDLE_GEMFILE="$SELFDIR/lib/vendor/Gemfile"
unset BUNDLE_IGNORE_CONFIG
# Run the actual app using the bundled Ruby interpreter, with Bundler activated.
exec "$SELFDIR/lib/ruby/bin/ruby" -rbundler/setup "$SELFDIR/lib/app/test.rb"
When I run it, my Ruby script doesn't see any command line arguments. I've tried changing the last line to:
exec "$SELFDIR/lib/ruby/bin/ruby" -rbundler/setup "$SELFDIR/lib/app/test.rb $#"
which I thought would work, but when I test it with an arg "arg1" it's giving me the error:
/[pathtofile]/test-1.0.0-osx/lib/ruby/bin.real/ruby: No such file or directory -- /[pathtofile]/test-1.0.0-osx/lib/app/test.rb arg1 (LoadError)
So it seems like it's treating the command line argument as part of the filename.
Is there a way to modify this script to properly pass in arguments?
Thanks!
Obviously, I'm an idiot. Of course it was treating the argument as part of the filename, because the $@ was inside the quotes. The correct modification to the last line to make everything work is:
exec "$SELFDIR/lib/ruby/bin/ruby" -rbundler/setup "$SELFDIR/lib/app/test.rb" "$@"

Why won't my Ruby script execute?

So, I made a simple ruby script,
#!/usr/bin/env ruby
puts "Hello!"
When I try to run it in terminal it doesn't put "Hello!" on the screen. I have tried entering chmod +x test.rb (test.rb is the name of my file). When I run it, it doesn't give me an error, it just doesn't display "Hello!". Any help will be much appreciated. I have looked everywhere for a possible answer, and I have found nothing so far.
I'd guess that you're trying to run it as just test like this:
$ test
But test is a bash builtin command that doesn't produce any output, it just sets a return value. If you run your script properly:
$ ./test.rb
then you'll see something. Note the explicit ./ path, the current directory is rarely (and hopefully never) in your PATH so you need to say ./ to run something in the current directory (unless of course you're in /bin, /usr/bin, etc.).
In the comments you say that there are some Ctrl+M characters in your script:
$ cat -e test.rb
#!/usr/bin/env ruby^M^Mputs "Hello!"
I don't see any $s in that cat -e output so you don't have any actual end-of-line markers, just some carriage-return characters (that's the ^M). A single CR is an old MacOS end-of-line, Windows uses a CR-LF pair, and Unix (including OSX) uses just a single LF to mark the end of a line of text. Since you don't have any EOLs, the shell just sees a single line that looks like:
#!/usr/bin/env ruby ...
With no actual script for ruby to run, the shell just sees the shebang comment and nothing else. The result is that nothing noticeable happens when you run your script. Fix your EOLs and your script will start working sensibly. You might also want to look at your editor's settings so that it starts writing proper EOLs.
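If you would rather fix the file from the command line than reconfigure the editor, a one-off Ruby snippet along these lines (a sketch, assuming the file is named test.rb) rewrites bare CRs and CR-LF pairs as plain LFs:
# fix_eols.rb -- sketch: normalize end-of-line characters in test.rb to Unix LFs.
src = File.read("test.rb")
File.write("test.rb", src.gsub(/\r\n?/, "\n"))
After that, cat -e test.rb should show a $ at the end of each line, and ./test.rb should print Hello!.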
How are you calling this script? You would want to run it with something like ruby test.rb if you are in the directory with the test.rb file. Another general tip when something doesn't work is to go into irb on the command line and try your code, like puts "Hello!", to see whether it is that particular code that is the problem.

Can a script be used as an interpreter by the #! hashbang line?

I'm trying to write a bash script which will behave as a basic interpreter, but it doesn't seem to work: The custom interpreter doesn't appear to be invoked. What am I doing wrong?
Here's a simple setup illustrating the problem:
/bin/interpreter: [owned by root; executable]
#!/bin/bash
echo "I am an interpreter running " $1
/Users/zeph/script is owned by me, and is executable:
#!/bin/interpreter
Here are some commands for the custom interpreter.
From what I understand about the mechanics of hashbangs, the script should be executable as follows:
$ ./script
I am an interpreter running ./script
But this doesn't work. Instead the following happens:
$ ./script
./script: line 3: Here: command not found
...It appears that /bin/bash is trying to interpret the contents of ./script. What am I doing wrong?
Note: Although it appears that /bin/interpreter is never invoked, I do get an error if it doesn't exist:
$ ./script
-bash: ./script: /bin/interpreter: bad interpreter: No such file or directory
(Second note: If it makes any difference, I'm doing this on MacOS X).
To make this work you could add the interpreter's interpreter (i.e. bash) to the shebang:
#!/bin/bash /bin/interpreter
Here are some commands for the custom interpreter.
bash will then run your interpreter with the script path in $1 as expected.
You can't use a script directly as a #! interpreter, but you can run the script indirectly via the env command using:
#!/usr/bin/env /bin/interpreter
/usr/bin/env is itself a binary, so is a valid interpreter for #!; and /bin/interpreter can be anything you like (a script of any variety, or binary) without having to put knowledge of its own interpreter into the calling script.
Read the execve man page for your system. It dictates how scripts are launched, and it should specify that the interpreter in a hash-bang line is a binary executable.
I asked a similar question in comp.unix.shell that raised some pertinent information.
There was a second branch of the same thread that carried the idea further.
The most general unix solution is to have the shebang point to a binary executable. But that executable program could be as simple as a single call to execl(). Both threads lead to example C source for a program called gscmd, which is little more than a wrapper to execv("gs",...).

Why can't I call `history` from within Ruby?

I can run Bash shell commands from within a Ruby program or irb using backticks (and %x(), system, etc.). But that does not work with history for some reason.
For example:
jones$ irb --simple-prompt
>> `whoami`
=> "jones\n"
>> `history`
(irb):2: command not found: history
=> ""
From within a Ruby program it produces this error:
/usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31: command not found: history
In bash itself, those commands work fine.
It's not that the Ruby call is invoking a new shell - it simply does not find that command...
Anyone know why? I'm stumped...
Most unix commands are implemented as executable files, and the backtick operator gives you the ability to execute these commands from within your script. However, some commands that are interpreted by bash are not executable files; they are features built into bash itself. history is one such command. The only way to execute this command is to first execute bash and then ask it to run that command.
You can use the type command to tell you the type of a particular command, so you know whether you can exec it from a Ruby (or python, perl, Tcl, etc.) script. For example:
$ type history
history is a shell builtin
$ type cat
cat is /bin/cat
You'll also find that you can't exec aliases defined in your .bashrc file, since those aren't executable files either.
It helps to remember that exec'ing a command doesn't mean "run this shell command" but rather "run this executable file". If it's not an executable file, you can't exec it.
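From Ruby, that means spawning bash explicitly and asking it to run the builtin. Here is a sketch, assuming the history lives in the default ~/.bash_history location; a non-interactive bash starts with an empty history list, so the saved file has to be loaded first with history -r:
# Sketch: run the history builtin inside an explicit bash, loading the
# saved history file first because a fresh non-interactive bash has none.
lines = IO.popen(["bash", "-c", "history -r ~/.bash_history; history"]).readlines
puts lines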
It's a built-in. In general, you can run built-ins by manually calling the shell:
`bash -c 'history'`
However, in this case, that will probably not be useful.
{~} ∴ which history
history: shell built-in command
