It's my understanding that there's no "bridge" between Ruby and Perl to let you call into Perl functions directly from Ruby. It's also my understanding that to call a Perl program from Ruby, you simply put it in backticks (i.e. result = `./helloWorld.pl`). However, this doesn't allow interaction with the Perl program (i.e. you can't interact with prompts or provide input). My questions are as follows:
Is there any way to provide input to Perl programs from Ruby (aside from arguments)?
Am I wrong that there is no bridge between Ruby and Perl? Interacting with a program's stdin seems like the wrong way to go when navigating prompts, and the programs I'm dealing with are well designed and have libraries with the appropriate Perl functions in them.
There's the Inline::Ruby module, though I don't have any direct experience with it that I can share.
EDIT: I did try it out last night -- here's the review: Inline::Ruby was last updated in 2002, when v5.6 was the latest stable Perl release. The docs say it has only been tested on Linux; I was trying to use it with v5.10.1 on Cygwin. I got it to build after some hacking of the XS/C code that comes with the module. It passed some unit tests but failed others. It seemed to import Ruby classes into Perl's namespace OK, but was less successful importing standalone functions. Summary: if you need a quick and dirty bridge between Perl and Ruby, Inline::Ruby will probably disappoint you. If you have the patience to figure out how to build the module on your system, and to massage your Ruby code to work with the module, you could find it useful.
Use Ruby's exec()
rubypl.rb
#!/usr/bin/ruby -w

# exec replaces the current Ruby process with the Perl interpreter, so the
# script inherits the terminal and its prompts work interactively
script = 'perlscript.pl'
exec("/usr/bin/perl #{script}")
perlscript.pl
#!/usr/bin/perl -w
use strict;

print "Please enter your name: ";
my $name = <STDIN>;
chomp($name);    # chomp strips only the trailing newline (chop would eat any last character)

if ($name eq "") {
    print "You did not enter a name!\n";
    exit(1);
} else {
    print "Hello there, " . $name . "\n";
    exit(0);
}
perldoc perlipc states:
DESCRIPTION
The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and
SysV IPC calls. Each is used in slightly different situations.
Ruby is capable of using every one of these.
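For instance, here's a minimal Ruby sketch of one of those facilities, a named pipe; the path and the shelled-out mkfifo are just for illustration:

path = '/tmp/ruby_fifo_demo'                     # illustrative path
system('mkfifo', path) unless File.exist?(path)

# opening a FIFO blocks until both ends are attached, so write from a thread
writer = Thread.new do
  File.open(path, 'w') { |f| f.puts 'ping' }
end
File.open(path, 'r') { |f| puts f.gets }         # => "ping"
writer.join
File.delete(path)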
Here's how Ruby can run a Perl script, interacting with the script's stdin and stdout.
foo.pl reads two integers (each on its own line) from standard input, adds them, and writes the result to standard output. I don't know Perl, so be nice to me:
#!/usr/bin/perl
use strict;
use warnings;

# $a and $b are special variables in Perl (used by sort), so use other names
my $x = <STDIN>;
my $y = <STDIN>;
print int($x) + int($y);
foo.rb executes foo.pl, giving it two numbers to add, getting back the result and printing it:
#!/usr/bin/ruby1.8
a = 1
b = 2
c = IO.popen('./foo.pl', 'w+') do |pipe|
  pipe.puts(a)
  pipe.puts(b)
  pipe.close_write    # signal end-of-input so the child can finish
  pipe.read
end
raise "foo.pl failed" unless $?.success?
print "#{a} + #{b} = #{c}"    # => 1 + 2 = 3
What you want doesn't really exist, to my knowledge.
The closest thing to what you want, on a generic level, is XDebug. It turns a process into a little server that will accept debugging commands. This is generally used for debugging and profiling rather than for interprocess communication, but it's a possibility. I believe ActiveState's Perl can be run as an XDebug server.
Otherwise, you need to explicitly program in some sort of side channel that your Perl program listens to for commands (which is what XDebug does). It can be as simple as opening a socket that reads a string, evals it, encodes the result as YAML (or whatever), and writes it back. A REPL, but on a socket rather than on a terminal.
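For illustration, here's a minimal Ruby sketch of that socket REPL (the Perl side would mirror it; the port and the one-line-per-command protocol are invented for the example):

require 'socket'
require 'yaml'

server = TCPServer.new('127.0.0.1', 9123)   # illustrative port
Thread.new do
  loop do
    client = server.accept
    code = client.gets                      # one command per line
    result = begin
      eval(code)                            # this is the security hole discussed below
    rescue StandardError => e
      e.message
    end
    client.write(YAML.dump(result))         # reply encoded as YAML
    client.close
  end
end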
There are, obviously, security implications, which will be left as an exercise for the reader. You also don't want listening on the socket to interrupt the program, so you will need something event-driven or threaded.
Sorry I don't have anything more specific. It would make a great CPAN module.
Perl and Ruby both have various "glues". Perl programs using the Expect module can do a lot more than just "wait for output". More likely, though, you could communicate over a well-known protocol like HTTP: a Ruby daemon could put up a listener and Perl could contact it, or vice versa.
But no, you can't just "embed" ruby code into Perl, or vice versa. You've got to run separate processes and communicate somehow.
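As a sketch of the HTTP approach mentioned above, a tiny Ruby listener using the stdlib's WEBrick might look like this (the port, path, and adding-numbers protocol are invented for the example; the Perl side could call it with LWP):

require 'webrick'

server = WEBrick::HTTPServer.new(:Port => 8000)
server.mount_proc('/add') do |req, res|
  # e.g. GET /add?x=1&y=2 returns "3"
  res.body = (req.query['x'].to_i + req.query['y'].to_i).to_s
end
trap('INT') { server.shutdown }
server.start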
Try Open4. It is not designed specifically for interacting with Perl, but for interacting with any program that needs input and output. I am still learning to use it, but I feel it might suit your needs.
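Here's a minimal sketch of what that might look like (Open4 is an external gem, and the uppercasing Perl one-liner is just an example):

require 'open4'

status = Open4::popen4('perl -ne "print uc"') do |pid, stdin, stdout, stderr|
  stdin.puts 'hello'
  stdin.close           # signal end-of-input so the child can finish
  puts stdout.read      # => "HELLO"
end
puts status.exitstatus  # => 0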
You mentioned in one of your comments that you want to run Perl code inside Ruby. This will not work unless you can make the Ruby interpreter understand Perl syntax. The question "Can I get a BNF/yacc/RE for the Perl language?" will help you understand the challenges you will face.
Related
This problem is to get into an internship within a devops department:
"Write a Ruby library that executes arbitrary system calls (e.g. "dmesg", "ping -c 1 www.google.com") and provides separated output streams of stderr and stdout, as well as providing the final return code of the process. Show your work with unit tests."
Am I supposed to use already-established system calls and replicate them in Ruby code? That seems silly to me. Am I supposed to come up with my own arbitrary calls and write a library complete with errors and status calls?
I am not looking for someone to write this for me. I feel that the first step to solving this problem is understanding it.
Get Clarification from the Source
The assignment is poorly worded, and misuses a number of terms. In addition, we can only guess what they really expect; the appropriate thing to do would be to ask the company directly for clarification.
Re-Implement Open3
With that said, what they probably want is a way to process any given command and its arguments as a decorated method, akin to the way Open3.capture3 from the standard library works. That means the code you write should take the command and any arguments as parameters.
For example, using Open3.capture3 from the standard library:
require 'open3'
# passing cmd and args separately avoids shell interpolation issues
def command_wrapper(cmd, *args)
  stdout_str, stderr_str, status = Open3.capture3(cmd, *args)
end
command_wrapper "/bin/echo", "-n", "foo", "bar", "baz"
#=> ["foo bar baz", "", #<Process::Status: pid 31661 exit 0>]
I sincerely doubt that it's useful to re-implement this library, but that certainly seems like what they're asking you to do. Shrug.
You're also supposed to write unit tests for the re-implementation, so you will have to swot something up with a built-in framework like Test::Unit or MiniTest, or an external testing framework like RSpec or Wrong. See the Ruby Toolbox for a more comprehensive list of available unit testing frameworks.
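For instance, a minimal MiniTest sketch against the wrapper above (the echoed strings are arbitrary):

require 'minitest/autorun'

class TestCommandWrapper < Minitest::Test
  def test_captures_stdout_stderr_and_status
    out, err, status = command_wrapper('/bin/echo', '-n', 'foo bar baz')
    assert_equal 'foo bar baz', out
    assert_equal '', err
    assert status.success?
  end
end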
Luckily for you, Ruby makes it very easy to interact with the underlying operating system.
You can start by reading the documentation for these methods:
Kernel#`
Kernel#system
Kernel#exec
Kernel#spawn
IO#popen
Also, there is the Open3 module from the stdlib.
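As a quick, illustrative tour of a few of these (the commands themselves are arbitrary):

puts `date`                     # Kernel#` captures stdout as a String
ok = system('ls', '/tmp')       # Kernel#system returns true/false (nil if it can't run)
pid = spawn('sleep', '1')       # Kernel#spawn runs asynchronously, returns a pid
Process.wait(pid)

IO.popen('tr a-z A-Z', 'r+') do |io|   # IO.popen gives a two-way pipe
  io.puts 'hello'
  io.close_write
  puts io.read                  # => "HELLO"
end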
I am trying to run gnuplot from Ruby (not using an external gem) and also parse its textual output. I tried IO.popen, PTY.spawn and Open3.popen3, but whenever I try to get the output it just "hangs", I guess waiting for more output to come. I feel like it's somehow done using Thread.new, but I couldn't find the correct way to implement it.
Anyone know how it is done?
I guess this is what you want:
require 'pty'
require 'expect'
PTY.spawn('gnuplot') do |input, output, pid|
  str = input.expect(/gnuplot>/)    # wait for the prompt
  puts str
  output.puts "mlqksdf"             # send a (nonsense) command
  str = input.expect(/gnuplot>/)    # wait for the next prompt
  puts str
  output.puts "exit"
end
The problem is that the sub-program is waiting for input that isn't being sent.
Typically, when we call a program that expects input on STDIN, we have to close STDIN, which signals that program to begin processing. Look through the various Open3 methods and you'll see where stdin.close occurs in many examples, though they don't explain why.
Open3 also includes capture2 and capture3, which are convenient when dealing with a program that wants STDIN and you have nothing to send it. In both methods, STDIN is immediately closed, and the method returns the STDOUT, STDERR and exit status of the called program.
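For example, here's capture3 feeding a program that would otherwise block waiting on STDIN (bc and its input are just an illustration):

require 'open3'

# capture3 writes :stdin_data (or nothing) to the child and then closes its
# STDIN, so a program blocked on input can't hang the caller
out, err, status = Open3.capture3('bc', :stdin_data => "2+3\n")
puts out    # => "5\n"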
You need "expect" functionality. Ruby's Pty class includes an expect method.
Creates and managed pseudo terminals (PTYs). See also en.wikipedia.org/wiki/Pseudo_terminal
It's not very well documented though, and doesn't offer a lot of functionality from what I've seen. An example of its use is available at "Using Ruby Expect Library to Reboot Ruckus Wireless Access Points via ssh".
Instead, you might want to look at RubyExpect which is better documented and appears to be current.
I have a folder on my desktop called "images".
Inside are a bunch of sub-folders called various things ("Flower_Shape", "Smile_Shape", "Dog_Shape", etc.), and each folder contains three images ("Variant1.PNG", "Variant2.PNG", and "Variant3.PNG") plus some other files that do not end in .png; they end in things like ".psd".
How can I take all of the contents of these hundreds of folders and move them into one folder, renaming each file from "Variant1.png" to include the name of the folder it was in (for example, the "Variant1.png" inside the folder "Flower_Shape" would be renamed "Flower_Shape-Variant1.png"), and then delete every file that doesn't end in ".png"?
My first main question is: what language would I use to do something like this? Would it be Perl?
After that is established, does anyone have any tips for how to go about doing it? I'm assuming a for loop with some if statements should be all it really takes, but I know nothing of Perl (or, for that matter, any other language that deals with changing files on my own computer).
Thanks!
Mac OS X comes with a whole assortment of scripting languages. It comes with Perl, Python, PHP, Ruby, and BASH.
Mac OS X also comes with Automator.
What should you choose?
If you're really, really interested in learning to program, I would suggest you start with BASH. BASH is what is called a command shell: if you type a command in at a prompt, it will be executed. The Unix shell is a powerful tool, and the BASH shell includes a lot of control structures that can be used. In BASH, you could use the find command and the mv command and write a loop to do what you want.
Shell scripts are limited in their power. They aren't supposed to be full-service languages and make no apologies for it. If you need more than a few dozen lines or more complex control, you should switch to a scripting language.
I am a Perl programmer, but I would recommend, if you're really interested in learning how to program, to learn Python. Python is probably the most popular language and its use is certainly growing.
Perl is a nice language because much of its syntax structure comes from shell scripting languages. Despite what you've heard, Perl is still very much a popular language. I've found that people pick up Perl faster than Python because of its looser syntax rules and the fact that it doesn't have to be object oriented. That is also a major problem with Perl: most Perl programmers never get past the basic hacking stage, and Perl has a reputation of being a difficult language to maintain because of the sheer number of awful Perl scripts out there.
Ruby hasn't been as popular as Perl or Python, but it has its fans. The big power of Ruby is something called Ruby on Rails. Rails is a programming framework that allows you to quickly build web-based applications. You first should learn Ruby, then learn the Rails component.
PHP used to be the glue that held the web together. I believe its popularity has been dropping in recent years as newer web development platforms have come up. PHP is the mob rule of programming: unlike all of the other languages, it doesn't have a single champion shaping the language, and it has developed a lot of detritus. It is sloppy and can be hard to maintain.
Still, it is the basis of a lot of web based forums and content management systems like Joomla. After all, PHP was designed to live inside of webpages.
If you don't want to learn script programming, you should try Automator. It comes with a Graphical User Interface and can allow you to build programs to do all sorts of tasks. People who have gotten into Automator have done some amazing stuff with it.
Whatever you choose, you can find some tutorials and information on the web, but if you really want to learn, you should get some good manuals. O'Reilly and Associates has a long storied history with computing and the Internet. Tim O'Reilly (the brains behind the company) has been producing computer manuals since 1978. Almost all of my books are O'Reilly books. Manning is another one, and there are dozens more. Go to a real bookstore before Amazon drives them all out of business and peruse their shelves for something you like.
You have to combine File::Find with File::Copy to meet those requirements.
If you use Perl, you should look into Path::Class and File::Copy. That'll handle your path parsing. You'd want to iterate over the filesystem (Path::Class::Dir->next will be your friend here), then build a destination file with something like:
my $curr_file = file('whatever_path...');
my $dest_dir = dir('some other path...');
my $dest_file = file($dest_dir, $curr_file->dir->basename . '-' . $curr_file->basename);
then use File::Copy to move:
move($curr_file, $dest_file);
Start by looking up those methods/subroutines and their usage; then you just need to code up the iteration.
I was able to accomplish this using the following code (changing the pattern inside the glob function to different levels of nesting, '*/*', '*/*/*', and so on, manually):
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
use Cwd 'abs_path';

foreach my $files (glob('*/*/*')) {
    my $d = basename(dirname(abs_path($files)));    # name of the containing folder
    my ($file, $dir, $ext) = fileparse($files);
    my $initialName = "$dir$file";
    my $finalName   = "$dir$d$file";                # prefix the file with its folder's name
    print("$initialName --> $finalName\n");
    rename($initialName, $finalName);
}
Then it was as simple as moving the files with a move function into a new folder, like so:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
use File::Copy;

foreach my $files (glob('*/*/*')) {
    my ($file, $dir, $ext) = fileparse($files);
    my $initialLocation = "$dir$file";
    my $finalLocation   = "AllFiles/$file";    # the destination folder
    print("$finalLocation\n");
    move($initialLocation, $finalLocation);
}
I know the question is very subjective, but I cannot form it in a much better manner. I would appreciate some guidance.
As a developer, I often feel how much easier things would be if I had some tool for doing a reasonably complex task on a log file, a set of source files, some data set, etc.
Clearly, when the same type of task needs to be done repetitively and when speed is critical, I can think of writing it in C++/Java.
But most of the time, it is some kind of text processing or file searching activity that I want to do only once, just to perform a quick check or some preliminary analysis. In such cases, I would be better off doing the task manually than writing it in C++/Java, but I could surely do it in seconds if I knew a language like Bash/Python/Perl/Ruby/sed/awk.
I know this whole question is subjective and there is no objective definite answer, but in general what does the developer community feel as a whole? What subset of these languages should I know so that I can do all these kinds of tasks easily and improve my productivity.
Would Perl be a good choice?
It is a superset of sed/awk, plus it allows you to write terse code; I can get things done with fewer lines of code. It is neither readable nor easily maintainable, but I never wanted those features anyway.
The only thing that bothers me is the negative publicity that Perl has gotten lately; it has been criticized a lot by the Ruby/Python community. Also, I am not sure if it can replace bash scripting totally.
If not, then is Perl+Bash a good combination for these kind of tasks?
I tend to do a lot of processing with Ruby. It has all the functionality of Perl, but I find it to be a little more readable. Both Perl and Ruby support the -n, -e, and -p options:
-e 'command' one line of script. Several -e's allowed. Omit [programfile]
-n assume 'while gets(); ... end' loop around your script
-p assume loop like -n but print line also like sed
For example in ruby
seq 1 4 | ruby -ne 'BEGIN{ $product = 1 }; $product *= $_.to_i; END { puts $product }'
24
Which is very similar to perl
seq 1 4 | perl -ne 'BEGIN{ $product = 1 }; $product *= $_; END { print $product }'
24
In Python (2.x, where reduce is a built-in), the same would look like this:
seq 1 4 | python -c 'import sys; print reduce(lambda x,y : int(x)*int(y), sys.stdin.read().splitlines(True))'
24
While it's possible to do the above in bash/awk/sed, you'll be limited by their lack of more advanced features.
Python is more expressive and readable than bash, but requires more setup: import os and so on. For simple tasks, bash is quicker, which is what matters most here. And don't underestimate the power of input/output redirection in bash!
I would use Perl over a bash/sed/awk combination. Why?
You only have the one executable, rather than spawning off multiple executables to do work.
You can make use of a wide range of Perl modules to do almost anything (see CPAN for the modules available)
In fact I would recommend any scripting language over the shell/awk/sed combination, for the same reasons. I don't have a problem with sed/awk per se, but as your required solutions become more complex/lengthy, I find the more powerful scripting languages more scalable, and (to some degree) refactorable for re-use.
I find Python+Bash a very nice combo.
I usually use Python because it's very readable and maintainable. And because there are lots of online documentation available.
By the way, I suggest you read http://www.ibm.com/developerworks/aix/library/au-python/
With shell scripting, all you ever need to know is a bit of bash/sh and a lot of awk: bash for calling your commands, and awk for processing. Some of the Unix tools below, despite being widely used, are not strictly necessary, because awk can do their jobs:
1) cut
2) sed
3) wc
4) (e)grep
5) cat
6) head
7) etc.
and a few others whose functions overlap. In the end, your script will not be cluttered with redundant tools that slow it down.
Perl/Python are very useful sysadmin tools as well. Both of them do similar things and have libraries that help with sysadmin tasks. The only significant difference is, aesthetically speaking, the appearance of your code written in them.
You can learn about Ruby if you want, but in terms of sysadmin, I would say go for Perl/Python instead.
In the time it took you to write those few paragraphs, you could have already learned enough Python to make your life significantly better.
Anyone who already knows C++ or Java can become productive in Python in about 4 hours. Just read the tutorial.
My first port of call is bash with sed to provide regular expression processing. You can do a lot with a bash for loop, grep and some regular expressions.
It's worth learning regular expressions if you don't already know them. An editor that lets you use them (like vi) is extremely useful when manipulating files (e.g. you have a set of data extracted from a logfile and you need to turn it into a set of SQL statements).
If it takes me more than a few minutes to figure out how to do whatever parsing task I'm trying to do in bash/sed, I usually end up using Perl instead. As suggested by ikkebr, Python is probably as good as (or better than) Perl; I just had the misfortune to learn Perl first, so am much more familiar with it. If I were to start again, I think I'd learn Python instead.
Do you know if there's any tool for compiling bash scripts?
It doesn't matter if that tool is just a translator (for example, something that converts a bash script to a C program), as long as the translated result can be compiled.
I'm looking for something like shc (it's just an example -- I know that shc doesn't work as a compiler). Are there any other similar tools?
A Google search brings up CCsh, but it will set you back $50 per machine for a license.
The documentation says that CCsh compiles Bourne shell (not bash ...) scripts to C code, and that it understands how to replicate the functionality of 50-odd standard commands, avoiding the need to fork them.
But CCsh is not open source, so if it doesn't do what you need (or expect) you won't be able to look at the source code to figure out why.
I don't think you're going to find anything, because you can't really "compile" a shell script. You could write a simple script that converts all lines to calls to system(3), then "compile" that as a C program, but this wouldn't give a major performance boost over anything you're currently using, and it might not handle variables correctly. Don't do this.
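Just to show the shape of the idea (and why it's unattractive), here is a toy Ruby sketch of that translation; the filenames are invented and the quoting is far too naive for real shell code:

# wrap each line of a shell script in a system(3) call and emit a C file
File.open('script.c', 'w') do |c|
  c.puts '#include <stdlib.h>'
  c.puts 'int main(void) {'
  File.foreach(ARGV[0] || 'input.sh') do |line|
    line.chomp!
    next if line.empty? || line.start_with?('#')
    c.puts "    system(#{line.inspect});"   # Ruby's inspect approximates C string quoting
  end
  c.puts '    return 0;'
  c.puts '}'
end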
The problem with "compiling" a shell script is that shell scripts just call external programs.
In theory you could actually get a good performance boost.
Think of all the
if [ x"$MYVAR" == x"TheResult" ]; then echo "TheResult Happened" fi
(note the invocation of test, then echo, as well as the interpretation that needs to be done)
which could be replaced by
if ( !strcmp(myvar, "TheResult") ) printf("TheResult Happened\n");
In C: no process launching, no having to do path searching. Lots of goodness.