How do I wrap a compiled command line tool for use in Ruby? - ruby

I have compiled and tested an open-source command line SIP client for my machine, which we can assume has the same architecture as all other machines in our shop. By this I mean that I have successfully passed the compiled binary to others in the shop and they were able to use it.
The tool has a fairly esoteric invocation, a simple bash script piped to it prior to execution as follows:
(sleep 3; echo "# 1"; sleep 3; echo h) | pjsua sip:somephonenumber#ip --flag_1 val --flag_2 val
Note that the leading bash script is an essential part of the functioning of the program, and the line itself seems to be the best-practice way to invoke it.
In the framing of my problem I am considering the following:
I don't think I can expect very many others in the shop to compile the binary for themselves.
Having a common system architecture in the shop, it is reasonable to think that a repo can house the most up-to-date version.
Having a way to invoke the tool using Ruby would be the most useful and the most accessible to the most people.
The leading bash script being passed needs to be wholly extensible. It encodes a modifiable "scenario", e.g. in this case:
Call
Wait three seconds
Press 1
Wait three seconds
Hang up
There may be as many as a dozen flags. Possibly a configuration file.
Is it a reasonable practice to create a gem that carries at its core a command line tool that has been previously compiled?

It seems reasonable to create a gem that uses a command line tool. The only thing I'd say is to check that the command is available using system('which pjsua') and raise an informative error if it hasn't been installed.
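For instance, a minimal sketch of such a wrapper might look like the following. The class name, method names, scenario format, and the use of Open3 are illustrative assumptions rather than an existing gem's API; only the which pjsua check and the pjsua invocation come from the question and answer above.

require "open3"

# Minimal sketch of a wrapper around the pre-compiled pjsua binary.
# Class name, method names and the scenario format are made up for illustration.
class PjsuaWrapper
  def initialize(binary: "pjsua")
    @binary = binary
    # Fail early with an informative error if the tool isn't on the PATH.
    raise "#{binary} not found in PATH" unless system("which #{binary} > /dev/null 2>&1")
  end

  # scenario: an array of [delay_in_seconds, keys_to_send] pairs, e.g.
  #   [[3, "# 1"], [3, "h"]]  ->  wait 3s, press 1, wait 3s, hang up
  def run(sip_uri, scenario, flags = {})
    args = flags.flat_map { |name, value| ["--#{name}", value.to_s] }
    Open3.popen2e(@binary, sip_uri, *args) do |stdin, output, wait_thr|
      scenario.each do |delay, keys|
        sleep delay
        stdin.puts keys
      end
      stdin.close
      [output.read, wait_thr.value]   # combined stdout/stderr and exit status
    end
  end
end

# Hypothetical usage:
# PjsuaWrapper.new.run("sip:somephonenumber#ip", [[3, "# 1"], [3, "h"]],
#                      flag_1: "val", flag_2: "val")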

So it seems like the vocabulary I was missing is extension. Here is a great Stack Overflow discussion on wrapping up a Ruby C extension in a Ruby gem.
Here is a link to the Gem Guides on creating Gems with Extensions.
Apparently not only is it done, but there is an established set of best practices around it.
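For reference, the key piece of the Gems with Extensions approach is pointing spec.extensions at an extconf.rb so RubyGems compiles the extension on each user's machine at install time. A minimal, hypothetical gemspec sketch (all names are placeholders):

Gem::Specification.new do |spec|
  spec.name    = "pjsua_wrapper"                 # placeholder name
  spec.version = "0.1.0"
  spec.summary = "Ruby wrapper around the pjsua command line SIP client"
  spec.authors = ["Your Name"]
  spec.files   = Dir["lib/**/*.rb", "ext/**/*"]
  # RubyGems runs this extconf.rb when the gem is installed and builds the
  # extension there, so nothing pre-compiled has to be shipped in the gem.
  spec.extensions = ["ext/pjsua_wrapper/extconf.rb"]
end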

Related

running external program in TCL

After developing an elaborate TCL code to do smoothing based on Gabriel Taubin's smoothing without shape shrinkage, the code runs extremely slow. This is likely due to the size of the unstructured grid I am smoothing. I have to use TCL because the grid generator I am using is Pointwise and Pointwise's "macro language" is TCL based. I'm still a bit new to this, but is there a way to run an external code from TCL where TCL sends the data to the software, the software runs the smoothing operation, and the output is sent back to TCL to update the internal data inside the Pointwise grid generation tool? I will be writing the smoothing tool in another language which is significantly faster.
There are a number of options to deal with code that "runs extremely slow". I would start with determining how fast it must run. Are we talking milliseconds, seconds, minutes, hours or days? Next it is necessary to determine which part is slow. The time command is useful here.
But assuming you have decided that more performance is necessary and you have some metrics for your current program so you will know if you are improving, here are some things to try:
Try to improve the existing code. If you are using the expr command, make sure your expressions are given to the command as a single argument enclosed in braces. Beginners sometimes forget this and the improvement can be substantial (see the short sketch after this list of options).
Use the critcl package to code parts of the program in "C". Critcl allows you to put "C" code directly into your Tcl program and have that code pulled out, compiled and loaded into your program.
Write a traditional "C" based Tcl extension. Tcl is very extensible and has a clean API for building extensions. There is sample code for extensions and source to many extensions is readily available.
Write a program to do the time consuming part of the job and execute it as a separate process, obtaining the output back into your Tcl script. This is where the exec command comes in useful. Presumably you will have to write data out to somewhere the program can get it and read the output of the program back into your Tcl script. If you want to get fancy you can do two-way communications across a localhost TCP port. The setup in Tcl is quite simple. The "C" code in a program to do it is a bit more tedious, but many examples exist out on the Internet.
Which option to choose depends very much on how much improvement is required and the amount of code that must be improved. You haven't given us much idea what those things are in your case, so all I can offer is rather vague general solutions.
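To illustrate the first suggestion above, here is a tiny sketch of the braced vs. unbraced expr call:

set x 7
# Unbraced: Tcl substitutes $x first, and expr has to re-parse the resulting
# string on every call (and it is vulnerable to double substitution).
set y [expr $x * $x + 2 * $x + 1]
# Braced: the expression is byte-compiled once and evaluated much faster.
set y [expr {$x * $x + 2 * $x + 1}]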
For a loadable module, you can write a Tcl extension. An example is here:
File Last Modified Time with Milliseconds Precision
Alternatively, just write your program to take input from a file. Have Tcl write the input data to the file, run the program, then collect the output from the external program.
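A small sketch of that file-based round trip from a Tcl/Pointwise macro; the smoother binary and file names are placeholders, and pointData is assumed to hold the serialized grid:

# Write the grid data out where the external smoother can read it.
set fh [open "points.in" w]
puts $fh $pointData
close $fh

# exec blocks until the external program exits; a failure raises a Tcl error.
if {[catch {exec ./taubin_smooth points.in points.out} msg]} {
    error "external smoother failed: $msg"
}

# Read the smoothed result back for use inside the Pointwise macro.
set fh [open "points.out" r]
set smoothed [read $fh]
close $fh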

See the current line being executed of a ruby script

I have a Ruby script, apparently correct, that sometimes stops working (probably on some calls to PostgreSQL through the pg gem). The problem is that it freezes but doesn't produce any error, so I can't see the line number and I always have to isolate the line by using puts "ok1", puts "ok2", etc. and see where the script stops.
Is there any better way to see the current line being executed (without changing the script)? And maybe the current stack?
You could investigate ruby-debug, a project that has been rewritten several times for several different versions of Ruby; it should allow you to step through your code line by line. I personally prefer printf debugging in a lot of cases though. Also, if I had to take an absolutely random guess at your problem, I might investigate whether or not you're running into a race condition and/or deadlock in your DB.
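If a tiny change to the script is acceptable after all, another low-tech option is to trap a signal and dump the stack when the script looks stuck; a minimal sketch (the signal choice is arbitrary):

# Add near the top of the script; when it appears to hang, run
#   kill -USR1 <pid>
# from another terminal to see where it currently is.
Signal.trap("USR1") do
  warn "=== current backtrace ==="
  warn caller.join("\n")   # stack of the main thread at the moment of the signal
end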

BASH shell process control - any other examples of controlling/scheduling work

I've inherited a medium sized project in which the main (batch) program is fed work through a large set of shell scripts that do a lot of process control (waiting for process to complete, sleeping, checking for conditions, etc) [ and reprocessed through perl scripts ]
Are there other examples of process control by shell scripts? I would like to see what other people have done as a comparison. (As I'm not really fond of the 6,668 line shell script.)
It may turn out that the current program works and doesn't need to be messed with, or that for maintenance reasons it's too cumbersome and doing it another way will be easier to maintain; either way, I need other examples to compare against.
To reduce the "generality" of the question here's an example of what I'm looking for: procsup
The Inquisitor project relies extensively on process control from shell scripts. You might want to see its directory with the main function set or its directory with tests (i.e. slave processes) that it runs.
This is quite a general question, and therefore giving specific answers may be a little bit difficult. (And you won't be happy with a 5,000-line example.) Most probably the architecture of your application is faulty and requires a rather complete rework.
As you probably already know, process control with bash is pretty simple:
./test_script.sh &
test_script_pid=$!
wait $test_script_pid # waits until it's done
./test_script2.sh
echo $? # Prints return code of previous command
You can do the same things with, for example, Python's subprocess module (or with Perl, obviously). If you have a complex architecture with a large number of different programs, then the process is obviously non-trivial.
That is an awfully big shell script. Have you considered refactoring it?
From the sound of it, there may be a lot of instances where you could replace several lines of code with a call to a shell function. If you can simplify the code in this way, then it will be easier to see where there are errors in the logic.
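For instance, a recurring "run a step, log it, abort on failure" pattern can often collapse into one function; a sketch with made-up step names:

#!/usr/bin/env bash
# Factor the repeated "run step, log, abort on failure" pattern into a function.
run_step() {
    local name=$1; shift
    echo "$(date '+%H:%M:%S') starting: $name"
    if ! "$@"; then
        echo "$(date '+%H:%M:%S') FAILED:   $name" >&2
        exit 1
    fi
}

run_step "extract feed"  ./extract_feed.sh
run_step "load batch"    ./load_batch.sh nightly
run_step "post-process"  perl ./reprocess.pl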
I've used this tactic successfully with a humongous Perl script and it turned out to have some serious logic errors and to be a security risk because it had embedded passwords that were obfuscated in an easily reversible way. The passwords that were exposed could have been used by persons unknown (well, a disgruntled employee) to shut down an entire global network.
Some managers were leaning towards making a security exception because this script was so important, but when the logic error was explained and it was clear that this script was providing incorrect data, it was decided that no data was better than dirty data. The guy who wrote that script taught himself programming with a Perl book and the writing of the script.

Why is this Perl require line taking so much time?

I have a Perl script that runs via a system() command from C. On a specific site (SunOS 5.10), when that script is run, it nearly always takes 6 seconds or more. On other sites, it runs pretty much instantly (0.1s). If I run the script manually, i.e. not from the C code, it also runs instantly.
I eventually tracked the slowness down (by spitting out the time a whole bunch in a lot of different places) to a single require line. The file that it is requiring is another Perl script we wrote. The script consists of a single require (this file here), 3 scalars that are assigned integer values, and a handful of time/date conversion routines. The file ends with a 1;. That single require appears to take as much as 6 seconds on occasion, but as I said, not always even on the same machine.
I'm absolutely stumped here. My only last thought is to turn on profiling, but the site doesn't have Devel::Profiler and my only other option (that I know of) would be to add it to the Perl command, which would require me to alter and recompile the C code (doable but non-trivial).
Anybody have ANY idea what could be going on here? I don't think I can/want to put the entire date.pl that is being required, but it's pretty much exactly as I described; I could answer any questions about it that you have.
Thanks in advance.
You might be interested in A Timely Start by Jean-Louis Leroy. He had a similar problem and tracked it down to a long and deep module search path where perl usually found the modules in the last entries in @INC.
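A quick way to test that theory on the slow site, assuming the file is pulled in with require 'date.pl' as the question describes, is to time the require and print the search path from inside the script:

use strict;
use warnings;
use Time::HiRes qw(time);

# Time the suspect require and show where perl searched for (and found) it.
my $t0 = time;
require 'date.pl';
printf STDERR "require took %.3f seconds\n", time() - $t0;

print STDERR "module search path (\@INC):\n";
print STDERR "  $_\n" for @INC;
print STDERR "loaded from: $INC{'date.pl'}\n";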
Six seconds is a long time. Have you checked what your network is doing during this?
My first thought was that spawning the new process when using the system() command could be the problem, but six seconds is too long.
I don't know much about perl, but I could imagine that, for whatever reason, accessing the time module could invoke a call to a network time server, just to get synchronized. Maybe this takes so long, or maybe it is hitting a timeout.
It could be that this only happens for a newly spawned process -- hence only when you use the system() command.
just wild guessing...
So, this does nothing to answer your question directly, but please tell me that you're not actually running on perl 4? Assuming you're on perl 5, you could remove the entire file and replace the require with use POSIX qw(ctime) to get the version that comes with Perl.
If you do have to support perl4, I'll merely grumble something about version 5 being fifteen years old now and go away. :)

How can I fork a background processes from a Perl CGI script on Windows?

I've had some trouble forking off processes from a Perl CGI script when running on Windows. The main issue seems to be that 'fork' is emulated when running on Windows, and doesn't actually seem to create a new process (just another thread in the current one). This means that web servers (like IIS) which are waiting for the process to finish continue waiting until the 'background' process finishes.
Is there a way of forking off a background process from a CGI script under Windows? Even better, is there a single function I can call which will do this in a cross platform way?
(And just to make life extra difficult, I'd really like a good way to redirect the forked processes output to a file at the same time).
If you want to do this in a platform independent way, Proc::Background is probably the best way.
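A minimal sketch of that approach; the command being launched is a placeholder:

use strict;
use warnings;
use Proc::Background;

# Launch the long-running job and return to the web server immediately.
my $job = Proc::Background->new('perl', 'background_job.pl');

# ... print the CGI response here and let this script exit ...

# If you ever need to check on the job later:
# print "still running\n" if $job->alive;
# $job->wait;   # block until it finishes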
Use Win32::Process->Create with the DETACHED_PROCESS parameter (a minimal sketch follows the perlfork excerpt below).
perlfork:

Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.

On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.
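A minimal sketch of the Win32::Process approach suggested above; the perl path and script name are placeholders:

use strict;
use warnings;
use Win32;
use Win32::Process;

my $perl_exe = 'C:\\Strawberry\\perl\\bin\\perl.exe';   # placeholder path

my $child;
Win32::Process::Create(
    $child,
    $perl_exe,
    'perl background_job.pl',
    0,                  # do not let the child inherit the CGI process's handles
    DETACHED_PROCESS,   # detach from the parent so IIS stops waiting on it
    '.',
) or die 'Create failed: ' . Win32::FormatMessage(Win32::GetLastError());

# The CGI script can now finish its response and exit; the child keeps running.
# Redirecting the child's output to a file has to be handled inside
# background_job.pl itself (e.g. by reopening STDOUT/STDERR there).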
I've found real problems with fork() on Windows, especially when dealing with Win32 Objects in Perl. Thus, if it's going to be Windows specific, I'd really recommend you look at the Thread library within Perl.
I use this to good effect accepting more than one connection at a time on websites using IIS, and then using even more threads to execute different scripts all at once.
This question is very old, and the accepted answer is correct. However, I just got this to work, and figured I'd add some more detail about how to accomplish it for anyone who needs it.
The following code exists in a very large Perl CGI script. This particular subroutine creates tickets in multiple ticketing systems, then uses the returned ticket numbers to make an automated call via Twilio services. The call takes a while, and I didn't want the CGI users to have to wait until the call ended to see the output from their request. To that end, I did the following:
(All the standard CGI code comes first; it calls the subroutine needed, and then:)
my $randnum = int(rand(100000));
my $callcmd = $progdir_path . "/aoff-caller.pl --uniqueid $uuid --region $region --ticketid $ticketid";
my $daemon = Proc::Daemon->new(
    work_dir     => $progdir_path,
    child_STDOUT => $tmpdir_path . '/stdout.txt',
    child_STDERR => $tmpdir_path . '/stderr.txt',
    pid_file     => $tmpdir_path . '/' . $randnum . '-pid.txt',
    exec_command => $callcmd,
);
my $pid = $daemon->Init();
exit 0;
(kill CGI at the appropriate place)
I am sure that the random number generated and attached to the pid file name is overkill, but I have no interest in creating issues that are extremely easily avoided. Hopefully this helps someone looking to do the same sort of thing. Remember to add use Proc::Daemon at the top of your script, copy the code and adjust the paths and names to match your program, and you should be good to go.
