How to get a return value from a call to a Perl sub in a bash script?

I am calling a perl script from a bash script. I want to obtain the return value that the perl script returns in the bash script in order to act on it. When I have the following, the output is a blank line to the console when I expect it to be "good".
bashThing.sh
#!/bin/bash
ARG="valid"
VAL=$(perl -I. -MperlThing -e "perlThing::isValid ${ARG}")
echo $VAL
perlThing.pm
#! /usr/bin/perl -w
use strict;
package perlThing;
sub isValid
{
    my $arg = shift;
    if($arg == "valid")
    {
        return "good";
    }
    else
    {
        return "bad";
    }
}
1;

You didn't have Perl warnings fully enabled, or you would have seen several warnings. I enabled warnings both in bash (with -w on the Perl one-liner) and in Perl (with use warnings;). The -w on the shebang line is ignored in the .pm file, since the module is not executed as a script.
isValid returns a string, but you were ignoring the returned value. To fix that, use print in the one-liner.
You also needed to pass the value to isValid properly, quoted and parenthesized.
Lastly, you need to use eq instead of == for string comparison.
bash:
#!/bin/bash
ARG="valid"
VAL=$(perl -w -I. -MperlThing -e "print perlThing::isValid(q(${ARG}))")
echo $VAL
Perl:
use strict;
use warnings;
package perlThing;
sub isValid
{
    my $arg = shift;
    if($arg eq "valid")
    {
        return "good";
    }
    else
    {
        return "bad";
    }
}
1;

If you expect the value good or bad in $VAL, then the function should print it, not just return it.
Apart from printed output, a perl process can pass only an integer exit status back to the calling program. You can check that with $? in the bash script; this variable stores the exit status of the last process run.
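For example, a minimal sketch of the exit-status approach, reusing the corrected perlThing.pm above: encode the result as an exit status and test $? in bash.
#!/bin/bash
ARG="valid"
# Exit 0 for "good", 1 for "bad"; check the status instead of capturing output.
perl -I. -MperlThing -e 'exit(perlThing::isValid($ARGV[0]) eq "good" ? 0 : 1)' -- "$ARG"
if [ $? -eq 0 ]; then
    echo "good"
else
    echo "bad"
fi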

Related

Perl module File::Slurp with STDIN piped

I have just tried using the following Perl script to do some text substitution using the File::Slurp module. It works fine on a single file given as an argument at the command line (BASH).
#!/opt/local/bin/perl -w
use File::Slurp qw( edit_file_lines );
foreach my $argnum (0 .. $#ARGV) {
    edit_file_lines {
        s/foo/bar/g;
        print $_;
    } $ARGV[$argnum];
}
I would like to alter it to cope also with pipes (i.e. STDIN), so that it can be in the middle of a series of piped operations:
for example:
command blah|....|my-perl-script|sort|uniq|wc....
What is the best way to change the Perl script to allow this, whilst retaining the existing ability to work with single files on the command line?
To have your script work in a pipeline, you could check if STDIN is connected to a tty:
use strict;
use warnings;
use File::Slurp qw( edit_file_lines );
sub my_edit_func { s/foo/bar/g; print $_ }
if ( !(-t STDIN) ) {
    while (<>) { my_edit_func }
}
else {
    foreach my $argnum (0 .. $#ARGV) {
        edit_file_lines { my_edit_func } $ARGV[$argnum];
    }
}
See perldoc -f -X for more information on the -t file test operator.
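For comparison, the same check in bash would be a [ -t 0 ] test (a sketch; -t 0 asks whether file descriptor 0, standard input, is attached to a terminal):
if [ -t 0 ]; then
    echo "stdin is a terminal; expect filenames as arguments"
else
    echo "stdin is a pipe or redirection; act as a filter"
fi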
All you need is the following:
local $^I = '';
while (<>) {
    s/foo/bar/g;
    print;
}
This is basically the same as perl -i -pe 's/foo/bar/', except it doesn't warn when STDIN is used.
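Hypothetical usage of this version, assuming it is saved as my-perl-script and made executable:
command blah | ./my-perl-script | sort | uniq    # acts as a filter on STDIN
./my-perl-script file1.txt file2.txt             # $^I = '' edits the named files in place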

Passing bash variable into inline perl script

See the second-to-last line of my script. For some reason Perl is not able to access the variable $perlPort. How can I fix this? Note: $perlPort is a bash variable defined earlier in my bash script.
perl -e '
{
    package MyWebServer;

    use HTTP::Server::Simple::CGI;
    use base qw(HTTP::Server::Simple::CGI);

    my %dispatch = (
        "/" => \&resp_hello,
    );

    sub handle_request {
        my $self = shift;
        my $cgi  = shift;

        my $path = $cgi->path_info();
        my $handler = $dispatch{$path};

        if (ref($handler) eq "CODE") {
            print "HTTP/1.0 200 OK\r\n";
            $handler->($cgi);
        } else {
            print "HTTP/1.0 404 Not found\r\n";
            print $cgi->header,
                  $cgi->start_html("Not found"),
                  $cgi->h1("Not found"),
                  $cgi->end_html;
        }
    }

    sub resp_hello {
        my $cgi = shift;    # CGI.pm object
        return if !ref $cgi;

        my $who = $cgi->param("name");

        print $cgi->header,
              $cgi->start_html("Hello"),
              $cgi->h1("Hello Perl"),
              $cgi->end_html;
    }
}

my $pid = MyWebServer->new($perlPort)->background();
print "Use 'kill $pid' to stop server.\n";'
export perlPort
perl -e '
...
my $pid = MyWebServer->new($ENV{perlPort})->background();
'
You can use the -s switch to pass variables. See http://perldoc.perl.org/perlrun.html.
perl -se '
...
my $pid = MyWebServer->new($perlPort)->background();
...' -- -perlPort="$perlPort"
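A minimal, self-contained demonstration of -s (the switch name here is made up):
perl -se 'print "port is $port\n"' -- -port=8080
# prints: port is 8080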
You can still pass command-line arguments to your script. Replace $perlPort with $ARGV[0], then call your script as
perl -e $' ...
my $pid = MyWebServer->new($ARGV[0])->background();
print "Use \'kill $pid\' to stop server.\n";' "$perlPort"
Note the other problem: You can't include single quotes inside a single-quoted string in bash. You can work around this by using a $'...'-quoted string as the argument to Perl, which can contain escaped single quotes. If your script doesn't need to read from standard input, it would be a better idea to have perl read from a here-document instead.
perl - "$perlPort" <<'EOF'
{
package MyWebServer;
use HTTP::Server::Simple::CGI;
...
my $pid = MyWebServer->new($ARGV[0])->background();
print "Use 'kill $pid' to stop server.\n";
EOF
The best idea is to simply use a script file instead of trying to construct the script on the command line.
Alternatively, you can end the single-quoted string just long enough for the shell to expand the variable, then resume quoting (this splices the unquoted value into the program text):
perl -e '
...
my $pid = MyWebServer->new('$perlPort')->background();
...

How to find out if a command exists in a POSIX compliant manner?

See the discussion at Is `command -v` option required in a POSIX shell? Is posh compliant with POSIX?. It describes that both type and the command -v option are optional in POSIX.1-2004.
The answer marked correct at Check if a program exists from a Bash script doesn't help either. Just like type, hash is also marked as XSI in POSIX.1-2004. See http://pubs.opengroup.org/onlinepubs/009695399/utilities/hash.html.
Then what would be a POSIX compliant way to write a shell script to find if a command exists on the system or not?
How do you want to go about it? You can look for the command in the directories in the current value of $PATH; you could look in the directories specified by default for the system PATH (getconf PATH, as long as getconf itself exists on PATH).
Which implementation language are you going to use? (For example: I have a Perl implementation that does a decent job finding executables on $PATH — but Perl is not part of POSIX; is it remotely relevant to you?)
Why not simply try running it? If you're going to deal with Busybox-based systems, lots of the executables can't be found by searching — they're built into the shell. The major caveat is if a command does something dangerous when run with no arguments — but very few POSIX commands, if any, do that. You might also need to determine what command exit statuses indicate that the command is not found versus the command objecting to not being called with appropriate arguments. And there's little guarantee that all systems will be consistent on that. It's a fraught process, in case you hadn't gathered.
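If you do decide to search $PATH yourself, a sketch in pure POSIX sh might look like the following (only test, printf, and IFS-based field splitting are used, all of which are mandatory; treat it as illustrative, not definitive):
#!/bin/sh
# Look for $1 as an executable regular file in each directory of $PATH.
# An empty PATH entry conventionally means the current directory.
cmd=$1
IFS=:
for dir in $PATH; do
    [ -z "$dir" ] && dir=.
    if [ -f "$dir/$cmd" ] && [ -x "$dir/$cmd" ]; then
        printf '%s\n' "$dir/$cmd"
        exit 0
    fi
done
echo "$cmd: not found" >&2
exit 1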
Perl implementation pathfile
#!/usr/bin/env perl
#
# @(#)$Id: pathfile.pl,v 3.4 2015/10/16 19:39:23 jleffler Exp $
#
# Which command is executed
# Loosely based on 'which' from Kernighan & Pike "The UNIX Programming Environment"
#use v5.10.0; # Uses // defined-or operator; not in Perl 5.8.x
use strict;
use warnings;
use Getopt::Std;
use Cwd 'realpath';
use File::Basename;
my $arg0 = basename($0, '.pl');
my $usestr = "Usage: $arg0 [-AafhqrsVwx] [-p path] command ...\n";
my $hlpstr = <<EOS;
-A Absolute pathname (determined by realpath)
-a Print all possible matches
-f Print names of files (as opposed to symlinks, directories, etc)
-h Print this help message and exit
-q Quiet mode (don't print messages about files not found)
-r Print names of files that are readable
-s Print names of files that are not empty
-V Print version information and exit
-w Print names of files that are writable
-x Print names of files that are executable
-p path Use PATH
EOS
sub usage
{
print STDERR $usestr;
exit 1;
}
sub help
{
print $usestr;
print $hlpstr;
exit 0;
}
sub version
{
my $version = 'PATHFILE Version $Revision: 3.4 $ ($Date: 2015/10/16 19:39:23 $)';
# Beware of RCS hacking at RCS keywords!
# Convert date field to ISO 8601 (ISO 9075) notation
$version =~ s%\$(Date:) (\d\d\d\d)/(\d\d)/(\d\d) (\d\d:\d\d:\d\d) \$%\$$1 $2-$3-$4 $5 \$%go;
# Remove keywords
$version =~ s/\$([A-Z][a-z]+|RCSfile): ([^\$]+) \$/$2/go;
print "$version\n";
exit 0;
}
my %opts;
usage unless getopts('AafhqrsVwxp:', \%opts);
version if ($opts{V});
help if ($opts{h});
usage unless scalar(@ARGV);
# Establish test and generate test subroutine.
my $chk = 0;
my $test = "-x";
my $optlist = "";
foreach my $opt ('f', 'r', 's', 'w', 'x')
{
if ($opts{$opt})
{
$chk++;
$test = "-$opt";
$optlist .= " -$opt";
}
}
if ($chk > 1)
{
$optlist =~ s/^ //;
$optlist =~ s/ /, /g;
print STDERR "$arg0: mutually exclusive arguments ($optlist) given\n";
usage;
}
my $chk_ref = eval "sub { my(\$cmd) = \@_; return -f \$cmd && $test \$cmd; }";
my @PATHDIRS;
my %pathdirs;
my $path = defined($opts{p}) ? $opts{p} : $ENV{PATH};
#foreach my $element (split /:/, $opts{p} // $ENV{PATH})
foreach my $element (split /:/, $path)
{
$element = "." if $element eq "";
push @PATHDIRS, $element if $pathdirs{$element}++ == 0;
}
my $estat = 0;
CMD:
foreach my $cmd (@ARGV)
{
if ($cmd =~ m%/%)
{
if (&$chk_ref($cmd))
{
print "$cmd\n" unless $opts{q};
next CMD;
}
print STDERR "$arg0: $cmd: not found\n" unless $opts{q};
$estat = 1;
}
else
{
my $found = 0;
foreach my $directory (@PATHDIRS)
{
my $file = "$directory/$cmd";
if (&$chk_ref($file))
{
$file = realpath($file) if $opts{A};
print "$file\n" unless $opts{q};
next CMD unless defined($opts{a});
$found = 1;
}
}
print STDERR "$arg0: $cmd: not found\n" unless $found || $opts{q};
$estat = 1 unless $found;
}
}
exit $estat;
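Hypothetical usage of the script above, assuming it is saved as pathfile and made executable on PATH:
pathfile -q cc && echo "cc is available"    # quiet check; rely on the exit status
pathfile -a perl                            # print every match for perl along $PATH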

Invoke Function through Case statements

Will the statements below work? I am trying to invoke a function through a case statement.
#!/bin/bash
function exit
{
    ...
    ...
    ...
}
function start
{
    ...
    ...
}
Case $input in
    -book) $(exit) ;;
    -goal) $(start) ;;
    *) break ;;
esac
Is the syntax correct?
If you have defined a function:
myfunc() {
echo 'hi'
}
then you can invoke that function in a case statement without a capturing expression. You do it just like any other command:
case "$param" in
expr) myfunc;;
*) echo 'nope';;
esac
You need not use a capturing expression unless you mean it. In your case, what you have would attempt to execute the output of the function as a command itself:
$ double_down() {
> echo 'ping google.com'
> }
$ $(double_down)
PING google.com (74.125.226.169): 56 data bytes
It's possible, but it seems unlikely, that you really want this.

Return value in a Bash function

I am working with a bash script and I want to execute a function to print a return value:
function fun1(){
return 34
}
function fun2(){
local res=$(fun1)
echo $res
}
When I execute fun2, it does not print "34". Why is this the case?
Although Bash has a return statement, the only thing you can specify with it is the function's own exit status (a value between 0 and 255, 0 meaning "success"). So return is not what you want.
You might want to convert your return statement to an echo statement - that way your function output can be captured using $() command substitution, which seems to be exactly what you want.
Here is an example:
function fun1(){
echo 34
}
function fun2(){
local res=$(fun1)
echo $res
}
Another way to get the return value (if you just want to return an integer 0-255) is $?.
function fun1(){
return 34
}
function fun2(){
fun1
local res=$?
echo $res
}
Also, note that you can use the return status for Boolean logic - for example, fun1 || fun2 will run fun2 only if fun1 returns a non-zero status. The default return status is the exit status of the last statement executed within the function.
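For instance, a small sketch of that short-circuit behavior:
is_even() {
    (( $1 % 2 == 0 ))    # the function's status is this test's status
}
is_even 3 || echo "3 is odd"     # like fun1 || fun2: runs only when is_even fails
is_even 4 && echo "4 is even"    # runs only when is_even succeeds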
Functions in Bash are not functions like in other languages; they're actually commands. So functions are used as if they were binaries or scripts fetched from your path. From the perspective of your program logic, there shouldn't really be any difference.
Shell commands are connected by pipes (aka streams), not by fundamental or user-defined data types as in "real" programming languages. There is no such thing as a return value for a command, perhaps mostly because there's no real way to declare one. It could appear on the man page, or in the --help output of the command, but both are only human-readable and hence are written to the wind.
When a command wants to get input it reads it from its input stream, or the argument list. In both cases text strings have to be parsed.
When a command wants to return something, it has to echo it to its output stream. Another often-practiced way is to store the return value in dedicated global variables. Writing to the output stream is clearer and more flexible, because it can also carry binary data. For example, you can return a BLOB easily:
encrypt() {
gpg -c -o- $1 # Encrypt data in filename to standard output (asks for a passphrase)
}
encrypt public.dat > private.dat # Write the function result to a file
As others have written in this thread, the caller can also use command substitution $() to capture the output.
In parallel, the function would "return" the exit code of gpg (GnuPG). Think of the exit code as a bonus that other languages don't have, or, depending on your temperament, as a "Schmutzeffekt" of shell functions. This status is, by convention, 0 on success or an integer in the range 1-255 for something else. To make this clear: return (like exit) can only take a value from 0-255, and values other than 0 are not necessarily errors, as is often asserted.
When you don't provide an explicit value with return, the status is taken from the last command in a Bash statement/function/command and so forth. So there is always a status, and return is just an easy way to provide it.
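A small sketch of that fall-through behavior:
contains_foo() {
    grep -q foo "$1"    # no explicit return: the function's status is grep's status
}
contains_foo /etc/hosts
echo $?    # 0 if "foo" occurs in /etc/hosts, non-zero otherwise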
$(...) captures the text sent to standard output by the command contained within. return does not output to standard output. $? contains the result code of the last command.
fun1 (){
return 34
}
fun2 (){
fun1
local res=$?
echo $res
}
The problem with the other answers is that they either use a global variable (which can be overwritten when several functions are in a call chain), or echo (which means your function cannot also emit diagnostic output; you will forget your function does this, and the "result", i.e. the return value, will carry more information than your caller expects, leading to weird bugs), or eval (which is far too heavy and hacky).
The proper way to do this is to put the top level stuff in a function and use a local with Bash's dynamic scoping rule. Example:
func1()
{
ret_val=hi
}
func2()
{
ret_val=bye
}
func3()
{
local ret_val=nothing
echo $ret_val
func1
echo $ret_val
func2
echo $ret_val
}
func3
This outputs
nothing
hi
bye
Dynamic scoping means that ret_val points to a different object, depending on the caller! This is different from lexical scoping, which is what most programming languages use. This is actually a documented feature, just easy to miss, and not very well explained. Here is the documentation for it (emphasis is mine):
Variables local to the function may be declared with the local
builtin. These variables are visible only to the function and the
commands it invokes.
For someone with a C, C++, Python, Java, C#, or JavaScript background, this is probably the biggest hurdle: functions in bash are not functions, they are commands, and they behave as such: they can output to stdout/stderr, they can pipe in/out, and they can return an exit code. Basically, there isn't any difference between defining a command in a script and creating an executable that can be called from the command line.
So instead of writing your script like this:
Top-level code
Bunch of functions
More top-level code
write it like this:
# Define your main, containing all top-level code
main()
Bunch of functions
# Call main
main
where main() declares ret_val as local and all other functions return values via ret_val.
See also the Unix & Linux question Scope of Local Variables in Shell Functions.
Another, perhaps even better solution depending on situation, is the one posted by ya.teck which uses local -n.
Another way to achieve this is name references (requires Bash 4.3+).
function example {
local -n VAR=$1
VAR=foo
}
example RESULT
echo $RESULT
The return statement sets the exit code of the function, much the same as exit will do for the entire script.
The exit code for the last command is always available in the $? variable.
function fun1(){
return 34
}
function fun2(){
local res=$(fun1)
echo $? # <-- Always echoes 0, since the 'local' command succeeds.
res=$(fun1)
echo $? #<-- Outputs 34
}
As an add-on to others' excellent posts, here's an article summarizing these techniques:
set a global variable
set a global variable, whose name you passed to the function
set the return code (and pick it up with $?)
'echo' some data (and pick it up with MYVAR=$(myfunction) )
Returning Values from Bash Functions
I like to do the following if running in a script where the function is defined:
POINTER= # Used for function return values
my_function() {
# Do stuff
POINTER="my_function_return"
}
my_other_function() {
# Do stuff
POINTER="my_other_function_return"
}
my_function
RESULT="$POINTER"
my_other_function
RESULT="$POINTER"
I like this, because I can then include echo statements in my functions if I want:
my_function() {
echo "-> my_function()"
# Do stuff
POINTER="my_function_return"
echo "<- my_function. $POINTER"
}
The simplest way I can think of is to use echo in the method body, like so:
get_greeting() {
echo "Hello there, $1!"
}
STRING_VAR=$(get_greeting "General Kenobi")
echo $STRING_VAR
# Outputs: Hello there, General Kenobi!
Instead of calling var=$(func) with the whole function output, you can create a function that modifies its input arguments with eval:
var1="is there"
var2="anybody"
function modify_args() {
echo "Modifying first argument"
eval $1="out"
echo "Modifying second argument"
eval $2="there?"
}
modify_args var1 var2
# Prints "Modifying first argument" and "Modifying second argument"
# Sets var1 = out
# Sets var2 = there?
This might be useful in case you need to:
Print to stdout/stderr within the function scope (without returning it)
Return (set) multiple variables.
Git Bash on Windows: using arrays for multiple return values
Bash code:
#!/bin/bash
## A 6-element array used for returning
## values from functions:
declare -a RET_ARR
RET_ARR[0]="A"
RET_ARR[1]="B"
RET_ARR[2]="C"
RET_ARR[3]="D"
RET_ARR[4]="E"
RET_ARR[5]="F"
function FN_MULTIPLE_RETURN_VALUES(){
## Give the positional arguments/inputs
## $1 and $2 some sensible names:
local out_dex_1="$1" ## Output index
local out_dex_2="$2" ## Output index
## Echo for debugging:
echo "Running: FN_MULTIPLE_RETURN_VALUES"
## Here: Calculate output values:
local op_var_1="Hello"
local op_var_2="World"
## Set the return values:
RET_ARR[ $out_dex_1 ]=$op_var_1
RET_ARR[ $out_dex_2 ]=$op_var_2
}
echo "FN_MULTIPLE_RETURN_VALUES EXAMPLES:"
echo "-------------------------------------------"
fn="FN_MULTIPLE_RETURN_VALUES"
out_dex_a=0
out_dex_b=1
eval $fn $out_dex_a $out_dex_b ## <-- Call function
a=${RET_ARR[0]} && echo "RET_ARR[0]: $a "
b=${RET_ARR[1]} && echo "RET_ARR[1]: $b "
echo
## ---------------------------------------------- ##
c="2"
d="3"
FN_MULTIPLE_RETURN_VALUES $c $d ## <--Call function
c_res=${RET_ARR[2]} && echo "RET_ARR[2]: $c_res "
d_res=${RET_ARR[3]} && echo "RET_ARR[3]: $d_res "
echo
## ---------------------------------------------- ##
FN_MULTIPLE_RETURN_VALUES 4 5 ## <--- Call function
e=${RET_ARR[4]} && echo "RET_ARR[4]: $e "
f=${RET_ARR[5]} && echo "RET_ARR[5]: $f "
echo
##----------------------------------------------##
read -p "Press Enter To Exit:"
Expected output:
FN_MULTIPLE_RETURN_VALUES EXAMPLES:
-------------------------------------------
Running: FN_MULTIPLE_RETURN_VALUES
RET_ARR[0]: Hello
RET_ARR[1]: World
Running: FN_MULTIPLE_RETURN_VALUES
RET_ARR[2]: Hello
RET_ARR[3]: World
Running: FN_MULTIPLE_RETURN_VALUES
RET_ARR[4]: Hello
RET_ARR[5]: World
Press Enter To Exit:
