Return value of a bash script - bash

I am trying to learn Linux bash scripting. I have a script, and I want to get the return value of the script and store it in a variable.
Any help is welcome.
Thank you in advance.
#!/bin/bash
HOST_NAME=$1
{ echo "105"; sleep 5; } | telnet "$HOST_NAME" 9761

To avoid confusion don't think/talk of it as a return value, think of it as what it is - an exit status.
In most programming languages you can capture the return value of a function by capturing whatever that function returns in a variable, e.g. with a C-like language:
#include <stdio.h>

int foo(void) {
    printf("35\n");
    return 7;
}

int main(void) {
    int var;
    var = foo();
    return 0;
}
the variable var in main() after calling foo() will hold the value 7, and 35 will have been printed to stdout. In shell, however, with similar-looking code:
foo() {
    printf "35\n"
    return 7
}

main() {
    local var
    var=$(foo)
}
var will have the value 35, and the builtin variable $? (which always holds the exit status of the last command run) will have the value 7. If you wanted to duplicate the C behavior, where 35 goes to stdout and var contains 7, then that'd be:
foo() {
    printf "35\n"
    return 7
}

main() {
    local var
    foo
    var=$?
}
The fact that shell functions use the keyword return to report their exit status is confusing at first if you're used to other Algol-based languages like C. But if they used exit it'd terminate the whole process, so they had to use something, and it quickly becomes obvious what it really means.
So when talking about shell scripts and functions, use the words "output" and "exit status", not "return", which some people in some contexts will assume means either of those two things; that'll avoid all confusion.
Btw, to avoid making things even more complicated, I said above that $? is a variable, but it's really the value of the "special parameter" ?. If you want to understand the difference right now, see https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameters for a discussion of shell parameters, which includes "special parameters" like ? and #, "positional parameters" like 1 and 2, and "variables" like HOME and var as used in my script above.
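A quick way to see all three kinds side by side (a small sketch, not part of the original answer):
#!/bin/bash
false                             # a command that fails
echo "special parameter ?: $?"    # exit status of false: 1
echo "special parameter #: $#"    # how many arguments the script got
echo "positional param 1:  $1"    # the first argument, if any
var=hello                         # an ordinary variable
echo "variable var:        $var"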

The $? shell variable stores the exit status; however, with the Linux telnet client this may not be as useful as you think. The client returns 1 if the remote host closes the connection (or any remote or network error occurs) and 0 if the local client side closes the connection successfully. The problem is that many services are written so that they send data and then close the TCP connection themselves, without waiting for the client:
$ telnet time-b.timefreq.bldrdoc.gov 13
Trying 132.163.96.2...
Connected to time-b-b.nist.gov.
Escape character is '^]'.
58600 19-04-27 13:56:16 50 0 0 736.0 UTC(NIST) *
Connection closed by foreign host.
$ echo $?
1
Even if the client sends a command to the server to quit over the TCP stream, this still results in the remote side closing the connection, with the same result:
$ telnet mail.tardis.ed.ac.uk 25
Trying 193.62.81.50...
Connected to isolus.tardis.ed.ac.uk.
Escape character is '^]'.
220 isolus.tardis.ed.ac.uk ESMTP Postfix (Debian/GNU)
QUIT
221 2.0.0 Bye
Connection closed by foreign host.
$ echo $?
1
So, you're going to get a 1 no matter what, really. If you want the exit status of a remote script, this is easier with ssh, like this:
$ ssh ssh.tardis.ed.ac.uk "exit 5"
THE TARDIS PROJECT | pubpubpubpubpubpubpubpubpub | Authorised access only
$ echo $?
5
As far as I know, the only time telnet would return zero (i.e. success) is if you escape and quit the client, like this:
$ telnet www.google.com 80
Trying 216.58.210.36...
Connected to www.google.com.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
$ echo $?
0
Hope this helps.

It depends on what you mean by return value.
Processes (on UNIX-like systems) can return to a shell a single unsigned byte as an exit status, so that gives a value in the range 0-255. By convention zero means success and any other value indicates a failure.
(In lower-level languages, like C, you can get more than just this exit status, but that's not visible in bash).
The exit status of the last command run is stored in the variable ?, so you can get its value from $?. However, since many programs only return either 0 (it worked) or 1 (it didn't work), that's not much use.
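Still, if you do want it stored (as the question asks), copy $? into your own variable immediately, before any other command overwrites it. A sketch using the script from the question:
#!/bin/bash
HOST_NAME=$1
{ echo "105"; sleep 5; } | telnet "$HOST_NAME" 9761
status=$?    # exit status of the pipeline (i.e. of telnet, its last command)
echo "telnet exited with status $status"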
Bash conditionals, like if and while test for success (exit code of 0) or failure (exit code of non-zero):
if some-command
then
    echo "It worked"
else
    echo "It didn't work"
fi
However ....
If you mean you want to get the output from the script, that's a different matter. You can capture it using:
var=$(some-command)
But wait: that only captures normal output, routed to a stream called stdout (file descriptor 1); it does not capture error messages, which most programs write to a stream called stderr (file descriptor 2). To capture errors as well you need to redirect file descriptor 2 to file descriptor 1:
var=$(some-command 2>&1)
The output text is now in variable var.
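If you need both the output and the exit status, capture them in that order (a small sketch; some-command stands in for your real command):
var=$(some-command 2>&1)    # stdout and stderr both captured
status=$?                   # exit status of some-command, saved before anything else runs
echo "output: $var"
echo "status: $status"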

The ? variable always stores the exit code of the previous command.
You can retrieve the value with $?.
Some context: http://tldp.org/LDP/abs/html/exit-status.html

Related

Bash script sometimes works, sometimes doesn't... pipes issue

I have a weird situation where Bash doesn't seem to be assigning one of my variables correctly, or it might be to do with pipes. I'm honestly not sure. Here's the code:
test=0
if [ "$#" -ne 4 ]; then
    echo "Error: You have passed an incorrect number of parameters" > client.pipe
    test=1
    if [ $test -eq 1 ]; then    # <------- THIS SOMETIMES DOESN'T EXECUTE
        echo "end_result" > client.pipe
        exit 1
    fi
fi
What's going on here: this is the server script for a 'select' command.
The select command takes in 4 parameters: an id (to tell it which pipe to output messages to), a database name, a table name, and then a string of column ids.
At the other end of the client pipe, the client script is listening for messages, and if it receives 'end_result' it stops listening. As you can see, no matter what happens 'end_result' should get sent back to the pipe and printed by the client script, but sometimes it doesn't happen, and I get stuck in an infinite while loop. Here's the client script so you can see where the 'listening' is happening.
while true; do
    read message < client.pipe
    if [ "$message" == "end_result" ]; then
        echo $message
        break
    else
        echo $message
    fi
done
If I pass in the incorrect number of parameters to the first script, it prints out 'Error: you have passed an incorrect num of parameters' to the pipe, but then sometimes it doesn't assign 1 to test, and it doesn't then send 'end_result' to the pipe and exit. I can't really see what the issue is, and as I've said it works probably 7 times out of 10... I can get around the issue by having the parent script send 'end_result' to the client, but it's a bit of a hack.
I'd really appreciate it if anyone could see what the issue is, and I'm happy to provide more info about the code if required.
Many thanks, R
EDIT:
I'm almost certain the problem is to do with reading from the client pipe and that while loop, as though something were getting stuck in the pipe...
This was the solution, provided by @WilliamPursell. Redirecting the whole loop means client.pipe is opened once and stays open across every read; with read message < client.pipe the FIFO is re-opened and closed on each iteration, so a line written while the reader has it closed (such as 'end_result' arriving right behind the error message) can be lost:
while true; do
    read message
    if [ "$message" == "end_result" ]; then
        echo $message
        break
    else
        echo $message
    fi
done < client.pipe

Evolution e-mail client, pipe to program, code always returns 0

I am using the "pipe to program" option in the Evolution email client, which runs the following Ruby script:
#!/usr/bin/ruby
# example code below
junk_mail = 2
junk_mail
Now this program always returns 0, irrespective of the value of the junk_mail variable.
I guess it has something to do with Evolution forking a child process to run this code, and 0 (clean exit) always being received back?
Help needed.
EDIT
I figured out the actual problem is with the data being read from the pipe. The following code works fine when tested on the command line, but it is unable to read the piped data when called from the Evolution client:
#!/usr/bin/ruby
email_txt = ARGF.read
# note: Ruby's File.open does not expand "~", so expand it explicitly
File.open(File.expand_path("~/debug.txt"), 'a') { |file| file.write(email_txt + "\n") }
$ cat email.txt | ./myprog.rb
This gives debug.txt as expected, but when called from Evolution's pipe-to-program it gives empty data.
Am I using the correct way to read a piped stream when called from an external program? (I am on Fedora 20.)
Use exit:
#!/usr/bin/ruby
junk_mail = 2
exit junk_mail
You can test this by running it from the command line in Linux, then echoing the exit value via
echo $?
EDIT
To read STDIN into a single string:
email_txt = STDIN.readlines.join

Shell scripting return values not correct, why?

In a shell script I wrote to test how functions return values, I came across an odd, unexpected behavior. The code below assumes that when entering the function fntmpfile the first echo statement would print to the console and then the second echo statement would actually return the string to the calling main. Well, that's what I assumed, but I was wrong!
#!/bin/sh
fntmpfile() {
    TMPFILE=/tmp/$1.$$
    echo "This is my temp file dude!"
    echo "$TMPFILE"
}
mainname=main
retval=$(fntmpfile "$mainname")
echo "main retval=$retval"
What actually happens is the reverse. The first echo goes to the calling function and the second echo goes to STDOUT. Why is this, and is there a better way?
main retval=This is my temp file dude!
/tmp/main.19121
The whole reason for this test is because I am writing a shell script to do some database backups and decided to use small functions to do specific things, ya know make it clean instead of spaghetti code. One of the functions I was using was this:
log_to_console() {
    # arg1 = calling function name
    # arg2 = message to log
    printf '%s - %s\n' "$1" "$2"
}
The whole problem with this is that the function that was returning a string value gets the log_to_console output mixed in, depending on the order of things. I guess this is one of those gotcha things about shell scripting that I wasn't aware of.
No, what's happening is that you are running your function, and it outputs two lines to stdout:
This is my temp file dude!
/tmp/main.4059
When you run it with $(), bash will capture the output and store it in the variable. The string stored in the variable keeps the internal linebreak (only trailing newlines are removed). So what is really in your "retval" variable is the following C-style string:
"This is my temp file dude!\n/tmp/main.4059"
This is not really returning a string (you can't do that in a shell script); it's just capturing whatever output your function produces. Which is why it doesn't work. Call your function normally if you want it to log to the console.
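One common way to keep the two apart (a sketch, not part of the original answer): send log messages to stderr, so command substitution only captures the real result:
#!/bin/sh
log_to_console() {
    # log to stderr so $(...) does not capture it
    printf '%s - %s\n' "$1" "$2" >&2
}
fntmpfile() {
    log_to_console fntmpfile "creating temp file"
    TMPFILE=/tmp/$1.$$
    echo "$TMPFILE"    # the only line the caller captures
}
retval=$(fntmpfile main)
echo "main retval=$retval"    # prints: main retval=/tmp/main.<pid>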

UNIX: is it possible to bypass exit codes?

I'm writing some shell code and, because of some program logic, I need to do some returns with negative numbers. This is:
if condition ; then
return -1
else
return -2
fi
Nevertheless, I get errors when using negative numbers, maybe because "Unix exit statuses are restricted to values 0-255, the range of an unsigned 8-bit integer" (from http://en.wikipedia.org/wiki/Exit_status#Unix).
Is there a way to bypass this? (I know that I could use other return numbers.)
Thank you.
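The short answer is no, and the truncation is easy to see for yourself (statuses are taken modulo 256; a quick demonstration):
$ bash -c 'exit 300'; echo $?
44
$ bash -c 'exit -1'; echo $?
255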
Sorry, but the Unix standard for shell scripting is exit 0 for success and exit non-zero for non-success.
The best you can do is print the values and capture them as output, i.e.
myfunc () {
    printf -- '%s\n' "$1"
    if (( ${1:-0} == 0 )) ; then
        return 0
    else
        return 1
    fi
}
var=$(myfunc -2)
echo "var=${var}"
#output
var=-2
Not what your overlords want to hear, but refer them to the Posix Standards.
Also FYI, $() is called command substitution. You will see people also implement command substitution with paired backticks, `cmd`, but use $( cmd ) unless you are using the original Bourne shell on Sun/AIX or other heritage vendor platforms, OR you are required to create code that is completely backwards (with the emphasis on backwards!) compatible.
$() is nice because you can nest them as much as you need, i.e.
$( cmd1 $( cmd2 $( cmd..n ) ) )
According to the New KornShell Programming Language book (ISBN-10: 0131827006, 1995!), backticks are deprecated.
Note that either type of command substitution creates a subshell to run the command, and then substitutes the results into your command line.
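One consequence of that subshell worth knowing (a small demonstration): variable assignments made inside $( ) do not survive into the parent shell:
count=0
dummy=$(count=99; echo "inside: $count")
echo "$dummy"             # inside: 99
echo "outside: $count"    # outside: 0 - the subshell's assignment is gone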
I hope this helps ;-)

How to deal with NFS latency in shell scripts

I'm writing shell scripts where quite regularly some stuff is written to a file, after which an application is executed that reads that file. I find that across our company the network latency varies vastly, so a simple sleep 2, for example, will not be robust enough.
I tried to write a (configurable) timeout loop like this:
waitLoop()
{
    local timeout=$1
    local test="$2"

    if ! $test
    then
        local counter=0
        while ! $test && [ $counter -lt $timeout ]
        do
            sleep 1
            ((counter++))
        done

        if ! $test
        then
            exit 1
        fi
    fi
}
This works for test="[ -e $somefilename ]". However, testing existence is not enough; I sometimes need to test whether a certain string was written to the file. I tried
test="grep -sq \"^sometext$\" $somefilename", but this did not work. Can someone tell me why?
Are there other, less verbose options to perform such a test?
Don't capture the command with $( ), which would run the grep once at assignment time and store its output (empty, because of -q) rather than the command itself. Your assignment is fine as it is; the reason your grep isn't working is that the embedded quotes are not re-parsed when $test is expanded, so they reach grep as literal characters. You'll need to use eval to make the shell re-parse the string:
if ! eval $test
I'd say the way to check for a string in a text file is grep.
What's your exact problem with it?
Also, you might adjust your NFS mount parameters to get at the root problem. A sync might also help. See the NFS docs.
If you're wanting to use waitLoop in an "if", you might want to change the "exit" to a "return", so the rest of the script can handle the error situation (there's not even a message to the user about what failed before the script dies otherwise).
The other issue is that using "$test" to hold a command means you don't get shell expansion when actually executing, just when evaluating. So if you say test="grep \"foo\" \"bar baz\"", then rather than looking for the three-letter string foo in the file with the seven-character name bar baz, it'll look for the five-character string "foo" in the nine-character file "bar baz".
So you can either decide you don't need the shell magic, and set test='grep -sq ^sometext$ somefilename', or you can get the shell to handle the quoting explicitly with something like:
if /bin/sh -c "$test"
then
...
Try using the file modification time to detect when it is written without opening it. Something like
old_mtime=$(stat --format='%Y' file)    # %Y is the modification time (%Z would be the inode change time)
# Write to file.
new_mtime=$old_mtime
while [[ "$old_mtime" -eq "$new_mtime" ]]; do
    sleep 2
    new_mtime=$(stat --format='%Y' file)
done
This won't work, however, if multiple processes try to access the file at the same time.
I just had the exact same problem. I used a similar approach to the timeout wait that you include in your OP; however, I also included a file-size check. I reset my timeout timer if the file had increased in size since it was last checked. The files I'm writing can be a few gig, so they take a while to write across NFS.
This may be overkill for your particular case, but I also had my writing process calculate a hash of the file after it was done writing. I used md5, but something like crc32 would work, too. This hash was broadcast from the writer to the (multiple) readers, and the reader waits until a) the file size stops increasing and b) the (freshly computed) hash of the file matches the hash sent by the writer.
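A minimal sketch of the size-stability part (the function name and arguments are made up):
# wait until $1's size has been unchanged for $2 consecutive one-second checks
wait_for_stable_size() {
    local file=$1 timeout=$2
    local counter=0 size=-1 newsize
    while [ $counter -lt $timeout ]; do
        sleep 1
        newsize=$(stat --format='%s' "$file" 2>/dev/null || echo -1)
        if [ "$newsize" = "$size" ]; then
            ((counter++))    # unchanged: count toward the timeout
        else
            counter=0        # still growing: reset the timer
            size=$newsize
        fi
    done
}
wait_for_stable_size bigdump.sql 10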
We have a similar issue, but for different reasons. We are reading a file which is sent to an SFTP server. The machine running the script is not the SFTP server.
What I have done is set it up in cron (although a loop with a sleep would work too) to do a cksum of the file. When the old cksum matches the current cksum (the file has not changed for the determined amount of time) we know that the writes are complete, and transfer the file.
Just to be extra safe, we never overwrite a local file before making a backup, and only transfer at all when the remote file has two cksums in a row that match, and that cksum does not match the local file.
If you need code examples, I am sure I can dig them up.
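Something along those lines (a rough sketch; the filename and interval are made up):
prev=$(cksum incoming.dat)
while sleep 60; do
    curr=$(cksum incoming.dat)
    [ "$curr" = "$prev" ] && break    # two matching cksums in a row: writes are done
    prev=$curr
done
# incoming.dat is now stable enough to transfer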
The shell was splitting your predicate into words. Grab it all with $* as in the code below:
#! /bin/bash
waitFor()
{
    local tries=$1
    shift
    local predicate="$*"

    while [ $tries -ge 1 ]; do
        (( tries-- ))
        if $predicate >/dev/null 2>&1; then
            return
        else
            [ $tries -gt 0 ] && sleep 1
        fi
    done

    exit 1
}
pred='[ -e /etc/passwd ]'
waitFor 5 $pred
echo "$pred satisfied"
rm -f /tmp/baz
(sleep 2; echo blahblah >>/tmp/baz) &
(sleep 4; echo hasfoo >>/tmp/baz) &
pred='grep ^hasfoo /tmp/baz'
waitFor 5 $pred
echo "$pred satisfied"
Output:
$ ./waitngo
[ -e /etc/passwd ] satisfied
grep ^hasfoo /tmp/baz satisfied
Too bad the typescript isn't as interesting as watching it in real time.
Ok...this is a bit whacky...
If you have control over the file, you might be able to use a 'named pipe' here, so (depending on how the writing program works) you can monitor the file in a synchronized fashion.
At its simplest:
Create the named pipe:
mkfifo file.txt
Set up the sync'd receiver:
while :
do
    process.sh < file.txt
done
Create a test sender:
echo "Hello There" > file.txt
The 'process.sh' is where your logic goes: this will block until the sender has written its output. In theory the writer program won't need modifying....
WARNING: if the receiver is not running for some reason, you may end up blocking the sender!
Not sure it fits your requirement here, but might be worth looking into.
Or, to avoid synchronizing, try 'lsof'?
http://en.wikipedia.org/wiki/Lsof
Assuming that you only want to read from the file when nothing else is writing to it (i.e. the writing process has finished), you could check whether anything else still has a file handle to it.
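A sketch of that check (the path is hypothetical; lsof exits non-zero when nothing has the file open):
# wait until no process holds the file open, then read it
while lsof /path/to/file >/dev/null 2>&1; do
    sleep 1    # someone still has it open
done
process.sh < /path/to/file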
