Invoke a variable declared in a child script in the parent shell script - shell

In my case, script A is calling script B.
Now I am declaring a variable in my child script B and would like to do an if/else condition check on it in the parent script.
The variable name in the child script:
logFileName=stop_log$current_date'.log'
This is how I am trying to invoke it:
logFileName = os.environ["logFileName"]
export logfilename
echo $logFileName
and then doing a condition check like:
logerr=`grep 'ConnectException' $logFileName`
if [ -z "$logerr" ]; then
echo " No error "
else
exit 1
fi
I am not able to export that variable to the parent script. Could someone please help?

A child process, for all practical purposes, cannot set a variable in the parent process.
Therefore, you have a few options available to get the log file name from the child to the parent:
Use the . command (aka source in C shell and Bash) to read script B and execute it as part of the current shell (see the sketch after this list).
Have script B echo the name of the logfile. Script A can capture it using:
logfilename=$(script-b …)
The major downside of this is that it is hard to do if script B is supposed to generate other output too.
Have script B save the name of the logfile in another file. Usually, script A should tell script B where to save it. Occasionally, you can agree on a location, but remember that there could be multiple copies of the scripts running at the same time, so a fixed name (/tmp/tmp.file for example) is dangerous on multiple counts (security and concurrency are both issues).
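A minimal sketch of option 1, assuming script B does nothing but set the variable (the file names here are illustrative):
# script-b.sh
logFileName=stop_log$(date +%Y%m%d).log
# script-a.sh
. ./script-b.sh          # runs in the current shell, so no export is needed
echo "$logFileName"      # the variable is visible here
The caveat is that everything else script B does (cd, exit, other variable assignments) also affects script A.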
Illustrating option 3
Script-A
logfilename=$(mktemp ${TMPDIR:-/tmp}/Script-A.log.XXXXXX)
trap "rm -f $logfilename; exit 1" 0 1 2 3 13 15
echo "Message from Script-A" > $logfilename
Script-B $logfilename
echo "End message from Script-A" >> $logfilename
echo Log file name: $logfilename
cat $logfilename
rm -f $logfilename
trap 0
Script-B
logfilename=${1:?}
echo "Script-B busy at work"
echo "Message for the log file" >> $logfilename # NB: >> each time
echo "Script-B wrapping up"
echo "Script-B complete" >> $logfilename
In the code of Script-A, the command mktemp creates a temporary file with a randomly generated name based on the template given. Normally, the template will be /tmp/Script-A.log.XXXXXX, where the 6 X's are replaced by random letters or digits. The trap command means that if the script is signalled (SIGHUP 1, SIGINT 2, SIGQUIT 3, SIGPIPE 13 or SIGTERM 15) or exits (0), the temporary file will be removed. If the log file is meant to outlive the run of Script-A, you would omit the trap but still echo the name. The script writes a message to the log file; it runs Script-B, passing the log file name; it writes another message. It then wraps up: reports the file name, shows its contents, removes the file, and cancels the trap so that it can exit with a status of 0, success.
The Script-B code checks that it was given an argument (${1:?}) and saves it as the variable logfilename. You could have had Script-A export the variable and Script-B could have tested that the environment variable was set instead of requiring an argument, but arguments are generally better. Then Script-B echoes a message to its output and another to the log file (note that you need to append to the log file). It does its work (nothing here); writes another message to output and another message to the logfile; and exits.
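For reference, a minimal sketch of the environment-variable variant mentioned above (not what the code shown uses):
# In Script-A, instead of passing an argument:
export logfilename
Script-B
# In Script-B, instead of logfilename=${1:?}:
logfilename=${logfilename:?'must be set in the environment'}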
There are lots of other stunts you can pull in Script-B to get the messages to the log file, but this should get you going.
If you don't have the mktemp command, either get its source (GNU or BSD), or use:
logfilename=${TMPDIR:-/tmp}/Script-A.log.$$
This uses the process ID to give you moderate assurance that the name won't be used by another process, but it is more easily determined and so is less secure than the random name generated by mktemp.

Related

what does "if { set -C; 2>/dev/null >~/test.lock; }" in a bash mean?

I have encountered this in a bash script:
if { set -C; 2>/dev/null >~/test.lock; }; then
echo "Acquired lock"
else
echo "Lock file exists… exiting"
exit 1
fi
It enters the else branch. I know set -C will not overwrite files, and 2>/dev/null means something like: redirect errors to /dev/null. But then I have >~/test.lock, which will redirect something into the lock file (the errors, probably). I have the test.lock file in my home directory, created and empty. Being an if, it must return false in my case.
{ ... ; ... ; } is a compound command. That is, bash executes every command in it, and the exit code is that of the last one.
It is a bit like ( ... ; ... ), except that with ( you execute a subshell (a bit like sh -c "... ; ..."), which is less efficient and, moreover, prevents you from affecting variables of your current shell, for example.
So, in short, { set -C; 2>/dev/null >~/test.lock; } means "do set -C, then do 2>/dev/null >~/test.lock, and the return (exit code) is that of the last command".
So if { set -C; 2>/dev/null >~/test.lock; } means "if 2>/dev/null >~/test.lock succeeds in that compound command, that is, after set -C".
Now, set -C means that you can't overwrite existing files.
And 2>/dev/null > ~/test.lock is an attempt to overwrite test.lock if it exists, or to create it if it doesn't.
So, what you have here is:
If the lock file already exists, fail and say "Lock file exists… exiting".
If the lock file does not exist, create it, and say "Acquired lock".
And it does both in one operation.
So it is different from
# Illustration of how NOT to do it. Do not use this code :-)
if [[ -f "test.lock" ]]
then
echo "lock file exists, exiting"
else
2>/dev/null > ~/test.lock
echo "lock file acquired"
fi
because that clearer, but wrong, version does not guarantee that nothing will have created the lock file between the evaluation of the if condition and the execution of 2>/dev/null > ~/test.lock.
The version you've shown has the advantage that the test and the creation of the lock are a single operation.
set -C disallows writing to existing files
2>/dev/null suppresses warnings
>~/test.lock attempts to write to a file called test.lock. If the file already exists this returns an error because of set -C. Otherwise it will create a new test.lock file, making the next instance of this script fail on this step.
The purpose of lock files is to ensure that only one instance of a script runs at the same time. When the program is finished it could delete ~/test.lock to let another instance run.
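Putting the pieces together, a minimal sketch of the whole pattern (the lock path and the cleanup policy are illustrative, not from the original script):
#!/bin/bash
lockfile=~/test.lock
if { set -C; 2>/dev/null >"$lockfile"; }; then
trap 'rm -f "$lockfile"' EXIT   # remove the lock on any exit so the next instance can run
echo "Acquired lock"
# ... do the real work here ...
else
echo "Lock file exists… exiting"
exit 1
fi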

Reuse variable in EOF bash script

I have a script doing something like this:
var1=""
ssh xxx@yyy <<'EOF'
[...]
var2=`result of bash command`
echo $var2 #print what I need
var1=$var2 #is there a way to pass var2 into global var1 variable ?
EOF
echo $var1 # the need is to display the value of var2 created in EOF block
Is there a way to do this?
In general, an executed command has three paths of delivering information:
By stating an exit code.
By making output.
By creating files.
It is not possible to change an (environment) variable of the parent process. This is true for all child processes, and your ssh process is no exception.
I would not rely on ssh to pass the exit code of the remote process, though (because even if it works in current implementations, this is brittle; ssh could also want to state its own success or failure with its exit code, not the remote process's).
Using files also seems inappropriate because the remote process will probably have a different file system (but if the remote and the local machine share an NFS for instance, this could be an option).
So I suggest using the output of the remote process for delivering information. You could achieve this like this:
var1=$(ssh xxx@yyy <<'EOF'
[...]
var2=$(result of bash command)
echo "$var2" 1>&2 # to stderr, so it's not part of the captured output
# and instead shown on the terminal
echo "$var2" # to stdout, so it's part of the captured output
EOF
)
echo "$var1"

Background process appears to hang

Editor's note: The OP is ultimately looking to package the code from this answer
as a script. Said code creates a stay-open FIFO from which a background command reads data to process as it arrives.
It works if I type it in the terminal, but it won't work if I enter those commands in a script file and run it.
#!/bin/bash
cat >a&
pid=$!
it seems that the program is stuck at cat>a&
$pid has no value after running the script, but the cat process seems to exist.
cdarke's answer contains the crucial pointer: your script mustn't run in a child process, so you have to source it.
Based on the question you linked to, it sounds like you're trying to do the following:
Open a FIFO (named pipe).
Keep that FIFO open indefinitely.
Make a background command read from that FIFO whenever new data is sent to it.
See bottom for a working solution.
As for an explanation of your symptoms:
Running your script NOT sourced (NOT with .) means that the script runs in a child process, which has the following implications:
Variables defined in the script are only visible inside that script, and the variables cease to exist altogether when the script finishes running.
That's why you didn't see the script's $pid variable after running the script.
When the script finishes running, its background tasks (cat >a&) are killed (as cdarke explains, the SIGHUP signal is sent to them; any process that doesn't explicitly trap that signal is terminated).
This contradicts your claim that the cat process continues to exist, but my guess is that you mistook an interactively started cat process for one started by a script.
By contrast, any FIFO created by your script (with mkfifo) does persist after the script exits (a FIFO behaves like a file - it persists until you explicitly delete it).
However, when you write to that FIFO without another process reading from it, the writing command will block and thus appear to hang (the writing process blocks until another process reads the data from the FIFO).
That's probably what happened in your case: because the script's background processes were killed, no one was reading from the FIFO, causing an attempt to write to it to block. You incorrectly surmised that it was the cat >a& command that was getting "stuck".
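You can reproduce that blocking behavior in isolation (a two-terminal experiment; the FIFO name is illustrative):
# terminal 1
mkfifo myfifo
echo hello > myfifo   # blocks here: no process is reading yet
# terminal 2
cat myfifo            # prints 'hello' and unblocks terminal 1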
The following script, when sourced, adds functions to the current shell for setting up and cleaning up a stay-open FIFO with a background command that processes data as it arrives. Save it as file bgfifo_funcs:
#!/usr/bin/env bash
[[ $0 != "$BASH_SOURCE" ]] || { echo "ERROR: This script must be SOURCED." >&2; exit 2; }
# Set up a background FIFO with a command listening for input.
# E.g.:
# bgfifo_setup bgfifo "sed 's/^/# /'"
# echo 'hi' > bgfifo # -> '# hi'
# bgfifo_cleanup
bgfifo_setup() {
(( $# == 2 )) || { echo "ERROR: usage: bgfifo_setup <fifo-file> <command>" >&2; return 2; }
local fifoFile=$1 cmd=$2
# Create the FIFO file.
mkfifo "$fifoFile" || return
# Use a dummy background command that keeps the FIFO *open*.
# Without this, it would be closed after the first time you write to it.
# NOTE: This call inevitably outputs a job control message that looks
# something like this:
# [1]+ Stopped cat > ...
{ cat > "$fifoFile" & } 2>/dev/null
# Note: The keep-the-FIFO-open `cat` PID is the only one we need to save for
# later cleanup.
# The background processing command launched below will terminate
# automatically when the FIFO is closed, i.e., when the `cat` process
# is killed.
__bgfifo_pid=$!
# Now launch the actual background command that should read from the FIFO
# whenever data is sent.
{ eval "$cmd" < "$fifoFile" & } 2>/dev/null || return
# Save the *full* path of the FIFO file in a global variable for reliable
# cleanup later.
__bgfifo_file=$fifoFile
[[ $__bgfifo_file == /* ]] || __bgfifo_file="$PWD/$__bgfifo_file"
echo "FIFO '$fifoFile' set up, awaiting input for: $cmd"
echo "(Ignore the '[1]+ Stopped ...' message below.)"
}
# Cleanup function that you must call when done, to remove
# the FIFO file and kill the background commands.
bgfifo_cleanup() {
[[ -n $__bgfifo_file ]] || { echo "(Nothing to clean up.)"; return 0; }
echo "Removing FIFO '$__bgfifo_file' and terminating associated background processes..."
rm "$__bgfifo_file"
kill $__bgfifo_pid # Note: We let the job control messages display.
unset __bgfifo_file __bgfifo_pid
return 0
}
Then, source script bgfifo_funcs, using the . shell builtin:
. bgfifo_funcs
Sourcing executes the script in the current shell (rather than in a child process that terminates after the script has run), and thus makes the script's functions and variables available to the current shell. Functions by definition run in the current shell, so any background commands started from functions stay alive.
Now you can set up a stay-open FIFO with a background process that processes input as it arrives as follows:
# Set up FIFO 'bgfifo' in the current dir. and process lines sent to it
# with a sample Sed command that simply prepends '# ' to every line.
$ bgfifo_setup bgfifo "sed 's/^/# /'"
# Send sample data to the FIFO.
$ echo 'Hi.' > bgfifo
# Hi.
# ...
$ echo 'Hi again.' > bgfifo
# Hi again.
# ...
# Clean up when done.
$ bgfifo_cleanup
The reason that cat >a "hangs" is that it is reading from the standard input stream (stdin, file descriptor zero), which defaults to the keyboard.
Adding the & causes it to run in the background, which disconnects it from the keyboard. Normally that would leave a suspended job in the background but, since you exit your script, its background tasks are killed (a SIGHUP signal is sent to them).
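To see the stdin connection for yourself, run this interactively (not from a script):
cat > a   # type a few lines, then press Ctrl-D; the lines end up in file 'a'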
EDIT: although I followed the link in the question, it was not stated originally that the OP was actually using a FIFO at that stage. So thanks to @mklement0.
I don't understand what you are trying to do here, but I suspect you need to run it as a "sourced" file, as follows:
. gash.sh
Where gash.sh is the name of your script. Note the preceding .
You need to specify a file with "cat":
#!/bin/bash
cat SOMEFILE >a &
pid=$!
echo PID $pid
Although that seems a bit silly - why not just "cp" the file (cp SOMEFILE a)?
Q: What exactly are you trying to accomplish?

How can I have output from one named pipe fed back into another named pipe?

I'm adding some custom logging functionality to a bash script, and can't figure out why it won't take the output from one named pipe and feed it back into another named pipe.
Here is a basic version of the script (http://pastebin.com/RMt1FYPc):
#!/bin/bash
PROGNAME=$(basename $(readlink -f $0))
LOG="$PROGNAME.log"
PIPE_LOG="$PROGNAME-$$-log"
PIPE_ECHO="$PROGNAME-$$-echo"
# program output to log file and optionally echo to screen (if $1 is "-e")
log () {
if [ "$1" = '-e' ]; then
shift
$@ > $PIPE_ECHO 2>&1
else
$@ > $PIPE_LOG 2>&1
fi
}
# create named pipes if not exist
if [[ ! -p $PIPE_LOG ]]; then
mkfifo -m 600 $PIPE_LOG
fi
if [[ ! -p $PIPE_ECHO ]]; then
mkfifo -m 600 $PIPE_ECHO
fi
# cat pipe data to log file
while read data; do
echo -e "$PROGNAME: $data" >> $LOG
done < $PIPE_LOG &
# cat pipe data to log file & echo output to screen
while read data; do
echo -e "$PROGNAME: $data"
log echo $data # this doesn't work
echo -e $data > $PIPE_LOG 2>&1 # and neither does this
echo -e "$PROGNAME: $data" >> $LOG # so I have to do this
done < $PIPE_ECHO &
# clean up temp files & pipes
clean_up () {
# remove named pipes
rm -f $PIPE_LOG
rm -f $PIPE_ECHO
}
#execute "clean_up" on exit
trap "clean_up" EXIT
log echo "Log File Only"
log -e echo "Echo & Log File"
I thought the two commands marked above would take the $data from $PIPE_ECHO and output it to the $PIPE_LOG. But it doesn't work. Instead I have to send that output directly to the log file, without going through the $PIPE_LOG.
Why is this not working as I expect?
EDIT: I changed the shebang to "bash". The problem is the same, though.
SOLUTION: A.H.'s answer helped me understand that I wasn't using named pipes correctly. I have since solved my problem by not even using named pipes. That solution is here: http://pastebin.com/VFLjZpC3
It seems to me you do not understand what a named pipe really is. A named pipe is not one stream like normal pipes. It is a series of normal pipes, because a named pipe can be closed, and a close on the producer side might be shown as a close on the consumer side.
The "might be" part is this: the consumer will read data until there is no more data. No more data means that, at the time of the read call, no producer has the named pipe open. This means that multiple producers can feed one consumer only when there is no point in time without at least one producer. Think of it as a door which closes automatically: if there is a steady stream of people keeping the door always open, either by handing the doorknob to the next one or by squeezing multiple people through it at the same time, the door is open. But once the door is closed it stays closed.
A little demonstration should make the difference a little clearer:
Open three shells. First shell:
1> mkfifo xxx
1> cat xxx
no output is shown because cat has opened the named pipe and is waiting for data.
Second shell:
2> cat > xxx
no output, because this cat is a producer which keeps the named pipe open until we tell it to close explicitly.
Third shell:
3> echo Hello > xxx
3>
This producer immediately returns.
First shell:
Hello
The consumer received data, wrote it and - since one more producer keeps the door open - continues to wait.
Third shell
3> echo World > xxx
3>
First shell:
World
The consumer received data, wrote it and - since one more producer keeps the door open - continues to wait.
Second Shell: write into the cat > xxx window:
And good bye!
(control-d key)
2>
First shell
And good bye!
1>
The ^D key closed the last producer, the cat > xxx, and hence the consumer exited as well.
In your case which means:
Your log function will try to open and close the pipes multiple times. Not a good idea.
Both your while loops exit earlier than you think. (Check this with ( while ... done < $PIPE_X; echo FINISHED ) &.)
Depending on the scheduling of your various producers and consumers, the door might be slammed shut sometimes and sometimes not - you have a race condition built in. (For testing you can add a sleep 1 at the end of the log function.)
Your "testcases" only try each possibility once - try using them multiple times (you will block, especially with the sleeps), because your producer might not find any consumer.
So I can explain the problems in your code but I cannot tell you a solution because it is unclear what the edges of your requirements are.
It seems the problem is in the "cat pipe data to log file" part.
Let's see: you use a & to put the loop in the background; I guess you mean it must run in parallel with the second loop.
But the problem is that you don't even need the &, because as soon as no more data is available in the fifo, the while..read stops (though you've got to have some data at first for the first read to work). The next read doesn't hang if no more data is available (which would pose another problem: how does your program stop?).
I guess the while read checks whether more data is available in the file before doing the read, and stops if that's not the case.
You can check with this sample:
mkfifo foo
while read data; do echo $data; done < foo
This script will hang until you write something from another shell (or bg the first one). But it ends as soon as that single write completes: the writer closes the pipe, so the next read sees end-of-file.
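One common workaround, sketched minimally here (the fd number and the FIFO name are illustrative), is to hold a dummy write end open so the reader never sees end-of-file:
mkfifo foo
while read data; do echo "got: $data"; done < foo &
exec 3> foo        # hold a write end open on fd 3
echo hello > foo   # the loop prints 'got: hello' and keeps waiting
echo again > foo   # still alive: 'got: again'
exec 3>&-          # close the held write end; the reader sees EOF and exits
wait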
Edit:
I've tested on RHEL 6.2 and it works as you say (i.e., badly!).
The problem is that, after running the script (let's say script "a"), you've got an "a" process remaining. So, yes, in some way the script hangs as I wrote before (not as stupid an answer as I thought then :) ). Except if you write only one log (be it log file only or echo; in that case it works).
(It's the read loop from PIPE_ECHO that hangs when writing to PIPE_LOG and leaves a process running each time).
I've added a few debug messages, and here is what I see:
only one line is read from PIPE_LOG and after that, the loop ends
then a second message is sent to the PIPE_LOG (after being received from the PIPE_ECHO), but the process no longer reads from PIPE_LOG => the write hangs.
When you ls -l /proc/[pid]/fd, you can see that the fifo is still open (but deleted).
In fact, the script exits and removes the fifos, but there is still one process using them.
If you don't remove the log fifo during cleanup and instead cat it, that will free the hanging process.
Hope it will help...

Using Return in bash

I want this script to print 1, 2, 3... without the use of functions: just execute two.sh, then carry on where it left off. Is it possible?
[root@server:~]# cat testing.sh
#!/bin/bash
echo "1"
exec ./two.sh
echo "3"
[root@server:~]# cat two.sh
#!/bin/bash
echo "2"
return
exec, if you give it a program name[a], will replace the current program with whatever you specify.
If you want to just run the script (in another process) and return, simply use:
./two.sh
to do that.
For this simple case, you can also execute the script in the context of the current process with:
. ./two.sh
That will not start up a new process but will have the side-effect of allowing two.sh to affect the current shell's environment. While that's not a problem for your current two.sh (since all it does is echo a line), it may be problematic for more complicated scripts (for example, those that set environment variables).
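For instance (a hypothetical variant of two.sh that also sets a variable, purely for illustration):
# two.sh (hypothetical variant)
#!/bin/bash
echo "2"
greeting="hello from two.sh"
# in the calling shell:
. ./two.sh        # prints 2; greeting is now set here
echo "$greeting"  # hello from two.sh
./two.sh          # prints 2, but greeting would NOT leak into this shell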
[a] Without a program name, it changes certain properties of the current program, such as:
exec >/dev/null
which simply starts sending all standard output to the bit bucket.
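A minimal illustration of that form (the messages are made up):
#!/bin/bash
echo "you will see this"
exec >/dev/null                     # from here on, stdout goes to the bit bucket
echo "this is silently discarded"
echo "but stderr still shows" >&2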
Sure, just run:
echo "1"
./two.sh
echo "3"
