How can I redirect stdout and stderr with a variable? - bash

Normally, we use
sh script.sh 1>t.log 2>t.err
to redirect the logs.
How can I use a variable to do the redirection:
string="1>t.log 2>t.err"
sh script.sh $string

You need to use the 'eval' shell builtin for this purpose. As per the bash man page:
eval [arg ...]
The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
Run your command like below:
eval sh script.sh $string
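For illustration, here is a minimal sketch of the difference (my own example, reusing the t.log/t.err names from the question):
string="1>t.log 2>t.err"
# Without eval: redirections are not re-parsed after variable expansion,
# so script.sh receives the two words "1>t.log" and "2>t.err" as arguments.
sh script.sh $string
# With eval: the line is re-read by the shell, so the redirections apply
# and stdout/stderr of script.sh end up in t.log and t.err.
eval sh script.sh $string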
However, do you really need to run script.sh through the sh command? If you instead put an interpreter line in the script itself (#!/bin/sh as the first line) and give it execute permission, you can run it directly and get the exit code of the command itself. Below is an example of using ls with and without sh. Notice the difference in exit codes.
Note: I had only the file try1.sh in my current directory, so the ls command was bound to exit with return code 2.
$ ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
2
$ eval sh ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
127
In the second case, the exit code is that of the sh shell; in the first case, it is that of the ls command. Choose carefully depending on your needs.

I figured out one way, but it's ugly:
echo script.sh $string | sh
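A side note (my own addition, and it assumes your interactive shell is bash rather than plain sh): with this pipe approach the exit status of the script is still reachable through the PIPESTATUS array:
echo script.sh $string | sh
echo "${PIPESTATUS[1]}"   # exit status of the sh that ran script.sh (bash only)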

I think you can just put the name into a string variable
and then use redirection:
file_name="file1"
outfile="${file_name}.log"
errorfile="${file_name}.err"
sh script.sh 1> "$outfile" 2> "$errorfile"

Related

Why does a script run differently when called with source and with sh?

My shell script a.sh is like this:
#!/bin/sh
# $ret may come from a database or a pipe, whatever the source:
ret="cnt
1"
echo -e $ret
and calling in different ways produces different results:
$ sh a.sh
cnt 1
$ source a.sh
cnt
1
$
How can I get the same output under sh and source?
You need to quote the argument to echo. – fedorqui
Thanks @fedorqui. So that means echo -e "$ret" – tonylee0329
Exactly, quoting echo's argument is the way to remove the difference between the two shells' echo behavior.
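To see why quoting matters, here is a minimal sketch of my own (in bash): without quotes the shell word-splits $ret, so echo receives two separate arguments and joins them with a single space; with quotes the embedded newline survives.
ret="cnt
1"
echo -e $ret     # word splitting: prints: cnt 1
echo -e "$ret"   # quoted: prints cnt and 1 on separate lines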

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know if the user used the "std redirection" '1>' and '2>', and therefore adapt the output of my script.
Is it possible with built-in variables?
Thanks.
On Linux and some Unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to wherever your standard streams are pointing. Using readlink, we can check whether they were redirected by comparing them to the parent process's file descriptors.
However, we will not use self but $$, because command substitution such as $(readlink /proc/self/fd/1) runs in a subshell, so self would no longer refer to the current bash script but to that subshell.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$#"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
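As a minimal sketch of that terminal check (the messages are my own, just for illustration):
#!/usr/bin/env bash
if [ -t 1 ] && [ -t 2 ]; then
    echo "stdout and stderr look like a terminal (could still be e.g. >/dev/tty5)"
else
    echo "at least one of them is redirected, piped, or closed"
fi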

How to check the current shell and change it to bash via script?

#!/bin/bash
if [ ! -f readexportfile ]; then
echo "readexportfile does not exist"
exit 0
fi
The above is part of my script. When the current shell is /bin/csh my script fails with the following error:
If: Expression Syntax
Then: Command not found
If I run bash and then run my script, it runs fine (as expected).
So the question is: is there any way that my script can change the current shell and then interpret the rest of the code?
PS: If I keep bash in my script, it changes the current shell and the rest of the code in the script doesn't get executed.
The other replies are correct; however, to answer your question, this should do the trick:
[[ $(basename $SHELL) = 'bash' ]] || exec /bin/bash
The exec builtin replaces the current shell with the given command (in this case, /bin/bash).
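A common variation (my own sketch, not part of the original answer; it assumes the script is at least being read by a Bourne-compatible shell and that bash lives at /bin/bash) is to re-execute the script itself under bash, so the remaining code runs there with the original arguments:
#!/bin/bash
# If we are not already running under bash, replace this process with
# bash re-running the same script and arguments.
if [ -z "$BASH_VERSION" ]; then
    exec /bin/bash "$0" "$@"
fi
# ... rest of the script, now running under bash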
You can use a shebang (#!) to overcome your issue.
In your code you are already using a shebang, but make sure it is the very first line of the script.
$ cat test.sh
#!/bin/bash
if [ ! -f readexportfile ]; then
echo "readexportfile does not exist"
exit 0
else
echo "No File"
fi
$ ./test.sh
readexportfile does not exist
$ echo $SHELL
/bin/tcsh
In the above code, even though I am using csh, the code executed because we put the shebang in the script. If there is no shebang, the script is interpreted by the shell you are already logged in to.
In your case you can also check the location of the bash interpreter using
$ which bash
or
$ cat /etc/shells | grep bash

How to specify the zeroth argument

I'm writing a bash script that starts the tcsh interpreter as a login shell and has it execute my_command. The tcsh man page says that there are two ways to start a login shell. The first is to use /bin/tcsh -l with no other arguments. Not an option, because I need the shell to execute my_command. The second is to specify a dash (-) as the zeroth argument.
Now the bash exec command with the -l option does exactly this, and in fact the following works perfectly:
#!/bin/bash
exec -l /bin/tcsh -c my_command
Except... I can't use exec because I need the script to come back and do some other things afterwards! So how can I specify - as the zeroth argument to /bin/tcsh without using exec?
You can enclose the exec command in a subshell inside your script.
#!/bin/bash
(exec -l /bin/tcsh -c my_command)
# ... whatever else you need to do after the command is done
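Because the subshell is a separate process, exec only replaces that subshell, not your script, and the exit status of my_command comes back in $?. A small sketch of that (my own addition):
#!/bin/bash
(exec -l /bin/tcsh -c my_command)
status=$?                              # exit status of my_command
echo "my_command exited with $status"
# ... carry on with the rest of the script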
You can write a wrapper script (w.sh) that contains:
#!/bin/bash
exec -l /bin/tcsh -c my_command
and execute w.sh in your main script.

How can a ksh script determine the full path to itself, when sourced from another?

How can a script determine its path when it is sourced by ksh? i.e.
$ ksh ". foo.sh"
I've seen very nice ways of doing this in BASH posted on stackoverflow and elsewhere but haven't yet found a ksh method.
Using "$0" doesn't work. This simply refers to "ksh".
Update: I've tried using the "history" command but that isn't aware of the history outside the current script.
$ cat k.ksh
#!/bin/ksh
. j.ksh
$ cat j.ksh
#!/bin/ksh
a=$(history | tail -1)
echo $a
$ ./k.ksh
270 ./k.ksh
I would want it to echo "* ./j.ksh".
If it's the AT&T ksh93, this information is stored in the .sh namespace, in the variable .sh.file.
Example
sourced.sh:
(
echo "Sourced: ${.sh.file}"
)
Invocation:
$ ksh -c '. ./sourced.sh'
Result:
Sourced: /var/tmp/sourced.sh
The .sh.file variable is distinct from $0. While $0 can be ksh or /usr/bin/ksh, or the name of the currently running script, .sh.file will always refer to the file for the current scope.
In an interactive shell, this variable won't even exist:
$ echo ${.sh.file:?}
-ksh: .sh.file: parameter not set
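If what you ultimately need is the full path or directory of the sourced file, here is a small ksh93-only sketch built on the same variable (my own addition):
# inside the sourced file
this_file=${.sh.file}
this_dir=$(cd "$(dirname -- "${.sh.file}")" && pwd)
echo "file: $this_file"
echo "dir:  $this_dir"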
I believe the only portable solution is to override the source command:
source() {
sourced=$1
. "$1"
}
And then use source instead of . (the script name will be in $sourced).
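A minimal usage sketch of that override (the file names are just for illustration):
$ cat main.ksh
#!/bin/ksh
source() {
  sourced=$1
  . "$1"
}
source ./helper.ksh
$ cat helper.ksh
echo "I was sourced from: $sourced"
$ ./main.ksh
I was sourced from: ./helper.ksh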
The difference of course between sourcing and forking is that sourcing results in the invoked script being executed within the calling process. Henk showed an elegant solution in ksh93, but if, like me, you're stuck with ksh88 then you need an alternative. I'd rather not change the default ksh method of sourcing by using C-shell syntax, and at work it would be against our coding standards, so creating and using a source() function would be unworkable for me. The ps command, $0 and $_ are all unreliable, so here's an alternative:
$ cat b.sh ; cat c.sh ; ./b.sh
#!/bin/ksh
export SCRIPT=c.sh
. $SCRIPT
echo "PPID: $$"
echo "FORKING c.sh"
./c.sh
If we set the invoked script in a variable, and source it using the variable, that variable will be available to the invoked script, since they are in the same process space.
#!/bin/ksh
arguments=$_
pid=$$
echo "PID:$pid"
command=`ps -o args -p $pid | tail -1`
echo "COMMAND (from ps -o args of the PID): $command"
echo "COMMAND (from c.sh's \$_ ) : $arguments"
echo "\$SCRIPT variable: $SCRIPT"
echo dirname: `dirname $0`
echo ; echo
Output is as follows:
PID:21665
COMMAND (from ps -o args of the PID): /bin/ksh ./b.sh
COMMAND (from c.sh's $_ ) : SCRIPT=c.sh
$SCRIPT variable: c.sh
dirname: .
PPID: 21665
FORKING c.sh
PID:21669
COMMAND (from ps -o args of the PID): /bin/ksh ./c.sh
COMMAND (from c.sh's $_ ) : ./c.sh
$SCRIPT variable: c.sh
dirname: .
So when we set the SCRIPT variable in the caller script, the variable is either accessible from the sourced script's operands, or, in the case of a forked process, the variable along with all other environment variables of the parent process are copied for the child process. In either case, the SCRIPT variable can contain your command and arguments, and will be accessible in the case of both sourcing and forking.
You should find it as the last command in the history.
