When `(exec -l -a specialname /bin/bash -c 'echo $0' ) 2> error` is executed, why does it output `^[7^[[r^[[999;999H^[[6n` to stderr? - bash

When I run the bash test suite:
(exec -l -a specialname /bin/bash -c 'echo $0' ) 2> error
the run-builtins test fails. After some searching, I found that the command writes
^[7^[[r^[[999;999H^[[6n
to stderr, which is why I redirected it to the file error.
If I cat the file, it prints what looks like a blank line.
Opening it in vim, I found:
^[7^[[r^[[999;999H^[[6n
Why?

After a long search, I found that bash (started as a login shell because of exec -l) reads /etc/profile, which on my system contains:
if [ -x /usr/bin/resize ]; then
    /usr/bin/resize >/dev/null
fi
So bash executes the resize program, which on my system is provided by busybox. The busybox source file console-tools/resize.c contains:
fprintf(stderr, ESC"7" ESC"[r" ESC"[999;999H" ESC"[6n");
and that is what ends up on stderr.
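For reference, those are standard VT100/ANSI control sequences: ESC 7 saves the cursor position, ESC [r resets the scrolling region, ESC [999;999H moves the cursor toward the bottom-right corner (the terminal clamps it to its real size), and ESC [6n asks the terminal to report the cursor position. That is how resize discovers the screen dimensions; when stderr is a file instead of a terminal, the query is simply captured in the file. A minimal sketch of the same probe in plain bash, assuming an ANSI-compatible terminal on /dev/tty:
#!/usr/bin/env bash
# talk to the terminal directly so stdout/stderr redirections don't interfere
exec 3<>/dev/tty
printf '\0337\033[r\033[999;999H\033[6n' >&3   # save cursor, reset region, jump far, query position
IFS='[;' read -r -s -d R _ rows cols <&3       # the terminal replies: ESC [ rows ; cols R
printf '\0338' >&3                             # restore the saved cursor position
echo "LINES=$rows COLUMNS=$cols"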

Running a similar command reproduces it:
(exec -l -a specialname /bin/bash -c 'export PS1=test; echo ${PS1}') 2> err.log
vi err.log

Related

sh -c doesn't recognize heredoc syntax

I think the two commands below should be identical, but given the heredoc, the shell produces an error. Is it possible to pass a heredoc to the -c argument of sh?
heredoc example
/bin/sh -c <<EOF
echo 'hello'
EOF
# ERROR: /bin/sh: -c: option requires an argument
simple string example
/bin/sh -c "echo 'hello'"
# prints hello
The commands are not equivalent.
/bin/sh -c <<EOF
echo 'hello'
EOF
is equivalent to
echo "echo 'hello'" | /bin/sh -c
or, with here-string:
/bin/sh -c <<< "echo 'hello'"
but sh -c requires an argument. It would work with
echo "echo 'hello'" | /bin/sh
I tried to post this as a comment, but code formatting doesn't work well in comments. Using the accepted answer by @Benjamin W. as a guide, I was able to get it to work with this snippet:
( cat <<EOF
echo 'hello'
EOF
) | /bin/sh
The magic is in how cat handles inputs. From the man page:
If file is a single dash (`-') or absent, cat reads from the standard input.
So cat copies its stdin (the heredoc) to its stdout, and that can be piped to /bin/sh.
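The extra cat subshell isn't strictly necessary, though. A heredoc can be attached to sh's stdin directly, or, if you really want it as the -c argument, captured with command substitution; a sketch of both variants:
# feed the heredoc straight to sh's stdin (no -c needed):
/bin/sh <<'EOF'
echo 'hello'
EOF
# or capture it with command substitution and pass it as the -c argument:
/bin/sh -c "$(cat <<'EOF'
echo 'hello'
EOF
)"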

How to print shell script stdout/stderr to file/s and console

In a bash script I use the following syntax in order to send everything the script prints to two files, $file and $sec_file.
We are running the script on our Linux RHEL 7.8 server:
exec > >(tee -a "$file" >>"$sec_file") 2>&1
After the script completes, both files contain the stdout/stderr of every line in the script.
Now we additionally want to print stdout/stderr to the console, not only to the files.
I would appreciate any suggestions.
Example of the script:
# more /tmp/script.bash
#!/bin/bash
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
exec > >(tee -a "$file" >>"$sec_file") 2>&1
echo "hello world , we are very happy to stay here "
Example how to run the script:
/tmp/script.bash
<-- no output from the script -->
# more /tmp/file.txt
hello world , we are very happy to stay here
# more /tmp/sec_file.txt
hello world , we are very happy to stay here
Example of the expected output:
/tmp/script.bash
hello world , we are very happy to stay here
and
# more /tmp/file.txt
hello world , we are very happy to stay here
# more /tmp/sec_file.txt
hello world , we are very happy to stay here
I think the easiest fix is to pass multiple files as arguments to tee, like this:
% python3 -c 'import sys; print("to stdout"); print("to stderr", file=sys.stderr)' 2>&1 | tee -a /tmp/file.txt /tmp/file_sec.txt
to stdout
to stderr
% cat /tmp/file.txt
to stdout
to stderr
% cat /tmp/file_sec.txt
to stdout
to stderr
Your script would look like this then:
#!/bin/bash
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
exec > >(tee -a "$file" "$sec_file") 2>&1
echo "hello world , we are very happy to stay here "
I would suggest writing console-only output to a separate file descriptor:
#!/bin/bash
file=file.txt
sec_file=sec_file.txt
exec 4>&1 > >(tee -a "$file" >>"$sec_file") 2>&1
echo "stdout"
echo "stderr" >&2
echo "to the console" >&4
Output:
me@pc:~/⟫ ./script.sh
to the console
me@pc:~/⟫ cat file.txt
stdout
stderr
me@pc:~/⟫ cat sec_file.txt
stdout
stderr
If you want, you can take this further and even write to the original stderr again with >&5:
exec 4>&1 5>&2 > >(tee -a "$file" >>"$sec_file") 2>&1
echo "stderr to console" >&5
Edit: Changed &3 to &4 as &3 is sometimes used for stdin.
But maybe this is the moment to rethink the design: keep &1 as stdout and &2 as stderr, and use &4 and &5 to write to the files instead:
exec 4> >(tee -a "$file" >>"$sec_file") 5>&4
This does require you to append >&4 2>&5 to every line whose output should end up in the files.
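Putting that last variant together, a minimal sketch (some_command is a hypothetical stand-in for whatever should log to the files):
#!/bin/bash
# fds 1/2 stay on the console; fd 4 feeds both files via tee, fd 5 mirrors it
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
exec 4> >(tee -a "$file" >>"$sec_file") 5>&4
echo "console only"            # plain stdout, untouched
echo "files only" >&4          # appended to both files
some_command >&4 2>&5          # hypothetical: stdout and stderr both go to the files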

Why does `bash -c '...'` terminate early on some (but not all) errors?

What is going on here?
The following works as expected:
$ bash -c 'false; echo $?'
1
But trying to kill a nonexistent process with pkill makes bash terminate before the script is done.
$ bash -c 'pkill -f xyz_non_existent_process_xyz; echo $?'
[1] 21078 terminated bash -c 'pkill -f xyz_non_existent_process_xyz; echo $?'
If I run this command in the terminal, I see that pkill returns an error code of 1, just like the false command did:
$ pkill -f xyz_non_existing_process_xyz; echo $?
1
So the two commands are returning the same status code... so what's the difference!?
I tried wrapping the command in a number of ways. For example:
$ bash -c '(pkill -f xyz_non_existent_process_xyz || true); echo $?'
[1] 21309 terminated bash -c '(pkill -f xyz_non_existent_process_xyz || true); echo $?'
So it seems like whatever is causing bash to terminate early, it's not the exit status of any of the commands??
What's going on here?
It's simple: pkill -f matches against full command lines, and the command line of the bash process itself contains the pattern, so pkill kills its own parent shell. Change the search pattern so it no longer matches itself and it will work:
bash -c 'pkill -f "xyz_n""on_existent_process_xyz"; echo $?'
It's a little bit tricky: "xyz_n""on_existent_process_xyz" is the same string as xyz_non_existent_process_xyz once bash removes the quotes, but the quote characters are still present in the recorded command line, so the pattern no longer matches it.
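The classic pgrep/pkill idiom achieves the same thing with a character class: the regex xy[z] still matches the literal text xyz in other processes' command lines, but the command line of the shell itself contains xy[z], which the regex does not match:
bash -c 'pkill -f "xyz_non_existent_process_xy[z]"; echo $?'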

Why can't I redirect stderr from within a bash -c command line?

I'm trying to log the time for the execution of a command, so I'm doing that by using the builtin time command in bash. I also wish to redirect the stderr and stdout to a logfile at the same time. However, it doesn't seem to be working as the stderr just spills out onto my terminal.
Here is the command:
rm -rf doxygen
mkdir doxygen
bash -c 'time "/cygdrive/d/Program Files/doxygen/bin/doxygen.exe" Doxyfile > doxygen/doxygen.log 1>&2' genfile > doxygen/time 1>&2 &
What am I doing wrong here?
You are using 1>&2 instead of 2>&1.
With the lengths of names reduced, you're trying to run:
bash -c 'time doxygen Doxyfile > doxygen.log 1>&2' genfile > doxygen.time 1>&2 &
The > doxygen.log sends standard output to the file; the 1>&2 then changes your mind and sends standard output to the same place that standard error is going. Similarly with the outer pair of redirections.
If you used:
bash -c 'time doxygen Doxyfile > doxygen.log 2>&1' genfile > doxygen.time 2>&1 &
then you send standard error to the same place that standard output goes — twice.
Incidentally, do you realize that the genfile serves as the $0 for the script run by bash -c '…'? I'm not convinced it is needed in your script. To see this, try:
bash -c 'echo 0=$0; echo 1=$1; echo 2=$2' genfile jarre oxygene
When run, this produces:
0=genfile
1=jarre
2=oxygene
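As a side note, the report printed by the time keyword goes to bash's own stderr, so if the goal is to capture the timing separately from doxygen's output, a sketch along these lines (same shortened file layout assumed) keeps the two apart without any 1>&2 juggling:
# doxygen's stdout+stderr go to doxygen.log; time's report, written to the
# outer bash's stderr, is captured in doxygen.time
bash -c 'time doxygen Doxyfile > doxygen.log 2>&1' 2> doxygen.time &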

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line that was used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know whether the user used the "std redirections" 1> and 2>, and adapt the output of my script accordingly.
Is this possible with built-in variables?
Thanks.
On Linux and some unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to where your std redirections are pointing to. Using readlink, we can query if they were redirected or not by comparing them to the parent process' file descriptor.
We will however not use self but $$, because readlink runs inside a command substitution, i.e. a subshell, so /proc/self there would refer to that subshell rather than to the current bash script.
$ cat test.sh
#!/usr/bin/env bash
# Compare our fds 1 and 2 with the parent shell's: if the link targets
# differ, the user redirected them on the command line.
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$@"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
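A minimal sketch of that tty check, with the caveats above in mind:
#!/usr/bin/env bash
# -t N tests whether file descriptor N is attached to a terminal
if [ -t 1 ] && [ -t 2 ]; then
    echo "stdout and stderr look like a terminal (probably not redirected)"
else
    echo "at least one of stdout/stderr is redirected or closed"
fi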
