The script utility works like this:
$ script
Script started, file is typescript
$ ls
2 bin doubleline new1 play typescript
alok core filelist output pslist unix
$ ps
PID TTY TIME CMD
28149 pts/7 0:00 ksh.ms
$
Script done, file is typescript
After this, the contents of the typescript file are:
$ cat typescript
Script started on Wed Sep 07 05:56:26 2011
$ ls
2 bin doubleline new1 play typescript
alok core filelist output pslist unix
$ ps
PID TTY TIME CMD
28149 pts/7 0:00 ksh.ms
$
Script done on Wed Sep 07 05:56:33 2011
$
I want to replicate this behaviour using other commands and I/O redirections.
The solution must be a one-line command (it may include a pipeline).
Any help would be great; in particular, how can we redirect stdin, stdout, and stderr to a file while everything still appears on the terminal?
$ tee -a typescript | sh -i 2>&1 | tee -a typescript
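Roughly, the pieces map onto script's behaviour like this (a sketch of the same pipeline; typescript is just the log file's name):
$ tee -a typescript | sh -i 2>&1 | tee -a typescript
# first tee  : copies everything you type into the log, then feeds it to the shell
# sh -i      : runs an interactive shell on that input
# 2>&1       : merges the shell's stderr (prompts included) into its stdout
# second tee : copies the shell's output into the same log and onto the terminal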
I am using /bin/script to capture the output of a command while preserving colors and formatting. Using a subshell and assigning to a variable does not always work well, e.g.:
foo="$( ls --color 2>&1 )"
I can use /bin/script to capture stdout:
$ script -qc "echo foo" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:40:40 PM PST
foo
I can use /bin/script to capture stderr (ls of non-existent directory goes to stderr):
$ script -qc "ls vxzcvcxvc" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:38:17 PM PST
ls: cannot access vxzcvcxvc: No such file or directory
My problem arises when the script run inside /bin/script mucks with file descriptors.
I am not able to use /bin/script to capture redirected stderr:
$ script -qc "ls vxzcvcxvc 2>&1" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:47:13 PM PST
I have tried other ways as well:
$ script -qc "echo foo1 && >&2 echo foo2 && echo foo3" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:46:09 PM PST
foo1
foo3
I assume /bin/script is doing its own file descriptor magic (redirecting output to file), so I am left wondering what to do if the script I am calling does its own redirection.
Tangential question: The primary culprit is a logging line that does
printf "${1}" 1>&2
in order to print logging to stderr. Is there a way to output to stderr without mucking with file descriptors (assuming this is the reason /bin/script fails to pick it up)?
In GNU/Linux, I want to log all command output to one particular file.
Say, in the terminal, I type:
echo "Hi this is a dude"
It should be written to the file specified earlier, without adding a redirection to every command.
$ script x1
Script started, file is x1
$ echo "Hi this is a dude"
Hi this is a dude
$ echo "done"
done
$ exit
exit
Script done, file is x1
Then, the contents of file x1 are:
Script started on Thu Jun 13 14:51:29 2013
$ echo "Hi this is a dude"
Hi this is a dude
$ echo "done"
done
$ exit
exit
Script done on Thu Jun 13 14:51:52 2013
You can easily edit out your own commands and the start/end lines using basic shell scripting (grep -v, especially if your Unix prompt has a distinctive substring pattern).
Commands launched from the shell inherit the file descriptor to use for standard output from the shell. In your typical interactive shell, standard output is the terminal. You can change that by using the exec command:
exec > output.txt
Following that command, the shell itself will write its standard output to a file called output.txt, and any command it spawns will do likewise, unless otherwise redirected. You can always "restore" output to the terminal using
exec > /dev/tty
Note that your shell prompt and text you type at the prompt continue to be displayed on the screen (since the shell writes both of those to standard error, not standard output).
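For example, a short session might look like this (a sketch; output.txt is an arbitrary file name):
$ exec > output.txt
$ echo "this goes to the file"
$ exec > /dev/tty
$ cat output.txt
this goes to the file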
{ command1 ; command2 ; command3 ; } > outfile.txt
Output redirection can be achieved in bash with >; see the bash manual's section on redirection for more information.
You can run any program with redirected output and all of its output will go to a file, for example:
$ ls > out
$ cat out
Desktop
Documents
Downloads
eclipse
Firefox_wallpaper.png
...
So, if you want to open a new shell session with redirected output, just do so:
$ bash > outfile
will start a new bash session redirecting all of stdout to that file.
$ bash &> outfile
will redirect all of stdout AND stderr to that file (meaning you will no longer see prompts show up in your terminal).
For example:
$ bash > outfile
$ echo "hello"
$ echo "this is an outfile"
$ cd asdsd
bash: cd: asdsd: No such file or directory
$ exit
exit
$ cat outfile
hello
this is an outfile
$
$ bash &> outfile
echo "hi"
echo "this saves everythingggg"
cd asdfasdfasdf
exit
$ cat outfile
hi
this saves everythingggg
bash: line 3: cd: asdfasdfasdf: No such file or directory
$
If you want to see the output and also have it written to a file (say, for later analysis), you can use the tee command.
$ echo "hi this is a dude" | tee hello
hi this is a dude
$ ls
hello
$ cat hello
hi this is a dude
tee is a useful command because it stores everything piped into it while also displaying it on the screen. It is particularly handy for logging the output of scripts.
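For example, to watch a script's combined output while also logging it (myscript.sh and run.log are placeholder names; use tee -a to append rather than overwrite):
$ ./myscript.sh 2>&1 | tee run.log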
I have a shell script which uses process substitution
The script is:
#!/bin/bash
while read line
do
    echo "$line"
done < <( grep "^abcd$" file.txt )
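For context, the usual reason to read from a process substitution rather than a pipe is that piping into while runs the loop in a subshell, so variables set inside it are lost. A minimal sketch of the problem this construct avoids:
count=0
grep "^abcd$" file.txt | while read line
do
    count=$((count+1))    # runs in a subshell
done
echo "$count"             # still 0: the increment was lost with the subshell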
When I run the script using sh file.sh, I get the following output:
$sh file.sh
file.sh: line 5: syntax error near unexpected token `<'
file.sh: line 5: `done < <( grep "^abcd$" file.txt )'
When I run the script using bash file.sh, the script works.
Interestingly, sh is a symbolic link to /bin/bash.
$ which bash
/bin/bash
$ which sh
/usr/bin/sh
$ ls -l /usr/bin/sh
lrwxrwxrwx 1 root root 9 Jul 23 2012 /usr/bin/sh -> /bin/bash
$ ls -l /bin/bash
-rwxr-xr-x 1 root root 648016 Jul 12 2012 /bin/bash
I tested to make sure symbolic links are being followed in my shell using the following:
$ ./a.out
hello world
$ ln -s a.out a.link
$ ./a.link
hello world
$ ls -l a.out
-rwx--x--x 1 xxxx xxxx 16614 Dec 27 19:53 a.out
$ ls -l a.link
lrwxrwxrwx 1 xxxx xxxx 5 May 14 14:12 a.link -> a.out
I am unable to understand why sh file.sh does not execute as /bin/bash file.sh since sh is a symbolic link to /bin/bash.
Any insights will be much appreciated. Thanks.
When invoked as sh, bash enters POSIX mode after the startup files are read. Process substitution is not recognized in POSIX mode. According to POSIX, <(foo) should direct input from the file named (foo). (Well, that is, according to my reading of the standard; the grammar is ambiguous in many places.)
EDIT: From the bash manual:
The following list is what’s changed when ‘POSIX mode’ is in effect:
...
Process substitution is not available.
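With a bash version whose POSIX mode disables process substitution, you can reproduce the failure without going through the sh symlink (a sketch; the exact error text varies by version):
$ bash --posix file.sh
file.sh: line 5: syntax error near unexpected token `<'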
I'm trying to get the PID of a piped data collection command in a BASH script so that I can kill the process once another (foreground) process has completed. I've got a timeout on the data collection command, but I don't know beforehand how long it should be; anywhere from a few minutes to several hours.
This question, How to get the PID of a process in a pipeline, was informative, but it didn't pan out, probably because I'm using BASH 2.05 (on an embedded system).
Initial testing on the command line with the jobs -p command looked promising. In practice, I replace the ksmon command below with a pipeline (ksmon | {various filters} | gzip -c > file), so the "$!" BASH variable gives the last process in the pipe, not the ksmon program I'm trying to kill:
Test:/ $ ksmon > /dev/null & jobs -p
[1] 2016
2016
Test:/ $ ps | grep ksmon
2016 root 660 S ksmon
2018 root 580 S grep ksmon
Test:/ $ kill $(jobs -p)
[1]+ Terminated ksmon >/dev/null
Test:/ $ ps | grep ksmon
2021 root 580 S grep ksmon
Test:/ $
Whoohoo! So I tried to put that in a script:
Test:/ $ cat > test.sh << EOF
> #!/bin/bash
> # That's bash 2.05
> ksmon > /dev/null & jobs -p
> EOF
Test:/ $ chmod 755 test.sh
Test:/ $ ./test.sh
Test:/ $
Test:/ $ # Nothing ...
Test:/ $ ps | grep ksmon
2025 root 660 S ksmon
2027 root 580 S grep ksmon
Test:/ $ jobs -p
Test:/ $
This is the version I'm trying to get it working with:
Test:/ $ bash --version
GNU bash, version 2.05.0(2)-release (arm-Artila-linux-gnu)
Copyright 2000 Free Software Foundation, Inc.
Oddly enough, the above script does work on an Ubuntu host.
What do I have to do to make jobs -p work in a BASH 2.05 script, or is there an alternative?
One thought was to just subtract a fixed number from the $! variable, but I'm not sure that's a good idea ...
EDIT:
The reason I'd like to kill the first program in the pipe is that all the programs that follow it in the pipe, particularly gzip, will then do a nice job of closing their output streams.
I suggest you use $! to get the PID of the last process in the pipe; then use lsof to sequentially determine the previous processes in the chain from the open file descriptors under /proc.
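A rough sketch of that idea on Linux (illustrative names; assumes readlink and a /proc filesystem, which may not hold on every embedded system):
ksmon | grep -v noise | gzip -c > file.gz &
LAST=$!                               # PID of gzip, the last process in the pipe
PIPE=$(readlink /proc/$LAST/fd/0)     # the pipe it reads from, e.g. pipe:[12345]
for p in /proc/[0-9]*
do
    # the process whose stdout is that same pipe is the one feeding gzip;
    # repeat the lookup with the PID found to walk further back to ksmon
    if [ "$(readlink "$p/fd/1" 2>/dev/null)" = "$PIPE" ]; then
        echo "upstream PID: ${p#/proc/}"
    fi
done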
If I run bash with the -x option, it shows every line as it executes, but the script still runs straight through. How can I execute it line by line, so I can check that each line does the correct thing, or abort and fix the bug? The same effect could be had by putting a read before every line.
You don't need to put a read on every line; just add a trap like the following to your bash script. It has the effect you want, e.g.:
#!/usr/bin/env bash
set -x
trap read debug
< YOUR CODE HERE >
This works; I just tested it with bash v4.2.8 and v3.2.25.
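As a toy example (hypothetical script), the DEBUG trap fires before each command, so execution pauses until you press Enter:
#!/usr/bin/env bash
set -x
trap read debug

echo "first step"    # pauses here before running
ls / > /dev/null     # and pauses again here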
IMPROVED VERSION
If your script reads content from files, the approach listed above will not work. A workaround could look like the following example.
#!/usr/bin/env bash
echo "Press CTRL+C to proceed."
trap "pkill -f 'sleep 1h'" INT
trap "set +x ; sleep 1h ; set -x" DEBUG
< YOUR CODE HERE >
To stop the script you would have to kill it from another shell in this case.
ALTERNATIVE 1
If you simply want to wait a few seconds before proceeding to the next command in your script the following example could work for you.
#!/usr/bin/env bash
trap "set +x; sleep 5; set -x" DEBUG
< YOUR CODE HERE >
I'm adding set +x and set -x within the trap command to make the output more readable.
The BASH Debugger Project is "a source-code debugger for bash that follows the gdb command syntax."
If your bash script is really a bunch of one-off commands that you want to run one by one, you could do something like this, which runs each command individually as you increment a variable LN corresponding to the line number you want to run. This lets you easily run the last command again, then increment the variable to move on to the next command.
Assuming your commands are in a file "it.sh", run the following one by one.
$ cat it.sh
echo "hi there"
date
ls -la /etc/passwd
$ $(LN=1 && cat it.sh | head -n$LN | tail -n1)
"hi there"
$ $(LN=2 && cat it.sh | head -n$LN | tail -n1)
Wed Feb 28 10:58:52 AST 2018
$ $(LN=3 && cat it.sh | head -n$LN | tail -n1)
-rw-r--r-- 1 root wheel 6774 Oct 2 21:29 /etc/passwd
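If you do this often, a small helper function (hypothetical name runline) saves retyping; eval runs the extracted line in the current shell, so quoting is handled normally:
runline() { eval "$(sed -n "${1}p" it.sh)"; }   # run line $1 of it.sh
$ runline 1
hi there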
Have a look at bash-stepping-xtrace, which allows stepping through xtrace.
xargs can also run a file line by line (note that each line executes in its own bash process, so state does not persist between lines):
cat .bashrc | xargs -d '\n' -L 1 bash -c
-d '\n'  use the newline as the delimiter, so each line becomes a single argument
-L 1     invoke the command once per input line