how to log all the command output to one single file in bash scripting [duplicate] - bash

This question already has an answer here:
Output bash script into file (without >> )
(1 answer)
Closed 9 years ago.
On GNU/Linux, I want to log all command output to one particular file.
Say that in the terminal I type
echo "Hi this is a dude"
It should be written to the file specified earlier, without using redirection on every command.

$ script x1
Script started, file is x1
$ echo "Hi this is a dude"
Hi this is a dude
$ echo "done"
done
$ exit
exit
Script done, file is x1
Then, the contents of file x1 are:
Script started on Thu Jun 13 14:51:29 2013
$ echo "Hi this is a dude"
Hi this is a dude
$ echo "done"
done
$ exit
exit
Script done on Thu Jun 13 14:51:52 2013
You can easily edit out your own commands and the start/end lines with basic shell tools (grep -v, especially if your Unix prompt has a distinctive substring pattern).
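For example, a rough filter over the typescript could look like this (a sketch assuming the default "$ " prompt; adjust the patterns to your own prompt):
$ grep -v '^\$ ' x1 | grep -v '^Script ' > x1.clean
The first grep drops the prompt/command lines and the second drops the start/end lines, leaving only the command output.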

Commands launched from the shell inherit the shell's file descriptor for standard output. In your typical interactive shell, standard output is the terminal. You can change that by using the exec command:
exec > output.txt
Following that command, the shell itself will write its standard output to a file called output.txt, and any command it spawns will do likewise, unless otherwise redirected. You can always "restore" output to the terminal using
exec > /dev/tty
Note that your shell prompt and text you type at the prompt continue to be displayed on the screen (since the shell writes both of those to standard error, not standard output).
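A minimal interactive sketch of this (output.txt is just an illustrative name):
$ exec > output.txt
$ echo "this goes to the file"
$ echo "so does this"
$ exec > /dev/tty
$ cat output.txt
this goes to the file
so does this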

You can also group commands so that a single redirection applies to all of their output:
{ command1 ; command2 ; command3 ; } > outfile.txt

Output redirection is achieved in bash with >; see this link for more info on bash redirection.
You can run any program with redirected output, and all of its output will go to a file, for example:
$ ls > out
$ cat out
Desktop
Documents
Downloads
eclipse
Firefox_wallpaper.png
...
So, if you want to open a new shell session with redirected output, just do so:
$ bash > outfile
will start a new bash session, redirecting all of stdout to that file.
$ bash &> outfile
will redirect both stdout AND stderr to that file (meaning you will no longer see prompts show up in your terminal).
For example:
$ bash > outfile
$ echo "hello"
$ echo "this is an outfile"
$ cd asdsd
bash: cd: asdsd: No such file or directory
$ exit
exit
$ cat outfile
hello
this is an outfile
$
$ bash &> outfile
echo "hi"
echo "this saves everythingggg"
cd asdfasdfasdf
exit
$ cat outfile
hi
this saves everythingggg
bash: line 3: cd: asdfasdfasdf: No such file or directory
$

If you want to see the output and have it written to a file (say for later analysis) then you can use the tee command.
$ echo "hi this is a dude" | tee hello
hi this is a dude
$ ls
hello
$ cat hello
hi this is a dude
tee is a useful command because it allows you to store everything that goes into it as well as displaying it on the screen. Particularly useful for logging the output of scripts.
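To get the same live-view-plus-logfile behavior for a whole script rather than a single command, one common bash-specific trick is to combine exec with tee through process substitution (a sketch; script.log is an illustrative name):
#!/bin/bash
exec > >(tee -a script.log) 2>&1   # duplicate stdout and stderr into script.log
echo "this line appears on screen and in script.log"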

Related

How to capture shell script output

I have a Unix shell script. I have put -x in the shell to see all the execution steps. Now I want to capture these in one log file on a daily basis.
Please see the script below.
#!/bin/ksh -x
logfile=path.log.date
print "copying file" | tee $logfile
scp -i key source destination | tee -a $logfile
exit 0
The first line of a shell script is known as the shebang; it indicates which interpreter should execute the script. Since the line starts with #, the interpreter itself treats it as a comment.
To capture the output, redirect it while running the script:
ksh -x scriptname >> output_file
Note: with -x the shell prints each line of the script as it executes it.
There are two cases: if ksh is your interactive shell, you do the I/O redirection there; if you use some other shell to execute a .ksh script, the redirection is handled by that shell. The following method should work for most shells.
$ cat somescript.ksh
#!/bin/ksh -x
printf "Copy file \n";
printf "Do something else \n";
Run it:
$ ./somescript.ksh 1>some.log 2>&1
some.log will contain,
+ printf 'Copy file \n'
Copy file
+ printf 'Do something else \n'
Do something else
In your case, there is no need to specify the logfile and/or tee inside the script itself. The script would look something like this:
#!/bin/ksh -x
printf "copying file\n"
scp -i key user@server /path/to/file
exit 0
Run it:
$ ./myscript 1>/path/to/logfile 2>&1
1>logfile sends stdout into logfile, and 2>&1 then duplicates stderr onto stdout, so both streams end up in logfile.
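Note that the order of the two redirections matters; a quick sketch with a failing command:
$ ls nosuchfile 1>logfile 2>&1    # the error message lands in logfile
$ ls nosuchfile 2>&1 1>logfile    # the error message stays on the terminal,
                                  # because stderr was duplicated before stdout moved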
I would prefer explicitly redirecting the output (including stderr via 2>&1, because set -x sends its trace to stderr).
This keeps the shebang short, and you don't have to cram the redirection and filename-building into the command line.
#!/bin/ksh
logfile=path.log.date
exec >> $logfile 2>&1 # redirecting all output to logfile (appending)
set -x # switch on debugging
# now start working
echo "print something"

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know if the user used the "std redirection" '1>' and '2>', and therefore adapt the output of my script.
Is it possible with built-in variables?
Thanks.
On Linux and some unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to where your std redirections are pointing to. Using readlink, we can query if they were redirected or not by comparing them to the parent process' file descriptor.
We will, however, not use self but $$, because the command substitution $(readlink /proc/self/fd/1) spawns a subshell, so self would no longer refer to the current bash script but to that subshell.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$#"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
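A quick sketch of that terminal check:
#!/usr/bin/env bash
if [ -t 1 ]; then
    echo "stdout is a terminal"
else
    echo "stdout is redirected or closed" >&2
fi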

How do I automatically save the output of the last command I've run (every time)?

If I wanted to have the output of the last command stored in a file such as ~/.last_command.txt (overwriting output of previous command), how would I go about doing so in bash so that the output goes to both stdout and that file? I imagine it would involve piping to tee ~/.last_command.txt but I don't know what to pipe to that, and I definitely don't want to add that to every command I run manually.
Also, how could I extend this to save the output of the last n commands?
Under bash this seems to have the desired effect.
bind 'RETURN: "|tee ~/.last_command.txt\n"'
You can add it to your bashrc file to make it permanent.
I should point out that it's not perfect: just hitting the enter key, you get:
matt@devpc:$ |tee ~/.last_command.txt
bash: syntax error near unexpected token `|'
So I think it needs a little more work.
This will break programs/features expecting a TTY, but...
exec 4>&1   # keep a copy of the terminal's stdout on fd 4
PROMPT_COMMAND="exec 1>&4; exec > >(mv ~/.last_command{_tmp,}; tee ~/.last_command_tmp)"
Before each prompt this restores stdout to the terminal, rotates the finished capture into ~/.last_command, and starts teeing the next command's output into a fresh temp file.
If it is acceptable to record all output, this can be simplified:
exec > >(tee ~/.commands)
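To make that permanent, you could append it to ~/.bashrc with -a so the log accumulates across sessions (a sketch; ~/.commands is an illustrative path, and the TTY caveat above still applies):
exec > >(tee -a ~/.commands)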
Overwrite for 1 command:
script -c ls ~/.last_command.txt
If you want more than 1 command:
$ script ~/.last_command.txt
$ command1
$ command2
$ command3
$ exit
If you want to save output for a whole login session, append "script" to your .bashrc.
When starting a new session (after login, or after opening the terminal), you can start another "nested" shell, and redirect its output:
<...login...>
% bash | tee -a ~/.bash_output
% ls # this is the nested shell
% exit
% cat ~/.bash_output
% exit
Actually, you don't even have to enter a nested shell every time. You can change your login shell from bash to a wrapper that runs bash | tee -a ~USERNAME/.bash_output; the shell field in /etc/passwd must name a single executable, so the pipeline has to live in a small wrapper script, sketched below.
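A hypothetical wrapper (the path /usr/local/bin/logged-bash is an assumption):
#!/bin/sh
# run an interactive bash and append everything it prints to the log
/bin/bash -i 2>&1 | tee -a "$HOME/.bash_output"
You would then list the wrapper in /etc/shells and set it as your login shell (e.g. with chsh).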

Issue with scheduling in Linux

I scheduled a script using the at scheduler in Linux.
The job ran fine, but the echo statements which I had redirected to a file are nowhere to be found.
The at scheduling command is as follows:
at -f /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1 -v 09:50
Can anyone point out what the issue with the above command is?
I cannot see any echo statements from the script in the log.txt file.
To include shell syntax like I/O redirection, you'll need to either fold it into your script, or pass the input to at via standard input, like so:
at -v 09:50 <<EOF
sh /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1
EOF
If func_test.sh is already executable, you can omit the sh from the beginning of the command; it's there to ensure that you are passing a valid command line to at.
You can also simply ensure that your script itself redirects all its output to a specific log file. As an example,
#!/bin/bash
echo foo
echo bar
becomes
#!/bin/bash
{
echo foo
echo bar
} >> /app/data/log/log.txt 2>&1
Then you can simply run your script with at using
at -f /app/data/scripts/func_test.sh -v 09:50
with no output redirection, because the script itself already redirects all its output to that file.
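If the log file still comes up empty, it can help to inspect what at actually queued; a sketch (the job number 42 is illustrative):
$ atq
42      Thu Jun 13 09:50:00 2013 a user
$ at -c 42
at -c prints the job's saved environment followed by the exact command lines it will run, so a misplaced redirection is easy to spot.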

Unexpected output from cat `bash` command

Can someone please explain this? I ran the commands as shown below
$ cat `bash`
$ ls
$ ctrl+D
and it gave me some unexpected output on the terminal.
NOTE: bash is in backquotes.
Good question! The "unexpected output" is cat printing the contents of all the files ls found in the cwd. A detailed explanation follows:
On your first line:
$ cat `bash`
The bash part actually spawns a new shell from your original shell because bash is enclosed by backquotes (backquotes means to run the enclosed program in this context)
Then when you do:
$ ls
This is actually executed in the newly spawned bash shell. It lists the directory the new shell is in (which should be the same as the original's). In essence, this turns the cat command from the first step into
$ cat file_1 file_2 ... file_x
(basically all of the files in the directory returned by ls). You won't see any results yet, though: the command substitution is still collecting the new shell's stdout, so cat has not run.
Lastly, when you do:
$ ctrl+D
It exits the new bash shell you spawned; the command substitution completes, and cat runs in your old shell with everything the new shell printed to stdout (the filenames from ls) as its arguments, printing the contents of those files.
You can verify what I just said by:
$ cd ~/
$ mkdir temp_test_dir
$ cd temp_test_dir
$ echo "some text for file1" > file1
$ echo "other text for file2" > file2
Now run what you had in your question:
$ cat `bash`
$ ls
$ ctrl+D
And this is what you should see:
some text for file1
other text for file2
in some order, which is just cat printing the contents of the files that ls found.
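You can watch the same substitution in isolation with a simpler command:
$ echo `ls`
file1 file2
Here ls runs first, and its output (the two filenames) becomes the argument list of echo, exactly as it became the argument list of cat above.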
