sending echo and error to both terminal and file log - bash

I am trying to modify a shell script someone created for Unix. This script mostly runs on backend servers with no human interaction; however, I needed to make another version that lets users input information, so it is just a modification of the old script to take user input. The biggest issue I am running into is getting both the error output and the echos saved in a log file. The script has a lot of them, and I want those shown on the terminal as well as sent to the specified log file, to be looked into later.
What I have is this:
exec 1> ${LOG} 2>&1
This line pretty much sends everything to the log file. That is all good, but I also have people entering information into the script, and everything is being sent to the log file, including the echo needed for the prompt. This line sits at the beginning of the script. After reading more about stderr and stdout, I tried:
exec 2>&1 1>>${LOG}
exec 1 | tee ${LOG}
But I only get an error when running it: "./bash_pam.sh: line 39: exec: 1: not found"
I have gone over sites such as this one to solve the issue, but I do not understand why it does not print to both. However I insert it, it either sends everything to the log location and nothing to the terminal, or it sends everything to the terminal but nothing is preserved in the log.
EDIT: Some of the solutions to this have mentioned that certain fixes will work in bash, but not in /bin/sh.

If you would like all output to be printed to the console while also being written to logfile.txt, you would run your script with this command:
bash your_script.sh 2>&1 | tee -a logfile.txt
Or calling it within the file:
<bash_command> 2>&1 | tee -a logfile.txt
The -a option makes tee append to logfile.txt; drop it if you want the log overwritten on each run.
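If the duplication has to happen inside the script itself, so that later commands and prompts are covered automatically, one common approach is to redirect through process substitution near the top of the script. A minimal sketch, assuming bash (this will not work in plain /bin/sh) and a placeholder LOG path:
#!/bin/bash
LOG=/tmp/run.log                      # hypothetical log path
exec > >(tee -a "$LOG") 2>&1          # copy stdout and stderr to both terminal and log
echo "This line shows on screen and lands in $LOG"
read -p "Enter a value: " ANSWER      # the prompt still appears, since tee writes to the terminal
echo "You entered: $ANSWER"
Because stdin is left untouched, read still takes input from the keyboard; only the output streams are duplicated.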

Related

Email the output of a shell script

I have a shell script (script.sh) in which I run a simple python web scraper
python3 script.py
As I run this shell script in a cron job, I want to be notified by email if something goes wrong. I found this code on stackoverflow, which I put at the bottom of my .sh script:
./script.sh 2>&1 | tee output.txt | mail -s "Script log" email@address.com
The problem is that the script now seems to be looping; where it should only take a few seconds, it now takes a few minutes to run, and I receive 10-20 emails in my mailbox. The content of these emails differs; most of the time the emails are empty, but sometimes they contain messages such as:
./script.sh: 4: ./script.sh: Cannot fork
or:
mail: Null message body; hope that's ok
I'm not sure what goes wrong here. What can I do to fix this?
The script runs, runs the Python script, then calls itself again, and so on until it runs out of resources and fails to fork.
The .sh should contain:
python3 script.py 2>&1 | tee output.txt | mail -s "Script log" email@address.com
This assumes you actually want to create output.txt in whatever directory the script runs from... otherwise you could leave that part out entirely.
Alternatively you could just configure your MAILTO in your crontab (see https://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/)
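As a hedged illustration of that alternative, cron itself can mail any output a job produces; the address and schedule below are placeholders:
MAILTO=email@address.com
# m h dom mon dow  command
0 6 * * * /home/user/script.sh
With MAILTO set, there is no need to pipe to mail inside the script at all; cron sends one message per run containing whatever the job wrote to stdout/stderr.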

shell script - redirection makes some information lost

I have a shell script that can enable ble device scan with the following command
timeout 10s hcitool lescan
By executing this script (say ble_scan), I can see the nearby devices shown on the terminal.
However, when I redirect it to the file and terminal
./ble_scan | tee test.log
I can't see the nearby devices on the screen anymore, nor in the log file.
./ble_scan 2>&1 | tee test.log
The above redirection doesn't help either. Did I go wrong anywhere here?
If the command behaves differently when its output is not a terminal, you can run it within script, which records everything displayed on the terminal.
script test.log
#=> Script started, output file is test.log
./ble_scan
# lots of output here
exit
#=> Script done, output file is test.log
Note that the file will include terminal-specific characters like carriage returns not normally captured in output redirects.
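On Linux (util-linux), script can also run the command directly via -c instead of opening an interactive session; note that BSD/macOS script takes the command as a trailing argument instead:
script -c ./ble_scan test.log
# runs ble_scan under a pseudo-terminal, so it still thinks it is writing to a screen,
# and records everything to test.log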

Unix output redirection

I have a script which prompts user to select options like 'y' or 'n'.
If 'y' is selected, the script proceeds with further execution and if 'n' is selected then it stops.
I want the output of this script to be redirected to a log file, so I used the command below:
./script stop >> script_RUN.log 2>&1
The problem is that the script starts running but never shows the prompt asking for 'y' or 'n'; the prompt is written to script_RUN.log instead.
How can I make the script to prompt user for options and re-direct the further execution to script_RUN.log?
You can try using the tee command instead.
./script stop | tee script_RUN.log
NOTE:
Only the program's standard output will be saved; stderr and the user's typed input are not captured.
EDIT:
If you don't want to see the output on the console at all, just redirect tee's output to /dev/null.
for example:
./script stop | tee script_RUN.log > /dev/null
The line above writes the output to the log file but does NOT print it to the console.
This works the way it has to, really: you are redirecting stdout and stderr from the very start. Instead, you should redirect inside the script, after the prompt. I think this would be helpful for you:
redirect COPY of stdout to log file from within bash script itself
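Another option, sketched here rather than taken from the linked answer: keep the caller's redirection exactly as it is, but have the script talk to the controlling terminal directly for the prompt, so the question and the answer bypass the redirected streams:
#!/bin/sh
printf 'Continue? [y/n] ' > /dev/tty   # prompt goes straight to the terminal
read answer < /dev/tty                 # answer is read from the terminal, not stdin
if [ "$answer" != "y" ]; then
    exit 1
fi
echo "Proceeding..."                   # this still ends up in script_RUN.log
This works even under ./script stop >> script_RUN.log 2>&1, because /dev/tty always refers to the user's terminal regardless of where stdout and stderr point.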

shell script: write stderr & stdout to file

I know this has been asked many times, but I can't find a suitable answer for my case.
I run a backup script using rsync from cron and would like to see all output, errors or not, from all of the script's commands. The redirection must be written inside the script itself, and I do not want to see the output in my shell.
I have been trying with no success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding, at the beginning of the script, all kinds of exec | tee $BKLOG variants, with &> or 2>&1 at various parts of the command line, but they all failed. I either get an empty log file or an incomplete one. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for your help. My shell is zsh, so any solution in zsh is welcome.
To redirect all of stdout/stderr to a file, place these lines at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
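Put together, a minimal sketch of such a backup script might look like the following; the timestamp format and rsync arguments are placeholders, and &> works in both bash and zsh:
#!/bin/zsh
now=$(date +%Y-%m-%d_%H%M)        # hypothetical timestamp format
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"                  # from this line on, stdout and stderr go to the log
echo "Backup started: $(date)"
rsync -av /data/ /mnt/backup/     # placeholder source and destination
echo "Backup finished: $(date)"
Everything after the exec line, including rsync's progress output and any errors, ends up in $BKLOG.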

Bash: How to redirect stdin, stdout and stderr to a log file for an install log

I currently have a bunch of installer scripts which log stderr/stdout to a log file, which works well, but I also need to redirect stdin (the user's responses) to the same log file. The install scripts sometimes call functions in a shared library (an include), which may also read user input. I thought about adding a custom read function, but that would require altering the shared library, so I wondered if there's a way to do this from the calling script.
At the moment the scripts are similar to this:
#!/usr/bin/bash
. ./libInstall
INSTALL_LOG="./install.log"
( (
echo "INFO: Installing..."
# Run some arbitrary commands...
# Read some input...
read ANSWER1
read ANSWER2
# Call function in libInstall which will prompt the user...
funcWhichAsksAQuestion ANSWER3
echo "INFO: Installation Complete"
) 2>&1 ) | tee -a "${INSTALL_LOG}"
If I change "( (" to reflect the line below I can tee off stdin to the log file:
cat - 2> /dev/null | tee -a ${INSTALL_LOG} | ( (
This works but requires 2 carriage returns once the script ends, presumably because the pipe is broken.
It's almost there, but I'd like it to work without having to press Enter twice at the end to get back to the shell prompt.
These scripts have to be fairly portable to work on RHEL >=5, AIX >=5.1, Solaris >=9 with the lowest bash version being v2.05 I believe.
Any ideas how I can achieve this?
Thanks
Why not just add printf '\n\n' (plain echo "\n\n" won't expand the escapes in bash) after your "Installation Complete" line? Granted, you'll have two extra lines in your log file, but those seem relatively harmless.
I believe you have to press Return twice because of how tee is implemented: it "uses" one return by itself, and the others come from the read calls (well, one read, one funcWhichAsksAQuestion).
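One way to sidestep teeing stdin altogether, sketched under the same structure as the question's script: after each read, echo the answer back to stdout, so the responses reach the log through the tee that is already in place:
read ANSWER1
echo "ANSWER1=${ANSWER1}"    # the echoed copy flows through the existing tee into the log
read ANSWER2
echo "ANSWER2=${ANSWER2}"
This means touching every read site (including funcWhichAsksAQuestion), but it avoids piping stdin entirely, and with it the trailing-Enter problem.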
