Redirect the output of a shell script to a txt file - shell

I have a script written and I want to include a function in it that silently logs the console output to a .txt file. The printf statements in my shell scripts use colors for certain characters.
A sample:
# Color block
G="\033[32m"
N="\033[0m"
R="\033[31m"
Y="\033[33m"
# MCS Check
mcs=$(cat /home/admin/service-health.txt | grep -i mcs | cut -d ' ' -f 5 | tr . " ")
if [ "$mcs" == "up " ]
then
printf "${Y}MCS${N} Service Status is\t\t |${G}UP${N}\n"
else
printf "${Y}MCS${N} Service Status is\t\t |${R}DOWN${N}\n"
fi
Console output for this will display the color.
This is not mandatory in the .txt logging.
I will then be emailing this .txt to an address using:
sendmail $vdp $eaddr < /home/admin/health-check.txt
I used this block as I want to redirect the output within the script itself:
sudo touch /home/admin/health-check.txt
exec > >(tee -i /home/admin/health-check.txt)
exec 2>&1
But since this is a colored output, I keep getting this in my email:
[33mGSAN[0m Service Status is |[32mUP[0m
[33mMCS[0m Service Status is |[32mUP[0m
[33mTomcat[0m Service Status is |[32mUP[0m
[33mScheduler[0m Service Status is |[32mUP[0m
[33mMaintenance[0m Service Status is |[32mUP[0m
VDP [33mAccess State[0m is |[32mFULL[0m
Any thoughts about stripping colors during redirect? I do not want to use sed to find and replace as this looks tedious.
Thanks.

You can redirect the output using the > operator: printf "mytext" > out.txt will print "mytext" to the file out.txt.
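To strip the escape sequences from the logged copy while keeping them on the terminal, one option (a sketch, assuming GNU sed and bash; strip_colors is a name invented here) is to filter the tee'd stream through a regex that removes ANSI color sequences:

```shell
# Remove ANSI color escape sequences such as \033[33m ... \033[0m.
# A sketch assuming GNU sed, which understands the \x1b escape.
strip_colors() {
    sed 's/\x1b\[[0-9;]*m//g'
}

# Demo: the colored line comes out plain.
printf '\033[33mMCS\033[0m Service Status is\t\t |\033[32mUP\033[0m\n' | strip_colors
```

In the original script, something like `exec > >(tee >(strip_colors > /home/admin/health-check.txt))` would keep color on the console while writing a plain-text file for sendmail.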

Related

Inspect null character from Bash's read command

I am on a system that does not have hexdump. I know there's a null character on STDIN, but I want to show/prove it. I've got Ruby on the system. I've found that I can directly print it like this:
$ printf 'before\000after' | (ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before\x00after"
However, I need to run this inside of a bash script i.e. STDIN is not being directly piped to my script. I have to get it via running read in bash.
When I try to use read to get the stdin characters, it seems to be truncating them and it doesn't work:
$ printf 'before\000after' | (read -r -t 1 -n 1000000; printf "$REPLY" | ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before"
My question is this: How can I get the full/raw output including the null character from read
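For what it's worth, Bash variables cannot hold NUL bytes at all, so read will always drop or stop at them. A sketch of one workaround (assuming bash): use read -d '' so the NUL acts as a delimiter, then capture the bytes on each side of it separately:

```shell
# Bash cannot store a NUL in a variable; read -d '' uses NUL as the
# delimiter, so the chunks around it can be captured one at a time.
printf 'before\000after' | {
    IFS= read -r -d '' part1   # reads up to (not including) the NUL
    IFS= read -r part2         # reads the remainder up to EOF
    printf '%s\n%s\n' "$part1" "$part2"
}
```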

Can you colorize specific lines that are grepped from a file?

I run a weekly CRONTAB that collects hardware info from 40+ remote servers and creates a weekly log file on our report server at the home office. I have a script that I run against this weekly file to output only specific status lines to my display.
#!/bin/sh
# store newest filename to variable
DD_FILE="$(ls -t /home/user/ddinfo/|head -1)"
# List the site name, disk ID (virtual & physical), Status and State of each ID, Failure Prediction for each physical disk, and the site divider
grep -w 'Site\|^ID\|^State\|^Status\|^Failure Predicted\|^##' /home/user/ddinfo/$DD_FILE
echo "/home/user/ddinfo/"$DD_FILE
exit 0
This is a sample output:
Accessing Site: site01
ID : 0
Status : Ok
State : Ready
ID : 0:0:0
Status : Ok
State : Online
Failure Predicted : No
ID : 0:0:1
Status : Ok
State : Online
Failure Predicted : No
################################################
Accessing Site: site02
ID : 0
Status : Ok
State : Ready
ID : 0:0:0
Status : Non-Critical
State : Online
Failure Predicted : Yes
ID : 0:0:1
Status : Ok
State : Online
Failure Predicted : No
################################################
Is there a way, with cat / grep / sed / awk / perl, to process this output so that any lines ending with either Critical or Yes get colorized?
With GNU grep:
grep --color -E ".*Yes$|.*Critical$|$" file
You could try ack, a very nice alternative to grep:
% ack '(Critical|Yes)$' file
# Or to colorize the whole line:
% ack '(.*(Critical|Yes))$' file
See Beyond grep
Or, to see all lines and only colorize specific ones, with Perl's Term::ANSIColor (reading from standard input):
use Term::ANSIColor qw/ colored /;
while (<>) {
    s/(.*)(Critical|Yes)$/colored(["yellow bold"], $1.$2)/e;
    print;
}
To see all lines but have the lines that end in Critical or Yes colorized, try:
awk -v on="$(tput smso)" -v off="$(tput rmso)" '/(Critical|Yes)$/{$0=on $0 off} 1' logfile
This uses tput to create codes suitable for your terminal. For demonstration purposes, I chose the smso/rmso to set and reset the "standout mode." You can use any other feature that tput supports.
Variation
If we want the text in red instead of "standout mode":
awk -v on="$(tput setaf 1)" -v off="$(tput sgr0)" '/(Critical|Yes)$/{$0=on $0 off} 1' logfile
tput setaf 1 is the code to create red. (In tput, red is 1, green is 2, etc.). tput sgr0 is the code to turn off all attributes.
How it works
-v on="$(tput smso)" -v off="$(tput rmso)"
This defines two awk variables, on and off that turn on and turn off whatever color effect we prefer.
/(Critical|Yes)$/{$0=on $0 off}
For any line that ends with Critical or Yes, we add the on code to the front of the line and the off code to the end.
1
This is awk's cryptic shorthand for print-the-line.
You could use Term::ANSIColor module of Perl:
... | perl -pe 'BEGIN { use Term::ANSIColor } /: (Yes|Critical)$/ and $_ = color("red") . $_ . color("reset")'
Thank you for all of your responses. I ended up piping the original grep results to another grep | grep --color=auto '.*\(Yes\|Critical\).*\|$' and got the colorized results I wanted:
grep -i 'site\|^ID\|^State\|^Status\|^Failure Predicted\|^##' /home/user/ddinfo/$DD_FILE | grep --color=auto '.*\(Yes\|Critical\).*\|$'
This is the new sample output, with the matching lines colorized.

write shell script copy one file to number of servers

I have searched Google but cannot find a working example. I want a shell script that copies one file to a number of servers, using a for loop.
You can use two scripts:
1. A server list: a file containing the destination hostnames, one per line.
2. A copy script, which reads the server list and runs scp against each host. It can also accept parameters if your server list differs per application. Below is a sample (it drives scp through expect to answer the password prompt):
Usage()
{
    echo "Usage: $0 [-a application] [-l level]"
    echo "       where application = {a, b, c, d}"
    exit 1
}

SERVER_LIST=a.txt
for HOST in $(grep -v '^#' "$SERVER_LIST" | cut -d: -f2)
do
    /usr/bin/expect <<EOF
spawn /usr/bin/scp FILE user@$HOST:destinationDirectory
expect "*password:*"
send "$PASSWORD\r"
expect eof
EOF
done
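If SSH key authentication can be set up for the target user, expect is unnecessary and the loop reduces to plain scp. A minimal sketch, written as a dry run that only prints the commands it would execute (servers.txt and file.txt are hypothetical names; drop the echo to actually copy):

```shell
# Build a sample server list: one hostname per line, '#' starts a comment.
printf '%s\n' web01 '# database hosts' db01 > servers.txt

# Dry run: print the scp command for each real host.
while IFS= read -r host; do
    case $host in '#'*|'') continue ;; esac   # skip comments and blank lines
    echo scp file.txt "user@$host:/destination/dir/"
done < servers.txt
```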

Obtain output from a bash command in Ruby

I'm trying to obtain the output of a bash command. More precisely, I need to store the number of lines that contains a string in a file:
variable_name = "AAAAAAA"
PATH_TO_SEARCH = "."
COMMAND = "grep -R #{variable_name} #{PATH_TO_SEARCH} | wc -l"
To execute the command I tried both methods:
num_lines = %x[ #{COMMAND} ]
num_lines = `#{COMMAND}`
but the problem is: In "num_lines" I have 1) the number of lines that contain the string (OK!) and 2) output from grep like "grep: /home/file_example.txt: No such file or directory" (NO!).
I would like to store just the first output.
Looks like you may just need to suppress the error messages.
"You can use the -s or --no-messages flag to suppress errors." found from How can I have grep not print out 'No such file or directory' errors?
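To illustrate: grep writes those complaints to stderr, and with -s (--no-messages) they disappear while the match count is unaffected. A sketch from the shell (sample.txt and missing-dir are names invented for the demo); in the Ruby snippet the same fix is adding -s to the grep inside COMMAND:

```shell
# -s silences grep's error messages about nonexistent/unreadable files;
# redirecting with 2>/dev/null would have the same effect.
printf 'AAAAAAA\n' > sample.txt
num_lines=$(grep -Rs AAAAAAA sample.txt missing-dir/ | wc -l)
echo "$num_lines"
```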

Bash add to end of file (>>) if not duplicate line

Normally I use something like this for processes I run on my servers
./runEvilProcess.sh >> ./evilProcess.log
However I'm currently using Doxygen and it produces lots of duplicate output
Example output:
QGDict::hashAsciiKey: Invalid null key
QGDict::hashAsciiKey: Invalid null key
QGDict::hashAsciiKey: Invalid null key
So you end up with a very messy log.
Is there a way to add a line to the log only if it isn't identical to the last line written?
A poor example (pseudocode; I'm not sure how to do this in bash):
$previousLine = ""
$outputLine = getNextLine()
if($previousLine != $outputLine) {
$outputLine >> logfile.log
$previousLine = $outputLine
}
If the process returns duplicate lines in a row, pipe the output of your process through uniq:
$ ./t.sh
one
one
two
two
two
one
one
$ ./t.sh | uniq
one
two
one
If the logs are sent to the standard error stream, you'll need to redirect that too:
$ ./yourprog 2>&1 | uniq >> logfile
(This won't help if the duplicates come from multiple runs of the program - but then you can pipe your log file through uniq when reviewing it.)
Create a filter script (filter.sh):
while IFS= read -r line; do
    if [ "$last" != "$line" ]; then
        printf '%s\n' "$line"
        last=$line
    fi
done
and use it:
./runEvilProcess.sh | sh filter.sh >> evillog
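The same adjacent-duplicate filter can also be written as an awk one-liner, which avoids the per-line shell read loop (a sketch; for plain deduplication it behaves like uniq):

```shell
# Print a line only when it differs from the immediately preceding line.
printf 'one\none\ntwo\ntwo\ntwo\none\n' | awk '$0 != prev { print; prev = $0 }'
```

Used with the log: ./runEvilProcess.sh 2>&1 | awk '$0 != prev { print; prev = $0 }' >> evillog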
