Sending ANSI-colored text to 3 outputs: screen, file, and file with ANSI codes filtered - shell

My program is sending colored text to a log file:
echo -e "\\033[38;5;2m-->\\033[0m Starting program." | tee $LogFile -a
Resulting in a perfectly colored log line, but I would like to simultaneously create another log file without ANSI codes, because I need to browse this log on Windows (I know there are some ANSI-capable viewers for Windows, but I prefer to browse the log files using Total Commander, which has no decent plugin for that).
So, I need three outputs in my log line:
Some colored --> Starting program. line to screen.
The same colored --> Starting program. line to the logfile with ANSI codes.
The same --> Starting program. line, without ANSI codes, to a second logfile.
The line above works fine thanks to the tee command, which solves points 1 and 2, but I don't know what option to add to also save to another file with the ANSI codes stripped, short of replicating the full line.
Maybe using redirectors, file descriptor, with the help of mkfifo?
My workaround for now is duplicating the output (which becomes a bit awkward):
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee $LogFile -a
echo "--> Starting program." >> $NotANSILogFile

To send the ANSI codes to a log file, ansi.log, and to the screen, while also sending a non-ANSI version to a log file called nonansi.log, use:
echo -e "\\033[38;5;2m-->\\033[0m Starting program." | tee -a ansi.log | tee /dev/tty | sed $'s/\E[^m]*m//g' >>nonansi.log
How it works
tee -a ansi.log
The first tee command appends the ANSI-encoded string to the log file ansi.log.
tee /dev/tty
The second tee command sends the ANSI-encoded string to the screen. (/dev/tty is the device file for the current terminal.)
sed $'s/\E[^m]*m//g' >>nonansi.log
The final command, sed, removes the ANSI sequences and appends the result to nonansi.log.
The sed command is contained in a bash $'...' string so that the escape character can be written simply as \E. The substitute command looks for sequences that start with escape, \E, and end with m, and removes them. The final g tells sed to perform this substitution on every escape sequence on the line, not just the first one.
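To see the substitution in isolation, a minimal sketch using printf instead of echo -e:

```shell
# The two escape sequences are removed; the plain text survives
printf '\033[38;5;2m-->\033[0m Start.\n' | sed $'s/\E[^m]*m//g'
# prints: --> Start.
```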
If you have GNU sed, an alternative is to use GNU's \o notation instead:
sed 's/\o033[^m]*m//g' >>nonansi.log
If you have neither GNU sed nor bash, then you need to see what facilities your sed or your shell provide for the Esc character.
Hiding the details in a shell function
If you need to make multiple log entries, it might be easiest to create a shell function, logger, to hold all the messy details:
logger() { echo -e "$*" | tee -a ansi.log | tee /dev/tty | sed $'s/\E[^m]*m//g' >>nonansi.log; }
With this function defined, then any log entry can be performed by providing the ansi-encoded string as an argument:
logger "\\033[38;5;2m-->\\033[0m Start."

This method works (perl needs to be installed) to send output to the screen (ANSI) and to two files: $LogFile (ANSI) and $LogFile.plain (ANSI-free):
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee >(perl -pe 's/\e\[?.*?[\#-~]//g'>>"$LogFile".plain) $LogFile -a
Details:
The tee command splits the output to both the screen (stdout) and the perl command.
The perl command filters out the ANSI codes.
The output of perl is appended to the plain, non-ANSI log file (named $LogFile.plain).
To send output only to the files (not to the screen), just add a classic >/dev/null:
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee >(perl -pe 's/\e\[?.*?[\#-~]//g'>>"$LogFile".plain) $LogFile -a >/dev/null
Notes:
Used >> instead of >, as expected for an incremental log file.
Used -a with the tee command for the same reason.
I would prefer to use sed instead of perl, but none of the sed examples I have found so far work. If someone knows one, please report it.
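For what it's worth, here is a possible sed-based stand-in for the perl filter. It assumes GNU sed (\x1b is a GNU extension), and the class [0-9;]* only covers SGR color sequences ending in m, not every ANSI escape:

```shell
# GNU sed variant of the perl filter (hypothetical log file name)
LogFile=program.log
echo -e "\033[38;5;2m--> Starting program.\033[0m" | tee >(sed 's/\x1b\[[0-9;]*m//g' >> "$LogFile".plain) -a "$LogFile"
```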

Related

Bash: Tail logs without junk characters

Running tail -f /var/log/* can sometimes show junk/garbage characters and trash the screen with control codes.
What's a good way to filter those out, in order to see a clean output with minimal loss of information?
Pipe to sed, to strip out the ANSI codes (those most likely to trash the console).
Pipe to strings, to only print actual strings, because some files in /var/log/ contain binary data.
tail -f /var/log/* | sed $'s#\e[\[(][[:digit:]]*[[:print:]]##g' | strings
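The effect of that sed expression can be checked on a single sample line, a standalone sketch:

```shell
# The pattern deletes ESC, then '[' or '(', digits, and one final printable byte
printf 'a\033[31mred\033[0mb\n' | sed $'s#\e[\[(][[:digit:]]*[[:print:]]##g'
# prints: aredb
```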
You can add these helper aliases to your shell profile by executing this:
cat >>~/.profile <<EOF
# ANSI codes stripping helpers
alias noansi="sed $'s#\e[\[(][[:digit:]]*[[:print:]]##g'"
alias noansistrings="noansi|strings"
EOF
. ~/.profile
Then for example, you will be able to run:
tail -f /var/log/* | noansistrings

Bash script: write string to file without any output to the terminal, using pipe

Sorry for the title; I couldn't find the proper words to explain my problem.
Here's the code:
wlan_c=$(iwconfig | sed '/^\(w.*$\)/!d;s/ .*//' > ./wifi_iface)
wlan=$(<./wifi_iface)
echo "$wlan"
I get the following output:
lo no wireless extensions.
enp4s0 no wireless extensions.
wlp2s0
The last line is the result of execution the echo "$wlan".
The previous lines come from iwconfig; they never pass through sed.
And the file ./wifi_iface also has the info I need.
Everything works as intended.
So I really want to get rid of that unwanted output before the wlp2s0 line.
How do I manage to do this?
That output must be going to stderr rather than stdout. Redirect it to /dev/null:
iwconfig 2>/dev/null | sed '/^\(w.*$\)/!d;s/ .*//' > ./wifi_iface
There's no need to assign the result to wlan_c. Since sed writes to the file, nothing is written to stdout, so the assignment will always be empty.
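The effect can be reproduced without wireless hardware; here is a toy stand-in for iwconfig that writes to both streams (the output lines are hypothetical):

```shell
# The stderr line is discarded; only the stdout line reaches sed
{ echo "wlp2s0  IEEE 802.11  ESSID:off"; echo "lo no wireless extensions." >&2; } 2>/dev/null | sed '/^\(w.*$\)/!d;s/ .*//'
# prints: wlp2s0
```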

Remove escape sequences automatically while redirecting

Lots of shell tools such as grep and ls can print colored text in the terminal, yet when their output is redirected to a regular file, the escape sequences representing the colors are omitted and only plain text is written to the file. How is that achieved?
Use:
if [ -t 1 ]
to test whether stdout is connected to a terminal. If it is, print the escape sequences; otherwise just print plain text.
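A minimal sketch of that test in a script:

```shell
# Emit color only when stdout is a terminal; plain text otherwise.
# When this snippet is piped or captured, [ -t 1 ] is false.
if [ -t 1 ]; then
  printf '\033[32mOK\033[0m\n'
else
  printf 'OK\n'
fi
```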
Specifically, grep has a command-line switch to adjust this setting:
echo hello | grep ll # "ll" is printed in red
echo hello | grep --color=never ll # "ll" is printed without special colouring
Most if not all tools that do this sort of thing will have a similar switch - check the manpages for other tools.
Another way to do this, for tools that auto-detect whether stdout is connected to a terminal, is to trick them by piping their output through cat:
echo hello | grep ll | cat # "ll" is printed without special colouring
I had the same issue the other day and realized I had the following in my .bashrc:
alias grep='grep --color=always'
I changed it to the following and had no further problems:
alias grep='grep --color=auto'
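The difference between the two settings is easy to demonstrate: --color=always keeps the escape codes even through a pipe, while --color=auto drops them whenever stdout is not a terminal:

```shell
echo hello | grep --color=always ll | cat -v   # escape codes shown as ^[[...
echo hello | grep --color=auto ll | cat        # plain "hello"
```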

wc output differs inside/outside vim

I'm working on a text file that contains normal text with LaTeX-style comments (lines starting with a %). To determine the non-comment word count of the file, I was running this command in Bash:
grep -v "^%" filename | wc -w
which returns about the number of words I would expect. However, if from within vim I run this command:
:r! grep -v "^%" filename | wc -w
It outputs a word count that includes the comments, and I cannot figure out why.
For example, with this file:
%This is a comment.
This is not a comment.
Running the command from outside vim returns 5, but opening the file in vim and running the same command there prints 9.
I also was having issues getting vim to prepend a "%" to the command's output, but if the output is wrong anyways, that issue becomes irrelevant.
The % character is special in vi: it gets replaced by the filename of the current file.
Try this:
:r! grep -v "^\%" filename | wc -w
Same as before, but with the % backslash-escaped. In my testing just now, your example :r! command printed 9 as it did for you, and the escaped version above printed 5.
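For completeness, the shell-side count of 5 can be reproduced with a scratch file:

```shell
# Recreate the two-line example and count non-comment words
tmp=$(mktemp)
printf '%%This is a comment.\nThis is not a comment.\n' > "$tmp"
grep -v "^%" "$tmp" | wc -w    # prints 5
rm -f "$tmp"
```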

Why piping to the same file doesn't work on some platforms?

In cygwin, the following code works fine
$ cat junk
bat
bat
bat
$ cat junk | sort -k1,1 |tr 'b' 'z' > junk
$ cat junk
zat
zat
zat
But in the Linux shell (GNU/Linux), the overwrite doesn't seem to work:
[41] othershell: cat junk
cat
cat
cat
[42] othershell: cat junk |sort -k1,1 |tr 'c' 'z'
zat
zat
zat
[43] othershell: cat junk |sort -k1,1 |tr 'c' 'z' > junk
[44] othershell: cat junk
Both environments run bash.
I am asking because sometimes, after doing text manipulation, this caveat forces me to make a tmp file. I know that in Perl you can give the -i flag to overwrite the original file after some manipulation, so I just want to ask whether there is any foolproof method in a Unix pipeline to overwrite a file that I am not aware of.
Four main points here:
"Useless use of cat." Don't do that.
You're not actually sorting anything with sort. Don't do that.
Your pipeline doesn't say what you think it does. Don't do that.
You're trying to over-write a file in-place while reading from it. Don't do that.
One of the reasons you are getting inconsistent behavior is that you are piping to a process that has redirection, rather than redirecting the output of the pipeline as a whole. The difference is subtle, but important.
What you want is to create a compound command with Command Grouping, so that you can redirect the input and output of the whole pipeline. In your case, this should work properly:
{ sort -k1,1 | tr 'c' 'z'; } < junk > sorted_junk
Please note that without anything to sort, you might as well skip the sort command too. Then your command can be run without the need for command grouping:
tr 'c' 'z' < junk > sorted_junk
Keep redirections and pipelines as simple as possible. It makes debugging your scripts much easier.
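A self-contained run of the grouped form, with sample data:

```shell
# The whole group reads from junk and writes to sorted_junk
printf 'cat\ncat\ncat\n' > junk
{ sort -k1,1 | tr 'c' 'z'; } < junk > sorted_junk
cat sorted_junk    # zat on each line; junk itself is untouched
```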
However, if you still want to abuse the pipeline for some reason, you could use the sponge utility from the moreutils package. The man page says:
sponge reads standard input and writes it out to the specified
file. Unlike a shell redirect, sponge soaks up all its input before
opening the output file. This allows constructing pipelines that read
from and write to the same file.
So, your original command line can be re-written like this:
cat junk | sort -k1,1 | tr 'c' 'z' | sponge junk
and since junk will not be overwritten until sponge receives EOF from the pipeline, you will get the results you were expecting.
In general this can be expected to break. The processes in a pipeline all start up in parallel, so the > junk at the end of the line will usually truncate your input file before the process at the head of the pipeline has finished (or even started) reading from it.
Even if bash under Cygwin lets you get away with this, you shouldn't rely on it. The general solution is to redirect to a temporary file and then rename it over the original when the pipeline is complete.
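That temp-file approach can be sketched like this (sample data; the mktemp name is arbitrary):

```shell
# Build the result in a temp file, then rename it over the original
# only after the pipeline has finished reading
printf 'cat\ncat\ncat\n' > junk          # sample input
tmp=$(mktemp)
sort -k1,1 < junk | tr 'c' 'z' > "$tmp" && mv "$tmp" junk
cat junk                                  # zat on each line
```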
If you want to edit the file, you can just use the editor:
ex junk << EOF
%!(sort -k1,1 |tr 'b' 'z')
x
EOF
Overwriting the same file in a pipeline is not advised, because if you make a mistake you can't get the data back (unless you have a backup or the file is under version control).
This happens because input and output in a pipeline are buffered (which can give the impression that it works), but the commands actually run in parallel. Different platforms buffer output differently (based on settings), so on some you end up with an empty file (because the output file is created, and truncated, at the start), on others with a half-finished file.
The solution is to use some method where the file is only overwritten on EOF, after the input has been fully buffered and processed.
This can be achieved by:
Use a utility which soaks up all its input before opening the output file.
This can be done with sponge (roughly the opposite of unbuffer from the expect package).
Avoid using I/O redirection syntax (which can create the empty file before starting the command).
For example, using tee (which buffers its standard streams):
cat junk | sort | tee junk
This would only work together with sort, because sort needs to read all of its input before it can write any output. So if your command doesn't use sort, add one.
Another tool which can be used is stdbuf which modifies buffering operations for its standard streams where you can specify the buffer size.
Use a text processor which can edit files in-place (such as sed or ex).
Example:
$ ex -s +'%!sort -k1' -cxa myfile.txt
$ sed -i '' s/foo/bar/g myfile.txt
Using the following simple script, you can make it work like you want to:
$ cat junk | sort -k1,1 |tr 'b' 'z' | overwrite_file.sh junk
overwrite_file.sh
#!/usr/bin/env bash
OUT=$(cat -)
FILENAME="$*"
echo "$OUT" | tee "$FILENAME"
Note that if you don't want the updated file to be sent to stdout as well, you can use this approach instead:
overwrite_file_no_output.sh
#!/usr/bin/env bash
OUT=$(cat -)
FILENAME="$*"
echo "$OUT" > "$FILENAME"
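The same buffering idea also works inline, without a helper script: a command substitution drains the whole pipeline before the final redirect truncates the file. Note that $(...) strips trailing newlines, which printf '%s\n' puts back as a single one:

```shell
printf 'bat\nbat\n' > junk                # sample input
out=$(sort -k1,1 < junk | tr 'b' 'z')     # whole pipeline runs first
printf '%s\n' "$out" > junk               # only now is junk truncated
cat junk                                  # zat on each line
```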