I'm pulling data from a file (in this case an exim mail log), and it often stores characters as escaped octal sequences of the form \NNN, where each 'N' is an octal digit (0-7). This mainly happens when the subject is written in non-Latin characters (Arabic, for example).
My goal is to find the cleanest way to convert these octal characters to display correctly in my utf-8 enabled terminal, specifically in 'less' as there is the potential for lots of output.
The best approach I have found so far is as follows:
arbitrary_stream | { while read -r temp; do printf %b "$temp\n"; done } | less
This seems to work pretty well, however I would assume that there is some translator tool, or maybe even a flag built into 'less', to handle this. I also found that if you use something like sed to inject a 0 after each \, you can store the result in a variable and then use 'echo -e $data' (sketched below), but this was messier than the previous solution.
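For reference, a rough sketch of that sed/echo -e variant (assuming bash; echo -e only understands the \0NNN form, hence the injected 0):

data=$(sed 's/\\/\\0/g' <<< "$octalvar")   # \342\202\254 -> \0342\0202\0254
echo -e "$data"                            # prints €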
Test case:
octalvar="\342\202\254"
expected output in less:
€
I'm looking for something cleaner, more complete or just better than my above solution in the form of either:
echo $octalvar | do_something | less
or
echo $octalvar | less --some_magic_flag
Any suggestions? Or is my solution about as clean as I can expect?
Conversion in GNU awk (chosen for strtonum). It turned out to be a hassle, so the code is a bit of a mess and could probably be streamlined; feel free to advise:
awk '{
while(match($0,/\\[0-7]{3}/)) { # search for \NNNs (octal digits are 0-7)
o=substr($0,RSTART,RLENGTH) # extract it
sub(/\\/,"0",o) # replace \ with 0 for strtonum
c=sprintf("%c",strtonum(o)) # convert to a character
sub(/\\[0-7]{3}/,c) # replace the \NNN with the char
}
}1' foo > bar
or paste the code between single quotes to a file above_program.awk and run it like awk -f above_program.awk foo > bar. Test file foo:
test 123 \342\202\254
Run it in a non-UTF-8 locale; I used the C locale:
$ locale
...
LC_ALL=C
$ awk -f above_program.awk foo
test 123 €
If you run it in a UTF-8 locale, the conversion still happens, but each octal value is emitted as a separate codepoint rather than as a raw byte, so the output is mangled:
$ locale
...
LC_ALL=en_US.utf8
$ awk -f above_program.awk foo
test 123 â¬
This is my current version:
echo $arbitrary | { IFS=$'\n'; while read -r temp; do printf %b "$temp\n"; done; unset IFS; } | iconv -f utf-8 -t utf-8 -c | less
The text file is like this,
#एक
1के
अंकगणित8IU
अधोरेखाunderscore
$thatऔर
%redएकyellow
$चिह्न
अंडरस्कोर#_
The desired text file should be like,
#
1
8IU
underscore
$that
%redyellow
$
#_
This is what I have tried so far, using awk
awk -F"[अ-ह]*" '{print $1}' filename.txt
And the output that I am getting is,
#
1
$that
%red
$
And using awk -F"[अ-ह]*" '{print $1,$2}' filename.txt I get output like this:
#
1 े
ं
ो
$that
%red yellow
$ ि
ं
Is there any way to solve this in a bash script?
Using perl:
$ perl -CSD -lpe 's/\p{Devanagari}+//g' input.txt
#
1
8IU
underscore
$that
%redyellow
$
#_
-CSD tells perl that standard streams and any opened files are encoded in UTF-8. -p loops over input files printing each line to standard output after executing the script given by -e. If you want to modify the file in place, add the -i option.
The regular expression matches any codepoints assigned to the Devanagari script in the Unicode standard and removes them. Use \P{Devanagari} to do the opposite and remove the non-Devanagari characters.
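For instance, the inverse filter, which keeps only the Devanagari text and strips everything else, would be (same switches, only the character class negated):

$ perl -CSD -lpe 's/\P{Devanagari}+//g' input.txt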
Using awk you can do:
awk '{sub(/[^\x00-\x7F]+/, "")} 1' file
#
1
8IU
underscore
$that
%redyellow
$
#_
See the documentation on bracket expressions: https://www.gnu.org/software/gawk/manual/html_node/Bracket-Expressions.html
[\x00-\x7F] matches all values numerically between zero and 127, which is the defined range of the ASCII character set. Use the complemented character list [^\x00-\x7F] to match any single-byte characters that are not in the ASCII range.
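Note that sub() only removes the first non-ASCII run on a line; if a line can contain several separate runs, use gsub() to remove them all:

awk '{gsub(/[^\x00-\x7F]+/, "")} 1' file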
tr is a very good fit for this task:
LC_ALL=C tr -c -d '[:cntrl:][:graph:]' < input.txt
It sets the POSIX C locale so that only the US-ASCII character set is considered valid.
It then instructs tr to -d delete the -c complement of [:cntrl:][:graph:], i.e. every character that is neither a control character nor a visible (graphic) character. Since the locale is forced to C, all non-US-English characters fall outside these classes and are discarded.
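One caveat: in the C locale [:graph:] does not include the space character, so spaces are deleted too. If the input may contain spaces you want to keep, add a literal space to the set, e.g.:

LC_ALL=C tr -cd '[:cntrl:][:graph:] ' < input.txt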
Here PLUGIN=ABC
$ echo "{\"PluginName\": \"${PLUGIN}\""
""PluginName": "ABC
$ echo "{\"PluginName\":${PLUGIN}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
","Filename":"ABC" , "ErrorString":eployerProps
However, if I change the variable PLUGIN to any other string, it works.
$ echo "{\"PluginName\":\"${PLUGINS}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
{"PluginName":"ABC","Filename":"ABC" , "ErrorString":
I am not able to understand what the reason is. This is bash 4; however, on another server it works fine.
I cannot reproduce your problem. This is what my bash 4.4.23(1) prints:
$ PLUGIN=ABC
$ echo "{\"PluginName\": \"${PLUGIN}\""
{"PluginName": "ABC"
However, if I change the variable PLUGIN to any other string, it works.
Have you noticed that your second command differs from the first one?
echo "{\"PluginName\":${PLUGIN}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
| |
different | \ different
| |
echo "{\"PluginName\":\"${PLUGINS}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
However, you could make your life a lot easier by using printf:
$ PLUGIN=ABC
$ VAR=XYZ
$ printf '{"PluginName": "%s"\n' "$PLUGIN"
{"PluginName": "ABC"
$ printf '{"PluginName":"%s","Filename":"%s","ErrorString":\n' "$PLUGIN" "$VAR"
{"PluginName":"ABC","Filename":"XYZ","ErrorString":
or even better for a general approach:
$ printf '{'; printf '"%s":"%s",' PluginName "$PLUGIN" Filename "$VAR"
{"PluginName":"ABC","Filename":"XYZ",
Here PLUGIN=ABC
No, that would not explain the output you're seeing. It's much more likely that PLUGIN=$'ABC\r' (i.e. A B C followed by a carriage return).
Carriage return moves the cursor back to the beginning of the line when printed to a terminal, which is why your output looks so confusing.
Try echo "$PLUGIN" | cat -v or echo "$PLUGIN" | xxd (or any other hex dump tool) to see what's actually in there.
But this fails only on one specific server.
If PLUGIN is the result of reading a line from a file, then this file is probably in Windows/DOS format on that server (with Carriage Return / Line Feed endings) instead of Unix format (Line Feed only).
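If that is the case, convert the file with dos2unix, or strip the carriage return yourself after reading the line, e.g.:

PLUGIN=${PLUGIN%$'\r'}   # drop a trailing carriage return, if present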
grep returns
Binary file test.log matches
For example
echo "line1 re \x00\r\nline2\r\nline3 re\r\n" > test.log # in zsh
echo -e "line1 re \x00\r\nline2\r\nline3 re\r\n" > test.log # in bash
grep re test.log
I wish the result would show line1 and line3 (two lines in total).
Is it possible to use tr to convert the unprintable data into readable data, so that grep works again?
grep -a
It can't get simpler than that.
One way is to simply treat binary files as text anyway, with grep --text but this may well result in binary information being sent to your terminal. That's not really a good idea if you're running a terminal that interprets the output stream (such as VT/DEC or many others).
Alternatively, you can send your file through tr with the following command:
tr '\000-\011\013-\037\177-\377' '.' <test.log | grep whatever
This changes anything below the space character (except newline) and anything above 126 into a . character, leaving only the printables.
If you want every "illegal" character replaced by a different one, you can use something like the following C program, a classic standard input filter:
#include <stdio.h>

/* Classic standard-input filter: copy newlines and printable ASCII
   through unchanged, and replace every other byte with {{NN}},
   where NN is its hex code. */
int main (void) {
    int ch;
    while ((ch = getchar()) != EOF) {
        if ((ch == '\n') || ((ch >= ' ') && (ch <= '~'))) {
            putchar (ch);            /* printable or newline: pass through */
        } else {
            printf ("{{%02x}}", ch); /* anything else: print its hex code */
        }
    }
    return 0;
}
This will give you {{NN}}, where NN is the hex code for the character. You can simply adjust the printf for whatever style of output you want.
Here is the program in action:
pax$ printf 'Hello,\tBob\nGoodbye, Bob\n' | ./filterProg
Hello,{{09}}Bob
Goodbye, Bob
You could run the data file through cat -v, e.g
$ cat -v tmp/test.log | grep re
line1 re ^@^M
line3 re^M
which could then be further post-processed to remove the junk; this is the approach most analogous to your question about using tr for the task.
-v simply tells cat to display non-printing characters.
You can use "strings" to extract strings from a binary file, for example
strings binary.file | grep foo
You can force grep to look at binary files with:
grep --binary-files=text
You might also want to add -o (--only-matching) so you don't get tons of binary gibberish that will bork your terminal.
Starting with Grep 2.21, binary files are treated differently:
When searching binary data, grep now may treat non-text bytes as line
terminators. This can boost performance significantly.
So what happens now is that with binary data, all non-text bytes
(including newlines) are treated as line terminators. If you want to change this
behavior, you can:
use --text. This will ensure that only newlines are line terminators
use --null-data. This will ensure that only null bytes are line terminators
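As a concrete illustration with the question's test.log (these are the standard GNU grep long options):

grep --text re test.log         # same as grep -a; newlines remain the line terminators
grep --null-data re test.log    # same as grep -z; NUL bytes terminate the lines instead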
grep -a will force grep to search and output from a file that grep thinks is binary.
grep -a re test.log
As James Selvakumar already said, grep -a does the trick. -a or --text forces grep to handle the input stream as text.
See Manpage http://unixhelp.ed.ac.uk/CGI/man-cgi?grep
try
cat test.log | grep -a somestring
You can do
strings test.log | grep -i somestring
This extracts the readable strings from the file and hands them to grep.
Here's what I used in a system that didn't have "strings" command installed
cat yourfilename | tr -cd "[:print:]"
This prints the text and removes unprintable characters in one fell swoop, unlike "cat -v filename" which requires some postprocessing to remove unwanted stuff. Note that some of the binary data may be printable so you'll still get some gibberish between the good stuff. I think strings removes this gibberish too if you can use that.
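Note that [:print:] does not include the newline character, so this also joins the whole file onto one line. If you want to keep the line breaks, add \n to the set:

cat yourfilename | tr -cd '[:print:]\n'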
You can also try Word Extractor tool. Word Extractor can be used with any file in your computer to separate the strings that contain human text / words from binary code (exe applications, DLLs).
In a slight variation on another question, I would like to convert any number in a text file from decimal to hexadecimal.
A number is defined here as a contiguous run of decimal digits.
Example:
$ cat MyFile.txt
Hello,10,Good255Bye-boys01
Must become:
Hello,0A,GoodFFBye-boys01
Valid too:
Hello,A,GoodFFBye-boys1
Methods that allow specifying the field width (to obtain 0A instead of A, as in the first case) are preferred.
I have tried using grep to extract the numbers and piping them to bc to convert them:
( echo "obase=16" ; cat Line.txt |grep -o '[0-9]*') | bc
but this method prints only one converted (hex) number per line and discards the rest of the characters.
Since you're okay with using grep and bc in a pipe, it's clear that you don't want a solution in pure sh, but are happy to use external tools.
perl -pe 's/([0-9]+)/sprintf "%02X", $1/ge' myfile.txt
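Run against the sample file, this should produce the zero-padded form asked for:

$ perl -pe 's/([0-9]+)/sprintf "%02X", $1/ge' MyFile.txt
Hello,0A,GoodFFBye-boys01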
A Python solution (thanks to the suggestion from user 4ae1e1):
$ cat convert.py
#!/usr/bin/env python3
import fileinput
import re
for line in fileinput.input():
    print(re.sub(r"\d+", lambda matchobj: "%X" % int(matchobj.group(0)), line), end="")
Example usage:
cat MyFile.txt | ./convert.py
or:
./convert.py MyFile.txt
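For the sample file this should print the unpadded variant that the question also lists as valid (use "%02X" in the format if you want the zero-padded form):

$ ./convert.py MyFile.txt
Hello,A,GoodFFBye-boys1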
dec2hex command will do the task.