Suppose you are using a Linux/UNIX shell whose default character set is UTF-8:
$ echo $LANG
en_US.UTF-8
You have a text file, emoji.txt, which is encoded in UTF-8:
$ file -i ./emoji.txt
./emoji.txt: text/plain; charset=utf-8
This text file contains some emoji, one of which is followed by a variant form (variation selector) sequence:
$ cat ./emoji.txt
Standard ☁
Variant form ☁️
$ uni2ascii -a B -q ./emoji.txt
Standard \x2601
Variant form \x2601\xFE0F
You want to remove both emoji, including the variant form character (\xFE0F), so the output should be
Standard
Variant form
How would you do this?
Update: this question is not about how to remove the last word in every line. Imagine an emoji2.txt that contains a large text with many emoji characters, some of which are followed by the variant form sequence.
With GNU sed and bash (the $'...' ANSI-C quoting makes the shell expand the \u escapes, and the trailing ? makes the variation selector optional):
sed -E s/$'\u2601\uFE0F?'//g emoji.txt
You can use awk, like this:
$ cat emo.ascii
Standard \x2601
Variant form \x2601\xFE0F
$ ascii2uni -a B emo.ascii
Standard ☁
Variant form ☁️
3 tokens converted # note: this is stderr
$ ascii2uni -a B emo.ascii | awk -F' ' '{NF--}1' | cat -A
3 tokens converted # note: this is stderr
Standard$
Variant form$
NF-- will decrease the field count in awk, which effectively removes the last field. 1 evaluates to true, which makes awk print the modified line.
(cat -A is used here only to show that there aren't any invisible characters left.)
Have awk print all but the last field:
$ awk '/^Standard/ || /^Variant form/ { $(NF)="" }1' emoji.txt
Standard
Variant form
NOTE: This particular solution will leave the field separator (a blank) at the end of the output line; if you want to strip the trailing blank you can pipe to sed, tr, etc., or have awk loop through fields 1 to NF-1 and output via printf, as sketched below.
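For reference, a minimal sketch of that loop variant (it assumes, as in emoji.txt, that the emoji to drop is always the last field):
$ awk '{ for (i = 1; i < NF; i++) printf "%s%s", $i, (i < NF-1 ? OFS : ORS) }' emoji.txt
Standard
Variant form
No trailing blank is left, because the separator is printed only between the kept fields.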
Use the nkf command. nkf -s tries to convert the character encoding to Shift_JIS, which does not support emoji; therefore the emoji and the variation selector are dropped. Finally, convert the result back to UTF-8 with nkf -w. (Note that any other characters Shift_JIS cannot represent will be lost as well.)
$ cat emoji.txt | nkf -s | nkf -w
Standard
Variant form
$ cat emoji.txt | nkf -s | nkf -w | od -tx1c
0000000 53 74 61 6e 64 61 72 64 20 0a 56 61 72 69 61 6e
S t a n d a r d \n V a r i a n
0000020 74 20 66 6f 72 6d 20 0a
t f o r m \n
0000030
I thought Ruby might work, because \p{Emoji} matches emoji. But it leaves the variation selector behind:
$ ruby -nle 'puts $_.gsub!(/\p{Emoji}/,"")' emoji.txt
Standard
Variant form ️
$ ruby -nle 'puts $_.gsub!(/\p{Emoji}/,"")' emoji.txt | od -tx1c
0000000 53 74 61 6e 64 61 72 64 20 0a 56 61 72 69 61 6e
S t a n d a r d \n V a r i a n
0000020 74 20 66 6f 72 6d 20 ef b8 8f 0a
t f o r m 357 270 217 \n
0000033
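A possible workaround (a sketch) is to also match the variation selector explicitly, as an optional suffix of each emoji:
$ ruby -nle 'puts $_.gsub(/\p{Emoji}\uFE0F?/, "")' emoji.txt
Standard
Variant form
(gsub is used here instead of gsub!, since gsub! returns nil on lines without a match, and puts would then print an empty line.)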
Convert the UTF-8 file to ASCII so that the non-ASCII characters become escape tokens, delete the emoji tokens with sed, and convert the result back to UTF-8:
$ uni2ascii -q ./emoji.txt | sed "s/ 0x2601\(0xFE0F\)\?//g" | ascii2uni -q
Standard
Variant form
$
I have a program which prints something that contains null bytes \0 and special characters like \x1f and newlines. For instance:
someprogram
#!/bin/bash
printf "ALICE\0BOB\x1fCHARLIE\n"
Given such a program, I want to read its output in such a way that all those special characters are captured in a shell variable output. So, if I run:
echo $output
because I'm not giving -e, I'd want the output to be:
ALICE\0BOB\x1fCHARLIE\n
How can this be achieved?
My first attempt was:
output=$(someprogram)
But I got this echoed output which doesn't have the special characters:
./myscript.sh: line 2: warning: command substitution: ignored null byte in input
ALICEBOBCHARLIE
I also tried to use read as follows:
output=""
while read -r
do
output="$output$REPLY"
done < <(someprogram)
Then I got rid of the warning but the output is still missing all special characters:
ALICEBOBCHARLIE
So how can I capture the output of someprogram in such a way that I have all the special characters in my resulting string?
EDIT: Note that it is possible to have such strings in bash:
$ x="ALICE\0BOB\x1fCHARLIE\n"
$ echo $x
ALICE\0BOB\x1fCHARLIE\n
So that shouldn't be the problem.
EDIT2: I'll reformulate the question a little bit now that I have an accepted answer and understand things a little better. I just needed to be able to store the output of someprogram in a shell variable in such a way that I can print it to stdout without any changes to any special characters, as if someprogram were piped directly to stdout.
You just can't store a zero byte in a bash variable; it's impossible.
The usual solution is to convert the stream of bytes to hexadecimal, then convert it back each time you want to do something with it.
$ x=$(printf "ALICE\0BOB\x1fCHARLIE\n" | xxd -p)
$ echo "$x"
414c49434500424f421f434841524c49450a
$ <<<"$x" xxd -p -r | hexdump -C
00000000 41 4c 49 43 45 00 42 4f 42 1f 43 48 41 52 4c 49 |ALICE.BOB.CHARLI|
00000010 45 0a |E.|
00000012
You can also write your own serialization and deserialization functions for the purpose.
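A minimal sketch of such a pair, wrapping xxd (the function names ser and deser are made up here):
ser()   { xxd -p | tr -d '\n'; }   # bytes -> hex string
deser() { xxd -p -r; }             # hex string -> bytes
x=$(someprogram | ser)
deser <<<"$x" | hexdump -C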
Another idea is to read the data into an array using the zero byte as a separator (since every other byte value is valid in a variable). This, however, has problems distinguishing a trailing zero byte:
$ readarray -d '' arr < <(printf "ALICE\0BOB\x1fCHARLIE\n")
$ printf "%s\0" "${arr[#]}" | hexdump -C
00000000 41 4c 49 43 45 00 42 4f 42 1f 43 48 41 52 4c 49 |ALICE.BOB.CHARLI|
00000010 45 0a 00 |E..|
# ^^ additional zero byte if input doesn't contain a trailing zero byte
00000013
I am writing a game engine for Bash using the cursor movement feature described here. However, if I echo emojis or other UTF-8 characters that span more than 1 byte, the cursor position seems to get messed up.
For example, the following code is supposed to echo "1🔈3", move the cursor back 3 positions and then echo "abc" in the same place. The result should only be "abc" (ideally). Instead, I see "1abc"
~ $ echo -e "1🔈3\033[3Dabc"
1abc
A similar problem can be illustrated with the carriage return:
~ $ echo -e "1🔈3\rabc"
abc3
Is there any good way of resolving this? I am using the Terminal app on macOS. Is there any portable way of doing this?
Note: not all UTF-8 chars seem to behave this way. Mostly, I have only been able to reproduce this issue with emoji:
~ $ while true; do read -p "Enter emoji: " x; echo $x | hexdump; echo -e "1${x}3\033[3Dabc"; done
Enter emoji: 🔈
0000000 f0 9f 94 88 0a
0000005
1abc
Enter emoji: ♞
0000000 e2 99 9e 0a
0000004
abc
Enter emoji: ☞
0000000 e2 98 9e 0a
0000004
abc
Enter emoji: 😋
0000000 f0 9f 98 8b 0a
0000005
1abc
Enter emoji: 🃘
0000000 f0 9f 83 98 0a
0000005
abc
Enter emoji: 🀖
0000000 f0 9f 80 96 0a
0000005
abc
Enter emoji: 𝕭
0000000 f0 9d 95 ad 0a
0000005
abc
Enter emoji: 🇺🇸
0000000 f0 9f 87 ba f0 9f 87 b8 0a
0000009
1abc
Enter emoji: ✎
0000000 e2 9c 8e 0a
0000004
abc
The problem happens because a 😋 is actually rendered across two columns. On my system, the four emoji and the eight digits are equally long:
😋😋😋😋
12345678
It's expected that a single Wide character will require two Narrow characters to overwrite it.
Treating these emoji as wide is recommended by Unicode TR51-16:
Current practice is for emoji to have a square aspect ratio, deriving from their origin in Japanese. For interoperability, it is recommended that this practice be continued with current and future emoji. They will typically have about the same vertical placement and advance width as CJK ideographs.
Given the recommendation, I would be comfortable simply hard-coding anything in the Emoticons Unicode block as wide. Your other symbols that work, such as 🀖 and ☞, are not in the Emoticons block (they're in Mahjong Tiles and Miscellaneous Symbols, respectively).
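A rough bash sketch of that hard-coding approach (the Emoticons block is U+1F600 through U+1F64F; the function name is made up, and a UTF-8 locale is assumed so that printf's leading-quote trick yields a Unicode code point):
is_emoticon() {
  local cp
  printf -v cp '%d' "'$1"              # numeric code point of the first character
  (( cp >= 0x1F600 && cp <= 0x1F64F ))
}
is_emoticon 😋 && echo wide || echo narrow   # wide
is_emoticon ♞ && echo wide || echo narrow   # narrow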
If you want to determine the width at runtime, you can e.g. ask Python, which helpfully reports their East Asian Width as Full/Wide even though the Unicode tables themselves label it Neutral:
$ python3 -c 'import sys; import unicodedata as u; print(u.east_asian_width(sys.argv[1]))' 😋
W
$ python3 -c 'import sys; import unicodedata as u; print(u.east_asian_width(sys.argv[1]))' ♞
N
🇺🇸 is a bit of a special case since it's composed of two different Regional Indicator Symbols with separate code points, but Python labels each of them as Neutral so if you take that as 1 it'll still add up to 2.
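Building on that, a sketch that sums per-character widths for a whole string, counting F/W as two columns and everything else as one (an approximation that ignores combining marks; with a reasonably recent Python this prints 4 for the example string):
$ python3 -c 'import sys, unicodedata as u; print(sum(2 if u.east_asian_width(c) in ("F", "W") else 1 for c in sys.argv[1]))' "1🔈3"
4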
Try this:
s="1🔈3" ; printf "$s"; sleep 2; printf "\033[$((${#s}+1))Dabc%${#s}s\n" ' '
I've put a delay in between the printfs so it's easier to see what happens. First there's:
1🔈 3
Two seconds later the above is overwritten with:
abc
How it works: we put the Unicode stuff in a string $s. In a UTF-8 locale, ${#s} returns the length of that string in characters (not bytes); since the emoji occupies two columns, $((${#s}+1)) gives the number of columns to move back. Then %${#s}s tells printf how many spaces it needs (plus a few more) to overwrite any leftover chars.
If "a few more" spaces are too many, counting the overwriting string gives a more precise result:
s="1🔈3" t="abc"
printf "${s}"; sleep 2; printf "\033[$((${#s}+1))D$t%$((1+${#s}-${#t}))s\n" ''
Hi, is there any way to get the ASCII values of an alphanumeric string without reading a single character at a time?
For example, if I enter A, the output should be 65.
If I enter Onkar123#, how do I calculate the ASCII values of this string?
I also want the sum of the ASCII values produced by the above string.
Try echo "test" | hexdump -e '16/1 "%02x " "\n"', replacing test with Onkar123# or anything else. (Note that this prints hexadecimal values, not decimal.)
I don't know what kind of output you expect, nor why you care whether the string is processed one character at a time (how would you tell whether a given tool does, and how else could any tool do it anyway?), so I don't know if this is the kind of answer you're looking for, but maybe it will point you in a direction at least:
$ printf '%s' "Onkar123#" | awk -l ordchr -v RS='.{1}' '{print ord(RT)}'
79
110
107
97
114
49
50
51
35
The above uses GNU awk for ord() in the ordchr library.
Based on one of your comments, it sounds like this might be what you're looking for:
$ printf '%s' "Onkar123#" | awk -l ordchr -v RS='.{1}' '{s+=ord(RT)} END{print s+0}'
692
od
There's really no such thing as the ASCII value of a string. There is such a thing as the decimal (or octal, or hexadecimal) value of each ASCII character in a string, though.
Since you don't seem to have hexdump, try the od (octal dump) utility. I don't think I've ever seen a *nix system that didn't have od.
$ echo "Onkar123#" | od -An -t d1
79 110 107 97 114 49 50 51 35 10
Endianness doesn't come into play for single-byte output like this, but od has a --endian argument for wider types.
awk
It's a lot harder in awk. I think you have to build a lookup table, then look up the decimal code for each character in the input. That means you still have to process one character at a time.
# output-decimal-ascii.awk -- write ASCII decimal codes for input
BEGIN {
# 128 for ASCII; 256 for extended ASCII
for (n = 0; n < 128; n++) {
ascii_table[sprintf("%c",n)] = n
}
}
{
split($0, arr, "")
for (i = 1; i <= length(arr); i++) {
printf("%d ", ascii_table[arr[i]])
}
print "\n"
}
$ echo "Onkar123#" | awk -f code/awk/output-decimal-ascii.awk
79 110 107 97 114 49 50 51 35
To sum the numbers use:
echo "test" | od -An -t d1 | xargs | sed "s/ /+/g" | bc
I have a lot of strings of this kind and I want to find a command to convert them to ASCII. I tried echo -e and od, but neither worked.
0xA7.0x9B.0x46.0x8D.0x1E.0x52.0xA7.0x9B.0x7B.0x31.0xD2
This worked for me.
$ echo 54657374696e67203120322033 | xxd -r -p
Testing 1 2 3$
-r tells it to convert hex to ASCII, as opposed to its normal mode of doing the opposite.
-p tells it to use a plain format.
This code will convert the text 0xA7.0x9B.0x46.0x8D.0x1E.0x52.0xA7.0x9B.0x7B.0x31.0xD2 into a stream of 11 bytes with equivalent values. These bytes will be written to standard out.
TESTDATA=$(echo '0xA7.0x9B.0x46.0x8D.0x1E.0x52.0xA7.0x9B.0x7B.0x31.0xD2' | tr '.' ' ')
for c in $TESTDATA; do
echo $c | xxd -r
done
As others have pointed out, this will not result in a printable ASCII string, for the simple reason that the specified bytes are not ASCII. You need to post more information about how you obtained this string for us to help you with that.
How it works: xxd -r translates hexadecimal data to binary (like a reverse hexdump). xxd requires each line to start with the offset of the first byte on the line (run xxd on something and see how each line starts with an index number). In our case we want that number to always be zero, since each execution handles only one line. As luck would have it, our data already has a zero before every byte as part of the 0x notation. The lowercase x is ignored by xxd, so all we have to do is pipe each 0xhh token to xxd and let it do the work.
The tr translates periods to spaces so that the for loop splits the string correctly.
You can use xxd:
$ cat hex.txt
68 65 6c 6c 6f
$ cat hex.txt | xxd -r -p
hello
You can use something like this.
$ cat test_file.txt
54 68 69 73 20 69 73 20 74 65 78 74 20 64 61 74 61 2e 0a 4f 6e 65 20 6d 6f 72 65 20 6c 69 6e 65 20 6f 66 20 74 65 73 74 20 64 61 74 61 2e
$ for c in `cat test_file.txt`; do printf "\x$c"; done;
This is text data.
One more line of test data.
The values you provided can be treated as Unicode code points. When set in an array:
declare -a ARR=(0xA7 0x9B 0x46 0x8D 0x1E 0x52 0xA7 0x9B 0x7B 0x31 0xD2)
the array can be looped over to print the plaintext character for each value:
for ((n=0; n < ${#ARR[*]}; n++)); do echo -e "\u${ARR[$n]//0x/}"; done
The output will yield a few printable characters and some non-printable ones.
For converting hex values to plaintext using the echo command:
echo -e "\x<hex value here>"
And for converting UTF-8 values to plaintext using the echo command:
echo -e "\u<UTF-8 value here>"
And then for converting octal to plaintext using the echo command:
echo -e "\0<octal value here>"
When you have encoding values you aren't familiar with, take the time to check out the ranges in the common encoding schemes to determine what encoding a value belongs to. Then conversion from there is a snap.
The echo -e must have been failing for you because of incorrect escaping.
The following works for me on similar output from your_program with arguments (note the quotes around the command substitution, which keep the result intact):
echo -e "$(your_program with arguments | sed -e 's/0x\(..\)\.\?/\\x\1/g')"
Please note, however, that your original hex string consists of non-printable characters.
Make a script like this:
#!/bin/bash
echo $((0x$1)).$((0x$2)).$((0x$3)).$((0x$4))
Example:
bash converthextoip.sh c0 a8 00 0b
Result:
192.168.0.11
I have a very big file that contains n lines of text (with n < 1000) at the beginning, then an empty line, and then lots of untyped binary data.
I would like to extract the first n lines of text, and then somehow extract the exact offset of the binary data.
Extracting the first lines is simple, but how can I get the offset? bash is not encoding-aware, so just counting up the number of characters is pointless.
grep has an option -b to output the byte offset.
Example:
$ hexdump -C foo
00000000 66 6f 6f 0a 0a 62 61 72 0a |foo..bar.|
00000009
$ grep -b "^$" foo
4:
$ hexdump -s 5 -C foo
00000005 62 61 72 0a |bar.|
00000009
In the last step I used 5 instead of 4 to skip the newline.
Also works with umlauts (äöü) in the file.
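Putting it together (a sketch using GNU grep; -m 1 stops at the first empty line, and tail -c + takes a 1-based position):
offset=$(( $(grep -b -m 1 '^$' foo | cut -d: -f1) + 1 ))   # +1 skips the newline
tail -c +$((offset + 1)) foo > binary_part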
Use grep to find the empty line:
grep -n "^$" your_file | tr -d ':'
Optionally use tail -n 1 if you want the last empty line (that is, if the top part of the file can contain empty lines before the binary stuff starts).
Use head to get the top part of the file:
head -n $num
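Putting the two together (a sketch, taking the first empty line as the separator):
line=$(grep -n "^$" your_file | head -n 1 | tr -d ':')
head -n $((line - 1)) your_file > text_part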
You might want to use tools like hexdump or od to retrieve binary offsets instead of bash.
Perl can tell you where you are in a file:
pos=$( perl -le '
open $fh, "<", $ARGV[0];
$/ = ""; # read the file in "paragraphs"
$first_paragraph = <$fh>;
print tell($fh)
' filename )
Parenthetically, I was attempting to write this as a one-liner:
pos=$( perl -00 -lne 'if ($. == 2) {print tell(___what?___); exit}' filename )
What is the "current filehandle" variable? I couldn't find it in the docs. (It seems to be the ARGV handle: the implicit -n loop reads from ARGV, so tell(ARGV) should work here.)