How to convert actual Unicode to \u0123 - bash

I want to turn Unicode text into pure ASCII using escape sequences.
Input: Ɏɇ衳 should produce the output "\u024E\u0247\u8873"
Basically the opposite of this.
$ echo -e "\u024E\u0247\u8873"
Ɏɇ衳
I want the encoding to stay UTF-8; all I'm doing is changing forms.
I've tried:
iconv -f utf8 -t utf8 $file
iconv -f utf8 -t utf16 $file

The codes you mention (024E, 0247, ...) are called Unicode code points and are independent of UTF-8 or UTF-16.
If Perl is an option, you can retrieve the codes with:
perl -C -ne 'map {printf "\\u%04X", ord} (/./g)' <<< "Ɏɇ衳"; echo
which outputs:
\u024E\u0247\u8873
Explanation
The perl code above is mostly equivalent to:
#!/usr/bin/perl
use utf8;
$str = "Ɏɇ衳";
foreach $chr ($str =~ /./g) {
printf "\\u%04X", ord($chr);
}
print "\n";
use utf8 specifies that the script, and hence the embedded string, is encoded in UTF-8.
($str =~ /./g) breaks the string into an array of characters.
foreach iterates over the array of characters.
ord returns the code point of the given character.
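If you'd rather stay in pure bash, a rough equivalent is possible, because bash's printf reports a character's code point when the argument starts with a single quote. This is only a sketch (the helper name to_escapes is made up here), and it assumes a UTF-8 locale so that ${s:i:1} yields whole characters rather than bytes:
to_escapes() {
    local s=$1 i ch
    for (( i = 0; i < ${#s}; i++ )); do
        ch=${s:i:1}                # one character at a time (UTF-8 locale assumed)
        printf '\\u%04X' "'$ch"    # "'c" makes printf print the code point of c
    done
    echo
}
to_escapes "Ɏɇ衳"    # should print \u024E\u0247\u8873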
EDIT
If you want to auto-scale the number of digits to handle out-of-BMP characters, try instead:
#!/usr/bin/perl
use utf8;
$str = "Ɏɇ衳";
foreach $chr ($str =~ /./g) {
$n = ord($chr);
$d = $n > 0xffff ? 8 : 4;
printf "\\u%0${d}X", $n;
}

If you have the text in a file, you can use iconv to change its encoding:
iconv -f $input_encoding -t $output_encoding $file
Check "man iconv" for more details.

Bash: decode string with url escaped hex codes [duplicate]

I'm looking for a way to turn this:
hello &lt; world
to this:
hello < world
I could use sed, but how can this be accomplished without using cryptic regex?
Try recode (archived page; GitHub mirror; Debian page):
$ echo '&lt;' | recode html..ascii
<
Install on Linux and similar Unix-y systems:
$ sudo apt-get install recode
Install on Mac OS using:
$ brew install recode
With perl:
cat foo.html | perl -MHTML::Entities -pe 'decode_entities($_);'
With php from the command line:
cat foo.html | php -r 'while(($line=fgets(STDIN)) !== FALSE) echo html_entity_decode($line, ENT_QUOTES|ENT_HTML401);'
An alternative is to pipe through a web browser -- such as:
echo '&#33;' | w3m -dump -T text/html
!
This worked great for me in cygwin, where downloading and installing distributions are difficult.
This answer was found here
Using xmlstarlet:
echo 'hello &lt; world' | xmlstarlet unesc
A python 3.2+ version:
cat foo.html | python3 -c 'import html, sys; [print(html.unescape(l), end="") for l in sys.stdin]'
This answer is based on: Short way to escape HTML in Bash?, which works fine for grabbing answers (using wget) on Stack Exchange and converting HTML to regular ASCII characters:
sed 's/&nbsp;/ /g; s/&amp;/\&/g; s/&lt;/\</g; s/&gt;/\>/g; s/&quot;/\"/g; s/&#39;/\'"'"'/g; s/&ldquo;/\"/g; s/&rdquo;/\"/g;'
Edit 1: April 7, 2017 - Added left double quote and right double quote conversion. This is part of a bash script that web-scrapes SE answers and compares them to local code files here: Ask Ubuntu -
Code Version Control between local files and Ask Ubuntu answers
Edit June 26, 2017
Using sed was taking ~3 seconds to convert HTML to ASCII on a 1K line file from Ask Ubuntu / Stack Exchange. As such I was forced to use Bash built-in search and replace for ~1 second response time.
Here's the function:
LineOut="" # Make global
HTMLtoText () {
LineOut=$1 # Parm 1= Input line
# Replace external command: Line=$(sed 's/&amp;/\&/g; s/&lt;/\</g;
# s/&gt;/\>/g; s/&quot;/\"/g; s/&#39;/\'"'"'/g; s/&ldquo;/\"/g;
# s/&rdquo;/\"/g;' <<< "$Line") -- With faster builtin commands.
LineOut="${LineOut//&nbsp;/ }"
LineOut="${LineOut//&amp;/&}"
LineOut="${LineOut//&lt;/<}"
LineOut="${LineOut//&gt;/>}"
LineOut="${LineOut//&quot;/'"'}"
LineOut="${LineOut//&#39;/"'"}"
LineOut="${LineOut//&ldquo;/'"'}" # TODO: ASCII/ISO for opening quote
LineOut="${LineOut//&rdquo;/'"'}" # TODO: ASCII/ISO for closing quote
} # HTMLtoText ()
On macOS, you can use the built-in command textutil (which is a handy utility in general):
echo '👋 hello &lt; world 🌐' | textutil -convert txt -format html -stdin -stdout
outputs:
👋 hello < world 🌐
To support the unescaping of all HTML entities only with sed substitutions would require too long a list of commands to be practical, because every Unicode code point has at least two corresponding HTML entities.
But it can be done using only sed, grep, the Bourne shell and basic UNIX utilities (the GNU coreutils or equivalent):
#!/bin/sh
htmlEscDec2Hex() {
file=$1
[ ! -r "$file" ] && file=$(mktemp) && cat >"$file"
printf -- \
"$(sed 's/\\/\\\\/g;s/%/%%/g;s/&#[0-9]\{1,10\};/\&#x%x;/g' "$file")\n" \
$(grep -o '&#[0-9]\{1,10\};' "$file" | tr -d '&#;')
[ x"$1" != x"$file" ] && rm -f -- "$file"
}
htmlHexUnescape() {
printf -- "$(
sed 's/\\/\\\\/g;s/%/%%/g
;s/&#x\([0-9a-fA-F]\{1,8\}\);/\&#x0000000\1;/g
;s/&#x0*\([0-9a-fA-F]\{4\}\);/\\u\1/g
;s/&#x0*\([0-9a-fA-F]\{8\}\);/\\U\1/g' )\n"
}
htmlEscDec2Hex "$1" | htmlHexUnescape \
| sed -f named_entities.sed
Note, however, that a printf implementation supporting \uHHHH and \UHHHHHHHH sequences is required, such as the GNU utility’s. To test, check for example that printf "\u00A7\n" prints §. To call the utility instead of the shell built-in, replace the occurrences of printf with env printf.
This script uses an additional file, named_entities.sed, in order to support the named entities. It can be generated from the specification using the following HTML page:
<!DOCTYPE html>
<head><meta charset="utf-8" /></head>
<body>
<p id="sed-script"></p>
<script type="text/javascript">
const referenceURL = 'https://html.spec.whatwg.org/entities.json';
function writeln(element, text) {
element.appendChild( document.createTextNode(text) );
element.appendChild( document.createElement("br") );
}
(async function(container) {
const json = await (await fetch(referenceURL)).json();
container.innerHTML = "";
writeln(container, "#!/usr/bin/sed -f");
const addLast = [];
for (const name in json) {
const characters = json[name].characters
.replace("\\", "\\\\")
.replace("/", "\\/");
const command = "s/" + name + "/" + characters + "/g";
if ( name.endsWith(";") ) {
writeln(container, command);
} else {
addLast.push(command);
}
}
for (const command of addLast) { writeln(container, command); }
})( document.getElementById("sed-script") );
</script>
</body></html>
Simply open it in a modern browser, and save the resulting page as text as named_entities.sed. This sed script can also be used alone if only named entities are required; in this case it is convenient to give it executable permission so that it can be called directly.
Now the above shell script can be used as ./html_unescape.sh foo.html, or inside a pipeline reading from standard input.
For example, if for some reason it is needed to process the data by chunks (it might be the case if printf is not a shell built-in and the data to process is large), one could use it as:
nLines=20
seq 1 $nLines $(grep -c $ "$inputFile") | while read n
do sed -n "$n,$((n+nLines-1))p" "$inputFile" | ./html_unescape.sh
done
Explanation of the script follows.
There are three types of escape sequences that need to be supported:
&#D; where D is the decimal value of the escaped character’s Unicode code point;
&#xH; where H is the hexadecimal value of the escaped character’s Unicode code point;
&N; where N is the name of one of the named entities for the escaped character.
The &N; escapes are supported by the generated named_entities.sed script which simply performs the list of substitutions.
The central piece of this method for supporting the code point escapes is the printf utility, which is able to:
print numbers in hexadecimal format, and
print characters from their code point’s hexadecimal value (using the escapes \uHHHH or \UHHHHHHHH).
The first feature, with some help from sed and grep, is used to reduce the &#D; escapes into &#xH; escapes. The shell function htmlEscDec2Hex does that.
The function htmlHexUnescape uses sed to transform the &#xH; escapes into printf’s \u/\U escapes, then uses the second feature to print the unescaped characters.
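To see the two stages in isolation, here is a hypothetical trace, assuming the two functions have been sourced into the current shell and printf supports \u:
$ echo '&#167;' | htmlEscDec2Hex     # decimal escape reduced to hex form
&#xa7;
$ echo '&#xa7;' | htmlHexUnescape    # hex escape rendered via printf's \u
§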
I like the Perl answer given in https://stackoverflow.com/a/13161719/1506477:
cat foo.html | perl -MHTML::Entities -pe 'decode_entities($_);'
But it produced an unequal number of lines on plain text files (and I don't know Perl well enough to debug it).
I like the python answer given in https://stackoverflow.com/a/42672936/1506477 --
python3 -c 'import html, sys; [print(html.unescape(l), end="") for l in sys.stdin]'
but it creates a list [ ... for l in sys.stdin] in memory, which is prohibitive for large files.
Here is another easy pythonic way without buffering in memory: using awkg.
$ echo 'hello < : " world' | \
awkg -b 'from html import unescape' 'print(unescape(R0))'
hello < : " world
awkg is a python based awk-like line processor. You may install it using pip https://pypi.org/project/awkg/:
pip install awkg
-b is awk's BEGIN{} block, which runs once at the beginning; here we just do from html import unescape.
Each line record is in the R0 variable, which we pass to:
print(unescape(R0))
Disclaimer:
I am the maintainer of awkg
I have created a sed script based on the list of entities, so it should handle most of them.
sed -f htmlentities.sed < file.html
My original answer got some comments that recode does not work for UTF-8-encoded HTML files. This is correct: recode supports only HTML 4. The encoding HTML is an alias for HTML_4.0:
$ recode -l | grep -iw html
HTML-i18n 2070 RFC2070
HTML_4.0 h h4 HTML
The default encoding for HTML 4 is Latin-1. This changed in HTML 5: the default encoding for HTML 5 is UTF-8. This is the reason why recode does not work for HTML 5 files.
HTML 5 defines the list of entities here:
https://html.spec.whatwg.org/multipage/named-characters.html
The definition includes a machine readable specification in JSON format:
https://html.spec.whatwg.org/entities.json
The JSON file can be used to perform a simple text replacement. The following example is a self modifying Perl script, which caches the JSON specification in its DATA chunk.
Note: For some obscure compatibility reasons, the specification allows entities without a terminating semicolon. Because of that, the entities are sorted by length in reverse order, to make sure that the longer entities are replaced first and do not get destroyed by their semicolon-less prefixes (for example, &not must not be applied before &notin;).
#! /usr/bin/perl
use utf8;
use strict;
use warnings;
use open qw(:std :utf8);
use LWP::Simple;
use JSON::Parse qw(parse_json);
my $entities;
INIT {
if (eof DATA) {
my $data = tell DATA;
open DATA, '+<', $0;
seek DATA, $data, 0;
my $entities_json = get 'https://html.spec.whatwg.org/entities.json';
print DATA $entities_json;
truncate DATA, tell DATA;
close DATA;
$entities = parse_json ($entities_json);
} else {
local $/ = undef;
$entities = parse_json (<DATA>);
}
}
local $/ = undef;
my $html = <>;
for my $entity (sort { length $b <=> length $a } keys %$entities) {
my $characters = $entities->{$entity}->{characters};
$html =~ s/$entity/$characters/g;
}
print $html;
__DATA__
Example usage:
$ echo '&#x1F60A; &amp; ٱلْعَرَبِيَّة' | ./html5-to-utf8.pl
😊 & ٱلْعَرَبِيَّة
With Xidel:
echo 'hello < : " world' | xidel -s - -e 'parse-html($raw)'
hello < : " world

How to convert between the unicode forms: string, name, number

I have been using Unicode more often lately and wondered if there is a command-line tool to convert Unicode between its forms.
It would be nice to be able to say:
uni_convert "☃" --string
and know that the string is defined in Unicode as "SNOWMAN".
Perl's Unicode-Tussle distribution comes with the useful uniprops.
$ uniprops '☃'
U+2603 ‹☃› \N{SNOWMAN}
...
$ uniprops 'U+2603'
U+2603 ‹☃› \N{SNOWMAN}
...
$ uniprops 'SNOWMAN'
U+2603 ‹☃› \N{SNOWMAN}
...
If you're writing code, you'll want charnames.
Want    Have    Code
$code   $char   ord($char)
$code   $name   charnames::vianame($name)
$char   $code   chr($code)
$char   $name   chr(charnames::vianame($name))
$name   $code   charnames::viacode($code)
$name   $char   charnames::viacode(ord($char))
vianame accepts official aliases (e.g. LF for LINE FEED). You'll need to parse U+ notation yourself if you wish to accept it. ($code = hex(s/^U\+//r);)
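For one-off lookups from the shell, one-liners along these lines should work (the -CA and -CO switches just tell perl that the arguments and stdout are UTF-8):
$ perl -CA -Mcharnames=:full -E 'say charnames::viacode(ord shift)' '☃'
SNOWMAN
$ perl -Mcharnames=:full -E 'printf "U+%04X\n", charnames::vianame(shift)' 'SNOWMAN'
U+2603
$ perl -CO -Mcharnames=:full -E 'say chr charnames::vianame(shift)' 'SNOWMAN'
☃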
Example:
use strict;
use warnings;
use feature qw( say );
use experimental qw( regex_sets ); # Safe. Optional since 5.36.
use utf8; # Source encoded using UTF-8.
use open ":std", ":encoding(UTF-8)"; # Terminal provides/expects UTF-8.
use charnames qw( :full );
use Encode qw( decode_utf8 );
@ARGV == 1
or die("usage\n");
my $s = decode_utf8($ARGV[0]);
for my $cp ( unpack "W*", $s ) {
my $ch = chr($cp);
if ( $ch =~ /(?[ \p{Print} - \p{Mark} ])/ ) { # Not sure if good enough.
printf "‹%s› ", $ch;
} else {
print "--- ";
}
printf "U+%X ", $cp;
say charnames::viacode($cp);
}
$ uni_id ☃
‹☃› U+2603 SNOWMAN
$ uni_id çà
‹ç› U+E7 LATIN SMALL LETTER C WITH CEDILLA
‹à› U+E0 LATIN SMALL LETTER A WITH GRAVE
Other resources:
Unicode::UCD
Provides access to the information found in the Unicode Character Database.
The Unicode Standard is more than characters and properties.
perluniprops
unichars from Unicode-Tussle (e.g. unichars '\p{Hiragana}')
Here is an awk way to do that.
Download this file from unicode.org; it provides the latest character names.
Then:
q=$(printf '%X\n' \'☃)   # uppercase hex, to match the index in the names file
awk '/^[[:xdigit:]]+/{
str=$0
sub(/^[[:xdigit:]]+[[:blank:]]+/,"",str)
names[$1]=str
}
END{ print names[q] }
' q="$q" names.txt
Prints:
SNOWMAN
If you want to go the other way:
cp=$(awk '/^[[:xdigit:]]+/{
str=$0
sub(/^[[:xdigit:]]+[[:blank:]]+/,"",str)
other_names[str]=$1
}
END{ print other_names[q] }
' q="SNOWMAN" names.txt)
echo -e "\u${cp}"
Prints:
☃
If you have GNU awk, you can easily convert the hex index to decimal and print from within awk. This allows a single source file to be used and to go one way or the other by defining q or r:
gawk '/^[[:xdigit:]]+/{
str=$0
sub(/^[[:xdigit:]]+[[:blank:]]+/,"",str)
names[$1]=str
other_names[str]=$1
}
END{ print q ? names[q] : sprintf("%c", strtonum("0x" other_names[r])) }
' r='SNOWMAN' names.txt
☃
gawk '/^[[:xdigit:]]+/{
str=$0
sub(/^[[:xdigit:]]+[[:blank:]]+/,"",str)
names[$1]=str
other_names[str]=$1
}
END{ print q ? names[q] : sprintf("%c", strtonum("0x" other_names[r])) }
' q=$(printf '%X\n' \'☃) names.txt
SNOWMAN
I separated the code into a file and created a repo:
https://github.com/poti1/uni_convert

multi search and replace with sed or similar commands in a unicode environment

I have some *.txt files placed in c:\apple and its subdirectories in a Windows 7 environment,
e.g.:
c:\apple\orange
c:\apple\pears ... etc.
but the number of subfolders in c:\apple is unknown.
I also have a text file (say sample.txt), something like a config file, with this structure:
綫 &#32171;
胆 &#32966;
湶 &#28278;
峯 &#23791;
one space between the Chinese character and the string.
I hope I can use this sample.txt file to search ALL the text files in C:\APPLE\ and its subdirectories, find those Chinese characters, and replace each with the string after it.
I have tried sed, but no luck with Chinese characters.
sed -r "s/^(.*) (.*)/s#\1#\2#/g" c:\temp\sample.txt *.txt
Anyone have an idea?
Assuming your text files including sample.txt are encoded with UTF-16LE, please try:
perl -e '
use utf8;
use File::Find;
$topdir = "c:/apple"; # top level of subfolders
$mapfile = "c:/temp/sample.txt"; # config file to map character to code
$enc = "utf16le"; # character coding of texts
open(FH, "<:encoding($enc)", $mapfile) or die "$mapfile: $!";
while (<FH>) {
@_ = split(" ");
$map{$_[0]} = $_[1];
}
close(FH);
find(\&process, $topdir);
sub process {
my $file = $_;
if (-f $file && $file =~ /\.txt$/) {
my $tmp = "$file.tmp";
my $lines = "";
open(FH, "<:encoding($enc)", $file) or die "$file: $!";
open(W, ">:encoding($enc)", $tmp) or die "$tmp: $!";
while (<FH>) {
$lines .= $_; # slurp all text
}
foreach $key (keys %map) {
$lines =~ s/$key/$map{$key}/ge;
}
print W $lines;
close(FH);
close(W);
rename $file, "$file.bak"; # back-up original file
rename $tmp, $file;
}
}'
I should mention that I have not tested the code in a Windows execution environment (it was tested on Linux with Windows files). If there are problems, please let me know. You may need to modify the assignments to $topdir, $mapfile, or $enc.

perl print hex ascii from arg input

I'm piping some hex ASCII into Perl and would like to use its print function to write it to a file. However, the hex escapes are printed literally, as a normal string, instead of as the bytes they represent.
echo "\x48\x32\x35\x45" "\x90\xc1\xe2\x7a" | perl -e '@string = split " ",<>; print $string[0],chr(0),"$string[1]";' > input.txt
This gives, however, the following in input.txt:
\x48\x32\x35\x45^@\x90\xc1\xe2\x7a
where it should look more like:
H25E^@<90><C1><E2>z
You need evals (and extra double quotes) to make perl "interpret" the hex escapes:
echo "\x48\x32\x35\x45 \x90\xc1\xe2\x7a" | perl -e '#string = split " ",<>; print eval "\"$string[0]\"",chr(0),eval "\"$string[1]\"";'

How to urlencode data for curl command?

I am trying to write a bash script for testing that takes a parameter and sends it through curl to a web site. I need to URL-encode the value to make sure that special characters are processed properly. What is the best way to do this?
Here is my basic script so far:
#!/bin/bash
host=${1:?'bad host'}
value=$2
shift
shift
curl -v -d "param=${value}" http://${host}/somepath $@
Use curl --data-urlencode; from man curl:
This posts data, similar to the other --data options with the exception that this performs URL-encoding. To be CGI-compliant, the <data> part should begin with a name followed by a separator and a content specification.
Example usage:
curl \
--data-urlencode "paramName=value" \
--data-urlencode "secondParam=value" \
http://example.com
See the man page for more info.
This requires curl 7.18.0 or newer (released January 2008). Use curl -V to check which version you have.
You can as well encode the query string:
curl --get \
--data-urlencode "p1=value 1" \
--data-urlencode "p2=value 2" \
http://example.com
# http://example.com?p1=value%201&p2=value%202
Another option is to use jq:
$ printf %s 'input text'|jq -sRr @uri
input%20text
$ jq -rn --arg x 'input text' '$x|@uri'
input%20text
-r (--raw-output) outputs the raw contents of strings instead of JSON string literals. -n (--null-input) doesn't read input from STDIN.
-R (--raw-input) treats input lines as strings instead of parsing them as JSON, and -sR (--slurp --raw-input) reads the input into a single string. You can replace -sRr with -Rr if your input only contains a single line or if you don't want to replace linefeeds with %0A:
$ printf %s\\n multiple\ lines of\ text|jq -Rr @uri
multiple%20lines
of%20text
$ printf %s\\n multiple\ lines of\ text|jq -sRr @uri
multiple%20lines%0Aof%20text%0A
Or this percent-encodes all bytes:
xxd -p|tr -d \\n|sed 's/../%&/g'
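For example, assuming xxd is installed:
$ printf '%s' 'a b' | xxd -p | tr -d '\n' | sed 's/../%&/g'
%61%20%62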
Here is the pure BASH answer.
Update: Since many changes have been discussed, I have placed this on https://github.com/sfinktah/bash/blob/master/rawurlencode.inc.sh for anybody to issue a PR against.
Note: This solution is not intended to encode unicode or multi-byte characters - which are quite outside BASH's humble native capabilities. It's only intended to encode symbols that would otherwise ruin argument passing in POST or GET requests, e.g. '&', '=' and so forth.
Very Important Note: DO NOT ATTEMPT TO WRITE YOUR OWN UNICODE CONVERSION FUNCTION, IN ANY LANGUAGE, EVER. See end of answer.
rawurlencode() {
local string="${1}"
local strlen=${#string}
local encoded=""
local pos c o
for (( pos=0 ; pos<strlen ; pos++ )); do
c=${string:$pos:1}
case "$c" in
[-_.~a-zA-Z0-9] ) o="${c}" ;;
* ) printf -v o '%%%02x' "'$c"
esac
encoded+="${o}"
done
echo "${encoded}" # You can either set a return variable (FASTER)
REPLY="${encoded}" #+or echo the result (EASIER)... or both... :p
}
You can use it in two ways:
easier: echo http://url/q?=$( rawurlencode "$args" )
faster: rawurlencode "$args"; echo http://url/q?${REPLY}
[edited]
Here's the matching rawurldecode() function, which - with all modesty - is awesome.
# Returns a string in which the sequences with percent (%) signs followed by
# two hex digits have been replaced with literal characters.
rawurldecode() {
# This is perhaps a risky gambit, but since all escape characters must be
# encoded, we can replace %NN with \xNN and pass the lot to printf -b, which
# will decode hex for us
printf -v REPLY '%b' "${1//%/\\x}" # You can either set a return variable (FASTER)
echo "${REPLY}" #+or echo the result (EASIER)... or both... :p
}
With the matching set, we can now perform some simple tests:
$ diff rawurlencode.inc.sh \
<( rawurldecode "$( rawurlencode "$( cat rawurlencode.inc.sh )" )" ) \
&& echo Matched
Output: Matched
And if you really really feel that you need an external tool (well, it will go a lot faster, and might do binary files and such...) I found this on my OpenWRT router...
replace_value=$(echo $replace_value | sed -f /usr/lib/ddns/url_escape.sed)
Where url_escape.sed was a file that contained these rules:
# sed url escaping
s:%:%25:g
s: :%20:g
s:<:%3C:g
s:>:%3E:g
s:#:%23:g
s:{:%7B:g
s:}:%7D:g
s:|:%7C:g
s:\\:%5C:g
s:\^:%5E:g
s:~:%7E:g
s:\[:%5B:g
s:\]:%5D:g
s:`:%60:g
s:;:%3B:g
s:/:%2F:g
s:?:%3F:g
s^:^%3A^g
s:@:%40:g
s:=:%3D:g
s:&:%26:g
s:\$:%24:g
s:\!:%21:g
s:\*:%2A:g
While it is not impossible to write such a script in BASH (probably using xxd and a very lengthy ruleset) capable of handling UTF-8 input, there are faster and more reliable ways. Attempting to decode UTF-8 into UTF-32 is a non-trivial task to do with accuracy, though very easy to do inaccurately, such that you think it works until the day it doesn't.
Even the Unicode Consortium removed their sample code after discovering it was no longer 100% compatible with the actual standard.
The Unicode standard is constantly evolving, and has become extremely nuanced. Any implementation you can whip together will not be properly compliant, and if by some extreme effort you managed it, it wouldn't stay compliant.
Use Perl's URI::Escape module and uri_escape function in the second line of your bash script:
...
value="$(perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' "$2")"
...
Edit: Fix quoting problems, as suggested by Chris Johnsen in the comments. Thanks!
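A standalone invocation, assuming the URI::Escape module (from the CPAN URI distribution) is installed, would look something like:
$ perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' 'foo bar&baz'
foo%20bar%26baz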
One of the variants; it may be ugly, but it's simple:
urlencode() {
local data
if [[ $# != 1 ]]; then
echo "Usage: $0 string-to-urlencode"
return 1
fi
data="$(curl -s -o /dev/null -w %{url_effective} --get --data-urlencode "$1" "")"
if [[ $? != 3 ]]; then
echo "Unexpected error" 1>&2
return 2
fi
echo "${data##/?}"
return 0
}
Here is the one-liner version for example (as suggested by Bruno):
date | curl -Gso /dev/null -w %{url_effective} --data-urlencode @- "" | cut -c 3-
# If you experience the trailing %0A, use
date | curl -Gso /dev/null -w %{url_effective} --data-urlencode @- "" | sed -E 's/..(.*).../\1/'
For the sake of completeness: many solutions using sed or awk only translate a special set of characters, and are hence quite large by code size, and also don't translate other special characters that should be encoded.
A safe way to urlencode would be to just encode every single byte - even those that would've been allowed.
echo -ne 'some random\nbytes' | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g'
xxd takes care here that the input is handled as bytes and not characters.
edit:
xxd comes with the vim-common package in Debian, and I was just on a system where it was not installed and I didn't want to install it. The alternative is to use hexdump from the bsdmainutils package in Debian. According to the following graph, bsdmainutils and vim-common should have about an equal likelihood of being installed:
http://qa.debian.org/popcon-png.php?packages=vim-common%2Cbsdmainutils&show_installed=1&want_legend=1&want_ticks=1
but nevertheless, here is a version which uses hexdump instead of xxd and avoids the tr call:
echo -ne 'some random\nbytes' | hexdump -v -e '/1 "%02x"' | sed 's/\(..\)/%\1/g'
I find it more readable in python:
encoded_value=$(python3 -c "import urllib.parse; print(urllib.parse.quote('''$value'''))")
the triple ' ensures that single quotes in value won't hurt. urllib is in the standard library. It works, for example, for this crazy (real-world) URL:
"http://www.rai.it/dl/audio/" "1264165523944Ho servito il re d'Inghilterra - Puntata 7
I've found the following snippet useful to stick it into a chain of program calls, where URI::Escape might not be installed:
perl -p -e 's/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg'
(source)
If you wish to run a GET request and use pure curl, just add --get to @Jacob's solution.
Here is an example:
curl -v --get --data-urlencode "access_token=$(cat .fb_access_token)" https://graph.facebook.com/me/feed
This may be the best one:
after=$(echo -e "$before" | od -An -tx1 | tr ' ' % | xargs printf "%s")
Direct link to awk version : http://www.shelldorado.com/scripts/cmds/urlencode
I used it for years and it works like a charm
:
##########################################################################
# Title : urlencode - encode URL data
# Author : Heiner Steven (heiner.steven@odn.de)
# Date : 2000-03-15
# Requires : awk
# Categories : File Conversion, WWW, CGI
# SCCS-Id. : @(#) urlencode 1.4 06/10/29
##########################################################################
# Description
# Encode data according to
# RFC 1738: "Uniform Resource Locators (URL)" and
# RFC 1866: "Hypertext Markup Language - 2.0" (HTML)
#
# This encoding is used i.e. for the MIME type
# "application/x-www-form-urlencoded"
#
# Notes
# o The default behaviour is not to encode the line endings. This
# may not be what was intended, because the result will be
# multiple lines of output (which cannot be used in an URL or a
# HTTP "POST" request). If the desired output should be one
# line, use the "-l" option.
#
# o The "-l" option assumes, that the end-of-line is denoted by
# the character LF (ASCII 10). This is not true for Windows or
# Mac systems, where the end of a line is denoted by the two
# characters CR LF (ASCII 13 10).
# We use this for symmetry; data processed in the following way:
# cat | urlencode -l | urldecode -l
# should (and will) result in the original data
#
# o Large lines (or binary files) will break many AWK
# implementations. If you get the message
# awk: record `...' too long
# record number xxx
# consider using GNU AWK (gawk).
#
# o urlencode will always terminate its output with an EOL
# character
#
# Thanks to Stefan Brozinski for pointing out a bug related to non-standard
# locales.
#
# See also
# urldecode
##########################################################################
PN=`basename "$0"` # Program name
VER='1.4'
: ${AWK=awk}
Usage () {
echo >&2 "$PN - encode URL data, $VER
usage: $PN [-l] [file ...]
-l: encode line endings (result will be one line of output)
The default is to encode each input line on its own."
exit 1
}
Msg () {
for MsgLine
do echo "$PN: $MsgLine" >&2
done
}
Fatal () { Msg "$@"; exit 1; }
set -- `getopt hl "$@" 2>/dev/null` || Usage
[ $# -lt 1 ] && Usage # "getopt" detected an error
EncodeEOL=no
while [ $# -gt 0 ]
do
case "$1" in
-l) EncodeEOL=yes;;
--) shift; break;;
-h) Usage;;
-*) Usage;;
*) break;; # First file name
esac
shift
done
LANG=C export LANG
$AWK '
BEGIN {
# We assume an awk implementation that is just plain dumb.
# We will convert a character to its ASCII value with the
# table ord[], and produce two-digit hexadecimal output
# without the printf("%02X") feature.
EOL = "%0A" # "end of line" string (encoded)
split ("1 2 3 4 5 6 7 8 9 A B C D E F", hextab, " ")
hextab [0] = 0
for ( i=1; i<=255; ++i ) ord [ sprintf ("%c", i) "" ] = i + 0
if ("'"$EncodeEOL"'" == "yes") EncodeEOL = 1; else EncodeEOL = 0
}
{
encoded = ""
for ( i=1; i<=length ($0); ++i ) {
c = substr ($0, i, 1)
if ( c ~ /[a-zA-Z0-9.-]/ ) {
encoded = encoded c # safe character
} else if ( c == " " ) {
encoded = encoded "+" # special handling
} else {
# unsafe character, encode it as a two-digit hex-number
lo = ord [c] % 16
hi = int (ord [c] / 16);
encoded = encoded "%" hextab [hi] hextab [lo]
}
}
if ( EncodeEOL ) {
printf ("%s", encoded EOL)
} else {
print encoded
}
}
END {
#if ( EncodeEOL ) print ""
}
' "$#"
Here's a Bash solution which doesn't invoke any external programs:
uriencode() {
s="${1//'%'/%25}"
s="${s//' '/%20}"
s="${s//'"'/%22}"
s="${s//'#'/%23}"
s="${s//'$'/%24}"
s="${s//'&'/%26}"
s="${s//'+'/%2B}"
s="${s//','/%2C}"
s="${s//'/'/%2F}"
s="${s//':'/%3A}"
s="${s//';'/%3B}"
s="${s//'='/%3D}"
s="${s//'?'/%3F}"
s="${s//'#'/%40}"
s="${s//'['/%5B}"
s="${s//']'/%5D}"
printf %s "$s"
}
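Usage is straightforward, but note that the function only touches the characters listed above; anything else (newlines, non-ASCII bytes, and so on) passes through unencoded:
$ uriencode 'foo bar/baz?x=1'
foo%20bar%2Fbaz%3Fx%3D1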
url=$(echo "$1" | sed -e 's/%/%25/g' -e 's/ /%20/g' -e 's/!/%21/g' -e 's/"/%22/g' -e 's/#/%23/g' -e 's/\$/%24/g' -e 's/\&/%26/g' -e 's/'\''/%27/g' -e 's/(/%28/g' -e 's/)/%29/g' -e 's/\*/%2a/g' -e 's/+/%2b/g' -e 's/,/%2c/g' -e 's/-/%2d/g' -e 's/\./%2e/g' -e 's/\//%2f/g' -e 's/:/%3a/g' -e 's/;/%3b/g' -e 's//%3e/g' -e 's/?/%3f/g' -e 's/#/%40/g' -e 's/\[/%5b/g' -e 's/\\/%5c/g' -e 's/\]/%5d/g' -e 's/\^/%5e/g' -e 's/_/%5f/g' -e 's/`/%60/g' -e 's/{/%7b/g' -e 's/|/%7c/g' -e 's/}/%7d/g' -e 's/~/%7e/g')
this will encode the string inside of $1 and output it in $url. although you don't have to put it in a var if you want. BTW didn't include the sed for tab thought it would turn it into spaces
Using php from a shell script:
value="http://www.google.com"
encoded=$(php -r "echo rawurlencode('$value');")
# encoded = "http%3A%2F%2Fwww.google.com"
echo $(php -r "echo rawurldecode('$encoded');")
# returns: "http://www.google.com"
http://www.php.net/manual/en/function.rawurlencode.php
http://www.php.net/manual/en/function.rawurldecode.php
If you don't want to depend on Perl you can also use sed. It's a bit messy, as each character has to be escaped individually. Make a file with the following contents and call it urlencode.sed
s/%/%25/g
s/ /%20/g
s/ /%09/g
s/!/%21/g
s/"/%22/g
s/#/%23/g
s/\$/%24/g
s/\&/%26/g
s/'\''/%27/g
s/(/%28/g
s/)/%29/g
s/\*/%2a/g
s/+/%2b/g
s/,/%2c/g
s/-/%2d/g
s/\./%2e/g
s/\//%2f/g
s/:/%3a/g
s/;/%3b/g
s/</%3c/g
s/>/%3e/g
s/?/%3f/g
s/@/%40/g
s/\[/%5b/g
s/\\/%5c/g
s/\]/%5d/g
s/\^/%5e/g
s/_/%5f/g
s/`/%60/g
s/{/%7b/g
s/|/%7c/g
s/}/%7d/g
s/~/%7e/g
To use it do the following.
STR1=$(echo "https://www.example.com/change&$ ^this to?%checkthe#-functionality" | cut -d\? -f1)
STR2=$(echo "https://www.example.com/change&$ ^this to?%checkthe#-functionality" | cut -d\? -f2)
OUT2=$(echo "$STR2" | sed -f urlencode.sed)
echo "$STR1?$OUT2"
This will split the string into a part that needs encoding and a part that is fine, encode the part that needs it, then stitch the two back together.
You can put that into a sh script for convenience, maybe have it take a parameter to encode, put it on your path and then you can just call:
urlencode https://www.exxample.com?isThisFun=HellNo
source
You can emulate javascript's encodeURIComponent in perl. Here's the command:
perl -pe 's/([^a-zA-Z0-9_.!~*()'\''-])/sprintf("%%%02X", ord($1))/ge'
You could set this as a bash alias in .bash_profile:
alias encodeURIComponent='perl -pe '\''s/([^a-zA-Z0-9_.!~*()'\''\'\'''\''-])/sprintf("%%%02X",ord($1))/ge'\'
Now you can pipe into encodeURIComponent:
$ echo -n 'hèllo wôrld!' | encodeURIComponent
h%C3%A8llo%20w%C3%B4rld!
Python 3 based on @sandro's good answer from 2010:
echo "Test & /me" | python -c "import urllib.parse;print (urllib.parse.quote(input()))"
Test%20%26%20/me
This nodejs-based answer will use encodeURIComponent on stdin:
uriencode_stdin() {
node -p 'encodeURIComponent(require("fs").readFileSync(0))'
}
echo -n $'hello\nwörld' | uriencode_stdin
hello%0Aw%C3%B6rld
For those of you looking for a solution that doesn't need perl, here is one that only needs hexdump and awk:
url_encode() {
[ $# -lt 1 ] && { return; }
encodedurl="$1";
# make sure hexdump exists, if not, just give back the url
[ ! -x "/usr/bin/hexdump" ] && { return; }
encodedurl=`
echo $encodedurl | hexdump -v -e '1/1 "%02x\t"' -e '1/1 "%_c\n"' |
LANG=C awk '
$1 == "20" { printf("%s", "+"); next } # space becomes plus
$1 ~ /0[adAD]/ { next } # strip newlines
$2 ~ /^[a-zA-Z0-9.*()\/-]$/ { printf("%s", $2); next } # pass through what we can
{ printf("%%%s", $1) } # take hex value of everything else
'`
}
Stitched together from a couple of places across the net and some local trial and error. It works great!
uni2ascii is very handy:
$ echo -ne '你好世界' | uni2ascii -aJ
%E4%BD%A0%E5%A5%BD%E4%B8%96%E7%95%8C
Simple PHP option:
echo 'part-that-needs-encoding' | php -R 'echo urlencode($argn);'
What would parse URLs better than javascript?
node -p "encodeURIComponent('$url')"
Here is a POSIX function to do that:
url_encode() {
awk 'BEGIN {
for (n = 0; n < 128; n++) {
m[sprintf("%c", n)] = n
}
n = 1
while (1) {
s = substr(ARGV[1], n, 1)
if (s == "") {
break
}
t = s ~ /[[:alnum:]_.!~*\47()-]/ ? t s : t sprintf("%%%02X", m[s])
n++
}
print t
}' "$1"
}
Example:
value=$(url_encode "$2")
The question is about doing this in bash, and there's no need for Python or Perl, as there is in fact a single command that does exactly what you want: "urlencode".
value=$(urlencode "${2}")
This also behaves better: the Perl answer above, for example, doesn't encode all characters correctly. Try it with the long dash you get from Word and you get the wrong encoding.
Note, you need "gridsite-clients" installed to provide this command:
sudo apt install gridsite-clients
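Expected usage, once the package is installed (the output shown is what the tool should produce):
$ urlencode 'two words'
two%20words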
Here's the node version:
uriencode() {
node -p "encodeURIComponent('${1//\'/\\\'}')"
}
Another php approach:
echo "encode me" | php -r "echo urlencode(file_get_contents('php://stdin'));"
Here is my version for the busybox ash shell on an embedded system; I originally adapted Orwellophile's variant:
urlencode()
{
local S="${1}"
local encoded=""
local ch
local o
for i in $(seq 0 $((${#S} - 1)) )
do
ch=${S:$i:1}
case "${ch}" in
[-_.~a-zA-Z0-9])
o="${ch}"
;;
*)
o=$(printf '%%%02x' "'$ch")
;;
esac
encoded="${encoded}${o}"
done
echo ${encoded}
}
urldecode()
{
# urldecode <string>
local url_encoded="${1//+/ }"
printf '%b' "${url_encoded//%/\\x}"
}
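A quick round trip as a sanity check (expected output shown; note that urldecode also maps + back to a space, following the form-encoding convention):
$ urlencode 'a b&c'
a%20b%26c
$ urldecode 'a%20b%26c'
a b&c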
Ruby, for completeness
value="$(ruby -r cgi -e 'puts CGI.escape(ARGV[0])' "$2")"
Here's a one-line conversion using Lua, similar to blueyed's answer except with all the RFC 3986 Unreserved Characters left unencoded (like this answer):
url=$(echo 'print((arg[1]:gsub("([^%w%-%.%_%~])",function(c)return("%%%02X"):format(c:byte())end)))' | lua - "$1")
Additionally, you may need to ensure that newlines in your string are converted from LF to CRLF, in which case you can insert a gsub("\r?\n", "\r\n") in the chain before the percent-encoding.
Here's a variant that, in the non-standard style of application/x-www-form-urlencoded, does that newline normalization, as well as encoding spaces as '+' instead of '%20' (which could probably be added to the Perl snippet using a similar technique).
url=$(echo 'print((arg[1]:gsub("\r?\n", "\r\n"):gsub("([^%w%-%.%_%~ ]))",function(c)return("%%%02X"):format(c:byte())end):gsub(" ","+"))' | lua - "$1")
In this case, I needed to URL encode the hostname. Don't ask why. Being a minimalist, and a Perl fan, here's what I came up with.
url_encode()
{
echo -n "$1" | perl -pe 's/[^a-zA-Z0-9\/_.~-]/sprintf "%%%02x", ord($&)/ge'
}
Works perfectly for me.
