How can I provide the $INPUT_RECORD_SEPARATOR to ruby -n -e?

I'd like to use Ruby's $INPUT_RECORD_SEPARATOR aka $/ to operate on a tab-separated file.
The input file looks like this (grossly simplified):
a b c
(the values are separated by tabs).
I want to get the following output:
a---
b---
c---
I can easily achieve this by using ruby -e and setting the $INPUT_RECORD_SEPARATOR alias $/:
cat bla.txt | ruby -e '$/ = "\t"; ARGF.each {|line| puts line.chop + "---" }'
This works, but what I'd really like is this:
cat bla.txt | ruby -n -e '$/ = "\t"; puts $_.chop + "---" '
However, this prints:
a b c---
Apparently, it doesn't use the provided separator - presumably because it has already read the first line before the separator was set. I tried to provide it as an environment variable:
cat bla.txt | $/="\n" ruby -n -e 'puts $_.chop + "---" '
but this confuses the shell - it tries to interpret $/ as a command (I also tried escaping the $ with one, two, three or four backslashes, all to no avail).
So how can I combine $/ with ruby -n -e ?

Use the -0 option:
cat bla.txt | ruby -011 -n -e 'puts $_.chop + "---" '
a---
b---
c---
-0[octal] Sets the default record separator ($/) as an octal number. Defaults to \0 if no octal is specified.
Tabs have an ASCII code of 9, which is 11 in octal, hence the -011.
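You can double-check the octal value from the shell if you like:
$ printf '%o\n' 9
11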

Use a BEGIN block, which is processed before Ruby begins looping over the lines:
$ echo "foo\tbar\tbaz" | \
> ruby -n -e 'BEGIN { $/ = "\t" }; puts $_.chop + "---"'
foo---
bar---
baz---
Or, more readably:
#!/usr/bin/env ruby -n
BEGIN {
$/ = "\t"
}
puts $_.chop + "---"
Then:
$ chmod u+x script.rb
$ echo "foo\tbar\tbaz" | ./script.rb
foo---
bar---
baz---
If this is more than a one-off script (i.e. other people might use it), it may be worthwhile to make it configurable with an argument or an environment variable, e.g. $/ = ENV['IFS'] || "\t".
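For instance, a caller could then override the separator per run; a minimal sketch, assuming bash's $'...' quoting to pass a literal tab in the environment variable:
IFS=$'\t' ./script.rb < bla.txt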

Related

Inspect null character from Bash's read command

I am on a system that does not have hexdump. I know there's a null character on STDIN, but I want to show/prove it. I've got Ruby on the system. I've found that I can directly print it like this:
$ printf 'before\000after' | (ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before\x00after"
However, I need to run this inside of a bash script i.e. STDIN is not being directly piped to my script. I have to get it via running read in bash.
When I try to use read to get the stdin characters, it seems to be truncating them and it doesn't work:
$ printf 'before\000after' | (read -r -t 1 -n 1000000; printf "$REPLY" | ruby -e "stdin_contents = STDIN.read_nonblock(10000) rescue nil; puts 'stdin contents: ' + stdin_contents.inspect")
stdin contents: "before"
My question is this: How can I get the full/raw output including the null character from read

How can I extract a value from an .ini file using sed [duplicate]

I have a parameters.ini file, such as:
[parameters.ini]
database_user = user
database_version = 20110611142248
I want to read in and use the database version specified in the parameters.ini file from within a bash shell script so I can process it.
#!/bin/sh
# Need to get database version from parameters.ini file to use in script
php app/console doctrine:migrations:migrate $DATABASE_VERSION
How would I do this?
How about grepping for that line, then using awk?
version=$(awk -F "=" '/database_version/ {print $2}' parameters.ini)
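If there may be spaces around the = (e.g. database_version = 20110611142248), a small variation strips whitespace from the value as well; this is just an illustrative tweak of the same awk call:
version=$(awk -F "=" '/database_version/ {gsub(/[[:space:]]/, "", $2); print $2}' parameters.ini)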
You can use bash's native parser to interpret ini values:
$ source <(grep = file.ini)
Sample file:
[section-a]
var1=value1
var2=value2
IPS=( "1.2.3.4" "1.2.3.5" )
To access the variables, simply print them: echo $var1. You may also use arrays as shown above (echo ${IPS[@]}).
If you only want a single value just grep for it:
source <(grep var1 file.ini)
For the demo, check this recording at asciinema.
It is simple, as you don't need any external library to parse the data, but it comes with some disadvantages. For example:
If you have spaces around = (between the variable name and value), then you have to trim the spaces first, e.g.
$ source <(grep = file.ini | sed 's/ *= */=/g')
Or if you don't care about the spaces (including in the middle), use:
$ source <(grep = file.ini | tr -d ' ')
To support ; comments, replace them with #:
$ sed "s/;/#/g" foo.ini | source /dev/stdin
Sections aren't supported (e.g. if you have [section-name], you have to filter it out as shown above, e.g. with grep =); the same goes for other unexpected entries.
If you need to read a specific value under a specific section, use grep -A, sed, awk or ex.
E.g.
source <(grep = <(grep -A5 '\[section-b\]' file.ini))
Note: -A5 is the number of rows to read from the section. Replace source with cat to debug.
If you've got any parsing errors, ignore them by adding: 2>/dev/null
See also:
How to parse and convert ini file into bash array variables? at serverfault SE
Are there any tools for modifying INI style files from shell script
A sed one-liner that takes sections into account. Example file:
[section1]
param1=123
param2=345
param3=678
[section2]
param1=abc
param2=def
param3=ghi
[section3]
param1=000
param2=111
param3=222
Say you want param2 from section2. Run the following:
sed -nr "/^\[section2\]/ { :l /^param2[ ]*=/ { s/[^=]*=[ ]*//; p; q;}; n; b l;}" ./file.ini
This will give you:
def
Bash does not provide a parser for these files. Obviously you can use an awk command or a couple of sed calls, but if you are a bash purist and don't want to use any other tools, then you can try the following obscure code:
#!/usr/bin/env bash
cfg_parser ()
{
ini="$(<$1)" # read the file
ini="${ini//[/\[}" # escape [
ini="${ini//]/\]}" # escape ]
IFS=$'\n' && ini=( ${ini} ) # convert to line-array
ini=( ${ini[*]//;*/} ) # remove comments with ;
ini=( ${ini[*]/\ =/=} ) # remove tabs before =
ini=( ${ini[*]/=\ /=} ) # remove tabs after =
ini=( ${ini[*]/\ =\ /=} ) # remove anything with a space around =
ini=( ${ini[*]/#\\[/\}$'\n'cfg.section.} ) # set section prefix
ini=( ${ini[*]/%\\]/ \(} ) # convert text2function (1)
ini=( ${ini[*]/=/=\( } ) # convert item to array
ini=( ${ini[*]/%/ \)} ) # close array parenthesis
ini=( ${ini[*]/%\\ \)/ \\} ) # the multiline trick
ini=( ${ini[*]/%\( \)/\(\) \{} ) # convert text2function (2)
ini=( ${ini[*]/%\} \)/\}} ) # remove extra parenthesis
ini[0]="" # remove first element
ini[${#ini[*]} + 1]='}' # add the last brace
eval "$(echo "${ini[*]}")" # eval the result
}
cfg_writer ()
{
IFS=' '$'\n'
fun="$(declare -F)"
fun="${fun//declare -f/}"
for f in $fun; do
[ "${f#cfg.section}" == "${f}" ] && continue
item="$(declare -f ${f})"
item="${item##*\{}"
item="${item%\}}"
item="${item//=*;/}"
vars="${item//=*/}"
eval $f
echo "[${f#cfg.section.}]"
for var in $vars; do
echo $var=\"${!var}\"
done
done
}
Usage:
# parse the config file called 'myfile.ini', with the following
# contents::
# [sec2]
# var2='something'
cfg_parser 'myfile.ini'
# enable section called 'sec2' (in the file [sec2]) for reading
cfg.section.sec2
# read the content of the variable called 'var2' (in the file
# var2=XXX). If your var2 is an array, then you can use
# ${var[index]}
echo "$var2"
Bash ini-parser can be found at The Old School DevOps blog site.
Just include your .ini file into bash body:
File example.ini:
DBNAME=test
DBUSER=scott
DBPASSWORD=tiger
File example.sh
#!/bin/bash
#Including .ini file
. example.ini
#Test
echo "${DBNAME} ${DBUSER} ${DBPASSWORD}"
All of the solutions I've seen so far also hit on commented-out lines. This one doesn't, as long as the comment character is ;:
awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /database_version/) print $2}' file.ini
You may use crudini tool to get ini values, e.g.:
DATABASE_VERSION=$(crudini --get parameters.ini '' database_version)
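crudini is a separate tool, so it may need to be installed first; depending on your system it is typically available from the package manager or PyPI (package names may vary):
sudo apt-get install crudini
# or
pip install crudini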
One more possible solution:
dbver=$(sed -n 's/.*database_version *= *\([^ ]*.*\)/\1/p' < parameters.ini)
echo $dbver
Display the value of my_key in an ini-style my_file:
sed -n -e 's/^\s*my_key\s*=\s*//p' my_file
-n -- do not print anything by default
-e -- execute the expression
s/PATTERN//p -- display anything following this pattern
In the pattern:
^ -- pattern begins at the beginning of the line
\s -- whitespace character
* -- zero or many (whitespace characters)
Example:
$ cat my_file
# Example INI file
something = foo
my_key = bar
not_my_key = baz
my_key_2 = bing
$ sed -n -e 's/^\s*my_key\s*=\s*//p' my_file
bar
So:
Find a pattern where the line begins with zero or many whitespace characters,
followed by the string my_key, followed by zero or many whitespace characters, an equal sign, then zero or many whitespace characters again. Display the rest of the content on that line following that pattern.
Similar to the other Python answers, you can do this using the -c flag to execute a sequence of Python statements given on the command line:
$ python3 -c "import configparser; c = configparser.ConfigParser(); c.read('parameters.ini'); print(c['parameters.ini']['database_version'])"
20110611142248
This has the advantage of requiring only the Python standard library, and of not needing a separate script file.
Or use a here document for better readability, thusly:
#!/bin/bash
python3 << EOI
import configparser
c = configparser.ConfigParser()
c.read('params.txt')
print(c['chassis']['serialNumber'])
EOI
serialNumber=$(python3 << EOI
import configparser
c = configparser.ConfigParser()
c.read('params.txt')
print(c['chassis']['serialNumber'])
EOI
)
echo $serialNumber
sed
You can use sed to parse the ini configuration file, especially when you have section names like:
# last modified 1 April 2001 by John Doe
[owner]
name=John Doe
organization=Acme Widgets Inc.
[database]
# use IP address in case network name resolution is not working
server=192.0.2.62
port=143
file=payroll.dat
so you can use the following sed script to parse the above data:
# Configuration bindings found outside any section are given
# to the default section.
1 {
x
s/^/default/
x
}
# Lines starting with a #-character are comments.
/#/n
# Sections are unpacked and stored in the hold space.
/\[/ {
s/\[\(.*\)\]/\1/
x
b
}
# Bindings are unpacked and decorated with the section
# they belong to, before being printed.
/=/ {
s/^[[:space:]]*//
s/[[:space:]]*=[[:space:]]*/|/
G
s/\(.*\)\n\(.*\)/\2|\1/
p
}
this will convert the ini data into this flat format:
owner|name|John Doe
owner|organization|Acme Widgets Inc.
database|server|192.0.2.62
database|port|143
database|file|payroll.dat
so it'll be easier to parse using sed, awk or read, since every line carries its section name.
Credits & source: Configuration files for shell scripts, Michael Grünewald
Alternatively, you can use this project: chilladx/config-parser, a configuration parser using sed.
For people (like me) looking to read INI files from shell scripts (read: shell, not bash), I've knocked up a little helper library which tries to do exactly that:
https://github.com/wallyhall/shini (MIT license, do with it as you please. I've linked it above rather than including it inline, as the code is quite lengthy.)
It's somewhat more "complicated" than the simple sed lines suggested above, but works on a very similar basis.
The function reads a file line by line, looking for section markers ([section]) and key/value declarations (key=value).
Ultimately you get a callback to your own function with the section, key and value.
Here is my version, which parses sections and populates a global associative array g_iniProperties with it.
Note that this works only with bash v4.2 and higher.
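If you are unsure which bash you have, you can check it first:
echo "$BASH_VERSION"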
function parseIniFile() { #accepts the name of the file to parse as argument ($1)
#declare syntax below (-gA) only works with bash 4.2 and higher
unset g_iniProperties
declare -gA g_iniProperties
currentSection=""
while read -r line
do
if [[ $line = [* ]] ; then
if [[ $line = [* ]] ; then
currentSection=$(echo $line | sed -e 's/\r//g' | tr -d "[]")
fi
else
if [[ $line = *=* ]] ; then
cleanLine=$(echo $line | sed -e 's/\r//g')
key=$currentSection.$(echo $cleanLine | awk -F: '{ st = index($0,"=");print substr($0,0,st-1)}')
value=$(echo $cleanLine | awk -F: '{ st = index($0,"=");print substr($0,st+1)}')
g_iniProperties[$key]=$value
fi
fi;
done < $1
}
And here is a sample code using the function above:
parseIniFile "/path/to/myFile.ini"
for key in "${!g_iniProperties[@]}"; do
echo "Found key/value $key = ${g_iniProperties[$key]}"
done
Yet another implementation using awk with a little more flexibility.
function parse_ini() {
cat /dev/stdin | awk -v section="$1" -v key="$2" '
BEGIN {
if (length(key) > 0) { params=2 }
else if (length(section) > 0) { params=1 }
else { params=0 }
}
match($0,/#/) { next }
match($0,/^\[(.+)\]$/){
current=substr($0, RSTART+1, RLENGTH-2)
found=current==section
if (params==0) { print current }
}
match($0,/(.+)=(.+)/) {
if (found) {
if (params==2 && key==$1) { print $3 }
if (params==1) { printf "%s=%s\n",$1,$3 }
}
}'
}
You can call it passing between 0 and 2 params:
cat myfile1.ini myfile2.ini | parse_ini # List section names
cat myfile1.ini myfile2.ini | parse_ini 'my-section' # Prints keys and values from a section
cat myfile1.ini myfile2.ini | parse_ini 'my-section' 'my-key' # Print a single value
complex simplicity
ini file
test.ini
[section1]
name1=value1
name2=value2
[section2]
name1=value_1
name2 = value_2
bash script with read and execute
/bin/parseini
#!/bin/bash
set +a
while read p; do
reSec='^\[(.*)\]$'
#reNV='[ ]*([^ ]*)+[ ]*=(.*)' #Remove only spaces around name
reNV='[ ]*([^ ]*)+[ ]*=[ ]*(.*)' #Remove spaces around name and spaces before value
if [[ $p =~ $reSec ]]; then
section=${BASH_REMATCH[1]}
elif [[ $p =~ $reNV ]]; then
sNm=${section}_${BASH_REMATCH[1]}
sVa=${BASH_REMATCH[2]}
set -a
eval "$(echo "$sNm"=\""$sVa"\")"
set +a
fi
done < $1
Then in another script I source the results of the command and can use any variables within it.
test.sh
#!/bin/bash
source parseini test.ini
echo $section2_name2
Finally, from the command line, the output is thus:
# ./test.sh
value_2
Some of the answers don't respect comments. Some don't respect sections. Some recognize only one syntax (only ":" or only "="). Some Python answers fail on my machine because of differing capitalization or failing to import the sys module. All are a bit too terse for me.
So I wrote my own, and if you have a modern Python, you can probably call this from your Bash shell. It has the advantage of adhering to some of the common Python coding conventions, and even provides sensible error messages and help. To use it, name it something like myconfig.py (do NOT call it configparser.py or it may try to import itself), make it executable, and call it like
value=$(myconfig.py something.ini sectionname value)
Here's my code for Python 3.5 on Linux:
#!/usr/bin/env python3
# Last Modified: Thu Aug 3 13:58:50 PDT 2017
"""A program that Bash can call to parse an .ini file"""
import sys
import configparser
import argparse
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="A program that Bash can call to parse an .ini file")
parser.add_argument("inifile", help="name of the .ini file")
parser.add_argument("section", help="name of the section in the .ini file")
parser.add_argument("itemname", help="name of the desired value")
args = parser.parse_args()
config = configparser.ConfigParser()
config.read(args.inifile)
print(config.get(args.section, args.itemname))
I wrote a quick and easy python script to include in my bash script.
For example, your ini file is called food.ini
and in the file you can have some sections and some lines:
[FRUIT]
Oranges = 14
Apples = 6
Copy this small 6 line Python script and save it as configparser.py
#!/usr/bin/python
import configparser
import sys
config = configparser.ConfigParser()
config.read(sys.argv[1])
print(config.get(sys.argv[2], sys.argv[3]))
Now, in your bash script you could do this for example.
OrangeQty=$(python configparser.py food.ini FRUIT Oranges)
or
ApplesQty=$(python configparser.py food.ini FRUIT Apples)
echo $ApplesQty
This presupposes:
you have Python installed
you have the configparser library installed (this should come with a std python installation)
Hope it helps
:¬)
An explanation of the sed one-liner answer above.
[section1]
param1=123
param2=345
param3=678
[section2]
param1=abc
param2=def
param3=ghi
[section3]
param1=000
param2=111
param3=222
sed -nr "/^\[section2\]/ { :l /^\s*[^#].*/ p; n; /^\[/ q; b l; }" ./file.ini
To understand, it will be easier to format the line like this:
sed -nr "
# start processing when we found the word \"section2\"
/^\[section2\]/ { #the set of commands inside { } will be executed
#create a label \"l\" (https://www.grymoire.com/Unix/Sed.html#uh-58)
:l /^\s*[^#].*/ p;
# move on to the next line. For the first run it is the \"param1=abc\"
n;
# check if this line is beginning of new section. If yes - then exit.
/^\[/ q
#otherwise jump to the label \"l\"
b l
}
" file.ini
This script gets its parameters as follows:
pars_ini.ksh < path to ini file > < name of Sector in Ini file > < the name in name=value to return >
E.g., if your ini has:
[ environment ]
a=x
[ DataBase_Sector ]
DSN = something
then calling:
pars_ini.ksh /users/bubu_user/parameters.ini DataBase_Sector DSN
will retrieve the "something".
The script pars_ini.ksh:
#!/bin/ksh
#INI_FILE=path/to/file.ini
#INI_SECTION=TheSection
# BEGIN parse-ini-file.sh
# SET UP THE MINIMUM VARS FIRST
alias sed=/usr/local/bin/sed
INI_FILE=$1
INI_SECTION=$2
INI_NAME=$3
INI_VALUE=""
eval `sed -e 's/[[:space:]]*\=[[:space:]]*/=/g' \
-e 's/;.*$//' \
-e 's/[[:space:]]*$//' \
-e 's/^[[:space:]]*//' \
-e "s/^\(.*\)=\([^\"']*\)$/\1=\"\2\"/" \
< $INI_FILE \
| sed -n -e "/^\[$INI_SECTION\]/,/^\s*\[/{/^[^;].*\=.*/p;}"`
TEMP_VALUE=`echo "$"$INI_NAME`
echo `eval echo $TEMP_VALUE`
This implementation uses awk and has the following advantages:
Will only return the first matching entry
Ignores lines that start with a ;
Trims leading and trailing whitespace, but not internal whitespace
Formatted version:
awk -F '=' '/^\s*database_version\s*=/ {
sub(/^ +/, "", $2);
sub(/ +$/, "", $2);
print $2;
exit;
}' parameters.ini
One-liner:
awk -F '=' '/^\s*database_version\s*=/ { sub(/^ +/, "", $2); sub(/ +$/, "", $2); print $2; exit; }' parameters.ini
You can use the CSV parser xsv to parse INI data.
cargo install xsv
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
$ xsv select -d "=" - <<< "$( cat /etc/*release )" | xsv search --no-headers --select 1 "DISTRIB_CODENAME" | xsv select 2
xenial
or from a file.
$ xsv select -d "=" - file.ini | xsv search --no-headers --select 1 "DISTRIB_CODENAME" | xsv select 2
My version of the one-liner
#!/bin/bash
#Reader for MS Windows 3.1 Ini-files
#Usage: inireader.sh
# e.g.: inireader.sh win.ini ERRORS DISABLE
# would return value "no" from the section of win.ini
#[ERRORS]
#DISABLE=no
INIFILE=$1
SECTION=$2
ITEM=$3
cat $INIFILE | sed -n /^\[$SECTION\]/,/^\[.*\]/p | grep "^[[:space:]]*$ITEM[[:space:]]*=" | sed s/.*=[[:space:]]*//
Just finished writing my own parser. I tried to use the various parsers found here; none seems to work with both ksh93 (AIX) and bash (Linux).
It's old programming style, parsing line by line. It's pretty fast since it uses few external commands, but a bit slower because of all the eval calls required for the dynamic name of the array.
The ini supports 3 special syntaxes:
includefile=ini file -->
Load an additional ini file. Useful for splitting an ini into multiple files, or re-using some pieces of configuration.
includedir=directory -->
Same as includefile, but include a complete directory
includesection=section -->
Copy an existing section to the current section.
I used all those syntaxes to build pretty complex, re-usable ini files. Useful for installing products when installing a new OS - we do that a lot.
Values can be accessed with ${ini[$section.$item]}. The array MUST be defined before calling this.
Have fun. Hope it's useful for someone else!
function Show_Debug {
[[ $DEBUG = YES ]] && echo "DEBUG $@"
}
function Fatal {
echo "$@. Script aborted"
exit 2
}
#-------------------------------------------------------------------------------
# This function load an ini file in the array "ini"
# The "ini" array must be defined in the calling program (typeset -A ini)
#
# It could be any array name, the default array name is "ini".
#
# There is heavy usage of "eval" since ksh and bash do not support
# reference variables. The name of the ini is passed as a variable, and must
# be "eval"ed at run-time to work. Very specific syntax was used and must be
# understood before making any modifications.
#
# This greatly complicates the program, but adds flexibility.
#-------------------------------------------------------------------------------
function Load_Ini {
Show_Debug "$0($#)"
typeset ini_file="$1"
# Name of the array to fill. By default, it's "ini"
typeset ini_array_name="${2:-ini}"
typeset section variable value line my_section file subsection value_array include_directory all_index index sections pre_parse
typeset LF="
"
if [[ ! -s $ini_file ]]; then
Fatal "The ini file is empty or absent in $0 [$ini_file]"
fi
include_directory=$(dirname $ini_file)
include_directory=${include_directory:-$(pwd)}
Show_Debug "include_directory=$include_directory"
section=""
# Since this code support both bash and ksh93, you cannot use
# the syntax "echo xyz|while read line". bash doesn't work like
# that.
# It forces the use of "<<<", introduced in bash and ksh93.
Show_Debug "Reading file $ini_file and putting the results in array $ini_array_name"
pre_parse="$(sed 's/^ *//g;s/#.*//g;s/ *$//g' <$ini_file | egrep -v '^$')"
while read line; do
if [[ ${line:0:1} = "[" ]]; then # Is the line starting with "["?
# Replace [section_name] to section_name by removing the first and last character
section="${line:1}"
section="${section%\]}"
eval "sections=\${$ini_array_name[sections_list]}"
sections="$sections${sections:+ }$section"
eval "$ini_array_name[sections_list]=\"$sections\""
Show_Debug "$ini_array_name[sections_list]=\"$sections\""
eval "$ini_array_name[$section.exist]=YES"
Show_Debug "$ini_array_name[$section.exist]='YES'"
else
variable=${line%%=*} # content before the =
value=${line#*=} # content after the =
if [[ $variable = includefile ]]; then
# Include a single file
Load_Ini "$include_directory/$value" "$ini_array_name"
continue
elif [[ $variable = includedir ]]; then
# Include a directory
# If the value doesn't start with a /, add the calculated include_directory
if [[ $value != /* ]]; then
value="$include_directory/$value"
fi
# go thru each file
for file in $(ls $value/*.ini 2>/dev/null); do
if [[ $file != *.ini ]]; then continue; fi
# Load a single file
Load_Ini "$file" "$ini_array_name"
done
continue
elif [[ $variable = includesection ]]; then
# Copy an existing section into the current section
eval "all_index=\"\${!$ini_array_name[@]}\""
# It's not necessarily fast. Need to go thru all the array
for index in $all_index; do
# Only if it is the requested section
if [[ $index = $value.* ]]; then
# Evaluate the subsection [section.subsection] --> subsection
subsection=${index#*.}
# Get the current value (source section)
eval "value_array=\"\${$ini_array_name[$index]}\""
# Assign the value to the current section
# The $value_array must be resolved on the second pass of the eval, so make sure the
# first pass doesn't resolve it (\$value_array instead of $value_array).
# It must be evaluated on the second pass in case there is special character like $1,
# or ' or " in it (code).
eval "$ini_array_name[$section.$subsection]=\"\$value_array\""
Show_Debug "$ini_array_name[$section.$subsection]=\"$value_array\""
fi
done
fi
# Add the value to the array
eval "current_value=\"\${$ini_array_name[$section.$variable]}\""
# If there's already something for this field, add it with the current
# content separated by a LF (line_feed)
new_value="$current_value${current_value:+$LF}$value"
# Assign the content
# The $new_value must be resolved on the second pass of the eval, so make sure the
# first pass doesn't resolve it (\$new_value instead of $new_value).
# It must be evaluated on the second pass in case there is special character like $1,
# or ' or " in it (code).
eval "$ini_array_name[$section.$variable]=\"\$new_value\""
Show_Debug "$ini_array_name[$section.$variable]=\"$new_value\""
fi
done <<< "$pre_parse"
Show_Debug "exit $0($#)\n"
}
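For example, a minimal call sequence might look like this (the ini path, section and item names below are only illustrative):
typeset -A ini                     # the array must exist before calling Load_Ini
Load_Ini /path/to/product.ini      # fills ini[...] and ini[sections_list]
echo "${ini[database.server]}"     # value of "server" in the [database] section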
When I use a password in base64, I put the separator ":" because the base64 string may have "=". For example (I use ksh):
> echo "Abc123" | base64
QWJjMTIzCg==
In parameters.ini put the line pass:QWJjMTIzCg==, and finally:
> PASS=`awk -F":" '/pass/ {print $2 }' parameters.ini | base64 --decode`
> echo "$PASS"
Abc123
If the line has spaces like "pass : QWJjMTIzCg== " add | tr -d ' ' to trim them:
> PASS=`awk -F":" '/pass/ {print $2 }' parameters.ini | tr -d ' ' | base64 --decode`
> echo "[$PASS]"
[Abc123]
This uses the system perl and clean regular expressions:
cat parameters.ini | perl -0777ne 'print "$1" if /\[\s*parameters\.ini\s*\][\s\S]*?\sdatabase_version\s*=\s*(.*)/'
The answer from Karen Gabrielyan above was the best, but some environments don't have awk, like a typical busybox, so I changed the answer to the code below.
trim()
{
local trimmed="$1"
# Strip leading space.
trimmed="${trimmed## }"
# Strip trailing space.
trimmed="${trimmed%% }"
echo "$trimmed"
}
function parseIniFile() { #accepts the name of the file to parse as argument ($1)
#declare syntax below (-gA) only works with bash 4.2 and higher
unset g_iniProperties
declare -gA g_iniProperties
currentSection=""
while read -r line
do
if [[ $line = [* ]] ; then
if [[ $line = [* ]] ; then
currentSection=$(echo $line | sed -e 's/\r//g' | tr -d "[]")
fi
else
if [[ $line = *=* ]] ; then
cleanLine=$(echo $line | sed -e 's/\r//g')
key=$(trim $currentSection.$(echo $cleanLine | cut -d'=' -f1))
value=$(trim $(echo $cleanLine | cut -d'=' -f2))
g_iniProperties[$key]=$value
fi
fi;
done < $1
}
If Python is available, the following will read all the sections, keys and values and save them in variables with their names following the format "[section]_[key]". Python can read .ini files properly, so we make use of it.
#!/bin/bash
eval $(python3 << EOP
from configparser import ConfigParser
config = ConfigParser()
config.read("config.ini")
for section in config.sections():
for (key, val) in config.items(section):
print(section + "_" + key + "=\"" + val + "\"")
EOP
)
echo "Environment_type: ${Environment_type}"
echo "Environment_name: ${Environment_name}"
config.ini
[Environment]
type = DEV
name = D01
If using sections, this will do the job:
Example raw output:
$ ./settings
[section]
SETTING_ONE=this is setting one
SETTING_TWO=This is the second setting
ANOTHER_SETTING=This is another setting
Regexp parsing:
$ ./settings | sed -n -E "/^\[.*\]/{s/\[(.*)\]/\1/;h;n;};/^[a-zA-Z]/{s/#.*//;G;s/([^ ]*) *= *(.*)\n(.*)/\3_\1='\2'/;p;}"
section_SETTING_ONE='this is setting one'
section_SETTING_TWO='This is the second setting'
section_ANOTHER_SETTING='This is another setting'
Now all together:
$ eval "$(./settings | sed -n -E "/^\[.*\]/{s/\[(.*)\]/\1/;h;n;};/^[a-zA-Z]/{s/#.*//;G;s/([^ ]*) *= *(.*)\n(.*)/\3_\1='\2'/;p;}")"
$ echo $section_SETTING_TWO
This is the second setting
I have a nice one-liner (assuming you have php and jq installed):
cat file.ini | php -r "echo json_encode(parse_ini_string(file_get_contents('php://stdin'), true, INI_SCANNER_RAW));" | jq '.section.key'
This thread does not have enough solutions to choose from, so here is mine; it does not require tools like sed or awk:
grep '^\[section\]' -A 999 config.ini | tail -n +2 | grep -B 999 '^\[' | head -n -1 | grep '^key' | cut -d '=' -f 2
If you expect sections with more than 999 lines, feel free to adapt the example above. Note that you may want to trim the resulting value, to remove spaces or a comment string after the value (one option is sketched below). Remove the ^ if you need to match keys that do not start at the beginning of the line, as in the example in the question; better, match explicitly for whitespace and tabs in such a case.
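For example, one way to strip a trailing ; comment and trailing whitespace from the extracted value is to append a sed filter (an illustrative addition, not part of the original one-liner):
grep '^\[section\]' -A 999 config.ini | tail -n +2 | grep -B 999 '^\[' | head -n -1 | grep '^key' | cut -d '=' -f 2 | sed -e 's/;.*$//' -e 's/[[:space:]]*$//'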
If you have multiple values in a given section you want to read, but want to avoid reading the file multiple times:
CONFIG_SECTION=$(grep '^\[section\]' -A 999 config.ini | tail -n +2 | grep -B 999 '^\[' | head -n -1)
KEY1=$(echo ${CONFIG_SECTION} | tr ' ' '\n' | grep key1 | cut -d '=' -f 2)
echo "KEY1=${KEY1}"
KEY2=$(echo ${CONFIG_SECTION} | tr ' ' '\n' | grep key2 | cut -d '=' -f 2)
echo "KEY2=${KEY2}"

Use `sed` to replace text in code block with output of command at the top of the code block

I have a markdown file that has snippets of code resembling the following example:
```
$ cat docs/code_sample.sh
#!/usr/bin/env bash
echo "Hello, world"
```
This means there's a file at the location docs/code_sample.sh, whose contents are:
#!/usr/bin/env bash
echo "Hello, world"
I'd like to parse the markdown file with sed (awk or perl works too) and replace the bottom section of the code snippet with whatever the above bash command evaluates to, for example whatever cat docs/code_sample.sh evaluates to.
Perl to the rescue!
perl -0777 -pe 's/(?<=```\n)^(\$ (.*)\n\n)(?^s:.*?)(?=```)/"$1".qx($2)/meg' < input > output
-0777 slurps the whole file into memory
-p prints the input after processing
s/PATTERN/REPLACEMENT/ works similarly to a substitution in sed
/g replaces globally, i.e. as many times as it can
/m makes ^ match start of each line instead of start of the whole input string
/e evaluates the replacement as code
(?<=```\n) means "preceded by three backquotes and a newline"
(?^s:.*?) changes the behaviour of . to match newlines as well, so it matches (frugally because of the *?) the rest of the preformatted block
(?=```) means "followed by three backquotes"
qx runs the parameter in a shell and returns its output
A sed-only solution is easier if you have the GNU version with an e command.
That said, here's a quick, simplistic, and kinda clumsy version I knocked out that doesn't bother to check the values of previous or following lines - it just assumes your format is good, and bulls through without any looping or anything else. Still, for my example code, it worked.
I started by making an a, a b, and an x that is the markup file.
$: cat a
#! /bin/bash
echo "Hello, World!"
$: cat b
#! /bin/bash
echo "SCREW YOU!!!!"
$: cat x
```
$ cat a
foo
bar
" b a z ! "
```
```
$ cat b
foo
bar
" b a z ! "
```
Then I wrote s which is the sed script.
$: cat s
#! /bin/env bash
sed -En '
/^```$/,/^```$/ {
# for the lines starting with the $ prompt
/^[$] / {
# save the command to the hold space
x
# write the ``` header to the pattern space
s/.*/```/
# print the fabricated header
p
# swap the command back in
x
# the next line should be blank - add it to the current pattern space
N
# first print the line of code as-is with the (assumed) following blank line
p
# scrub the $ (prompt) off the command
s/^[$] //
# execute the command - store the output into the pattern space
e
# print the output
p
# put the markdown footer back
s/.*/```/
# and print that
p
}
# for the (to be discarded) existing lines of "content"
/^[^`$]/d
}
' $*
It does the job and might get you started.
$: s x
```
$ cat a
#! /bin/bash
echo "Hello, World!"
```
```
$ cat b
#! /bin/bash
echo "SCREW YOU!!!!"
```
Lots of caveats - better to actually check that the $ follows a line of backticks and is followed by a blank line, maybe make sure nothing bogus could be in the file to get executed... but this does what you asked, with (GNU) sed.
Good luck.
A rare case when use of getline would be appropriate:
$ cat tst.awk
state == "importing" {
while ( (getline line < $NF) > 0 ) {
print line
}
close($NF)
state = "imported"
}
$0 == "```" { state = (state ? "" : "importing") }
state != "imported" { print }
$ awk -f tst.awk file
See http://awk.freeshell.org/AllAboutGetline for getline uses and caveats.

Multiline CSV: output on a single line, with double-quoted input lines, using a different separator

I'm trying to get a multiline output from a CSV into one line in Bash.
My CSV file looks like this:
hi,bye
hello,goodbye
The end goal is for it to look like this:
"hi/bye", "hello/goodbye"
This is currently where I'm at:
INPUT=mycsvfile.csv
while IFS=, read col1 col2 || [ -n "$col1" ]
do
source=$(awk '{print;}' | sed -e 's/,/\//g' )
echo "$source";
done < $INPUT
The output is on every line and I'm able to change the , to a / but I'm not sure how to put the output on one line with quotes around it.
I've tried BEGIN:
source=$(awk 'BEGIN { ORS=", " }; {print;}'| sed -e 's/,/\//g' )
But this only outputs the last line, and omits the first hi/bye:
hello/goodbye
Would anyone be able to help me?
Just do the whole thing (mostly) in awk. The final sed is just here to trim some trailing cruft and inject a newline at the end:
< mycsvfile.csv awk '{print "\""$1, $2"\""}' FS=, OFS=/ ORS=", " | sed 's/, $//'
If you're willing to install trl, a utility of mine, the command can be simplified as follows:
input=mycsvfile.csv
trl -R '| ' < "$input" | tr ',|' '/,'
trl transforms multiline input into double-quoted single-line output separated by ,<space> by default.
-R '| ' (temporarily) uses |<space> as the separator instead; this assumes that your data doesn't contain | instances, but you can choose any character that you know is not part of your data.
tr ',|' '/,' then translates all , instances (field-internal to the input lines) into / instances, and all | instances (the temporary separator) into , instances, yielding the overall result as desired.
Installation of trl from the npm registry (Linux and macOS)
Note: Even if you don't use Node.js, npm, its package manager, works across platforms and is easy to install; try
curl -L https://git.io/n-install | bash
With Node.js installed, install as follows:
[sudo] npm install trl -g
Note:
Whether you need sudo depends on how you installed Node.js and whether you've changed permissions later; if you get an EACCES error, try again with sudo.
The -g ensures global installation and is needed to put trl in your system's $PATH.
Manual installation (any Unix platform with bash)
Download this bash script as trl.
Make it executable with chmod +x trl.
Move it or symlink it to a folder in your $PATH, such as /usr/local/bin (macOS) or /usr/bin (Linux).
$ awk -F, -v OFS='/' -v ORS='"' '{$1=s ORS $1; s=", "; print} END{printf RS}' file
"hi/bye", "hello/goodbye"
There is no need for a bash loop, which is invariably slow.
sed and tr can do this more efficiently:
input=mycsvfile.csv
sed 's/,/\//g; s/.*/"&", /; $s/, $//' "$input" | tr -d '\n'
s/,/\//g replaces all (g) , instances with / instances (escaped as \/ here).
s/.*/"&", / encloses the resulting line in "...", followed by ,<space>:
regex .* matches the entire pattern space (the potentially modified input line)
& in the replacement string represents that match.
$s/, $// removes the undesired trailing ,<space> from the final line ($)
tr -d '\n' then simply removes the newlines (\n) from the result, because sed invariably outputs each line with a trailing newline.
Note that the above command's single-line output will not have a trailing newline; simply append ; printf '\n' if it is needed.
In awk:
$ awk '{sub(/,/,"/");gsub(/^|$/,"\"");b=b (NR==1?"":", ")$0}END{print b}' file
"hi/bye", "hello/goodbye"
Explained:
$ awk '
{
sub(/,/,"/") # replace comma
gsub(/^|$/,"\"") # add quotes
b=b (NR==1?"":", ") $0 # buffer to add delimiters
}
END { print b } # output
' file
I'm assuming you just have 2 lines in your file? If you have alternating 2 line pairs, let me know in comments and I will expand for that general case. Here is a one-line awk conversion for you:
# NOTE: I am using the octal ascii code for the
# double quote char (\42=") in my printf statement
$ awk '{gsub(/,/,"/")}NR==1{printf("\42%s\42, ",$0)}NR==2{printf("\42%s\42\n",$0)}' file
output:
"hi/bye", "hello/goodbye"
Here is my attempt in awk:
awk 'BEGIN{ ORS = " " }{ a++; gsub(/,/, "/"); gsub(/[a-z]+\/[a-z]+/, "\"&\""); print $0; if (a == 1){ print "," }}{ if (a==2){ printf "\n"; a = 0 } }'
It also works if your input has more than two lines. If you need some explanation, feel free to ask :)

How to urlencode data for curl command?

I am trying to write a bash script for testing that takes a parameter and sends it through curl to a web site. I need to url-encode the value to make sure that special characters are processed properly. What is the best way to do this?
Here is my basic script so far:
#!/bin/bash
host=${1:?'bad host'}
value=$2
shift
shift
curl -v -d "param=${value}" http://${host}/somepath $@
Use curl --data-urlencode; from man curl:
This posts data, similar to the other --data options with the exception that this performs URL-encoding. To be CGI-compliant, the <data> part should begin with a name followed by a separator and a content specification.
Example usage:
curl \
--data-urlencode "paramName=value" \
--data-urlencode "secondParam=value" \
http://example.com
See the man page for more info.
This requires curl 7.18.0 or newer (released January 2008). Use curl -V to check which version you have.
You can as well encode the query string:
curl --get \
--data-urlencode "p1=value 1" \
--data-urlencode "p2=value 2" \
http://example.com
# http://example.com?p1=value%201&p2=value%202
Another option is to use jq:
$ printf %s 'input text'|jq -sRr @uri
input%20text
$ jq -rn --arg x 'input text' '$x|@uri'
input%20text
-r (--raw-output) outputs the raw contents of strings instead of JSON string literals. -n (--null-input) doesn't read input from STDIN.
-R (--raw-input) treats input lines as strings instead of parsing them as JSON, and -sR (--slurp --raw-input) reads the input into a single string. You can replace -sRr with -Rr if your input only contains a single line or if you don't want to replace linefeeds with %0A:
$ printf %s\\n multiple\ lines of\ text|jq -Rr @uri
multiple%20lines
of%20text
$ printf %s\\n multiple\ lines of\ text|jq -sRr @uri
multiple%20lines%0Aof%20text%0A
Or this percent-encodes all bytes:
xxd -p|tr -d \\n|sed 's/../%&/g'
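For example:
$ printf %s 'a b' | xxd -p | tr -d \\n | sed 's/../%&/g'
%61%20%62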
Here is the pure BASH answer.
Update: Since many changes have been discussed, I have placed this on https://github.com/sfinktah/bash/blob/master/rawurlencode.inc.sh for anybody to issue a PR against.
Note: This solution is not intended to encode unicode or multi-byte characters - which are quite outside BASH's humble native capabilities. It's only intended to encode symbols that would otherwise ruin argument passing in POST or GET requests, e.g. '&', '=' and so forth.
Very Important Note: DO NOT ATTEMPT TO WRITE YOUR OWN UNICODE CONVERSION FUNCTION, IN ANY LANGUAGE, EVER. See end of answer.
rawurlencode() {
local string="${1}"
local strlen=${#string}
local encoded=""
local pos c o
for (( pos=0 ; pos<strlen ; pos++ )); do
c=${string:$pos:1}
case "$c" in
[-_.~a-zA-Z0-9] ) o="${c}" ;;
* ) printf -v o '%%%02x' "'$c"
esac
encoded+="${o}"
done
echo "${encoded}" # You can either set a return variable (FASTER)
REPLY="${encoded}" #+or echo the result (EASIER)... or both... :p
}
You can use it in two ways:
easier: echo http://url/q?=$( rawurlencode "$args" )
faster: rawurlencode "$args"; echo http://url/q?${REPLY}
[edited]
Here's the matching rawurldecode() function, which - with all modesty - is awesome.
# Returns a string in which the sequences with percent (%) signs followed by
# two hex digits have been replaced with literal characters.
rawurldecode() {
# This is perhaps a risky gambit, but since all escape characters must be
# encoded, we can replace %NN with \xNN and pass the lot to printf -b, which
# will decode hex for us
printf -v REPLY '%b' "${1//%/\\x}" # You can either set a return variable (FASTER)
echo "${REPLY}" #+or echo the result (EASIER)... or both... :p
}
With the matching set, we can now perform some simple tests:
$ diff rawurlencode.inc.sh \
<( rawurldecode "$( rawurlencode "$( cat rawurlencode.inc.sh )" )" ) \
&& echo Matched
Output: Matched
And if you really really feel that you need an external tool (well, it will go a lot faster, and might do binary files and such...) I found this on my OpenWRT router...
replace_value=$(echo $replace_value | sed -f /usr/lib/ddns/url_escape.sed)
Where url_escape.sed was a file that contained these rules:
# sed url escaping
s:%:%25:g
s: :%20:g
s:<:%3C:g
s:>:%3E:g
s:#:%23:g
s:{:%7B:g
s:}:%7D:g
s:|:%7C:g
s:\\:%5C:g
s:\^:%5E:g
s:~:%7E:g
s:\[:%5B:g
s:\]:%5D:g
s:`:%60:g
s:;:%3B:g
s:/:%2F:g
s:?:%3F:g
s^:^%3A^g
s:@:%40:g
s:=:%3D:g
s:&:%26:g
s:\$:%24:g
s:\!:%21:g
s:\*:%2A:g
While it is not impossible to write such a script in BASH (probably using xxd and a very lengthy ruleset) capable of handling UTF-8 input, there are faster and more reliable ways. Attempting to decode UTF-8 into UTF-32 is a non-trivial task to do with accuracy, though very easy to do inaccurately such that you think it works until the day it doesn't.
Even the Unicode Consortium removed their sample code after discovering it was no longer 100% compatible with the actual standard.
The Unicode standard is constantly evolving, and has become extremely nuanced. Any implementation you can whip together will not be properly compliant, and if by some extreme effort you managed it, it wouldn't stay compliant.
Use Perl's URI::Escape module and uri_escape function in the second line of your bash script:
...
value="$(perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' "$2")"
...
Edit: Fix quoting problems, as suggested by Chris Johnsen in the comments. Thanks!
One of the variants; it may be ugly, but it's simple:
urlencode() {
local data
if [[ $# != 1 ]]; then
echo "Usage: $0 string-to-urlencode"
return 1
fi
data="$(curl -s -o /dev/null -w %{url_effective} --get --data-urlencode "$1" "")"
if [[ $? != 3 ]]; then
echo "Unexpected error" 1>&2
return 2
fi
echo "${data##/?}"
return 0
}
Here is the one-liner version for example (as suggested by Bruno):
date | curl -Gso /dev/null -w %{url_effective} --data-urlencode @- "" | cut -c 3-
# If you experience the trailing %0A, use
date | curl -Gso /dev/null -w %{url_effective} --data-urlencode @- "" | sed -E 's/..(.*).../\1/'
For the sake of completeness: many solutions using sed or awk only translate a special set of characters, are hence quite large by code size, and still don't translate other special characters that should be encoded.
A safe way to urlencode is to just encode every single byte, even those that would have been allowed.
echo -ne 'some random\nbytes' | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g'
xxd takes care here that the input is handled as bytes and not as characters.
edit:
xxd comes with the vim-common package in Debian, and I was just on a system where it was not installed and I didn't want to install it. The alternative is to use hexdump from the bsdmainutils package in Debian. According to the following graph, bsdmainutils and vim-common should have about equal likelihood of being installed:
http://qa.debian.org/popcon-png.php?packages=vim-common%2Cbsdmainutils&show_installed=1&want_legend=1&want_ticks=1
but nevertheless, here is a version which uses hexdump instead of xxd and avoids the tr call:
echo -ne 'some random\nbytes' | hexdump -v -e '/1 "%02x"' | sed 's/\(..\)/%\1/g'
I find it more readable in python:
encoded_value=$(python3 -c "import urllib.parse; print(urllib.parse.quote('''$value'''))")
the triple ' ensures that single quotes in value won't hurt. urllib is in the standard library. It works, for example, for this crazy (real-world) url:
"http://www.rai.it/dl/audio/" "1264165523944Ho servito il re d'Inghilterra - Puntata 7
I've found the following snippet useful to stick into a chain of program calls, where URI::Escape might not be installed:
perl -p -e 's/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg'
(source)
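For example:
$ echo -n 'a b&c' | perl -p -e 's/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg'
a%20b%26c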
If you wish to run a GET request and use pure curl, just add --get to @Jacob's solution.
Here is an example:
curl -v --get --data-urlencode "access_token=$(cat .fb_access_token)" https://graph.facebook.com/me/feed
This may be the best one:
after=$(echo -e "$before" | od -An -tx1 | tr ' ' % | xargs printf "%s")
Direct link to the awk version: http://www.shelldorado.com/scripts/cmds/urlencode
I used it for years and it works like a charm.
:
##########################################################################
# Title : urlencode - encode URL data
# Author : Heiner Steven (heiner.steven@odn.de)
# Date : 2000-03-15
# Requires : awk
# Categories : File Conversion, WWW, CGI
# SCCS-Id. : @(#) urlencode 1.4 06/10/29
##########################################################################
# Description
# Encode data according to
# RFC 1738: "Uniform Resource Locators (URL)" and
# RFC 1866: "Hypertext Markup Language - 2.0" (HTML)
#
# This encoding is used i.e. for the MIME type
# "application/x-www-form-urlencoded"
#
# Notes
# o The default behaviour is not to encode the line endings. This
# may not be what was intended, because the result will be
# multiple lines of output (which cannot be used in an URL or a
# HTTP "POST" request). If the desired output should be one
# line, use the "-l" option.
#
# o The "-l" option assumes, that the end-of-line is denoted by
# the character LF (ASCII 10). This is not true for Windows or
# Mac systems, where the end of a line is denoted by the two
# characters CR LF (ASCII 13 10).
# We use this for symmetry; data processed in the following way:
# cat | urlencode -l | urldecode -l
# should (and will) result in the original data
#
# o Large lines (or binary files) will break many AWK
# implementations. If you get the message
# awk: record `...' too long
# record number xxx
# consider using GNU AWK (gawk).
#
# o urlencode will always terminate it's output with an EOL
# character
#
# Thanks to Stefan Brozinski for pointing out a bug related to non-standard
# locales.
#
# See also
# urldecode
##########################################################################
PN=`basename "$0"` # Program name
VER='1.4'
: ${AWK=awk}
Usage () {
echo >&2 "$PN - encode URL data, $VER
usage: $PN [-l] [file ...]
-l: encode line endings (result will be one line of output)
The default is to encode each input line on its own."
exit 1
}
Msg () {
for MsgLine
do echo "$PN: $MsgLine" >&2
done
}
Fatal () { Msg "$@"; exit 1; }
set -- `getopt hl "$@" 2>/dev/null` || Usage
[ $# -lt 1 ] && Usage # "getopt" detected an error
EncodeEOL=no
while [ $# -gt 0 ]
do
case "$1" in
-l) EncodeEOL=yes;;
--) shift; break;;
-h) Usage;;
-*) Usage;;
*) break;; # First file name
esac
shift
done
LANG=C export LANG
$AWK '
BEGIN {
# We assume an awk implementation that is just plain dumb.
# We will convert an character to its ASCII value with the
# table ord[], and produce two-digit hexadecimal output
# without the printf("%02X") feature.
EOL = "%0A" # "end of line" string (encoded)
split ("1 2 3 4 5 6 7 8 9 A B C D E F", hextab, " ")
hextab [0] = 0
for ( i=1; i<=255; ++i ) ord [ sprintf ("%c", i) "" ] = i + 0
if ("'"$EncodeEOL"'" == "yes") EncodeEOL = 1; else EncodeEOL = 0
}
{
encoded = ""
for ( i=1; i<=length ($0); ++i ) {
c = substr ($0, i, 1)
if ( c ~ /[a-zA-Z0-9.-]/ ) {
encoded = encoded c # safe character
} else if ( c == " " ) {
encoded = encoded "+" # special handling
} else {
# unsafe character, encode it as a two-digit hex-number
lo = ord [c] % 16
hi = int (ord [c] / 16);
encoded = encoded "%" hextab [hi] hextab [lo]
}
}
if ( EncodeEOL ) {
printf ("%s", encoded EOL)
} else {
print encoded
}
}
END {
#if ( EncodeEOL ) print ""
}
' "$@"
Here's a Bash solution which doesn't invoke any external programs:
uriencode() {
s="${1//'%'/%25}"
s="${s//' '/%20}"
s="${s//'"'/%22}"
s="${s//'#'/%23}"
s="${s//'$'/%24}"
s="${s//'&'/%26}"
s="${s//'+'/%2B}"
s="${s//','/%2C}"
s="${s//'/'/%2F}"
s="${s//':'/%3A}"
s="${s//';'/%3B}"
s="${s//'='/%3D}"
s="${s//'?'/%3F}"
s="${s//'@'/%40}"
s="${s//'['/%5B}"
s="${s//']'/%5D}"
printf %s "$s"
}
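For example, with the function above loaded in the current shell:
$ uriencode 'foo bar&baz'
foo%20bar%26baz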
url=$(echo "$1" | sed -e 's/%/%25/g' -e 's/ /%20/g' -e 's/!/%21/g' -e 's/"/%22/g' -e 's/#/%23/g' -e 's/\$/%24/g' -e 's/\&/%26/g' -e 's/'\''/%27/g' -e 's/(/%28/g' -e 's/)/%29/g' -e 's/\*/%2a/g' -e 's/+/%2b/g' -e 's/,/%2c/g' -e 's/-/%2d/g' -e 's/\./%2e/g' -e 's/\//%2f/g' -e 's/:/%3a/g' -e 's/;/%3b/g' -e 's/</%3c/g' -e 's/>/%3e/g' -e 's/?/%3f/g' -e 's/@/%40/g' -e 's/\[/%5b/g' -e 's/\\/%5c/g' -e 's/\]/%5d/g' -e 's/\^/%5e/g' -e 's/_/%5f/g' -e 's/`/%60/g' -e 's/{/%7b/g' -e 's/|/%7c/g' -e 's/}/%7d/g' -e 's/~/%7e/g')
This will encode the string inside $1 and put it in $url, although you don't have to store it in a variable if you don't want to. BTW, I didn't include a sed rule for tab; I thought it would turn it into spaces.
Using php from a shell script:
value="http://www.google.com"
encoded=$(php -r "echo rawurlencode('$value');")
# encoded = "http%3A%2F%2Fwww.google.com"
echo $(php -r "echo rawurldecode('$encoded');")
# returns: "http://www.google.com"
http://www.php.net/manual/en/function.rawurlencode.php
http://www.php.net/manual/en/function.rawurldecode.php
If you don't want to depend on Perl you can also use sed. It's a bit messy, as each character has to be escaped individually. Make a file with the following contents and call it urlencode.sed
s/%/%25/g
s/ /%20/g
s/ /%09/g
s/!/%21/g
s/"/%22/g
s/#/%23/g
s/\$/%24/g
s/\&/%26/g
s/'\''/%27/g
s/(/%28/g
s/)/%29/g
s/\*/%2a/g
s/+/%2b/g
s/,/%2c/g
s/-/%2d/g
s/\./%2e/g
s/\//%2f/g
s/:/%3a/g
s/;/%3b/g
s/</%3c/g
s/>/%3e/g
s/?/%3f/g
s/@/%40/g
s/\[/%5b/g
s/\\/%5c/g
s/\]/%5d/g
s/\^/%5e/g
s/_/%5f/g
s/`/%60/g
s/{/%7b/g
s/|/%7c/g
s/}/%7d/g
s/~/%7e/g
s/ /%09/g
To use it do the following.
STR1=$(echo "https://www.example.com/change&$ ^this to?%checkthe#-functionality" | cut -d\? -f1)
STR2=$(echo "https://www.example.com/change&$ ^this to?%checkthe#-functionality" | cut -d\? -f2)
OUT2=$(echo "$STR2" | sed -f urlencode.sed)
echo "$STR1?$OUT2"
This will split the string into a part that needs encoding and a part that is fine, encode the part that needs it, then stitch it back together.
You can put that into a sh script for convenience, maybe have it take a parameter to encode, put it on your path and then you can just call:
urlencode https://www.exxample.com?isThisFun=HellNo
source
You can emulate javascript's encodeURIComponent in perl. Here's the command:
perl -pe 's/([^a-zA-Z0-9_.!~*()'\''-])/sprintf("%%%02X", ord($1))/ge'
You could set this as a bash alias in .bash_profile:
alias encodeURIComponent='perl -pe '\''s/([^a-zA-Z0-9_.!~*()'\''\'\'''\''-])/sprintf("%%%02X",ord($1))/ge'\'
Now you can pipe into encodeURIComponent:
$ echo -n 'hèllo wôrld!' | encodeURIComponent
h%C3%A8llo%20w%C3%B4rld!
Python 3 based on @sandro's good answer from 2010:
echo "Test & /me" | python -c "import urllib.parse;print (urllib.parse.quote(input()))"
Test%20%26%20/me
This nodejs-based answer will use encodeURIComponent on stdin:
uriencode_stdin() {
node -p 'encodeURIComponent(require("fs").readFileSync(0))'
}
echo -n $'hello\nwörld' | uriencode_stdin
hello%0Aw%C3%B6rld
For those of you looking for a solution that doesn't need perl, here is one that only needs hexdump and awk:
url_encode() {
[ $# -lt 1 ] && { return; }
encodedurl="$1";
# make sure hexdump exists, if not, just give back the url
[ ! -x "/usr/bin/hexdump" ] && { return; }
encodedurl=`
echo $encodedurl | hexdump -v -e '1/1 "%02x\t"' -e '1/1 "%_c\n"' |
LANG=C awk '
$1 == "20" { printf("%s", "+"); next } # space becomes plus
$1 ~ /0[adAD]/ { next } # strip newlines
$2 ~ /^[a-zA-Z0-9.*()\/-]$/ { printf("%s", $2); next } # pass through what we can
{ printf("%%%s", $1) } # take hex value of everything else
'`
}
Stitched together from a couple of places across the net and some local trial and error. It works great!
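A quick illustrative test, assuming the function above has been sourced (note that it sets the variable encodedurl rather than printing):
url_encode 'a b&c'
echo "$encodedurl"    # a+b%26c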
uni2ascii is very handy:
$ echo -ne '你好世界' | uni2ascii -aJ
%E4%BD%A0%E5%A5%BD%E4%B8%96%E7%95%8C
Simple PHP option:
echo 'part-that-needs-encoding' | php -R 'echo urlencode($argn);'
What would parse URLs better than javascript?
node -p "encodeURIComponent('$url')"
Here is a POSIX function to do that:
url_encode() {
awk 'BEGIN {
for (n = 0; n < 125; n++) {
m[sprintf("%c", n)] = n
}
n = 1
while (1) {
s = substr(ARGV[1], n, 1)
if (s == "") {
break
}
t = s ~ /[[:alnum:]_.!~*\47()-]/ ? t s : t sprintf("%%%02X", m[s])
n++
}
print t
}' "$1"
}
Example:
value=$(url_encode "$2")
The question is about doing this in bash and there's no need for python or perl as there is in fact a single command that does exactly what you want - "urlencode".
value=$(urlencode "${2}")
This is also much better, as the above perl answer, for example, doesn't encode all characters correctly. Try it with the long dash you get from Word and you get the wrong encoding.
Note, you need "gridsite-clients" installed to provide this command:
sudo apt install gridsite-clients
Here's the node version:
uriencode() {
node -p "encodeURIComponent('${1//\'/\\\'}')"
}
Another php approach:
echo "encode me" | php -r "echo urlencode(file_get_contents('php://stdin'));"
Here is my version for the busybox ash shell on an embedded system; I originally adopted Orwellophile's variant:
urlencode()
{
local S="${1}"
local encoded=""
local ch
local o
for i in $(seq 0 $((${#S} - 1)) )
do
ch=${S:$i:1}
case "${ch}" in
[-_.~a-zA-Z0-9])
o="${ch}"
;;
*)
o=$(printf '%%%02x' "'$ch")
;;
esac
encoded="${encoded}${o}"
done
echo ${encoded}
}
urldecode()
{
# urldecode <string>
local url_encoded="${1//+/ }"
printf '%b' "${url_encoded//%/\\x}"
}
Ruby, for completeness
value="$(ruby -r cgi -e 'puts CGI.escape(ARGV[0])' "$2")"
Here's a one-line conversion using Lua, similar to blueyed's answer except with all the RFC 3986 Unreserved Characters left unencoded (like this answer):
url=$(echo 'print((arg[1]:gsub("([^%w%-%.%_%~])",function(c)return("%%%02X"):format(c:byte())end)))' | lua - "$1")
Additionally, you may need to ensure that newlines in your string are converted from LF to CRLF, in which case you can insert a gsub("\r?\n", "\r\n") in the chain before the percent-encoding.
Here's a variant that, in the non-standard style of application/x-www-form-urlencoded, does that newline normalization, as well as encoding spaces as '+' instead of '%20' (which could probably be added to the Perl snippet using a similar technique).
url=$(echo 'print((arg[1]:gsub("\r?\n", "\r\n"):gsub("([^%w%-%.%_%~ ])",function(c)return("%%%02X"):format(c:byte())end):gsub(" ","+")))' | lua - "$1")
In this case, I needed to URL encode the hostname. Don't ask why. Being a minimalist, and a Perl fan, here's what I came up with.
url_encode()
{
echo -n "$1" | perl -pe 's/[^a-zA-Z0-9\/_.~-]/sprintf "%%%02x", ord($&)/ge'
}
Works perfectly for me.
