Split a string on \r\n using IFS in bash

I would like to split a string containing \r\n in bash, but the carriage return and \n are giving me trouble. Can anyone give me a hint for a different IFS? I tried IFS=' |\' too.
input:
projects.google.tests.inbox.document_01\r\nprojects.google.tests.inbox.document_02\r\nprojects.google.tests.inbox.global_02
Code:
IFS=$'\r'
inputData="projects.google.tests.inbox.document_01\r\nprojects.google.tests.inbox.document_02\r\nprojects.google.tests.inbox.global_02"
for line1 in ${inputData}; do
    line2=`echo "${line1}"`
    echo ${line2}   # expected: one entry per iteration
done
Expected:
projects.google.tests.inbox.document_01
projects.google.tests.inbox.document_02
projects.google.tests.inbox.global_02

inputData=$'projects.google.tests.inbox.document_01\r\nprojects.google.tests.inbox.document_02\r\nprojects.google.tests.inbox.global_02'
while IFS= read -r line; do
    line=${line%$'\r'}
    echo "$line"
done <<<"$inputData"
Note:
The string is defined as string=$'foo\r\n', not string="foo\r\n". The latter does not put an actual CRLF sequence in your variable. See ANSI C-like strings on the bash-hackers' wiki for a description of this syntax.
${line%$'\r'} is a parameter expansion which strips a literal carriage return off the end of the contents of the variable line, should one exist; a short demonstration follows these notes.
The practice for reading an input stream line-by-line (used here) is described in detail in BashFAQ #1. Unlike iterating with for, it does not attempt to expand your data as globs.
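For instance, taking just the first entry from the question (the lengths shown are only for illustration):
line=$'projects.google.tests.inbox.document_01\r'   # $'...' stores a real carriage return
echo "${#line}"      # 40 -- the trailing CR is counted
line=${line%$'\r'}   # strip one trailing CR, if present
echo "${#line}"      # 39 -- the CR is gone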

The following awk could help you here.
awk '{gsub(/\\r\\n/,RS)} 1' Input_file
OR
echo "$var" | awk '{gsub(/\\r\\n/,RS)} 1'
Output will be as follows.
projects.google.tests.inbox.document_01
projects.google.tests.inbox.document_02
projects.google.tests.inbox.global_02
Explanation: awk's gsub function performs global substitution; its form is gsub(/regex/, replacement, target), where target defaults to the current line. Here I substitute \\r\\n (note that the backslashes are escaped, so the pattern matches the literal characters \r\n in the text) with RS, the record separator, whose default value is a newline. The trailing 1 works because awk programs are condition-action pairs: 1 is an always-true condition with no action, so the default action, printing the current line, takes place.
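As a hedged aside on why the backslashes are escaped in the pattern, consider a shortened variable (illustrative only):
probe="a\r\nb"     # double quotes: 6 characters, two of them literal backslashes
echo "${#probe}"   # 6
probe=$'a\r\nb'    # ANSI-C quoting: 4 characters, a real CR and a real LF
echo "${#probe}"   # 4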
EDIT: With a variable you could use the following.
var="projects.google.tests.inbox.document_01\r\nprojects.google.tests.inbox.document_02\r\nprojects.google.tests.inbox.global_02"
echo "$var" | awk '{gsub(/\\r\\n/,RS)} 1'
projects.google.tests.inbox.document_01
projects.google.tests.inbox.document_02
projects.google.tests.inbox.global_02

Related

bash - IFS changes behavior of echo -n in for loop

I have code that requires a response within a for loop.
Prior to the loop I set IFS="\n"
Within the loop echo -n is ignored (except for the last line).
Note: this is just an example of the behavior of echo -n
Example:
IFS='\n'
for line in `cat file`
do
    echo -n $line
done
This outputs:
this is a test
this is a test
this is a test$
with the user prompt occurring only at the end of the last line.
Why is this occurring, and is there a fix?
Neither IFS="\n" nor IFS='\n' set $IFS to a newline; instead they set it to literal \ followed by literal n.
You'd have to use an ANSI C-quoted string in order to assign an actual newline: IFS=$'\n'; alternatively, you could use a normal string literal that contains an actual newline (spans 2 lines).
Assigning literal \n had the effect that the output from cat file was not split into lines, because an actual newline was not present in $IFS; potentially - though not with your sample file content - the output could have been split into fields by embedded \ and n characters.
Without either, the entire file contents were passed at once, resulting in a single iteration of your for loop.
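A minimal illustration of the difference, mirroring the question's (otherwise ill-advised) approach; demo.txt and its contents are hypothetical:
printf 'alpha\nbeta\ngamma\n' > demo.txt   # hypothetical sample file
IFS='\n'     # two characters: a backslash and the letter n
for x in $(cat demo.txt); do echo -n "[$x]"; done; echo
# a single iteration: the brackets enclose the whole three-line block, since newlines were never separators
IFS=$'\n'    # one character: an actual newline
for x in $(cat demo.txt); do echo -n "[$x]"; done; echo
# prints [alpha][beta][gamma]: three iterations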
That said, your approach to looping over lines from a file is ill-advised; try something like the following instead:
while IFS= read -r line; do
echo -n "$line"
done < file
Never use for loops when parsing files in bash. Use while loops instead. Here is a really good tutorial on that.
http://mywiki.wooledge.org/BashFAQ/001

Understanding sed command

Please excuse if the question is too naive. I am new to shell scripting and am not able to find any good resource to understand the specifics. I am trying to make sense of a legacy script. Please can someone tell me what the following command does:
sed "s#s3AtlasExtractName#$i#g" load_xyz.sql >> load_abc.sql;
This command will replace all occurrences of s3AtlasExtractName with whatever $i is.
s - Substitute
# - Delimiter
s3AtlasExtractName - Word that needs substituting
# - Delimiter
$i - the variable i, whose value replaces s3AtlasExtractName
# - Delimiter
g - Global: replace all instances of s3AtlasExtractName on each line, not just the first occurrence
So this will parse through load_xyz.sql and change all occurrences of s3AtlasExtractName to the value of $i and append the whole of the contents of load_xyz.sql to a file called load_abc.sql with the sed substitutions.
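For example, such a line typically sits inside a loop that supplies $i; the loop below is purely hypothetical (the extract names are made up) and only shows where the value would come from:
for i in extract_customers extract_orders; do   # hypothetical values of $i
    # replace every s3AtlasExtractName in the template with the current $i,
    # appending the result to load_abc.sql
    sed "s#s3AtlasExtractName#$i#g" load_xyz.sql >> load_abc.sql
done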
sed is a command line stream editor. You can find information about it here:
http://www.computerhope.com/unix/used.htm
An easy example is shown below where sed is used to replace the word "test" with the word "example" in myfile.txt but output is sent to newfile.txt
sed 's/test/example/g' myfile.txt > newfile.txt
It seems that your script is performing a similar function by replacing content in the load_xyz.sql file and storing it in a new file, load_abc.sql. Without more code I am just guessing, but it seems that the parameter $i could be used as a counter to insert similar but new values into the load_abc.sql file.
In short, this reads load_xyz.sql and replaces every occurrence of "s3AtlasExtractName" by whatever has been stored in the shell variable "i".
The long version is that sed accepts many subcommands with different formats. Any "simple" sed command will look like sed '<subcommand>' <inputfile>. The first letter of the subcommand tells you which operation sed is going to perform on your files.
The "s" operation stands for "substitution" and is the most commonly used. It is followed by a Perl-like regexp: separator, regexp to look for, separator, value to substitute, separator, PREG flags. In your case, the separator is '#' which is pretty unusual but not forbidden, so the command substitues '$i' to every instance of 's3AtlasExtractName'. The 'g' PREG flag tells sed to replace every occurrence of the pattern (the default is to only replace its first occurrence on every line in the input).
Finally, the use of "$i" inside a double-quote-delimited string tells the shell to actually expand the shell variable 'i' so you'll want to look for a shell statement setting that (possibly a 'for' statement).
Hope this helps.
edit: I focused on the 'sed' part and kinda missed the redirection part. The '>>' token tells the shell to take the output of the sed command (i.e. the contents of load_xyz.sql with all occurrences of s3AtlasExtractName replaced by the contents of $i) and append it to the file 'load_abc.sql'.

What defines a "column" in bash? In awk?

I was viewing this question: Bash - Take nth column in a text file
I want to make a function that writes to a textfile that I can then parse using the method above. So, for example, I want my function to write 'dates' in the first column, 'ID's in the second column, and 'addresses' in the third column. Then once I have this, a user could, for example, see if a certain ID is present in the file by querying for the second column, then looking at each item there. The user could do this using the method discussed in the question above.
What defines a column? Is it just a space delimiter? Is it a tab?
If I want to output this information as stated above, what would the method where I write to the file look like? So far I have:
cat "$DATE $ID $ADDRESS \n" > myfile.data
In bash, as opposed to awk, columns are separated by characters in IFS.
That is to say, if you set:
IFS=$'\t'
...then columns, as understood by bash builtins such as read first second rest, will be separated by tabs. On the output side, printf '%s\n' "${array[*]}" will print the items in the array array separated by the first character of IFS.
The default value of IFS is equivalent to $' \t\n' -- that is, the space, the tab, and the newline character.
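A small illustration of that output-side behavior (the array name and values are just for the demo):
entry=( 2024-01-01 ID42 "221B Baker St" )
IFS=$'\t'
printf '%s\n' "${entry[*]}"   # the three items joined by tabs, the first character of IFS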
To write a file with a delimiter of your choice, and (presumably) more than one row (replace the while read with however you're actually getting your data, or only use the inside of the loop if you're only writing one line):
while read -r date id address; do
printf '%s\t' "$date" "$id" "$address" >&3; printf '\n' >&3
done 3>filename
...or, if you don't want the trailing tab left by the above:
IFS=$'\t' # use a tab as the field separator for output
while IFS=$' \t\n' read -r date id address; do
entry=( "$date" "$id" "$address" )
printf '%s\n' "${entry[*]}" >&3
done 3>filename
Putting 3>filename on the outside of the loop is much more efficient than >>filename on each line, which re-opens the output file once per line written.
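To see the difference at a glance (input and outfile are placeholder names):
# opens outfile once for the whole loop:
while read -r line; do printf '%s\n' "$line" >&3; done <input 3>outfile
# re-opens outfile once per line written:
while read -r line; do printf '%s\n' "$line" >>outfile; done <input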
If you're going to use awk, the columns are separated by the field separator. See FS in man awk for details.
Most tools support some ways of changing the column separator:
cut -f
sort -t
bash itself uses the IFS variable (Internal Field Separator) for word splitting.
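For example, with a hypothetical tab-separated file.tsv:
cut -d $'\t' -f 2 file.tsv    # print the second column (tab is also cut's default delimiter)
sort -t $'\t' -k 2 file.tsv   # sort by the second column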
cat expects a file as an argument. To output a string, use echo instead.
If we are talking about awk then the space character is the default column separator.
It's easy to change what is used as the "Field Separator" (FS) when awk parses a file: awk -F',' '{print $2}' (equivalently awk 'BEGIN{FS=","} {print $2}') will use a comma as the separator (note: it does not respect quotes and the like as a real CSV parser would).
To write to the file I would use echo and the >> (append) redirection operator.
>> appends whereas > rewrites the file.
echo -e will let echo recognize \n and similar special chars
So the command would be
echo -e "$DATE $ID $ADDRESS \n" >> myfile.data

How to do string separation using using keyword?

I have the STRING given below. There is no specific separator between the keys; the only way to identify them is by the keyword "key_1", "key_2", etc.
All keys begin with "key_" and can never appear in the value of another:
STRING="key_1=mislanious_string1 key_2=miscellaneous_string2"
I want the output as below.
echo $STRING1 should print:
key_1=mislanious_string1
echo $STRING2 should print:
key_2=miscellaneous_string2
e.g:
If STRING="key_1=foobarzkey_2=bash" , then the output should look like , STRING1=key_1=foobarz and STRING2=key_2=bash.
There may be more keys, like key_1, key_2, key_3, etc. Each key starts with "key_" and can never appear in the value of another.
How do I do this in the UNIX bash shell?
Using grep -P (PCRE) to support multiple key-value pairs in input:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2key_3=fookey_4=BASH"
grep -oP 'key_[^=]+=.*?(?=key_|$)' <<< "$STRING"
key_1=mislanious_string1
key_2=miscellaneous_string2
key_3=foo
key_4=BASH
To store them into a BASH array you can use:
read -d '' -ra arr < <(grep -oP 'key_[^=]+=.*?(?=key_|$)' <<< "$STRING")
printf "%s\n" "${arr[#]}"
key_1=mislanious_string1
key_2=miscellaneous_string2
key_3=foo
key_4=BASH
declare -p arr
declare -a arr='([0]="key_1=mislanious_string1" [1]="key_2=miscellaneous_string2" [2]="key_3=foo" [3]="key_4=BASH")'
UPDATE: Here is a pure BASH (non-GNU) way of splitting these strings. We first insert an invisible character before every occurrence of key_ and then use that character to split the string:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2key_3=fookey_4=BASH"
c=$'\x06'
s="${STRING//key_/${c}key_}"
arr=()
while [[ "$s" =~ ${c}(key_[^=]+=[^${c}]+)(.*) ]]; do
    arr+=( "${BASH_REMATCH[1]}" )
    s="${BASH_REMATCH[2]}"
done
Then to test:
printf "<%s>\n" "${arr[#]}"
<key_1=mislanious_string1>
<key_2=miscellaneous_string2>
<key_3=foo>
<key_4=BASH>
I like anubhava's grep -oP solution best. Here's an awk solution:
STRING="key_15=foobarzkey_3=bash"
awk -v RS="key_" 'NR>1{split($0, a, /=/); print "STRING" a[1] "=" RS $0}' <<< "$STRING"
STRING15=key_15=foobarz
STRING3=key_3=bash
So, to create that output as shell variables
eval $(awk -v RS="key_" 'NR>1{split($0, a, /=/); print "STRING" a[1] "=" RS $0}' <<< "$STRING")
echo $STRING3 # => key_3=bash
echo $STRING15 # => key_15=foobarz
This answer originally didn't recognize keys not preceded by whitespace. This has been fixed. In its current form this answer provides value as a portable solution. If you disagree, please let me know.
The answers provided by Glenn Jackman and anubhava are helpful, but use GNU extensions not available on all platforms (grep -P, awk with a multi-char. RS value).
Here's a POSIX-compliant sed solution that should work on most platforms, using either bash, ksh, or zsh as the shell:
str='key_1=mislanious_string1 key_2=miscellaneous_string2key_3=last'
while read -r varDef; do
    [[ -n $varDef ]] && typeset "$varDef"
done < <(sed 's/\(key_\([0-9]\{1,\}\)=\)/\'$'\n''string\2=\1/g' <<<"$str")
# Print the variables created ($string1, $string2, $string3).
typeset -p ${!string@}
Note that lowercase variable names (string1, ...) are used so as to prevent potential conflicts with environment variables.
sed is used to split the string into key-value tokens, each on its own line, preceded by the desired target variable name and =, effectively outputting shell variable assignments; e.g., for key_1, the sed command outputs:
string1=key_1=mislanious_string1 
The while loop then reads each output line and uses typeset to declare and assign the variable (note that typeset was chosen for ksh compatibility - while typeset also works in bash and zsh you'd typically use declare there); [[ -n $varDef ]] ignores the empty line that the sed output starts with.
Note: This solution trims trailing whitespace from values, consistent with the example in the question. This trimming happens due to use of read with the default $IFS value (internal field separators) - to preserve trailing whitespace, simply use IFS= read instead of just read.
Also note that the use of process substitution to provide input (while ... done < <(sed ...)), as opposed to a pipeline (sed ... | while ...), is required to ensure that the variables are defined in the current shell rather than in a subshell, which would leave them invisible to the current shell.
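To see why that matters, here is a minimal contrast; the variable x is only for the demo:
printf 'x=1\n' | while read -r def; do typeset "$def"; done
echo "${x:-unset}"   # unset in bash -- the pipeline ran the loop in a subshell (ksh/zsh run the last segment in the current shell)
while read -r def; do typeset "$def"; done < <(printf 'x=1\n')
echo "${x:-unset}"   # 1 -- process substitution keeps the loop in the current shell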
Some background info on what makes the above sed command POSIX-compliant:
POSIX only mandates basic regular expressions for sed, which takes away many features (e.g., quantifiers ? and +, alternation (|)) and makes escaping more cumbersome (e.g., ( and ) must be \-escaped).
POSIX sed also doesn't support escape sequences such as \n in replacement strings passed to s, so ANSI-C quoting is used to splice an \-escaped actual newline into the replacement string using $'\n'.
As an example of how useful the non-POSIX GNU sed extensions are, here's an equivalent command taking full advantage of GNU sed's features (extended regular expressions, support for \n), resulting in a shorter and more readable command:
sed -r 's/(key_([0-9]+)=)/\nstring\2=\1/g' <<<"$str"
Sometimes the simplest solution can be overlooked:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2"
read STRING1 STRING2<<<${STRING//key_/ key_}
echo $STRING1
echo $STRING2

How do you escape a user-provided search term that you don't want evaluated for sed?

I'm trying to escape a user-provided search string that can contain any arbitrary character and give it to sed, but can't figure out how to make it safe for sed to use. In sed, we do s/search/replace/, and I want to search for exactly the characters in the search string without sed interpreting them (e.g., the '/' in 'my/path' would not close the sed expression).
I read this related question concerning how to escape the replace term. I would have thought you'd do the same thing to the search, but apparently not because sed complains.
Here's a sample program that creates a file called "my_searches". Then it reads each line of that file and performs a search and replace using sed.
#!/bin/bash
# The contents of this heredoc will be the lines of our file.
read -d '' SAMPLES << 'EOF'
/usr/include
P@$$W0RD$?
"I didn't", said Jane O'Brien.
`ls -l`
~!@#$%^&*()_+-=:'}{[]/.,`"\|
EOF
echo "$SAMPLES" > my_searches
# Now for each line in the file, do some search and replace
while read line
do
echo "------===[ BEGIN $line ]===------"
# Escape every character in $line (e.g., ab/c becomes \a\b\/\c). I got
# this solution from the accepted answer in the linked SO question.
ES=$(echo "$line" | awk '{gsub(".", "\\\\&");print}')
# Search for the line we read from the file and replace it with
# the text "replaced"
sed 's/'"$ES"'/replaced/' < my_searches # Does not work
# Search for the text "Jane" and replace it with the line we read.
sed 's/Jane/'"$ES"'/' < my_searches # Works
# Search for the line we read and replace it with itself.
sed 's/'"$ES"'/'"$ES"'/' < my_searches # Does not work
echo "------===[ END ]===------"
echo
done < my_searches
When you run the program, you get sed: xregcomp: Invalid content of \{\} for the last line of the file when it's used as the 'search' term, but not the 'replace' term. I've marked the lines that give this error with # Does not work above.
------===[ BEGIN ~!@#$%^&*()_+-=:'}{[]/.,`"| ]===------
sed: xregcomp: Invalid content of \{\}
------===[ END ]===------
If you don't escape the characters in $line (i.e., sed 's/'"$line"'/replaced/' < my_searches), you get this error instead because sed tries to interpret various characters:
------===[ BEGIN ~!@#$%^&*()_+-=:'}{[]/.,`"| ]===------
sed: bad format in substitution expression
sed: No previous regexp.
------===[ END ]===------
So how do I escape the search term for sed so that the user can provide any arbitrary text to search for? Or more precisely, what can I replace the ES= line in my code with so that the sed command works for arbitrary text from a file?
I'm using sed because I'm limited to a subset of utilities included in busybox. Although I can use another method (like a C program), it'd be nice to know for sure whether or not there's a solution to this problem.
This is a relatively famous problem—given a string, produce a pattern that matches only that string. It is easier in some languages than others, and sed is one of the annoying ones. My advice would be to avoid sed and to write a custom program in some other language.
You could write a custom C program, using the standard library function strstr. If this is not fast enough, you could use any of the Boyer-Moore string matchers you can find with Google—they will make search extremely fast (sublinear time).
You could write this easily enough in Lua:
local function quote(s) return (s:gsub('%W', '%%%1')) end
local function replace(first, second, s)
    return (s:gsub(quote(first), second))
end
for l in io.lines() do io.write(replace(arg[1], arg[2], l), '\n') end
If it is not fast enough, speed things up by applying quote to arg[1] only once, and inline the function replace.
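Since the question mentions being limited to busybox, here is an alternative sketch (not part of the Lua answer above) that does a literal, regex-free replacement in POSIX/busybox awk with index() and substr(). ENVIRON is used instead of -v so backslashes in the user-supplied strings are not reinterpreted; SEARCH and REPLACE are placeholder names, and $line is assumed to be the search text from the question's while loop:
SEARCH="$line" REPLACE='replaced' awk '
    BEGIN { s = ENVIRON["SEARCH"]; r = ENVIRON["REPLACE"] }
    {
        out = ""
        rest = $0
        # replace every literal occurrence of s in the line, no regex involved
        while (s != "" && (i = index(rest, s)) > 0) {
            out = out substr(rest, 1, i - 1) r
            rest = substr(rest, i + length(s))
        }
        print out rest
    }' < my_searches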
As ghostdog mentioned, awk '{gsub(".", "\\\\&");print}' is incorrect because it escapes out non-special characters. What you really want to do is perhaps something like:
awk 'gsub(/[^[:alpha:]]/, "\\\\&")'
This will escape out non-alpha characters. For some reason I have yet to determine, I still can't replace "I didn't", said Jane O'Brien. even though my code above correctly escapes it to
\"I\ didn\'t\"\,\ said\ Jane\ O\'Brien\.
It's quite odd because this works perfectly fine
$ echo "\"I didn't\", said Jane O'Brien." | sed s/\"I\ didn\'t\"\,\ said\ Jane\ O\'Brien\./replaced/
replaced
this : echo "$line" | awk '{gsub(".", "\\\\&");print}' escapes every character in $line, which is wrong!. do an echo $ES after that and $ES appears to be \/\u\s\r\/\i\n\c\l\u\d\e. Then when you pass to the next sed, (below)
sed 's/'"$ES"'/replaced/' my_searches
it will not work because there is no line that has the pattern \/\u\s\r\/\i\n\c\l\u\d\e. The correct way is something like:
$ sed 's|\([@$#^&*!~+-={}/]\)|\\\1|g' file
\/usr\/include
P\@\$\$W0RD\$?
"I didn't", said Jane O'Brien.
\`ls -l\`
\~\!\@\#\$%\^\&\*()_\+-\=:'\}\{[]\/.,\`"\|
You put all the characters you want escaped inside [], and choose a suitable delimiter for sed that is not in your character class; e.g., I chose "|". Then use the "g" (global) flag.
Tell us what you are actually trying to do, i.e., the actual problem you are trying to solve.
This seems to work for FreeBSD sed:
# using FreeBSD & Mac OS X sed
ES="$(printf "%q" "${line}")"
ES="${ES//+/\\+}"
sed -E s$'\777'"${ES}"$'\777'replaced$'\777' < my_searches
sed -E s$'\777'Jane$'\777'"${line}"$'\777' < my_searches
sed -E s$'\777'"${ES}"$'\777'"${line}"$'\777' < my_searches
The -E option of FreeBSD sed is used to turn on extended regular expressions.
The same is available for GNU sed via the -r or --regexp-extended options respectively.
For the differences between basic and extended regular expressions see, for example:
http://www.gnu.org/software/sed/manual/sed.html#Extended-regexps
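A tiny contrast between the two flavours (not tied to the commands above):
echo aab | sed 's/\(a\)\{1,\}/X/'   # BRE: groups and intervals must be backslash-escaped -> Xb
echo aab | sed -E 's/(a)+/X/'       # ERE: bare (), + and ? are available -> Xb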
Maybe you can use FreeBSD-compatible minised instead of GNU sed?
# example using FreeBSD-compatible minised,
# http://www.exactcode.de/site/open_source/minised/
# escape some punctuation characters with printf
help printf
printf "%s\n" '!"#$%&'"'"'()*+,-./:;<=>?#[\]^_`{|}~'
printf "%q\n" '!"#$%&'"'"'()*+,-./:;<=>?#[\]^_`{|}~'
# example line
line='!"#$%&'"'"'()*+,-./:;<=>?#[\]^_`{|}~ ... and Jane ...'
# escapes in regular expression
ES="$(printf "%q" "${line}")" # escape some punctuation characters
ES="${ES//./\\.}" # . -> \.
ES="${ES//\\\\(/(}" # \( -> (
ES="${ES//\\\\)/)}" # \) -> )
# escapes in replacement string
lineEscaped="${line//&/\&}" # & -> \&
minised s$'\777'"${ES}"$'\777'REPLACED$'\777' <<< "${line}"
minised s$'\777'Jane$'\777'"${lineEscaped}"$'\777' <<< "${line}"
minised s$'\777'"${ES}"$'\777'"${lineEscaped}"$'\777' <<< "${line}"
To avoid potential backslash confusion, we could (or rather should) use a backslash variable like so:
backSlash='\\'
ES="${ES//${backSlash}(/(}" # \( -> (
ES="${ES//${backSlash})/)}" # \) -> )
(By the way using variables in such a way seems like a good approach for tackling parameter expansion issues ...)
... or to complete the backslash confusion ...
backSlash='\\'
lineEscaped="${line//${backSlash}/${backSlash}}" # double backslashes
lineEscaped="${lineEscaped//&/\&}" # & -> \&
If you have bash, and you're just doing a pattern replacement, just do it natively in bash. The ${parameter/pattern/string} expansion in Bash will work very well for you, since you can just use a variable in place of the "pattern" and replacement "string" and the variable's contents will be safe from word expansion. And it's that word expansion which makes piping to sed such a hassle. :)
It'll be faster than forking a child process and piping to sed anyway. You already know how to do the whole while read line thing, so creatively applying the capabilities in Bash's existing parameter expansion documentation can help you reproduce pretty much anything you can do with sed. Check out the bash man page to start...
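A minimal sketch of that native approach, reusing the my_searches file from the question; the search and replacement values are placeholders:
search='~!@#$%^&*()_+-=:'    # arbitrary user-provided text (placeholder)
replace='replaced'
while IFS= read -r line; do
    # quoting "$search" inside the expansion makes it a literal string, not a glob pattern
    printf '%s\n' "${line//"$search"/$replace}"
done < my_searches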
