Referencing ANSI escape sequences in environment variables within awk - bash

I have a source file that defines ANSI escape codes for different colors. The file is sourced in a bash shell script, which also starts an awk script, and the environment variables are referenced in awk.
However, I'm not getting the output I want, i.e. the colors.
Examples in bash:
export Red='\033[0;31m'
export Color_Off='\033[0m'
# This works, Output is "Hello" in Red
echo $Red Hello $Color_Off
Examples in awk (the envvars are still exported/set):
# This does not work
$ awk 'BEGIN { print "Output: " ENVIRON["Red"] "Hello" ENVIRON["Color_Off"] }'
Output: \033[0;31mHello\033[0m
# This works, Output is "Hello" in Red
awk 'BEGIN { R="\033[0;31m" ; O="\033[0m" ; print R "Hello" O }'
I'm assuming the answer is lying right there in front of me, but I'm failing to see it just now.

There is a way to achieve this, but you need to declare the color escape sequences slightly differently. Use ANSI-C quoting ($'...') so that the variable contains the actual escape character, instead of storing a backslash-led string and relying on something like echo -e to expand it later:
export Red=$'\e[0;31m'
export Color_Off=$'\e[0m'
awk 'BEGIN { print "Output: " ENVIRON["Red"] "Hello" ENVIRON["Color_Off"] " Bye" }'
This should work as expected. I also believe this is the superior way to declare colors (for instance, it's done this way in Zsh's colors contrib function).
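You can confirm the difference with printf %q (a quick check, assuming bash; the variable name Literal below is just for illustration): the $'...' form stores a real escape byte, while the plain-quoted form stores only backslash text:
$ printf '%q\n' "$Red"      # $'\E[0;31m' -- contains a real ESC byte
$ Literal='\033[0;31m'
$ printf '%q\n' "$Literal"  # \\033\[0\;31m -- just literal characters, no ESC byte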

Try the following awk command instead (slightly modified from yours):
$ awk -v R="$Red" -v O="$Color_Off" 'BEGIN { print R "Hello" O }'
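Note that this works even when $Red still holds the backslash form rather than a real escape byte, because awk itself expands C-style escape sequences such as \033 in -v assignments, whereas ENVIRON[] passes the value through untouched:
$ Red='\033[0;31m' awk 'BEGIN { print ENVIRON["Red"] "Hello" }'   # prints \033[0;31mHello literally
$ awk -v R='\033[0;31m' 'BEGIN { print R "Hello" }'               # prints "Hello" in red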

Related

zsh sed expanding a variable with special characters and keeping them

I'm trying to store a string in a variable, then expand that variable in a sed command.
Several of the values I'm going to put in the variable before calling the command will have parentheses (with and without backslashes before the left parenthesis, but never before the right), newlines, and other special characters. Also, the string will have double quotes around it in the file that's being searched, and I'd like to use those to limit matches to only the string I'm querying.
The command needs to be able to match those special characters in the file. I'm using zsh on macOS, although if the command were compatible with bash 4.2 that'd be a nice bonus. Echoing to xargs is fine too. Also, if awk would be better for this, I have no requirement to use sed.
Something like...
sed 's/"\"$(echo -E - ${val})\""/"${key}.localized"/g' "${files}"
Given that $val is the variable I described above, $key has no spaces (but underscores) & $files is an array of file paths (preferably compatible with spaces, but not required).
Example Input values for $val...
... "something \(customStringConvertible) here" ...
... "something (notVar) here" ...
... "something %# here" ...
... "something # 100% here" ...
... "something for $100.00" ...
Example Output:
... "some_key".localized ...
I was using the sed command to replace the examples above. The text I'm overwriting it with is pretty straightforward.
The key problem I'm having is getting the command to match with the special characters instead of expanding them and then trying to match.
Thanks in advance for any assistance.
awk is better since it provides functions that work with literal strings:
$ val='something \(customStringConvertible) here' awk 'index($0,ENVIRON["val"])' file
... "something \(customStringConvertible) here" ...
$ val='something for $100.00' awk 'index($0,ENVIRON["val"])' file
... "something for $100.00" ...
The above was run on this input file:
$ cat file
... "something \(customStringConvertible) here" ...
... "something (notVar) here" ...
... "something %# here" ...
... "something # 100% here" ...
... "something for $100.00" ...
With sed you'd have to follow the instructions at "Is it possible to escape regex metacharacters reliably with sed" to try to fake sed out.
It's not clear what your real goal is, so edit your question to provide concise, testable sample input and expected output if you need more help. Having said that, it looks like you're doing a substitution, so maybe this is what you want:
$ old='"something for $100.00"' new='here & there' awk '
s=index($0,ENVIRON["old"]) { print substr($0,1,s-1) ENVIRON["new"] substr($0,s+length(ENVIRON["old"])) }
' file
... here & there ...
or if you prefer:
$ old='"something for $100.00"' new='here & there' awk '
BEGIN { old=ENVIRON["old"]; new=ENVIRON["new"]; lgth=length(old) }
s=index($0,old) { print substr($0,1,s-1) new substr($0,s+lgth) }
' file
or:
awk '
BEGIN { old=ARGV[1]; new=ARGV[2]; ARGV[1]=ARGV[2]=""; lgth=length(old) }
s=index($0,old) { print substr($0,1,s-1) new substr($0,s+lgth) }
' '"something for $100.00"' 'here & there' file
... here & there ...
See "How do I use shell variables in an awk script?" for info on how I'm using ENVIRON[] vs. ARGV[] above.
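If you do this often, the ENVIRON[] approach wraps up nicely in a shell function. A minimal sketch (the name replace_literal and the argument order are mine, not from the question):
replace_literal() {   # usage: replace_literal OLD NEW FILE
    old="$1" new="$2" awk '
        # index() treats old as a literal string, so no metacharacter escaping is needed;
        # only the first occurrence on each line is replaced
        s = index($0,ENVIRON["old"]) { $0 = substr($0,1,s-1) ENVIRON["new"] substr($0,s+length(ENVIRON["old"])) }
        { print }
    ' "$3"
}
replace_literal '"something for $100.00"' '"some_key".localized' file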

Use `sed` to replace text in code block with output of command at the top of the code block

I have a markdown file that has snippets of code resembling the following example:
```
$ cat docs/code_sample.sh
#!/usr/bin/env bash
echo "Hello, world"
```
This means that there's a file at the location docs/code_sample.sh whose contents are:
#!/usr/bin/env bash
echo "Hello, world"
I'd like to parse the markdown file with sed (awk or perl works too) and replace the bottom section of the code snippet with whatever the above bash command evaluates to, for example whatever cat docs/code_sample.sh evaluates to.
Perl to the rescue!
perl -0777 -pe 's/(?<=```\n)^(\$ (.*)\n\n)(?^s:.*?)(?=```)/"$1".qx($2)/meg' < input > output
-0777 slurps the whole file into memory
-p prints the input after processing
s/PATTERN/REPLACEMENT/ works similarly to a substitution in sed
/g replaces globally, i.e. as many times as it can
/m makes ^ match start of each line instead of start of the whole input string
/e evaluates the replacement as code
(?<=```\n) means "preceded by three backquotes and a newline"
(?^s:.*?) changes the behaviour of . to match newlines as well, so it matches (frugally because of the *?) the rest of the preformatted block
(?=```) means "followed by three backquotes"
qx runs the parameter in a shell and returns its output
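As a usage note, if you want to update the markdown file in place instead of redirecting, perl's -i switch (here keeping a .bak backup) can replace the < input > output redirections:
perl -0777 -i.bak -pe 's/(?<=```\n)^(\$ (.*)\n\n)(?^s:.*?)(?=```)/"$1".qx($2)/meg' input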
A sed-only solution is easier if you have the GNU version with an e command.
That said, here's a quick, simplistic, and kinda clumsy version I knocked out that doesn't bother to check the values of previous or following lines; it just assumes your format is good and bulls through without any looping or anything else. Still, for my example code, it worked.
I started by making files a and b, and an x that is the markdown file.
$: cat a
#! /bin/bash
echo "Hello, World!"
$: cat b
#! /bin/bash
echo "SCREW YOU!!!!"
$: cat x
```
$ cat a
foo
bar
" b a z ! "
```
```
$ cat b
foo
bar
" b a z ! "
```
Then I wrote s which is the sed script.
$: cat s
#!/usr/bin/env bash
sed -En '
/^```$/,/^```$/ {
# for the lines starting with the $ prompt
/^[$] / {
# save the command to the hold space
x
# write the ``` header to the pattern space
s/.*/```/
# print the fabricated header
p
# swap the command back in
x
# the next line should be blank - add it to the current pattern space
N
# first print the line of code as-is with the (assumed) following blank line
p
# scrub the $ (prompt) off the command
s/^[$] //
# execute the command - store the output into the pattern space
e
# print the output
p
# put the markdown footer back
s/.*/```/
# and print that
p
}
# for the (to be discarded) existing lines of "content"
/^[^`$]/d
}
' "$@"
It does the job and might get you started.
$: s x
```
$ cat a
#! /bin/bash
echo "Hello, World!"
```
```
$ cat b
#! /bin/bash
echo "SCREW YOU!!!!"
```
Lots of caveats - better to actually check that the $ follows a line of backticks and is followed by a blank line, maybe make sure nothing bogus could be in the file to get executed... but this does what you asked, with (GNU) sed.
Good luck.
A rare case when use of getline would be appropriate:
$ cat tst.awk
state == "importing" {
while ( (getline line < $NF) > 0 ) {
print line
}
close($NF)
state = "imported"
}
$0 == "```" { state = (state ? "" : "importing") }
state != "imported" { print }
$ awk -f tst.awk file
See http://awk.freeshell.org/AllAboutGetline for getline uses and caveats.
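As with the perl answer above, you can update the markdown file in place if your awk is GNU awk 4.1 or later (an assumption about your awk; POSIX awk has no equivalent), using the bundled inplace extension:
$ gawk -i inplace -f tst.awk file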

bash script to modify and extract information

I am creating a bash script to modify and summarize information with grep and sed. But it gets stuck.
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
#Extract lines starting with ">#HWI"
ONLY=`grep -v ^\>#HWI`
#replaces A and G with R in lines
ONLYR=`sed -e s/A/R/g -e s/G/R/g $ONLY`
grep R $ONLYR | wc -l
The correct way to write a shell script to do what you seem to be trying to do is:
awk '
!/^>#HWI/ {
    gsub(/[AG]/,"R")
    if (/R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
Just put that in the file myscript.sh and execute it as you do today.
To be clear - the bulk of the above code is an awk script, the shell script part is the first and last lines where the shell just calls awk and passes it the input file names.
If you WANT to have intermediate variables then you can create/print them with:
awk '
!/^>#HWI/ {
    only = $0
    onlyR = only
    gsub(/[AG]/,"R",onlyR)
    print "only:", only
    print "onlyR:", onlyR
    if (onlyR ~ /R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
The above will work robustly, portably, and efficiently on all UNIX systems.
First of all, and as @fedorqui commented, you're not providing grep with a source of input against which it will perform line matching.
Second, there are some problems in your script, which will result in unwanted behavior in the future, when you decide to manipulate some data:
Store matching lines in an array, or a file from which you'll later read values. The variable ONLY is not the right data structure for the task.
By convention, environment variables (PATH, EDITOR, SHELL, ...) and internal shell variables (BASH_VERSION, RANDOM, ...) are fully capitalized. All other variable names should be lowercase. Since variable names are case-sensitive, this convention avoids accidentally overriding environment and internal variables.
Here's a better version of your script considering these points, but with an open question regarding what you were trying to do in the last line, grep R $ONLYR | wc -l:
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
input_file="$1"
# Read lines not matching the provided regex, from $input_file
mapfile -t only < <(grep -v '^>#HWI' "$input_file")
#replaces A and G with R in lines
for ((i=0; i<${#only[@]}; i++)); do
only[i]="${only[i]//[AG]/R}"
done
# DEBUG
printf '%s\n' "Here are the lines, after relpace:"
printf '%s\n' "${only[#]}"
# I'm not sure what you were trying to do here. Am I guessing right that you wanted
# to count the number of R's in ALL lines?
# grep R $ONLYR | wc -l
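If the goal was indeed to count R's, here's a minimal sketch continuing from the only array above (two interpretations, pick the one you meant):
printf '%s\n' "${only[@]}" | grep -c R           # number of lines containing at least one R
printf '%s\n' "${only[@]}" | grep -o R | wc -l   # total number of R characters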

how to find the position of a string in a file in unix shell script

Can you please help me solve this puzzle? I am trying to print the location of a string (i.e., its line number) in a file, first to standard output, and then capture that value in a variable to be used later. The string is "my string", the file name is "myFile", which is defined as follows:
this is first line
this is second line
this is my string on the third line
this is fourth line
the end
Now, when I use this command directly at the command prompt:
% awk 's=index($0, "my string") { print "line=" NR, "position= " s }' myFile
I get exactly the result I want:
line=3 position= 9
My question is: if I define a variable VAR="my string", why can't I get the same result when I do this:
% awk 's=index($0, $VAR) { print "line=" NR, "position= " s }' myFile
It just won't work! I even tried putting the $VAR in quotation marks, to no avail. I tried using VAR (without the $ sign), no luck. I tried everything I could possibly think of ... Am I missing something?
awk variables are not the same as shell variables. You need to define them with the -v flag.
For example:
$ awk -v var="..." '$0~var{print NR}' file
will print the line number(s) of pattern matches. Or, for your case with index():
$ awk -v var="$Var" 'p=index($0,var){print NR,p}' file
Using all uppercase may not be a good convention, since you may accidentally overwrite other variables.
To capture the output into a shell variable:
$ info=$(awk ...)
For multi-line output assigned to a shell array, you can do
$ values=( $(awk ...) ); echo ${values[0]}
however, if the output contains more than one field, each field will be assigned its own array index. You can change this by setting the IFS variable, such as
$ IFS=$(echo -en "\n\b"); values=( $(awk ...) )
which will capture the complete lines as the array values.
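Putting it together for the myFile example from the question, a sketch that captures both values (the exit stops awk at the first match):
$ info=$(awk -v var="my string" 'p=index($0,var){print NR, p; exit}' myFile)
$ echo "$info"
3 9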

set multiple variables from one awk command?

This is a very common script:
#!/bin/bash
teststr="col1 col2"
var1=`echo ${teststr} | awk '{print $1}'`
var2=`echo ${teststr} | awk '{print $2}'`
echo var1=${var1}
echo var2=${var2}
However, I don't like this, especially when there are more fields to parse.
I guess there should be a better way like:
(var1,var2)=`echo ${teststr} | awk '{print $1 $2}'`
(in my imagination)
Is that so?
Thanks for any help to improve efficiency and save some CPU power.
This might work for you:
var=(col0 col1 col2)
echo "${var[1]}"
col1
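Applied to the question, the array can be filled straight from the awk output in one command substitution; a sketch that relies on word splitting, so it's only safe while the fields contain no whitespace or glob characters:
teststr="col1 col2"
var=( $(echo "$teststr" | awk '{print $1, $2}') )
echo "${var[0]}"   # col1
echo "${var[1]}"   # col2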
Yes, you can, but the best practice is to pass variables to awk the awk way.
Example using shell script variables
awk -v awkVar1="$scriptVar1" -v awkVar2="$scriptVar2" '<your awk code>'
Example using environment variables: these are read inside the awk code itself, via awk's built-in ENVIRON array, not with -v
awk '<your awk code using ENVIRON["ENV_VAR1"] and ENVIRON["ENV_VAR2"]>'
It's possible to use script and environment variables at the same time
awk -v awkVar2="$scriptVar2" '<your awk code using ENVIRON["ENV_VAR1"]>'
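For a concrete, runnable illustration of both mechanisms side by side (the variable names here are mine):
$ export GREETING=hello
$ planet=world
$ awk -v p="$planet" 'BEGIN { print ENVIRON["GREETING"], p }'
hello world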
You may find bash tricks to circumvent the awk way to do it, but it's not safe.
Explanation and more examples
Awk works this way because it's a programming language by itself and has its own way to use variables 'inside' awk statements.
By 'inside' I mean the part between the single quotes.
Let's see an example, where we turn off DHCP in a config file, all done using variables in a shell script. I'm going to explain the last line of code.
The script isn't optimal; its main purpose is to use script variables. Explaining how the script does its job is out of scope of this answer; the focus is on explaining the use of variables.
#!/bin/bash
# set some variables
# set path to the config file to edit
CONFIG_FILE=/etc/netplan/01-netcfg.yaml
# find the line number of the line to change using awk and assign it to a variable
DHCP_LINE=$(awk '/dhcp4: yes/{print FNR}' "$CONFIG_FILE")
# get the number of spaces used for identation using awk and assign it to a variable
SPACES=$(awk -v awkDHCP_LINE="$DHCP_LINE" 'FNR==awkDHCP_LINE {print match($0,/[^ ]|$/)-1}' "$CONFIG_FILE")
# find the DHCP setting and turn it off if needed; the trailing 1 prints every line (modified or not) to stdout
awk -v awkDHCP_LINE="$DHCP_LINE" -v awkSPACES="$SPACES" 'FNR==awkDHCP_LINE {sub("dhcp4: yes", "dhcp4: no")} 1' "$CONFIG_FILE"
Let's break this last line up to pieces for explanation.
awk -v awkDHCP_LINE="$DHCP_LINE" -v awkSPACES="$SPACES"
This part above assigns the value of the DHCP_LINE script variable to the awkDHCP_LINE awk variable, and the value of the SPACES script variable to the awkSPACES awk variable.
Please note that the SPACES variable is passed to awk only for the sake of showing how to pass multiple variables; the awk command doesn't actually process it.
'FNR==awkDHCP_LINE {sub("dhcp4: yes", "dhcp4: no")} 1'
This one above is the 'inside' part of awk where the variable(s) passed to awk can be used.
"$CONFIG_FILE"
This part is outside awk, a generic script variable is used to specify the file that should be processed.
I hope this clears things a bit :)
Note: if you have lots of variables to pass, the solution provided by @potong may prove a better approach, depending on your use case.
Bash has array support; we just need to supply the values dynamically :)
function test_set_array_from_awk(){
# Note: -a declares an array; local keeps it scoped to the function
local -a myArr;
# Hard-coded values
# myArr=( "Foo" "Bar" "other" );
# echo "${myArr[1]}" # Print Bar
# Dynamic values
myArr=( $(echo "" | awk '{print "Foo"; print "Bar"; print "Fooo-And-Bar"; }') );
# Value #index 0
echo "${myArr[0]}" # Print Foo
# Value #index 1
echo "${myArr[1]}" # Print Bar
# Array Length
echo ${#myArr[@]} # Print 3 as array length
# Safe Reading with Default value
echo "${myArr[10]-"Some-Default-Value"}" # Print Some-Default-Value
echo "${myArr[10]-0}" # Print 0
echo "${myArr[10]-''}" # Print ''
echo "${myArr[10]-}" # Print nothing
# With Dynamic Index
local n=2
echo "${myArr["${n}"]-}" # Print Fooo-And-Bar
}
# calling test function
test_set_array_from_awk
Bash Array Documentation : http://tldp.org/LDP/abs/html/arrays.html
You can also use the shell's set builtin to place whitespace-separated (or more accurately, IFS-separated) values into the positional parameters $1, $2, and so on:
#!/bin/bash
teststr="col1 col2"
set -- $teststr
echo "$1" # col1
echo "$2" # col2
