How do I escape an argument of a bash script in awk? - bash

I have the following issue. I want to run a script with 2 parameters like:
./Myscript.sh $1 $2
$1 is a number, nothing special, but $2 is actually a message that looks like this:
“My message`-12355, this is a message !56432-I am sure it`s a message-46583”.
This message was actually extracted with awk from some log files. Myscript.sh executes a curl HTTP POST and uses $1 and $2 as parameters for creating the JSON in the curl command, like:
-d '{"number":"$1","message":"$2"}'
My question is: how do I “escape” the argument $2, since the message contains special characters?
Thanks
I'm calling Myscript.sh from another script in an awk command using:
system(./Myscript.sh “$1” \”$2\”)
I was thinking of using backslashes to “escape”, but this does not seem to work. Any ideas or help would be great. Thanks a lot!

I suggest using straight quotes (") and not typographic ones (“).
system("./Myscript.sh \"" $1 "\" \"" $2 "\"")

You have an exclamation mark ("!") in your shell $2; I'd safely single-quote the arguments if I were you ::
system() command :::
'./Myscript.sh' '1114111' 'My message`-12355, this is a message !56432-I'\''m sure it`s a message-46583'
sh: ./Myscript.sh: No such file or directory
awk '
{
    print "system() command :::\f\f\r\t",
          system_cmd = escSQ($1) (_ = " ") escSQ($2) _ escSQ($3)
    system(system_cmd)
}
function escSQ(__, _) {        # single-quote one argument for the shell
    _ = "\47"                  # \47 is the single-quote character
    gsub(_, "&\\" (_) _, __)   # turn every embedded quote into '\''
    return (_) (__) _          # and wrap the whole string in single quotes
}'
P.S.: this approach is okay only if none of the command items need to be interpreted by the shell, e.g. ~ (as a prefix for the script itself) or $?, etc.
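For completeness, here is a hedged sketch of what Myscript.sh itself might look like (my own illustration, not from the thread; the endpoint URL and the use of jq are assumptions). Once the arguments survive the shell, they still have to be quoted again for JSON, and jq handles embedded quotes and backslashes for you:
#!/bin/bash
# hypothetical Myscript.sh: $1 = number, $2 = message; the URL is a placeholder
payload=$(jq -n --arg n "$1" --arg m "$2" '{number: $n, message: $m}')
curl -s -X POST -H 'Content-Type: application/json' \
     -d "$payload" 'http://example.com/endpoint'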

Related

How to collate multiple files in AWK?

I am trying to collate a series of .csv log files that are named by date (e.g., 2019-02-24.csv). There are a bunch of them, so I'm trying to script the process. I've crafted an AWK script that combines individual files:
awk ' FNR==1 { while (/"_time",PIN,FULLNAME,OFFICE,Acronym,Name/) getline; } 1 { print } ' 2019-01-01.csv >> usage_history.csv
But I am failing when I try to string the AWK commands together with a control loop in BASH:
for i in {01..28}; do echo "awk ' FNR==1 { while (/\"_time\",PIN,FULLNAME,OFFCODE,Acronym,Name/) getline; } 1 { print } ' 2019-01-$i.csv >> user_history.csv"; done
When I run this, it prints out the correct commands to the command line, but the awk scripts are not executed (they only get printed). If I run it without echo, I get errors telling me that the file doesn't exist, though all files are present:
bash: awk ' FNR==1 { while (/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/) getline; } 1 { print } ' 2019-01-01.csv >> user_history.csv: No such file or directory
What am I missing in my loop?
Here is a condensed sample of the command and the error messages:
$ for i in {01..02}; do "awk ' FNR==1 { while (/\"_time\",PIN,FULLNAME,OFFCODE,Acronym,Name/) getline; } 1 { print } ' 2019-01-$i.csv >> user_history.csv"; done
bash: awk ' FNR==1 { while (/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/) getline; } 1 { print } ' 2019-01-01.csv >> user_history.csv: No such file or directory
bash: awk ' FNR==1 { while (/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/) getline; } 1 { print } ' 2019-01-02.csv >> user_history.csv: No such file or directory
Could you please try the following.
awk '!/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/' 2019-01-[0-9]*.csv >> user_history.csv
Here are the points explaining why one could use this approach:
1- Using a for loop and calling the awk command each time would be overkill. When awk can read multiple files on its own, we should use that.
2- Now comes the getline part which you tried in your code. If we want to skip lines containing a string, we can simply negate it with !/string_to_be_skipped/, so awk looks only for those lines which do NOT have this string.
3- When passing multiple files to a single awk command I used 2019-01-[0-9]*.csv, because you have NOT said whether files are created on a daily basis or not; if we spell the names out in a loop and a specific file is NOT present, we will get an error. For example, let's say I use the following awk command after intentionally removing the file named 2019-01-02.csv:
awk '........' 2019-01-{01..29}.csv
awk: cannot open 2019-01-02.csv (No such file or directory)
So to avoid this kind of situation I used 2019-01-[0-9]*.csv, which expands only to the files that actually exist (names with digits after 2019-01-), rather than running a loop that complains that some file is missing.
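A quick way to see the difference between brace expansion and globbing (my illustration, run in bash):
echo 2019-01-{01..03}.csv    # brace expansion: prints all three names, whether or not the files exist
echo 2019-01-[0-9]*.csv      # glob: expands only to files that exist (or stays literal if none match)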
Try this:
for i in {01..28}; do awk '!/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/' 2019-01-$i.csv >> user_history.csv; done
The commands after do should not be quoted.
And what you were doing essentially amounts to ignoring the title lines.
The {print} after 1 is unnecessary: a single 1 implies {print}. The 1 is there to provide a true condition.
-- When there's only an expression but no action block, the action defaults to {print}.
-- And a bare regexp is equivalent to $0 ~ /regex/; here I negated it.
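In other words, these two commands are equivalent (my illustration, with a shortened pattern):
awk '!/"_time",PIN/' file                   # bare negated regexp, default action prints
awk '$0 !~ /"_time",PIN/ { print }' file    # the same thing fully spelled out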
If there's no other command inside the loop, you can simplify the loop with one awk command:
awk '!/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/' 2019-01-{01..28}.csv >>user_history.csv
But this one will throw an error and stop executing when one of the files doesn't exist.
Another way is:
awk '!/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/' 2019-01-[0-3][0-9].csv >>user_history.csv
This one only matches existing filenames, instead of looping over generated names.
It won't stop executing or throw an error, so if a file is missing you won't know. And it will match extra files if they exist.
For example it will read 2019-01-34.csv if it exists.
So if you want the warnings (warnings won't affect the results) but don't want the command to stop, use the first for-loop version.
Pitfalls:
[0-3][1-9] won't match 10, 20 and 30, but will match 32 to 39.
[0-9]* will also match any longer number, and the names sort in string order, so e.g. 20 to 29 come before 3.
Thanks to @Tiw and @RavinderSingh13 for their guidance. Here is the final awk script that is working well for my case, where I have daily files from multiple days, months, and years (only 2018 and 2019 in this case):
awk '!/"_time",PIN,FULLNAME,OFFCODE,Acronym,Name/' 201[8-9]-[0-1][0-2]-[0-3][0-9].csv >> user_history.csv

Update version number in property file using bash

I am new to bash scripting and I need help with awk. The thing is that I have a property file with a version inside, and I want to update it:
version=1.1.1.0
and I use awk to do that:
file="version.properties"
awk -F'["]' -v OFS='"' '/version=/{
split($4,a,".");
$4=a[1]"."a[2]"."a[3]"."a[4]+1
}
;1' $file > newFile && mv newFile $file
but I am getting the strange result version="1.1.1.0""...1
Could someone please help me with this?
You mentioned in your comment you want to update the file in place. You can do that in a one-liner with perl:
perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i version.properties
Explanation
-e is followed by a script to run. With -p and -i, the effect is to run that script on each line, and modify the file in place if the script changes anything.
The script itself, broken down for explanation, is:
/^version=/ and # Do the following on lines starting with `version=`
s/ # Make a replacement on those lines
(\d+\.\d+\.\d+\.)(\d+)/ # Match x.y.z.w, and set $1 = `x.y.z.` and $2 = `w`
$1 . ($2+1)/ # Replace x.y.z.w with a copy of $1, followed by w+1
e # This tells Perl the replacement is Perl code rather
# than a text string.
Example run
$ cat foo.txt
version=1.1.1.2
$ perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i foo.txt
$ cat foo.txt
version=1.1.1.3
This is not the best way, but here's one fix.
Test case
I am assuming the input file has at least one line that is exactly version=1.1.1.0.
$ awk -F'["]' -v OFS='"' '/version=/{
> split($4,a,".");
> $4=a[1]"."a[2]"."a[3]"."a[4]+1
> }
> ;1' <<<'version=1.1.1.0'
Output:
version=1.1.1.0"""...1
The """ is because you are assigning to field 4 ($4). When you do that, awk adds field separators (OFS) between fields 1 and 2, 2 and 3, and 3 and 4. Three OFS => """, in your example.
Minimal change
$ awk -F'["]' -v OFS='"' '/version=/{
split($1,a,".");
$1=a[1]"."a[2]"."a[3]"."a[4]+1;
print
}
' <<<'version=1.1.1.0'
version=1.1.1.1
Two changes:
Change $4 to $1
Since the input field separator (-F) is ["], $4 is whatever would be after the third " (if there were any in the input). Therefore, split($4, ...) splits an empty field. The contents of the line, before the first " (if any), are in $1.
print at the end instead of ;1
The 1 after the closing curly brace is the next condition, and there is no action specified. The default action is to print the current line, as modified, so the 1 triggers printing. Instead, just print within your action when you are done processing. That way your action is self-contained. (Of course, if you needed to do other processing, you might want to print later, after that processing.)
You can use = as the delimiter, like this:
awk -F= -v v=1.0.1 '$1=="version"{printf "version=\"%s\"\n", v}' file.properties
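Note that this prints only the rewritten line and hardcodes the new version. If you want the auto-increment from the question, a sketch along these lines (mine, not from the answers; it assumes the unquoted version=1.1.1.0 format shown in the question) treats . as the field separator, so the last field is the number to bump:
awk -F. -v OFS=. '/^version=/ { $NF = $NF + 1 } 1' version.properties > newFile && mv newFile version.properties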

how to find the position of a string in a file in unix shell script

Can you please help me solve this puzzle? I am trying to print the location of a string (i.e., the line number) in a file, first to standard output, and then capture that value in a variable to be used later. The string is "my string", the file name is "myFile", which is defined as follows:
this is first line
this is second line
this is my string on the third line
this is fourth line
the end
Now, when I use this command directly at the command prompt:
% awk 's=index($0, "my string") { print "line=" NR, "position= " s }' myFile
I get exactly the result I want:
line= 3, position= 9
My question is: if I define a variable VAR="my string", why can't I get the same result when I do this:
% awk 's=index($0, $VAR) { print "line=" NR, "position= " s }' myFile
It just won't work! I even tried putting the $VAR in quotation marks, to no avail. I tried using VAR (without the $ sign): no luck. I tried everything I could possibly think of ... Am I missing something?
awk variables are not the same as shell variables. You need to define them with the -v flag.
For example:
$ awk -v var="..." '$0~var{print NR}' file
will print the line number(s) of pattern matches. Or, for your case, using index:
$ awk -v var="$Var" 'p=index($0,var){print NR,p}' file
Using all uppercase may not be a good convention, since you may accidentally overwrite other variables.
To capture the output into a shell variable:
$ info=$(awk ...)
For multi-line output assigned to a shell array, you can do:
$ values=( $(awk ...) ); echo ${values[0]}
However, if the output contains more than one field, each field will be assigned its own array index. You can change this by setting the IFS variable, such as:
$ IFS=$(echo -en "\n\b"); values=( $(awk ...) )
which will capture the complete lines as the array values.
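Putting it together for the original example (my sketch; VAR and myFile are from the question):
VAR="my string"
result=$(awk -v var="$VAR" 's=index($0, var) { print "line=" NR, "position= " s }' myFile)
echo "$result"    # line=3 position= 9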

passing command line argument to gawk script

I have a script chk.awk to which I want to pass some command line arguments. It contains awk statements, sed commands, etc. Just as an example, I have taken the small program below, to which I want to pass command line arguments.
#!/bin/bash
var1=$1
gawk '
BEGIN {
printf "argc = %d\n argv0=%s\n argv1=%s\n var1=%s\n",ARGC,ARGV[0],ARGV[1],$var1
}'
But when I try :
$ sh chk.awk 10 20
argc = 1
argv0=gawk
argv1=
var1=
Above I tried to display the command line arguments both ways, i.e. via ARGV and via $var1, but neither of them works. Can anyone let me know where I am going wrong here? What is the correct way to do this?
The problem is that you give arguments to the shell script, but not to the awk script.
You must add "$#" to the call of gawk.
#!/bin/bash
var1=$1
gawk '
BEGIN {
printf "argc = %d\n argv0=%s\n argv1=%s\n var1=%s\n",ARGC,ARGV[0],ARGV[1],$var1
}' "$#"
Otherwise your arguments stay in the shell script and are not passed to gawk.
Update 1
If you have additional arguments (e.g. filenames that are to be processed),
you must first remove the argument portion (in the BEGIN section):
#!/bin/bash
var1=$1
gawk '
BEGIN {
printf "argc = %d\n argv0=%s\n argv1=%s\n var1=%s\n",ARGC,ARGV[0],ARGV[1],$var1;
delete ARGV[1]
}' "$#" filename

how to validate if data has a trailing "/"

I have a file containing various information. The fields are delimited by |. One of the fields contains a directory. For example :
blah|blah|blah|/usr/local/etc/|blah|blah
I need to validate that the path field does not end with a "/". I'm using ksh. Any suggestions?
Thanks.
Assuming the directory is always in the 4th field:
line=0
while IFS='|' read -rA fields; do
let line++
[[ ${fields[3]} == */ ]] && echo "line $line: ends with a slash"
done < filename
Not ksh, but this is a natural job for awk:
awk -F\| '$4 ~ /\/$/ {
print "Trailing slash in line "NR":", $4
}' ${file:?}
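If you want a pass/fail check rather than a report, a small variant (my sketch, ksh-compatible) can use the exit status instead:
if awk -F'|' '$4 ~ /\/$/ { bad = 1 } END { exit bad }' "$file"; then
    echo "no trailing slash in any path field"
else
    echo "validation failed: a path field ends with /"
fi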
Try this:
if [[ $line =~ (/\w+)+(\||$) ]]
My shell syntax is rusty, so this might need a little massaging into shape.
Don't forget special paths like / (root).
I keep the / (root) intact in the code below:
echo "blah|blah|blah|/usr/local/etc/|blah|blah|
blah|blah|blah|/|blah|blah
blah|blah|blah|.|blah|blah
blah|blah|blah|/usr/local/etc|blah|blah" \
sed "
/\/\|/ {
/\|\/\|/ !s/\/|/|/
}"
explanation:
/\/\|/ : act on lines where "/|" appears
/\|\/\|/ ! : among those, act only where "|/|" does NOT appear (this keeps the root path "/" intact)
s/\/|/|/ : replace "/|" by "|" (when both tests succeed)
