Using a shell for loop to deal with JSON - shell

Here is my shell code. My question is that I don't want the ',' at the end of the JSON file.
#!/bin/bash
PATCH_VERSION_FILE=/root/workspace/patch_version.json
filepath=/root/workspace/txtdir
for file in "${filepath}"/*.txt; do
    echo " {" >> ${PATCH_VERSION_FILE}
    filename=`echo ${file} | awk -F'/' '{ print $(NF) }'`
    filemd5=`md5sum "${file}" | awk '{ print $1 }'`
    echo "  \"${filename}\":\"$filemd5\"" >> ${PATCH_VERSION_FILE}
    echo " }," >> ${PATCH_VERSION_FILE}
done
Output:
{
"2001.txt":"d41d8cd98f00b204e9800998ecf8427e"
},
{
"2002.txt":"d41d8cd98f00b204e9800998ecf8427e"
},
{
"2003.txt":"d41d8cd98f00b204e9800998ecf8427e"
},
{
"2004.txt":"d41d8cd98f00b204e9800998ecf8427e"
},
{
"2005.txt":"d41d8cd98f00b204e9800998ecf8427e"
},
I found a solution, but it looks ugly. The code is below:
n=0
for file in "${filepath}"/*.txt; do
    if [ $n -ne 0 ]; then
        echo " ," >> ${PATCH_VERSION_FILE}
    fi
    echo " {" >> ${PATCH_VERSION_FILE}
    filename=`echo ${file} | awk -F'/' '{ print $(NF) }'`
    filemd5=`md5sum "${file}" | awk '{ print $1 }'`
    echo "  \"${filename}\":\"$filemd5\"" >> ${PATCH_VERSION_FILE}
    echo " }" >> ${PATCH_VERSION_FILE}
    n=$(( n + 1 ))
done
But the ',' is not on the same line as the '}'. Is there any way to deal with this?

You could add this at the end of your script (after your for loop). It will simply remove (actually, replace with an empty string) the last character of the file:
sed -i '$ s/.$//' "$PATCH_VERSION_FILE"
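For reference, even with the trailing comma removed, the file is still a series of bare objects rather than one valid JSON value. Below is a minimal sketch (reusing the filepath and PATCH_VERSION_FILE variables from the question; the use of basename instead of awk is a small simplification) that also wraps the objects in an array:
echo "[" > "$PATCH_VERSION_FILE"
for file in "${filepath}"/*.txt; do
    filename=$(basename "$file")
    filemd5=$(md5sum "$file" | awk '{ print $1 }')
    echo " { \"${filename}\":\"${filemd5}\" }," >> "$PATCH_VERSION_FILE"
done
sed -i '$ s/,$//' "$PATCH_VERSION_FILE"   # drop the comma after the last object
echo "]" >> "$PATCH_VERSION_FILE"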

In order to have valid JSON data, you can use a JSON-aware tool like jq:
md5sum "$filepath"/*.txt | jq -R 'split("  ")|{(.[1]):.[0]}' >> ${PATCH_VERSION_FILE}
The -R option allows jq to read raw strings from md5sum.
Each line is split in two on the double space that md5sum uses as a separator, and the two parts then become the key and value of an object.
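If the goal is a single valid JSON document rather than one object per line, a hedged variant (assumes jq 1.5+ for inputs and sub; the sub call, which trims the directory part of each path, is not in the original answer):
md5sum "$filepath"/*.txt | jq -Rn '[inputs | split("  ") | {(.[1] | sub(".*/"; "")): .[0]}]' > "$PATCH_VERSION_FILE"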

Related

Change output of a for loop in bash

I have this data:
2020-01-01-00-00
2020-01-01-06-00
2020-01-01-12-00
2020-01-01-18-00
I would like to display it like this:
[ 2020-01-01-00-00, 2020-01-01-06-00, 2020-01-01-12-00, 2020-01-01-18-00 ]
I tried this:
for i in $(cat Test2.txt)
do
    tr -d "\n" <<< $i," "
done
The output is:
2020-01-01-00-00, 2020-01-01-06-00, 2020-01-01-12-00, 2020-01-01-18-00,
Then I tried:
for i in $(cat Test2.txt)
do
    echo " [ `tr -d "\n" <<< "'$i'"," "` ]"
done
But the output is:
[ '2020-01-01-00-00', ]
[ '2020-01-01-06-00', ]
[ '2020-01-01-12-00', ]
[ '2020-01-01-18-00', ]
Could you show me how to do that ?
Don't read lines with for.
A common arrangement is to use a separator prefix which changes after the first iteration.
prefix='['
while read -r line; do
    printf '%s %s' "$prefix" "$line"
    prefix=','
done <Test2.txt
printf ' ]\n'
I'll second the suggestion to use a JSON-specific tool if your task is to generate valid JSON, though. This is pesky and somewhat brittle.
Your desired output looks like JSON; if so, you can use jq for this. E.g.:
jq -Rn '[inputs]' Test2.txt
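If you want the exact spacing from the question, without JSON string quotes, a hedged variant (assumes jq 1.5+ for inputs):
jq -rRn '"[ " + ([inputs] | join(", ")) + " ]"' Test2.txt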
Using printf
data="
2020-01-01-00-00
2020-01-01-06-00
2020-01-01-12-00
2020-01-01-18-00
"
printf -v data %s,\ $data
printf "[ ${data%, } ]"
A tricky bit of perl:
perl -00 -pe 'chomp; s/\n/, /g; BEGIN {print "[ "} END {print " ]\n"}' Test2.txt
In sed,
$: sed -n '$!H; ${H; x; s/\n/, /g; s/$/ ]\n/; s/^,/[/; p;}' infile
In bash,
$: dat="$(printf "%s, " $(<infile))"; printf "[ ${dat%, } ]\n";
In awk,
$: awk 'BEGIN{ printf "[ "; sep=""; } { printf "%s%s", sep, $0; sep=", "; } END{ print " ]"; }' infile

Cutting string into different types of variables

Full script:
snapshot_details=`az snapshot show -n $snapshot_name -g $resource_group --query \[diskSizeGb,location,tags\] -o json`
echo $snapshot_details
IFS='",][' read -r -a array <<< $snapshot_details
echo ${array[@]}
IFS=' ' read -r -a array1 <<< ${array[@]}
echo ${array1[0]} #size
echo ${array1[1]} #location
How can I break this into 3 different variables:
a=5
b=eastus2
c={ "name": "20190912123307" "namespace": "aj-ssd" "pvc": "poc-ssd" }
And is there any easier way to parse c so that I can easily traverse over all the keys and values?
Output of the above script is:
[ 5, "eastus2", { "name": "20190912123307", "namespace": "ajain-ssd", "pvc": "azure-poc-ssd" } ]
5 eastus2 { name : 20190912123307 namespace : ajain-ssd pvc : azure-poc-ssd }
5
eastus2
A JSON parser, such as jq, should always be used when splitting out items from a JSON array in bash. Line-oriented tools (such as awk) are unable to correctly escape JSON -- if you had a value with a tab, newline, or literal quote, it would be emitted incorrectly.
Consider the following code, runnable exactly as-is even by people not having your az command:
snapshot_details_json='[ 5, "eastus2", { "name": "20190912123307", "namespace": "ajain-ssd", "pvc": "azure-poc-ssd" } ]'
{ read -r diskSizeGb && read -r location && read -r tags; } < <(jq -cr '.[]' <<<"$snapshot_details_json")
# show that we really got the content
echo "diskSizeGb=$diskSizeGb"
echo "location=$location"
echo "tags=$tags"
...which emits as output:
diskSizeGb=5
location=eastus2
tags={"name":"20190912123307","namespace":"ajain-ssd","pvc":"azure-poc-ssd"}
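If you only need individual fields out of the tags object rather than the whole thing, a hedged follow-up reusing the $tags variable from above:
name=$(jq -r '.name' <<<"$tags")   # extract a single key from the tags JSON
pvc=$(jq -r '.pvc' <<<"$tags")
echo "name=$name pvc=$pvc"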
Bash can do this with the awk command:
To extract the 5 :
awk -F " " '{ print $1 }'
To extract eastus2 :
awk -F "\"" '{ print $2 }'
To extract the last string :
awk -F "{" '{ print "{" $2 }'
To explain quickly:
awk -F " " '{ print $1 }'
-F sets a delimiter; here we set space as the delimiter.
Then, we ask awk to print the first field, i.e. everything before the first delimiter.
The slightly more complex one:
awk -F "{" '{ print "{" $2 }'
Here we set { as the delimiter. Since we wouldn't have the bracket with only printing $2, we're also manually re-printing the bracket (print "{" $2)
It will not be nice in Bash, but this should work if your input format does not vary (including no {, } or spaces inside the key/value pairs):
S='5 "eastus2" { "name": "20190912123307" "namespace": "aj-ssd" "pvc": "poc-ssd" }'
a=`echo "$S" | awk '{print $1}'`
b=`echo "$S" | awk '{print $2}' | sed -e 's/\"//g'`
c=`echo "$S" | awk '{$1=$2=""; print $0}'`
echo "$a"
echo "$b"
echo "$c"
elems=`echo "$c" | sed -e 's/{//' | sed -e 's/}//' | sed -e 's/: //g'`
echo $elems
for e in $elems
do
    kv=`echo "$e" | sed -e 's/\"\"/ /' | sed -e 's/\"//g'`
    key=`echo "$kv" | awk '{print $1}'`
    value=`echo "$kv" | awk '{print $2}'`
    echo "key:$key; value:$value"
done
The idea in the iteration over key/value pairs is to:
(1) remove the space (and colon) between keys and corresponding value so that each key/value pair appears as one item.
(2) inside the loop, change the delimiter between keys and values (which is now "") to space and remove the double quotes (variable 'kv').
(3) extract the key/value as the first/second item of kv.
EDIT: Avoid file name wildcard expansions (i.e., be careful that the unquoted $elems could be glob-expanded by the shell).
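To address the part of the question about traversing all keys and values in c, here is a short jq-based sketch (it assumes the object is available as the $tags variable from the first answer; the tab-separated read is just one way to keep keys and values aligned):
while IFS=$'\t' read -r key value; do
    echo "key:$key; value:$value"
done < <(jq -r 'to_entries[] | [.key, .value] | @tsv' <<<"$tags")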

shell script: comma at the beginning instead of the end

This is a part of my shell script.
for line in `cat $1`
do
    startNum=`echo $line | awk -F "," '{print $1}'`
    endNum=`echo $line | awk -F "," '{print $2}'`
    operator=`echo $line | awk -F "," '{print $3}'`
    termPrefix=`echo $line | awk -F "," '{print $4}'`
    if [[ "$endNum" == 81* ]] || [[ "$endNum" == 33* ]] || [[ "$endNum" == 55* ]]
    then
        areaCode="${endNum:0:2}"
        series="${endNum:2:4}"
        startCLI="${startNum:6:4}"
        endCLI="${endNum:6:4}"
    else
        areaCode="${endNum:0:3}"
        series="${endNum:3:3}"
        startCLI="${startNum:6:4}"
        endCLI="${endNum:6:4}"
    fi
    echo "Add,${areaCode},${series},${startCLI},${endCLI},${termPrefix},"
    #>> ${File}
done
The input is a CSV containing many rows like the ones below:
5557017101,5557017101,102,1694
5515585614,5515585614,102,084
Output of the shell script:
,dd,55,5701,7101,7101,1694
,dd,55,1558,5614,5614,0848
Not sure why the comma is coming at the start of the output; as per the shell script it should come at the end. Please help.
Here is a suggested awk command that should replace all of your shell+awk code. This awk also takes care of the trailing \r. (That carriage return is why the comma appears at the start of your output: each CSV line ends in \r, which moves the cursor back to the beginning of the line, so the comma that echo prints last overwrites the first character of "Add".)
awk -v RS=$'\r' 'BEGIN{FS=OFS=","} NF>3{
    startNum=$1; endNum=$2; termPrefix=$4;
    if (endNum ~ /^(81|33|55)/) {
        areaCode=substr(endNum,1,2); series=substr(endNum,3,4)
    }
    else {
        areaCode=substr(endNum,1,3); series=substr(endNum,4,3)
    }
    startCLI=substr(startNum,7,4); endCLI=substr(endNum,7,4);
    print "Add", areaCode, series, startCLI, endCLI, termPrefix
}' file
Add,55,5701,7101,7101,1694
Add,55,1558,8561,5614,084
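If you would rather keep your original shell loop, a minimal alternative sketch is to strip the carriage returns before the loop reads the file (input_clean.csv is a hypothetical name):
tr -d '\r' < "$1" > input_clean.csv   # remove Windows-style carriage returns
# then run the existing loop against input_clean.csv instead of $1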

Unix - How do I have my shell script process more than one file from the command line?

I'm trying to modify an existing script I have to take up to three text files and transform them. Currently the script will only transform the text from one file. Here's the existing script I have:
if [ $# -eq 1 ]
then
    if [ -f $1 ]
    then
        name="My Name"
        echo $name
        date
        starting_data=$1
        sed '/^id/ d' $starting_data > raw_data3
        sed 's/-//g' raw_data3 > raw_data4
        cut -f1 -d, raw_data4 > cutfile1.col1
        cut -f2 -d, raw_data4 > cutfile1.col2
        cut -f3 -d, raw_data4 > cutfile1.col3
        sed 's/$/:/' cutfile1.col2 > last
        sed 's/^ //' last > last2
        sed 's/^ //' cutfile1.col3 > first
        paste -d\ first last2 cutfile1.col1 > final
        cat final
    else
        echo "$1 cannot be found."
    fi
else
    echo "Please enter a filename."
fi
All those temp files are unnecessary. awk can do all of what sed and cut can do, so this should be what you want (pending the output field separator question)
if [ $# -eq 0 ]; then
    echo "usage: $0 file ..."
    exit 1
fi

for file in "$@"; do
    if ! [ -f "$file" ]; then
        echo "file not found: $file"
        continue
    fi
    name="My Name"
    echo "$name"
    date
    awk -F, -v OFS=" " '
        /^id/ {next}
        {
            gsub(/-/, "")
            sub(/^ /, "", $2)
            sub(/^ /, "", $3)
            print $3, $2 ":", $1
        }
    ' "$file" > final
    cat final
done
Note all my double quotes: those are required.
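A hypothetical invocation, assuming the script above is saved as transform.sh (the script and file names are placeholders, not from the question):
./transform.sh data1.txt data2.txt data3.txt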

Awk shell scripting using gsub to remove whitespace

I have a shell script from which I would like to export the 'data' variable without any whitespace in it. I have tried gsub() but I cannot seem to get it to work.
export data="`grep -e 'P/N :' "$xfile" | awk '{print substr($3,3)}' `"
if [ "$data" = "" ] && [ "$skipdata" = "0" ]
then
    export data="`grep -e 'P/N:' "$xfile" | awk '{print substr($2,3)}' |
        awk '{ if (index($1,"-D") != 0)
                   $1 = (substr($1, 1, (index($1,"-D") -1))) "-DIE" }
             { print $1 }' `"
    if [ "$data" = "" ]
    then
        export data="`grep -e 'CUST PART NO:' "$xfile" | awk '{print substr($4,3)}' |
            awk '{ if (index($1,"-D") != 0)
                       $1 = (substr($1, 1, (index($1,"-D") -1))) "-DIE" }
                 { print $1 }' `"
    fi
fi
Ultimately I would like $data to be whitespace-free. Can I do something like:
export data="awk '{gsub($data," ","");print}"
It LOOKS like your script should be written as just something like:
data=$(awk -F':' '
    $1 ~ /^(P\/N[[:space:]]*|CUST PART NO)$/ {
        sub(/-D.*/,"-DIE",$2)
        gsub(/[[:space:]]+/,"",$2)
        print $2
    }
' "$xfile")
We can use that as a starting point and if it doesn't do exactly what you want then update your question to include some sample lines from $xfile and your desired output.
I think the correct syntax is
gsub(/[[:blank:]]+/,"")
so you could probably use
data=$(awk '{gsub(/[[:blank:]]+/,""); print}' <<< "$data")
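If the goal is only to strip whitespace from a variable you already have, a minimal pure-bash sketch using parameter expansion (no awk needed):
data="${data//[[:space:]]/}"   # delete every whitespace character in $data
export data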
