Inside a bash script, I want to pass arguments to the xml ed command of the xmlstarlet toolkit.
Here's the script:
#!/bin/bash
# this variable holds the arguments I want to pass
ED=' -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE'
# this variable holds the input xml
IN='
<a id="OLD_ID">
<b>OLD_VALUE</b>
</a>
'
# here I pass the arguments manually
echo $IN | xml ed -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE input.xml
# here I pass them using the variable from above
echo $IN | xml ed $ED
Why does the first call work, i.e. it gives the desired result:
# echo $IN | xml ed -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE input.xml
<?xml version="1.0"?>
<a id="NEW_ID">
<b>NEW_VALUE</b>
</a>
While the second call does not work, i.e. it gives:
# echo $IN | xml ed $ED
<?xml version="1.0"?>
<a id="OLD_ID">
<b>OLD_VALUE</b>
</a>
In bash, it is better to use arrays for lists of options like this. Word splitting is not the issue here, since none of the items embedded in ED contain whitespace; the real problem is that the double quotes inside the string are treated as literal characters rather than as shell quoting, so xmlstarlet receives the XPath "/a/@id" with the quotes included and matches nothing. An array assignment processes the quotes once, at assignment time.
#!/bin/bash
# this variable holds the arguments I want to pass
ED=( -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE )
# this variable holds the input xml
IN='
<a id="OLD_ID">
<b>OLD_VALUE</b>
</a>
'
# here I pass the arguments manually
echo $IN | xml ed -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE input.xml
# here I pass them using the variable from above
echo $IN | xml ed "${ED[@]}"
Get rid of the double quotes, because they're not processed after expanding variables:
ED=' -u /a/@id -v NEW_ID -u /a/b -v NEW_VALUE'
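To see why the quoted version fails, you can print the words the shell actually produces after expansion; this quick check (a sketch, using the original ED value) shows the quotes surviving as literal characters:
ED=' -u "/a/@id" -v NEW_ID -u "/a/b" -v NEW_VALUE'
printf '<%s>\n' $ED
# prints <"/a/@id"> and <"/a/b"> with the quotes kept, so the XPath never matches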
Please, observe:
~$ az ad sp show --id $spn_id --query '[appId, displayName]'
[
"c...1",
"xyz"
]
~$
I would like to assign each returned value to its own bash variable, namely to APP_ID and APP_DISPLAY_NAME respectively. My current solution is this:
~$ x=(`az ad sp show --id $spn_id --query '[appId, displayName]' -o tsv | tr -d '\r'`)
~$ APP_ID=${x[0]}
~$ APP_DISPLAY_NAME=${x[1]}
~$
Which works fine:
~$ echo $APP_ID
c...1
~$ echo $APP_DISPLAY_NAME
xyz
~$
I am curious if it is possible to do it in a more concise way?
Yes, using jq would be the cleanest way:
str='[
"c...1",
"xyz"
]'
# '.[N]' selects the Nth item of the array
app_id=$(jq -r '.[0]' <<< "$str")
# the '<<<' here-string feeds the input string to jq on standard input
app_display_name=$(jq -r '.[1]' <<< "$str")
printf '%s\n' "id: $app_id"
printf '%s\n' "name: $app_display_name"
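If you'd rather invoke jq only once, a minimal sketch (assuming the array always holds exactly these two entries, in this order):
{ read -r app_id; read -r app_display_name; } < <(jq -r '.[]' <<< "$str")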
You can use the following with sed and xargs:
export $( az ad sp show --id $spn_id --query '[appId, displayName]' | tr -d "\n" | sed -E "s/\[\s+(.+)\,\s+(.+)\]/APP_ID=\"\1\" \nAPP_DISPLAY_NAME=\"\2\"/" | xargs -L 1)
~$ echo $APP_ID
c...1
~$ echo $APP_DISPLAY_NAME
xyz
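A variant without export/xargs, reading both fields in one go (a sketch; it assumes that, for a flat two-element array, -o tsv prints the values tab-separated on a single line):
IFS=$'\t' read -r APP_ID APP_DISPLAY_NAME < <(az ad sp show --id "$spn_id" --query '[appId, displayName]' -o tsv | tr -d '\r')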
I have a text file, named my_data.txt, with the following contents:
# define var1 and var2
var1=101
var2=202
// display var1 and var2
echo ${var1}
echo ${var2}
I want to search for all occurrences of var1, but not those in lines that start with '#' or '//'. I can do this:
grep var1 my_data.txt | grep -v '^#' | grep -v '^//'
output:
var1=101
echo ${var1}
The result is correct. The question: is there any way to pass both patterns '^#' and '^//' to a single -v option?
I suggest:
grep -v -e '^#' -e '^//' file | grep 'var1'
Or with an extended regular expression (-E):
grep -v -E '^(#|//)' file | grep 'var1'
Output:
var1=101
echo ${var1}
If you can make use of grep -P, you can use a Perl-compatible regular expression and a single grep command.
grep -P "^(?!#|//).*\bvar1\b" my_data.txt
The pattern matches
^ Start of string
(?!#|//) Negative lookahead, assert not # or //
.*\bvar1\b Match the word var1 in the line
Or use awk, skipping lines that start with # or // and printing lines that contain var1:
awk '/^(#|\/\/)/{next};index($0, "var1")' my_data.txt
The examples will output:
var1=101
echo ${var1}
Suppose I have an xml file:
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
<string name="a"></string>
</map>
And I want to set the value of the string element with attribute name="a" to something big:
$ xmlstarlet ed -u '/map/string[@name="a"]' -v $(for ((i=0;i<200000;i++)); do echo -n a; done) example.xml > o.xml
This results in the bash error "Argument list too long". I was unable to find an option in xmlstarlet that accepts the value from a file. So, how would I set the value of an xml tag to 200KB of data or more?
Solution
After trying to feed chunks into xmlstarlet with the -a (append) argument, I ran into additional difficulties, such as escaping special characters and the order in which xmlstarlet accepts these chunks.
Eventually I reverted to simpler tools like xml2/sed/2xml. I am dropping the code as a separate post below.
Here is a workaround for your own example, which bombs because of the ARG_MAX limit:
#!/bin/bash
# (remove 'echo' commands and quotes around '>' characters when it looks good)
echo xmlstarlet ed -u '/map/string[@name="a"]' -v '' example.xml '>' o.xml
for ((i = 0; i < 100; i++))
do
echo xmlstarlet ed -u '/map/string[@name="a"]' -a -v $(for ((i=0;i<2000;i++)); do echo -n a; done) example.xml '>>' o.xml
done
SOLUTION
I am not proud of it, but at least it works.
a.xml - what was proposed as an example in the starting post
source.txt - what has to be inserted into a.xml as xml tag
b.xml - output
#!/usr/bin/env bash
ixml="a.xml"
oxml="b.xml"
s="source.txt"
echo "$ixml --> $oxml"
t="$ixml.xml2"
t2="$ixml.xml2.edited"
t3="$ixml.2xml"
# Convert xml into simple string representation
cat "$ixml" | xml2 > "$t"
# Get the line number of the xml tag of interest, increment it by one, and delete everything after that line
# For this to work, the tag of interest should be at the very end of the xml file
cat "$t" | grep -n -E 'string.*name=.*a' | cut -f1 -d: | xargs -I{} echo "{}+1" | bc | xargs -I{} sed '{},$d' "$t" > "$t2"
# Rebuild the deleted end of the xml2-file with the escaped content of s-file and convert everything back to xml
# * The apostrophe escape is necessary for apk xml files
sed "s:':\\\':g" "$s" | sed -e 's:^:/map/string=:' >> "$t2"
cat "$t2" | 2xml > "$t3"
# Make xml more readable
xmllint --pretty 1 --encode utf-8 "$t3" > "$oxml"
# Delete temporary files
rm -f "$t"
rm -f "$t2"
rm -f "$t3"
Currently I have
DATE_LIST=$(cat "$OUT_FILE" | xmlstarlet sel -T -t -m "//*[local-name()='entry']//*[local-name()='$start_position_date'][@name='beginposition']" -v '.' -n)
The result is something like:
DATE_LIST= 2015-10-10
2015-11-11
... and so on
IFS='\n' read -a array <<< "$DATE_LIST"
echo "${array[0]}" //I get the first one
echo "${array[1]}" //I get nothing
How to parse it correctly? DATE_LIST is generated from xml and strings are separated with \n.
This appends each line of the output to an array, supporting lines that contain whitespace.
array=()
IFS='
'
for line in $(cat "$OUT_FILE" | xmlstarlet sel -T ...)
do
array+=("$line")
done
unset IFS
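On bash 4+, mapfile (also known as readarray) does the same in one line, and it avoids the pathname expansion that the unquoted $( ... ) above is still subject to; a sketch, reusing the xmlstarlet command from the question:
mapfile -t array < <(cat "$OUT_FILE" | xmlstarlet sel -T ...)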
Given an input stream with the following lines:
123
456
789
098
...
I would like to call
curl -s http://foo.bar/some.php?id=xxx
with xxx being the number from each line, and each time have an awk script extract some information from the curl output, which is written to the output stream. I am wondering if this is possible without using awk's system() call in the following way:
cat lines | grep "^[0-9]*$" | awk '
{
system("curl -s " $0 \
" | awk \'{ #parsing; print }\'")
}'
You can use bash and avoid the awk system() call:
grep "^[0-9]*$" lines | while read line; do
curl -s "http://foo.bar/some.php?id=$line" | awk 'do your parsing ...'
done
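An equivalent without the explicit loop, using xargs to substitute each id into the URL (a sketch; the awk program is a placeholder, as above, and note that here a single awk instance parses the concatenated output of all the curl calls):
grep "^[0-9]*$" lines | xargs -I{} curl -s "http://foo.bar/some.php?id={}" | awk 'do your parsing ...'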
A shell loop would achieve a similar result, as follows:
#!/bin/bash
for f in $(cat lines | grep "^[0-9]*$"); do
curl -s "http://foo.bar/some.php?id=$f" | awk '{....}'
done
Alternative methods for doing similar tasks include using Perl or Python with an HTTP client.
If your file gets dynamically appended the id's, you can daemonize a small while loop to keep checking for more data in the file, like this:
while IFS= read -d $'\n' -r a || sleep 1; do [[ -n "$a" ]] && curl -s "http://foo.bar/some.php?id=${a}"; done < lines.txt
Otherwise, if the file is static, you can change the sleep 1 to break and the loop will read the file and quit when no data is left, which is a handy pattern to know.
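Spelled out, that static variant would look like this (a sketch, with the same placeholder URL as above):
while IFS= read -d $'\n' -r a || break; do [[ -n "$a" ]] && curl -s "http://foo.bar/some.php?id=${a}"; done < lines.txt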