How to dump JSON output to a file using jq? - bash

I am trying to copy certain key:value pairs from a JSON file to a normal text file using jq within a bash script. I am doing:
#!/usr/bin/env bash
TestConfig="[...]/config_test.json"
#create new empty file if it doesn't exist
echo -n "" > test_sample.txt
echo "X=$(jq -r .abc $TestConfig '.' > test_sample.txt)"
echo "Y=$(jq -r .xyz $TestConfig '.' > test_sample.txt)"
But this copies only the values of .abc and .xyz to test_sample.txt. What I am expecting is:
cat test_sample.txt
X=test1
Y=test2
The config_test.json is:
{
"abc": "test1",
"xyz": "test2"
}
Can anyone please let me know what needs to be changed to get the expected outcome? The .json file is quite big and I am extracting many more key:value pairs from it, so any looping approach that reduces the number of jq invocations would be helpful.
Thanks in advance.
P.S: Please let me know if any info is missing.

If I understood correctly, you want to convert a JSON object's fields to raw text, following a key=value structure.
Use to_entries to decompose the object, iterate over its items with [], and output a formatted string using the .key and the .value. Make sure the output is raw text using -r:
jq -r 'to_entries[] | "\(.key)=\(.value)"' config_test.json > test_sample.txt
abc=test1
xyz=test2
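One way to also rename the keys to X and Y as in your expected output, extracting just those two fields, is to build the mapping in the same single jq call (one invocation instead of one per key, which also addresses your timing concern since the file is read once):
jq -r '{X: .abc, Y: .xyz} | to_entries[] | "\(.key)=\(.value)"' config_test.json > test_sample.txt
cat test_sample.txt
X=test1
Y=test2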

Related

Bash array containing output from 'find' function is incorrectly structured

I am trying to create an array in bash that contains filenames for a subset of files stored in a single folder. I want the array to contain only filenames with the common string "zzz", with one filename per element. I have been trying to use the find command to get filenames containing "zzz" and to store the results in myarray.
Here is what I'm doing:
# Define folder containing files
file_dir=./my_files
# Define the common string
pattern="*zzz*"
# Store find output to myarray
readarray -d ' ' -t myarray < <(find ${file_dir} -name ${pattern})
# Print myarray
echo $myarray
Output:
./my_files/abc_zzz_1.nii.gz ./my_files/def_zzz_763.nii.gz ./my_files/ghi_zzz_628.nii.gz
myarray contains the correct filenames, but it does not appear to be structured in a way that allows indexing. I would like to index the nth filename with ${myarray[n]}, yet the full output from find seems to be stored in a single element: echo ${myarray[0]} prints the same output as above, while echo ${myarray[1]} prints an empty line.
I figured that the whole output from find was being stored as a single string in ${myarray[0]}, so I tried to break the string up using:
read -r -a myarray2 <<< "${myarray[0]}"
...but this did not work as intended, because echo ${myarray2} only returns a single filename.
What am I doing wrong here?
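
A likely culprit: readarray -d ' ' sets the delimiter to a space, but find separates its results with newlines, so the entire output lands in a single element. Dropping -d (newline is the default delimiter) should fix it; a sketch, assuming GNU or BSD find:
# Quote the pattern so the shell doesn't expand the glob before find sees it
readarray -t myarray < <(find "$file_dir" -name "$pattern")
# Or NUL-delimited, safe even for filenames containing newlines (readarray -d '' needs bash 4.4+):
readarray -d '' -t myarray < <(find "$file_dir" -name "$pattern" -print0)
echo "${myarray[1]}"   # second filename, indexed as expected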

How do I concatenate dummy values in JQ based on field value, and then CSV-aggregate these concatenations?

In my bash script, when I run the following jq against my curl result:
curl -u someKey:someSecret someURL 2>/dev/null | jq -r '.schema' | jq -r -c '.fields'
I get back a JSON array as follows:
[{"name":"id","type":"int","doc":"Documentation for the id field."},{"name":"test_string","type":"string","doc":"Documentation for the test_string field"}]
My goal is to do a call with jq applied to return the following (given the example above):
{"id":1234567890,"test_string":"xxxxxxxxxx"}
NB: I am trying to automatically generate templated values that match the "schema" JSON shown above.
So just to clarify, that is:
all array objects (there could be more than 2 shown above) returned in a single comma-delimited row
doc fields are ignored
the values for "name" (including their surrounding double-quotes) are concatenated with either:
:1234567890 ...when the "type" for that object is "int"
":xxxxxxxxxx" ...when the "type" for that object is "string"
NB: these will be the only types we ever get for now
Can someone show me how I can expand upon my initial jq to return this?
NB: I tried working down the following path but am failing beyond this...
curl -u someKey:someSecret someURL 2>/dev/null | jq -r '.schema' | jq -r -c '.fields' | "\(.name):xxxxxxxxxxx"'
If it's not possible in pure JQ (my preference) I'm also happy for a solution that mixes in a bit of sed/awk magic :)
Cheers,
Stan
Given the JSON shown, you could add the following to your pipeline:
jq -c 'map({(.name): (if .type == "int" then 1234567890 else "xxxxxxxxxx" end)})|add'
With that JSON, the output would be:
{"id":1234567890,"test_string":"xxxxxxxxxx"}
However, it would be far better if you combined the three calls to jq into one.
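For instance, a single-invocation sketch (assuming .schema is a JSON object; if it arrives as a JSON-encoded string, insert | fromjson after .schema):
curl -u someKey:someSecret someURL 2>/dev/null |
  jq -c '.schema.fields | map({(.name): (if .type == "int" then 1234567890 else "xxxxxxxxxx" end)}) | add'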

Parse yaml file with varying number of key:values

Long story short, I will be parsing yaml files in a directory with bash using yq. My yaml files could look like this:
CLIENT_FIRST_NAME: bob
CLIENT_LAST_NAME: smith
Or
CLIENT_FIRST_NAME: bob
CLIENT_LAST_NAME: smith
CLIENT_MIDDLE_NAME: michael
So I am looping through each file with a for loop and setting variables from the values.
For example:
for f in $FILES
do
FIRSTNAME=$(yq r $f CLIENT_FIRST_NAME)
LASTNAME=$(yq r $f CLIENT_LAST_NAME)
add client --firstname=${FIRSTNAME} --lastname=${LASTNAME}
done
But sometimes I will have that middle name and I would need to include that:
add client --firstname=${FIRSTNAME} --lastname=${LASTNAME} --middlename=${MIDDLENAME}
The order doesn't matter, I just need to be able to account for additional fields that may show up in the yaml that need to be added to the 'add client' command. EVERY line in the yaml will be added to the command. Every key added will be a viable parameter for the 'add client' command. I don't have to worry about whether or not a key in the yaml is a valid parameter. They WILL be.
Curious on the best approach to the unknown here. Thanks!
I'm assuming yq returns nothing if it doesn't find a key.
I might build the entire flag based on whether yq returns something, like:
for f in "${FILES[@]}"
do
FIRSTNAME=$(yq r "$f" CLIENT_FIRST_NAME)
MIDDLENAME=$(yq r "$f" CLIENT_MIDDLE_NAME)
LASTNAME=$(yq r "$f" CLIENT_LAST_NAME)
[[ -n $MIDDLENAME ]] && MIDDLENAME="--middlename=${MIDDLENAME}"
# ${MIDDLENAME:+...} expands only when MIDDLENAME is non-empty, so no empty argument is passed
add client --firstname="${FIRSTNAME}" --lastname="${LASTNAME}" ${MIDDLENAME:+"$MIDDLENAME"}
done
This code would be far more efficient if you only ran yq once per input file, not once per data item per input file. Consider:
for f in *.yml; do
{ read -r firstname; read -r middlename; read -r lastname; } < <(
yq -r '(.CLIENT_FIRST_NAME, .CLIENT_MIDDLE_NAME // "", .CLIENT_LAST_NAME)' "$f"
)
add client \
--firstname="$firstname" \
${middlename:+--middlename="$middlename"} \
--lastname="$lastname"
done
Some notes to use in reading this:
Each read command in bash reads one line, unless -d is used to change the delimiter.
The above yq command outputs one line per data item.
Using // "" causes the empty string, instead of null, to be used when no CLIENT_MIDDLE_NAME is found.
${foo:+...words here...} expands to ...words here... if-and-only-if foo is set to a non-empty value.
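Since every key in the YAML becomes a flag, you could also go fully generic and avoid hard-coding field names. A sketch, assuming the jq-wrapper yq used above, and assuming (my guess, adjust as needed) that a key like CLIENT_FIRST_NAME maps to --firstname by stripping the CLIENT_ prefix, dropping underscores, and lowercasing:
for f in *.yml; do
  args=()
  while IFS='=' read -r key value; do
    flag=${key#CLIENT_}    # strip the CLIENT_ prefix
    flag=${flag//_/}       # drop underscores: FIRST_NAME -> FIRSTNAME
    flag=${flag,,}         # lowercase (bash 4+): FIRSTNAME -> firstname
    args+=("--${flag}=${value}")
  done < <(yq -r 'to_entries[] | "\(.key)=\(.value)"' "$f")
  add client "${args[@]}"
done
This assumes no value contains a newline, so each key=value pair arrives on its own line.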

Get json field value with JQ from different directory

Title may be incorrect as I'm not actually sure where this is failing. I have a bash script running in one directory, and a JSON file I need a value from in a different directory. I want to copy the value from the external directory into an identical JSON file in the current directory.
I'm using jq to grab the value, but I can't figure out how to grab from a directory other than the one the script is running in.
The relevant bits of file structure are as follows;
cloudformation
- parameters_v13.json
environment_files
- prepare_stack_files.json (the script this is run from)
- directory, changes based on where the script is pointed
- created directory where created files are being output
- GREPNAME_parameters.json
The chunk of the JSON file I'm interested in looks like this;
[
{
"ParameterKey": "RTSMEMAIL",
"ParameterValue": "secretemail"
}
]
The script needs to get the "secretemail" from cloudformation/parameters_v13.json and paste it into the matching RTSMEMAIL field in the GREPNAME_parameters.json file.
I've been attempting the following with no luck - nothing is output. No error message either, just blank output. I know the GREPNAME path is correct because it's used elsewhere with no issues.
jq --arg email "$EMAIL" '(.[] | select(.ParameterKey == "RTSMEMAIL") | .ParameterValue) |= $email' ../cloudformation/parameters_v13.json | sponge ${GREPNAME}_parameters.json
This jq filter should help you get the secretemail string:
jq '.[] | select(.ParameterKey=="RTSMEMAIL") | .ParameterValue' json
"secretemail"
Add the -r flag for raw output to remove the quotes around the value:
jq -r '.[] | select(.ParameterKey=="RTSMEMAIL") | .ParameterValue' json
secretemail
--raw-output / -r:
With this option, if the filter’s result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
Since you are trying to pass args to the jq filter, for extraction you can first set a variable in bash:
email="RTSMEMAIL"
and now pass it to the filter as
jq --arg email "$email" -r '.[] | select(.ParameterKey==$email) | .ParameterValue' json
secretemail
Now, to copy the value obtained from the parameters_v13.json file into your GREPNAME_parameters.json, follow these steps.
First, store the result from the first file in a variable for reuse. The filename json below again stands in for your parameters_v13.json at its actual path:
replacementValue=$(jq --arg email "$email" -r '.[] | select(.ParameterKey==$email) | .ParameterValue' json)
Now $replacementValue holds the secretemail value that you want to write into the other file. As you indicated, GREPNAME_parameters.json has a similar structure to the first file, something like:
$ cat GREPNAME_parameters.json
[
{
"ParameterKey": "SOMEJUNK",
"ParameterValue": "somejunkvalue"
}
]
Now, I understand your intention is to replace the "ParameterValue" in the above file with the value obtained from the other file. To achieve that:
jq --arg replace "$replacementValue" '.[] | .ParameterValue = $replace' GREPNAME_parameters.json
{
"ParameterKey": "SOMEJUNK",
"ParameterValue": "secretemail"
}
You can then write this output to a temp file and move it back as GREPNAME_parameters.json. Hope this answers your question.
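The temp-file dance could look like this (or use sponge from moreutils, as in your original command):
tmp=$(mktemp)
jq --arg replace "$replacementValue" '.[] | .ParameterValue = $replace' \
  GREPNAME_parameters.json > "$tmp" && mv "$tmp" GREPNAME_parameters.json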
@Alex -
(1) sponge simply provides a convenient way to modify a file without having to manage a temporary file. You could use it like this:
jq ........ input.json | sponge input.json
Here, "input.json" is the file that you want to edit "in place". If you want to avoid overwriting the input file, you would not use sponge. In fact, I would recommend against doing so until you're absolutely sure that's what you want.
(2) There are several strategies for achieving what you have described using jq. They basically fall into two categories: (a) invoke jq twice; (b) invoke jq once.
Ignoring the sponge part:
the pattern for using jq twice would be as follows:
param=$(jq -r '.[]
| select(.ParameterKey == "RTSMEMAIL")|.ParameterValue
' cloudformation/parameters_v13.json )
jq --arg param "$param" -f edit.jq input.json
assuming you have jq 1.5, the pattern for doing everything with just one invocation of jq would be:
jq --argfile p cloudformation/parameters_v13.json -f manage.jq input.json
Here, edit.jq and manage.jq are files containing suitable jq programs.
Based on my understanding of your requirements, edit.jq might look like this:
(.[] | select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) |= $param
And manage.jq might look like this:
($p[] | select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) as $param
| (.[]| select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) |= $param
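Inlined, with the sponge part added back, the one-invocation edit might look like this (jq 1.5 for --argfile; the relative path assumes the directory layout described in the question):
jq --argfile p ../cloudformation/parameters_v13.json '
  ($p[] | select(.ParameterKey == "RTSMEMAIL") | .ParameterValue) as $param
  | (.[] | select(.ParameterKey == "RTSMEMAIL") | .ParameterValue) |= $param
' "${GREPNAME}_parameters.json" | sponge "${GREPNAME}_parameters.json"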

converting lines to json in bash

I would like to convert a list into a JSON array. I'm looking at jq for this, but the examples are mostly about parsing JSON (not creating it). It would be nice to know that proper escaping will occur. My list consists of single-line elements, so newline is probably the best delimiter.
I was also trying to convert a bunch of lines into a JSON array, and was at a standstill until I realized that -s was the only way I could handle more than one line at a time in the jq expression, even if that meant I'd have to parse the newlines manually.
jq -R -s -c 'split("\n")' < just_lines.txt
-R to read raw input
-s to read all input as a single string
-c to not pretty print the output
Easy peasy.
Edit: I'm on jq ≥ 1.4, which is apparently when the split built-in was introduced.
--raw-input, then --slurp
Just summarizing what the others have said in a hopefully quicker to understand form:
cat /etc/hosts | jq --raw-input . | jq --slurp .
will return you:
[
"fe00::0 ip6-localnet",
"ff00::0 ip6-mcastprefix",
"ff02::1 ip6-allnodes",
"ff02::2 ip6-allrouters"
]
Explanation
--raw-input/-R:
Don't parse the input as JSON. Instead, each line of text is passed
to the filter as a string. If combined with --slurp, then the
entire input is passed to the filter as a single long string.
--slurp/-s:
Instead of running the filter for each JSON object in the input,
read the entire input stream into a large array and run the filter
just once.
You can also use jq -R . to format each line as a JSON string and then jq -s (--slurp) to create an array for the input lines after parsing them as JSON:
$ printf %s\\n aa bb|jq -R .|jq -s .
[
"aa",
"bb"
]
The method in chbrown's answer adds an empty element to the end if the input ends with a linefeed, but you can use printf %s "$(cat)" to remove trailing linefeeds:
$ printf %s\\n aa bb|jq -R -s 'split("\n")'
[
"aa",
"bb",
""
]
$ printf %s\\n aa bb|printf %s "$(cat)"|jq -R -s 'split("\n")'
[
"aa",
"bb"
]
If the input lines don't contain ASCII control characters (which have to be escaped in strings in valid JSON), you can use sed:
$ printf %s\\n aa bb|sed 's/["\]/\\&/g;s/.*/"&"/;1s/^/[/;$s/$/]/;$!s/$/,/'
["aa",
"bb"]
Update: If your jq has inputs you can simply write:
jq -nR '[inputs]' /etc/hosts
to produce a JSON array of strings. This avoids having to read the text file as a whole.
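For example:
$ printf %s\\n aa bb | jq -nRc '[inputs]'
["aa","bb"]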
I found in the man page for jq and through experimentation what seems to me to be a simpler answer.
$ cat test_file.txt | jq -Rsc '. / "\n" - [""]'
["aa","bb"]
The -R is to read without trying to parse json, the -s says to read all of the input as one string, and the -c is for one-line output - not necessary, but it's what I was looking for.
Then in the string I pass to jq, the '.' says to take the input as it is. The '/ "\n"' divides (splits) the string on newlines. The '- [""]' removes any empty strings (resulting from a trailing newline) from the resulting array.
It's one line and without any complicated constructs, using just simple built in jq features.
