Okay, I am trying to write a script that takes information from yum repolist all and puts it into pretty JSON for me to use in some data collecting. Right now I have my output from the yum command looking like this.
All I have for code right now is just the yum repolist command.
#!/bin/bash -x
yum -v repolist all | grep -B2 -A6 "enabled" | sed -e 's/[[:space:]]//g' -e 's/--//g' -e 's/name=name=/name=/g'
the output from that command looks like:
Repo-id: wazuh_repo
Repo-name: Wazuhrepository
Repo-status: enabled
Repo-revision: 1536348945
Repo-updated: FriSep712:35:512018
Repo-pkgs: 73
Repo-size: 920M
Repo-baseurl: https://packages.wazuh.com/3.x/yum/
Repo-expire: 21,600second(s)(last:WedOct3108:59:002018)
There are about 8 entries and the titles are always the same... Can someone explain like I am five how to convert this into JSON? I've read the jq man page, I've read about hashes; nothing seems to make sense. I know I need to have a "key"/"value" pair, but how do I designate these?
I just want to take the output and make it look like pretty JSON; this is part of a larger script I am writing to help keep on top of the repos we use at work. I am just totally not getting JSON, though.
edit: I would prefer not to use a wrapper function and do/learn the proper way
So first, so that people who don't have yum can test this, let's make a wrapper function:
write_output() { cat <<EOF
Repo-id: wazuh_repo
Repo-name: Wazuhrepository
Repo-status: enabled
Repo-revision: 1536348945
Repo-updated: FriSep712:35:512018
Repo-pkgs: 73
Repo-size: 920M
Repo-baseurl: https://packages.wazuh.com/3.x/yum/
Repo-expire: 21,600second(s)(last:WedOct3108:59:002018)
EOF
}
Notably, all your keys come before the ": " (colon-space) string, and the values come after it -- so we want to read line-by-line, split on the colon-space sequence, treat what was in front as a key, and treat what's behind as a value.
Given that:
jq -Rn '[inputs | split(": ")] | reduce .[] as $kv ({}; .[$kv[0]] = $kv[1])' < <(write_output)
...properly emits:
{
"Repo-id": "wazuh_repo",
"Repo-name": "Wazuhrepository",
"Repo-status": "enabled",
"Repo-revision": "1536348945",
"Repo-updated": "FriSep712:35:512018",
"Repo-pkgs": "73",
"Repo-size": "920M",
"Repo-baseurl": "https://packages.wazuh.com/3.x/yum/",
"Repo-expire": "21,600second(s)(last:WedOct3108:59:002018)"
}
...so, how does that work?
jq -R turns on raw input mode; input is parsed as a sequence of raw strings, not as a sequence of JSON documents.
jq -n treats null as the only direct input, so one can then use input and inputs primitives inside the script where needed.
[ inputs ] reads all your lines of input, and puts them into a single array.
[ inputs | split(": ")] changes that from an array of strings to an array of lists -- with content both before and after the ": " sequence.
reduce .[] as $kv ( {}; ... ) starts a reducer, with an initial value of {}, and then feeds each value that .[] evaluates to (which is to say, each item in your list) into that reducer (the ... code) as the $kv variable, replacing the . value each time.
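To see reduce in isolation, here's a minimal standalone sketch (not from the original answer) that sums a list the same way the reducer above accumulates object entries:
$ jq -n '[1,2,3] | reduce .[] as $x (0; . + $x)'
6
Each item is fed in as $x, the state . starts at 0 and is replaced by . + $x each round, and the final state is what reduce emits.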
To run this with your yum command as the real input, change < <(write_output) to < <(yum -v repolist all | grep -B2 -A6 "enabled" | sed -e 's/[[:space:]]//g' -e 's/--//g' -e 's/name=name=/name=/g').
Here is a slightly more robust variation of @CharlesDuffy's answer. Since the latter provides excellent explanatory notes, further explanations are not given here.
jq -nR '
[inputs | index(": ") as $ix | {(.[:$ix]): .[$ix+2:]}]
| add'
This avoids using split in case the "value" contains ": ". It might, however, still be better not to assume that a space follows the first relevant ":".
Notice also that add is used here instead of reduce, solely for compactness and simplicity.
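For instance, a hypothetical line whose value itself contains ": " survives the index-based slicing intact:
$ echo 'Repo-name: Wazuh: main repo' | jq -Rn '[inputs | index(": ") as $ix | {(.[:$ix]): .[$ix+2:]}] | add'
{
  "Repo-name": "Wazuh: main repo"
}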
For these sorts of problems, I would prefer to use a regular expression to match keys and values. Otherwise, I would take an approach similar to Charles's.
$ ... | jq -Rn 'reduce (inputs | capture("(?<k>[^:]+):\\s*(?<v>.+)")) as {$k, $v} ({}; .[$k] = $v)'
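Fed the same write_output lines as above, this emits the same object; note that destructuring into an object pattern like {$k, $v} requires jq 1.5 or later:
$ jq -Rn 'reduce (inputs | capture("(?<k>[^:]+):\\s*(?<v>.+)")) as {$k, $v} ({}; .[$k] = $v)' < <(write_output)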
Related
I have JSON data as follows in a data.json file:
[
{"original_name":"pdf_convert","changed_name":"pdf_convert_1"},
{"original_name":"video_encode","changed_name":"video_encode_1"},
{"original_name":"video_transcode","changed_name":"video_transcode_1"}
]
I want to iterate through the array and extract the value for each element in a loop. I looked at jq, but I find it difficult to use it to iterate. How can I do that?
Just use a filter that returns each item in the array, then loop over the results. Just make sure you use the compact output option (-c) so each result is put on a single line and is treated as one item in the loop.
jq -c '.[]' input.json | while IFS= read -r i; do
# do stuff with $i
done
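For instance, a minimal sketch of a loop body that pulls one field out of each compact row (field name taken from the question's data.json):
jq -c '.[]' input.json | while IFS= read -r i; do
  name=$(jq -r '.original_name' <<< "$i")
  echo "processing $name"
done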
By leveraging the power of Bash arrays, you can do something like:
# read each item in the JSON array to an item in the Bash array
readarray -t my_array < <(jq --compact-output '.[]' input.json)
# iterate through the Bash array
for item in "${my_array[#]}"; do
original_name=$(jq --raw-output '.original_name' <<< "$item")
changed_name=$(jq --raw-output '.changed_name' <<< "$item")
# do your stuff
done
jq has a shell formatting option: @sh.
You can use the following to format your json data as shell parameters:
cat data.json | jq '. | map([.original_name, .changed_name])' | jq '.[] | @sh'
The output will look like:
"'pdf_convert' 'pdf_convert_1'"
"'video_encode' 'video_encode_1'",
"'video_transcode' 'video_transcode_1'"
To process each row, we need to do a couple of things:
Set the bash for-loop to read the entire row, rather than stopping at the first space (default behavior).
Strip the enclosing double-quotes off of each row, so each value can be passed as a parameter to the function which processes each row.
To read the entire row on each iteration of the bash for-loop, set the IFS variable, as described in this answer.
To strip off the double-quotes, we'll run it through the bash shell interpreter using xargs:
stripped=$(echo "$row" | xargs echo)
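A quick demonstration of the quote-stripping (input string made up for illustration):
$ echo '"hello world"' | xargs echo
hello world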
Putting it all together, we have:
#!/bin/bash
function processRow() {
original_name=$1
changed_name=$2
# TODO
}
IFS=$'\n' # Each iteration of the for loop should read until we find an end-of-line
for row in $(cat data.json | jq '. | map([.original_name, .changed_name])' | jq '.[] | @sh')
do
# Run the row through the shell interpreter to remove enclosing double-quotes
stripped=$(echo $row | xargs echo)
# Call our function to process the row
# eval must be used to interpret the spaces in $stripped as separating arguments
eval processRow $stripped
done
unset IFS # Return IFS to its original value
From Iterate over json array of dates in bash (has whitespace)
items=$(echo "$JSON_Content" | jq -c -r '.[]')
for item in ${items[@]}; do
echo $item
# whatever you are trying to do ...
done
Try building it around this example. (Source: Original Site)
Example:
jq '[foreach .[] as $item ([[],[]]; if $item == null then [[],.[0]] else [(.[0] + [$item]),[]] end; if $item == null then .[1] else empty end)]'
Input: [1,2,3,4,null,"a","b",null]
Output: [[1,2,3,4],["a","b"]]
None of the answers here worked for me, out-of-the-box.
What did work was a combination of a few:
projectList=$(echo "$projRes" | jq -c '.projects[]')
IFS=$'\n' # Read till newline
for project in ${projectList[@]}; do
projectId=$(jq '.id' <<< "$project")
projectName=$(jq -r '.name' <<< "$project")
...
done
unset IFS
NOTE: I'm not using the same data as the question does. In this example, assume projRes is the output from an API that gives us a JSON list of projects, e.g.:
{
"projects": [
{"id":1,"name":"Project"},
... // array of projects
]
}
An earlier answer in this thread suggested using jq's foreach, but that may be much more complicated than needed, especially given the stated task. Specifically, foreach (and reduce) are intended for certain cases where you need to accumulate results.
In many cases (including some cases where eventually a reduction step is necessary), it's better to use .[] or map(f). The latter is just another way of writing [.[] | f], so if you are going to use jq, it's really useful to understand that .[] simply creates a stream of values.
For example, [1,2,3] | .[] produces a stream of the three values.
To take a simple map-reduce example, suppose you want to find the maximum length of an array of strings. One solution would be [ .[] | length] | max.
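For instance, a quick run of that map-reduce one-liner (input array made up for illustration):
$ jq -n '["short","longer","longest!"] | [ .[] | length ] | max'
8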
Here is a simple example that works in the zsh shell:
DOMAINS='["google","amazon"]'
arr=$(echo "$DOMAINS" | jq -c '.[]')
for d in $arr; do
printf "Here is your domain: ${d}\n"
done
I stopped using jq and started using jp, since JMESPath is the same language as used by the --query argument of my cloud service, and I find it difficult to juggle both languages at once. You can quickly learn the basics of JMESPath expressions here: https://jmespath.org/tutorial.html
Since you didn't specifically ask for a jq answer but instead, an approach to iterating JSON in bash, I think it's an appropriate answer.
Style points:
I use backticks and those have fallen out of fashion. You can substitute with another command substitution operator.
I use cat to pipe the input contents into the command. Yes, you can also specify the filename as a parameter, but I find this distracting because it breaks my left-to-right reading of the sequence of operations. Of course you can update this from my style to yours.
set -u has no function in this solution, but is important if you are fiddling with bash to get something to work. It makes the shell treat references to unset variables as errors, so a misspelled variable name fails loudly instead of silently expanding to nothing.
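A quick sketch of what set -u catches (variable name hypothetical):
$ set -u
$ echo "$COUTN"    # misspelling of a $COUNT variable that was never set
bash: COUTN: unbound variable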
Here's how I do it:
#!/bin/bash
set -u
# exploit the JMESpath length() function to get a count of list elements to iterate
export COUNT=`cat data.json | jp "length( [*] )"`
# The `seq` command produces the sequence `0 1 2` for our indexes
# The $(( )) operator in bash produces an arithmetic result ($COUNT minus one)
for i in `seq 0 $((COUNT - 1))` ; do
# The list elements in JMESpath are zero-indexed
echo "Here is element $i:"
cat data.json | jp "[$i]"
# Add or replace whatever operation you like here.
done
Now, it would also be a common use case to pull the original JSON data from an online API and not from a local file. In that case, I use a slightly modified technique of caching the full result in a variable:
#!/bin/bash
set -u
# cache the JSON content in a shell variable, downloading it only once
export DATA=`api --profile foo compute instance list --query "bar"`
export COUNT=`echo "$DATA" | jp "length( [*] )"`
for i in `seq 0 $((COUNT - 1))` ; do
echo "Here is element $i:"
echo "$DATA" | jp "[$i]"
done
This second example has the added benefit that if the data is changing rapidly, you are guaranteed to have a consistent count between the elements you are iterating through, and the elements in the iterated data.
This is what I have done so far
arr=$(echo "$array" | jq -c -r '.[]')
for item in ${arr[@]}; do
original_name=$(echo $item | jq -r '.original_name')
changed_name=$(echo $item | jq -r '.changed_name')
echo $original_name $changed_name
done
I want to insert new JSON objects in between existing JSON objects using a bash-generated UUID.
input json file test.json
{"name":"a","type":1}
{"name":"b","type":2}
{"name":"c","type":3}
input bash command uuidgen -r
target output json
{"id": "7e3ca7b0-48f1-41fe-9a19-092a62cba0dc"}
{"name":"a","type":1}
{"id": "3f793fdd-ec3b-4306-8153-12f3f9faf2c1"}
{"name":"b","type":2}
{"id": "cbcd759a-37e7-4da7-b7fe-7572f474ec31"}
{"name":"c","type":3}
basic jq program to insert new objects
jq -c '{"id"}, .' test.json
output json
{"id":null}
{"name":"a","type":1}
{"id":null}
{"name":"b","type":2}
{"id":null}
{"name":"c","type":3}
jq program to insert uuid generated from bash:
jq -c '{"id" | input}, .' test.json < <(uuidgen)
I'm unsure how to handle two inputs: the bash command used to create a value in the new object, and the input file to be transformed (with a new object inserted before each existing object).
I want to process small and large JSON files, up to a few gigabytes each.
I would greatly appreciate some help with a well-designed solution that would scale for large files and perform the operations quickly and efficiently.
Thanks in advance.
If the input file is already well-formed JSONL, then a simple bash solution would be:
while IFS= read -r line; do
printf "{\"id\": \"%s\"}\n" $(uuidgen)
printf '%s\n' "$line"
done < test.json
This might well be the best trivial solution if test.json is very large and known to be valid JSONL.
If the input file is not already JSONL, then you could still use the above approach by piping in jq -c . test.json. And if read is too slow, you could still use the above text-processing approach with awk, as sketched below.
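For example, a rough awk sketch of that interleaving (it still shells out to uuidgen once per input line, like the read loop above):
awk '{ cmd = "uuidgen"; cmd | getline id; close(cmd);
       printf "{\"id\": \"%s\"}\n", id; print }' test.json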
For the record, a single-call-to-jq solution along the lines you have in mind could be constructed as follows:
jq -n -c -R --slurpfile objects test.json '
$objects[] | {"id": input}, .' <(while true ; do uuidgen ; done)
Obviously you cannot "slurp" the unbounded stream of uuidgen values; less obviously perhaps, if you were simply to pipe in the stream, the process would hang.
Since @peak has already covered the jq side of the problem, I'm going to take a shot at doing this more efficiently using Python, still wrapped so it can be called in a shell script.
This assumes that your input is JSONL, with one document per line. If it isn't, consider piping through jq -c . before piping into the below.
#!/usr/bin/env bash
py_prog=$(cat <<'EOF'
import json, sys, uuid
for line in sys.stdin:
print(json.dumps({"id": str(uuid.uuid4())}))
sys.stdout.write(line)
EOF
)
python -c "$py_prog" <in.json >out.json
Here's another approach, where jq handles the input as raw strings that have already been muxed by a separate copy of bash.
while IFS= read -r line; do
uuidgen
printf '%s\n' "$line"
done | jq -Rrc '({ "id": . }, input)'
It still has all the performance overhead of calling uuidgen once per input line (plus some extra overhead because bash's read operates one byte at a time) -- but it operates in a fixed amount of memory without needing Python.
If the input was not known in advance to be valid JSONL,
one of the following bash+jq solutions might make sense
since the overhead of counting the number of objects would be relatively small.
If the input is small enough to fit in memory, you could go with a simple solution:
n=$(jq -n 'reduce inputs as $in (0; .+1)' test.json)
for ((i=0; i < $n; i++)); do uuidgen ; done |
jq -n -c -R --slurpfile objects test.json '
$objects[] | {"id": input}, .'
Otherwise, that is, if the input is very large, then one could avoid slurping it as follows:
n=$(jq -n 'reduce inputs as $in (0; .+1)' test.json)
jq -nc --rawfile ids <(for ((i=0; i < $n; i++)); do uuidgen ; done) '
$ids | split("\n") as $ids
| foreach inputs as $in (-1; .+1; {id: $ids[.]}, $in)
' test.json
In my bash script, when I run the following jq against my curl result:
curl -u someKey:someSecret someURL 2>/dev/null | jq -r '.schema' | jq -r -c '.fields'
I get back a JSON array as follows:
[{"name":"id","type":"int","doc":"Documentation for the id field."},{"name":"test_string","type":"string","doc":"Documentation for the test_string field"}]
My goal is to make a jq call that returns the following (given the example above):
{"id":1234567890,"test_string":"xxxxxxxxxx"}
NB: I am trying to automatically generate templated values that match the "schema" JSON shown above.
So just to clarify, that is:
all array objects (there could be more than 2 shown above) returned in a single comma-delimited row
doc fields are ignored
the values for "name" (including their surrounding double-quotes) are concatenated with either:
:1234567890 ...when the "type" for that object is "int"
":xxxxxxxxxx" ...when the "type" for that object is "string"
NB: these will be the only types we ever get for now
Can someone show me how I can expand upon my initial jq to return this?
NB: I tried working down the following path but am failing beyond this...
curl -u someKey:someSecret someURL 2>/dev/null | jq -r '.schema' | jq -r -c '.fields' | "\(.name):xxxxxxxxxxx"'
If it's not possible in pure JQ (my preference) I'm also happy for a solution that mixes in a bit of sed/awk magic :)
Cheers,
Stan
Given the JSON shown, you could add the following to your pipeline:
jq -c 'map({(.name): (if .type == "int" then 1234567890 else "xxxxxxxxxx" end)})|add'
With that JSON, the output would be:
{"id":1234567890,"test_string":"xxxxxxxxxx"}
However, it would be far better if you combined the three calls to jq into one.
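As a sketch of such a single call (assuming .schema arrives as a JSON-encoded string, which is what the double jq -r pass in the question suggests; if it is already an object, drop the fromjson):
curl -u someKey:someSecret someURL 2>/dev/null |
  jq -c '.schema | fromjson | .fields
         | map({(.name): (if .type == "int" then 1234567890 else "xxxxxxxxxx" end)})
         | add'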
Title may be incorrect as I'm not actually sure where this is failing. I have a bash script running in one directory, and a JSON file I need a value from in a different directory. I want to copy the value from the external directory into an identical JSON file in the current directory.
I'm using jq to grab the value, but I can't figure out how to grab from a directory other than the one the script is running in.
The relevant bits of file structure are as follows;
cloudformation
- parameters_v13.json
environment_files
- prepare_stack_files.json (the script this is run from)
- directory, changes based on where the script is pointed
- created directory where created files are being output
- GREPNAME_parameters.json
The chunk of the JSON file I'm interested in looks like this;
[
{
"ParameterKey": "RTSMEMAIL",
"ParameterValue": "secretemail"
}
]
The script needs to get the "secretemail" from cloudformation/parameters_v13.json and paste it into the matching RTSMEMAIL field in the GREPNAME_parameters.json file.
I've been attempting the following with no luck - nothing is output. No error message either, just blank output. I know the GREPNAME path is correct because it's used elsewhere with no issues.
jq --arg email "$EMAIL" '(.[] | select(.ParameterKey == "RTSMEMAIL") | .ParameterValue) |= $email' ../cloudformation/parameters_v13.json | sponge ${GREPNAME}_parameters.json
This jq filter should help you get the secretemail string:
jq '.[] | select(.ParameterKey=="RTSMEMAIL") | .ParameterValue' json
"secretemail"
Add the -r flag for raw output to remove the quotes around the value:
jq -r '.[] | select(.ParameterKey=="RTSMEMAIL") | .ParameterValue' json
secretemail
--raw-output / -r:
With this option, if the filter’s result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
As I see it, you are trying to pass args to the jq filter. For the extraction, you can first set a variable in bash:
email="RTSMEMAIL"
and now pass it to the filter as
jq --arg email "$email" -r '.[] | select(.ParameterKey==$email) | .ParameterValue' json
secretemail
Now, to copy the string obtained from the parameters_v13.json file into your GREPNAME_parameters.json, follow these steps.
First, store the result from the first file in a variable to reuse later. I have used a file named json for the extraction; this stands in for your parameters_v13.json file in the other path.
replacementValue=$(jq --arg email "$email" -r '.[] | select(.ParameterKey==$email) | .ParameterValue' json)
Now $replacementValue holds the secretemail value that you want to write into the other file. As you have indicated, GREPNAME_parameters.json has a syntax similar to the first file's, something like below:
$ cat GREPNAME_parameters.json
[
{
"ParameterKey": "SOMEJUNK",
"ParameterValue": "somejunkvalue"
}
]
Now I understand your intention is to replace "ParameterValue" in the above file with the value obtained from the other file. To achieve that:
jq --arg replace "$replacementValue" '.[] | .ParameterValue = $replace' GREPNAME_parameters.json
{
"ParameterKey": "SOMEJUNK",
"ParameterValue": "secretemail"
}
You can then write this output to a temp file and move it back as GREPNAME_parameters.json. Hope this answers your question.
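One caveat with a sketch: the .[] filter above emits bare objects, so writing its output straight back would lose the enclosing array; map(...) keeps it. Assuming sponge is not available, a temp-file round trip might look like:
jq --arg replace "$replacementValue" 'map(.ParameterValue = $replace)' GREPNAME_parameters.json > tmp.json && mv tmp.json GREPNAME_parameters.json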
@Alex -
(1) sponge simply provides a convenient way to modify a file without having to manage a temporary file. You could use it like this:
jq ........ input.json | sponge input.json
Here, "input.json" is the file that you want to edit "in place". If you want to avoid overwriting the input file, you would not use sponge. In fact, I would recommend against doing so until you're absolutely sure that's what you want.
(2) There are several strategies for achieving what you have described using jq. They basically fall into two categories: (a) invoke jq twice; (b) invoke jq once.
Ignoring the sponge part:
the pattern for using jq twice would be as follows:
param=$(jq -r '.[]
| select(.ParameterKey == "RTSMEMAIL")|.ParameterValue
' cloudformation/parameters_v13.json )
jq --arg param "$param" -f edit.jq input.json
assuming you have jq 1.5, the pattern for doing everything with just one invocation of jq would be:
jq --argfile p cloudformation/parameters_v13.json -f manage.jq input.json
Here, edit.jq and manage.jq are files containing suitable jq programs.
Based on my understanding of your requirements, edit.jq might look like this:
(.[] | select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) |= $param
And manage.jq might look like this:
($p[] | select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) as $param
| (.[]| select(.ParameterKey == "RTSMEMAIL")|.ParameterValue) |= $param
I would like to convert a list into a JSON array. I'm looking at jq for this, but the examples are mostly about parsing JSON (not creating it). It would be nice to know proper escaping will occur. My list is single-line elements, so the newline will probably be the best delimiter.
I was also trying to convert a bunch of lines into a JSON array, and was at a standstill until I realized that -s was the only way I could handle more than one line at a time in the jq expression, even if that meant I'd have to parse the newlines manually.
jq -R -s -c 'split("\n")' < just_lines.txt
-R to read raw input
-s to read all input as a single string
-c to not pretty print the output
Easy peasy.
Edit: I'm on jq ≥ 1.4, which is apparently when the split built-in was introduced.
--raw-input, then --slurp
Just summarizing what the others have said in a hopefully quicker to understand form:
cat /etc/hosts | jq --raw-input . | jq --slurp .
will return you:
[
"fe00::0 ip6-localnet",
"ff00::0 ip6-mcastprefix",
"ff02::1 ip6-allnodes",
"ff02::2 ip6-allrouters"
]
Explanation
--raw-input/-R:
Don't parse the input as JSON. Instead, each line of text is passed
to the filter as a string. If combined with --slurp, then the
entire input is passed to the filter as a single long string.
--slurp/-s:
Instead of running the filter for each JSON object in the input,
read the entire input stream into a large array and run the filter
just once.
You can also use jq -R . to format each line as a JSON string and then jq -s (--slurp) to create an array for the input lines after parsing them as JSON:
$ printf %s\\n aa bb|jq -R .|jq -s .
[
"aa",
"bb"
]
The method in chbrown's answer adds an empty element to the end if the input ends with a linefeed, but you can use printf %s "$(cat)" to remove trailing linefeeds:
$ printf %s\\n aa bb|jq -R -s 'split("\n")'
[
"aa",
"bb",
""
]
$ printf %s\\n aa bb|printf %s "$(cat)"|jq -R -s 'split("\n")'
[
"aa",
"bb"
]
If the input lines don't contain ASCII control characters (which have to be escaped in strings in valid JSON), you can use sed:
$ printf %s\\n aa bb|sed 's/["\]/\\&/g;s/.*/"&"/;1s/^/[/;$s/$/]/;$!s/$/,/'
["aa",
"bb"]
Update: If your jq has inputs you can simply write:
jq -nR '[inputs]' /etc/hosts
to produce a JSON array of strings. This avoids having to read the text file as a whole.
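For example, with a small throwaway file (path hypothetical; inputs needs jq 1.5+):
$ printf 'aa\nbb\n' > /tmp/lines.txt
$ jq -nR '[inputs]' /tmp/lines.txt
[
  "aa",
  "bb"
]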
I found in the man page for jq and through experimentation what seems to me to be a simpler answer.
$ cat test_file.txt | jq -Rsc '. / "\n" - [""]'
["aa","bb"]
The -R is to read without trying to parse JSON, the -s says to read all of the input as one string, and the -c is for one-line output - not necessary, but it's what I was looking for.
Then in the string I pass to jq, the '.' says take the input as it is. The '/ \n' says to divide the string (split it) on newlines. The '- [""]' says to remove from the resulting array any empty strings (resulting from an extra newline at the end).
It's one line and without any complicated constructs, using just simple built in jq features.