I have a JSON result and I need to search for a specific value and get it from the array.
For example, here is my JSON, and I need to find the 1.15 version with the highest patch level inside the validNodeVersions array. Here I want to retrieve the value 1.15.12-gke.20, which is the highest 1.15 version in the list. Can somebody please help with this?
Basically, I always want to pick the highest patch release for any given minor version; for 1.15 that is 1.15.12-gke.20.
gcloud container get-server-config --format json
{
"channels": [
{
"channel": "REGULAR",
"defaultVersion": "1.17.9-gke.1504",
"validVersions": [
"1.17.9-gke.6300",
"1.17.9-gke.1504"
]
},
{
"channel": "STABLE",
"defaultVersion": "1.16.13-gke.401",
"validVersions": [
"1.16.13-gke.401",
"1.15.12-gke.20"
]
}
],
"defaultClusterVersion": "1.16.13-gke.401",
"defaultImageType": "COS",
"validImageTypes": [
"UBUNTU",
"UBUNTU_CONTAINERD"
],
"validMasterVersions": [
"1.17.12-gke.500",
"1.14.10-gke.50"
],
"validNodeVersions": [
"1.17.12-gke.500",
"1.16.8-gke.12",
"1.15.12-gke.20",
"1.15.12-gke.17",
"1.15.12-gke.16",
"1.15.12-gke.13",
"1.15.12-gke.9",
"1.15.12-gke.6",
"1.15.12-gke.3",
"1.15.12-gke.2",
"1.15.11-gke.17",
"1.15.11-gke.15",
"1.15.11-gke.13",
"1.15.11-gke.12",
"1.15.11-gke.11",
"1.15.11-gke.9",
"1.15.11-gke.5",
"1.15.11-gke.3",
"1.15.11-gke.1",
"1.15.9-gke.26",
"1.15.8-gke.3",
"1.15.7-gke.23",
"1.15.4-gke.22",
"1.14.10-gke.0",
"1.14.9-gke.0"
]
}
Matching with a regex, sorting, or doing anything else like that purely inside jq is more tricky. The GNU sort command has a nice parameter, -V, which stands for version sorting, so here is a simple way to do this without any awk, field splitting in sort, or similar.
jq -r '.validNodeVersions[]' file.json | grep "^1\.15" | sort -V | tail -1
1.15.12-gke.20
jq does a simple selection of values here, grep filters them down to the 1.15 versions, and after sorting by version we take the last, i.e. highest, one.
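If you want to stay entirely within jq, a rough sketch (assuming every entry follows the major.minor.patch-gke.N pattern, which the source does not guarantee) is to turn each version into a numeric array and sort on that:
jq -r '[.validNodeVersions[] | select(startswith("1.15"))]
       | sort_by(sub("-gke\\."; ".") | split(".") | map(tonumber))
       | last' file.json
1.15.12-gke.20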
I'd like to use jq to format the output of some known keys in my objects more succinctly.
Sample object:
// test.json
[
{
"target": "some-string",
"datapoints": [
[
123,
456
],
[
789,
101112
]
]
}
]
I'd like to use JQ (with some incantation) to change this to put all the datapoints objects on a single line. E.g.
[
{
"target": "some-string",
"datapoints": [[ 123, 456 ], [ 789, 101112 ]]
}
]
I don't really know if JQ allows this. I searched around - and found custom formatters like https://www.npmjs.com/package/perfect-json which seem to do what I want. I'd prefer to have a portable incantation for this using jq alone (and/or with standard *nix tools).
Use a two-pass approach. In the first, stringify the field using special markers so that in the second pass, they can be removed.
Depending on your level of paranoia, this second pass could be very simple or quite complex. On the simple end of the spectrum, choose markers that simply will not occur elsewhere, perhaps "<q>…</q>", or some combination of non-ASCII characters. On the complex end of the spectrum, only remove the markers if they occur in the fields in which they are known to be markers.
Both passes could be accomplished with jq, along the lines of:
jq '.[].datapoints |= "<q>\(tojson)</q>"' |
jq -Rr 'sub("\"<q>(?<s>.*)</q>\""; .s)'
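For reference, applied to the sample test.json this should come out roughly as follows (tojson emits the arrays compactly, without the extra spaces shown in the desired output):
jq '.[].datapoints |= "<q>\(tojson)</q>"' test.json |
jq -Rr 'sub("\"<q>(?<s>.*)</q>\""; .s)'
[
  {
    "target": "some-string",
    "datapoints": [[123,456],[789,101112]]
  }
]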
Using jq and perl:
jq 'map(.datapoints |= "\u001b\(tojson)\u001b")' test.json |
perl -pe 's/"\\u001b(.*?)\\u001b"/$1/g'
How do I search for a word and, once it is found, save a specific value from the next line in a variable?
The JSON below is only a small part of the file.
Because this file's JSON structure is inconsistent and subject to change over time, this needs to be done with search tools like grep, sed, or awk.
However, the parameters below will always be the same:
search for the word next
get the line below it
extract everything after page_token=, excluding the closing "
store it in a variable to be used
test.txt:
"link": [
{
"relation": "search",
"url": "aaa/ww/rrrrrrrrr/aaaaaaaaa/ffffffff/ccccccc/dddd/?token=gggggggg3444"
},
{
"relation": "next",
"url": "aaa/ww/rrrrrrrrr/aaaaaaaaa/ffffffff/ccccccc/dddd/?&_page_token=121_%_#212absa23bababa121212121212121"
},
]
so the desired output in this case is:
PAGE_TOKEN="121_%_#212absa23bababa121212121212121"
my attempt:
PAGE_TOKEN=$(cat test.txt| grep "next" | sed 's/^.*: *//;q')
No luck.
This might work for you (GNU sed):
sed -En '/next/{n;s/.*(page_token=)([^"]*).*/\U\1\E"\2"/p}' file
This is essentially a filtering operation, hence the use of the -n option.
Find a line containing next, fetch the next line, format as required and print the result.
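To land the token directly in a shell variable, as the question asks, here is a hedged variant of the same idea (assuming the "next" entry always sits exactly one line above its url, as in the sample):
PAGE_TOKEN=$(sed -En '/"next"/{n;s/.*page_token=([^"]*).*/\1/p}' test.txt)
echo "$PAGE_TOKEN"
121_%_#212absa23bababa121212121212121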
Presuming your input is valid JSON, one option is to use:
cat test.json
[{
"relation": "search",
"url": "aaa/ww/rrrrrrrrr/aaaaaaaaa/ffffffff/ccccccc/dddd/?token=gggggggg3444"
},
{
"relation": "next",
"url": "aaa/ww/rrrrrrrrr/aaaaaaaaa/ffffffff/ccccccc/dddd/?&_page_token=121_%_#212absa23bababa121212121212121"
}
]
PAGE_TOKEN=$(cat test.json | jq -r '.[] | select(.relation=="next") | .url | gsub(".*=";"")')
echo "$PAGE_TOKEN"
121_%_#212absa23bababa121212121212121
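If the token itself could ever contain an = sign, gsub(".*=";"") would eat part of it; a hedged alternative captures everything after page_token= explicitly:
PAGE_TOKEN=$(jq -r '.[] | select(.relation=="next") | .url | capture("page_token=(?<token>.*)").token' test.json)
echo "$PAGE_TOKEN"
121_%_#212absa23bababa121212121212121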
I'm making a query to a REST API, and this is the result I got:
{ "meta": { "query_time": 0.004266858, "pagination": { "offset": 0, "limit": 00, "total": 4 }, "powered_by": "device-api", "trace_id": "foo" }, "resources": [ "foo/bar", "foo/bar/2", "foo/bar/3", "foo/bar/4" ], "errors": [] }
I only want to take the results from resources, like this:
"resources": [
"foo/bar",
"foo/bar/2",
"foo/bar/3",
"foo/bar/4"
],
Can someone share some knowledge? Thanks a lot!
PS: the entries in resources are random
Don't use grep or other regular expression tools to parse JSON. JSON is structured data and should be processed by a tool designed to read JSON. On the command line jq is a great tool for this purpose. There are many powerful JSON libraries written in other languages if jq isn't what you need.
Once you've extracted the data you care about, you can use the shuf utility to select random lines, e.g. shuf -n 5 would sample five random lines from the input.
With the JSON you've provided this appears to do what I think you want:
jq --raw-output '.resources[]' | shuf -n 2
You may need to tweak the jq syntax slightly if the real JSON has a different structure.
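For instance, an end-to-end sketch might look like the following; the URL is just a placeholder for whatever REST call you are making, and -n 2 picks two random entries:
curl -s "https://api.example.com/devices" | jq --raw-output '.resources[]' | shuf -n 2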
I have 2 files
file_one.json
{
"releases": [
{
"name": "bpm",
"version": "1.1.5"
},
{
"name": "haproxy",
"version": "9.8.0"
},
{
"name": "test",
"version": "10"
}
]
}
and file_two.json
{
"releases": [
{
"name": "bpm",
"version": "1.1.6"
},
{
"name": "haproxy",
"version": "9.8.1"
},
{
"name": "test",
"version": "10"
}
]
}
In file 2 the versions were changed, and I need to echo the new changes.
I have used the following command to see the changes:
diff -C 2 <(jq -S . file_one.json) <(jq -S . file_two.json)
But then I need to format the output to something like this.
I need to output text:
The new versions are:
bpm 1.1.6
haproxy 9.8.1
You may be able to use the following jq command:
jq --slurp -r 'map(.releases) | add
| group_by(.name)
| map(unique | select(length > 1) | max_by(.version))
| map("\(.name) : \(.version)") | join("\n")'
file_one.json file_two.json
It first merges the two releases arrays, groups the elements by name, deduplicates each resulting group, removes the groups with a single element (the versions that were identical between the two files), then maps each group to its greatest element (by version) and finally formats those for display.
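With the two sample files above, this should print something like:
bpm : 1.1.6
haproxy : 9.8.1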
A few particularities might make this solution incorrect for your use:
it doesn't only report version upgrades but also version downgrades; it always returns the greatest version, regardless of which file contains it.
the version comparison is alphabetic. That is okay with your sample, but it can fail for multi-digit versions (e.g. 1.1.5 is considered greater than 1.1.20 because 5 > 2). This could be fixed, as sketched below, but might not be problematic depending on your versioning scheme.
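If that matters, a hedged variant that compares versions numerically (assuming versions are purely numeric, dot-separated strings, as in your sample) only changes the max_by step:
jq --slurp -r 'map(.releases) | add
| group_by(.name)
| map(unique | select(length > 1) | max_by(.version | split(".") | map(tonumber)))
| map("\(.name) : \(.version)") | join("\n")'
file_one.json file_two.json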
Edit, following your updated request in the comments: the following jq command will output the versions that changed between the first file and the second. It handles downgrades nicely and somewhat handles products that appeared or disappeared in the second file (although in both cases it shows the product's single version followed by --> null, so you cannot tell whether it appeared or disappeared).
jq --slurp -r 'map(.releases) | add
| group_by(.name)
| map(select(.[0].version != .[1].version))
| map ("\(.[0].name) : \(.[0].version) --> \(.[1].version)")
| join("\n")' file_one.json file_two.json
When I run a command, I get a response like this:
{
"status": "available",
"managed": true,
"name":vdisk7,
"support":{
"status": "supported"
},
"storage_pool": "pfm9253_pfm9254_new",
"id": "ff10abad"-2bf-4ef3-9038-9ae7f18ea77c",
"size":100
},
and there are hundreds of these lists or dictionaries.
I want a command that does this sort of thing:
if name = "something",
get the id
Any links that would help me learn this sort of command would be highly appreciated.
I have tried
awk '{if ($2 == "something") print $0;}'
But I think the response is in JSON, so the column-wise awk formatting is not working.
Also, it's just a single command that I need to run, so I would prefer not to use any external library.
A JSON parser is better for this task
awk and sed are utilities for parsing line-oriented text, not JSON. What if your JSON formatting changes (for example, several values end up on one line)?
You should use any standard JSON parser out there, or a powerful scripting language such as PHP, Python, Ruby, etc.
I can provide an example of how to do it with Python.
What if I can't use a powerful scripting language?
If you are totally unable to use Python, there is the utility jq.
If you have a recent distro, jq may already be in the repositories (for example, Ubuntu 13.10 has it in its repos).
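For instance, assuming the command emits a JSON array of such objects (that array structure is an assumption, and vdisk7 is just an example name), a jq one-liner along these lines would pull out the id:
some_command | jq -r '.[] | select(.name == "vdisk7") | .id'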
I can use Python!
I would do that with a simple inline Python script.
For example, say we have some some_command that returns JSON as a result.
We need to get the value of data["name"].
Here we go:
some_command | python -c "import json, sys; print(json.load(sys.stdin)['name'])"
It will output vdisk7 in your case
For this to work, you need to be sure the JSON is fully valid.
If you have a list of JSON objects:
[
{
...
"name": "vdisk17"
...
},
{
...
"name": "vdisk18"
...
},
{
...
"name": "vdisk19"
...
},
...
]
You could use a list comprehension:
some_command | python -c "import json, sys; [sys.stdout.write(x['name'] + '\n') for x in json.load(sys.stdin)]"
It will output:
vdisk17
vdisk18
vdisk19
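To answer the original "if name = something, get the id" question directly, here is a hedged one-liner in the same style (again assuming the output is a JSON array of such objects, with vdisk7 as the example name):
some_command | python -c "import json, sys; print(next(x['id'] for x in json.load(sys.stdin) if x['name'] == 'vdisk7'))"
This prints the id of the first object whose name is vdisk7.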