I'd like to use jq to format the output of some known keys in my objects more succinctly.
Sample object:
// test.json
[
  {
    "target": "some-string",
    "datapoints": [
      [
        123,
        456
      ],
      [
        789,
        101112
      ]
    ]
  }
]
I'd like to use JQ (with some incantation) to change this to put all the datapoints objects on a single line. E.g.
[
  {
    "target": "some-string",
    "datapoints": [[ 123, 456 ], [ 789, 101112 ]]
  }
]
I don't really know if JQ allows this. I searched around - and found custom formatters like https://www.npmjs.com/package/perfect-json which seem to do what I want. I'd prefer to have a portable incantation for this using jq alone (and/or with standard *nix tools).
Use a two-pass approach. In the first, stringify the field using special markers so that in the second pass, they can be removed.
Depending on your level of paranoia, this second pass could be very simple or quite complex. On the simple end of the spectrum, choose markers that simply will not occur elsewhere, perhaps "<q>…</q>", or using some combination of non-ASCII characters. On the complex end of the spectrum, only remove the markers if they occur in the fields in which they are known to be markers.
Both passes could be accomplished with jq, along the lines of:
jq '.[].datapoints |= "<q>\(tojson)</q>"' test.json |
  jq -Rr 'sub("\"<q>(?<s>.*)</q>\""; .s)'
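On the sample file, this should produce something like the following (note that tojson emits the inner arrays without spaces):
[
  {
    "target": "some-string",
    "datapoints": [[123,456],[789,101112]]
  }
]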
Using jq and perl:
jq 'map(.datapoints |= "\u001b\(tojson)\u001b")' test.json |
  perl -pe 's/"\\u001b(.*?)\\u001b"/$1/g'
I'm seeing an "is not defined at <top-level>" error when calling jq like so:
jq ".Changes[0].ResourceRecordSet.Name = word-is-here.domain.com" someFile.json
The error repeats for each dash-separated word on the right-hand side of the assignment. The full error is like:
jq: error: word/0 is not defined at <top-level>, line 1:
.Changes[0].ResourceRecordSet.Name = word-is-here.domain.com
I've tried escaping the quotes in many different ways, but that didn't help. (What I mean by this is doing "'"'"-style weird stuff; I'm still learning bash, so I'm just throwing stuff at the wall until it sticks.)
EDIT:
So I'm trying to run this in a bash script, and both sides of the = sign come from variables, such as jq --arg value "$value" --arg key "$key" '$key = $value' "$path" (what I tried after a suggestion),
and got the error:
Invalid path expression with result ".Changes[0].ResourceRecor...
The json I'm using is as such:
{
  "Changes": [
    {
      "Action": "do something",
      "ResourceRecordSet": {
        "Name": "some name here to replace",
        ...
      }
    }
  ]
}
jq '.Changes[0].ResourceRecordSet.Name = "word-is-here.domain.com"' file.json
Quote the string you are assigning. Or pass it to jq via an argument:
jq --arg foo 'words-here' '.Changes[0].ResourceRecordSet.Name = $foo' file.json
For passing the path to the key you want as an argument, a suggestion from https://github.com/stedolan/jq/issues/1493 might work:
jq --argjson path '["Changes",0,"ResourceRecordSet","Name"]' \
--arg val 'word-is-here.domain.com' \
'getpath($path) = $val' file.json
The problem (or at least the obvious problem) here is evidently the string: word-is-here.domain.com, since jq is interpreting the dash ("-") as an operation ("minus").
Unfortunately, since you haven't given us many clues, it's not completely clear what specifically needs to be changed, but a reasonable guess is that word-is-here.domain.com is intended as a fixed string. If so, you would have to present it as a JSON string. So in a bash or bash-like environment, you could write:
jq '.Changes[0].ResourceRecordSet.Name = "word-is-here.domain.com"' someFile.json
Specifying the LHS path via a shell variable
If the LHS path must be specified by a shell variable, it should if possible be passed in as a JSON array, e.g. using the --argjson command-line option; one can then use an expression of the form setpath($path; $value) to update the path.
If for some reason a solution allowing the LHS to be specified as a jq path is preferred, then shell string-interpolation could be used, though as with any such interpolation, this should be done with care.
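A minimal sketch of the setpath variant, assuming the path is kept in a shell variable as a JSON array (variable names are illustrative):
path='["Changes",0,"ResourceRecordSet","Name"]'   # the LHS path, as a JSON array
value='word-is-here.domain.com'                   # the replacement string
jq --argjson path "$path" --arg value "$value" 'setpath($path; $value)' someFile.json
Because the path arrives via --argjson, it is never parsed as a jq program, so dashes and dots in the value cannot be misread as operators.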
Suppose I have some text file (json in this case):
{
  "data": [
    {
      "timestamp": 1577856103107
    },
    {
      "timestamp": 1577869991302
    }
  ]
}
And I want to replace a pattern (in this case a UNIX millisecond timestamp) with a more readable date format.
I'm trying with this:
$ sed -E 's/(.*)([0-9]{13})/echo "\1\\"$(date --date="@$((\2\/1000))" --iso-8601=seconds)\\""/e' example.json
{
  "data": [
    {
      timestamp: "2020-01-01T00:21:43-05:00"
    },
    {
      timestamp: "2020-01-01T04:13:11-05:00"
    }
  ]
}
This is somewhat OK, but I don't understand why the quotes around timestamp get lost.
This command works:
sed -E 's/(.*)"(timestamp)"(: )([0-9]{13})/echo "\1\\"\2\\"\3\\"$(date --date="#$((\4\/1000))" --iso-8601=seconds)"\\"/e' example.json
{
  "data": [
    {
      "timestamp": "2020-01-01T00:21:43-05:00"
    },
    {
      "timestamp": "2020-01-01T04:13:11-05:00"
    }
  ]
}
I also don't understand why I need double backslashes \\ to output a double quote " on the right side of this sed command.
Is there a better way (or tool) to solve this?
I'm on sed (GNU sed) 4.8 and zsh 5.8 (x86_64-pc-linux-gnu), thanks!
Is there a better way (or tool) to solve this?
Using sed to manipulate JSON is very crude. You can't parse JSON with regexes. I (strongly) suggest using JSON-aware tools, like jq.
jq '.data[].timestamp |= (. / 1000 | strftime("%Y-%m-%dT%H:%M:%SZ"))' example.json
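Against the sample file, and assuming jq 1.6 (whose strftime accepts a numeric epoch), this should print something along these lines; the instants are the same as in your output, rendered in UTC rather than -05:00:
{
  "data": [
    {
      "timestamp": "2020-01-01T05:21:43Z"
    },
    {
      "timestamp": "2020-01-01T09:13:11Z"
    }
  ]
}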
but I don't understand why the quotes around timestamp get lost.
The replacement:
echo "\1\\"$(date --date="@$((\2\/1000))" --iso-8601=seconds)\\""
is "substituted" by sed into:
echo ""timestamp": \"$(date --date="@$((1577856103107/1000))" --iso-8601=seconds)\""
This is then passed to the shell, and the quotes are re-evaluated according to shell quoting rules: the " that opens echo's argument is closed by the " just before timestamp (the two adjacent quotes "" form an empty string), so timestamp itself ends up unquoted, and the quotes that surrounded it in the input are eaten by the shell.
Matching (.*) is not actually useless here: with the e flag, the entire pattern space is executed as a shell command after the substitution, so the prefix has to be re-emitted by the echo. What you can do is restrict the substitution to the lines that contain a timestamp with an address:
sed -E '/"timestamp":/s/(.*)([0-9]{13})/echo "\1\\"$(date --date="@$((\2\/1000))" --iso-8601=seconds)\\""/e' example.json
why I need double backslashes \\ to output a double quote " on the right side of this sed command.
First, \\ is interpreted by sed into a single \.
$ echo a | sed 's/.*/single backslash: \\/'
single backslash: \
Then the result of the sed substitution is passed to the shell, where all shell parsing rules are applied once again.
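Putting the two rounds of evaluation together, a small demonstration of how \\ ends up printing a double quote:
$ echo a | sed 's/.*/echo "double quote: \\""/e'
double quote: "
sed turns \\ into \, leaving the command echo "double quote: \"" in the pattern space; the shell then turns \" into a literal quote when it executes it.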
I have a part of a JSON file which looks like this:
{
  "openstack": {
    "admin": {
      "username": "admin",
      "password": "password",
      "tenant_name": "test"
    },
and three environment variables defined like this
auth_url=VALUE1
region_name=VALUE2
endpoint_type=VALUE3
I want to insert 3 lines in the input file just after row 2, so that the output is
{
  "openstack": {
    "auth_url": VALUE1,
    "region_name": VALUE2,
    "endpoint_type": VALUE3,
    "admin": {
      "username": "admin",
      "password": "password",
      "tenant_name": "test"
    },
How can it be done using sed? I tried the below:
sed -e '3i/\t"auth_url":$auth_url,' -i account_2.json
But it not only adds an extra / at row 3, it also doesn't actually replace $auth_url with the environment variable.
You are misusing the i command. You have to put a backslash after it, not a slash.
Furthermore, the variable is not expanded since it is in single quotes. Try putting it in double quotes, like this:
sed "3i\ \"auth_url\":$AUTH," yourfile
I've read that the insert command wants whatever follows the backslash to be on a new line, which is not the case here, where we have everything on a single line. I guess it's a GNU sed extension that allows this.
To insert three lines, you can use this
sed "3i\ \"auth_url\":$SHELL\n \"auth_url\":$SHELL\n \"auth_url\":$SHELL" os
And it works well with commas too, since they have no special meaning:
sed "3i\ \"auth_url\":$SHELL,\n \"auth_url\":$SHELL,\n \"auth_url\":$SHELL,"
Variants of this question have been asked and answered before, but I find that my sed/grep/awk skills are far too rudimentary to work from those to a custom solution since I hardly ever work in shell scripts.
I have a rather large (100K+ lines) text file in which each line defines a GeoJSON object, each such object including a property called "county" (there are, all told, 100 different counties). Here's a snippet:
{"type": "Feature", "properties": {"county":"ALAMANCE", "vBLA": 0, "vWHI": 4, "vDEM": 0, "vREP": 2, "vUNA": 2, "vTOT": 4}, "geometry": {"type":"Polygon","coordinates":[[[-79.537429,35.843303],[-79.542428,35.843303],[-79.542428,35.848302],[-79.537429,35.848302],[-79.537429,35.843303]]]}},
{"type": "Feature", "properties": {"county":"NEW HANOVER", "vBLA": 0, "vWHI": 0, "vDEM": 0, "vREP": 0, "vUNA": 0, "vTOT": 0}, "geometry": {"type":"Polygon","coordinates":[[[-79.532429,35.843303],[-79.537428,35.843303],[-79.537428,35.848302],[-79.532429,35.848302],[-79.532429,35.843303]]]}},
{"type": "Feature", "properties": {"county":"ALAMANCE", "vBLA": 0, "vWHI": 0, "vDEM": 0, "vREP": 0, "vUNA": 0, "vTOT": 0}, "geometry": {"type":"Polygon","coordinates":[[[-79.527429,35.843303],[-79.532428,35.843303],[-79.532428,35.848302],[-79.527429,35.848302],[-79.527429,35.843303]]]}},
I need to split this into 100 separate files, each containing one county's GeoJSONs, and each named xxxx_bins_2016.json (where xxxx is the county's name). I'd also like the final character (comma) at the end of each such file to go away.
I'm doing this in Mac OSX, if that matters. I hope to learn a lot by studying any solutions you could suggest, so if you feel like taking the time to explain the 'why' as well as the 'what' that would be fantastic. Thanks!
EDITED to make clear that there are different county names, some of them two-word names.
jq can kind of do this; it can group the input and output one line of text per group. The shell then takes care of writing each line to an appropriately named file. jq itself doesn't really have the ability to open files for writing, which is what would let you do this in a single process.
jq -Rn -c '[inputs[:-1]|fromjson] | group_by(.properties.county)[]' tmp.json |
while IFS= read -r line; do
    county=$(jq -r '.[0].properties.county' <<< "$line")
    jq -r '.[]' <<< "$line" > "$county.txt"
done
[inputs[:-1]|fromjson] reads each line of your file as a string, strips the trailing comma, then parses the line as JSON and wraps the lines into a single array. The resulting array is sorted and grouped by county name, then written to standard output, one group per line.
The shell loop reads each line, extracts the county name from the first element of the group with a call to jq, then uses jq again to write each element of the group to the appropriate file, again one element per line.
(A quick look at https://github.com/stedolan/jq/issues doesn't appear to show any requests yet for an output function that would let you open and write to a file from inside a jq filter. I'm thinking of something like
jq -Rn '... | group_by(.properties.county) | output("\(.properties.county).txt")' tmp.json
without the need for the shell loop.)
If using string parsing rather than proper JSON parsing to extract the county name is acceptable - brittle in general, but would work in this simple case - consider Sam Tolton's GNU awk answer, which has the potential to be by far the simplest and fastest solution.
To complement chepner's excellent answer with a variation that focuses on performance:
jq -Rrn 'inputs[:-1] | fromjson | .properties.county + "|" + (.|tostring)' file |
  awk -F'|' '{ print $2 > ($1 "_bins_2016.json") }'
Shell loops are avoided altogether, which should speed up the operation.
The general idea is:
Use jq to trim the trailing , from each input line, interpret the trimmed string as JSON, extract the county name, then output the trimmed JSON strings prepended with the county name and a distinct separator, |.
Use an awk command to split each line into the prepended county name and the trimmed JSON string, which allows awk to easily construct the output filename and write the JSON string to it.
Note: The awk command keeps all output files open until the script has finished, which means that, in your case, 100 output files will be open simultaneously - a number that shouldn't be a problem, however.
In cases where it is a problem, you can use the following variation, in which jq first sorts the lines by county name, which then allows awk to immediately close the previous output file whenever the next county is reached in the input:
jq -Rrn '
  [inputs[:-1] | fromjson] | sort_by(.properties.county)[] |
    .properties.county + "|" + (.|tostring)
' file |
awk -F'|' '
  prevCounty != $1 { if (outFile) close(outFile); outFile = $1 "_bins_2016.json" }
  { print $2 > outFile; prevCounty = $1 }
'
A simpler version of chepner's answer:
while IFS= read -r line
do
    countyName=$(jq --raw-output '.properties.county' <<< "${line: : -1}")
    jq . <<< "${line: : -1}" >> "$countyName"_bins_2016.json
done < file
The idea is to filter out the county name with a jq filter after stripping the trailing , from each line of your input file. Then the stripped line is passed to jq once more (with the identity filter .) to append it, prettified, to the appropriate JSON file.
If you are on a relatively older version of bash (< 4.2), use "${line%?}" instead of "${line: : -1}".
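For instance, a quick illustration of both expansions on a hypothetical one-object line:
$ line='{"a":1},'
$ echo "${line: : -1}"   # strip the final character (bash 4.2+)
{"a":1}
$ echo "${line%?}"       # same result, works in older bash too
{"a":1}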
For example, with the change above, one of your county files becomes:
cat ALAMANCE_bins_2016.json
{
  "type": "Feature",
  "properties": {
    "county": "ALAMANCE",
    "vBLA": 0,
    "vWHI": 0,
    "vDEM": 0,
    "vREP": 0,
    "vUNA": 0,
    "vTOT": 0
  },
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [
          -79.527429,
          35.843303
        ],
        [
          -79.532428,
          35.843303
        ],
        [
          -79.532428,
          35.848302
        ],
        [
          -79.527429,
          35.848302
        ],
        [
          -79.527429,
          35.843303
        ]
      ]
    ]
  }
}
Note: The current solution could be performance-intensive, as reading a file line by line is an expensive operation, and so is invoking jq twice for every line.
This will do what you want minus getting rid of the last comma:-
gawk 'match($0, /"county":"([^"]+)/, array){ print > (array[1] "_bins_2016.json") }' INPUT_FILE
This will output files in the current directory with filenames in the format COUNTY NAME_bins_2016.json.
The script goes line by line and uses a regex to match the exact term "county":" followed by one or more characters that aren't a ". It captures those characters and then uses them as part of the filename to append the current line to.
To remove the trailing comma from all .json files in the current directory, you could use:-
sed -i '$ s/,$//' *.json
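Since the question mentions Mac OSX: BSD sed wants an explicit (possibly empty) suffix argument after -i, so the equivalent there would presumably be:
sed -i '' '$ s/,$//' *.json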
If you were certain that the last char was always a comma, a faster solution would be to use truncate:-
truncate -s-1 *.json
Last part taken from this answer: https://stackoverflow.com/a/40568723/1453798
Here is a quickie script that will do the job. It has the virtue of working on most systems without having to install any other tools.
IFS=$'\n'
counties=( $( sed 's/^.*"county":"//;s/".*$//' counties.txt | sort -u ) )
unset IFS
for i in "${!counties[@]}"
do
    county="${counties[$i]}"
    filename="$county".out.txt
    echo "'$filename'"
    grep "\"$county\"" counties.txt > "$filename"
done
The setting of IFS to \n allows the array elements to contain spaces. The sed command strips off all the text up to the start of the county name and all the text after it, and sort -u removes the duplicates (one county name is extracted per input line). The for loop is the form that iterates over the array indices. Finally, the grep command needs to have double quotes around the search string so that counties that are substrings of other counties don't accidentally get put into the wrong file.
See the Arrays section of the GNU Bash Reference Manual for more info.