I'm getting some values with a jq command like this:
curl xxxxxx | jq -r '.[] | ["\(.job.Name), \(.atrib.data)"] | @tsv' | column -t -s ","
It gives me:
AAAA PENDING
ZZZ FAILED BAD
What I want is to get a first field with a sequential number (1 ...) like this:
1 AAAA PENDING
2 ZZZ FAILED BAD
......
Do you know if it's possible? Thanks!
One way would be to start your pipeline with:
range(0;length) as $i | .[$i]
You can then use $i in the remainder of the program.
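For example, a sketch that reuses the field names from your command (the endpoint stays elided as xxxxxx):

curl xxxxxx | jq -r 'range(0;length) as $i | .[$i] | [$i+1, .job.Name, .atrib.data] | @tsv' | column -t

Since range starts at 0, $i+1 yields the 1-based counter shown in your desired output.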
Can anyone help me understand how I can print countryCode followed by connectionName and load with a percentage symbol, all on one line, nicely formatted, all using jq and not using sed, column, or any other external Unix command? I cannot seem to print anything other than the one column.
curl --silent "https://api.surfshark.com/v3/server/clusters" | jq -r -c "map(select(.countryCode == "US" and .load <= "99")) | sort_by(.load) | limit(20;.[]) | [.countryCode, .connectionName, .load] | (.[1])
Is this what you wanted? Note that the whole jq program is now in single quotes, so the double quotes around "US" reach jq intact, and .load is compared as a number rather than a string:
curl --silent "https://api.surfshark.com/v3/server/clusters" |
jq -r -c 'map(select(.countryCode == "US" and .load <= 99)) |
sort_by(.load) |
limit(20;.[]) |
"\(.countryCode) \(.connectionName) \(.load)%"'
I use jq to perform some filtering on a large JSON file using:
paths=$(jq '.paths | to_entries | map(select(.value[].tags | index("Filter"))) | from_entries' input.json)
and write the result to a new file using:
jq --argjson prefix "$paths" '.paths=$prefix' input.json > output.json
But this ^ fails as $paths has a very high line count (order of 100,000).
Error:
jq: Argument list too long
I also went through /usr/bin/jq: Argument list too long error bash; I understood it is the same problem there, but did not get a solution.
In general, assuming your jq allows it, you could use --argfile or --slurpfile, but in your case you can simply avoid the issue by invoking jq just once instead of twice. For example, to keep things clear:
( .paths | to_entries | map(select(.value[].tags | index("Filter"))) | from_entries ) as $prefix
| .paths=$prefix
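For completeness, the --slurpfile route mentioned above would look something like this sketch, with paths.json as a hypothetical intermediate file holding the output of your first command:

jq '.paths | to_entries | map(select(.value[].tags | index("Filter"))) | from_entries' input.json > paths.json
jq --slurpfile p paths.json '.paths = $p[0]' input.json > output.json

(--slurpfile reads the file's JSON values into an array, hence $p[0].)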
Even better, simply use |=:
.paths |= ( to_entries | map(select(.value[].tags | index("Filter"))) | from_entries)
or better yet, use with_entries.
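A minimal sketch of the with_entries variant, assuming the same input.json and the same "Filter" tag test as above:

jq '.paths |= with_entries(select(.value[].tags | index("Filter")))' input.json > output.json

with_entries(f) is shorthand for to_entries | map(f) | from_entries, so this is the same filter as before, run in a single invocation that writes output.json directly.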
I have the following command that I use to rewrite some maxscale output to be able to use it in other software:
maxadmin list servers | sed -r 's/[^a-z 0-9]//gi;/^\s*$/d;1,3d;' | awk '$1=$1' | cut -d ' ' -f 1,5 | sed -e 's/ /":"/g' | sed -e 's/\(.*\)/"\1"/' | tr '\n' ',' | sed 's/.$/}\n/' | sed 's/^/{/'
I am thinking this is way too complex for what I want to do, but I am not able to see a simpler version of it myself. What I want is to rewrite this (output of maxadmin list servers):
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
svr_node1 | 192.168.178.1 | 3306 | 0 | Master, Synced, Running
svr_node2 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
svr_node3 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------
Into this:
{"svrnode1":"Master","svrnode2":"Slave","svrnode3":"Slave"}
My command does a good job, but as I said, there should hopefully be a simpler way with fewer sed commands being run.
You can use awk, like this:
json.awk
BEGIN {
    printf "{"
}
# Match everything after line four, skipping the dashed
# separator lines and the trailing empty line (if any).
NR>4 && !/^([-]|$)/ {
    sub(/,/, "", $9)  # Remove trailing comma from the status field
    printf "%s\"%s\":\"%s\"", s, $1, $9
    s=","             # Set comma separator after the first iteration
}
END {
    print "}"
}
Run it like this:
maxadmin list servers | awk -f json.awk
Output:
{"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"}
In the comments, the question came up of how to achieve this without an extra json.awk file:
maxadmin list servers | awk 'BEGIN{printf"{"}NR>4&&!/^([-]|$)/{sub(/,/,"",$9);printf"%s\"%s\":\"%s\"",s,$1,$9;s=","}END{print"}"}'
Ugly, but works. ;)
If you want to put this into a shell script, consider a multiline version like this:
maxadmin list servers | awk '
BEGIN { printf "{" }
NR>4 && !/^([-]|$)/ {
    sub(/,/, "", $9)
    printf "%s\"%s\":\"%s\"", s, $1, $9
    s=","
}
END { print "}" }'
I want to grep logs for exceptions and identify unique ones with their counts.
Following is a sample input:
[msisdn:123][trxId:1234] | subscriptions | java.lang.Exception: this msidn NOT found
[msisdn:432][trxId:1212] | subscriptions | java.lang.Exception: this msidn NOT found
[msisdn:232][trxId:3232] | subscriptions | java.lang.Exception: this msidn NOT found
I used the following, and it shows duplicates with counts:
grep -i exception my.log | cut -d'|' -f2- | uniq -c
It shows results as expected, but I lose the first part, which contains the msisdn and trxId, so I then used the following:
grep -i exception my.log | sort -u -k 2,3 -t'|'
It shows unique results, each with a sample line, and based on that sample line (which contains the msisdn and trxId) I can troubleshoot.
Now how can I get the count with this last command?
This should work (uniq -f 1 skips the first blank-delimited field, so the varying [msisdn:...][trxId:...] prefix is ignored when comparing lines):
grep -i exception my.log | sort -k 2,3 -t'|' | uniq -c -f 1
Output:
3 [msisdn:123][trxId:1234] | subscriptions | java.lang.Exception: this msidn NOT found
I have a file that includes the following lines:
2 | blah | blah
1 | blah | blah
3 | blah
2 | blah | blah
1
1 | high | five
3 | five
I want to extract only the lines that have 3 columns (3 fields, 2 separators...).
I want to pipe them to the following commands:
| sort -nbsk1 | cut -d "|" -f1 | uniq -d
So after all I will get only :
2
1
Any suggestions?
It's part of a homework assignment; we are not allowed to use awk/sed and some other commands (grep/tr and what's written above can be used).
Thanks
Since you said grep is allowed:
grep -E '^([^|]*\|){2}[^|]*$' file
grep '.*|.*|.*' will select lines with at least three fields (two or more separators), so it can also match lines with more than three fields.
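Either way, combined with the pipeline from the question, a sketch of the full command (using the stricter -E pattern) would be:

grep -E '^([^|]*\|){2}[^|]*$' file | sort -nbsk1 | cut -d "|" -f1 | uniq -d

The grep keeps only the three-field lines, sort orders them numerically (and stably) by the first field, and uniq -d prints only the first-field values that occur more than once.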