Combining jq conditions using 'and' [duplicate]

How can I combine two jq conditions using 'and'?
test.json
{
"url": "https://<part1>.test/hai/<part1>",
"ParameterValue": "<value>"
}
jq --arg input1 "$arg1" --arg input2 "$arg2" \
'if .url | contains("<part1>")
then . + {"url" : ("https://" + $input1 + ".test/hai/" + $input1) }
else . end' and
'if .ParameterValue == "<value>"
then . + {"ParameterValue" : ($input2) }
else . end' test.json > test123.json

In jq, 'and' is a boolean (logical) operator; it combines truth values, not filters. What you want here is to chain the two updates into a pipeline using '|':
jq --arg input1 "$arg1" --arg input2 "$arg2" '
if .url | contains("<part1>")
then . + {url : ("https://" + $input1 + ".test/hai/" + $input1) }
else . end
| if .ParameterValue == "<value>"
then . + {ParameterValue : $input2 }
else . end' test.json > test123.json
Or maybe better:
def when(filter; action): if (filter?) // null then action else . end;
when(.url | contains("<part1>");
.url = ("https://" + $input1 + ".test/hai/" + $input1))
| when(.ParameterValue == "<value>";
.ParameterValue = $input2)
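For reference, a complete invocation using that when helper might look like this (a sketch; foo and bar merely stand in for whatever $arg1 and $arg2 hold in your shell):
arg1=foo
arg2=bar
jq --arg input1 "$arg1" --arg input2 "$arg2" '
def when(filter; action): if (filter?) // null then action else . end;
when(.url | contains("<part1>");
     .url = ("https://" + $input1 + ".test/hai/" + $input1))
| when(.ParameterValue == "<value>";
       .ParameterValue = $input2)' test.json > test123.json
# With the sample test.json above, test123.json would then hold
# url "https://foo.test/hai/foo" and ParameterValue "bar".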

Related

jq null unix timestamps parsing issue

I'm trying to parse a big json file which I receive using curl.
By following this answer I could parse the following file:
$ cat test.json
{"items": [{"id": 110, "date1": 1590590723, "date2": 1590110000, "name": "somename"}]}
using the following command:
TZ=Europe/Kyiv jq -r '.[] | .[] | .name + "; " + (.date1|strftime("%B %d %Y %I:%M%p")) + "; " + (.date2|strftime("%B %d %Y %I:%M%p"))' test.json
Output is:
somename; May 27 2020 02:45PM; May 22 2020 01:13AM
But when I try to parse the following file using the same command:
$ cat test2.json
{"items": [{"id": 110, "date1": 1590590723, "date2": null, "name": "somename"}]}
Output is:
jq: error (at test2.json:1): strftime/1 requires parsed datetime inputs
I could replace those null values with some valid values using sed before parsing, but maybe there is a better way to skip (ignore) those values, leaving nulls in the output:
somename; May 27 2020 02:45PM; null
You could tweak your jq program so that it reads:
def tod: if type=="number" then strftime("%B %d %Y %I:%M%p") else tostring end;
.[] | .[] | .name + "; " + (.date1|tod) + "; " + (.date2|tod)
An alternative would be:
def tod: (tonumber? | strftime("%B %d %Y %I:%M%p")) // null;
.[] | .[] | "\(.name); \(.date1|tod); \(.date2|tod)"
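Either helper can be dropped straight into the original command line; for example, with the first definition (a sketch against test2.json):
TZ=Europe/Kyiv jq -r '
def tod: if type=="number" then strftime("%B %d %Y %I:%M%p") else tostring end;
.[] | .[] | .name + "; " + (.date1|tod) + "; " + (.date2|tod)' test2.json
# somename; May 27 2020 02:45PM; null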

awk bash recursive brackets id sed tr

A server provides a list of asset IDs, separated by commas and enclosed in square brackets, after the date and a colon:
20160420084726:-
20160420085418:[111783178, 111557953, 111646835, 111413356, 111412662, 105618372, 111413557]
20160420085418:[111413432, 111633904, 111783198, 111792767, 111557948, 111413225, 111413281]
20160420085418:[111413432, 111633904, 111783198, 111792767, 111557948, 111413225, 111413281]
20160420085522:[111344871, 111394583, 111295547, 111379566, 111352520]
20160420090022:[111344871, 111394583, 111295547, 111379566, 111352520]
The format of the input log is:
timestamp:ads
Where:
timestamp is in the format YYYYMMDDhhmmss, and ads is a comma separated list of ad asset IDs surrounded by square brackets, or - if no ads were returned.
The first part of the task is to write a script that outputs, for each ten minute slice of the day:
Count of IDs that were returned
Count of unique IDs that were returned
The script should support a command-line parameter to select whether unique or total IDs should be reported.
Example output using the above log excerpt (in total mode):
20160420084:0
20160420085:26
20160420090:5
And in unique count mode it would give:
20160420084:0
20160420085:19
20160420090:5
I have tried this:
awk -F '[,:]' '
{
key = substr($1,1,11)"0"
count[key] += ($2 == "-" ? 0 : NF-1)
}
END {
PROCINFO["sorted_in"] = "#ind_num_asc"
for (key in count) print key, count[key]
}
' $LOGFILENAME | grep $DATE;
The scripts given so far fail in other scenarios, for example with this log file:
https://drive.google.com/file/d/1sXFvLyCH8gZrXiqf095MubyP7-sLVUXt/view?usp=sharing
The first few lines of the results should be:
nonunique:
20160420000:1
20160420001:11
20160420002:13
20160420003:16
20160420004:3
20160420005:3
20160420010:6
unique:
20160420000:1
20160420001:5
20160420002:5
20160420003:5
20160420004:3
20160420005:3
20160420010:4
$ cat tst.awk
BEGIN { FS="[]:[]+"; OFS=":" }
{
tot = unq = 0
time = substr($1,1,11)
if ( /,/ ) {
tot = split($2,tmp,/, ?/)
for ( i in tmp ) {
if ( !seen[time,tmp[i]]++ ) {
unq++
}
}
}
tots[time] += tot
unqs[time] += unq
}
END {
for (time in tots) {
print time, tots[time], unqs[time]
}
}
$ awk -f tst.awk file
20160420084:0:0
20160420085:26:19
20160420090:5:5
Massage to suit...
#!/bin/bash
# Reads the log on stdin; pass -u to count unique IDs per line instead of all IDs.
while read -r; do
dts=$( echo "$REPLY" | cut -d: -f1 )               # timestamp before the first ':'
ids=$( echo "$REPLY" | grep -o '\[.*\]' )          # bracketed ID list, if present
if [ $? -eq 0 ]; then
ids=$( echo "$ids" | tr -d '[] ' | tr ',' '\n' | sort $1 )   # one ID per line; 'sort -u' deduplicates
count=$( echo "$ids" | wc -l )
else
count=0                                            # '-' lines carry no IDs
fi
echo $dts: $count
done
Run like this:
./script.sh [-u] <input.txt
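If the same switch is wanted for the awk answer above, one option (a sketch; the counts.sh name and flag handling are assumptions, not part of either answer) is to print only the requested column of tst.awk's output:
#!/bin/bash
# counts.sh [-u] logfile : print timestamp:unique with -u, timestamp:total otherwise
col=2                                      # column 2 of tst.awk's output = total count
if [ "$1" = "-u" ]; then col=3; shift; fi  # column 3 = unique count
awk -f tst.awk "$1" | cut -d: -f1,$col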

Issue with jq when used in a Jenkins pipeline

I have a JSON object x and a variable requiredValue:
let requiredValue = 5;
let x = [
{"score":1},
{"score":2},
{"score":3}
];
Using jq, I first want to extract all score values and then check whether any score value in the object is greater than or equal to requiredValue.
Here is what I tried
jq -r '.[].score | join(",") | contains([requiredValue])'
For example, if requiredValue is 5 the jq query should return false, and if requiredValue is 2 it should return true.
If you split your inputs into two valid JSON documents, rather than having a JavaScript input which is not valid JSON, you could do the following:
requiredValue=5
x='
[{"score":1},
{"score":2},
{"score":3}]
'
jq -n \
--argjson requiredValue "$requiredValue" \
--argjson x "$x" '
[$x[].score | select(. >= $requiredValue)] | any
'
The following has been tested with bash:
requiredValue=5
x='[
{"score":1},
{"score":2},
{"score":3}
]'
jq --argjson requiredValue $requiredValue '
any(.[].score; . >= $requiredValue)' <<< "$x"
The result:
false
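For the other case the asker mentions, rerunning the same command with requiredValue=2 (and the same x) prints true:
requiredValue=2
jq --argjson requiredValue $requiredValue '
any(.[].score; . >= $requiredValue)' <<< "$x"
# true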

Mismatched input errors in ANTLR 3.5

I have problems with my grammar code in ANTLR 3.5. My input file is:
define tcpChannel ChannelName
define
listener
ListnerProperty
end
listener ;
define
execution
request with format RequestFormat,
response with format ResponseFormat,
error with format ErrorFormat ,
call servicename.executionname
end define execution ;
end
define channel ;
My lexer code is as follows:
lexer grammar ChannelLexer;
// ***************** lexer rules:
Define
:
'define'
;
Tcpchannel
:
'tcphannel'
;
Listener
:
'Listener'
;
End
:
'end'
;
Execution
:
' execution '
;
Request
:
' request '
;
With
:
' with '
;
Format
:
' format '
;
Response
:
' response '
;
Error
:
' error '
;
Call
:
' call '
;
Channel
:
' channel '
;
Dot
:
'.'
;
SColon
:
';'
;
Comma
:
','
;
Value
:
(
'a'..'z'
|'A'..'Z'
|'_'
)
(
'a'..'z'
|'A'..'Z'
|'_'
|Digit
)*
;
fragment
String
:
(
'"'
(
~(
'"'
| '\\'
)
| '\\'
(
'\\'
| '"'
)
)*
'"'
| '\''
(
~(
'\''
| '\\'
)
| '\\'
(
'\\'
| '\''
)
)*
'\''
)
{
setText(getText().substring(1, getText().length() - 1).replaceAll("\\\\(.)",
"$1"));
}
;
fragment
Digit
:
'0'..'9'
;
Space
:
(
' '
| '\t'
| '\r'
| '\n'
| '\u000C'
)
{
skip();
}
;
My parser code is:
parser grammar ChannelParser;
options
{
// antlr will generate java lexer and parser
language = Java;
// generated parser should create abstract syntax tree
output = AST;
}
// ***************** parser rules:
//our grammar accepts only salutation followed by an end symbol
expression
:
tcpChannelDefinition listenerDefinition executionDefintion endchannel
;
tcpChannelDefinition
:
Define Tcpchannel channelName
;
channelName
:
i= Value
{
$i.setText("CHANNEL_NAME#" + $i.text);
}
;
listenerDefinition
:
Define Listener listenerProperty endListener
;
listenerProperty
:
i=Value
{
$i.setText("PROPERTY_VALUE#" + $i.text);
}
;
endListener
:
End Listener SColon
;
executionDefintion
:
Define Execution execution
;
execution
:
Request With Format requestValue Comma
Response With Format responseValue Comma
Error With Format errorValue Comma
Call servicename Dot executionname
;
requestValue
:
i=Value
{
$i.setText("REQUEST_FORMAT#" + $i.text);
}
;
responseValue
:
i=Value
{
$i.setText("RESPONSE_FORMAT#" + $i.text);
}
;
errorValue
:
i=Value
{
$i.setText("ERROR_FORMAT#" + $i.text);
}
;
servicename
:
i=Value
{
$i.setText("SERVICE_NAME#" + $i.text);
}
;
executionname
:
i=Value
{
$i.setText("OPERATION_NAME#" + $i.text);
}
;
endexecution
:
End Define Execution SColon
;
endchannel
:
End Channel SColon
;
I'm getting errors like missing Tcpchannel at 'tcpChannel' and extraneous input 'ChannelName' expecting Define. How can I correct them? Please help, ASAP.

Why does this //ID pass but //DEF fail?

I'm experimenting with XPath using the grammar provided in the test suite and am having a problem: the path //ID is identified, but //DEF is not found. An IllegalArgumentException is thrown: "DEF at index 2 isn't a valid token name". Why is //ID matched, but //DEF not?
String exprGrammar = "grammar Expr;\n" +
"prog: func+ ;\n" +
"func: DEF ID '(' arg (',' arg)* ')' body ;\n" +
"body: '{' stat+ '}' ;\n" +
"arg : ID ;\n" +
"stat: expr ';' # printExpr\n" +
" | ID '=' expr ';' # assign\n" +
" | 'return' expr ';' # ret\n" +
" | ';' # blank\n" +
" ;\n" +
"expr: expr ('*'|'/') expr # MulDiv\n" +
" | expr ('+'|'-') expr # AddSub\n" +
" | primary # prim\n" +
" ;\n" +
"primary" +
" : INT # int\n" +
" | ID # id\n" +
" | '(' expr ')' # parens\n" +
" ;" +
"\n" +
"MUL : '*' ; // assigns token name to '*' used above in grammar\n" +
"DIV : '/' ;\n" +
"ADD : '+' ;\n" +
"SUB : '-' ;\n" +
"RETURN : 'return' ;\n" +
"DEF: 'def';\n" +
"ID : [a-zA-Z]+ ; // match identifiers\n" +
"INT : [0-9]+ ; // match integers\n" +
"NEWLINE:'\\r'? '\\n' -> skip; // return newlines to parser (is end-statement signal)\n" +
"WS : [ \\t]+ -> skip ; // toss out whitespace\n";
String SAMPLE_PROGRAM =
"def f(x,y) { x = 3+4; y; ; }\n" +
"def g(x) { return 1+2*x; }\n";
Grammar g2 = new Grammar(exprGrammar);
LexerInterpreter g2LexerInterpreter = g2.createLexerInterpreter(new ANTLRInputStream(SAMPLE_PROGRAM));
CommonTokenStream tokens = new CommonTokenStream(g2LexerInterpreter);
ParserInterpreter parser = g2.createParserInterpreter(tokens);
parser.setBuildParseTree(true);
ParseTree tree = parser.parse(g2.rules.get("prog").index);
String xpath = "//DEF";
for (ParseTree t : XPath.findAll(tree, xpath, parser) ) {
System.out.println(t.getSourceInterval());
}
When I run your code, the following gets printed:
0..0
18..18
In other words: it works for me ;)
This XPath tree pattern matching is all rather new, so my guess is that you've stumbled upon a bug that has been fixed. I'm using ANTLR version 4.2.2.
