I have this Camel route:
from("direct:myRoute")
.id("myRoute")
.setHeader("accept", constant("application/json"))
.setHeader("Cache-Control", constant("no-cache"))
.setHeader("content-Type", constant("application/json"))
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.setHeader("ID",constant("0072168580"))
.removeHeader(Exchange.HTTP_PATH)
.removeHeader("CamelHttp*")
.setBody(simple("${null}"))
.streamCaching()
.to("http4" + URL)
.to("jolt:customerSpec.json?inputType=JsonString&outputType=JsonString&contentCache=true")
.log("Before: ${body}")
.filter()
.jsonpath("$.[?(@.customerId == '${header.ID}')]")
.log("After: ${body}");
The service I consume through http4 returns a response that is transformed with jolt; no problem so far. The JSON transformation result is:
[
{
"customerId": "0072168580",
"documentId": "IDO"
},
{
"customerId": "0072168580",
"documentId": "ID2"
},
{
"customerId": "0072168580",
"documentId": "CDO"
},
{
"customerId": "0072172460",
"documentId": "IDO"
},
{
"customerId": "0072172460",
"documentId": "ID2"
},
{
"customerId": "0072197658",
"documentId": "IDO"
},
{
"customerId": "0072197658",
"documentId": "ID2"
},
{
"customerId": "0072197658",
"documentId": "CDO"
}
]
The log after transformation shows:
INFO myRoute - Before: [{"customerId": "0072168580","documentId": "IDO"},{"customerId": "0072168580","documentId": "ID2"},{"customerId": "0072168580","documentId": "CDO"},{"customerId": "0072172460","documentId": "IDO"},{"customerId": "0072172460","documentId": "ID2"},{"customerId": "0072197658","documentId": "IDO"},{"customerId": "0072197658","documentId": "ID2"},{"customerId": "0072197658","documentId": "CDO"}]
Then I want to filter this response by customerId; I am setting a value in a header to do it:
.jsonpath("$.[?(@.customerId == '${header.ID}')]")
Apparently the jsonpath expression is OK, because the log shows there were elements that met the filtering criteria:
...
[main] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: @['customerId']
[main] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: @['customerId']
[main] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: @['customerId']
[main] DEBUG org.apache.camel.processor.FilterProcessor - Filter matches: true for exchange: Exchange[ID-XYZ-1529020843413-0-1]
However, the log after filtering shows me the same JSON, without filtering it:
INFO myRoute - After: [{"customerId": "0072168580","documentId": "IDO"},{"customerId": "0072168580","documentId": "ID2"},{"customerId": "0072168580","documentId": "CDO"},{"customerId": "0072172460","documentId": "IDO"},{"customerId": "0072172460","documentId": "ID2"},{"customerId": "0072197658","documentId": "IDO"},{"customerId": "0072197658","documentId": "ID2"},{"customerId": "0072197658","documentId": "CDO"}]
I have tested the filter criteria in online tools like http://jsonpath.com/ and it works there.
What could be wrong?
Thanks a lot.
I think you misunderstand the meaning of the Filter EIP: it filters a message according to a predicate, so in your case, as the content of the exchange matches the jsonpath predicate, the whole message flowed through to the next step.
There are different ways to achieve what you want, e.g.:
by using the Split EIP and then filtering out what you need
by using the Message Translator EIP (a sketch follows below)
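For the Message Translator option, a minimal sketch could look like the one below; it assumes Jackson is on the classpath, that the "ID" header holds the customerId to keep, and the endpoint/class names are illustrative. The processor rewrites the body so that only the matching array elements remain, which is what the Filter EIP alone will not do:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.camel.builder.RouteBuilder;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FilterByCustomerIdRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:filterByCustomerId")
            .process(exchange -> {
                // the "ID" header carries the customerId we want to keep
                String id = exchange.getIn().getHeader("ID", String.class);
                String json = exchange.getIn().getBody(String.class);

                // the jolt output is a JSON array of objects, so read it into a list of maps
                ObjectMapper mapper = new ObjectMapper();
                List<Map<String, Object>> items =
                        mapper.readValue(json, new TypeReference<List<Map<String, Object>>>() {});

                // keep only the entries whose customerId matches the header value
                List<Map<String, Object>> kept = items.stream()
                        .filter(m -> id.equals(m.get("customerId")))
                        .collect(Collectors.toList());

                exchange.getIn().setBody(mapper.writeValueAsString(kept));
            })
            .log("After: ${body}");
    }
}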
Related
I want to consume a GraphQL API.
I know we need to use an HTTP Requester to call GraphQL.
I need some info on forming the mutation request using DataWeave (DWL).
I was trying to hit this service https://www.predic8.de/fruit-shop-graphql
using the below:
%dw 2.0
output application/json
---
{
"query": "mutation(\$input:addCategoryInput!) { addCategory(input:\$input) { name products { name}} }",
"variables": {
"input": {
"id": 6,
"name": "Green Fruits",
"products": 8
}
}
}
it's throwing a bad request.
But when using the below:
%dw 2.0
output application/json
---
{
"query": "mutation { addCategory(id: 6, name: \"Green Fruits\", products: 8) { name products { name } }}"
}
it's working.
I want to use the above format. Are both not valid requests?
Please share your knowledge or guide me to the right blog to refer to.
output application/json
---
{
query: "mutation(\$id:Int!,\$name:String!,\$products:[Int]!) { addCategory(id:\$id, name:\$name, products:\$products) { name products { name } } }",
variables: {
id: 6,
name: "Green Fruits",
products: [8]
}
}
Your issue would appear to be with your GraphQL rather than the DataWeave: products is defined as [Int]! in the schema, so it has to be passed as a list of integers, and there is no addCategoryInput input type defined anywhere in the schema; addCategory expects the individual arguments.
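If you want to sanity-check the mutation outside Mule first, you could post the equivalent JSON body directly with curl (a rough sketch; the quoting is for a Unix shell):

curl -X POST https://www.predic8.de/fruit-shop-graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation($id: Int!, $name: String!, $products: [Int]!) { addCategory(id: $id, name: $name, products: $products) { name products { name } } }", "variables": {"id": 6, "name": "Green Fruits", "products": [8]}}'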
I have set up a SPARQL endpoint on my machine using D2RQ (details at http://d2rq.org/getting-started). It exposes a standard chinook.db at my http://localhost:2020/sparql endpoint; I have tested it and it's returning data.
I have now created the following two configuration files, as described in the HyperGraphQL documentation:
1. chinook.json
{
"name": "chinook-hgql",
"schema": "chinook.graphql",
"server": {
"port": 8889,
"graphql": "/graphql",
"graphiql": "/graphiql"
},
"services": [
{
"id": "chinook-sparql",
"type": "SPARQLEndpointService",
"url": "http://localhost:2020/sparql",
"graph": "",
"user": "",
"password": ""
}
]
}
2. chinook.graphql
type __Context {
Album: _ @href(iri: "http://localhost:2020/ALBUMS")
label: _ @href(iri: "http://www.w3.org/2000/01/rdf-schema#label")
comment: _ @href(iri: "http://www.w3.org/2000/01/rdf-schema#comment")
albumId: _ @href(iri: "http://localhost:2020/vocab/ALBUMS_ALBUMID")
title: _ @href(iri: "http://localhost:2020/vocab/ALBUMS_TITLE")
artistId: _ @href(iri: "http://localhost:2020/vocab/ARTISTS_ARTISTID")
}
type Album @service(id:"chinook-sparql") {
albumId: [String] @service(id:"chinook-sparql")
}
I am starting the service using the following command:
java -jar hypergraphql-1.0.3-exe.jar --config /Kiran/Source/HyperGraphQL/chinook/chinook.json
Now when I open up http://localhost:8889/graphiql and provide the following query:
{
Album_GET {
albumId
}
}
the result shows up as:
{
"extensions": {},
"data": {
"#context": {
"_type": "#type",
"albumId": "http://localhost:2020/vocab/ALBUMS_ALBUMID",
"_id": "#id",
"Album_GET": "http://hypergraphql.org/query/Album_GET"
},
"Album_GET": []
},
"errors": []
}
Can someone tell me why it's not returning data?
Also, I see there is no logging on the HyperGraphQL console to show what it's doing or what SPARQL query is being generated, because the logging level is set to WARN, I think. As I am new to Java, can someone please tell me how to change the logging level to DEBUG in Log4j? I see it prints this when the service starts:
log4j: reset attribute= "false".
log4j: Threshold ="null".
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [org.hypergraphql] additivity to [false].
log4j: Level value for org.hypergraphql is [INFO].
log4j: org.hypergraphql level set to INFO
log4j: Class name: [org.apache.log4j.ConsoleAppender]
log4j: Parsing layout of class: "org.apache.log4j.PatternLayout"
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n].
log4j: Adding appender named [console] to category [org.hypergraphql].
log4j: Level value for root is [WARN].
log4j: root level set to WARN
log4j: Adding appender named [console] to category [root].
Update 1:
I figured out that the iri was missing part of the namespace in Album: _ @href(iri: "http://localhost:2020/vocab/ALBUMS"); as a result, the query was getting messed up.
Now the question remains: how can I raise the logging to DEBUG level in Log4j so that I can see what is going on inside HyperGraphQL?
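One way to get DEBUG output (a sketch, assuming the jar uses plain Log4j 1.x, as the startup output suggests) is to point the JVM at your own Log4j configuration via the log4j.configuration system property and raise the levels there; the category and appender names below simply mirror the startup output, and the file name is illustrative:

# log4j-debug.properties
log4j.rootLogger=DEBUG, console
log4j.logger.org.hypergraphql=DEBUG, console
log4j.additivity.org.hypergraphql=false
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

Then start the service with that file on the command line:

java -Dlog4j.configuration=file:/Kiran/Source/HyperGraphQL/chinook/log4j-debug.properties -jar hypergraphql-1.0.3-exe.jar --config /Kiran/Source/HyperGraphQL/chinook/chinook.json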
I am having some serious problems with dissecting the below text blob in my ELK stack.
This is the field -
INFO [2019-06-20 10:37:42,734]
com.something.something.something.information.core.LoggingPiracyReporter:
Informational request: ip_address="1.1.1.1" domain_name="domain.com"
some_random_id="HrmwldM4DQNXoQF3AnYosJ0Mtig="
random_id_2="Isl/eC4ERnoLVEBMXYtWeMjwqkSKA2MPSsDnGHe4EzE=" number=1000
timestamp=1561027064 valid_token_present=true everything_ok=true
[Http/1.1] [8.8.8.8, 8.8.8.8, 8.8.8.8]
I have the below currently -
dissect { mapping => { "message" => '%{} ip_address="%{ip}" domain_name="%{name}" some_random_id="%{some_random_id}" random_id_2="%{random_id_2}" number="%{number}"%{}' } }
It seems to be breaking on the number field; if I remove the number it all works fine (albeit with a warning, but it shows the fields in my Kibana).
Can anyone suggest a way of getting the IP address/domain, some_random_id/random_id_2, as well as the [Http/1.1] block?
The quotes around %{number} in your mapping aren't present in the log you provided, which is what breaks your filter.
With this configuration:
dissect {
mapping => {
"message" => '%{} ip_address="%{ip}" domain_name="%{name}" some_random_id="%{some_random_id}" random_id_2="%{random_id_2}" number=%{number} timestamp=%{timestamp} valid_token_present=%{valid} everything_ok=%{ok} [%{http}]'
}
}
I'm getting this result:
{
"ok": "true",
"random_id_2": "Isl/eC4ERnoLVEBMXYtWeMjwqkSKA2MPSsDnGHe4EzE=",
"message": "INFO [2019-06-20 10:37:42,734] com.something.something.something.information.core.LoggingPiracyReporter: Informational request: ip_address=\"1.1.1.1\" domain_name=\"domain.com\" some_random_id=\"HrmwldM4DQNXoQF3AnYosJ0Mtig=\" random_id_2=\"Isl/eC4ERnoLVEBMXYtWeMjwqkSKA2MPSsDnGHe4EzE=\" number=1000 timestamp=1561027064 valid_token_present=true everything_ok=true [Http/1.1] [8.8.8.8, 8.8.8.8, 8.8.8.8]\r",
"ip": "1.1.1.1",
"http": "Http/1.1",
"name": "domain.com",
"valid": "true",
"some_random_id": "HrmwldM4DQNXoQF3AnYosJ0Mtig=",
"timestamp": "1561027064",
"number": "1000"
}
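If you also want the trailing list of addresses after the [Http/1.1] block, the same mapping can be extended with one more field; this is an untested sketch and proxy_ips is just an illustrative field name:

dissect {
  mapping => {
    "message" => '%{} ip_address="%{ip}" domain_name="%{name}" some_random_id="%{some_random_id}" random_id_2="%{random_id_2}" number=%{number} timestamp=%{timestamp} valid_token_present=%{valid} everything_ok=%{ok} [%{http}] [%{proxy_ips}]'
  }
}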
I use a match query to search the field "syslog_5424":
{
"query":{
"filtered":{
"query":{"match":{"syslog5424_app":"e1c28ca3-dc7e-4425-ba14-7778f126bdd6"}}
}
}
}
Here is the query result:
{
took: 23,
timed_out: false,
_shards: {
total: 45,
successful: 29,
failed: 0
},
hits: {
total: 8340,
max_score: 17.623652,
hits: [
{
_index: "logstash-2014.12.16",
_type: "applog",
_id: "AUpTBuwKsotKslj7c27d",
_score: 17.623652,
_source: {
message: "132 <14>1 2014-12-16T12:16:09.889089+00:00 loggregator e1c28ca3-dc7e-4425-ba14-7778f126bdd6 [App/0] - - Get the platform's MBean server",
@version: "1",
@timestamp: "2014-12-16T12:16:10.127Z",
host: "9.91.32.178:33128",
type: "applog",
syslog5424_pri: "14",
syslog5424_ver: "1",
syslog5424_ts: "2014-12-16T12:16:09.889089+00:00",
syslog5424_host: "loggregator",
syslog5424_app: "e1c28ca3-dc7e-4425-ba14-7778f126bdd6",
syslog5424_proc: "[App/0]",
syslog5424_msg: "Get the platform's MBean server",
syslog_severity_code: 5,
syslog_facility_code: 1,
syslog_facility: "user-level",
syslog_severity: "notice",
@source_host: "%{syslog_hostname}",
@message: "%{syslog_message}"
}
},
...
But when I change the "match" to "term", I get nothing. The content of the field syslog5424_app is exactly "e1c28ca3-dc7e-4425-ba14-7778f126bdd6", but I can't find it using "term". Any kind of advice would be good.
{
"query":{
"filtered":{
"query":{"term":{"syslog5424_app":"e1c28ca3-dc7e-4425-ba14-7778f126bdd6"}}
}
}
}
What analyser are you using on the field syslog_5424?
If it's the standard analyser, then the data is probably being broken down into search terms.
e.g.
e1c28ca3-dc7e-4425-ba14-7778f126bdd6
is broken down into:
e1c28ca3
dc7e
4425
ba14
7778f126bdd6
When you use a match query, your search string will also be broken down, so a match is made.
However, when you use a term query, the search string won't be analysed: you are looking for e1c28ca3-dc7e-4425-ba14-7778f126bdd6 among the five individual terms, so it's not going to match.
So my recommendation would be to update your mapping to use not_analyzed; you wouldn't normally search for part of a UUID, so turn off analysis for this field entirely (a sketch follows below).
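For example, something along these lines in an index template could work (a sketch for Elasticsearch 1.x, which the filtered query suggests; an existing field's mapping cannot be changed in place, so this only takes effect for newly created logstash-* indices, and the template name is illustrative):

PUT _template/syslog5424_not_analyzed
{
  "template": "logstash-*",
  "mappings": {
    "applog": {
      "properties": {
        "syslog5424_app": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

Alternatively, if your indices were created with Logstash's default Elasticsearch template, there may already be a not_analyzed sub-field such as syslog5424_app.raw that a term query can target without any mapping change.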
I am new to Logstash!
I configured it and everything is working fine, so far.
My log lines come in as:
2014-04-27 16:24:43 DEBUG b45e66 T+561 10.31.166.155 /v1/XXX<!session> XXX requested for category_ids: only_pro: XXX_ids:14525
If I use the following conf file:
input { file { path => "/logs/*_log" }} output { elasticsearch { host => localhost } }
It will place the following in Elasticsearch:
{
_index: "logstash-2014.04.28",
_type: "logs",
_id: "WIoUbIvCQOqnz4tMZzMohg",
_score: 1,
_source: {
message: "2014-04-27 16:24:43 DEBUG b45e66 T+561 10.31.166.155 This is my log !",
@version: "1",
@timestamp: "2014-04-28T14:25:52.165Z",
host: "MYCOMPUTER",
path: "\logs\xxx_app.log"
}
}
How do I break up the string in my log so the entire text won't end up in _source.message?
E.g. I wish I could parse it into something like:
{
_index: "logstash-2014.04.28",
_type: "logs",
_id: "WIoUbIvCQOqnz4tMZzMohg",
_score: 1,
_source: {
logLevel: "DEBUG",
messageId: "b45e66",
sendFrom: "10.31.166.155",
logTimestamp: "2014-04-27 16:24:43",
message: "This is my log !",
@version: "1",
@timestamp: "2014-04-28T14:25:52.165Z",
host: "MYCOMPUTER",
path: "\logs\xxx_app.log"
}
}
You need to parse it through a filter, e.g. the grok filter. This can be quite tricky, so be patient and try, try, try. And have a look at the predefined patterns, too.
A start for your message would be:
%{DATESTAMP} %{WORD:logLevel} %{WORD:messageId} %{GREEDYDATA:someString} %{IP}
The Grok Debugger is an extremely helpful tool here.
When done, your config should look like:
input {
stdin {}
}
filter {
grok {
match => { "message" => "%{DATESTAMP} %{WORD:logLevel} %{WORD:messageId} %{GREEDYDATA:someString} %{IP}" }
}
}
output {
elasticsearch { host => localhost }
}
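To get field names closer to the desired output in the question, a fuller pattern might look like the following untested sketch (the thread field for the T+561 token and the logMessage name are illustrative; logMessage is used instead of message to avoid ending up with two values in the message field, and can be renamed afterwards with a mutate filter):

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logTimestamp} %{LOGLEVEL:logLevel} %{WORD:messageId} %{NOTSPACE:thread} %{IP:sendFrom} %{GREEDYDATA:logMessage}" }
  }
}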