Spring Data MongoDB - custom query [duplicate]

This question already has answers here: Query with sort() and limit() in Spring Repository interface.
I want to store trace info like below:
{
  "timestamp": 1394343677415,
  "info": {
    "method": "GET",
    "path": "/trace",
    "headers": {
      "request": {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Connection": "keep-alive",
        "Accept-Encoding": "gzip, deflate",
        "User-Agent": "Mozilla/5.0 Gecko/Firefox",
        "Accept-Language": "en-US,en;q=0.5",
        "Cookie": "_ga=GA1.1.827067509.1390890128; ...",
        "Authorization": "Basic ...",
        "Host": "localhost:8080"
      },
      "response": {
        "Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
        "X-Application-Context": "application:8080",
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
}
My @Document entity extends HashMap.
Now I need to write a custom query with pagination.
In the Mongo shell I would write:
db.traceInfo.find({"headers.response.status": "404"}).limit(n);
and it works, but I don't know how to express this query as a @Query on a Spring MongoRepository method. How can I do that?

This part is easy, but the limit keyword is not supported by spring-data-mongodb query derivation; check the list of supported repository query keywords in the reference: https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repository-query-keywords
Possible solution:
@Query("{ 'headers.response.status': ?0 }")
List<T> findByGivenStatus(String status);
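To also get the limit(n) part of the shell query, a common approach is to pass a Pageable as the last parameter of the repository method. Below is a minimal sketch, assuming a hypothetical TraceInfo document class and repository name (neither appears in the original post):

import java.util.List;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

// TraceInfo is a hypothetical @Document class standing in for the HashMap-based entity
public interface TraceInfoRepository extends MongoRepository<TraceInfo, String> {

    // ?0 refers to the first method parameter; the Pageable caps (and pages) the result set
    @Query("{ 'headers.response.status': ?0 }")
    List<TraceInfo> findByStatus(String status, Pageable pageable);
}

Calling findByStatus("404", PageRequest.of(0, n)) then behaves like .limit(n) in the shell (on older Spring Data versions use new PageRequest(0, n) instead).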

Related

How do I work with two endpoints using the JetBrains GraphQL plugin?

I'm currently using something like the following in PyCharm with the GraphQL plugin for JetBrains IDEs to work with two different GraphQL endpoints (a local one and a remote one) that I switch between in my work. The consequence is that I need to manually overwrite the schema file every time I switch between them.
Is there a way to do this with a different schema file for each endpoint? What is the correct idiom for working with (and switching between) two endpoints?
{
  "name": "My Schema",
  "schemaPath": "_schema.graphql",
  "extensions": {
    "endpoints": {
      "Local GraphQL Endpoint": {
        "url": "http://localhost:5000",
        "headers": {
          "user-agent": "JS GraphQL"
        },
        "introspect": true
      },
      "Remote GraphQL Endpoint": {
        "url": "http://my.remote.io",
        "headers": {
          "user-agent": "JS GraphQL"
        },
        "introspect": true
      }
    }
  }
}
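One possible arrangement (a sketch only; the directory and file names are assumptions, so verify the behaviour against your plugin version) is to keep a separate .graphqlconfig per directory, each with its own schemaPath and a single endpoint, so the plugin scopes each schema to the files under that directory and introspection writes to the matching schema file:

local/.graphqlconfig
{
  "name": "Local Schema",
  "schemaPath": "schema.local.graphql",
  "extensions": {
    "endpoints": {
      "Local GraphQL Endpoint": {
        "url": "http://localhost:5000",
        "headers": {
          "user-agent": "JS GraphQL"
        },
        "introspect": true
      }
    }
  }
}

A second file, e.g. remote/.graphqlconfig, would look the same but point at schema.remote.graphql and http://my.remote.io, so switching endpoints no longer overwrites a shared schema file.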

Overlay on map not drawing properly, is this a cache issue?

The site in question is multi.reindicator.com
We have market and neighborhood overlays using Carto, but many users experience an issue where the overlays don't show up properly. This is sometimes resolved when clearing browser cache, but a lot of our members are older and would never be able to figure out how to do that on their own.
The issue is, I can't figure out how to resolve this problem. I can't find anything in the code that appears to be causing it, so I feel like it may just be the way Carto handles our data.
You can see an example map error here: https://i.stack.imgur.com/CeBIg.png
Original map looks like this: https://i.stack.imgur.com/dXpeI.png
I can't think of anything else to try. All browsers experience this issue to some extent, and clearing the cache does resolve it most of the time, but the browser needs to be quit and reopened.
The site uses Mapbox for the basemap and Carto for the overlays. It usually works well, but often the overlays are only partially rendered. According to the network tab in the inspector, some requests come back with a "Bad Request" status.
Here is a part of the har file:
{
"startedDateTime": "2019-08-20T17:13:04.505Z",
"time": 121.74399999639718,
"request": {
"method": "GET",
"url": "https://cartocdn-gusc.global.ssl.fastly.net/reindicator/api/v1/map/reindicator#3b077a09#c3615dde7e9d26d95c813667bde38147:1545364448004/1/13/1878/3146.png",
"httpVersion": "http/1.1",
"headers": [
{
"name": "Sec-Fetch-Mode",
"value": "no-cors"
},
{
"name": "Referer",
"value": "https://multi.reindicator.com/"
},
{
"name": "DNT",
"value": "1"
},
{
"name": "User-Agent",
"value": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"
}
],
"queryString": [],
"cookies": [],
"headersSize": -1,
"bodySize": 0
},
"response": {
"status": 400,
"statusText": "Bad Request",
"httpVersion": "http/1.1",
"headers": [
{
"name": "Access-Control-Allow-Origin",
"value": "*"
},
{
"name": "Access-Control-Allow-Headers",
"value": "X-Requested-With, X-Prototype-Version, X-CSRF-Token, Authorization"
}
],
As you can see above, the response status is 400.
If anyone has any idea how I can resolve this issue so the users don't experience this partial drawing issue of the overlays and don't need to clear their cache constantly, I'd really appreciate any tips.

Decoding gzip response body from Packetbeat

I am using Packetbeat to monitor the requests/responses into/out of Elasticsearch client nodes using the http protocol watcher on port 9200. I am sending the output of Packetbeat through Logstash, and then from there out to a different instance of Elasticsearch. We have compression support enabled in the Elasticsearch that is being monitored, so I occasionally see requests with "Accept-Encoding: gzip, deflate" headers returning responses that are gzipped. Unfortunately, I have not been able to decode any of these gzip responses using any tools I have at my disposal (including the web-based converters, the gzip command line tool, and using Zlib::GzipReader in a Logstash ruby filter script). They all report that it is not a gzip format.
Does anyone know why I can't seem to decode the gzip content?
I have provided a sample of the Logstash filter I'm using to try to do this on the fly as the event passes through (it always reports that http.response.body is not in gzip format).
filter {
if [type] == "http" {
if [http][response][headers][content-encoding] == "gzip" {
ruby {
init => "
require 'zlib'
require 'stringio'
"
code => "
body = event.get('[http][response][body]').to_s
sio = StringIO.new(body)
gz = Zlib::GzipReader.new(sio)
result = gz.read.to_s
event.set('[http][response][body]', result)
"
}
}
}
}
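For diagnosing this, a slightly more defensive variant of the code => block (a sketch, not from the original post) checks for the gzip magic bytes 0x1f 0x8b first and tags the event instead of failing, which shows whether the captured body is still a valid gzip stream at all:

code => "
  body = event.get('[http][response][body]').to_s
  if body.bytes[0, 2] == [0x1f, 0x8b]
    begin
      event.set('[http][response][body]', Zlib::GzipReader.new(StringIO.new(body)).read)
    rescue Zlib::GzipFile::Error
      # looked like gzip but could not be decompressed
      event.tag('gunzip_failed')
    end
  else
    # does not start with the gzip magic number, so it is not (or no longer) raw gzip
    event.tag('not_gzip')
  end
"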
I'm also providing a sample of the logged event here which includes the gzip content in case you would like to try to decompress it yourself:
{
"_index": "packetbeat-6.2.3-2018.05.19",
"_type": "doc",
"_id": "oH0bemMB2mAXfg5euIiP",
"_score": 1,
"_source": {
"server": "",
"client_server": "",
"bytes_in": 160,
"bytes_out": 361,
"#timestamp": "2018-05-19T20:33:46.470Z",
"client_port": 55863,
"path": "/",
"type": "http",
"client_proc": "",
"query": "GET /",
"port": 9200,
"host": "gke-main-production-elastic-clients-5728bab3-t1z8",
"#version": "1",
"responsetime": 0,
"fields": {
"nodePool": "production-elastic-clients"
},
"response": "HTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-encoding: gzip\r\ncontent-length: 250\r\n\r\n\u001f�\b\u0000\u0000\u0000\u0000\u0000\u0000\u0000T��n�0\u0014Fw���\u001c\u0010\u0018�����&��vH\u0016d�K������\u0010��\u000b�C\u0018����{��\u0010]\u0001�\u001aap1W\u0012�\u0018\u0017�,y)���oC�\n��A��\u001b�6/��\u001a�\u000e��\"l+�����\u001d\u000f\u0005y/���k�?�\u0005�\u0005���3���Y�_[���Mh�\u0007nzo�T����C�1�\u0011�]����\u0007H�\u0015q��)�&i��u^%iF�k�i6�ތs�c���)�9hh^�0�T2<�<���.J����x���}�:c�\u0011��=���\u001f\u0000\u0000\u0000��\u0003\u0000��.�S\u0001\u0000\u0000",
"proc": "",
"request": "GET / HTTP/1.1\r\nUser-Agent: vscode-restclient\r\nhost: es-http-dev.elastic-prod.svc.cluster.local:9200\r\naccept-encoding: gzip, deflate\r\nConnection: keep-alive\r\n\r\n",
"beat": {
"name": "gke-main-production-elastic-clients-5728bab3-t1z8",
"version": "6.2.3",
"hostname": "gke-main-production-elastic-clients-5728bab3-t1z8"
},
"status": "OK",
"method": "GET",
"client_ip": "10.24.20.6",
"http": {
"response": {
"phrase": "OK",
"headers": {
"content-encoding": "gzip",
"content-length": 250,
"content-type": "application/json; charset=UTF-8"
},
"body": "\u001f�\b\u0000\u0000\u0000\u0000\u0000\u0000\u0000T��n�0\u0014Fw���\u001c\u0010\u0018�����&��vH\u0016d�K������\u0010��\u000b�C\u0018����{��\u0010]\u0001�\u001aap1W\u0012�\u0018\u0017�,y)���oC�\n��A��\u001b�6/��\u001a�\u000e��\"l+�����\u001d\u000f\u0005y/���k�?�\u0005�\u0005���3���Y�_[���Mh�\u0007nzo�T����C�1�\u0011�]����\u0007H�\u0015q��)�&i��u^%iF�k�i6�ތs�c���)�9hh^�0�T2<�<���.J����x���}�:c�\u0011��=���\u001f\u0000\u0000\u0000��\u0003\u0000��.�S\u0001\u0000\u0000",
"code": 200
},
"request": {
"params": "",
"headers": {
"connection": "keep-alive",
"user-agent": "vscode-restclient",
"content-length": 0,
"host": "es-http-dev.elastic-prod.svc.cluster.local:9200",
"accept-encoding": "gzip, deflate"
}
}
},
"tags": [
"beats",
"beats_input_raw_event"
],
"ip": "10.24.41.5"
},
"fields": {
"#timestamp": [
"2018-05-19T20:33:46.470Z"
]
}
}
And this is the response for that message as received at the client, after the client has decompressed it successfully:
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 250
{
  "name": "es-client-7688c8d9b9-qp9l7",
  "cluster_name": "esprod",
  "cluster_uuid": "8iRwLMMSR72F76ZEONYcUg",
  "version": {
    "number": "5.6.3",
    "build_hash": "1a2f265",
    "build_date": "2017-10-06T20:33:39.012Z",
    "build_snapshot": false,
    "lucene_version": "6.6.1"
  },
  "tagline": "You Know, for Search"
}
I had a different situation and was able to resolve my issue. Posting it here in case it helps your case.
I was using the Postman tool to test my REST API services locally. My Packetbeat used the following config.
type: http
ports: [80, 8080, 8000, 5000, 8002]
send_all_headers: true
include_body_for: ["application/json", "x-www-form-urlencoded"]
send_request: true
send_response: true
I was getting the following output in the body.
I was able to get http.response.body in clear text when I added the following header to my Postman request:
Accept-Encoding: application/json

unable to parse my custom log in logstash

Given below is my service log. I want to parse this log with Logstash. Please suggest a plugin or method to parse it.
"msgs": [{
"ts": "2017-07-17T12:22:00.2657422Z",
"tid": 4,
"eid": 1,
"lvl": "Information",
"cat": "Microsoft.AspNetCore.Hosting.Internal.WebHost",
"msg": {
"cnt": "Request starting HTTP/1.1 POST http://localhost:20001/Processor text/xml; charset=utf-8 601",
"Protocol": "HTTP/1.1",
"Method": "POST",
"ContentType": "text/xml; charset=utf-8",
"ContentLength": 601,
"Scheme": "http",
"Host": "localhost:20001",
"PathBase": "",
"Path": "/Processor",
"QueryString": ""
}
},
{
"ts": "2017-07-17T12:22:00.4617773Z",
"tid": 4,
"lvl": "Information",
"cat": "NCR.CP.Service.ServiceHostMiddleware",
"msg": {
"cnt": "REQ"
},
"data": {
"Headers": {
"Connection": "Keep-Alive",
"Content-Length": "601",
"Content-Type": "text/xml; charset=utf-8",
"Accept-Encoding": "gzip, deflate",
"Expect": "100-continue",
"Host": "localhost:20001",
"SOAPAction": "\"http://servereps.mtxeps.com/TransactionService/SendTransaction\""
}
}
}]
Please suggest a way to apply a filter to this type of log so that I can extract fields from it and visualize them in Kibana.
I have heard of the grok filter, but what pattern would I have to use here?
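Since the entries are JSON rather than free-form text, one option (a sketch, assuming each event's message field holds one complete JSON document; the field names here are assumptions) is the json filter instead of grok, followed by split to turn the msgs array into individual events:

filter {
  # parse the JSON text in the message field into structured fields
  json {
    source => "message"
  }
  # emit one event per element of the msgs array
  split {
    field => "[msgs]"
  }
}

If one log entry spans multiple lines, the lines would first have to be joined (for example with a multiline codec on the input) before the json filter sees them; grok is mainly useful when the payload is not already structured.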

LogicApp CRM connector create record gives a 500 error

I've built a fairly simple LogicApp with the most recent version that came out about a week ago. It runs every hour and tries to create a record in CRM Online. In the same manner I've created a LogicApp that retrieves records, and that one works.
The failed input and output look like this:
{
  "host": {
    "api": {
      "runtimeUrl": "https://logic-apis-westeurope.azure-apim.net/apim/dynamicscrmonline"
    },
    "connection": {
      "name": "subscriptions/6e779c81-1d0c-4df9-92b8-b287ba919b51/resourceGroups/spdev-eiramar/providers/Microsoft.Web/connections/EE2AF043-71F0-4780-A8D1-25438A7746C0"
    }
  },
  "method": "post",
  "path": "/datasets/xxxx.crm4/tables/accounts/items",
  "body": {
    "name": "Test 1234"
  }
}
Output:
{
  "statusCode": 500,
  "headers": {
    "pragma": "no-cache",
    "x-ms-request-id": "d121f98c-3dd5-4e6d-899b-5150b17795a3",
    "cache-Control": "no-cache",
    "date": "Tue, 01 Mar 2016 12:27:09 GMT",
    "set-Cookie": "ARRAffinity=xxxxx082f2bca2639f8a68e283db0eba612ddd71036cf5a5cf2597f99;Path=/;Domain=127.0.0.1",
    "server": "Microsoft-HTTPAPI/2.0",
    "x-AspNet-Version": "4.0.30319",
    "x-Powered-By": "ASP.NET"
  },
  "body": {
    "status": 500,
    "message": "Unknown error.",
    "source": "127.0.0.1"
  }
}
Does anybody know how to solve this issue?
Do you have security roles set up to access the CRM org, i.e. can you access the URL from a browser? https://xxxx.crm4.dynamics.com
Update: Fix for this issue has been rolled out. Please retry your scenario.
