I am using Packetbeat to monitor the requests/responses into and out of Elasticsearch client nodes, using the http protocol watcher on port 9200. I am sending the output of Packetbeat through Logstash, and from there out to a different instance of Elasticsearch. Compression support is enabled on the Elasticsearch being monitored, so I occasionally see requests with "Accept-Encoding: gzip, deflate" headers returning responses that are gzipped. Unfortunately, I have not been able to decode any of these gzip responses with any tool at my disposal (web-based converters, the gzip command-line tool, and Zlib::GzipReader in a Logstash ruby filter script). They all report that the data is not in gzip format.
Does anyone know why I can't seem to decode the gzip content?
Here is a sample of the Logstash filter I'm using to try to decode the body on the fly as the event passes through (it always reports that http.response.body is not in gzip format):
filter {
  if [type] == "http" {
    if [http][response][headers][content-encoding] == "gzip" {
      ruby {
        init => "
          require 'zlib'
          require 'stringio'
        "
        code => "
          # Wrap the captured body in a StringIO and gunzip it in place.
          # Raises Zlib::GzipFile::Error ('not in gzip format') if the
          # body is not a valid gzip stream.
          body = event.get('[http][response][body]').to_s
          gz = Zlib::GzipReader.new(StringIO.new(body))
          event.set('[http][response][body]', gz.read)
          gz.close
        "
      }
    }
  }
}
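As a quick sanity check, the first two bytes of the captured body can be inspected directly: every gzip stream begins with the magic bytes 0x1f 0x8b. Here is a minimal standalone Ruby sketch (the file name body.bin is just a placeholder for a dump of http.response.body):
# check_gzip.rb - verify the gzip magic bytes of a captured body.
body = File.binread(ARGV[0] || 'body.bin')
first = body.bytes.first(2)
if first == [0x1f, 0x8b]
  puts 'magic bytes OK: looks like a gzip stream'
else
  puts 'not gzip: stream starts with ' +
       first.map { |b| format('0x%02x', b) }.join(' ')
end
In the sample event below, the body starts with \u001f followed by a replacement character rather than 0x8b, which suggests the binary payload was already mangled by the time it was stored as text.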
I'm also providing a sample of the logged event here, which includes the gzip content, in case you would like to try to decompress it yourself:
{
  "_index": "packetbeat-6.2.3-2018.05.19",
  "_type": "doc",
  "_id": "oH0bemMB2mAXfg5euIiP",
  "_score": 1,
  "_source": {
    "server": "",
    "client_server": "",
    "bytes_in": 160,
    "bytes_out": 361,
    "@timestamp": "2018-05-19T20:33:46.470Z",
    "client_port": 55863,
    "path": "/",
    "type": "http",
    "client_proc": "",
    "query": "GET /",
    "port": 9200,
    "host": "gke-main-production-elastic-clients-5728bab3-t1z8",
    "@version": "1",
    "responsetime": 0,
    "fields": {
      "nodePool": "production-elastic-clients"
    },
    "response": "HTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-encoding: gzip\r\ncontent-length: 250\r\n\r\n\u001f�\b\u0000\u0000\u0000\u0000\u0000\u0000\u0000T��n�0\u0014Fw���\u001c\u0010\u0018�����&��vH\u0016d�K������\u0010��\u000b�C\u0018����{��\u0010]\u0001�\u001aap1W\u0012�\u0018\u0017�,y)���oC�\n��A��\u001b�6/��\u001a�\u000e��\"l+�����\u001d\u000f\u0005y/���k�?�\u0005�\u0005���3���Y�_[���Mh�\u0007nzo�T����C�1�\u0011�]����\u0007H�\u0015q��)�&i��u^%iF�k�i6�ތs�c���)�9hh^�0�T2<�<���.J����x���}�:c�\u0011��=���\u001f\u0000\u0000\u0000��\u0003\u0000��.�S\u0001\u0000\u0000",
    "proc": "",
    "request": "GET / HTTP/1.1\r\nUser-Agent: vscode-restclient\r\nhost: es-http-dev.elastic-prod.svc.cluster.local:9200\r\naccept-encoding: gzip, deflate\r\nConnection: keep-alive\r\n\r\n",
    "beat": {
      "name": "gke-main-production-elastic-clients-5728bab3-t1z8",
      "version": "6.2.3",
      "hostname": "gke-main-production-elastic-clients-5728bab3-t1z8"
    },
    "status": "OK",
    "method": "GET",
    "client_ip": "10.24.20.6",
    "http": {
      "response": {
        "phrase": "OK",
        "headers": {
          "content-encoding": "gzip",
          "content-length": 250,
          "content-type": "application/json; charset=UTF-8"
        },
        "body": "\u001f�\b\u0000\u0000\u0000\u0000\u0000\u0000\u0000T��n�0\u0014Fw���\u001c\u0010\u0018�����&��vH\u0016d�K������\u0010��\u000b�C\u0018����{��\u0010]\u0001�\u001aap1W\u0012�\u0018\u0017�,y)���oC�\n��A��\u001b�6/��\u001a�\u000e��\"l+�����\u001d\u000f\u0005y/���k�?�\u0005�\u0005���3���Y�_[���Mh�\u0007nzo�T����C�1�\u0011�]����\u0007H�\u0015q��)�&i��u^%iF�k�i6�ތs�c���)�9hh^�0�T2<�<���.J����x���}�:c�\u0011��=���\u001f\u0000\u0000\u0000��\u0003\u0000��.�S\u0001\u0000\u0000",
        "code": 200
      },
      "request": {
        "params": "",
        "headers": {
          "connection": "keep-alive",
          "user-agent": "vscode-restclient",
          "content-length": 0,
          "host": "es-http-dev.elastic-prod.svc.cluster.local:9200",
          "accept-encoding": "gzip, deflate"
        }
      }
    },
    "tags": [
      "beats",
      "beats_input_raw_event"
    ],
    "ip": "10.24.41.5"
  },
  "fields": {
    "@timestamp": [
      "2018-05-19T20:33:46.470Z"
    ]
  }
}
And this is the same response as received at the client, after the client successfully decompressed it:
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 250
{
  "name": "es-client-7688c8d9b9-qp9l7",
  "cluster_name": "esprod",
  "cluster_uuid": "8iRwLMMSR72F76ZEONYcUg",
  "version": {
    "number": "5.6.3",
    "build_hash": "1a2f265",
    "build_date": "2017-10-06T20:33:39.012Z",
    "build_snapshot": false,
    "lucene_version": "6.6.1"
  },
  "tagline": "You Know, for Search"
}
I had a different situation and was able to resolve my issue. Posting it here in case it helps your case.
I was using the Postman tool to test my REST API services locally. My Packetbeat used the following config:
type: http
ports: [80, 8080, 8000, 5000, 8002]
send_all_headers: true
include_body_for: ["application/json", "x-www-form-urlencoded"]
send_request: true
send_response: true
I was getting garbled output in the body. I was able to get http.response.body in clear text when I added the following to my Postman request:
Accept-Encoding: application/json
Since that value no longer advertises gzip, the server replies uncompressed and Packetbeat captures the body as plain text.
Related
I'm trying to log all queries from kibana. So I edited config/kibana.yml and added the following lines:
logging.dest: /tmp/test.log
logging.silent: false
logging.quiet: false
logging.verbose: true
elasticsearch.logQueries: true
Then I restarted Kibana and queried for something.
Now logs start to appear, but only access logs are recorded; there are no ES queries there.
{
  "type": "response",
  "@timestamp": "2018-08-21T02:41:03Z",
  "tags": [],
  "pid": 28701,
  "method": "post",
  "statusCode": 200,
  "req": {
    "url": "/elasticsearch/_msearch",
    "method": "post",
    "headers": {
      ...
    },
    "remoteAddress": "xxxxx",
    "userAgent": "xxxxx",
    "referer": "http://xxxxxxx:8901/app/kibana"
  },
  "res": {
    "statusCode": 200,
    "responseTime": 62,
    "contentLength": 9
  },
  "message": "POST /elasticsearch/_msearch 200 62ms - 9.0B"
}
Any ideas? I'm using ELK 6.2.2.
The elasticsearch.logQueries setting was introduced in Kibana 6.3, as can be seen in this pull request.
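For reference, after upgrading to 6.3+ a minimal kibana.yml sketch would be (per the Kibana docs, elasticsearch.logQueries only takes effect when verbose logging is enabled):
# kibana.yml (Kibana 6.3+)
logging.verbose: true
elasticsearch.logQueries: true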
Given below is my service log. I want to parse this log with Logstash; please suggest a plugin or method for parsing it.
"msgs": [{
    "ts": "2017-07-17T12:22:00.2657422Z",
    "tid": 4,
    "eid": 1,
    "lvl": "Information",
    "cat": "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "msg": {
      "cnt": "Request starting HTTP/1.1 POST http://localhost:20001/Processor text/xml; charset=utf-8 601",
      "Protocol": "HTTP/1.1",
      "Method": "POST",
      "ContentType": "text/xml; charset=utf-8",
      "ContentLength": 601,
      "Scheme": "http",
      "Host": "localhost:20001",
      "PathBase": "",
      "Path": "/Processor",
      "QueryString": ""
    }
  },
  {
    "ts": "2017-07-17T12:22:00.4617773Z",
    "tid": 4,
    "lvl": "Information",
    "cat": "NCR.CP.Service.ServiceHostMiddleware",
    "msg": {
      "cnt": "REQ"
    },
    "data": {
      "Headers": {
        "Connection": "Keep-Alive",
        "Content-Length": "601",
        "Content-Type": "text/xml; charset=utf-8",
        "Accept-Encoding": "gzip, deflate",
        "Expect": "100-continue",
        "Host": "localhost:20001",
        "SOAPAction": "\"http://servereps.mtxeps.com/TransactionService/SendTransaction\""
      }
    }
  }]
Please suggest a way to apply a filter to this type of log, so that I can extract fields from it and visualize them in Kibana.
I have heard of the grok filter, but what pattern would I have to use here?
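Since the log is already JSON, a grok pattern may not be needed at all. Here is a minimal Logstash sketch, assuming each event arrives with the document above in its message field (the [msgs] field name is taken from the sample):
filter {
  # Parse the JSON document carried in the message field.
  json {
    source => "message"
  }
  # The sample wraps entries in a "msgs" array; emit one event per entry.
  split {
    field => "[msgs]"
  }
}
Fields such as [msgs][lvl] and [msgs][msg][cnt] should then be available for Kibana visualizations.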
I'm querying a simple Elasticsearch index with house-number data.
".house-numbers": {
  "mappings": {
    "house-number": {
      "properties": {
        "id": {
          "type": "keyword"
        },
        "value": {
          "type": "text",
          "index_options": "docs"
        }
      }
    }
  }
}
Then I query the data with a POST HTTP request.
Request url
http://localhost:9200/.house-numbers/housenumber/_search
Headers:
Content-Type: text/plain
Content-Length: 55
Accept: */*
Accept-Encoding: gzip, deflate, br
Request body:
{
  "size": 30,
  "query": {
    "match": {
      "value": {
        "query": "2 3"
      }
    }
  }
}
The request returns data in 10-30 ms and everything works fine; the Elasticsearch response parameter took is small in all cases (3-5 ms).
When I change the size in the request body to "size": 35, the response time suddenly jumps to 500 ms. The took parameter from Elasticsearch stays the same, there are no special characters, and the size of the response is very similar.
I tried many clients (NEST, Postman, Fiddler) to make these requests; every client shows the same behaviour.
My Elasticsearch settings contain only:
http.compression : true
http.compression_level : 9
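For reference, this is roughly how I compare timings from the command line with and without compressed responses (query.json is a placeholder for the request body above):
# plain response (curl sends no Accept-Encoding header by default)
curl -s -o /dev/null -w 'plain: %{time_total}s\n' \
  --data-binary @query.json 'http://localhost:9200/.house-numbers/housenumber/_search'
# same request, asking for a gzipped response
curl -s -o /dev/null -w 'gzip:  %{time_total}s\n' --compressed \
  --data-binary @query.json 'http://localhost:9200/.house-numbers/housenumber/_search'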
My JVM settings:
"jvm": {
  "timestamp": 1478108615141,
  "uptime_in_millis": 17150141,
  "mem": {
    "heap_used_in_bytes": 1384307624,
    "heap_used_percent": 66,
    "heap_committed_in_bytes": 2077753344,
    "heap_max_in_bytes": 2077753344,
    "non_heap_used_in_bytes": 96403904,
    "non_heap_committed_in_bytes": 101502976,
    "pools": {
      "young": {
        "used_in_bytes": 324358632,
        "max_in_bytes": 558432256,
        "peak_used_in_bytes": 558432256,
        "peak_max_in_bytes": 558432256
      },
      "survivor": {
        "used_in_bytes": 69730304,
        "max_in_bytes": 69730304,
        "peak_used_in_bytes": 69730304,
        "peak_max_in_bytes": 69730304
      },
      "old": {
        "used_in_bytes": 990220848,
        "max_in_bytes": 1449590784,
        "peak_used_in_bytes": 1190046816,
        "peak_max_in_bytes": 1449590784
      }...
I tried different versions of Elasticsearch.
I tried different settings: turning off http.compression, changing the compression_level.
I tried other hosts for Elasticsearch.
I have no idea what could cause this problem, and I can't continue with my work.
Any idea where to look or how to proceed?
Of course, the problem was not in Elasticsearch but in the HTTP communication, especially with HTTP compression turned on.
Hints to remove the delays:
close Fiddler
disable firewall and anti-virus software
close all programs that could possibly intercept HTTP communication
I cannot get the message field to decode from my JSON log line when receiving it via Filebeat.
Here is the line in my logs:
{"levelname": "WARNING", "asctime": "2016-07-01 18:06:37", "message": "One or more gateways are offline", "name": "ep.management.commands.import", "funcName": "check_gateway_online", "lineno": 103, "process": 44551, "processName": "MainProcess", "thread": 140735198597120, "threadName": "MainThread", "server": "default"}
Here is the Logstash config. I tried with and without the codec; the only difference is that the message is escaped when I use the codec.
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
Here is the json as it arrives in elasticsearch:
{
  "_index": "filebeat-2016.07.01",
  "_type": "json",
  "_id": "AVWnpK519vJkh3Ry-Q9B",
  "_score": null,
  "_source": {
    "@timestamp": "2016-07-01T18:07:13.522Z",
    "beat": {
      "hostname": "59b378d40b2e",
      "name": "59b378d40b2e"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "message": "{\"levelname\": \"WARNING\", \"asctime\": \"2016-07-01 18:07:12\", \"message\": \"One or more gateways are offline on server default\", \"name\": \"ep.controllers.secure_client\", \"funcName\": \"check_gateways_online\", \"lineno\": 80, \"process\": 44675, \"processName\": \"MainProcess\", \"thread\": 140735198597120, \"threadName\": \"MainThread\"}",
    "offset": 251189,
    "source": "/mnt/ep_logs/ep_.json",
    "type": "json"
  },
  "fields": {
    "@timestamp": [
      1467396433522
    ]
  },
  "sort": [
    1467396433522
  ]
}
What I would like is for the contents of the message object to be decoded.
Many thanks.
When that happens, it's usually because your Filebeat instance is configured to send documents directly to ES.
In your filebeat configuration file, make sure to comment out the elasticsearch output.
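For example, a minimal filebeat.yml sketch (5.x-style keys; adjust to your Filebeat version) with the elasticsearch output disabled and events shipped to the Logstash beats input shown above:
# filebeat.yml
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]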
I am setting up a new payment method in an e-shop.
The requirements are:
URL: POST https://payparts2.privatbank.ua/ipp/v2/payment/create
server-server
Headers
Accept: application/json;
Accept-Encoding: UTF-8;
Content-Type: application/json; charset=UTF-8;
Body
{
  "storeId": "",
  "orderId": "",
  "amount": 300.00,
  "currency": "980",
  "partsCount": 6,
  "merchantType": "PP",
  "products": [
    {
      "name": "TV",
      "count": 2,
      "price": 100.00
    },
    {
      "name": "microwave",
      "count": 1,
      "price": 200.00
    }
  ],
  "responseUrl": "http://shop.com/response",
  "redirectUrl": "http://shop.com/redirect",
  "signature": ""
}
Here's my code:
http://jsfiddle.net/olga_smelo/hasb5kkr/9/
I've got an error:
FAIL Content type 'application/x-www-form-urlencoded;charset=UTF-8' not supported uk_UA
but I set the content type to "application/json" in my ajax call.
Help me please!
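For what it's worth, jQuery's $.ajax defaults to exactly the content type the gateway is rejecting (application/x-www-form-urlencoded), so it has to be overridden explicitly and the body serialized by hand. A minimal sketch, assuming jQuery and with requestBody standing in for the JSON object shown above:
// contentType sets the request header; JSON.stringify builds the raw body.
$.ajax({
  url: 'https://payparts2.privatbank.ua/ipp/v2/payment/create',
  type: 'POST',
  contentType: 'application/json; charset=UTF-8',
  data: JSON.stringify(requestBody),
  dataType: 'json',
  success: function (response) { console.log(response); },
  error: function (xhr) { console.error(xhr.status, xhr.responseText); }
});
Note that a browser-side call may also run into CORS restrictions, given that the requirement above says this should be a server-to-server ("server-server") request.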