Here is the JSON data that I am trying to send from Filebeat to the ingest pipeline "logpipeline.json" in OpenSearch:
{
"#timestamp":"2022-11-08T10:07:05+00:00",
"client":"10.x.x.x",
"server_name":"example.stack.com",
"server_port":"80",
"server_protocol":"HTTP/1.1",
"method":"POST",
"request":"/example/api/v1/",
"request_length":"200",
"status":"500",
"bytes_sent":"598",
"body_bytes_sent":"138",
"referer":"",
"user_agent":"Java/1.8.0_191",
"upstream_addr":"10.x.x.x:10376",
"upstream_status":"500",
"gzip_ratio":"",
"content_type":"application/json",
"request_time":"6.826",
"upstream_response_time":"6.826",
"upstream_connect_time":"0.000",
"upstream_header_time":"6.826",
"remote_addr":"10.x.x.x",
"x_forwarded_for":"10.x.x.x",
"upstream_cache_status":"",
"ssl_protocol":"TLSv",
"ssl_cipher":"xxxx",
"ssl_session_reused":"r",
"request_body":"{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}",
"response_body":"{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}",
"limit_req_status":"",
"log_body":"1",
"connection_upgrade":"close",
"http_upgrade":"",
"request_uri":"/example/api/v1/",
"args":""
}
Filebeat to OpenSearch log shipping:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.29.117:9200"]
pipeline: logpipeline
#index: "filebeatelastic-%{[agent.version]}-%{+yyyy.MM.dd}"
index: "nginx_dev-%{+yyyy.MM.dd}"
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.enabled: true
ssl.verification_mode: none
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "filebeat"
password: "filebeat"
I am transforming some of the "data" fields in the ingest pipeline by doing type conversions, which works perfectly. The only problem I am facing is with "#timestamp".
"#timestamp" is of type "date", and once the JSON data goes through the pipeline I map the JSON message to a root-level object called "data". In the transformed data, "data.#timestamp" shows up as type "string", even though I haven't applied any transformation to it.
OpenSearch ingest pipeline - logpipeline.json
{
"description" : "Logging Pipeline",
"processors" : [
{
"json" : {
"field" : "message",
"target_field" : "data"
}
},
{
"date" : {
"field" : "data.#timestamp",
"formats" : ["ISO8601"]
}
},
{
"convert" : {
"field" : "data.body_bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_length",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_connect_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_header_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_response_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
}
]
}
Is there any way I can preserve the "date" type of "#timestamp" even after the transformation carried out in the ingest pipeline?
Indexed document image: (screenshot omitted)
Edit 1: updated ingest pipeline simulate result
{
"docs" : [
{
"doc" : {
"_index" : "_index",
"_id" : "_id",
"_source" : {
"index_date" : "2022.11.08",
"#timestamp" : "2022-11-08T12:07:05.000+02:00",
"message" : """
{ "#timestamp": "2022-11-08T10:07:05+00:00", "client": "10.x.x.x", "server_name": "example.stack.com", "server_port": "80", "server_protocol": "HTTP/1.1", "method": "POST", "request": "/example/api/v1/", "request_length": "200", "status": "500", "bytes_sent": "598", "body_bytes_sent": "138", "referer": "", "user_agent": "Java/1.8.0_191", "upstream_addr": "10.x.x.x:10376", "upstream_status": "500", "gzip_ratio": "", "content_type": "application/json", "request_time": "6.826", "upstream_response_time": "6.826", "upstream_connect_time": "0.000", "upstream_header_time": "6.826", "remote_addr": "10.x.x.x", "x_forwarded_for": "10.x.x.x", "upstream_cache_status": "", "ssl_protocol": "TLSv", "ssl_cipher": "xxxx", "ssl_session_reused": "r", "request_body": "{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}", "response_body": "{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}", "limit_req_status": "", "log_body": "1", "connection_upgrade": "close", "http_upgrade": "", "request_uri": "/example/api/v1/", "args": ""}
""",
"data" : {
"server_name" : "example.stack.com",
"request" : "/example/api/v1/",
"referer" : "",
"log_body" : "1",
"upstream_addr" : "10.x.x.x:10376",
"body_bytes_sent" : 138,
"upstream_header_time" : 6.826,
"ssl_cipher" : "xxxx",
"response_body" : """{"statusCode":500,"reasonPhrase":"Internal Server Error","errorMessage":"xxxx"}""",
"upstream_status" : "500",
"request_time" : 6.826,
"upstream_cache_status" : "",
"content_type" : "application/json",
"client" : "10.x.x.x",
"user_agent" : "Java/1.8.0_191",
"ssl_protocol" : "TLSv",
"limit_req_status" : "",
"remote_addr" : "10.x.x.x",
"method" : "POST",
"gzip_ratio" : "",
"http_upgrade" : "",
"bytes_sent" : 598,
"request_uri" : "/example/api/v1/",
"x_forwarded_for" : "10.x.x.x",
"args" : "",
"#timestamp" : "2022-11-08T10:07:05+00:00",
"upstream_connect_time" : 0.0,
"request_body" : """{"date":null,"sourceType":"BPM","processId":"xxxxx","comment":"Process status: xxxxx: ","user":"xxxx"}""",
"request_length" : 200,
"ssl_session_reused" : "r",
"server_port" : "80",
"upstream_response_time" : 6.826,
"connection_upgrade" : "close",
"server_protocol" : "HTTP/1.1",
"status" : "500"
}
},
"_ingest" : {
"timestamp" : "2023-01-18T08:06:35.335066236Z"
}
}
}
]
}
Finally I was able to resolve my issue. I updated filebeat.yml with the following. Previously the template name and pattern were different, but the default template name "filebeat" and pattern "filebeat" seem to do the job for me:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
But I still need to figure out how templates work, though.
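Whether "#timestamp" is indexed as a date is decided by the index mapping (supplied by the template), not by the ingest pipeline; the default "filebeat" template evidently ships suitable mappings. As a sketch of the explicit alternative (template name and index pattern are assumptions, chosen to match the nginx_dev index above), a template that forces the type would look like:

```
PUT _index_template/nginx_dev
{
  "index_patterns": ["nginx_dev-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "#timestamp": { "type": "date" }
          }
        }
      }
    }
  }
}
```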
How is it possible to populate a SimpleSchema default value with a call to a collection in Meteor.js, instead of hard-coding the "tests" inside defaultValue as below? Ideally the defaultValue would return everything from TestList = new Mongo.Collection('testList').
StudentSchema = new SimpleSchema({
tests: {
type: [Object],
blackbox: true,
optional: true,
defaultValue:[
{
"_id" : "T2yfqWJ3a5rQz64WN",
"category_id" : "5",
"active" : "true",
"category" : "Cognitive/Intelligence",
"abbr" : "WJ-IV COG",
"name" : "Woodcock-Johnson IV, Tests of Cognitive Abilities",
"publisher" : "Riverside Publishing"
},
{
"_id" : "Ai8bT6dLYGQRDfvKe",
"category_id" : "5",
"active" : "true",
"category" : "Cognitive/Intelligence",
"abbr" : "WISC-IV",
"name" : "Wechsler Intelligence Scale for Children-Fourth Edition",
"publisher" : "The Psychological Corporation"
},
{
"_id" : "osAuaLrX97meRZuda",
"category_id" : "7",
"active" : "true",
"category" : "Speech and Language",
"abbr" : "WOJO",
"name" : "Wechsler Intelligence",
"publisher" : "The Psychological Corporation"
},
{
"_id" : "57c62a784b94c533b656dba8",
"category_id" : "5",
"active" : "true",
"category" : "Behavioral",
"abbr" : "CARS",
"name" : "CARS",
"publisher" : "The Psychological Corporation"
}
    ]
  }
});
You can dynamically load all entries from the "TestList" collection into the "tests" array with autoValue:
TestList = new Mongo.Collection('testList');
StudentSchema = new SimpleSchema({
tests: {
type: [Object],
blackbox: true,
optional: true,
autoValue: function () {
return TestList.find().fetch();
    }
  }
});
I am setting auto_return to true, but when I complete the payment on Square, the app does not redirect on its own.
{
"amount_money": {
"amount" : "100",
"currency_code" : "USD"
},
"auto_return":true,
"callback_url" : "https://floating-inlet-19449.herokuapp.com/redirect",
"client_id" : "sq0idp-U8x6mJyLFtHuhCfv9sqL5g",
"version": "1.3",
"notes": "notes for the transaction",
"options" : {
"supported_tender_types" : ["CREDIT_CARD"]
}
}
auto_return should be nested in the options object, like this:
{
"amount_money": { "amount" : 100, "currency_code" : "USD" },
"callback_url" : "https://floating-inlet-19449.herokuapp.com/redirect",
"client_id" : "sq0idp-U8x6mJyLFtHuhCfv9sqL5g",
"version": "1.3",
"notes": "notes for the transaction",
"options" : {
"supported_tender_types" : ["CREDIT_CARD"],
"auto_return": true
}
}
WireMock logs that the following request was not matched:
WireMock : Request was not matched:
{
"url" : "/api/accounts?username=defaultuser",
"absoluteUrl" : "http://localhost:11651/api/accounts?username=defaultuser",
"method" : "GET",
"clientIp" : "127.0.0.1",
"headers" : {
"authorization" : "bearer test123",
"accept" : "application/json, application/*+json",
"user-agent" : "Java/1.8.0_121",
"host" : "localhost:11651",
"connection" : "keep-alive"
},
"cookies" : { },
"browserProxyRequest" : false,
"loggedDate" : 1500711718016,
"bodyAsBase64" : "",
"body" : "",
"loggedDateString" : "2017-07-22T08:21:58Z"
}
Closest match:
{
"urlPath" : "/api/accounts",
"method" : "GET",
"headers" : {
"authorization" : {
"matches" : "^bearer"
},
"accept" : {
"equalTo" : "application/json, application/*+json"
},
"user-agent" : {
"equalTo" : "Java/1.8.0_121"
},
"host" : {
"matches" : "^localhost:[0-9]{5}"
},
"connection" : {
"equalTo" : "keep-alive"
}
},
"queryParameters" : {
"username" : {
"matches" : "^[a-zA-Z0-9]*$"
}
}
}
Is the problem caused by the difference between url and urlPath?
I also tried to specify absoluteUrl in the contract, but it is ignored, I guess because it is not defined in the Contract DSL.
The request side of the contract looks like this:
request{
method 'GET'
url('/api/accounts'){
queryParameters {
parameter('username', $(consumer(regex('^[a-zA-Z0-9]*$')), producer('defaultuser')))
}
}
headers {
header('authorization', $(consumer(regex('^bearer')), producer(execute('authClientBearer()'))))
header('accept', $(consumer('application/json, application/*+json')))
header('user-agent', $(consumer('Java/1.8.0_121')))
header('host', $(consumer(regex('^localhost:[0-9]{5}'))))
header('connection', $(consumer('keep-alive')))
}
}
It turned out to be a missing / at the end of the URL in the contract/stub.
Not directly related to the question but for all who came here from Google:
In my case I was in the wrong scenario state.
More about scenario states here: http://wiremock.org/docs/stateful-behaviour/
If you see the same symptom, this might be the cause. Example JSON matching configuration (from a MockMvc setup):
"request": {
"urlPath": "/hello?name=pavel",
"method": "GET",
..
}
And you can see in log:
"/hello?name=pavel" | "/hello?name=pavel" - URL does not match
This is expected: urlPath must contain only the path, so a query string inside urlPath will never match.
You have to change it to:
"request": {
"urlPath": "/hello",
"method": "GET",
"queryParameters": {
"name": {
"equalTo": "pavel"
}
},
..
}
Currently my nightwatch.json is set up fine for running on my Mac:
{
"src_folders" : ["specs"],
"output_folder" : "tests/e2e/reports",
"custom_commands_path" : "",
"custom_assertions_path" : "",
"page_objects_path" : "",
"globals_path" : "",
"selenium" : {
"start_process" : true,
"server_path" : "bin/selenium-server-standalone-2.48.2.jar",
"log_path" : "",
"host" : "127.0.0.1",
"port" : 4444,
"cli_args" : {
"webdriver.chrome.driver" : "bin/chromedriver 2",
"webdriver.ie.driver" : ""
}
},
"test_settings" : {
"default" : {
"launch_url" : "someurl",
"selenium_port" : 4444,
"selenium_host" : "localhost",
"silent": true,
"screenshots" : {
"enabled" : false,
"path" : ""
},
"desiredCapabilities": {
"browserName": "chrome",
"javascriptEnabled": true,
"acceptSslCerts": true
}
}
}
}
However, on Windows the Chrome driver will need to be chromedriver.exe. What is the best-practice way of resolving this? Do I need two config files? I would prefer not to, as that would require extra checks.
The solution is to use a nightwatch.conf.js file, i.e.:
module.exports = (function (settings) {
//Setting chromedriver path at runtime to run on different architectures
if (process.platform === "darwin") {
settings.selenium.cli_args["webdriver.chrome.driver"] = "bin/chromedriver 2";
}
else if (process.platform === "win32") { // "win32" is reported even on 64-bit Windows
settings.selenium.cli_args["webdriver.chrome.driver"] = "bin/chromedriver.exe";
}
return settings;
})(require('./nightwatch.json'));
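The platform switch can also be factored into a small helper (a sketch; the driver paths are the ones from the config above, and Node reports "win32" even on 64-bit Windows):

```javascript
// Resolve the chromedriver binary for a given Node process.platform value.
// Paths match the bin/ layout used in the config above.
function chromedriverPath(platform) {
  return platform === "win32" ? "bin/chromedriver.exe" : "bin/chromedriver 2";
}

console.log(chromedriverPath("darwin")); // "bin/chromedriver 2"
console.log(chromedriverPath("win32"));  // "bin/chromedriver.exe"
```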