I have been trying to run a simple test with Nightwatch.js and I keep running into issues. I believe I have set up my JSON file correctly:
{
  "src_folder" : ["./smoketests"],
  "output_folder" : "./reports",
  "selenium" : {
    "start_process" : true,
    "start_session" : true,
    "server_path" : "M:/nightwatch/lib/selenium-server-standalone-2.48.2.jar",
    "log_path" : false,
    "host" : "127.0.0.1",
    "port" : 4444,
    "cli_args" : {
      "webdriver.chrome.driver" : "./lib/chromedriver.exe"
    }
  },
  "test_settings" : {
    "default" : {
      "launch_url" : "http://www.google.com/",
      "selenium_port" : 4444,
      "selenium_host" : "localhost",
      "silent" : true,
      "screenshots" : {
        "enabled" : false,
        "path" : "./screenshots/smoketests"
      }
    },
    "desiredCapabilities" : {
      "browserName" : "firefox",
      "javascriptEnabled" : true,
      "acceptSslCerts" : true
    },
    "chrome" : {
      "desiredCapabilities" : {
        "browserName" : "chrome",
        "javascriptEnabled" : true,
        "acceptSslCerts" : true
      }
    }
  }
}
and my test is pretty simple:
module.exports = {
  beforeEach : function(browser) {
    browser.maximizeWindow();
  },

  'Test title' : function(browser) {
    browser
      .url('http://www.google.com/')
      .waitForElementVisible('body', 1000)
      .assert.title("Google");

    browser.end();
  }
};
Yet, when I run the test:
nightwatch -c smoketests/homepage.json
I receive the following error:
M:\nightwatch>nightwatch -c projects/smoketests/homepage.json
Starting selenium server... started - PID: 6448

C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:116
    var fullPaths = testSource.map(function (p) {
                               ^
TypeError: Cannot read property 'map' of undefined
    at module.exports.readPaths (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:116:31)
    at runner [as run] (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:182:10)
    at Object.CliRunner.runner (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:345:16)
    at C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:321:12
    at SeleniumServer.onStarted (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:281:9)
    at SeleniumServer.checkProcessStarted (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\selenium.js:140:10)
    at SeleniumServer.onStderrData (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\selenium.js:120:8)
    at emitOne (events.js:77:13)
    at Socket.emit (events.js:169:7)
    at readableAddChunk (_stream_readable.js:146:16)
Has anybody else encountered this issue as well?
I think I figured out my initial issue: I had a typo in my JSON file, "src_folder" instead of "src_folders". After fixing it, my test seems to run fine.
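For anyone hitting the same TypeError: the runner receives undefined for its test source when the source-folder property is misspelled. The corrected first property of the config above is:

{
  "src_folders" : ["./smoketests"],
  ...
}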
Here is the JSON data that I am trying to send from Filebeat to the ingest pipeline "logpipeline.json" in OpenSearch.
JSON data:
{
  "#timestamp":"2022-11-08T10:07:05+00:00",
  "client":"10.x.x.x",
  "server_name":"example.stack.com",
  "server_port":"80",
  "server_protocol":"HTTP/1.1",
  "method":"POST",
  "request":"/example/api/v1/",
  "request_length":"200",
  "status":"500",
  "bytes_sent":"598",
  "body_bytes_sent":"138",
  "referer":"",
  "user_agent":"Java/1.8.0_191",
  "upstream_addr":"10.x.x.x:10376",
  "upstream_status":"500",
  "gzip_ratio":"",
  "content_type":"application/json",
  "request_time":"6.826",
  "upstream_response_time":"6.826",
  "upstream_connect_time":"0.000",
  "upstream_header_time":"6.826",
  "remote_addr":"10.x.x.x",
  "x_forwarded_for":"10.x.x.x",
  "upstream_cache_status":"",
  "ssl_protocol":"TLSv",
  "ssl_cipher":"xxxx",
  "ssl_session_reused":"r",
  "request_body":"{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}",
  "response_body":"{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}",
  "limit_req_status":"",
  "log_body":"1",
  "connection_upgrade":"close",
  "http_upgrade":"",
  "request_uri":"/example/api/v1/",
  "args":""
}
Filebeat to OpenSearch log shipping:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.29.117:9200"]
  pipeline: logpipeline
  #index: "filebeatelastic-%{[agent.version]}-%{+yyyy.MM.dd}"
  index: "nginx_dev-%{+yyyy.MM.dd}"
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.enabled: true
  ssl.verification_mode: none
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "filebeat"
  password: "filebeat"
I am carrying out transformations on some of the "data" fields in the ingest pipeline, doing type conversions, and that works perfectly. The only problem I am facing is with "#timestamp".
The "#timestamp" field is of type "date". Once the JSON data goes through the pipeline, I map the JSON message to a root-level object called "data", and in that transformed data "data.#timestamp" shows up as type "string", even though I haven't applied any transformation to it.
OpenSearch ingest pipeline - logpipeline.json:
{
  "description" : "Logging Pipeline",
  "processors" : [
    {
      "json" : {
        "field" : "message",
        "target_field" : "data"
      }
    },
    {
      "date" : {
        "field" : "data.#timestamp",
        "formats" : ["ISO8601"]
      }
    },
    {
      "convert" : {
        "field" : "data.body_bytes_sent",
        "type" : "integer",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.bytes_sent",
        "type" : "integer",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.request_length",
        "type" : "integer",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.request_time",
        "type" : "float",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_connect_time",
        "type" : "float",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_header_time",
        "type" : "float",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_response_time",
        "type" : "float",
        "ignore_missing" : true,
        "ignore_failure" : true
      }
    }
  ]
}
Is there any way I can preserve the "date" type of the "#timestamp" field even after the transformation carried out in the ingest pipeline?
[indexed document screenshot]
Edit 1: updated ingest pipeline simulate result:
{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_id" : "_id",
        "_source" : {
          "index_date" : "2022.11.08",
          "#timestamp" : "2022-11-08T12:07:05.000+02:00",
          "message" : """
{ "#timestamp": "2022-11-08T10:07:05+00:00", "client": "10.x.x.x", "server_name": "example.stack.com", "server_port": "80", "server_protocol": "HTTP/1.1", "method": "POST", "request": "/example/api/v1/", "request_length": "200", "status": "500", "bytes_sent": "598", "body_bytes_sent": "138", "referer": "", "user_agent": "Java/1.8.0_191", "upstream_addr": "10.x.x.x:10376", "upstream_status": "500", "gzip_ratio": "", "content_type": "application/json", "request_time": "6.826", "upstream_response_time": "6.826", "upstream_connect_time": "0.000", "upstream_header_time": "6.826", "remote_addr": "10.x.x.x", "x_forwarded_for": "10.x.x.x", "upstream_cache_status": "", "ssl_protocol": "TLSv", "ssl_cipher": "xxxx", "ssl_session_reused": "r", "request_body": "{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}", "response_body": "{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}", "limit_req_status": "", "log_body": "1", "connection_upgrade": "close", "http_upgrade": "", "request_uri": "/example/api/v1/", "args": ""}
""",
          "data" : {
            "server_name" : "example.stack.com",
            "request" : "/example/api/v1/",
            "referer" : "",
            "log_body" : "1",
            "upstream_addr" : "10.x.x.x:10376",
            "body_bytes_sent" : 138,
            "upstream_header_time" : 6.826,
            "ssl_cipher" : "xxxx",
            "response_body" : """{"statusCode":500,"reasonPhrase":"Internal Server Error","errorMessage":"xxxx"}""",
            "upstream_status" : "500",
            "request_time" : 6.826,
            "upstream_cache_status" : "",
            "content_type" : "application/json",
            "client" : "10.x.x.x",
            "user_agent" : "Java/1.8.0_191",
            "ssl_protocol" : "TLSv",
            "limit_req_status" : "",
            "remote_addr" : "10.x.x.x",
            "method" : "POST",
            "gzip_ratio" : "",
            "http_upgrade" : "",
            "bytes_sent" : 598,
            "request_uri" : "/example/api/v1/",
            "x_forwarded_for" : "10.x.x.x",
            "args" : "",
            "#timestamp" : "2022-11-08T10:07:05+00:00",
            "upstream_connect_time" : 0.0,
            "request_body" : """{"date":null,"sourceType":"BPM","processId":"xxxxx","comment":"Process status: xxxxx: ","user":"xxxx"}""",
            "request_length" : 200,
            "ssl_session_reused" : "r",
            "server_port" : "80",
            "upstream_response_time" : 6.826,
            "connection_upgrade" : "close",
            "server_protocol" : "HTTP/1.1",
            "status" : "500"
          }
        },
        "_ingest" : {
          "timestamp" : "2023-01-18T08:06:35.335066236Z"
        }
      }
    }
  ]
}
Finally I was able to resolve my issue. I updated filebeat.yml with the following; previously the template name and pattern were different, but the default template name "filebeat" and pattern "filebeat" seem to be doing the job for me:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
But I still need to figure out how templates work, though.
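In case it helps anyone else: as far as I understand it, another way to pin the type explicitly would be an index template whose mappings declare the nested field as a date. A hedged sketch (the template name, index pattern, and field path here are assumptions based on my config above, not something I have verified end to end):

PUT _index_template/nginx_dev
{
  "index_patterns": ["nginx_dev-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "#timestamp": { "type": "date" }
          }
        }
      }
    }
  }
}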
How is it possible to populate a SimpleSchema default value with a call to a collection in Meteor.js, instead of defining the "tests" inline within defaultValue as below? Ideally the defaultValue would return everything from TestList = new Mongo.Collection('testList').
StudentSchema = new SimpleSchema({
  tests: {
    type: [Object],
    blackbox: true,
    optional: true,
    defaultValue: [
      {
        "_id" : "T2yfqWJ3a5rQz64WN",
        "category_id" : "5",
        "active" : "true",
        "category" : "Cognitive/Intelligence",
        "abbr" : "WJ-IV COG",
        "name" : "Woodcock-Johnson IV, Tests of Cognitive Abilities",
        "publisher" : "Riverside Publishing"
      },
      {
        "_id" : "Ai8bT6dLYGQRDfvKe",
        "category_id" : "5",
        "active" : "true",
        "category" : "Cognitive/Intelligence",
        "abbr" : "WISC-IV",
        "name" : "Wechsler Intelligence Scale for Children-Fourth Edition",
        "publisher" : "The Psychological Corporation"
      },
      {
        "_id" : "osAuaLrX97meRZuda",
        "category_id" : "7",
        "active" : "true",
        "category" : "Speech and Language",
        "abbr" : "WOJO",
        "name" : "Wechsler Intelligence",
        "publisher" : "The Psychological Corporation"
      },
      {
        "_id" : "57c62a784b94c533b656dba8",
        "category_id" : "5",
        "active" : "true",
        "category" : "Behavioral",
        "abbr" : "CARS",
        "name" : "CARS",
        "publisher" : "The Psychological Corporation"
      }
    ]
  }
});
Dynamically loading all entries from the "TestList" collection into the "tests" array:
TestList = new Mongo.Collection('testList');

StudentSchema = new SimpleSchema({
  tests: {
    type: [Object],
    blackbox: true,
    optional: true,
    autoValue: function () {
      return TestList.find().fetch();
    }
  }
});
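A minimal usage sketch, assuming the schema is attached with the aldeed:collection2 package; the Students collection here is hypothetical:

Students = new Mongo.Collection('students');
Students.attachSchema(StudentSchema); // attachSchema comes from aldeed:collection2

// "tests" is filled from the current TestList contents by the autoValue.
// Note: autoValue also runs on updates unless guarded (e.g. with this.isInsert).
Students.insert({});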
WireMock logs that the following request was not matched:
WireMock : Request was not matched:
{
  "url" : "/api/accounts?username=defaultuser",
  "absoluteUrl" : "http://localhost:11651/api/accounts?username=defaultuser",
  "method" : "GET",
  "clientIp" : "127.0.0.1",
  "headers" : {
    "authorization" : "bearer test123",
    "accept" : "application/json, application/*+json",
    "user-agent" : "Java/1.8.0_121",
    "host" : "localhost:11651",
    "connection" : "keep-alive"
  },
  "cookies" : { },
  "browserProxyRequest" : false,
  "loggedDate" : 1500711718016,
  "bodyAsBase64" : "",
  "body" : "",
  "loggedDateString" : "2017-07-22T08:21:58Z"
}
Closest match:
{
  "urlPath" : "/api/accounts",
  "method" : "GET",
  "headers" : {
    "authorization" : {
      "matches" : "^bearer"
    },
    "accept" : {
      "equalTo" : "application/json, application/*+json"
    },
    "user-agent" : {
      "equalTo" : "Java/1.8.0_121"
    },
    "host" : {
      "matches" : "^localhost:[0-9]{5}"
    },
    "connection" : {
      "equalTo" : "keep-alive"
    }
  },
  "queryParameters" : {
    "username" : {
      "matches" : "^[a-zA-Z0-9]*$"
    }
  }
}
Is the problem caused by the difference between url and urlPath?
I also tried to specify absoluteUrl in the contract, but it is ignored; I guess that is because it is not defined in the Contract DSL.
The request side of the contract looks like this:
request {
    method 'GET'
    url('/api/accounts') {
        queryParameters {
            parameter('username', $(consumer(regex('^[a-zA-Z0-9]*$')), producer('defaultuser')))
        }
    }
    headers {
        header('authorization', $(consumer(regex('^bearer')), producer(execute('authClientBearer()'))))
        header('accept', $(consumer('application/json, application/*+json')))
        header('user-agent', $(consumer('Java/1.8.0_121')))
        header('host', $(consumer(regex('^localhost:[0-9]{5}'))))
        header('connection', $(consumer('keep-alive')))
    }
}
It turned out to be a missing / at the end of the URL in the contract/stub.
Not directly related to the question but for all who came here from Google:
In my case I was in the wrong scenario state.
More about scenario states here: http://wiremock.org/docs/stateful-behaviour/
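For reference, a hedged sketch of what a stateful stub mapping looks like in WireMock's JSON format; the scenario name and states below are made up for illustration:

{
  "scenarioName": "accounts",
  "requiredScenarioState": "Started",
  "newScenarioState": "account-fetched",
  "request": {
    "method": "GET",
    "urlPath": "/api/accounts"
  },
  "response": {
    "status": 200
  }
}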
If you have the same symptom, this may be your problem:
JSON configuration for a matching mvcMock example:
"request": {
"urlPath": "/hello?name=pavel",
"method": "GET",
..
}
And you can see in the log:
"/hello?name=pavel" | "/hello?name=pavel" - URL does not match
This is expected: the query string must not be part of urlPath. You have to change it to:
"request": {
"urlPath": "/hello",
"method": "GET",
"queryParameters": {
"name": {
"equalTo": "pavel"
}
},
..
}
Currently in my nightwatch.json I am set up fine for running on my Mac:
{
  "src_folders" : ["specs"],
  "output_folder" : "tests/e2e/reports",
  "custom_commands_path" : "",
  "custom_assertions_path" : "",
  "page_objects_path" : "",
  "globals_path" : "",
  "selenium" : {
    "start_process" : true,
    "server_path" : "bin/selenium-server-standalone-2.48.2.jar",
    "log_path" : "",
    "host" : "127.0.0.1",
    "port" : 4444,
    "cli_args" : {
      "webdriver.chrome.driver" : "bin/chromedriver 2",
      "webdriver.ie.driver" : ""
    }
  },
  "test_settings" : {
    "default" : {
      "launch_url" : "someurl",
      "selenium_port" : 4444,
      "selenium_host" : "localhost",
      "silent": true,
      "screenshots" : {
        "enabled" : false,
        "path" : ""
      },
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true
      }
    }
  }
}
However, on Windows the Chrome driver will need to be chromedriver.exe. What is the best-practice way of resolving this? Do I need two config files? I would prefer not to, since I would then need extra checks for this.
The solution is to use a nightwatch.conf.js file, i.e.:
module.exports = (function (settings) {
    // Set the chromedriver path at runtime so the same config
    // runs on different architectures.
    if (process.platform === "darwin") {
        settings.selenium.cli_args["webdriver.chrome.driver"] = "bin/chromedriver 2";
    } else if (process.platform === "win32" || process.platform === "win64") {
        // Note: Node reports "win32" on both 32-bit and 64-bit Windows.
        settings.selenium.cli_args["webdriver.chrome.driver"] = "bin/chromedriver.exe";
    }
    return settings;
})(require('./nightwatch.json'));
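With this in place a single command should work on both platforms; as far as I know, Nightwatch loads nightwatch.conf.js in preference to nightwatch.json when both are present:

nightwatch --env default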
I'm building a stack that needs access to a private S3 bucket to download the most current version of my application. I'm using IAM roles, a relatively new AWS feature that allows EC2 instances to be assigned specific roles, which are then coupled with IAM policies. Unfortunately, these roles come with temporary API credentials generated at instantiation. It's not crippling, but it's forced me to do things like this cloud-init script (simplified to just the relevant bit):
#!/bin/sh
# Grab our credentials from the meta-data and parse the response
CREDENTIALS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access)
S3_ACCESS_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['AccessKeyId'];")
S3_SECRET_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['SecretAccessKey'];")
S3_TOKEN=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['Token'];")
# Create an executable script to pull the file
cat << EOF > /tmp/pullS3.rb
require 'rubygems'
require 'aws-sdk'
AWS.config(
:access_key_id => "$S3_ACCESS_KEY",
:secret_access_key => "$S3_SECRET_KEY",
:session_token => "$S3_TOKEN")
s3 = AWS::S3.new()
myfile = s3.buckets['mybucket'].objects["path/to/my/file"]
File.open("/path/to/save/myfile", "w") do |f|
f.write(myfile.read)
end
EOF
# Downloading the file
ruby /tmp/pullS3.rb
First and foremost: This works, and works pretty well. All the same, I'd love to use CloudFormation's existing support for source access. Specifically, cfn-init supports the use of authentication resources to get at protected data, including S3 buckets. Is there any way to get at these keys from within cfn-init, or perhaps tie the IAM role to an authentication resource?
I suppose one alternative would be putting my source behind some other authenticated service, but that's not a viable option at this time.
Another promising lead is the AWS::IAM::AccessKey resource, but the docs don't suggest it can be used with roles. I'm going to try it anyway.
I'm not sure when support was added, but in the meantime you can use an IAM role to authenticate S3 downloads for the files and sources sections in AWS::CloudFormation::Init.
Just use roleName instead of accessKeyId & secretKey (see AWS::CloudFormation::Authentication for details), e.g.:
"Metadata": {
"AWS::CloudFormation::Init": {
"download": {
"files": {
"/tmp/test.txt": {
"source": "http://myBucket.s3.amazonaws.com/test.txt"
}
}
}
},
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myBucket" ],
"roleName": { "Ref": "myRole" }
}
}
}
Tested with aws-cfn-bootstrap-1.3-11
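For completeness, a hedged sketch of what the referenced myRole resource (plus the instance profile that attaches it to the EC2 instance) might look like; the names and the exact policy are assumptions for illustration, not part of the tested setup above:

"myRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": ["ec2.amazonaws.com"] },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Path": "/",
    "Policies": [{
      "PolicyName": "s3Read",
      "PolicyDocument": {
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::myBucket/*"
        }]
      }
    }]
  }
},
"myInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [{ "Ref": "myRole" }]
  }
}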
I managed to get this working. What I used was code from this exchange:
https://forums.aws.amazon.com/message.jspa?messageID=319465
The code doesn't use IAM policies; it uses an AWS::S3::BucketPolicy instead.
CloudFormation code snippet:
"Resources" : {
"CfnUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": { "Statement":[{
"Effect" : "Allow",
"Action" : [
"cloudformation:DescribeStackResource",
"s3:GetObject"
],
"Resource" :"*"
}]}
}]
}
},
"CfnKeys" : {
"Type" : "AWS::IAM::AccessKey",
"Properties" : {
"UserName" : {"Ref": "CfnUser"}
}
},
"BucketPolicy" : {
"Type" : "AWS::S3::BucketPolicy",
"Properties" : {
"PolicyDocument": {
"Version" : "2008-10-17",
"Id" : "CfAccessPolicy",
"Statement" : [{
"Sid" : "ReadAccess",
"Action" : ["s3:GetObject"],
"Effect" : "Allow",
"Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::<MY_BUCKET>/*"]]},
"Principal" : { "AWS": {"Fn::GetAtt" : ["CfnUser", "Arn"]} }
}]
},
"Bucket" : "<MY_BUCKET>"
}
},
"WebServer": {
"Type": "AWS::EC2::Instance",
"DependsOn" : "BucketPolicy",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"sources" : {
"/etc/<MY_PATH>" : "https://s3.amazonaws.com/<MY_BUCKET>/<MY_FILE>"
}
}
},
"AWS::CloudFormation::Authentication" : {
"S3AccessCreds" : {
"type" : "S3",
"accessKeyId" : { "Ref" : "CfnKeys" },
"secretKey" : {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]},
"buckets" : [ "<MY_BUCKET>" ]
}
}
},
"Properties": {
"ImageId" : "<MY_INSTANCE_ID>",
"InstanceType" : { "Ref" : "WebServerInstanceType" },
"KeyName" : {"Ref": "KeyName"},
"SecurityGroups" : [ "<MY_SECURITY_GROUP>" ],
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"# Helper function\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"# Install Webserver Packages etc \n",
"cfn-init -v --region ", { "Ref" : "AWS::Region" },
" -s ", { "Ref" : "AWS::StackName" }, " -r WebServer ",
" --access-key ", { "Ref" : "CfnKeys" },
" --secret-key ", {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]}, " || error_exit 'Failed to run cfn-init'\n",
"# All is well so signal success\n",
"cfn-signal -e 0 -r \"Setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
]]}}
}
}
Obviously replace MY_BUCKET, MY_FILE, MY_INSTANCE_ID, MY_PATH, and MY_SECURITY_GROUP with your values.