raven - sentry + django = No servers configured, and sentry not installed. Cannot send message

I have a sentry server that works ok.
raven test <dnstoserver> -> Sending a test message... success!
I have a dev machine with django 1.3 and raven 1.93.
In the django project I have this:
settings.py:
SENTRY_KEY=<secretkey>
SENTRY_DNS=<dnstoserver>
INSTALLED_APPS = (
'bar',
'foo',
'raven.contrib.django',
)
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'root': {
'level': 'WARNING',
'handlers': ['sentry'],
},
'formatters': {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
},
},
'handlers': {
'sentry': {
'level': 'ERROR',
'class': 'raven.contrib.django.handlers.SentryHandler',
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose'
}
},
'loggers': {
'django.db.backends': {
'level': 'ERROR',
'handlers': ['console'],
'propagate': False,
},
'raven': {
'level': 'DEBUG',
'handlers': ['console', 'sentry'],
'propagate': False,
},
},
}
Mind the absence of 'sentry' in INSTALLED_APPS. This is intentional, since sentry is the server and should not be on a client!
views.py (in a view):
import logging
logger = logging.getLogger("raven")
logger.error("test")
When I run the view I get on the console:
No servers configured, and sentry not installed. Cannot send message
Why, and how to fix?

Were you really setting SENTRY_DNS or SENTRY_DSN?
When you set SENTRY_DSN, the instantiation of the major config variables happens automatically (including SENTRY_SERVERS, SENTRY_PUBLIC_KEY, SENTRY_SECRET_KEY, and SENTRY_PROJECT).
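For example (a minimal sketch; the DSN below is a placeholder, not a real key):
# settings.py -- raven parses the public/secret keys, host, and project id
# out of this single URL and derives SENTRY_SERVERS, SENTRY_PUBLIC_KEY,
# SENTRY_SECRET_KEY, and SENTRY_PROJECT from it.
SENTRY_DSN = 'http://<public_key>:<secret_key>@<domain>:<port>/<project_id>'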

The problem was in the construction of the raven DjangoClient. It did not get passed any servers, and couldn't find a sentry install to steal that config from.
I added in the settings.py:
SENTRY_SERVERS=<dnstoserver>
Console now outputs this every time raven is called:
INFO 2012-06-21 05:33:19,831 base 4323 140735075462336 Configuring Raven for host: <dnstoserver>
But it works like a charm! Messages in sentry...
BTW, for all the undocumented settings take a look at raven.contrib.django.models.get_client().
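For example, a quick check in a Django shell (a sketch; that a raven 1.x client exposes its configured servers on a servers attribute is my assumption, so verify against the version you have installed):
from raven.contrib.django.models import get_client

client = get_client()
print(client.servers)  # the server list raven will actually send to (assumed attribute)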

I suggest using:
SENTRY_DSN = 'http://user:password@<domain>:<port>/<project_id>'
And in INSTALLED_APPS add:
'raven.contrib.django.raven_compat'
Also take a look at this guide:
http://code.fetzig.at/post/18607051916/sentry-and-raven-setup-for-django-projects
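Putting both suggestions together, a minimal settings.py sketch could look like this (the raven_compat handler path is an assumption based on later raven releases; check the version you have installed):
# settings.py -- minimal sketch, not a drop-in config.
SENTRY_DSN = 'http://<public_key>:<secret_key>@<domain>:<port>/<project_id>'

INSTALLED_APPS = (
    'bar',
    'foo',
    'raven.contrib.django.raven_compat',
)

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'root': {
        'level': 'WARNING',
        'handlers': ['sentry'],
    },
    'handlers': {
        'sentry': {
            'level': 'ERROR',
            # Handler path assumed for raven_compat-era releases.
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
    },
}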

Related

botium-cli emulator does not work with the rasa connector

When I try to start the emulator for my rasa bot, I get this error. The rasa endpoint url works perfectly fine when I send it a request with curl.
$ botium-cli emulator
Error: Start failed: undefined
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:170:25
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2955:19
at wrapper (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:268:20)
at iterateeCallback (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:413:21)
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:321:20
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2953:17
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:130:17
Here's the botium.json
{
"botium": {
"Capabilities": {
"PROJECTNAME": "My Botium Project",
"CONTAINERMODE": "rasa",
"RASA_MODE": "DIALOG_AND_NLU",
"SIMPLEREST_STRICT_SSL" : "false",
"RASA_ENDPOINT_URL": "https://example.com/webhooks/rest/webhook"
},
"Sources": {},
"Envs": {}
}
}
Here's the output from the emulator with the verbose flag.
It still does not work; maybe it has something to do with the way Botium connects to the bot?
$ botium-cli emulator --verbose
2021-03-05T08:17:32.230Z botium-cli Using Botium configuration file ./botium.json
2021-03-05T08:17:32.240Z botium-cli-emulator command options: {
_: [ 'emulator' ],
verbose: true,
v: true,
convos: [ '.' ],
C: [ '.' ],
config: './botium.json',
c: './botium.json',
ui: 'console',
'$0': '..\\..\\AppData\\Roaming\\npm\\node_modules\\botium-cli\\bin\\botium-cli.js'
}
2021-03-05T08:17:32.573Z botium-core-BotDriver Loaded Botium configuration files C:\Users\shivam.sinha\Desktop\rasa check\botium.json
2021-03-05T08:17:33.085Z botium-core-BotDriver Build - Botium Core Version: 1.11.1
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Capabilites: {
PROJECTNAME: 'My Botium Project',
TESTSESSIONNAME: 'Botium Test Session',
TESTCASENAME: 'Botium Test Case',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_PING_UPDATE_CONTEXT: true,
SIMPLEREST_STOP_RETRIES: 6,
SIMPLEREST_STOP_TIMEOUT: 10000,
SIMPLEREST_STOP_VERB: 'GET',
SIMPLEREST_START_RETRIES: 6,
SIMPLEREST_START_TIMEOUT: 10000,
SIMPLEREST_START_VERB: 'GET',
SIMPLEREST_POLL_VERB: 'GET',
SIMPLEREST_POLL_INTERVAL: 1000,
SIMPLEREST_POLL_UPDATE_CONTEXT: true,
SIMPLEREST_METHOD: 'GET',
SIMPLEREST_IGNORE_EMPTY: true,
SIMPLEREST_TIMEOUT: 10000,
SIMPLEREST_EXTRA_OPTIONS: {},
SIMPLEREST_STRICT_SSL: false,
SIMPLEREST_INBOUND_UPDATE_CONTEXT: true,
SIMPLEREST_CONTEXT_MERGE_OR_REPLACE: 'MERGE',
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_HASHEADERS: true,
SCRIPTING_CSV_SKIP_HEADER: true,
SCRIPTING_CSV_QUOTE: '"',
SCRIPTING_CSV_ESCAPE: '"',
SCRIPTING_NORMALIZE_TEXT: true,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_ENABLE_MULTIPLE_ASSERT_ERRORS: false,
SCRIPTING_MATCHING_MODE: 'wildcardIgnoreCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_UTTEXPANSION_NAMING_MODE: 'justLineTag',
SCRIPTING_UTTEXPANSION_NAMING_UTTERANCE_MAX: '16',
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
ASSERTERS: [],
LOGIC_HOOKS: [],
USER_INPUTS: [],
SECURITY_ALLOW_UNSAFE: true,
CONTAINERMODE: 'rasa',
RASA_MODE: 'DIALOG_AND_NLU',
RASA_ENDPOINT_URL: 'https://10.60.31.102:8080/webhooks/rest/webhook',
CONFIG: './botium.json'
}
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Sources : { LOCALPATH: '.', GITPATH: 'git', GITBRANCH: 'master', GITDIR: '.' }
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Envs : { IS_BOTIUM_CONTAINER: true }
2021-03-05T08:17:33.274Z botium-connector-PluginConnectorContainer-helper Botium plugin botium-connector-rasa loaded. Plugin version is 0.0.7
Error: Start failed: undefined
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:170:25
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2955:19
at wrapper (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:268:20)
at iterateeCallback (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:413:21)
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:321:20
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2953:17
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:130:17
The RASA_ENDPOINT_URL capability has to point to the base URL of your Rasa server, without the /webhooks/rest/webhook path. Botium builds up its own full endpoint URL based on the RASA_MODE capability:
NLU_INPUT => /model/parse is appended
REST_INPUT => /webhooks/rest/webhook/ is appended
{
"botium": {
"Capabilities": {
"PROJECTNAME": "Botium Project Rasa NLU",
"CONTAINERMODE": "rasa",
"RASA_MODE": "NLU_INPUT",
"RASA_ENDPOINT_URL": "https://demo.botiumbox.com/rasa-demo/"
}
}
}

Configuration for log driver logfile

I've implemented Nightwatch.js in my project and it's off to a good start. However, something I don't like is that the chromedriver and geckodriver are placing a log file in my root directory. I'd much prefer this to move to a logging location.
firefox: {
disable_error_log: true,
desiredCapabilities: {
silent: true,
browserName: 'firefox',
alwaysMatch: {
acceptInsecureCerts: true,
'moz:firefoxOptions': {
args: []
}
}
},
webdriver: {
start_process: true,
server_path: GeckoDriver.path,
cli_args: []
}
},
chrome: {
disable_error_log: true,
desiredCapabilities: {
silent: true,
browserName: 'chrome',
chromeOptions: {
args: []
}
},
webdriver: {
log_path: false,
start_process: true,
server_path: ChromeDriver.path,
cli_args: [
]
}
}
Right now the configuration is as above. Two questions here about the logging:
what setting do you use to turn it on or off?
what setting do you use to change the location and/or file name of the log?
I'm running Nightwatch on a Mac machine, and in my nightwatch.conf.js file I have a selenium object that has a log_path property. The path is currently logs/, but I just tried removing the path and putting false as the value, and that worked for me. If you want to turn logging off, set log_path to false; otherwise, set it to the name of a directory (mine is logs/). As for changing the name of the selenium-server.log file, I don't think you can, unless someone has created an npm module that extends Nightwatch.
selenium: {
"start_process": true,
"server_path": "bin/selenium-server-standalone-3.9.1.jar",
"log_path": false,
"port": 4444,
"cli_args": {
"webdriver.chrome.driver": this.chromePath
}
}
link for nightwatch docs
I also had the same issue. I solved it by setting the value of log_path in the webdriver object of each driver.
You can make the change like this:
chrome: {
disable_error_log: true,
desiredCapabilities: {
silent: true,
browserName: 'chrome',
chromeOptions: {
args: []
}
},
webdriver: {
start_process: true,
server_path: ChromeDriver.path,
log_path: 'log_folder', // add this line to every webdriver object
cli_args: [
]
}
}
You can turn off driver log generation by setting log_path to false. If you want a specific folder to store the driver log file, set log_path to the desired location.

typeProviders with private GKE clusters

I am new to Google Cloud Deployment Manager (GCM) and am writing some code in order to practice. I have read some interesting articles detailing how I can use deploymentmanager.v2beta.typeProvider to extend GCM and use it to configure Kubernetes objects themselves as additional deployment resources. This is very appealing behavior and seems to open up great opportunities to extend declarative automation to any API, which is cool.
I am attempting to create a private node/public endpoint GKE cluster that is managed by custom typeProvider resources which correspond to GKE API calls. It seems that a public node/public endpoint GKE cluster is the only configuration that supports GCM custom typeProviders, and this seems wrong considering that a private node/public endpoint GKE configuration is possible.
It seems odd that deploymentmanager.v2beta.typeProvider would not support a private node/public endpoint GKE configuration.
as an aside...
I feel a private node/private endpoint cluster with a Cloud Endpoint exposing it to the GCM typeProvider (which requires a public API endpoint) should also be a valid architecture, but I have yet to test it.
I am using the following code:
def GenerateConfig(context):
    # Some constant-type vars that are not really constants
resources = []
outputs = []
gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
extension_prefix = 'k8s'
api_version = 'v1'
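    # Initial auth for the new cluster's master (basic auth plus client
    # certificate issuance); fed to masterAuth in the cluster resource below.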
kube_initial_auth = {
'username': 'luis',
'password': 'letmeinporfavors',
"clientCertificateConfig": {
'issueClientCertificate': True
}
}
# EXTEND API TO CONTROL KUBERNETES BEGIN
kubernetes_exposed_apis = [
{
'name': '{}-{}-api-v1-type'.format(
context.env['deployment'],
extension_prefix
),
'endpoint': 'api/v1'
},
{
'name': '{}-{}-apps-v1-type'.format(
context.env['deployment'],
extension_prefix
),
'endpoint': 'apis/apps/v1'
},
{
'name': '{}-{}-rbac-v1-type'.format(
context.env['deployment'],
extension_prefix
),
'endpoint': 'apis/rbac.authorization.k8s.io/v1'
},
{
'name': '{}-{}-v1beta1-extensions-type'.format(
context.env['deployment'],
extension_prefix
),
'endpoint': 'apis/extensions/v1beta1'
}
]
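    # Build one typeProvider per exposed k8s API group; each descriptorUrl
    # points at the swagger endpoint of the cluster created further down.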
for exposed_api in kubernetes_exposed_apis:
descriptor_url = 'https://{}/swaggerapi/{}'.format(
'$(ref.{}-k8s-cluster.endpoint)'.format(
context.env['deployment']
),
exposed_api['endpoint']
)
resources.append(
{
'name': exposed_api['name'],
'type': gcp_type_provider,
'properties': {
'options': {
'validationOptions': {
'schemaValidation': 'IGNORE_WITH_WARNINGS'
},
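                    # These inputMappings inject the k8s object name and an
                    # OAuth bearer token into every call routed through the
                    # typeProvider.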
'inputMappings': [
{
'fieldName': 'name',
'location': 'PATH',
'methodMatch': '^(GET|DELETE|PUT)$',
'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
},
{
'fieldName': 'metadata.name',
'location': 'BODY',
'methodMatch': '^(PUT|POST)$',
'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
},
{
'fieldName': 'Authorization',
'location': 'HEADER',
'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
}
],
},
'descriptorUrl': descriptor_url
},
}
)
# EXTEND API TO CONTROL KUBERNETES END
# NETWORK DEFINITION BEGIN
resources.append(
{
'name': "{}-network".format(context.env['deployment']),
'type': "compute.{}.network".format(api_version),
'properties': {
'description': "{} network".format(context.env['deployment']),
'autoCreateSubnetworks': False,
'routingConfig': {
'routingMode': 'REGIONAL'
}
},
}
)
resources.append(
{
'name': "{}-subnetwork".format(context.env['deployment']),
'type': "compute.{}.subnetwork".format(api_version),
'properties': {
'description': "{} subnetwork".format(
context.env['deployment']
),
'network': "$(ref.{}-network.selfLink)".format(
context.env['deployment']
),
'ipCidrRange': '10.64.1.0/24',
'region': 'us-east1',
'privateIpGoogleAccess': True,
'enableFlowLogs': False,
}
}
)
# NETWORK DEFINITION END
# EKS DEFINITION BEGIN
resources.append(
{
'name': "{}-k8s-cluster".format(context.env['deployment']),
'type': "container.{}.cluster".format(api_version),
'properties': {
'zone': 'us-east1-b',
'cluster': {
'description': "{} kubernetes cluster".format(
context.env['deployment']
),
'privateClusterConfig': {
'enablePrivateNodes': False,
'masterIpv4CidrBlock': '10.0.0.0/28'
},
'ipAllocationPolicy': {
'useIpAliases': True
},
'nodePools': [
{
'name': "{}-cluster-pool".format(
context.env['deployment']
),
'initialNodeCount': 1,
'config': {
'machineType': 'n1-standard-1',
'oauthScopes': [
'https://www.googleapis.com/auth/compute',
'https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/logging.write',
'https://www.googleapis.com/auth/monitoring'
],
},
'management': {
'autoUpgrade': False,
'autoRepair': True
}
}],
'masterAuth': kube_initial_auth,
'loggingService': 'logging.googleapis.com',
'monitoringService': 'monitoring.googleapis.com',
'network': "$(ref.{}-network.selfLink)".format(
context.env['deployment']
),
'clusterIpv4Cidr': '10.0.0.0/14',
'subnetwork': "$(ref.{}-subnetwork.selfLink)".format(
context.env['deployment']
),
'enableKubernetesAlpha': False,
'resourceLabels': {
'purpose': 'expiramentation'
},
'networkPolicy': {
'provider': 'CALICO',
'enabled': True
},
'initialClusterVersion': 'latest',
'enableTpu': False,
}
}
}
)
outputs.append(
{
'name': '{}-cluster-endpoint'.format(
context.env['deployment']
),
'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
context.env['deployment']
),
}
)
# EKS DEFINITION END
# bring it all together
template = {
'resources': resources,
'outputs': outputs
}
# give it to google
return template
if __name__ == '__main__':
    # A plain dict has no `env` attribute, so GenerateConfig({}) would crash;
    # a minimal stub stands in for the Deployment Manager context here.
    class _StubContext(object):
        env = {'deployment': 'local-test', 'project': 'local-test-project'}
    GenerateConfig(_StubContext())
Also, I will note the subsequent hello-world template, which uses the typeProviders created above.
def current_config():
'''
get the current configuration
'''
return {
'name': 'atrium'
}
def GenerateConfig(context):
resources = []
conf = current_config()
resources.append(
{
'name': '{}-svc'.format(conf['name']),
'type': "{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods".format(
context.env['project'],
conf['name'],
'{namespace}'
),
'properties': {
'namespace': 'default',
'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {
'name': 'hello-world',
},
'spec': {
'restartPolicy': 'Never',
'containers': [
{
'name': 'hello',
'image': 'ubuntu:14.04',
'command': ['/bin/echo', 'hello', 'world'],
}
]
}
}
}
)
template = {
'resources': resources,
}
return template
if __name__ == '__main__':
    # Same local smoke-test stub as above: a bare dict has no `env` attribute.
    class _StubContext(object):
        env = {'deployment': 'local-test', 'project': 'local-test-project'}
    GenerateConfig(_StubContext())
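As an aside, the type string in the resource above follows the pattern <project>/<typeProvider-name>:<collection-path>, which matches the expanded form visible in the deployment output below (atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods). A trivial sketch of that assembly, with values copied from this example:
# Illustration only: how the custom type reference above is put together.
project = 'atrium-244423'  # context.env['project']
type_provider = 'atrium-k8s-api-v1-type'
collection = '/api/v1/namespaces/{namespace}/pods'
print('{}/{}:{}'.format(project, type_provider, collection))
# atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods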
If I leave enablePrivateNodes as False
'privateClusterConfig': {
'enablePrivateNodes': False,
'masterIpv4CidrBlock': '10.0.0.0/28'
}
I get this as a response
~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-svc atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods COMPLETED []
~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s
This is a good response: my custom typeProvider resources are created correctly using the APIs of my freshly created cluster.
If I make this cluster have private nodes, however, with
'privateClusterConfig': {
'enablePrivateNodes': True,
'masterIpv4CidrBlock': '10.0.0.0/28'
},
I fail with
~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
reason: ERROR_EXCLUDED_IP'
The 10.0.0.2 seems to be the private endpoint of my cluster. I am having a difficult time tracking down where I can override the https://10.0.0.2:443/api/v1/namespaces/default/pods URL host such that it would attempt to contact the publicEndpoint rather than the privateEndpoint.
If this call were to go out to the public endpoint, I believe it would create successfully. As an interesting aside, the typeProvider declarations in the descriptorUrl do attempt to hit the publicEndpoint of the cluster and are successful in doing so. However, despite this indication, the creation of the actual API resources, such as the hello world example, attempts to interface with the private endpoint.
I feel this behavior should be overridable somewhere, but I am failing to find this clue.
I have tried both a working public node configuration and a non-working private node configuration.
I ran into the same problem writing [this](https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging). While trying to debug it I ran into two issues:
The public endpoint is used initially to fetch the swagger definition of the various k8s APIs and resources; however, included in that response is the private endpoint for the API server, which causes the k8s type to try to use that IP.
While trying to further debug that error, plus a new one I was hitting, I discovered that GKE 1.14.x and later do not support insecure calls to the k8s API, which causes the k8s type to fail even with fully public clusters.
Since the functionality seems to fail with newer versions of GKE, I just stopped trying to debug it. I'd recommend reporting an issue on the GitHub repo for it, though; this functionality would be great to keep going.

How to run tests programmatically in Nightwatch.js?

I'm using Nightwatch.js and trying to run E2E tests using the programmatic API as described here.
Here is my nightwatch.json file:
{
"src_folders": ["tests"],
"webdriver": {
"start_process": true,
"server_path": "node_modules/.bin/chromedriver",
"port": 9515
},
"test_settings": {
"default": {
"desiredCapabilities": {
"browserName": "chrome"
}
}
}
}
and index.js script:
const Nightwatch = require('nightwatch');
Nightwatch.runTests(require('./nightwatch.json')).then(() => {
console.log('All tests have passed!');
}).catch(err => {
console.log(err);
});
When I run the script I get the error:
Error: An error occurred while retrieving a new session: "Connection refused to 127.0.0.1:9515". If the Webdriver/Selenium service is managed by Nightwatch, check if "start_process" is set to "true".
I feel it needs some configuration but the documentation isn't very helpful here.

Common tests are excluded in suite configuration of protractor

I can see that the common tests are excluded in a suite configuration of Protractor. Below is my config.js, and there are two scenarios configured in suites.
I'm expecting the test to complete scenario1 successfully and then log in again as part of scenario2. But I can see the test ignores 'Login.js', 'CustomerSelection.js', and 'Create.js' of scenario2 and proceeds directly with 'ProductSelection.js'.
Any idea why that is so? Am I missing anything in conf.js to make it work the way the scenarios are configured?
Config.js:
exports.config = {
seleniumAddress: 'http://localhost:4444/wd/hub',
capabilities: {
'browserName': 'chrome'
},
framework: 'jasmine' ,
showColors: true,
suites : {
scenario1: [
'Login.js',
'CustomerSelection.js',
'Create.js',
'View.js',
],
scenario2: [
'Login.js',
'CustomerSelection.js',
'Create.js',
'ProductSelection.js',
]
},
jasmineNodeOpts: {
isVerbose: true,
showColors: true,
print: function () {
},
includeStackTrace: true,
defaultTimeoutInterval: 700000
},
onPrepare: function() {
browser.manage().window().maximize();
browser.manage().timeouts().implicitlyWait(5000);
}
};
Below are the versions I'm using:
protractor: Version 5.4.0
Jasmine: Version 3.2.0
Node: v8.11.1
NPM: Version 5.6.0
If these tests are something that you always run before everything else, you can place them into View.js and ProductSelection.js as part of beforeAll. I place login in beforeAll (loginPage is the page object where my functions live; Login() is a function in loginPage that logs into the application if you pass it the correct username and password) like this:
beforeAll(function() {
loginPage.Login(username, password);
});
