I am trying to create an Elasticsearch index and am getting a timeout error. The commented-out code works fine, so the server is running and is pingable. Here is my code:
import * as elasticsearch from "elasticsearch";

export const elasticClient = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'trace',
  requestTimeout: 30000
});

// elasticClient.ping({
//   requestTimeout: 30000,
// }, function (error) {
//   if (error) {
//     console.error('elasticsearch cluster is down!');
//   } else {
//     console.log('All is well');
//   }
// });

// elasticClient.cluster.health({}, function (err: any, resp: any) {
//   console.log("-- Client Health --", resp);
// });

elasticClient.indices.create({
  index: 'grr'
}, function (err, resp, status) {
  if (err) {
    console.log(err);
  } else {
    console.log("create", resp);
  }
});
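For reference, elasticsearch.js also returns a promise when no callback is passed, so the equivalent promise form of the create call would be (just a sketch of the same request, not a fix):

// Equivalent promise form of the create call above (elasticsearch.js
// returns a promise when no callback is supplied).
elasticClient.indices.create({ index: 'grr' })
  .then(resp => console.log('create', resp))
  .catch(err => console.log(err));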
Here is the error message:
Elasticsearch INFO: 2017-06-17T02:36:12Z
Adding connection to http://localhost:9200/
Elasticsearch DEBUG: 2017-06-17T02:36:12Z
starting request {
"method": "PUT",
"path": "/grr",
"query": {}
}
{ Error: Request Timeout after 30000ms
at C:\Users\myname\myproject\node_modules\elasticsearch\src\lib\transport.js:342:15
at Timeout.<anonymous> (C:\Users\myname\myproject\node_modules\elasticsearch\src\lib\transport.js:371:7)
at ontimeout (timers.js:365:14)
at tryOnTimeout (timers.js:237:5)
at Timer.listOnTimeout (timers.js:207:5)
status: 408,
displayName: 'RequestTimeout',
message: 'Request Timeout after 30000ms' }
Elasticsearch TRACE: 2017-06-17T02:36:42Z
-> PUT http://localhost:9200/grr
<- 0
I am really at a loss as to what's going on. I'm using version 13.1.0 of the elasticsearch.js library.
Edit:
I've also tried using a curl statement (taken directly from the official site and modified to work in a Windows command prompt) to create the index, and I'm having a similar issue, except it doesn't time out; it just goes into an infinite loop.
It ended up being my Symantec Antivirus! I uninstalled it and everything works normally.
Currently, I see an error status for all authentication errors, which feels like a lot of extra noise in the total errors chart. I looked at https://github.com/DataDog/dd-trace-js/pull/909 and tried to use the custom execute hook provided for graphql:
import ddTrace from 'dd-trace'

let tracer = ddTrace.init({
  debug: false
}) // initialized in a different file to avoid hoisting.

tracer.use('graphql', {
  hooks: {
    execute: (span, args, res) => {
      if (res && res.errors && res.errors[0] && res.errors[0].status !== 403) {
        span?.setTag('error', res.errors)
      }
    }
  }
})

export default tracer
But a res containing only a 403 error still ends up with an error status. How can I achieve this?
Update: I found this bit of code in the tracing client repo:
tracer.use('graphql', {
  hooks: {
    execute: (span, args, res) => {
      if (res?.errors?.[0]?.status === 403) { // assuming "status" is a number
        span?.setTag('error', null) // remove any error set by the tracer
      }
    }
  }
})
https://github.com/DataDog/dd-trace-js/issues/1249
Maybe it will help.
Old message:
Never mind, it seems my solution only works for express; graphql doesn't support that property.
You probably want to just modify the validateStatus property in the http module:
Callback function to determine if there was an error. It should take a status code as its only parameter and return true for success or false for errors
https://datadoghq.dev/dd-trace-js/interfaces/plugins.http.html#validatestatus
As an example, you should be able to mark 403s as non-errors with something like this:
const tracer = require('dd-trace').init();

tracer.use('express', {
  // treat anything below 400, plus 403, as a success
  validateStatus: code => code < 400 || code === 403
})
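If the spans in question come from the http plugin rather than express, the same validateStatus option is available there too (that is what the link above documents); a sketch:

const tracer = require('dd-trace').init();

// Same idea on the http plugin: treat 403 responses as successes
// so they don't count towards the error status.
tracer.use('http', {
  validateStatus: code => code < 400 || code === 403
})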
I'm trying to connect to a Postgres database from my Heroku Node app. It works when running locally, both through node and via the heroku local web command, but when running on Heroku it times out while waiting for pool.connect.
I'm running the following code snippet through the Heroku console (I've also tried using this code in my app directly, but this is more efficient than redeploying each time):
node -e "
const { Pool } = require('pg');

const pool = new Pool({
  connectionTimeoutMillis: 15000,
  connectionString: process.env.DATABASE_URL + '?sslmode=require',
  ssl: {
    rejectUnauthorized: true
  }
});
console.log('pool created');

(async () => {
  try {
    console.log('connecting');
    const client = await pool.connect(); // this never resolves
    console.log('querying');
    const { rows } = await client.query('SELECT * FROM test_table LIMIT 1;');
    console.log('query success', rows);
    client.release();
  } catch (error) {
    console.log('query error', error);
  }
})()
"
Things I've tried so far:
Using the pg Client instead of Pool (see the sketch below)
Using ssl: true instead of ssl: { rejectUnauthorized: true }
Using client.query without pool.connect
Increasing and omitting connectionTimeoutMillis (it resolves quickly when running locally, since I'm querying a database that has just one row)
Using callbacks and promises instead of async / await
Setting the connectionString both with and without the ?sslmode=require parameter
Using pg versions ^7.4.1 and ^7.18.2 so far
My assumption is that I'm missing something in either the Heroku setup or the SSL configuration. Any help would be greatly appreciated, thanks!
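For reference, the pg Client variant from the list above looks roughly like this (a sketch; same DATABASE_URL and SSL options as the Pool version):

// Sketch of the pg Client variant mentioned in the list above,
// using the same DATABASE_URL and SSL options as the Pool version.
const { Client } = require('pg');

const client = new Client({
  connectionTimeoutMillis: 15000,
  connectionString: process.env.DATABASE_URL + '?sslmode=require',
  ssl: { rejectUnauthorized: true }
});

(async () => {
  try {
    console.log('connecting');
    await client.connect(); // never resolves on Heroku, same as pool.connect above
    console.log('querying');
    const { rows } = await client.query('SELECT * FROM test_table LIMIT 1;');
    console.log('query success', rows);
  } catch (error) {
    console.log('query error', error);
  } finally {
    await client.end();
  }
})();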
I'm trying to use the nightwatch-accessibility library, but keep getting this error:
POST /session/b4e18278544c74b9213c030b8119ee7e/timeouts/async_script - ECONNREFUSED
Error: connect ECONNREFUSED 127.0.0.1:9515
Error while running .setTimeoutsAsyncScript() protocol action: An unknown error has occurred.
POST /session/b4e18278544c74b9213c030b8119ee7e/execute_async - ECONNREFUSED
Error: connect ECONNREFUSED 127.0.0.1:9515
Error while running .executeScriptAsync() protocol action: An unknown error has occurred.
Normal tests work fine, and as far as I can tell I am following the example correctly. The test assertions work; the error just appears at the end of the test run.
nightwatch.json
{
  "src_folders": ["test"],
  "page_objects_path": "page-objects",
  "globals_path": "./globals.js",
  "custom_commands_path": ["./node_modules/nightwatch-accessibility/commands"],
  "custom_assertions_path": ["./node_modules/nightwatch-accessibility/assertions"],
  "end_session_on_fail": false,
  "skip_testcases_on_fail": false,
  "selenium": {
    "start_process": false
  },
  "webdriver": {
    "start_process": true,
    "server_path": "node_modules/chromedriver/lib/chromedriver/chromedriver.exe",
    "port": 9515
  },
  "test_settings": {
    "default": {
      "webdriver.port": 9515,
      "desiredCapabilities": {
        "browserName": "chrome"
      }
    }
  }
}
globals.js
const chromedriver = require('chromedriver');

module.exports = {
  before: function (done) {
    chromedriver.start();
    done();
  },
  after: function (done) {
    chromedriver.stop();
    done();
  }
};
First test
module.exports = {
  '@tags': ['accessibility'],
  'First test': function (browser) {
    browser
      .url(`http://www.google.com`)
      .pause(3000)
      .initAccessibility()
      .assert.accessibility('html', {
        verbose: true
      })
      .end();
  }
};
I'm executing the tests by typing nightwatch in the terminal, like I would for other tests. Any ideas? And is this the best accessibility assertion library for NightwatchJS?
I ended up using nightwatch-axe-verbose instead. Usage details are included in this blog post on web accessibility testing using Nightwatch.
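Roughly, a test with it looks like this (a sketch; the exact custom_commands_path and the axeRun options shown are assumptions, so check the package README):

// Sketch of a test using nightwatch-axe-verbose. Assumes the package's commands
// folder has been added to custom_commands_path in nightwatch.json, e.g.
// "custom_commands_path": ["./node_modules/nightwatch-axe-verbose/src/commands"]
module.exports = {
  '@tags': ['accessibility'],
  'axe accessibility check': function (browser) {
    browser
      .url('http://www.google.com')
      .axeInject() // injects axe-core into the page
      .axeRun('html', {
        runOnly: ['color-contrast', 'image-alt'] // example axe-core rule IDs
      })
      .end();
  }
};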
I'm using Nightwatch.js and trying to run E2E tests using the programmatic API as described here.
Here is my nightwatch.json file:
{
  "src_folders": ["tests"],
  "webdriver": {
    "start_process": true,
    "server_path": "node_modules/.bin/chromedriver",
    "port": 9515
  },
  "test_settings": {
    "default": {
      "desiredCapabilities": {
        "browserName": "chrome"
      }
    }
  }
}
and the index.js script:
const Nightwatch = require('nightwatch');

Nightwatch.runTests(require('./nightwatch.json')).then(() => {
  console.log('All tests have passed!');
}).catch(err => {
  console.log(err);
});
When I run the script I get the error:
Error: An error occurred while retrieving a new session: "Connection refused to 127.0.0.1:9515". If the Webdriver/Selenium service is managed by Nightwatch, check if "start_process" is set to "true".
I feel it needs some configuration, but the documentation isn't very helpful here.
I'm trying to establish a connection between Couchbase Server and Couchbase Sync Gateway on Mac OS:
$ ../sync_gateway
==== Couchbase Sync Gateway/1.0.4(34;04138fd) ====
Configured Go to use all 2 CPUs; `setenv GOMAXPROCS` to override this
Opening db /sync_gateway as bucket "sync_gateway", pool "default", server <walrus:>
Opening Walrus database sync_gateway on <walrus:>
Using default sync function `'channel(doc.channels)'` for database "sync_gateway"
Starting profile server on
Starting admin server on 127.0.0.1:4985
Starting server on :4984 ...
I created a config.json file and am trying to connect the gateway to that server, but it's not working; by default it still goes to 127.0.0.1:4985.
Can anyone help me out?
Add the following config value to your config.json file:
"adminInterface":"[YOUR_PREFERRED_IP_FOR_ADMIN_INTERFACE]:4985",
The adminInterface field lets sync_gateway know which IP:PORT to run the admin interface on.
Also, you need to tell sync_gateway where to send the REST calls for a bucket (todos in the sample below). As shown in the example, you can do that by adding "server": "http://[COUCHBASE_SERVER_IP]:8091" to the config for that database.
So:
You will call the admin REST APIs on [YOUR_PREFERRED_IP_FOR_ADMIN_INTERFACE]:4985.
And sync_gateway will send REST calls for CRUD operations on the "todos" bucket to [COUCHBASE_SERVER_IP]:8091.
Here is a sample config file:
{
  "interface": "192.168.1.117:4984",
  "adminInterface": "127.0.0.1:4985",
  "log": ["CRUD", "REST+", "Access"],
  "facebook": { "register": true },
  "databases": {
    "todos": {
      "server": "http://[COUCHBASE_SERVER_IP]:8091",
      "users": {
        "GUEST": {"disabled": true}
      },
      "sync": `
        function(doc, oldDoc) {
          // NOTE this function is the same across the iOS, Android, and PhoneGap versions.
          if (doc.type == "task") {
            if (!doc.list_id) {
              throw({forbidden : "Items must have a list_id"})
            }
            channel("list-"+doc.list_id);
          } else if (doc.type == "list") {
            channel("list-"+doc._id);
            if (!doc.owner) {
              throw({forbidden : "List must have an owner"})
            }
            if (oldDoc) {
              var oldOwnerName = oldDoc.owner.substring(oldDoc.owner.indexOf(":")+1);
              requireUser(oldOwnerName)
            }
            var ownerName = doc.owner.substring(doc.owner.indexOf(":")+1);
            access(ownerName, "list-"+doc._id);
            if (Array.isArray(doc.members)) {
              var memberNames = [];
              for (var i = doc.members.length - 1; i >= 0; i--) {
                memberNames.push(doc.members[i].substring(doc.members[i].indexOf(":")+1))
              };
              access(memberNames, "list-"+doc._id);
            }
          } else if (doc.type == "profile") {
            channel("profiles");
            var user = doc._id.substring(doc._id.indexOf(":")+1);
            if (user !== doc.user_id) {
              throw({forbidden : "Profile user_id must match docid : " + user + " : " + doc.user_id})
            }
            requireUser(user);
            access(user, "profiles"); // TODO this should use roles
          }
        }
      `
    }
  }
}
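With that in place, start sync_gateway against the file (assuming it is saved as config.json next to the binary):
$ ./sync_gateway config.json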