I am connecting to an externally hosted MongoDB server from a Heroku app. I need to test the latency between my Heroku app and the MongoDB server. I ran Heroku bash, but the 'ping' command is not available there. My only goal is to measure the latency between Heroku and the MongoDB server.
Try this (needs Node.js): https://www.npmjs.com/package/tcp-ping
$ heroku run bash
Running `bash` attached to terminal... up, run.9040
~ $ npm install tcp-ping
tcp-ping@0.1.1 node_modules/tcp-ping
~ $ node
> var tcpp = require('tcp-ping');
undefined
> tcpp.ping({ address: 'www.heroku.com', port: 80 }, function(err, data) {
... console.log(data);
... });
undefined
> { address: 'www.heroku.com',
port: 80,
attempts: 10,
avg: 10.4436728,
max: 31.421943,
min: 4.133464,
results:
[ { seq: 0, time: 31.421943 },
{ seq: 1, time: 7.204108 },
{ seq: 2, time: 10.878877 },
{ seq: 3, time: 13.744017 },
{ seq: 4, time: 4.133464 },
{ seq: 5, time: 7.970543 },
{ seq: 6, time: 9.550277 },
{ seq: 7, time: 7.120228 },
{ seq: 8, time: 6.797261 },
{ seq: 9, time: 5.61601 } ] }
undefined
So, first install tcp-ping:
~ $ npm install tcp-ping
Then, copy and paste these lines into the Node REPL:
var tcpp = require('tcp-ping');
tcpp.ping({ address: 'www.heroku.com', port: 80 }, function(err, data) {
console.log(data);
});
I'm using the https://www.npmjs.com/package/live-stream-radio module, but I occasionally get an error when using it. How can I solve it? My code:
const NodeMediaServer = require('node-media-server');
const config = {
rtmp: {
port: 1935,
chunk_size: 60000,
gop_cache: true,
ping: 30,
ping_timeout: 60
},
http: {
port: 8080,
allow_origin: '*',
mediaroot: 'F:/VMediaServer/mediaroot'
},
trans: {
  ffmpeg: 'C:/ffmpeg/bin/ffmpeg.exe',
  tasks: [
    {
      app: 'live',
      ac: 'aac',
      vc: 'libx264',
      hls: true,
      hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
      dash: true,
      dashFlags: '[f=dash:window_size=3:extra_window_size=5]'
    }
  ]
}
};
var nms = new NodeMediaServer(config)
nms.run();
The default Nightwatch.js output consumes one line per passed test. For example,
✔ Testing if element <body> contains text 'lecture is so boring' (12ms)
✔ Testing if element <button[id=btn1]> is visible (6ms)
✔ Testing if element <button[id=btn]> is visible (10ms)
✔ Testing if element <#fill1> is visible (19ms)
✔ Testing if element <#fill2> is visible (18ms)
✔ Testing if element <#fill3> is visible (20ms)
Currently Nightwatch consumes far more of my screen than the rest of my build output combined. On the Linux CLI, appending
|grep -v "✔"
to the command mostly alleviates that, but it has the disadvantage of stripping colored text (even with --color=always) when calling Nightwatch from gulp. Is there a configuration option or command-line parameter to minimize output and show only failed tests?
Ideally, if all tests pass, I would prefer just one line of output, but that may be asking too much. Even with the grep above, the following output remains, which is still too much for my taste.
[Nightwatch] Test Suite
=======================
ℹ Connected to localhost on port 4444 (1133ms).
Using: firefox (91.0.1) on linux 4.19.0-18-amd64 platform.
Running: Demo of Quiz
OK. 51 assertions passed. (3.498s)
I'd really just like it to say
Nightwatch: 51 assertions passed. (3.498s)
or similar.
My config file:
// Autogenerated by Nightwatch
// Refer to the online docs for more details: https://nightwatchjs.org/gettingstarted/configuration/
const Services = {}; loadServices();
module.exports = {
// An array of folders (excluding subfolders) where your tests are located;
// if this is not specified, the test source must be passed as the second argument to the test runner.
src_folders: ['src/ts/test/functional'],
// See https://nightwatchjs.org/guide/working-with-page-objects/
page_objects_path: '',
// See https://nightwatchjs.org/guide/extending-nightwatch/#writing-custom-commands
custom_commands_path: '',
// See https://nightwatchjs.org/guide/extending-nightwatch/#writing-custom-assertions
custom_assertions_path: '',
// See https://nightwatchjs.org/guide/#external-globals
globals_path : '',
webdriver: {},
test_settings: {
default: {
disable_error_log: false,
launch_url: 'https://nightwatchjs.org',
screenshots: {
enabled: false,
path: 'screens',
on_failure: true
},
desiredCapabilities: {
browserName : 'firefox'
},
webdriver: {
start_process: true,
server_path: (Services.geckodriver ? Services.geckodriver.path : '')
}
},
firefox: {
desiredCapabilities : {
browserName : 'firefox',
alwaysMatch: {
// Enable this if you encounter unexpected SSL certificate errors in Firefox
// acceptInsecureCerts: true,
'moz:firefoxOptions': {
args: [
// '-headless',
// '-verbose'
],
}
}
},
webdriver: {
start_process: true,
port: 4444,
server_path: (Services.geckodriver ? Services.geckodriver.path : ''),
cli_args: [
// very verbose geckodriver logs
// '-vv'
]
}
},
chrome: {
desiredCapabilities : {
browserName : 'chrome',
chromeOptions : {
// This tells Chromedriver to run using the legacy JSONWire protocol (not required in Chrome 78)
// w3c: false,
// More info on Chromedriver: https://sites.google.com/a/chromium.org/chromedriver/
args: [
//'--no-sandbox',
//'--ignore-certificate-errors',
//'--allow-insecure-localhost',
//'--headless'
]
}
},
webdriver: {
start_process: true,
port: 9515,
server_path: (Services.chromedriver ? Services.chromedriver.path : ''),
cli_args: [
// --verbose
]
}
},
//////////////////////////////////////////////////////////////////////////////////
// Configuration for when using the browserstack.com cloud service |
// |
// Please set the username and access key by setting the environment variables: |
// - BROWSERSTACK_USER |
// - BROWSERSTACK_KEY |
// .env files are supported |
//////////////////////////////////////////////////////////////////////////////////
browserstack: {
selenium: {
host: 'hub-cloud.browserstack.com',
port: 443
},
// More info on configuring capabilities can be found on:
// https://www.browserstack.com/automate/capabilities?tag=selenium-4
desiredCapabilities: {
'bstack:options' : {
local: 'false',
userName: '${BROWSERSTACK_USER}',
accessKey: '${BROWSERSTACK_KEY}',
}
},
disable_error_log: true,
webdriver: {
keep_alive: true,
start_process: false
}
},
'browserstack.chrome': {
extends: 'browserstack',
desiredCapabilities: {
browserName: 'chrome',
chromeOptions : {
// This tells Chromedriver to run using the legacy JSONWire protocol
// More info on Chromedriver: https://sites.google.com/a/chromium.org/chromedriver/
w3c: false
}
}
},
'browserstack.firefox': {
extends: 'browserstack',
desiredCapabilities: {
browserName: 'firefox'
}
},
'browserstack.ie': {
extends: 'browserstack',
desiredCapabilities: {
browserName: 'IE',
browserVersion: '11.0',
'bstack:options' : {
os: 'Windows',
osVersion: '10',
local: 'false',
seleniumVersion: '3.5.2',
resolution: '1366x768'
}
}
},
//////////////////////////////////////////////////////////////////////////////////
// Configuration for when using the Selenium service, either locally or remote, |
// like Selenium Grid |
//////////////////////////////////////////////////////////////////////////////////
selenium: {
// Selenium Server is running locally and is managed by Nightwatch
selenium: {
start_process: true,
port: 4444,
server_path: (Services.seleniumServer ? Services.seleniumServer.path : ''),
cli_args: {
'webdriver.gecko.driver': (Services.geckodriver ? Services.geckodriver.path : ''),
'webdriver.chrome.driver': (Services.chromedriver ? Services.chromedriver.path : '')
}
}
},
'selenium.chrome': {
extends: 'selenium',
desiredCapabilities: {
browserName: 'chrome',
chromeOptions : {
w3c: false
}
}
},
'selenium.firefox': {
extends: 'selenium',
desiredCapabilities: {
browserName: 'firefox',
'moz:firefoxOptions': {
args: [
// '-headless',
// '-verbose'
]
}
}
}
}
};
function loadServices() {
try {
Services.seleniumServer = require('selenium-server');
} catch (err) {}
try {
Services.chromedriver = require('chromedriver');
} catch (err) {}
try {
Services.geckodriver = require('geckodriver');
} catch (err) {}
}
Edit: just to mark this as answered:
You need to add detailed_output: false to your Nightwatch config file.
I have MongoDB on Windows, so there is no logrotate or anything similar. The log consumes 175 GB per week! I need to cut this down a lot.
Currently db.getProfilingLevel() returns 0 and db.getLogComponents() returns -1 for all components, and still I get almost 2000 of these bad boys a minute:
2018-06-25T15:44:59.653+0200 I COMMAND [conn2355] command mydb.LimitStubs command: find { find: "LimitStubs", filter: { Limit: "asdl;" }, skip: 0, noCursorTimeout: false } planSummary: IXSCAN { Limit: 1, Holder: 1 } keysExamined:0 docsExamined:0 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:129 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } protocol:op_query 0ms
Any suggestions?
For example, assume we have a stream like the following:
Stream 1 | -1-2-3-1-2-3--4-----------
After debouncing, I would like the emitted stream to look as follows:
Stream 2 | ---------------1-2-3--4------
There are lots of examples of how to debounce a stream, but they treat all values as the same trigger.
The following is example code I found on the Reactive Extensions website:
var Rx = require('rxjs/Rx');
var times = [
{ value: 1, time: 100 },
{ value: 2, time: 200 },
{ value: 3, time: 300 },
{ value: 1, time: 400 },
{ value: 2, time: 500 },
{ value: 3, time: 600 },
{ value: 4, time: 800 }
];
// Delay each item by time and project value;
var source = Rx.Observable.from(times)
.flatMap(function (item) {
return Rx.Observable
.of(item.value)
.delay(item.time);
})
.debounceTime(500 /* ms */);
var subscription = source.subscribe(
function (x) {
console.log('Next: %s', x);
},
function (err) {
console.log('Error: %s', err);
},
function () {
console.log('Completed');
});
The console output would be
Next: 4
Completed
But I would like to get the following output:
Next: 1
Next: 2
Next: 3
Next: 4
Completed
Maxime gave a good answer.
I also tried it myself; hope this helps someone with the same question.
var Rx = require('rxjs/Rx');
var times = [
{ value: 1, time: 100 },
{ value: 2, time: 200 },
{ value: 3, time: 300 },
{ value: 1, time: 400 },
{ value: 2, time: 500 },
{ value: 3, time: 600 },
{ value: 4, time: 800 },
{ value: 5, time: 1500 }
];
// Delay each item by time and project value;
var source = Rx.Observable.from(times)
.flatMap(function (item) {
return Rx.Observable
.of(item.value)
.delay(item.time);
})
.do(obj => console.log('stream 1:', obj, 'at', Date.now() - startTime, `ms`))
.groupBy(obj => obj)
.flatMap(group => group.debounceTime(500));

let startTime = Date.now();
var subscription = source.subscribe(
function (x) {
console.log('stream 2: %s', x, 'at', Date.now() - startTime, 'ms');
},
function (err) {
console.log('Error: %s', err);
},
function () {
console.log('Completed');
});
The console will output
stream 1: 1 at 135 ms
stream 1: 2 at 206 ms
stream 1: 3 at 309 ms
stream 1: 1 at 409 ms
stream 1: 2 at 509 ms
stream 1: 3 at 607 ms
stream 1: 4 at 809 ms
stream 2: 1 at 911 ms
stream 2: 2 at 1015 ms
stream 2: 3 at 1109 ms
stream 2: 4 at 1310 ms
stream 1: 5 at 1510 ms
stream 2: 5 at 1512 ms
Completed
Here's the code I propose:
const { Observable } = Rx
const objs = [
{ value: 1, time: 100 },
{ value: 2, time: 200 },
{ value: 3, time: 300 },
{ value: 1, time: 400 },
{ value: 2, time: 500 },
{ value: 3, time: 600 },
{ value: 4, time: 800 }
];
const tick$ = Observable.interval(100)
const objs$ = Observable.from(objs).zip(tick$).map(x => x[0])
objs$
.groupBy(obj => obj.value)
.mergeMap(group$ =>
group$
.debounceTime(500))
.do(obj => console.log(obj))
.subscribe()
And the output is just as expected:
Here's a working Plunkr with demo
https://plnkr.co/edit/rEI8odCrhp7GxmlcHglx?p=preview
Explanation:
I tried to make a small diagram:
The thing is, you cannot use debounceTime directly on the main observable (that's why you only got one value). You've got to group the values into their own streams with the groupBy operator and apply debounceTime to each split group of values (as I tried to show in the image). Then use flatMap or mergeMap to merge everything back into one final stream.
Docs:
Here are some pages that might help you understand:
- groupBy
- debounceTime
- mergeMap
Is there a way to monitor the input and output throughput of a Spark cluster, to make sure the cluster is not flooded and overflowed by incoming data?
In my case, I set up Spark cluster on AWS EC2, so I'm thinking of using AWS CloudWatch to monitor the NetworkIn and NetworkOut for each node in the cluster.
But this approach seems inaccurate: network traffic does not represent only Spark's incoming data; other traffic would be counted as well.
Is there a tool or way to monitor streaming-data status specifically for a Spark cluster? Or is there already a built-in tool in Spark that I missed?
Update: Spark 1.4 has been released; monitoring on port 4040 is significantly enhanced with a graphical display.
Spark has a configurable metrics subsystem.
By default it publishes a JSON version of the registered metrics at <driver>:<port>/metrics/json. Other metrics sinks, like Ganglia, CSV files, or JMX, can be configured.
You will need some external monitoring system that collects metrics on a regular basis and helps you make sense of them. (N.B. We use Ganglia, but there are other open-source and commercial options.)
Spark Streaming publishes several metrics that can be used to monitor the performance of your job. To calculate throughput (records processed per unit of time), you would combine:
lastReceivedBatch_records / (lastReceivedBatch_processingEndTime - lastReceivedBatch_processingStartTime)
The timestamps are epoch milliseconds, so this yields records per millisecond; for example, 1,000 records processed in 44 ms is roughly 22,700 records per second.
For all supported metrics, have a look at StreamingSource.
Example: start a local REPL with Spark 1.3.1 and execute a trivial streaming application:
import org.apache.spark.streaming._

// A streaming context with 10-second batches, built on the REPL's SparkContext
val ssc = new StreamingContext(sc, Seconds(10))

// A queue of integers, mapped to a queue of single-element RDDs
val queue = scala.collection.mutable.Queue(1, 2, 3, 45, 6, 6, 7, 18, 9, 10, 11)
val q = queue.map(elem => sc.parallelize(Seq(elem)))

// Consume the queue as a DStream and print each batch
val dstream = ssc.queueStream(q)
dstream.print()
ssc.start()
One can then GET localhost:4040/metrics/json, which returns:
{
version: "3.0.0",
gauges: {
local-1430558777965.<driver>.BlockManager.disk.diskSpaceUsed_MB: {
value: 0
},
local-1430558777965.<driver>.BlockManager.memory.maxMem_MB: {
value: 2120
},
local-1430558777965.<driver>.BlockManager.memory.memUsed_MB: {
value: 0
},
local-1430558777965.<driver>.BlockManager.memory.remainingMem_MB: {
value: 2120
},
local-1430558777965.<driver>.DAGScheduler.job.activeJobs: {
value: 0
},
local-1430558777965.<driver>.DAGScheduler.job.allJobs: {
value: 6
},
local-1430558777965.<driver>.DAGScheduler.stage.failedStages: {
value: 0
},
local-1430558777965.<driver>.DAGScheduler.stage.runningStages: {
value: 0
},
local-1430558777965.<driver>.DAGScheduler.stage.waitingStages: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_processingDelay: {
value: 44
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_processingEndTime: {
value: 1430559950044
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_processingStartTime: {
value: 1430559950000
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_schedulingDelay: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_submissionTime: {
value: 1430559950000
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastCompletedBatch_totalDelay: {
value: 44
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastReceivedBatch_processingEndTime: {
value: 1430559950044
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastReceivedBatch_processingStartTime: {
value: 1430559950000
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastReceivedBatch_records: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.lastReceivedBatch_submissionTime: {
value: 1430559950000
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.receivers: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.retainedCompletedBatches: {
value: 2
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.runningBatches: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.totalCompletedBatches: {
value: 2
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.totalProcessedRecords: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.totalReceivedRecords: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.unprocessedBatches: {
value: 0
},
local-1430558777965.<driver>.Spark shell.StreamingMetrics.streaming.waitingBatches: {
value: 0
}
},
counters: { },
histograms: { },
meters: { },
timers: { }
}
I recommend using https://spark.apache.org/docs/latest/monitoring.html#metrics with Prometheus (https://prometheus.io/).
Metrics generated by Spark's metrics system can be scraped by Prometheus, which offers a UI as well. Prometheus is a free tool.