I have a very simple question that I've tried to research, but without satisfactory results.
Here is an example:
INFO [karma]: Karma server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 28.0 (Linux)]: Connected on socket id MIsxYm-yXOtkIlbXrkr4
INFO [Chrome 28.0 (Linux)]: Connected on socket id Ek6biR3iiKgej2a-rkr5
INFO [Firefox 21.0 (Linux)]: Connected on socket id OcDqEq-VZ5o7tCjNrkr6
Chrome 28.0 (Linux): Executed 2 of 2 SUCCESS (1.655 secs / 1.392 secs)
Chrome 28.0 (Linux): Executed 2 of 2 SUCCESS (2.131 secs / 1.659 secs)
Firefox 21.0 (Linux): Executed 2 of 2 SUCCESS (2.351 secs / 1.414 secs)
TOTAL: 6 SUCCESS
My question is: what exactly do these times (1.655 secs / 1.392 secs) mean?
Correct me if I chose a bad title for the question :)
The two numbers are totalTime / netTime. The total time is the wall-clock time from start to end, and the net time is the sum of the time spent in the individual tests.
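As a rough illustration with made-up numbers: if a run contains two specs that spend 0.9 s and 0.5 s actually executing, the net time would be 1.4 s, while the total time also includes per-run overhead such as launching the run and collecting results, so it could read something like 1.7 s.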
I'm trying to use Strapi with MySQL. The connection to the database is successful and the server also starts successfully.
But the admin page never loads.
Here is what my console displays:
λ strapi start
[2019-04-15T13:15:56.904Z] info File changed: E:\Workspace\ship-n-shore-api
s\users-permissions\config\actions.json
[2019-04-15T13:15:56.961Z] info Time: Mon Apr 15 2019 18:45:56 GMT+0530 (India Standard Time)
[2019-04-15T13:15:56.962Z] info Launched in: 5425 ms
[2019-04-15T13:15:56.963Z] info Environment: development
[2019-04-15T13:15:56.964Z] info Process PID: 5444
[2019-04-15T13:15:56.965Z] info Version: 3.0.0-alpha.25.2 (node v10.15.1)
[2019-04-15T13:15:56.966Z] info To shut down your server, press <CTRL> + C at any time
[2019-04-15T13:15:56.968Z] info ☄️ Admin panel: http://localhost:1337/admin
[2019-04-15T13:15:56.969Z] info ⚡️ Server: http://localhost:1337
[2019-04-15T13:17:08.546Z] debug GET index.html (34 ms)
[2019-04-15T13:17:08.579Z] debug GET vendor.dll.js (13 ms)
[2019-04-15T13:17:08.580Z] debug GET main.js (6 ms)
[2019-04-15T13:17:08.872Z] debug GET plugins.json (9 ms)
[2019-04-15T13:17:08.945Z] debug GET /admin/gaConfig (38 ms)
[2019-04-15T13:17:08.956Z] debug GET /admin/strapiVersion (9 ms)
[2019-04-15T13:17:08.962Z] debug GET /admin/currentEnvironment (3 ms)
[2019-04-15T13:17:08.972Z] debug GET /admin/layout (8 ms)
[2019-04-15T13:17:08.983Z] debug GET main.js (8 ms)
[2019-04-15T13:17:08.991Z] debug GET main.js (12 ms)
[2019-04-15T13:17:08.997Z] debug GET /favicon.ico (4 ms)
[2019-04-15T13:17:09.031Z] debug GET main.js (25 ms)
[2019-04-15T13:17:09.034Z] debug GET main.js (25 ms)
[2019-04-15T13:17:09.037Z] debug GET main.js (24 ms)
[2019-04-15T13:17:09.040Z] debug GET main.js (20 ms)
[2019-04-15T13:17:09.370Z] debug GET /content-type-builder/autoReload (28 ms)
[2019-04-15T13:17:09.404Z] debug GET /users-permissions/init (15 ms)
[2019-04-15T13:17:09.503Z] debug GET /settings-manager/autoReload (17 ms)
Edit:
I am using node v10.15.1.
Also note that I used the custom setup when creating a new strapi project.
I have set up a small cluster with Hadoop 2.7, HBase 0.98 and Nutch 2.3.1. I have written a custom job that first combines the documents of the same domain; after that, each URL of the domain (from a cached list) is taken, its key is used to fetch the corresponding object via datastore.get(url_key), and after updating the score it is written out via context.write.
The job should complete after all documents are processed, but what I have observed is that each attempt fails due to a timeout even though its progress shows 100 percent complete. Here is the log:
attempt_1549963404554_0110_r_000001_1 100.00 FAILED reduce > reduce node2:8042 logs Thu Feb 21 20:50:43 +0500 2019 Fri Feb 22 02:11:44 +0500 2019 5hrs, 21mins, 0sec AttemptID:attempt_1549963404554_0110_r_000001_1 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000001_3 100.00 FAILED reduce > reduce node1:8042 logs Fri Feb 22 04:39:08 +0500 2019 Fri Feb 22 07:25:44 +0500 2019 2hrs, 46mins, 35sec AttemptID:attempt_1549963404554_0110_r_000001_3 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000002_0 100.00 FAILED reduce > reduce node3:8042 logs Thu Feb 21 12:38:45 +0500 2019 Thu Feb 21 22:50:13 +0500 2019 10hrs, 11mins, 28sec AttemptID:attempt_1549963404554_0110_r_000002_0 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
Why is that? When an attempt is 100.00 percent complete, it should be marked as successful. Unfortunately, there is no error information other than the timeout in my case. How can I debug this problem?
My reducer is posted in another question:
Apache Nutch 2.3.1 map-reduce timeout occurred while updating the score
I have observed that, in the three logs mentioned, the execution times vary widely. Please take another look at the job you are executing.
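For illustration, here is a hedged sketch of the general pattern described in the question (the class and variable names are hypothetical, not the actual reducer). The point to note is that an attempt is killed with "Timed out after 1800 secs" when it fails to report status within mapreduce.task.timeout, regardless of the percent-complete figure, so a long per-URL loop should report progress explicitly:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical sketch, not the actual job: a per-domain reducer that performs one
// datastore lookup per URL. Without the context.progress() call, a loop that runs
// longer than mapreduce.task.timeout (1800 s here) is killed by the
// ApplicationMaster even though the reduce phase already shows 100.00 percent.
public class DomainScoreReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text domain, Iterable<Text> urls, Context context)
            throws IOException, InterruptedException {
        for (Text url : urls) {
            // ... datastore.get(url_key), update the score, context.write(...) as
            // described in the question; details omitted here ...

            // Report liveness so the ApplicationMaster does not time the attempt out.
            context.progress();
        }
    }
}

If progress reporting alone does not help, raising mapreduce.task.timeout for the job is the other common knob, but that only hides a reducer that is genuinely too slow.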
I installed Tomcat 9.0.14 on my system (Windows 10, Windows Server 2016 R2).
I have no issue starting the Tomcat service (it starts in 2-3 seconds).
However, it takes 1 minute to stop.
I thought one of my projects residing under webapps was taking time, so I removed all my projects, but the result is the same.
After that I emptied the webapps folder to check further; Tomcat still took 1 minute to stop.
I checked the log file and there are no errors. Tomcat is idle for 1 minute while stopping.
commons-daemon.log:
[2019-01-08 16:30:02] [info] [13948] Stopping service...
[2019-01-08 16:30:03] [info] [13948] Service stop thread completed.
[2019-01-08 16:31:03] [info] [ 1940] Run service finished.
[2019-01-08 16:31:03] [info] [ 1940] Commons Daemon procrun finished
catalina.log:
08-Jan-2019 16:30:02.399 INFO [Thread-6] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-nio-8080"]
08-Jan-2019 16:30:02.431 INFO [Thread-6] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-nio-8009"]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.catalina.core.StandardService.stopInternal Stopping service [Catalina]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["http-nio-8080"]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-8009"]
Is there any way I can reduce the stopping time of Tomcat 9?
In Tomcat 8 the stopping time was 3-5 seconds.
Any help is appreciated.
I was able to reproduce this by:
Downloading and extracting the apache-tomcat-9.0.14-windows-x64.zip
cd to apache-tomcat/bin
service.bat install
Starting the service is quick; stopping it is delayed by exactly 60 seconds.
This seems to be an issue in Tomcat, but the current development snapshot (trunk) changelog suggests it has already been fixed for the not-yet-released Tomcat 9.0.15+, without an explicit bug report assigned:
Tomcat 9.0.15 (markt) in development / Catalina:
Correct a bug exposed in 9.0.14 and ensure that the Tomcat terminates in a timely manner when running as a service. (markt)
We had the same problem with Tomcat v9.0.26. Tomcat took exactly 60 seconds to finish once you terminated the server. We tried hard to close and shut down everything in our application, and in the end we realized we had a ThreadPoolExecutor created via newCachedThreadPool(), and this cached pool has a keepAliveTime of 60 seconds.
So after terminating Tomcat, the thread pool waited 60 seconds to check whether its threads were still needed for reuse. Only after that time did it really shut down. So the solution was to shut down the cached thread pool when the application shuts down.
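A minimal sketch of that fix, assuming the pool is created by the web application itself (the listener and field names are hypothetical): shut the executor down when the servlet context is destroyed, so Tomcat does not have to wait out the pool's 60-second keepAliveTime.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical listener illustrating the fix: the cached thread pool is shut down
// when the web application stops, so its idle (non-daemon) worker threads do not
// linger for the pool's 60-second keepAliveTime and delay Tomcat's shutdown.
@WebListener
public class ExecutorShutdownListener implements ServletContextListener {

    // In the real application this pool would live in some service class.
    public static final ExecutorService POOL = Executors.newCachedThreadPool();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        POOL.shutdown();                    // stop accepting new tasks
        try {
            if (!POOL.awaitTermination(5, TimeUnit.SECONDS)) {
                POOL.shutdownNow();         // interrupt anything still running
            }
        } catch (InterruptedException e) {
            POOL.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}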
I am using Cloudera's CDH 5.12 VM, Spark v1.6, Kafka v0.10 (installed via yum), Python 2.6.6 and Scala 2.10.
Below is a simple Spark application that I am running. It takes events from Kafka and prints them after a map-reduce step.
from __future__ import print_function
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: kafka_wordcount.py <zk> <topic>", file=sys.stderr)
        exit(-1)

    sc = SparkContext(appName="PythonStreamingKafkaWordCount")
    ssc = StreamingContext(sc, 1)

    zkQuorum, topic = sys.argv[1:]
    kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})
    lines = kvs.map(lambda x: x[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
        .map(lambda word: (word, 1)) \
        .reduceByKey(lambda a, b: a+b)
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()
When I submit the above code using the following command (local), it runs fine:
spark-submit --master local[2] --jars /usr/lib/spark/lib/spark-examples.jar testfile.py <ZKhostname>:2181 <kafka-topic>
But when I submit the same code using the following command (YARN), it doesn't work:
spark-submit --master yarn --deploy-mode client --jars /usr/lib/spark/lib/spark-examples.jar testfile.py <ZKhostname>:2181 <kafka-topic>
Here is the log generated when running on YARN (cut short; the logs may differ from the Spark settings mentioned above):
INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.134.143
ApplicationMaster RPC port: 0
queue: root.cloudera
start time: 1515766709025
final status: UNDEFINED
tracking URL: http://quickstart.cloudera:8088/proxy/application_1515761416282_0010/
user: cloudera
40 INFO YarnClientSchedulerBackend: Application application_1515761416282_0010 has started running.
40 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53694.
40 INFO NettyBlockTransferService: Server created on 53694
53 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
54 INFO BlockManagerMasterEndpoint: Registering block manager quickstart.cloudera:56220 with 534.5 MB RAM, BlockManagerId(1, quickstart.cloudera, 56220)
07 INFO ReceiverTracker: Starting 1 receivers
07 INFO ReceiverTracker: ReceiverTracker started
07 INFO PythonTransformedDStream: metadataCleanupDelay = -1
07 INFO KafkaInputDStream: metadataCleanupDelay = -1
07 INFO KafkaInputDStream: Slide time = 10000 ms
07 INFO KafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
07 INFO KafkaInputDStream: Checkpoint interval = null
07 INFO KafkaInputDStream: Remember duration = 10000 ms
07 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@7137ea0e
07 INFO PythonTransformedDStream: Slide time = 10000 ms
07 INFO PythonTransformedDStream: Storage level = StorageLevel(false, false, false, false, 1)
07 INFO PythonTransformedDStream: Checkpoint interval = null
07 INFO PythonTransformedDStream: Remember duration = 10000 ms
07 INFO PythonTransformedDStream: Initialized and validated org.apache.spark.streaming.api.python.PythonTransformedDStream@de77734
10 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.8 KB, free 534.5 MB)
10 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.5 KB, free 534.5 MB)
20 INFO JobScheduler: Added jobs for time 1515766760000 ms
30 INFO JobScheduler: Added jobs for time 1515766770000 ms
40 INFO JobScheduler: Added jobs for time 1515766780000 ms
After this, the job just starts repeating the following lines (after the delay set by the streaming context) and doesn't print out Kafka's stream, whereas the job run with --master local and the exact same code does.
Interestingly, it prints the following line every time a Kafka event occurs (the screenshot was taken with increased Spark memory settings).
Note that:
Data is in Kafka and I can see it in the consumer console.
I have also tried increasing the executor's memory (3g) and the network timeout (800s), but with no success.
Can you see the application's stdout logs through the YARN Resource Manager UI?
Follow your YARN Resource Manager link (http://localhost:8088).
Find your application in the running applications list and follow the application's link (http://localhost:8088/application_1396885203337_0003/).
Open the "stdout : Total file length is xxxx bytes" link to see the log file in the browser.
Hope this helps.
In local mode the application runs on a single machine and you get to see all the prints in the code. When run on a cluster, everything is in distributed mode and runs on different machines/cores, so you will not be able to see those prints.
Try to get the logs generated by Spark using the command yarn logs -applicationId <applicationId>.
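For example, with the application ID that appears in the log above:

yarn logs -applicationId application_1515761416282_0010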
It's possible that your <ZKhostname> is an alias that is not defined on the YARN nodes, or is not resolved on the YARN nodes for other reasons.
I'm having trouble with a WebSocket client connecting to our CometD server. I can see it connects OK and the handshake is good. But after 160 ms, CometD decides the session has timed out and removes it.
05:45:45.597 [00003] INFO - canHandshake() result true
05:45:45.597 [00003] INFO - Websocket session-added : 86db64o1yu7v5elbwdkgg595e
05:45:45.597 [00003] INFO - Registering websocket session : session {id=86db64o1yu7v5elbwdkgg595e,cid=0,appId=GMCN01,rid=90C3301D-0295-633
05:45:45.598 [00003] INFO - Registering websocket session : 86db64o1yu7v5elbwdkgg595e for registrationId 90C3301D-0295-6336-351C-45C8884DD
05:45:45.598 [00003] INFO - storing session info for client 0 # with host http://10.200.1.87:8081/websocket/transmit?sId=86db64o1yu7v5elbw
05:45:45.605 [00003] INFO - << {minimumVersion=1.0, supportedConnectionTypes=[websocket, callback-polling, long-polling], successful=true,
05:45:45.606 [00003] INFO - < {minimumVersion=1.0, supportedConnectionTypes=[websocket, callback-polling, long-polling], successful=true,
05:45:46.135 [00004] INFO - > {connectionType=websocket, channel=/meta/connect, clientId=86db64o1yu7v5elbwdkgg595e} 86db64o1yu7v5elbwdkgg
05:45:46.136 [00004] INFO - >> {connectionType=websocket, channel=/meta/connect, clientId=86db64o1yu7v5elbwdkgg595e}
05:45:46.136 [00004] INFO - << {successful=true, advice={interval=0, reconnect=retry, timeout=30000}, channel=/meta/connect}
05:45:46.136 [00004] INFO - < {successful=true, advice={interval=0, reconnect=retry, timeout=30000}, channel=/meta/connect}
05:45:46.296 [00007] INFO - Removing session 86db64o1yu7v5elbwdkgg595e - last connect 160 ms ago, timed out: true <--- THIS IS VERY ODD
05:45:46.296 [00007] INFO - Websocket session-removed : (t/o=true) 86db64o1yu7v5elbwdkgg595e
My own test client appears to work OK, but perhaps only because I am closer to the servers and the delay is not as long. The client that is failing is in another region, but the logs do not indicate any latency. The 160 ms timeout seems far too small.
I'm using Java CometD 2.6.0 embedded with Jetty 8.1.12.
I'm thinking there is a timeout setting that is too small, but I am not sure which one controls this, or whether there are other reasons behind the timeout.
Has anyone else seen this, or can anyone explain why this is happening?
Embarrassingly, I found that ws.maxInterval was set to 25 instead of 25000. That was the issue.
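For context, a minimal sketch of where such a value is typically set when CometD is embedded in Jetty; only the ws.maxInterval parameter name comes from the finding above, the surrounding wiring is an assumption about this kind of setup.

import org.cometd.server.CometdServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

// Hedged sketch of an embedded CometD 2.x / Jetty 8 setup (not the actual server code).
public class CometdConfigSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8081);
        ServletContextHandler context =
                new ServletContextHandler(server, "/", ServletContextHandler.SESSIONS);

        ServletHolder cometd = new ServletHolder(CometdServlet.class);
        // Milliseconds a client may stay silent between /meta/connect messages before
        // the server expires its session. A value of 25 means 25 ms, which matches the
        // "Removing session ... timed out" lines above; 25000 is the intended 25 seconds.
        cometd.setInitParameter("ws.maxInterval", "25000");
        context.addServlet(cometd, "/cometd/*");

        server.start();
        server.join();
    }
}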