Is there a way to measure the time taken by a particular node in a TIBCO workflow process?
e.g. how much time did the JMS/Database node take to complete its operation?
The following applies to TIBCO BusinessWorks:
a) In TIBCO Administrator, you can see the time elapsed for each individual activity.
Service Instances > BW Process > Process Definitions.
Select each process after running it once and you will get an Execution count, Elapsed time and CPU time for each activity that ran.
b) If you are only interested in a single activity, you can add two mapper activities to the flow, one before and one after the node you want to measure, and assign each a value of tib:timestamp(). Their difference gives the elapsed time in milliseconds (a small sketch of the idea follows).
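As a rough illustration of what those two mappers compute (this is a Python sketch, not TIBCO code; the timed operation is just a stand-in for the JMS/Database activity):

    import time

    def run_with_timing(operation):
        # Record a timestamp before and after the operation and return the
        # result plus the elapsed milliseconds - the same idea as holding
        # tib:timestamp() values in the two mapper activities.
        start_ms = int(time.time() * 1000)    # "before" mapper
        result = operation()                  # the activity being measured
        end_ms = int(time.time() * 1000)      # "after" mapper
        return result, end_ms - start_ms

    # Example: measure a stand-in for the JMS/Database call.
    _, elapsed_ms = run_with_timing(lambda: time.sleep(0.25))
    print(f"Activity took {elapsed_ms} ms")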
You might also enable statistics in TIBCO Administrator for the deployed engine
(Engine Control tab -> Start Statistic Collection).
This will produce a CSV on local disk (the path is also displayed there) with details of elapsed time of all activities of the executed processes of your engine.
You can then use this data for more detailed analysis.
Related
I am working on a report to monitor certain things on Power BI Report Server. I was wondering what items others monitor on their reports and how they do it.
Some examples of things I want to monitor (a rough query sketch covering some of these follows the list):
A. Whether the scheduled data refreshes failed or succeeded.
Would love to be able to get the failure message.
B. What is the average response time of a query.
Is there a way to determine when the report is first opened? I would like to calculate the initial load time.
C. What was the longest response time of a query per day.
D. How many times a query took longer than 5 seconds.
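For points B, C and D, one common approach is to query the ExecutionLog3 view in the ReportServer catalog database, which records start and end times per report execution. The sketch below is only illustrative: the server name and the 5-second threshold are assumptions, and scheduled refresh failures (point A) are tracked in the PBIRS refresh history rather than in this view.

    # Sketch: pull execution timings from the ReportServer catalog with pyodbc.
    # The server/database names are placeholders; adjust for your environment.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=my-pbirs-server;DATABASE=ReportServer;Trusted_Connection=yes;"
    )

    query = """
    SELECT
        ItemPath,
        AVG(DATEDIFF(MILLISECOND, TimeStart, TimeEnd)) AS avg_ms,   -- B: average response time
        MAX(DATEDIFF(MILLISECOND, TimeStart, TimeEnd)) AS max_ms,   -- C: longest response time
        SUM(CASE WHEN DATEDIFF(MILLISECOND, TimeStart, TimeEnd) > 5000
                 THEN 1 ELSE 0 END) AS over_5s                      -- D: executions over 5 seconds
    FROM ExecutionLog3
    WHERE TimeStart >= CAST(GETDATE() AS DATE)                      -- today only
    GROUP BY ItemPath
    ORDER BY max_ms DESC;
    """

    for row in conn.cursor().execute(query):
        print(row.ItemPath, row.avg_ms, row.max_ms, row.over_5s)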
I'm brand new to Azure Data Factory. Previously I worked with SSIS and Pentaho. Recently I started using this tool to build some ETL, and I've noticed differences between the time values shown at the end of a run. What do they mean (Duration, Processing Time, Time), and in particular why is there such a big difference between Duration and Processing Time? Is this difference a standard preparation time for the tool, or something like that?
Regards.
The "Duration" time at the top of your screenshot is the end-to-end time for the pipeline activity. It takes into account all factors: marshaling your data flow script from ADF to the Spark cluster, cluster acquisition time, job execution, and I/O write time.
The bottom section of your screenshot is the amount of time Spark spent in that stage of your transformation logic, which is all in-memory data frames.
The write time is shown in the data flow execution plan in the Sink transformation and the cluster acquisition time is shown at the top.
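To make the relationship concrete, here is a rough decomposition with made-up numbers; the point is only that the pipeline-level Duration wraps cluster acquisition, the Spark stage time shown at the bottom, sink write time, and some orchestration overhead:

    # Hypothetical numbers (seconds), just to show why Duration > Processing Time.
    cluster_acquisition = 240   # acquiring/spinning up the Spark cluster
    stage_processing    = 95    # in-memory data frame work shown per stage
    sink_write          = 40    # I/O write time shown in the Sink transformation
    orchestration       = 10    # ADF marshaling the data flow script, bookkeeping

    duration = cluster_acquisition + stage_processing + sink_write + orchestration
    print(f"Duration ~ {duration}s vs Processing Time ~ {stage_processing}s")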
There are time model statistics in the AWR report. Is the parse time included in DB CPU time, or is it separate?
I've found that my database has a large parse time problem, and I would like to estimate the benefit that could be achieved by reducing the parse time.
thanks!
Time Model Statistics
DB Time represents total time in user calls
DB CPU represents CPU time of foreground processes
Total CPU Time represents foreground and background processes
Statistics including the word "background" measure background process time, therefore do not contribute to the DB time statistic
Ordered by % of DB time in descending order, followed by Statistic Name
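If you want to see the split on a live system rather than in the report, you can read the time model directly. This is a sketch assuming a python-oracledb connection with access to V$SYS_TIME_MODEL (values are in microseconds); as I understand the time model, the CPU portion of parsing is counted inside DB CPU, while "parse time elapsed" sits under DB time as its own (overlapping) bucket.

    # Sketch: read the relevant time model statistics from V$SYS_TIME_MODEL.
    # Connection details are placeholders.
    import oracledb

    conn = oracledb.connect(user="perfuser", password="...", dsn="dbhost/orclpdb")

    with conn.cursor() as cur:
        cur.execute(
            """SELECT stat_name, value
                 FROM v$sys_time_model
                WHERE stat_name IN ('DB time', 'DB CPU',
                                    'parse time elapsed', 'hard parse elapsed time')"""
        )
        values = {name: val for name, val in cur}

    db_time = values.get("DB time", 0)
    for name in ("DB time", "DB CPU", "parse time elapsed", "hard parse elapsed time"):
        usec = values.get(name, 0)
        pct = (usec / db_time * 100) if db_time else 0
        print(f"{name:<26} {usec / 1e6:>10.1f} s  ({pct:5.1f}% of DB time)")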
We have identified that DB CPU is taking most of the time.
Look at the "SQL ordered by CPU Time" section: it shows which queries are taking the most time, along with their execution counts. Select a query, run it through the SQL Tuning Advisor, and review its recommendations.
I hope this answers your question.
Is there any way to get a breakdown of the response time reported by JMeter, i.e.
Travel time of total request
processing time
Travel time of total response
I know JMeter works entirely on the client side, and the response time it reports is the TTLB. But is there any plugin, or any other means, to achieve this?
Thanks in advance.
You are asking for something JMeter alone cannot fully provide.
There is no plugin that will give you such a breakdown (getting the server's processing time is impossible unless you have monitoring agents installed on the target server, and monitoring agents are not part of JMeter as of now).
You can get an approximate request travel time by using the newer Connect Time feature of JMeter.
In practice,
Response time = processing time + latency
You can find latency with various network tools, or get a rough idea using ping (JMeter also reports latency; cross-verify it with ping or WANem).
Once you know the latency, you can derive the processing time.
I think you can get the breakdown you need from this; a small worked example follows.
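A tiny worked example of that subtraction, with made-up numbers:

    # Rough breakdown using the model above: response time = processing + latency.
    response_time_ms = 480      # elapsed time reported by JMeter for a sampler
    network_latency_ms = 120    # round-trip estimate from ping / WANem

    processing_time_ms = response_time_ms - network_latency_ms
    print(f"Approximate server processing time: {processing_time_ms} ms")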
1. Add these listeners to the thread group:
jp@gc - Composite Graph
jp@gc - Connect Times Over Time
jp@gc - Response Times Over Time
2. In the Composite Graph configuration, add Connect Times Over Time and Response Times Over Time.
3. Run the test and compare the two curves. The larger the difference between the two listeners, the more the bottleneck is at the network layer; the smaller the difference, the more it is at the server layer.
4. You can also view the specific numbers by adding a View Results in Table listener.
Server processing time = Latency - Connect Time
The larger this difference, the more the bottleneck is at the service layer; the smaller it is, the more the bottleneck is at the network layer.
Server processing time here covers program processing time, queue waiting time, database query time and so on. This method can confirm whether the response-time bottleneck is at the network layer or at the service layer; if it is at the service layer, further analysis is needed, so the term "server processing time" is somewhat imprecise.
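If you prefer to do that subtraction per sample instead of reading it off the graphs, here is a sketch that parses a JMeter CSV results file. It assumes the test was run with CSV output and with connect-time saving enabled (jmeter.save.saveservice.connect_time=true), so the file has the default elapsed, Latency and Connect columns; the file name is a placeholder.

    # Sketch: compute Latency - Connect per sample from a JMeter .jtl/.csv file.
    import csv

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            elapsed = int(row["elapsed"])    # full response time (TTLB)
            latency = int(row["Latency"])    # time to first byte
            connect = int(row["Connect"])    # TCP/TLS connect time
            server_side = latency - connect  # per the formula above
            print(f'{row["label"]}: elapsed={elapsed}ms latency={latency}ms '
                  f'connect={connect}ms server~{server_side}ms')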
I was trying to analyze an AWR report generated for a particular process over a one-hour window. I am trying to find out which query takes the most time while the process runs.
Going through the report, I can see SQL ordered by Gets, SQL ordered by CPU Time, SQL ordered by Executions, SQL ordered by Parse Calls, SQL ordered by Sharable Memory, SQL ordered by Elapsed Time, etc.
I can see the SQL Text from the table SQL ordered by Elapsed Time.
My question: Is this the right way to identify the expensive query ? Please advise in this regard.
Elapsed Time (s) SQL Text
19,477.05 select abc.....
7,644.04 select def...
"SQL ordered by Elapsed Time" includes the SQL statements that took significant execution time during processing. We have to look at Executions, Elapsed time per Exec (s), etc. along with Elapsed time when analyzing.
For example, a query with a low execution count and a high Elapsed time per Exec (s) could be a candidate for troubleshooting or optimization.
The best reference I found so far: http://www.dbas-oracle.com/2013/05/10-steps-to-analyze-awr-report-in-oracle.html
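Outside of an AWR report, you can get the same "elapsed time plus elapsed per execution" view from the shared SQL area. This is a sketch assuming a python-oracledb connection with SELECT privilege on V$SQL and 12c-style FETCH FIRST syntax; note it covers statements since they were loaded, not a specific AWR snapshot window.

    # Sketch: top statements by elapsed time, with elapsed per execution, from V$SQL.
    # Connection details are placeholders; elapsed_time is in microseconds.
    import oracledb

    conn = oracledb.connect(user="perfuser", password="...", dsn="dbhost/orclpdb")

    sql = """
    SELECT sql_id,
           executions,
           ROUND(elapsed_time / 1e6, 2)                         AS elapsed_s,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS elapsed_per_exec_s,
           SUBSTR(sql_text, 1, 60)                              AS sql_text
      FROM v$sql
     ORDER BY elapsed_time DESC
     FETCH FIRST 10 ROWS ONLY
    """

    with conn.cursor() as cur:
        for sql_id, execs, elapsed_s, per_exec_s, text in cur.execute(sql):
            print(f"{sql_id}  execs={execs}  elapsed={elapsed_s}s  "
                  f"per_exec={per_exec_s}s  {text}")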
AWR is used to check overall database health, so I don't think it is the right tool for tracing a single process.
You should use other tools such as sql_trace (with tkprof) or dbms_profiler, which concentrate on your own process.
If you use sql_trace, you need access to the server (or to ask the DBA team) to analyze the trace.
In "SQL ordered by Elapsed Time" you should always check queries with a low execution count and a high elapsed time; these are usually the problematic ones. Since elapsed time represents the total time spent on a query, a high elapsed time with few executions means that, for some reason, the query is not performing up to expectations.
There are some other parameters to check when investigating the issue.
A buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and do more) to get the data: essentially the time it would have taken had the block been in the buffer cache, plus the time actually taken to read it from the physical block.
If you suspect that excessive parsing is hurting your database’s performance:
• check the "Time Model Statistics" section (hard parse elapsed time, parse time elapsed, etc.)
• see if there are any signs of library cache contention in the top-5 events
• see if CPU is an issue.
Establishing a new database connection is also expensive (and even more expensive in case of audit or triggers).
“Logon storms” are known to create very serious performance problems.
If you suspect that high number of logons is degrading your performance, check “connection management elapsed time” in “Time model statistics”.
A low Soft Parse % indicates bind variable and versioning issues. With 99.25% for the soft parse, about 0.75% (100 minus the soft parse %) of parsing is hard parsing. A low hard parse rate is good for us.
If Latch Hit % is <99%, you may have a latch problem. Tune latches to reduce cache contention.
Library hit % is great when it is near 100%. If this was under 95% we would investigate the size of the shared pool.
If this ratio is low (a quick way to check it is sketched after this list), then we may need to:
• Increase the SHARED_POOL_SIZE init parameter.
• CURSOR_SHARING may need to be set to FORCE.
• SHARED_POOL_RESERVED_SIZE may be too small.
• Inefficient sharing of SQL, PLSQL or JAVA code.
• Insufficient use of bind variables
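The quick check mentioned above: a sketch assuming a python-oracledb connection with SELECT privilege on V$LIBRARYCACHE. If the ratio comes out below roughly 95%, the shared pool and bind variable points above are the first things to look at.

    # Sketch: overall library cache hit ratio from V$LIBRARYCACHE.
    # Connection details are placeholders.
    import oracledb

    conn = oracledb.connect(user="perfuser", password="...", dsn="dbhost/orclpdb")

    with conn.cursor() as cur:
        cur.execute("SELECT SUM(pins), SUM(pinhits) FROM v$librarycache")
        pins, pinhits = cur.fetchone()

    hit_ratio = (pinhits / pins * 100) if pins else 0
    print(f"Library cache hit ratio: {hit_ratio:.2f}%")
    # Below ~95%: revisit SHARED_POOL_SIZE, bind variables, CURSOR_SHARING.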