I would like to analyze individual ESQL modules for performance on IBM Integration Bus, rather than the whole application with PerfHarness. I know there are lists of good practices for writing ESQL (for example, this one - ESQL code tips).
But is there a tool for analyzing the performance of just one ESQL module?
You can check through your broker's Web User Interface. Just turn statistics on for your flow (the one containing your ESQL code) and it will show how much time the processing took in each node.
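If you prefer the command line, the same snapshot statistics can also be switched on with mqsichangeflowstats. A hedged sketch only; the exact flags vary by broker version, so check the command reference first:
mqsichangeflowstats <Node> -s -e <Server> -f <MessageFlowName> -c active -n basic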
I know this is rather old but it still covers the basics. https://www.ibm.com/developerworks/websphere/library/techarticles/0406_dunn/0406_dunn.html The section on "Isolate the problem using accounting and statistics" should answer your question. And the part on using trace should help you profile the statements within an ESQL module.
The trace file generated at the debug level shows you how long each statement took to execute, down to microsecond precision, helping you find the problematic statement or loop.
To get a trace file, do the following:
Step 1 - Start a user trace using the command below:
mqsichangetrace <Node> -u -e <Server> -f <MessageFlowName> -l debug -r
Step 2 - Send a message through the message flow.
Step 3 - Stop the trace using the MQSI command below:
mqsichangetrace <Node> -u -e <Server> -f "<Message Flow Name>" -l none
Step 4 - Read the trace content into a file:
mqsireadlog <Node> -u -e <Server> -f -o flowtrace.xml
Step 5 - Format the XML trace file into a user-readable format.
mqsiformatlog -i flowtrace.xml -o flowtrace.txt
Examine the text file.
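Putting the five steps together, a minimal shell sketch could look like this (the node, server, and flow names are placeholders for your own):
NODE=MYNODE
SERVER=default
FLOW=MyMessageFlow
mqsichangetrace $NODE -u -e $SERVER -f $FLOW -l debug -r
# send a test message through the message flow here
mqsichangetrace $NODE -u -e $SERVER -f $FLOW -l none
mqsireadlog $NODE -u -e $SERVER -f -o flowtrace.xml
mqsiformatlog -i flowtrace.xml -o flowtrace.txt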
I'm new to Spike and RISC-V. I'm trying to do some dynamic instruction tracing with Spike. The instructions come from a sample.c file. I have tried the following commands:
$ riscv64-unknown-elf-gcc simple.c -g -o simple.out
$ riscv64-unknown-elf-objdump -d --line-numbers -S simple.out
But these commands display the disassembled instructions in an output file, which is not what I want. I need to trace the dynamically executed instructions at runtime. I found only two relevant options among Spike's host options:
-g - track histogram of PCs
-l - generate a log of execution
I'm not sure whether either of these gives the result I described above.
Does anyone have an idea how to do the dynamic instruction trace in spike?
Thanks a lot!
Yes, you can call spike with -l to get a trace of all executed instructions.
Example:
$ spike -l --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello 2> ins.log
Note that this trace also contains all instructions executed by the proxy kernel (pk), not just those of your user program.
The trace can still be useful, e.g. you can search for the start address of your code (i.e. look it up in the objdump output) and consume the trace from there.
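For example, assuming your function of interest is main (the symbol name and the address in the grep pattern below are purely illustrative; use whatever objdump prints for your program), you could look up its address and then jump to the first occurrence in the log:
$ riscv64-unknown-elf-objdump -d simple.out | grep '<main>:'
$ grep -n '0x00000000000101a0' ins.log | head -n 1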
Also, when your program invokes a syscall you see something like this in the trace:
[.. inside your program ..]
core 0: 0x0000000000010088 (0x00000073) ecall
core 0: exception trap_user_ecall, epc 0x0000000000010088
core 0: 0x0000000080001938 (0x14011173) csrrw sp, sscratch, sp
[.. inside the pk ..]
sret
[.. inside your program ..]
That means you can skip over the syscall handling (the instructions executed inside the pk) by searching for the next sret.
Alternatively, you can call spike with -d to enter debug mode. Then you can set a breakpoint on the first instruction of interest in your program (until pc 0 YOURADDRESS - look up the address in the objdump output) and single step from there (by hitting return multiple times). See also the help screen by entering h at the spike prompt.
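A minimal debug-mode session following that recipe might look like this (the address is a placeholder for the one you looked up in the objdump output):
$ spike -d --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello
: until pc 0 0x101a0
(then hit return repeatedly to single-step, enter h for the help screen, or q to quit)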
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/test_instance
http_s_attack_type{hostname="test1",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 44
http_s_attack_type{hostname="other",scheme="tcp",src_ip="1.2.3.4",dst_ip="192.168.33.85",port="15557"} 123
EOF
Change the data and write again:
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/test_instance
http_s_attack_type{hostname="test2",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 55
http_s_attack_type{hostname="other3",scheme="tcp",src_ip="1.2.3.4",dst_ip="192.168.33.85",port="15557"} 14
EOF
Viewing the data on localhost:9091, only the last write is shown; the data written the first time has been overwritten.
Is there a problem with what I am doing? Please tell me how to keep pushing new data without it being overwritten or replaced.
This is working exactly as designed. The pushgateway is meant to hold the results of batch jobs when they exit, so on the next run the results will replace the previous run.
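For reference, the group that gets replaced is identified by the path of the push URL (the job name plus any additional labels such as instance), so pushing under a different grouping key keeps the earlier data as a separate group. A hedged sketch with an illustrative second instance label:
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/test_instance_2
http_s_attack_type{hostname="test2",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 55
EOF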
It sounds like you're trying to do event logging. Prometheus is not a suitable tool for that use case, you might want to consider something like the ELK stack instead.
I am testing MonetDB for columnar storage.
I have already installed and started the server,
but when I connect with the client and run a query, the response does not show the time taken to execute the query.
I am connecting as:
mclient -u monetdb -d voc
I also tried connecting in interactive mode, like:
mclient -u monetdb -d voc -i
Output example:
sql>select count(*) from voc.regions;
+---------+
|      L3 |
+=========+
| 5570699 |
+---------+
1 tuple
As mkersten mentioned, I would read through the options of the mclient utility first.
To get server and client timing measurements, I used the --timer=performance option when starting mclient.
Inside mclient, I would then disable the result output with \f trash, so the results are discarded when you only want to measure.
Prepend trace to your query and you get your results like this:
sql>\f trash
sql>trace select count(*) from categories;
sql:0.000 opt:0.266 run:1.713 clk:5.244 ms
sql:0.000 opt:0.266 run:2.002 clk:5.309 ms
The first of the two lines shows you the server timings, the second one the overall timing including passing the results back to the client.
If you use the latest version, MonetDB-Mar18, you have good control over the performance timers, which cover parsing, optimization, and run time at the server. See mclient --help.
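Putting that together with the connection from the question, the invocation would look like this:
mclient -u monetdb -d voc --timer=performance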
Hive query output, when UDFs are used, includes these two warnings at the end. How do I suppress these two warnings? Please note that the warnings come right after the output, as part of the output.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
hadoop version
Hadoop 2.6.0-cdh5.4.0
hive --version
Hive 1.1.0-cdh5.4.0
If you use Beeline instead of the Hive CLI, the warnings go away. Not the best solution, but I'm planning to post to the CDH user group asking the same question to see if it's a bug that can be fixed.
This occurs because an assembly jar was added which contains classes from jcl-over-slf4j.jar (which causes the stdout messages) and slf4j-log4j12.jar.
You can try a couple of things to begin with:
Try removing the assembly jar, if you are using one.
Look at the following link: https://issues.apache.org/jira/browse/HIVE-12179
This suggests that there is a flag in Hive so that the spark-assembly jar is loaded only if HIVE_ADD_SPARK_ASSEMBLY = "true".
https://community.hortonworks.com/questions/34311/warning-message-in-hive-output-after-upgrading-to.html :
There is also a workaround, if you want to avoid changing anything else: manually remove the two lines from the end of the output files using a shell script (see the sketch below).
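A minimal sketch of that workaround, assuming the two WARN lines are always the very last lines of the file (the file name is illustrative, and head -n -2 requires GNU coreutils):
head -n -2 query_output.txt > query_output_clean.txt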
I have tried setting HIVE_ADD_SPARK_ASSEMBLY=false, but it didn't work.
Finally, I found a question posted on the Cloudera community. See: https://community.cloudera.com/t5/Support-Questions/Warning-message-in-Hive-output-after-upgrading-to-hive/td-p/157141
You could try the following command; it works for me!
hive -S -d ns=$hiveDB -d tab=$t -d dunsCol=$c1 -d phase="$ph1" -d error=$c2 -d ts=$eColumnArray -d reporting_window=$rDate -f $dir'select_count.hsql' | grep -v "^WARN" > $gOutPut 2> /dev/null
On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
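Based on the flags listed above, you can combine them as needed; for example, to stream only new error messages:
parse logs prod -f -l ERROR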
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.