Does anybody have a simple pprof example for a Go executable? - go

I have looked at the article about profiling Go programs, and I simply do not understand it. Does someone have a simple code example where the performance of a code snippet is logged to a text file by a profile "object"?

Here are the commands I use for simple CPU and memory profiling, to get you started.
Let's say you made a benchmark function like this:
File something_test.go:
func BenchmarkProfileMe(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // execute the significant portion of the code you want to profile
    }
}
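For instance, a complete, runnable version could look like this (fib is only a hypothetical workload standing in for your own code):
package something

import "testing"

// fib is a deliberately slow hypothetical workload; replace it with the code you want to profile.
func fib(n int) int {
    if n < 2 {
        return n
    }
    return fib(n-1) + fib(n-2)
}

func BenchmarkProfileMe(b *testing.B) {
    // run the workload b.N times so the benchmark framework can time it
    for i := 0; i < b.N; i++ {
        fib(25)
    }
}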
In a shell script:
# -test.run XXX is a trick so you don't trigger other tests: it asks for a non-existent test literally named XXX
# you can adapt -benchtime depending on the type of code you want to profile
go test -v -bench ProfileMe -test.run XXX -cpuprofile cpu.pprof -memprofile mem.pprof -benchtime 10s
go tool pprof --text ./something.test cpu.pprof ## To get a CPU profile per function
go tool pprof --text --lines ./something.test cpu.pprof ## To get a CPU profile per line
go tool pprof --text ./something.test mem.pprof ## To get the memory profile
Each of these will print the hottest spots to the console.
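If you would rather have an explicit profile "object" in your own program writing to a file, as the question asks, here is a minimal sketch using the standard runtime/pprof package (workload is just a hypothetical stand-in for your own code):
package main

import (
    "log"
    "os"
    "runtime/pprof"
)

// workload is a hypothetical placeholder for the code you actually want to profile.
func workload() {
    sum := 0
    for i := 0; i < 50000000; i++ {
        sum += i
    }
    _ = sum
}

func main() {
    // the profile is written into this file
    f, err := os.Create("cpu.pprof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // collect CPU samples into cpu.pprof until StopCPUProfile is called
    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    workload()
}
You can then inspect cpu.pprof with the same go tool pprof --text command as above, pointing it at your compiled binary instead of the test binary.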

Related

How to generate flamegraphs from macOS process samples?

Anyone have a clean process for converting samples on macOS to FlameGraphs?
After a bit of fiddling I thought I could perhaps use a tool such as flamegraph-sample, but it gives me some trouble, so I thought there may be other, more up-to-date options that I'm missing, given that this tool produces an error:
$ sudo sample PID -file ~/tmp/sample.txt -fullPaths 1
Sampling process 198 for 1 second with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Sample analysis of process 35264 written to file ~/tmp/sample.txt
$ python stackcollapse-sample.py ~/tmp/sample.txt > ~/tmp/sample_collapsed.txt
$ flamegraph.pl ~/tmp/sample_collapsed.txt > ~/tmp/sample_collapsed_flamegraph.svg
Ignored 2335 lines with invalid format
ERROR: No stack counts found

JMeter - Summary Report not displaying correctly

I am new to JMeter so bear with me...
I have a setUp Thread Group where I am grabbing a token and then re-using that in the HTTP Header Manager within the main Thread Group. Within that Thread Group I have the following parameters set...
I run this command to execute the tests:
jmeter -n -t PSC_Token.jmx -l testPsc.jtl -f
When I open the testPsc.jtl file though in Summary Report, I would expect that each request would show 600 for # Samples (200 threads * 3 loop count) but it is showing 1200 for each.
I tried deleting the file entirely and re-running it, just in case it was appending or something strange. That doesn't resolve the issue though.
Any ideas?
You're writing the same data into the same file twice; the options are:
Disable (or better, delete) the Summary Report listener; in general, listeners don't add any value, they only consume resources
Or remove the -l command-line argument and run your test just like:
jmeter -n -t PSC_Token.jmx
Also be aware that, according to JMeter Best Practices, you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.5 (or whatever the latest stable version available on the JMeter Downloads page is).

JMeter Non-GUI mode (Command line) execution - Tidying up

I'm executing JMeter from the command line using the following command:
!JMeter -Jjmeter.save.saveservice.samplerData=true -Jjmeter.save.saveservice.response_data=true -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.responseHeaders=true -Jjmeter.save.saveservice.requestHeaders=true -Jsummariser.out=false -n -t .jmx -l JmeterReports\TestReport.xml -j JmeterReports\jmeter.log
At the end of the run, I get a "Tidying up" message and it takes 50 minutes. Any hint on how to avoid this? It impacts my testing time.
00:08:42.083 login ticket value is :: LT-1418054-HsUfB5qYlXKKhrnJGGcoGeCeQtTf5
00:59:51.971 Tidying up ... # Tue Mar 22 12:30:26 CET 2022 (1647948626380)
00:59:51.971 ... end of run
First of all, consider disabling all these Results File Configuration overrides, as storing all request and response data in the .jtl file for a one-hour test causes massive disk I/O; moreover, writing XML is more resource-intensive than the default CSV output (see the example command after the links below).
If the problem is still there, take a thread dump and inspect what the threads are doing and which ones are stuck or waiting.
Monitor the JVM metrics using JVisualVM or an equivalent; it might be that the JVM is doing excessive garbage collection due to low heap space or something similar.
More information:
Reducing resource requirements
9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
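For example, a leaner invocation that drops the XML overrides and keeps JMeter's default CSV output could look something like this (the file names are just placeholders):
jmeter -n -t TestPlan.jmx -l JmeterReports\TestReport.csv -j JmeterReports\jmeter.log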

How to trace dynamic instruction in spike (on RISC-V)

I'm new to Spike and RISC-V. I'm trying to do some dynamic instruction tracing with Spike. The instructions are from a sample.c file. I have tried the following commands:
$ riscv64-unknown-elf-gcc simple.c -g -o simple.out
$ riscv64-unknown-elf-objdump -d --line-numbers -S simple.out
But these commands display the disassembled instructions in an output file, which is not what I want. I need to trace the dynamically executed instructions at runtime. I found only two relevant options among Spike's host options:
-g - track histogram of PCs
-l - generate a log of execution
I'm not sure whether either of these gives the result I described above.
Does anyone have an idea how to do the dynamic instruction trace in spike?
Thanks a lot!
Yes, you can call spike with -l to get a trace of all executed instructions.
Example:
$ spike -l --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello 2> ins.log
Note that this trace also contains all instructions executed by the proxy kernel (pk), not just those of your user program.
The trace can still be useful, though: for example, you can search for the start address of your code (i.e. look it up in the objdump output) and consume the trace from there.
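For example, one way to do that (assuming you compiled simple.out as in the question and redirected the Spike log to ins.log as above; the address below is only a placeholder for whatever objdump reports for your main):
$ riscv64-unknown-elf-objdump -d simple.out | grep '<main>:'
$ grep -n '0x0000000000010178' ins.log | head -n 1
The first command prints the address of main; substitute that address (prefixed with 0x) into the second command to find the first trace line where it is executed.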
Also, when your program invokes a syscall you see something like this in the trace:
[.. inside your program ..]
core 0: 0x0000000000010088 (0x00000073) ecall
core 0: exception trap_user_ecall, epc 0x0000000000010088
core 0: 0x0000000080001938 (0x14011173) csrrw sp, sscratch, sp
[.. inside the pk ..]
sret
[.. inside your program ..]
That means you can skip over the instructions of a syscall (the ones executed inside the pk) by searching for the next sret.
Alternatively, you can call spike with -d to enter debug mode. Then you can set a breakpoint on the first instruction of interest in your program (until pc 0 YOURADDRESS - look up the address in the objdump output) and single step from there (by hitting return multiple times). See also the help screen by entering h at the spike prompt.
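For example, a minimal debug session could look roughly like this (0x10178 is only a placeholder for the address you looked up in the objdump output):
$ spike -d --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello
: until pc 0 0x10178
: reg 0 a0
: q
Here until pc 0 0x10178 runs core 0 until the given address is reached, reg 0 a0 prints a register, pressing return executes a single instruction, and q quits.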

How to get the result csv file in between the test run during performance testing using Jmeter?

I am using JMeter version 4. For example, I am running a test for four hours, and during the test run I want the result file for the part that ran from the 2nd to the 3rd hour. Is it possible to get a result file like that?
I know that we can get the result file from the start to the 3rd hour, but I want it from the 2nd to the 3rd hour.
Can I get that? Please suggest.
The easiest option is to go for the Filter Results Tool, which has --start-offset and --end-offset parameters specifying how to "cut" the original .jtl file (in seconds), so you could do something like:
FilterResults --output-file from2ndto3rd_hour.jtl --input-file /path/to/large/result.jtl --start-offset 7200 --end-offset 10800
The Filter Results Tool can be installed using the JMeter Plugins Manager.
Ideally, you should use this solution that allows you to have live results:
https://jmeter.apache.org/usermanual/realtime-results.html
But if you want to work with CSV, your best bet would be to modify the timestamp format by adding this to user.properties:
jmeter.save.saveservice.timestamp_format=yyyyMMddHHmmss
And ensure JMeter flushes on every write to avoid having partial lines:
jmeter.save.saveservice.autoflush=true
Then use grep; for example, to take the results between 15:00 and 16:00 on 26 January 2019:
grep "2019012615" results.csv > filter.csv
If you don't want to rely on grep, you can take the whole file and generate the HTML report for just that time window using:
jmeter -Jjmeter.reportgenerator.start_date=20190126150000 -Jjmeter.reportgenerator.end_date=20190126160000 -g results.csv -o reportfolder
