Get cyclomatic complexity in PRQA

I am trying to get the cyclomatic complexity (STCYC) metric in Helix PRQA QAC++ with the following commands:
qacli report <path_proj> -t MDR -o <path_output_file>
qacli report <path_proj> -t SUR -o <path_output_file>
but the STCYC metric does not appear in the generated report files.
Other metrics appear, but not STCYC. Do I have to enable it somehow?
Where am I going wrong?

Related

lcov + gcov-9 performance regression because of json usage

I have updated my build environment's compiler from gcc 5.5.0 to gcc 9.3.0 and noticed a performance regression in coverage calculation:
it became roughly 10 times slower (48 hours instead of 5 hours for the whole project).
My investigation shows that gcov-9 switched to a JSON format instead of the intermediate text format,
which slowed down the creation and parsing of the intermediate gcov files.
Minimal example below:
> time geninfo --gcov-tool gcov-5 test5/CPrimitiveLayerTest.cpp.gcno
Found gcov version: 5.5.0
Using intermediate gcov format
Processing test5/CPrimitiveLayerTest.cpp.gcno
Finished .info-file creation
real 0m0.351s
user 0m0.298s
sys 0m0.047s
> time geninfo --gcov-tool gcov-9 test9/CPrimitiveLayerTest.cpp.gcno
Found gcov version: 9.3.0
Using intermediate gcov format
Processing test9/CPrimitiveLayerTest.cpp.gcno
Finished .info-file creation
real 0m8.024s
user 0m7.929s
sys 0m0.084s
I haven't found a way to return to the old format, but maybe there are workarounds or patches.
P.S. I know about gcov's --json-format argument, but lcov 1.15 can process either the JSON format or the so-called intermediate text format, while gcov 9 can output either the JSON format or so-called logfile format files.
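For reference, a rough sketch of the difference on the command line, using the same .gcno file as in the example below (gcov names its output after the source file, so the exact file names are assumptions):
# gcov 9: -i / --json-format emits a gzipped JSON file, <source>.gcov.json.gz
> gcov-9 -i test9/CPrimitiveLayerTest.cpp.gcno
# gcov 5: the same -i switch emitted the plain-text intermediate format instead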
Further investigation shows that this is because lcov 1.15 uses the JSON::PP module for JSON parsing.
Replacing JSON::PP with JSON::XS (a fast parser) gives the required speedup.
So I used the following commands to achieve that:
# patch geninfo to use fast json parser
> sudo sed -i 's/use JSON::PP/use JSON::XS/g' /usr/local/bin/geninfo
# install perl module
> sudo cpan install JSON::XS
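To confirm the patch took effect and to measure the gain, the earlier timing can simply be repeated (paths as in the example above):
# confirm geninfo now loads the fast parser
> grep -n 'JSON::XS' /usr/local/bin/geninfo
# re-run the same measurement for comparison
> time geninfo --gcov-tool gcov-9 test9/CPrimitiveLayerTest.cpp.gcno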

JMeter MergeResults is not handling timeStamp label correctly (millis)

Created two dummy sample projects (dummy1.jmx and dummy2.jmx) and executed the commands below with default settings (JMeter 5.3 default installation with all required plugins installed).
#> jmeter.bat -n -t dummy1.jmx -l dummy1.csv -j dummy1-jmeter.log (to execute the load)
#> jmeter.bat -g dummy1.csv -o dummy1 -j dummy1-report-jmeter.log
The generated report and timestamps look perfect both in the dashboard and in the graphs.
#> jmeter.bat -n -t dummy2.jmx -l dummy2.csv -j dummy2-jmeter.log (to execute the load)
#> jmeter.bat -g dummy2.csv -o dummy2 -j dummy2-report-jmeter.log
Again, the generated report and timestamps look perfect both in the dashboard and in the graphs.
Then I used the MergeResults plugin to merge the above CSV files into a single file and generated an HTML report:
#> JMeterPluginsCMD.bat --generate-csv dummy1-dummy2.csv --input-jtl merge.properties --plugin-type MergeResults
The merged timeStamp label is not valid, and the generated report also shows an invalid DateTime.
#> jmeter.bat -g dummy1-dummy2.csv -o merged -j merged-report-jmeter.log
Is this a bug, or am I missing some configuration? Even adding jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS to user.properties didn't help.
merge.properties
inputJtl1=dummy1.csv
prefixLabel1=TEST1:
includeLabels1=.*
excludeLabels1=
includeLabelRegex1=true
excludeLabelRegex1=
startOffset1=
endOffset1=
inputJtl2=dummy2.csv
prefixLabel2=TEST2:
includeLabels2=.*
excludeLabels2=
includeLabelRegex2=true
excludeLabelRegex2=
startOffset2=
endOffset2=
Unfortunately we cannot help without:
seeing your merge.properties file contents, and
knowing what you expect to happen.
In the meantime I can only tell you where this 2000-01-01 date comes from.
It's declared here:
private static final long REF_START_TIME = 946681200000L;
And being added to the original SampleResult timestamp here:
res.setTimeStamp(res.getTimeStamp() - startTimeRef + REF_START_TIME);
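If you want to verify that constant yourself, GNU date can convert it (the value is in epoch milliseconds, so drop the last three digits to get seconds):
> date -u -d @946681200
Fri Dec 31 23:00:00 UTC 1999
That is 2000-01-01 00:00 in a UTC+1 timezone, which matches the date you are seeing.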
I don't know whether it is a bug or whether it's designed to work like this (though the logic of subtracting the sampler's start time from its timestamp is beyond my limited understanding); it's better to check at the JMeter Plugins support forum.
In the meantime you can use services like BM.Sense for comparing the results of different test runs.

Unable to download data using Aspera

I am trying to download data from the European Nucleotide Archive (ENA) using the Aspera CLI, but my downloads keep stalling. I downloaded several files earlier using the same tool, but this has been happening for the last month. I usually use the following command:
ascp -QT -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
From a post on Beta Science, I learnt that this might be due to not limiting the download speed, and hence tried using the -l argument, but it was of no help.
ascp -QT -l 300m -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
Your command works, so you might be overdriving your local network.
How much bandwidth do you have?
Here "-l 300m" sets a target rate of 300 Mbps; if you have less than that (say, 30 Mbps), this can cause exactly such problems.
Try reducing the target rate to what you actually have.
(Are you on a wired connection or WiFi?)
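For example, on a modest connection you might try something like this (the 10m target rate is only an illustration; set it to roughly your measured bandwidth):
ascp -QT -l 10m -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .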

ESQL performance tools

I would like to analyze separate ESQL modules for performance on IBM Integration Bus, rather than the whole application with PerfHarness. I know there is a list of good practices for writing ESQL (for example, this: ESQL code tips).
But is there a tool for performance analysis of just one ESQL module?
You can check through your broker's 'Web User Interface'. Just turn on statistics for your flow (the one with your ESQL code) and it will show how much time the process took in each node.
I know this is rather old, but it still covers the basics: https://www.ibm.com/developerworks/websphere/library/techarticles/0406_dunn/0406_dunn.html The section on "Isolate the problem using accounting and statistics" should answer your question, and the part on using trace should help you profile the statements within an ESQL module.
The trace file generated at the debug level shows you how long each statement took to execute, down to microsecond precision, helping you find the problematic statement or loop.
To get a trace file, do the following:
Step 1: Start a user trace using the command below.
mqsichangetrace <Node> -u -e <Server> -f <MessageFlowName> -l debug -r
Step 2: Send a message through the message flow.
Step 3: Stop the trace using the MQSI command below.
mqsichangetrace <Node> -u -e <Server> -f "<MessageFlowName>" -l none
Step 4: Read the trace content into a file.
mqsireadlog <Node> -u -e <Server> -f -o flowtrace.xml
Step 5: Format the XML trace file into a user-readable format.
mqsiformatlog -i flowtrace.xml -o flowtrace.txt
Examine the text file.
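As a concrete illustration, a full run might look like this, assuming a broker named MYNODE, an integration server named default, and a flow named MyMessageFlow (all three names are hypothetical):
mqsichangetrace MYNODE -u -e default -f MyMessageFlow -l debug -r
(send a test message through the flow)
mqsichangetrace MYNODE -u -e default -f MyMessageFlow -l none
mqsireadlog MYNODE -u -e default -f -o flowtrace.xml
mqsiformatlog -i flowtrace.xml -o flowtrace.txt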

Issue in creating Vectors from text in Mahout

I'm using Mahout 0.9 (installed on HDP 2.2) for topic discovery (Latent Dirichlet Allocation algorithm). I have my text file stored in the directory inputraw and executed the following commands in order:
command #1:
mahout seqdirectory -i inputraw -o output-directory -c UTF-8
command #2:
mahout seq2sparse -i output-directory -o output-vector-str -wt tf -ng 3 --maxDFPercent 40 -ow -nv
command #3:
mahout rowid -i output-vector-str/tf-vectors/ -o output-vector-int
command #4:
mahout cvb -i output-vector-int/matrix -o output-topics -k 1 -mt output-tmp -x 10 -dict output-vector-str/dictionary.file-0
After executing the second command, it creates, as expected, a bunch of subfolders and files under output-vector-str (named df-count, dictionary.file-0, frequency.file-0, tf-vectors, tokenized-documents and wordcount). The sizes of these files all look OK considering the size of my input file; however, the file under tf-vectors is very small (in fact, only 118 bytes).
Since tf-vectors is the input to the third command, the third command also generates a very small file. Does anyone know:
1. What is the reason for the file under the tf-vectors folder being that small? There must be something wrong.
2. Starting from the first command, all the generated files have a strange encoding and are not human readable. Is this expected?
Your answers are as follows:
1. What is the reason for the file under the tf-vectors folder being that small?
The vectors are small because you set the max DF percentage to only 40%, meaning that only terms with a document frequency (the percentage of documents in which the term occurs) of less than 40% are taken into consideration. In other words, only terms that occur in 40% of the documents or fewer are used when generating the vectors.
2. Why do the generated files have a strange encoding, and are they human readable?
That is expected: they are Hadoop sequence files. There is a command in Mahout called mahout seqdumper which will come to your rescue for dumping the files from the "sequence" format to a human-readable format.
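A minimal sketch of how that might look, using the paths from the commands above (the part-r-00000 file name and the output file name are assumptions; check what is actually inside tf-vectors):
mahout seqdumper -i output-vector-str/tf-vectors/part-r-00000 -o tf-vectors-dump.txt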
Good Luck!!
