How to relog a Performance Monitor circular log

I'm trying to relog a Performance Monitor circular log.
When I do this with a "normal" binary file, it works correctly:
C:\PerfLogs\Admin\Test>relog DataCollector01.blg -f csv -o test.csv
Input
----------------
File(s):
DataCollector01.blg (Binary)
Begin: 2016-11-22 8:18:18
End: 2016-11-22 8:21:18
Samples: 13
100.00%
Output
----------------
File: test.csv
Begin: 2016-11-22 8:18:18
End: 2016-11-22 8:21:18
Samples: 13
The command completed successfully.
But when I create a circular log, I get this error:
C:\PerfLogs\Admin\Test>relog DataCollector01.blg -f csv -o test.csv
Input
----------------
File(s):
DataCollector01.blg (Binary)
Error:
Unable to read counter information and data from input binary log files.
The data collector is running. When I stop it, I can relog the .blg file.

You cannot read a log while it's open. You have to stop the logging first.
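For completeness, a minimal sketch of the sequence, assuming the Data Collector Set is named DataCollector01 to match the .blg file above (if it isn't, logman query lists the actual names):
:: Stop the Data Collector Set so the circular .blg file is closed
logman stop DataCollector01
:: The closed file can now be relogged as usual
relog DataCollector01.blg -f csv -o test.csv
:: Restart collection afterwards if needed
logman start DataCollector01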

fatal error: An error occurred (404) when calling the HeadObject operation: Key " " does not exist

This is my setup:
I use AWS Batch running a custom Docker image.
The startup.sh file is an entrypoint script that reads the nth line of a text file and copies the listed objects from S3 into the container.
For example, if the first line of the .txt file is 'Startup_000017 Startup_000018 Startup_000019', the bash script reads this line and uses a for loop to copy the files over.
This is part of my bash script:
STARTUP_FILE_S3_URL=s3://cmtestbucke/Config/
Startup_FileNames=$(sed -n ${LINE}p file.txt)
for i in ${Startup_FileNames}
do
Startup_FileURL=${STARTUP_FILE_S3_URL}$i
echo $Startup_FileURL
aws s3 cp ${Startup_FileURL} /home/CM_Projects/ &
done
Here is the log output from aws:
s3://cmtestbucke/Config/Startup_000017
s3://cmtestbucke/Config/Startup_000018
s3://cmtestbucke/Config/Startup_000019
Completed 727 Bytes/727 Bytes (7.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000018 to Data/Config/Startup_000018
Completed 731 Bytes/731 Bytes (10.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000017 to Data/Config/Startup_000017
fatal error: An error occurred (404) when calling the HeadObject operation: Key
"Config/Startup_000019 " does not exist.
My s3 bucket certainly contains the object s3://cmtestbucke/Config/Startup_000019
I noticed this happens regardless of filenames. The last iteration always gives this error.
I tested this bash logic locally with the same aws commands. It copies all 3 files.
Can someone please help me figure out what is wrong here?
The problem was the end-of-line encoding of the text file. It was set to Windows (CR LF), while the Docker image runs Ubuntu, which caused the error. I changed the EOL to Unix (LF) and the problem was solved.
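For anyone hitting the same issue, a minimal sketch of normalising the line endings from the entrypoint script before the loop runs (assuming file.txt is the list file read above; dos2unix, if available in the image, does the same job):
# Strip Windows carriage returns so the last key on each line has no trailing \r
sed -i 's/\r$//' file.txt
# equivalent alternative:
# tr -d '\r' < file.txt > file_unix.txt && mv file_unix.txt file.txt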

Piping bzip2 output into tdbloader2 (apache-jena) gives "File does not exist"

I want to pipe the output from bzip2 and use it as input to fill a TDB database using tdbloader2 from apache-jena-3.9.0.
I already found
Generating TDB Dataset from archive containing N-TRIPLES files
but the proposed solution there did not work for me.
bzip2 -dc test.ttl.bz2 | tdbloader2 --loc=/pathto/TDBdatabase_test -- -
produces
20:08:01 INFO -- TDB Bulk Loader Start
20:08:01 INFO Data Load Phase
20:08:01 INFO Got 1 data files to load
20:08:01 INFO Data file 1: /home/user/-
File does not exist: /home/user/-
20:08:01 ERROR Failed during data phase
I got similar results with (inspired by https://unix.stackexchange.com/questions/16990/using-data-read-from-a-pipe-instead-than-from-a-file-in-command-options)
bzip2 -dc test.ttl.bz2 | tdbloader2 --loc=/pathto/TDBdatabase_test /dev/stdin
20:34:45 INFO -- TDB Bulk Loader Start
20:34:45 INFO Data Load Phase
20:34:45 INFO Got 1 data files to load
20:34:45 INFO Data file 1: /proc/16256/fd/pipe:[92062]
File does not exist: /proc/16256/fd/pipe:[92062]
20:34:45 ERROR Failed during data phase
and
bzip2 -dc test.ttl.bz2 | tdbloader2 --loc=/pathto/TDBdatabase_test /dev/fd/0
20:34:52 INFO -- TDB Bulk Loader Start
20:34:52 INFO Data Load Phase
20:34:52 INFO Got 1 data files to load
20:34:52 INFO Data file 1: /proc/16312/fd/pipe:[97432]
File does not exist: /proc/16312/fd/pipe:[97432]
20:34:52 ERROR Failed during data phase
Unpacking the bz2 file manually and then loading it works fine:
bzip2 -d test.ttl.bz2
tdbloader2 --loc=/pathto/TDBdatabase_test test.ttl
It would be great if someone could point me in the right direction.
tdbloader2 accepts bz2 compressed files on the command line:
tdbloader2 --loc=/pathto/TDBdatabase_test test.ttl.bz2
It doesn't accept input from a pipe, and even if it did, it would not know that the syntax is Turtle, which it determines from the file extension.
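As a usage note, the same form should also handle several compressed inputs in one run, since each .ttl.bz2 extension tells the loader both the syntax (Turtle) and the compression (bzip2). A sketch, where other.ttl.bz2 is a hypothetical second file:
tdbloader2 --loc=/pathto/TDBdatabase_test test.ttl.bz2 other.ttl.bz2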

How can I figure out how many threads cut needs in a Snakemake rule?

I use cut in one rule of my pipeline and it always throws an error, but without any error description.
When I try the command in a simple bash script, it works without any errors.
Here is the rule:
rule convert_bamheader:
input: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned.bam, stats/SERUM-ACT/good_barcodes_clean_filter.txt
output: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt, bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
jobid: 15
wildcards: sample=SERUM-ACT
threads: 4
mkdir -p stats/SERUM-ACT
mkdir -p log/SERUM-ACT
samtools view bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned.bam > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt
cut -f 12,13,18,20-24 bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt | grep -f stats/SERUM-ACT/good_barcodes_clean_filter.txt > bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
Submitted DRMAA job 15 with external jobid 7027806.
Error in rule convert_bamheader:
jobid: 15
output: bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header.txt, bam/SERUM-ACT/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv
ClusterJobException in line 256 of */pipeline.snake:
Error executing rule convert_bamheader on cluster (jobid: 15, external: 7027806, jobscript: */.snakemake/tmp.ewej7q4e/snakejob.convert_bamheader.15.sh). For detailed error see the cluster log.
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
Complete log: */.snakemake/log/2018-12-18T104741.092698.snakemake.log
I thought it might have something to do with the number of threads provided versus the number of threads needed for the cut step, but I am not sure.
Perhaps someone can help me?
Cheers!
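For reference, such a rule might look roughly like the sketch below in the Snakefile (reconstructed from the log above, so paths and the exact structure are assumptions). Note that cut and grep are single-threaded tools, so threads: 4 only reserves cores for the cluster job; it does not change how cut itself runs.
rule convert_bamheader:
    input:
        bam="bam/{sample}/exon_tagged_trimmed_mapped_cleaned.bam",
        barcodes="stats/{sample}/good_barcodes_clean_filter.txt"
    output:
        header="bam/{sample}/exon_tagged_trimmed_mapped_cleaned_header.txt",
        filtered="bam/{sample}/exon_tagged_trimmed_mapped_cleaned_header_filtered.tsv"
    threads: 4
    shell:
        """
        mkdir -p stats/{wildcards.sample} log/{wildcards.sample}
        samtools view {input.bam} > {output.header}
        cut -f 12,13,18,20-24 {output.header} | grep -f {input.barcodes} > {output.filtered}
        """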

Jmeter: HTML report generation after tests

These are the steps that I followed to generate reports:
I have the .jtl file
I copy-paste the given sample configuration into my user.properties file located at apache-jmeter-5.0\bin
I convert the .jtl to an aggregate report using CMDRunner.jar:
java -jar CMDRunner.jar --tool Reporter --generate-csv Demo17Results.csv --input-jtl Demo17Results.jtl --plugin-type AggregateReport
I convert the CSV file from step 3 to HTML reports
I tried (1) jmeter -g Demo17Results.csv -o htmlReports/
Error: csv' does not contain the field names header, ensure the jmeter.save.saveservice.* properties are the same as when the CSV file was created or the file may be read incorrectly when generating report
An error occurred: Mismatch between expected number of columns:17 and columns in CSV file:11, check your jmeter.save.saveservice.* configuration or check line is complete
I tried (2) jmeter -n -t Demo17Run.jmx -l Demo17Results.csv -e -o htmlReports/
Creating summariser <summary>
Error in NonGUIDriver java.lang.IllegalArgumentException: Results file:Demo17Results.csv is not empty
After emptying the CSV file:
Creating summariser <summary>
Created the tree successfully using Demo17Run.jmx
Starting the test
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Tidying up ...
Error generating the report: org.apache.jmeter.report.core.SampleException: Could not read metadata !
... end of run
What am I doing wrong to generate JMeter HTML dashboard reports?
You don't need step 2; JMeter's default configuration is fine for dashboard generation.
You don't need step 3; the dashboard needs to be created from the Demo17Results.jtl file, which contains the full raw results, not the statistics table.
Try re-running your test scenario, forcing deletion of the previous result file via the -f argument:
jmeter -n -f -t Demo17Run.jmx -l Demo17Results.jtl -e -o htmlReports/
If nothing helps, double-check that you have not modified the required results file configuration settings, and increase JMeter log verbosity for the report.dashboard package by adding the following line to the log4j2.xml file:
<Logger name="org.apache.jmeter.report.dashboard" level="debug" />
I was getting a similar error when trying to generate the HTML dashboard report after the test run. Even though the path to the .jtl file was correct (and there were no spaces in the directory names), I kept getting the "Mismatch between expected number of columns..." error. I had been pointing the command at a copy of the .jtl file I had made in another directory. I changed the command to pick up the .jtl file that was in JMeter\bin and that worked: no more errors and the reports were generated. So the command (run from JMeter\bin) that worked was: jmeter -g log.jtl -o C:\HTML_Reports.
Also, the output folder that is specified must not exist (JMeter will create it) or if it does exist, it must be empty.
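Putting the two answers together, the two working workflows (a sketch using the file names from the question) are:
# Run the test and build the dashboard in one go; -f forces deletion of the previous results file
jmeter -n -f -t Demo17Run.jmx -l Demo17Results.jtl -e -o htmlReports/
# Or build the dashboard afterwards from the raw .jtl; htmlReports/ must be empty or not yet exist
jmeter -g Demo17Results.jtl -o htmlReports/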

How to find out whether Logstash is at EOF?

I am using Logstash with Elasticsearch to analyze and store data out of my Apache logs. In my setup, Logstash takes input from a file, stdin.log.
I want to create a script which automatically inserts the latest logs into stdin.log whenever Logstash has reached the end of stdin.log. So my question is: is there a way to find out whether Logstash has reached EOF or not? Can I use the sincedb file for this purpose?
I achieved my goal by comparing the size of the file with the offset recorded in the sincedb file.
currentPosition=$(tail -1 .sincedb | awk '{printf $4}')
yields the current offset of Logstash's file pointer in the log file, while
fileSize=$(stat -c '%s' stdin.log)
yields the total size in bytes. Comparing them:
if [[ "$currentPosition" -eq "$fileSize" ]]; then # proceed
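Put together as a complete check that appends the next batch of logs once Logstash has caught up (a sketch; the sincedb path, the column-4 offset convention used above, and new_logs.txt are assumptions):
#!/usr/bin/env bash
SINCEDB=.sincedb      # Logstash sincedb file (path is an assumption)
LOGFILE=stdin.log     # file the Logstash file input is tailing

currentPosition=$(tail -1 "$SINCEDB" | awk '{printf $4}')   # offset Logstash has read up to
fileSize=$(stat -c '%s' "$LOGFILE")                         # current size of the log file

if [[ "$currentPosition" -eq "$fileSize" ]]; then
    # Logstash is at EOF, so it is safe to append the next chunk (hypothetical file)
    cat new_logs.txt >> "$LOGFILE"
fi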
You can look inside the sincedb file to get the inodes and current offsets.
Another option is lsof -oo10 -p $LOGSTASHPID; examine the OFFSET column for the file in question.
