Intel VTune command line error - windows

I am trying to use the VTune command line to set the maximum number of samples to collect before collection stops. For this I have been using the -msc option, but I get an error saying unknown command.
The command I am using is : "C:\Program Files\Intel\VTune Amplifier XE 2015\bin32\amplxe-cl" -collect general-exploration --duration 30 -msc 300
The above command gives me an "Unknown command -msc" error.
How can I solve this issue?

The VTune command line tool can't do what you requested: VTune can't stop collection based on the number of collected samples.
I can suggest decreasing the collection time (-duration) or reducing the amount of data to be collected using the -data-limit option.
BTW, the list of supported options can be obtained with the following command:
amplxe-cl -help collect
The list of analysis-type-specific knobs can be obtained this way:
amplxe-cl -help collect general-exploration
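Since sample-count limits aren't supported, a sketch of the closest workaround is to combine -duration with -data-limit. The install path and the 100 MB limit below are example values, not taken from the question:

```shell
# Sketch: cap collection by elapsed time and collected data size,
# since VTune has no sample-count limit. Path and limit are examples.
AMPLXE_CL="C:/Program Files/Intel/VTune Amplifier XE 2015/bin32/amplxe-cl"

if [ -x "$AMPLXE_CL" ]; then
  # -duration is in seconds; -data-limit is in MB (0 = unlimited).
  "$AMPLXE_CL" -collect general-exploration -duration 30 -data-limit 100
else
  echo "amplxe-cl not found; this only illustrates the intended invocation"
fi
```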

Related

How to generate flamegraphs from macOS process samples?

Anyone have a clean process for converting samples on macOS to FlameGraphs?
After a bit of fiddling I thought I could use a tool such as flamegraph-sample, but it gives me trouble, so perhaps there are more up-to-date options that I'm missing. The tool errors out like this:
$ sudo sample PID -file ~/tmp/sample.txt -fullPaths 1
Sampling process 198 for 1 second with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Sample analysis of process 35264 written to file ~/tmp/sample.txt
$ python stackcollapse-sample.py ~/tmp/sample.txt > ~/tmp/sample_collapsed.txt
$ flamegraph.pl ~/tmp/sample_collapsed.txt > ~/tmp/sample_collapsed_flamegraph.svg
Ignored 2335 lines with invalid format
ERROR: No stack counts found
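The "No stack counts found" error means flamegraph.pl received no lines in the collapsed "frame;frame;frame count" format. A hedged sketch of the same pipeline, checking each stage's output before feeding it forward (tool names and paths are taken from the question; PID is a placeholder):

```shell
# Sketch: sample -> collapse -> flamegraph, validating intermediates.
# An empty or wrongly formatted collapsed file is exactly what produces
# "Ignored ... lines with invalid format" / "No stack counts found".
SAMPLE_TXT=~/tmp/sample.txt
COLLAPSED=~/tmp/sample_collapsed.txt

if command -v sample >/dev/null 2>&1; then   # macOS-only tool
  sudo sample PID -file "$SAMPLE_TXT" -fullPaths 1
  python stackcollapse-sample.py "$SAMPLE_TXT" > "$COLLAPSED"
  # Each collapsed line should look like: frameA;frameB;frameC 42
  if [ -s "$COLLAPSED" ]; then
    flamegraph.pl "$COLLAPSED" > ~/tmp/sample_collapsed_flamegraph.svg
  else
    echo "collapse step produced no stacks; check the sample output format" >&2
  fi
fi
```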

Resolution error: Maximum number of iterations reached

While running Coursier from the command line like so
cs resolve --tree <artifact coordinates>
I run into this error
Resolution error: Maximum number of iterations reached
Apparently Coursier has a hard coded limit of 50 iterations and the suggested workaround is to increase that number via coursierMaxIterations := 200 when using SBT.
Since I'm using Coursier from the command line, is there a way to apply this workaround and pass that setting to Coursier from the command line too?
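I'm not certain which flag name (if any) the installed Coursier CLI exposes for this, so rather than guessing, a sketch that searches the CLI's own help text for an iteration-related option:

```shell
# Sketch: query the Coursier CLI help for a resolution-iteration option
# instead of assuming a flag name; flag availability varies by version.
QUERY="iter"   # substring to look for in the help output

if command -v cs >/dev/null 2>&1; then
  cs resolve --help 2>&1 | grep -i "$QUERY" || echo "no matching option listed"
else
  echo "coursier (cs) not on PATH"
fi
```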

Slots command in hostfile for mpirun not recognised

I saw another question that seemed similar mpirun: token slots not supported but their solution did not work for me.
I get the error
token slots not supported at this time
when running the command mpirun -hostfile temp.txt hostname
where temp.txt is
hostname1 slots=2
hostname2 slots=2
I have mpirun version 2021.5 (Release Date: 20211102, id: 9279b7d62).
It did not work to instead write
hostname1:2
hostname2:2
in that case the command runs, but it launches the default number of processes, i.e. one per available physical processor.
EDIT: I am adding the full output
[host RAMSES]$ mpirun -hostfile temp.txt hostname
[mpiexec#host] HYD_hostfile_process_tokens (../../../../../src/pm/i_hydra/libhydra/hostfile/hydra_hostfile.c:47): token slots not supported at this time
[mpiexec#host] HYD_hostfile_unique_parse (../../../../../src/pm/i_hydra/libhydra/hostfile/hydra_hostfile.c:232): unable to process token
[mpiexec#host] match_arg (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:83): match handler returned error
[mpiexec#host] HYD_arg_parse_array (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:128): argument matching returned error
[mpiexec#host] mpiexec_get_parameters (../../../../../src/pm/i_hydra/mpiexec/mpiexec_params.c:1359): error parsing input array
[mpiexec#host] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1784): error parsing parameters
So I found that on my version of MPI I had to specify processor placement not in the hostfile, as most of the examples I found do, but in the machinefile.
The new command and file look like:
mpirun -machinefile machine.txt hostname
machine.txt:
host1:2
host2:2
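The fix above can be reproduced end to end in one snippet. The hostnames are the question's placeholders; the host:N syntax is the one the answer found to work with this Intel MPI / hydra version:

```shell
# Sketch: generate the machinefile, then launch with -machinefile.
# host1/host2 are placeholders; :2 requests two ranks per host.
printf 'host1:2\nhost2:2\n' > machine.txt

if command -v mpirun >/dev/null 2>&1; then
  mpirun -machinefile machine.txt hostname
else
  echo "mpirun not on PATH; machine.txt written for inspection"
fi
```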

Unable to download data using Aspera

I am trying to download data from the European Nucleotide Archive (ENA) using the Aspera CLI; however, my downloads are stalling. I have downloaded several files with the same tool before, but this has been happening for the last month. I usually use the following command:
ascp -QT -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
From a post on Beta Science, I learnt that this might be due to not limiting the download speed, and hence tried using the -l argument, but it was of no help.
ascp -QT -l 300m -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
Your command works.
You might be overdriving your local network. How much bandwidth do you have?
Here, "-l 300m" sets a target rate of 300 Mbps; if you have less than that, it can cause such problems.
Try reducing the target rate to what you actually have.
(Are you using wired or Wi-Fi?)
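The advice above can be sketched as a retry at a lower target rate. The 50 Mbps value is an example, not a recommendation; set it at or below your measured bandwidth:

```shell
# Sketch: retry the ENA download with a reduced target rate.
# RATE=50m is an example value; tune it to your actual bandwidth.
RATE=50m
KEY=~/.aspera/connect/etc/asperaweb_id_dsa.openssh
SRC=era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz

if command -v ascp >/dev/null 2>&1; then
  ascp -QT -l "$RATE" -P33001 -k 1 -i "$KEY" "$SRC" .
else
  echo "ascp not on PATH"
fi
```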

Faster way of Appending/combining thousands (42000) of netCDF files in NCO

I seem to be having trouble properly combining thousands of netCDF files (42000+; 3 GB in size for this particular folder/variable). The main variable that I want to combine has a structure of (6, 127, 118), i.e. (time, lat, lon).
I'm appending the files one by one, since the list of files is too long to pass in a single command.
I have tried:
for i in input_source/**/**/*.nc; do ncrcat -A -h append_output.nc $i append_output.nc ; done
but this method seems to be really slow (on the order of kB/s, and it seems to get slower as more files are appended), and it also gives a warning:
ncrcat: WARNING Intra-file non-monotonicity. Record coordinate "forecast_period" does not monotonically increase between (input file file1.nc record indices: 17, 18) (output file file1.nc record indices 17, 18) record coordinate values 6.000000, 1.000000
which basically just repeats the variable "forecast_period" 1-6 n times, where n = 42000 files, i.e. [1,2,3,4,5,6,1,2,3,4,5,6,...].
Despite this warning I can still open the file, and ncrcat does what it's supposed to; it is just slow, at least with this particular method.
I have also tried adding in the option:
--no_tmp_fl
but this gives an error:
ERROR: nco__open() unable to open file "append_output.nc"
full error attached below
If it helps, I'm using WSL with Ubuntu on Windows 10.
I'm new to bash, and any comments would be much appreciated.
Either of these commands should work:
ncrcat --no_tmp_fl -h *.nc append_output.nc
or
ls input_source/**/**/*.nc | ncrcat --no_tmp_fl -h append_output.nc
Your original command is slow because you open and close the output file N times. These commands open it once, fill it up, then close it.
I would use CDO for this task. Given the huge number of files, it is recommended to first sort them in time order (assuming you want to merge them along the time axis). After that, you can use:
cdo cat *.nc outfile
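A sketch of the sort-then-concatenate step, assuming the filenames sort chronologically (if they don't, build the ordered list some other way). The glob and output name follow the question; note that with 42000+ files a single command line may exceed the shell's argument-length limit, in which case you would need to batch:

```shell
# Sketch: concatenate along time with CDO, feeding files in sorted order.
# Assumes lexical filename order == chronological order.
OUT=outfile.nc

if command -v cdo >/dev/null 2>&1; then
  # ** requires bash's globstar; may hit ARG_MAX with very many files.
  cdo cat $(ls input_source/**/**/*.nc | sort) "$OUT"
else
  echo "cdo not on PATH"
fi
```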
