Resolution error: Maximum number of iterations reached - coursier

While running Coursier from the command line, like so:
cs resolve --tree <artifact coordinates>
I run into this error:
Resolution error: Maximum number of iterations reached
Apparently Coursier has a hard-coded limit of 50 iterations, and the suggested workaround is to increase that number via coursierMaxIterations := 200 when using sbt.
Since I'm using Coursier from the command line, is there a way to apply this workaround and pass that setting to Coursier from the command line too?
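If I remember correctly, the coursier CLI exposes a matching resolution option, so something along these lines may work (the flag name is from memory; verify it with cs resolve --help):
cs resolve --max-iterations 200 --tree <artifact coordinates>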

Related

How to generate flamegraphs from macOS process samples?

Anyone have a clean process for converting samples on macOS to FlameGraphs?
After a bit of fiddling I thought I could use a tool such as flamegraph-sample, but it's giving me trouble, so I wonder whether there are other, more up-to-date options I'm missing. The tool fails with this error:
$ sudo sample PID -file ~/tmp/sample.txt -fullPaths 1
Sampling process 198 for 1 second with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Sample analysis of process 35264 written to file ~/tmp/sample.txt
$ python stackcollapse-sample.py ~/tmp/sample.txt > ~/tmp/sample_collapsed.txt
$ flamegraph.pl ~/tmp/sample_collapsed.txt > ~/tmp/sample_collapsed_flamegraph.svg
Ignored 2335 lines with invalid format
ERROR: No stack counts found
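In case it helps: the FlameGraph repository itself ships a collapser for macOS sample output, stackcollapse-sample.awk (name from memory, worth double-checking in the repo), which may cope with sample's call-tree format better than the Python script:
sudo sample PID -file ~/tmp/sample.txt -fullPaths 1
awk -f stackcollapse-sample.awk ~/tmp/sample.txt > ~/tmp/sample_collapsed.txt
flamegraph.pl ~/tmp/sample_collapsed.txt > ~/tmp/sample_collapsed_flamegraph.svg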

how to capture all packet size using windows pktmon

I am trying to use pktmon (the built-in Windows packet analyzer). The documentation mentions that by default the captured packet size is limited to 128 bytes, but that it can be increased with the following command: pktmon start --etw -p 0.
But running that command gives me this error: Error: '0' is not a valid event provider Id. What could be wrong?
So far I've not seen anything helpful on the internet.
Most of the examples on the internet show
pktmon start --etw -p 0 -c 1
but neither -p nor -c seems to work.
So what worked for me is
pktmon start --etw --pkt-size 0 --comp 1
From the utility help:
--pkt-size
Number of bytes to log from each packet. To always log the entire
packet set this to 0. Default is 128 bytes.
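For reference, a complete capture session with these flags looks something like this (pktmon stop ends the trace and finalizes the .etl log; depending on your Windows build there is also a pktmon verb to convert the log to pcapng, check pktmon help):
pktmon start --etw --pkt-size 0 --comp 1
... reproduce the traffic you want to capture ...
pktmon stop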

Faster way of Appending/combining thousands (42000) of netCDF files in NCO

I seem to be having trouble properly combining thousands of netCDF files (42000+, about 3 GB in total for this particular folder/variable). The main variable that I want to combine has a structure of (6, 127, 118), i.e. (time, lat, lon).
I'm appending the files one by one, since the list of files is too long to pass at once.
I have tried:
for i in input_source/**/**/*.nc; do ncrcat -A -h append_output.nc $i append_output.nc ; done
but this method seems to be really slow (on the order of kB/s, and it seems to get slower as more files are appended) and also gives a warning:
ncrcat: WARNING Intra-file non-monotonicity. Record coordinate "forecast_period" does not monotonically increase between (input file file1.nc record indices: 17, 18) (output file file1.nc record indices 17, 18) record coordinate values 6.000000, 1.000000
which basically means the record coordinate "forecast_period" just repeats 1-6 n times, where n = 42000 files, i.e. [1,2,3,4,5,6,1,2,3,4,5,6,...].
Despite this warning I can still open the file and ncrcat does what it's supposed to; it is just slow, at least with this particular method.
I have also tried adding in the option:
--no_tmp_fl
but this gives an error:
ERROR: nco__open() unable to open file "append_output.nc"
full error attached below
If it helps, I'm using WSL with Ubuntu on Windows 10.
I'm new to bash and any comments would be much appreciated.
Either of these commands should work:
ncrcat --no_tmp_fl -h *.nc
or
ls input_source/**/**/*.nc | ncrcat --no_tmp_fl -h append_output.nc
Your original command is slow because you open and close the output file N times. These commands open it once, fill it up, then close it.
I would use CDO for this task. Given the huge number of files, it is recommended to first sort them by time (assuming you want to merge them along the time axis). After that, you can use
cdo cat *.nc outfile
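For example, if the file names sort chronologically, something like this sketch should work in one pass (note that with 42000+ files the expanded command line may exceed the shell's argument-length limit, in which case you would need to batch it):
cdo cat $(ls input_source/**/**/*.nc | sort) append_output.nc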

Error during go build/run execution

I've created a simple go script: https://gist.github.com/kbl/86ed3b2112eb80522949f0ce574a04e3
It fetches some XML from the internet and then starts X goroutines, where X depends on the file content. In my case it was 1700 goroutines.
My first execution finished with:
$ go run mathandel1.go
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/162152?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/148517?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
exit status 1
I've tried to increase ulimit to 2048.
Now I'm getting a different error, though the script is the same:
$ go build mathandel1.go
# command-line-arguments
/usr/local/go/pkg/tool/linux_amd64/link: flushing $WORK/command-line-arguments/_obj/exe/a.out: write $WORK/command-line-arguments/_obj/exe/a.out: file too large
What is causing that error? How can I fix that?
You ran ulimit 2048, which changed the maximum file size.
From man bash(1), ulimit section:
If no option is given, then -f is assumed.
This means that you have now set the maximum file size to 2048 bytes, which is probably not enough for... anything.
I'm guessing you meant to change the limit on the number of open file descriptors. For this, you want to run:
ulimit -n 2048
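Running ulimit -n with no value prints the current limit, so you can verify that the change took effect:
ulimit -n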
As for the original error (before you changed the maximum file size): you're launching 1700 goroutines, each performing an HTTP GET. Each one opens a connection, using a TCP socket, and those sockets count against the open file descriptor limit.
Instead, you should limit the number of concurrent downloads. This can be done with a simple worker pool pattern, as sketched below.
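A minimal sketch of such a pool, assuming a plain http.Get per URL (the URL list, worker count, and response handling are placeholders to adapt to the gist's actual logic):

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	urls := []string{ /* the ~1700 URLs parsed from the XML */ }

	jobs := make(chan string)
	var wg sync.WaitGroup

	// 20 workers means at most 20 sockets are open at any time,
	// no matter how many URLs are queued.
	for w := 0; w < 20; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range jobs {
				resp, err := http.Get(url)
				if err != nil {
					fmt.Println(err)
					continue
				}
				// read/process resp.Body here, then close it
				// so the underlying socket is released
				resp.Body.Close()
			}
		}()
	}

	for _, u := range urls {
		jobs <- u
	}
	close(jobs) // no more work: lets the workers' range loops end
	wg.Wait()
}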

Intel VTune command line error

I am trying to use the VTune command line to set the maximum number of samples to be collected before the collection stops. For this I have been using the -msc option, but I get an error saying it is an unknown command.
The command I am using is : "C:\Program Files\Intel\VTune Amplifier XE 2015\bin32\amplxe-cl" -collect general-exploration --duration 30 -msc 300
The above command gives me an "Unknown command -msc" error.
How can I solve this issue?
The VTune command line tool can't do what you requested: it can't stop the collection based on the number of collected samples.
I can suggest decreasing the collection time (-duration) or decreasing the amount of data to be collected using the -data-limit option.
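For example, something like this caps the result size (200 is just an illustrative value in MB; if I remember correctly, 0 means unlimited):
"C:\Program Files\Intel\VTune Amplifier XE 2015\bin32\amplxe-cl" -collect general-exploration -duration 30 -data-limit=200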
BTW, the list of supported options can be obtained with the following command:
amplxe-cl -help collect
The list of knobs specific to a given analysis type can be obtained this way:
amplxe-cl -help collect general-exploration
