I am trying to use pktmon (the built-in Windows packet analyzer). The documentation mentions that by default the captured packet size is limited to 128 bytes, but that it can be increased with the following command: pktmon start --etw -p 0.
But running that command gives me this error: Error: '0' is not a valid event provider Id. What could be wrong?
So far I've not seen anything helpful on the internet.
Most of the examples on the internet show
pktmon start --etw -p 0 -c 1
Neither the -p nor the -c option seems to work.
So what worked for me is
pktmon start --etw --pkt-size 0 --comp 1
From the utility help:
--pkt-size
Number of bytes to log from each packet. To always log the entire
packet set this to 0. Default is 128 bytes.
I'm trying to install Oracle 11g, and this happened. Is there a way to fix this?
I have tried rebooting and running the script runfixup.sh, but that still doesn't resolve the problem.
I'm trying to install Oracle 11gR2 on Oracle Linux 7.4.
While the installer is performing prerequisite checks, we are getting this error:
This is a prerequisite condition to test whether the OS kernel parameter semmni is properly set.
More details:
Expected Value : 128
Actual Value : 0
Now if I execute as root:
/sbin/sysctl -a | grep sem
kernel.sem = 32000 1024000000 500 128
Which means that semmni=128.
Can somebody tell me what I am doing wrong?
You need to issue the following command for the changes to take effect:
[root@localhost ~]# /sbin/sysctl -p
The value can then be checked by reading /proc/sys/kernel/sem (kernel.sem lists semmsl, semmns, semopm, and semmni in that order, so semmni is the rightmost value):
[root@localhost ~]# more /proc/sys/kernel/sem
32000 1024000000 500 128
I've created a simple Go program: https://gist.github.com/kbl/86ed3b2112eb80522949f0ce574a04e3
It fetches some XML from the internet and then starts X goroutines, where X depends on the file content. In my case that was 1700 goroutines.
My first execution finished with:
$ go run mathandel1.go
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/162152?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/148517?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
exit status 1
I've tried to increase ulimit to 2048.
Now I'm getting a different error, though the script is the same:
$ go build mathandel1.go
# command-line-arguments
/usr/local/go/pkg/tool/linux_amd64/link: flushing $WORK/command-line-arguments/_obj/exe/a.out: write $WORK/command-line-arguments/_obj/exe/a.out: file too large
What is causing that error? How can I fix that?
You ran ulimit 2048, which changed the maximum file size.
From the bash(1) man page, ulimit section:
If no option is given, then -f is assumed.
This means you have now capped the maximum file size at 2048 units of 1024 bytes (about 2 MiB, since bash counts -f in 1024-byte blocks), which is not enough for the binary the Go linker is trying to write.
I'm guessing you meant to change the limit for number of open file descriptors. For this, you want to run:
ulimit -n 2048
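If you want to double-check from inside the program which limit is actually in effect, here is a minimal sketch (Linux only, standard library; this check is my addition and not part of the original script):

package main

import (
    "fmt"
    "syscall"
)

func main() {
    var nofile, fsize syscall.Rlimit

    // RLIMIT_NOFILE: maximum number of open file descriptors (what "ulimit -n" changes).
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &nofile); err == nil {
        fmt.Printf("open files: cur=%d max=%d\n", nofile.Cur, nofile.Max)
    }

    // RLIMIT_FSIZE: maximum size of files the process may create (what a bare "ulimit 2048" changes).
    if err := syscall.Getrlimit(syscall.RLIMIT_FSIZE, &fsize); err == nil {
        fmt.Printf("file size:  cur=%d max=%d\n", fsize.Cur, fsize.Max)
    }
}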
As for the original error (before changing the maximum file size): you're launching 1700 goroutines, each performing an HTTP GET. Each one opens a connection using a TCP socket, and those sockets count against the open file descriptor limit.
Instead, you should be limiting the number of concurrent downloads. This can be done with a simple worker pool pattern.
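Here is a minimal sketch of that pattern, assuming a plain GET is enough (the worker count is arbitrary and the two URLs are just the ones from the error output; the real list comes from the XML the gist parses):

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
)

func main() {
    // Placeholder list; in the real script the URLs come from the parsed XML.
    urls := []string{
        "https://www.boardgamegeek.com/xmlapi/boardgame/162152?pricehistory=1&stats=1",
        "https://www.boardgamegeek.com/xmlapi/boardgame/148517?pricehistory=1&stats=1",
    }

    jobs := make(chan string)
    var wg sync.WaitGroup

    // A fixed number of workers means at most this many sockets
    // (file descriptors) are open at any one time.
    const workers = 20
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range jobs {
                resp, err := http.Get(url)
                if err != nil {
                    log.Println(err)
                    continue
                }
                resp.Body.Close() // release the connection
                fmt.Println(url, resp.Status)
            }
        }()
    }

    for _, u := range urls {
        jobs <- u
    }
    close(jobs)
    wg.Wait()
}

With the downloads funneled through a fixed pool like this, the number of simultaneous connections never exceeds the worker count, so the default descriptor limit is not exhausted.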
I am using UFTP to transfer files between computers on the same subnet.
But when I use -H to send to only particular computers instead of all of them, it does not work as expected.
Let me explain in detail:
I have two Windows machines on the same network, with IPs 172.21.170.198 and 172.21.181.216 respectively.
From one of the systems, I used the command below to send the file:
uftp.exe -R 100000 -H 172.21.170.198,172.21.181.216 e:\setup.exe
But neither machine receives the file.
However, if I use this command, both machines receive the file:
uftp.exe -R 100000 E:\setup.exe
I want to know whether I have made a mistake. Please correct me if I am wrong.
If IPv6 isn't enabled, it would look like this, converting the IPv4 addresses to hex (with a converter like http://www.kloth.net/services/iplocate.php):
uftp.exe -R 100000 -H 0xAC15AAC6,0xAC15B5D8 e:\setup.exe
But if the client has an IPv6 address, the client ID is essentially taken from the last four bytes of it in reverse order. For example, if the address were "fe80::e5ca:e3ca:fea3:153f%5", the command would look like:
uftp.exe -R 100000 -H 0x3f15a3fe e:\setup.exe
(coming from "fe a3 15 3f")
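To make the conversion concrete, here is a small Go sketch that reproduces those IDs (the helper names are mine; it only applies the two rules described above, and error handling is omitted):

package main

import (
    "encoding/binary"
    "fmt"
    "net"
)

// clientIDv4 reads the four IPv4 octets as a big-endian 32-bit value,
// e.g. 172.21.170.198 -> 0xAC15AAC6.
func clientIDv4(s string) uint32 {
    ip := net.ParseIP(s).To4()
    return binary.BigEndian.Uint32(ip)
}

// clientIDv6 takes the last four bytes of the IPv6 address in reverse order,
// e.g. "...fea3:153f" -> 0x3F15A3FE.
func clientIDv6(s string) uint32 {
    ip := net.ParseIP(s).To16()
    return binary.LittleEndian.Uint32(ip[12:16])
}

func main() {
    fmt.Printf("0x%08X\n", clientIDv4("172.21.170.198"))            // 0xAC15AAC6
    fmt.Printf("0x%08X\n", clientIDv4("172.21.181.216"))            // 0xAC15B5D8
    fmt.Printf("0x%08X\n", clientIDv6("fe80::e5ca:e3ca:fea3:153f")) // 0x3F15A3FE (zone "%5" dropped; net.ParseIP does not accept it)
}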
I have plain vanilla CouchDB from Apache, running as an app on Mac OS X 10.9. If I try to attach an attachment larger than about 1 MB to a document, it just hangs and does nothing.
I have tried CouchDB on Linux, and there the sky is the limit.
I first thought it had to do with low limits on the Mac, but that doesn't seem to be the case:
➜ ~ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 709
-n: file descriptors 256
What is causing this, why, and how do I fix it?
Check the config files given by couchdb -c. You probably have this somewhere in them (for some unknown reason):
[couchdb]
max_attachment_size = 1048576 ; bytes
Remove or comment out the line and you should be fine.
Or maybe it was compiled with this value hardcoded, in which case you could add this line to one of the config files with a larger value.
Update
max_attachment_size is undocumented, so it is probably not safe to rely on. I leave the original answer as it seems to have solved the OP's problem, but according to the docs the attachment size should be unlimited. Also, attachment_stream_buffer_size is the config key controlling the chunk size of attachments, which might be relevant.
I'm trying to understand the behaviour of the ping command by experimenting on a Windows 7 PC.
On the command prompt, I issued the following command:
ping <some hostname> -l 4096
The output I get is
Pinging <some hostname> [xx.xx.xxx.xx] with 4096 bytes of data:
General failure.
General failure.
General failure.
General failure.
Ping statistics for xx.xx.xxx.xx:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
However, ping <same hostname> -l 32 works just fine.
So my question is: why does the server behave differently for different packet sizes? Is it doing something to thwart large packets, or is my local ping program configured by default not to send bigger ones?
Note that the -l flag lets you specify the size of the ping request's buffer.
Your ping packet is probably larger than the local medium's MTU (a 4096-byte payload does not fit in a standard 1500-byte Ethernet frame), and it's on a network type where fragmentation isn't allowed. Ethernet with IPv6 would be one such configuration.