Windows 7 ping general failure

I'm trying to understand the behaviour of the ping command, experimenting on a Windows 7 PC.
At the command prompt, I issued the following command:
ping <some hostname> -l 4096
The output I get is
Pinging <some hostname> [xx.xx.xxx.xx] with 4096 bytes of data:
General failure.
General failure.
General failure.
General failure.
Ping statistics for xx.xx.xxx.xx:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
However, ping <same hostname> -l 32 works just fine.
So my question is: why does the server behave differently for different packet sizes? Is the server trying to thwart larger packets? Or is my local ping program configured by default in such a way as to not send bigger packets?
Note that the -l flag lets you specify the ping request's buffer size.

Your ping packet is probably larger than the local medium's MTU, and it's on a network type where fragmentation isn't allowed. IPv6 over Ethernet would be one such configuration.
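One way to test this, assuming IPv4 on Windows, is to set the Don't Fragment flag with -f and vary the payload size; 1472 bytes of ICMP payload plus 28 bytes of IP/ICMP headers exactly fills a standard 1500-byte Ethernet MTU:

ping <some hostname> -f -l 1472

If the local MTU is the limit, sizes above that should report that the packet needs to be fragmented rather than succeed.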

Multiple pings from a client to a server

I have two Virtual Machines. One is Windows (say, the client). The other is CentOS (say, the server).
My Windows (client) IP is 1.1.1.1.
My CentOS (server) has an app running and listening on port 12345. The IP address of the CentOS VM is 2.2.2.2.
I want to generate multiple pings from the Windows VM, from a specific source IP/port to a specific destination IP/port on the CentOS VM,
i.e. generate multiple pings from a certain source port + source IP of 1.1.1.1 to destination IP 2.2.2.2 + destination port 12345.
I am looking for something like the following:
ping DIP D.Port SIP S.Port -count 1000
Please note: I need to run this ping from the Windows CMD.
Is there a way I can do this from the Windows command line?
Check your system docs; on mine it is:
Mac_3.2.57$ ping -S 10.0.0.148 -c 100000 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 10.0.0.148: 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=25.521 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=18.837 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=16.605 ms
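On Windows, the closest built-in equivalents are -S for the source address and -n for the count; note that ICMP echo has no notion of ports, so the port half of the request cannot be expressed with ping at all:

ping -S 1.1.1.1 -n 1000 2.2.2.2

To exercise a specific TCP/UDP port such as 12345, you would need a tool that speaks TCP or UDP rather than ICMP.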

How to capture full packets using Windows pktmon

I am trying to use pktmon (the built-in Windows packet analyzer). The documentation mentions that by default the captured packet size is limited to 128 bytes, but that it can be increased with the following command: pktmon start --etw -p 0.
But running that command gives me this error: Error: '0' is not a valid event provider Id. What could be wrong?
So far I've not seen anything helpful on the internet.
Most of the examples on the internet show
pktmon start --etw -p 0 -c 1
The -p doesn't seem to work, and neither does the -c.
So what worked for me is
pktmon start --etw --pkt-size 0 --comp 1
From the utility help:
--pkt-size
Number of bytes to log from each packet. To always log the entire
packet set this to 0. Default is 128 bytes.
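A complete capture session might then look like this; the etl2pcapng step assumes a recent Windows 10 build and the default PktMon.etl log name (older builds only offer pktmon format):

pktmon start --etw --pkt-size 0 --comp 1
rem reproduce the traffic you want to capture, then:
pktmon stop
pktmon etl2pcapng PktMon.etl -o PktMon.pcapng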

Executing a simple command over ssh costs a few seconds

I find that executing a simple command over ssh costs more than a second. Is this normal? If not, how can I speed it up?
[root@ops-test-vm-154:~]# time ssh root@10.17.1.155 'echo "hello,world!"'
hello,world!
real 0m1.805s
user 0m0.009s
sys 0m0.005s
There is low latency between vm-154 and vm-155:
[root@ops-test-vm-154:~]# ping 10.17.1.155
PING 10.17.1.155 (10.17.1.155) 56(84) bytes of data.
64 bytes from 10.17.1.155: icmp_seq=1 ttl=64 time=0.142 ms
64 bytes from 10.17.1.155: icmp_seq=2 ttl=64 time=0.136 ms
64 bytes from 10.17.1.155: icmp_seq=3 ttl=64 time=0.129 ms
64 bytes from 10.17.1.155: icmp_seq=4 ttl=64 time=0.110 ms
^C
--- 10.17.1.155 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4421ms
rtt min/avg/max/mdev = 0.110/0.128/0.142/0.014 ms
BTW: I need to check a service's status in real time by executing a script on vm-155, so vm-154 runs ssh vm-155 status.sh every second. But even a simple echo helloworld command costs more than a second, so this solution is terrible. I hope to speed it up, or maybe there is a better solution.
Best Wishes!
Here is vm-155's /etc/ssh/sshd_config. I added UseDNS no and ran service sshd restart, but it still takes more than a second to echo hello,world!
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
X11Forwarding yes
UseDNS no
Subsystem sftp /usr/libexec/openssh/sftp-server
One thing that you could try is to run SSH in verbose mode and see at which stage it wastes the most time:
ssh -vvv root@10.17.1.155 'echo "hello,world!"'
Then, based on your findings, adapt your ssh config file to exclude slow cipher suites and other CPU-intensive things. Some tips about that here.
However, you will not be able to achieve close to real-time performance over ssh if you establish a new connection every time. You could put your script/command into a loop and set the sleep value to 1s:
ssh root@10.17.1.155 'while true; do echo "Hello, world!"; sleep 1s; done'
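Another option, assuming OpenSSH on the client, is connection multiplexing: the first ssh call establishes a master connection and later calls reuse it, which typically brings per-command latency down to tens of milliseconds. A minimal ~/.ssh/config sketch on vm-154:

Host 10.17.1.155
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

With this in place, running ssh root@10.17.1.155 status.sh every second no longer pays the full handshake cost each time.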
But I would use something designed for this kind of application, like the SNMP protocol. Here is an example configuration:
https://www.incredigeek.com/home/snmp-and-shell-script/
One source of delays during the SSH connection process is DNS lookups by the server. When a client connects to the server, the server can optionally look up the IP address of the client to get its hostname. Depending on a variety of issues, the query may take anywhere from a fraction of a second to ten seconds or more to complete.
The most widely deployed SSH server is OpenSSH. The OpenSSH sshd server has a setting named UseDNS which controls whether it performs DNS queries on incoming connections or not:
UseDNS
Specifies whether sshd(8) should look up the remote host name, and to check that the resolved host name for the remote IP address maps back to the very same IP address.
You should check that UseDNS is set to "no" on the server which you're connecting to.
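Given that the posted sshd_config also has GSSAPIAuthentication yes, which is another common source of multi-second delays, it may also be worth a quick test with it disabled from the client side:

ssh -o GSSAPIAuthentication=no root@10.17.1.155 'echo "hello,world!"'

If that is fast, set GSSAPIAuthentication no permanently in the client or server configuration.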

How can I specify which protocol to use (IPv4 or IPv6) when pinging a website (bash)?

I currently have a shell script which simply takes a URL as an argument and then sends a ping request to it as follows:
ping -c 5 $1
I am required to ping the site using IPv4 and IPv6 where possible; I will then compare the results. I have read the ping man page and cannot see a flag which specifies which protocol to use; I was expecting it to accept -4 for IPv4 and -6 for IPv6, but this does not seem to be the case.
I came across the DNS lookup utility dig, which looks promising, but I have not managed to work it into my code. My script must take a URL as an argument and no other arguments. I hope this is clear, and thanks for your help.
Use ping and ping6, which are available in most distributions:
/tmp $ dig google.com A google.com AAAA +short
172.217.4.174
2607:f8b0:4007:801::200e
/tmp $ ping -c 2 172.217.4.174
PING 172.217.4.174 (172.217.4.174): 56 data bytes
64 bytes from 172.217.4.174: icmp_seq=0 ttl=53 time=35.619 ms
64 bytes from 172.217.4.174: icmp_seq=1 ttl=53 time=34.220 ms
/tmp $ ping6 -c 2 2607:f8b0:4007:801::200e
PING6(56=40+8+8 bytes) 2602:306:b826:68a0:f40e:abca:efdb:71f --> 2607:f8b0:4007:801::200e
16 bytes from 2607:f8b0:4007:801::200e, icmp_seq=0 hlim=55 time=77.735 ms
16 bytes from 2607:f8b0:4007:801::200e, icmp_seq=1 hlim=55 time=81.518 ms
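Putting it together, here is a sketch of a wrapper that still takes only the name as its single argument (the grep filters are assumptions, to separate addresses from any CNAME targets in dig's output):

#!/bin/bash
host="$1"
# First A (IPv4) and AAAA (IPv6) record, if any
ipv4=$(dig +short "$host" A | grep -m1 -E '^[0-9.]+$')
ipv6=$(dig +short "$host" AAAA | grep -m1 ':')
[ -n "$ipv4" ] && ping -c 5 "$ipv4"
[ -n "$ipv6" ] && ping6 -c 5 "$ipv6"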

Ping in shell scripts: some packet loss, but exit code $? equals zero. How can I detect it?

Sometimes my DSL router fails in this strange manner:
luis@balanceador:~$ sudo ping 8.8.8.8 -I eth9
[sudo] password for luis:
PING 8.8.8.8 (8.8.8.8) from 192.168.3.100 eth9: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=69.3 ms
ping: sendmsg: Operation not permitted
64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=68.0 ms
ping: sendmsg: Operation not permitted
64 bytes from 8.8.8.8: icmp_seq=5 ttl=47 time=68.9 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=47 time=67.2 ms
ping: sendmsg: Operation not permitted
64 bytes from 8.8.8.8: icmp_seq=8 ttl=47 time=67.2 ms
^C
--- 8.8.8.8 ping statistics ---
8 packets transmitted, 5 received, 37% packet loss, time 7012ms
rtt min/avg/max/mdev = 67.254/68.183/69.391/0.906 ms
luis@balanceador:~$ echo $?
0
As can be seen, the exit code $? is 0, so a script cannot simply detect that the command failed from the exit status alone.
What is the proper way to detect that there was some packet loss?
Do I need to parse the output with grep, or is there a simpler method?
According to the man page, by default (on Linux), if ping does not receive any reply packets at all, it exits with code 1. If a packet count (-c) and a deadline (-w, in seconds) are both specified, and fewer packets than requested are received before the deadline, it also exits with code 1. On other errors it exits with code 2.
ping 8.8.8.8 -I eth9 -c 3 -w 3
So, the error code will be set if 3 packets are not received within 3 seconds.
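A minimal guard in a script could then look like this (a sketch, assuming Linux iputils ping):

if ! ping -c 3 -w 3 -I eth9 8.8.8.8 > /dev/null; then
    echo "ping failed or lost packets" >&2
    exit 1
fi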
As @mklement0 noted, ping on BSD behaves in a slightly different way:
The ping utility exits with one of the following values:
0 - at least one response was heard from the specified host.
2 - the transmission was successful but no responses were received.
So, in this case one could work around it by sending the pings one at a time in a loop:
ip=8.8.8.8
count=3
for i in $(seq "${count}"); do
    ping "${ip}" -I eth9 -c 1
    if [ $? -eq 2 ]; then
        # propagate ping's failure exit code
        exit 2
    fi
done
Of course, if you need full statistics, count the "2" and "0" exit codes in variables inside the loop, then print the result or set the exit code after the loop.
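If you want the loss percentage itself, parsing the summary line works too; this sketch assumes GNU grep for the -oP Perl-regex flags:

loss=$(ping -c 5 -I eth9 8.8.8.8 | grep -oP '\d+(?=% packet loss)')
if [ "${loss:-100}" -gt 0 ]; then
    echo "detected ${loss}% packet loss"
    exit 1
fi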
