Network problems on one device - Windows

I currently have a problem where some packets are getting dropped in my local network.
But only on one device.
Ping to local router
Here you can see a ping to my router. I only have this problem on my PC; my phone and laptops are completely fine.
I tried a network card and two WLAN USB sticks, all with the same problem.
Does somebody have a clue what could be causing these problems?
*OS: Windows 10 21H2
*CPU usage idling around 4-10%
*RAM usage 40%
*Network usage 0-1%
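To turn a ping screenshot like that into numbers, one option is to count replies versus timeouts in the raw `ping` output. A rough sketch, assuming English-language Windows `ping` output (the sample data is illustrative):

```python
import re

def ping_loss_stats(ping_output: str):
    """Parse Windows `ping` output and return (replies, timeouts, loss_pct)."""
    replies = len(re.findall(r"Reply from", ping_output))
    timeouts = len(re.findall(r"Request timed out", ping_output))
    total = replies + timeouts
    loss_pct = 100.0 * timeouts / total if total else 0.0
    return replies, timeouts, loss_pct

sample = """\
Reply from 192.168.1.1: bytes=32 time=1ms TTL=64
Request timed out.
Reply from 192.168.1.1: bytes=32 time=2ms TTL=64
Reply from 192.168.1.1: bytes=32 time=1ms TTL=64
"""
print(ping_loss_stats(sample))  # → (3, 1, 25.0)
```

Run `ping -n 100 192.168.1.1 > ping.txt` (substituting your router's address) and feed the file to the function; a sustained loss percentage above ~1% on a local link is worth investigating.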

Your question is a bit broad - there are so many things that can disturb a network connection, from physical issues (e.g. cable defects, Wi-Fi interference) to driver problems, CPU bottlenecks, etc. That being said, my tip would be a CPU bottleneck (an app using most or all of your CPU), but even that is by no means certain.
Take a look at your CPU usage with Task Manager or Process Explorer (from the Sysinternals suite). They both also show network usage. If your machine shows excessive CPU load (constantly over 30% with frequent peaks), then you might want to explore the reasons for that, and there can be many.
Using those same tools you can also try to identify apps that are possibly using a lot of network bandwidth.
Windows has a lot happening in the background, and those processes require resources (CPU, RAM, network, hard disk, etc.). Should any of those resources be limited, you can easily see issues like the ones you describe, as there is a certain interdependence between them: e.g. if you have many apps running with limited RAM, that leads to paging, and since the hard disk is slow, the CPU ends up busy shoveling data and can't keep up with the NIC's requests.
However, I am theorizing here. Supply some hard data (machine config, OS Info, Network info/config, task list, CPU usage, etc.) and we can continue.
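For the task list, `tasklist /fo csv` on Windows emits one CSV row per process. A sketch of summarizing its output (the column layout is assumed from a default English-language Windows install, and the sample data is hypothetical):

```python
import csv
import io

def top_memory_processes(tasklist_csv: str, n: int = 3):
    """Parse `tasklist /fo csv` output and return the n largest processes by memory.

    The Mem Usage column arrives as e.g. "120,500 K"; we strip the
    thousands separators and the unit before converting.
    """
    reader = csv.reader(io.StringIO(tasklist_csv))
    next(reader)  # skip the header row
    rows = []
    for row in reader:
        name, pid, mem = row[0], row[1], row[4]
        kb = int(mem.replace(",", "").rstrip(" K"))
        rows.append((name, int(pid), kb))
    rows.sort(key=lambda r: r[2], reverse=True)
    return rows[:n]

sample = '''"Image Name","PID","Session Name","Session#","Mem Usage"
"chrome.exe","1234","Console","1","512,000 K"
"svchost.exe","420","Services","0","35,812 K"
"notepad.exe","999","Console","1","8,120 K"'''
print(top_memory_processes(sample, 2))
```

Pipe the real output in with `tasklist /fo csv > tasks.csv`; non-English locales may use different separators, so treat the parsing as a starting point.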

Related

Cluster with Windows 7

Let me keep it simple. I am working in a company on software which has a built-in auto-marking system (which needs a lot of computing resources). There are many PCs in my department, all with Windows 7 32-bit and almost the same specs (same model, RAM, processor). They are connected by a LAN at 100 Mbps. Now I want to make a cluster of these computers so I can run that software on it, utilizing the maximum resources of all the computers. Is there any special software for that?
I would recommend using a much faster switch: because the machines are limited to 100 Mbps, you will see little performance gain, if any. Here is a link with instructions on how to set up a cluster with Windows, but I recommend getting at least a 1 Gbps switch, since the nodes will need to send data back and forth to each other. Of course, make sure that the computers' Ethernet ports support 1 Gbps.
https://social.technet.microsoft.com/wiki/contents/articles/2539.diy-supercomputing-how-to-build-a-small-windows-hpc-cluster.aspx
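The switch recommendation follows from simple arithmetic; a sketch of the transfer times involved (the 90% efficiency factor is an assumed allowance for protocol overhead, not a measured figure):

```python
def transfer_seconds(payload_bytes: float, link_mbps: float,
                     efficiency: float = 0.9) -> float:
    """Seconds to move payload_bytes over a link of link_mbps nominal speed,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    usable_bits_per_s = link_mbps * 1e6 * efficiency
    return payload_bytes * 8 / usable_bits_per_s

gb = 1e9
print(f"1 GB over 100 Mbps: {transfer_seconds(gb, 100):.0f} s")   # ~89 s
print(f"1 GB over 1 Gbps:   {transfer_seconds(gb, 1000):.0f} s")  # ~9 s
```

If the nodes exchange data on every work unit, that 10x difference in shuffle time dominates everything the extra CPUs buy you.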

Variable latency in Windows networking

I have a PLC that sends UDP packets every 24ms. "Simultaneously" (i.e. within a matter of what should be a few tens or at most hundreds of microseconds), the same PLC triggers a camera to snap an image. There is a Windows 8.1 system that receives both the images and the UDP packets, and an application running on it that should be able to match each image with the UDP packet from the PLC.
Most of the time, there is a reasonably fixed latency between the two events as far as the Windows application is concerned - 20ms +/- 5ms. But sometimes the latency rises, and never really falls. Eventually it goes beyond the range of the matching buffer I have, and the two systems reset themselves, which always starts back off with "normal" levels of latency.
What puzzles me is the variability in this variable latency - that sometimes it will sit all day on 20ms +/- 5ms, but on other days it will regularly and rapidly increase, and our system resets itself disturbingly often.
What could be going on here? What can be done to fix it? Is Windows the likely source of the latency, or the PLC system?
I 99% suspect Windows, since the PLC is designed for real time response, and Windows isn't. Does this sound "normal" for Windows? If so, even if there are other processes contending for the network and/or other resources, why doesn't Windows ever seem to catch up - to rise in latency when contention occurs, but return to normal latency levels after the contention stops?
FYI: the Windows application calls SetPriorityClass( GetCurrentProcess(), REALTIME_PRIORITY_CLASS ) and each critical thread is started with AfxBeginThread( SomeThread, pSomeParam, THREAD_PRIORITY_TIME_CRITICAL ). There is as little as possible else running on the system, and the application only uses about 5% of the available Quad-core processor (with hyperthreading, so 8 effective processors). There is no use of SetThreadAffinityMask() although I am considering it.
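The matching buffer described above can be sketched as nearest-neighbour pairing within a tolerance window. A simplified model (the 20 ms nominal offset and ±5 ms tolerance come from the numbers in the question; the timestamps and function shape are assumptions):

```python
def match_events(image_times, packet_times, nominal_offset_ms=20.0, tol_ms=5.0):
    """Pair each image timestamp with the packet whose arrival is closest to
    image_time + nominal_offset; reject pairs outside nominal_offset ± tol.
    Returns (image_time, packet_time, observed_latency) triples."""
    matches = []
    for t_img in image_times:
        expected = t_img + nominal_offset_ms
        t_pkt = min(packet_times, key=lambda t: abs(t - expected))
        if abs(t_pkt - expected) <= tol_ms:
            matches.append((t_img, t_pkt, t_pkt - t_img))
    return matches

# Images snapped every 24 ms; packets arrive 19-23 ms later (illustrative).
images = [0.0, 24.0, 48.0]
packets = [19.0, 45.5, 71.0]
print(match_events(images, packets))
```

The rejected-pair rate of such a matcher is itself a useful metric: logging it over a day would show exactly when the latency starts to climb out of the window.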
So, you have two devices, PLC and camera, which send data to the same PC using UDP.
I 90% suspect networking.
Either it is just a buffering/shaping mechanism in your switch/router (by the way, I hope your setup is isolated, i.e. you have not just plugged your hardware into a busy corporate network), or the network stack in either device, or maybe some custom retransmission mechanism in the PLC. Neither IP nor Ethernet was ever meant to guarantee low latencies.
To verify, use Wireshark to view the network traffic.
For the best experiment, you can use another PC with three network cards.
Plug your three devices (Windows client, PLC, camera) into that PC, and configure a network bridge between the three cards. This way the second PC will act as an Ethernet switch, and you'll be able to use Wireshark to capture all network traffic that goes through it.
The answer turned out to be a complex interaction between multiple factors, most of which don't convey any information useful to others... except as examples of "just because it seems to have been running fine for 12 months doesn't give you licence to assume everything was actually OK."
Critical to the issue was that the PLC was a device from Beckhoff to which several I/O modules were attached. It turns out that the more of these modules are attached, the less ability the PLC has to transmit UDP packets, despite having plenty of processor time and network bandwidth available. It looks like a resource contention issue of some kind which we have not resolved - we have simply chosen to upgrade to a more powerful PLC device. That device is still subject to the same issue, but the issue occurs if you try to transmit at roughly every 10ms, not 24ms.
The issue arose because our PLC application was operating right on the threshold of its UDP transmit capabilities. The PLC has to step its way through states in a state machine to do the transmit. With a PLC cycle of 2ms, the fastest it could ever go through the states in the state machine with the I/O modules we had attached turned out to be every 22ms.
Finally, what was assumed at first to be an insignificant and unrelated change on PLC startup tipped it over the edge and left it occasionally unable to keep up with the normal 24ms transmit cycle. So it would fall progressively further behind, giving the appearance of ever-increasing latency.
I'm accepting @Soonts' answer because careful analysis of some Wireshark captures was the key to unlocking what was going on.
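The "falls progressively further behind" behaviour is the classic signature of a sender operating at the edge of its capacity: once spare capacity per cycle is smaller than an occasional overrun, the backlog only grows. A toy queue model (all numbers illustrative, not measured from the real system):

```python
def simulate_latency(n, period_ms=24.0, service_ms=23.5,
                     hiccup_every=3, hiccup_ms=28.0):
    """Events arrive every period_ms; transmitting normally takes service_ms,
    but every hiccup_every-th event takes hiccup_ms (> period_ms).
    With these numbers, spare capacity (2 x 0.5 ms per group of 3) is less
    than the 4 ms overrun, so the sender never catches up."""
    free_at = 0.0
    latencies = []
    for i in range(n):
        arrival = i * period_ms
        start = max(arrival, free_at)          # wait until the sender is free
        cost = hiccup_ms if (i + 1) % hiccup_every == 0 else service_ms
        free_at = start + cost
        latencies.append(free_at - arrival)    # observed end-to-end latency
    return latencies

lats = simulate_latency(30)
print(round(lats[0], 1), round(lats[-1], 1))   # latency climbs from 23.5 to 55.0
```

Flip `service_ms` below 22 ms (the new PLC's headroom) and the latency recovers after each hiccup instead of ratcheting upward, matching the observed fix.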

Using PCI to interconnect motherboards

I've got a few old mobos and I was wondering whether it might be possible to create a pair of PCI header cards with interconnect wires, and write some software to drive the interconnect cards to allow one of the mobos to access the CPU and RAM on the other. I'm sure it would be an arduous undertaking, involving writing a device driver for the header boards and then writing an application to make use of the interconnect; perhaps a simple demo demonstrating a thread running on each processor and use of both sets of RAM, or perhaps a mini virtual machine that maps 2x3 GB of RAM on 32-bit mobos into a single 6 GB address space. A microcontroller may be needed on each PCI header card to act as a translator.
Given that mobos almost always have multiple PCI slots, I wonder if these interconnected card pairs could be used to daisy-chain mobos into a sort of high-speed Beowulf cluster.
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
Does anyone have any advice, or has this sort of thing been done before?
Update:
Thanks Martin. What you say makes sense, and it would also seem that if it were possible, it would have already been done before.
Instead, would it be possible to indirectly control the slave CPU by booting it from a "pretend" bootable storage device (hard disk, USB stick, etc.)? As long as the slave mobo thinks it's being operated by an operating system on a real device, it should work.
This could potentially extend to any interface (SATA, IDE, USB, etc.): if you connected two PCs together with a SATA/IDE/USB cable (plug one end of an IDE ribbon into one mobo and the other end into another mobo), that would be all the hardware you need. The key is in creating a new driver for that interface on the master PC, so that instead of the master PC treating the interface as having a storage device on it, it would be driven as a dummy bootable hard disk for the slave computer. This would still be a pretty difficult job for me, because I've never done device drivers before, but at least I wouldn't need a soldering iron (which would be much further beyond me). I might be able to take an open-source IDE driver for Linux, study it, and then butcher it to create something that kind of acts in reverse (instead of getting data off it, an application puts data onto it for the slave machine to access like a hard disk). I could then take a basic Linux kernel and try booting the slave computer from an application on the master computer (via the butchered master-PC IDE/SATA/USB device driver). For safety, I would probably try to isolate my customised driver as much as possible by targeting an interface not being used for anything else on the master PC (e.g. if the master PC uses only SATA hard disks and its IDE bus is normally unused, a custom IDE driver might cause fewer problems for the host system).
Does anyone know if anything like this (faking a bootable hard disk from another PC) has ever been tried before? It would make a pretty cool Hackaday video, but it could also seriously add a new dimension to parallel computing if it proved promising.
The PCI bus can't take over the other CPU.
You could make an interconnect that can transfer data from a program on one machine to another. An ethernet card is the most common implementation but for high performance clusters there are faster direct connections like infiband.
Unfortunately, PCI is more difficult to build cards for than the old ISA bus: you need surface-mount controller chips and specific track layouts to match the impedance requirements of PCI.
Going faster than a few megabits/s involves understanding things like transmission lines and the characteristics of the connection cable.
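The data-transfer interconnect described above (Ethernet, InfiniBand) ultimately boils down to ordinary program-to-program transfer over a socket. A minimal loopback sketch, with Python standing in purely for illustration (a real cluster would use MPI or raw sockets in C over the dedicated link):

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Listen on an ephemeral port, accept one connection, echo its data back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    chosen_port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096)
            conn.sendall(data)      # echo: the "other board" answers
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return chosen_port

# One process plays each motherboard; over a real interconnect, the
# address would be the other machine's IP instead of loopback.
port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello from the other board")
echoed = cli.recv(4096)
cli.close()
print(echoed)  # b'hello from the other board'
```

This is message passing, not shared memory: each board keeps its own address space, which is exactly why the original "access the CPU and RAM on the other" goal doesn't map onto this kind of link.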
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
LOL. PCI is a 32-bit, 33 MHz bus at minimum, so it is simply out of reach for an ATmega.
But your idea of:
a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards to allow one of the mobos to access the cpu and ram on the other [...]
This is cheaply possible with just a pair of PCI FireWire (IEEE 1394) cards (and a FireWire cable). There is even a Linux driver that allows remote debugging over FireWire.

Decreasing performance of dev machine to match end-user's specs

I have a web application, and my users are complaining about performance. I have been able to narrow it down to JavaScript in IE6 issues, which I need to resolve. I have found the excellent dynaTrace AJAX tool, but my problem is that I don't have any issues on my dev machine.
The problem is that my users' computers are ancient, so timings which are barely noticeable on my machine are perhaps 3-5 times longer on theirs, and suddenly the problem is a lot larger. Is it possible somehow to degrade the performance of my dev machine, or preferably of a VM running on my dev machine, to the specs of my customers' computers?
I don't know of any virtualization solutions that can do this, but I do know that the computer/CPU emulator Bochs allows you to specify a limit on the number of emulated instructions per second, which you can use to simulate slower CPUs.
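For reference, that limit lives in the bochsrc `cpu` directive's `ips` (instructions per second) parameter. A fragment might look like the following (the values are illustrative; check the Bochs documentation for your version's defaults and accepted options):

```
# bochsrc fragment: throttle the emulated CPU to roughly an
# old-machine speed instead of whatever the host can manage
cpu: count=1, ips=10000000
megs: 128
```

Because Bochs emulates rather than virtualizes, everything inside it slows uniformly, which is closer to "an old PC" than simply starving a VM of RAM.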
I am not sure if you can constrain the CPU, but in VirtualBox or Parallels you can bound the memory usage. I assume if you only give it about 128 MB then it will be very slow. You can also limit network throughput with a lot of tools. I guess the only thing I am not sure about is the CPU - that's tricky. Curious to know what you find. :)
You could get a copy of VMWare Workstation and choke the CPU of your VM.
With most virtual PC software you can limit the amount of RAM, but you are not able to set the CPU to a slower speed as it does not emulate a CPU, but uses the host CPU.
You could go with emulation software like Bochs, which will let you set up an x86 processor environment.
You may try Fossil Toys
* PC Speed
PC CPU speed monitor / benchmark. With logging facility.
* Memory Load Test
Test application/operating system behaviour under low memory conditions.
* CPU Load Test
Test application/operating system behaviour under high CPU load conditions.
Although it doesn't simulate a specific CPU clock speed.

What could be the effect on performance of moving from virtual VMware to physical servers?

I was wondering what the effect, and the possible advantages/disadvantages, of replacing virtual VMware machines with physical servers would be on the performance of a web application.
Unfortunately there's not nearly enough information to give any advice on this.
If you have one VMware ESX server on a high-end hardware box, converting it to a physical server will give you a minimal performance advantage.
But there are SO many variables here, your application could be going slower than it would on a physical machine for a number of VMware configuration reasons. Generally a properly configured VMware infrastructure in a production environment won't be much slower than the physical equivalent with the same allocated resources.
You need to look at where your performance is being hit - are you CPU, memory, drive, network bound? You need to understand what is slowing you down before you can even start to ask questions like this.
If you are CPU bound, can you move clients between your host boxes to even out the load? If you are memory bound, can you increase the RAM assigned to the bound client? If you are drive bound, you need to sort out your SAN throughput. If you are network bound, then adding an additional network card to the host machine and adding an additional virtual network to spread the load can help.