Win7 tracert in batch file to find occasional high-latency packet

I'm working on a LAN issue which manifests as an occasional very-high-latency packet (~30 ms vs. typical 3 ms round-trip times). Using Windows ping -t I discovered that we're looking at one packet out of a hundred or more. I'd like to write a script that runs tracert until it gets a high-latency hop (say 20+ ms) and then records the tracert output for additional diagnostics.
The problem is, this is not my desktop system, and none of the handy add-on tools (awk, sed, perl, etc.) I'd normally use for manipulating the output file are available.
Is there a way to collect this information with just Win7 command line commands?
Alternatively, is there a better command or approach for diagnosing this issue?
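(For what it's worth, cmd's own for /f and findstr can do this kind of crude parsing. Below is a minimal, untested sketch of one approach using only built-in commands, saved as a .bat file; it assumes the English-locale IPv4 reply format "time=Nms" in ping's output, and 10.0.0.1 stands in for the real target:

@echo off
setlocal enabledelayedexpansion
set TARGET=10.0.0.1

:loop
rem Grab the "3ms" token from the "time=3ms" field of a single ping reply.
set REPLY=0ms
for /f "tokens=7 delims== " %%A in ('ping -n 1 %TARGET% ^| findstr /c:"time="') do set REPLY=%%A
rem Strip the "ms" unit so the value can be compared numerically.
set /a MS=!REPLY:ms=!
if !MS! GEQ 20 (
    echo %date% %time%  latency !MS! ms - capturing tracert >> latency.log
    tracert -d %TARGET% >> latency.log
)
goto loop

Note that the spike may already have passed by the time tracert starts, so the captured route is only suggestive; the timestamped latency line is the reliable part.)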

Related

Packet Corruption: Why does ffmpeg .bat batch video editing sometimes make my computer unstable and unable to restart?

I'm doing very time-consuming ffmpeg video editing. That's why I put my commands into a .bat batch file and run them overnight. Usually that works fine, but from time to time when I look the next morning I see an error message of this kind:
From that state on, I didn't find any good way to close the console. When I press the [x] button in the top right corner, it freezes. When I try to kill the application using the task manager nothing happens. Even explorer.exe cannot be closed using the task manager. A shutdown won't do anything. During the last month I had this problem about three times and the only way I could close it was to long press the power button of the computer until it was turned off "the bad way".
Any ideas what to do in such situations?
Or even better: How to prevent those situations?
What can the reason(s) be for the error?
Do you understand the message?
When the computer is started again the next morning and I run the same .bat file again, everything works fine. So the same error does not repeat, and the video is edited nicely!
Edit: Now, about one week after posting this question, the problem has occurred many more times! It is very annoying. I guess it has to do with the external hard drive connected by USB: sometimes it randomly drops the connection! That might be the reason for the behavior. Whatever is causing the error, I want to learn how to deal with this in the future. I don't want to always push the reset button of my computer; I want a proper way to be able to shut it down.
To narrow down what is causing this error and what is not, here is a list of seven seemingly isolated checks; each alone, or all together, should fix your problem:
The .bat Batch File
Apparently there is nothing wrong with your .bat batch files.
If that were the case, none of your past videos would have rendered.
But just to be sure, run your .bat on a different laptop or computer against your heaviest, most demanding video-editing project files, to confirm that the .bat files are in fact flawless.
The Computer CPU
Make sure that your CPU runs flawlessly not just for 30 minutes but for burn tests as long as the overnight video projects you mention. Poor contact between a concave or convex heatsink and the CPU, or too little or too much thermal paste, can make the CPU overheat and become unstable during prolonged CPU-intensive loads. A tool like OCCT or IntelBurnTest should, in your case, be able to run for hours without a single fault.
The Computer RAM
To test your memory you can use MemTest86 or my favourite, the open-source MemTest86+; either should run for hours without a single memory error.
The OS Integrity
Run CMD as admin and type chkdsk c: /f or chkdsk c: /f /r /x, then press Y to check and repair (after a reboot) the local drive c: or any other partition that is the source or destination of your rendering projects. A sudden shutdown can leave the file system corrupted, and a corrupted file system can in turn corrupt OS files; this check verifies the integrity of the volume. Also run sfc /scannow, which scans and repairs the most important system files.
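For reference, the whole pass can be typed from an elevated Command Prompt like this (a sketch; adjust the drive letter to wherever your project files live):

rem Schedule a full check and repair of C:; answer Y to run it at the next reboot if the volume is in use.
chkdsk C: /f /r /x
rem Scan and repair protected Windows system files.
sfc /scannow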
The Hard Drive
Connect your external drive locally, and run both a short and a long (deep) test to make sure the hard drive has zero cluster faults. A SMART readout from CrystalDiskInfo (by the developers famous for CrystalDiskMark) can be a good way to see all the past errors on a hard drive. Also, try running the nightly batch files with the HDD connected internally. That way you can rule out the next item:
The Cable Quality
Cat-rated UTP network cables and USB cables are notorious for poor manufacturing quality and low reliability. Not just over time: even new out of the box they can be the cause of disconnects, bad connections and low throughput. It's not as simple as a cable working 100% or 0%; sometimes they sit right in between and "work, but only to a degree", just enough to be sold, with bare-minimum (and sometimes below-minimum) quality strands that are anything but copper. So check your cables, and swap in other cables you have lying around. CCA (copper-clad aluminium) is the garbage to stay away from; get proper copper-only cables.
USB to SATA (HDD) or M.2 NVMe (SSD) Adapter Chip
Some USB-to-SATA adapters are notorious for their low stamina: the adapter chips stop working when exhausted by prolonged, continuous professional workloads, resulting in disconnects even when connected to the computer by a good copper USB 3.2 cable! The internet is full of forum threads about older-generation, cheaper JMicron chips causing interruptions and failures when copying files to or from the PC. Realtek chips are somewhat better, but the last page of those threads often shows that all the problems went away once people bought a more expensive adapter with an ASMedia chip.

Memory dump for a period of time

When a program is misbehaving, it is pretty easy to capture a memory dump of the process and then analyze it with a tool like WinDBG. However, this is pretty limited: you only get a snapshot of what the process is doing, and in some cases finding out why a certain part of the code was reached is really difficult.
Is there any way of capturing memory dumps for a period of time, like recording a movie rather than taking a picture, which would indicate what changed in that period of time, and the parts of the code that were executed in that time interval?
Recording many memory dumps
Is there any way of capturing memory dumps for a period of time, like recording a movie rather than taking a picture
Yes, that exists. It's called ProcDump: you define the number of dumps with the -n parameter and the seconds between dumps with -s (the -ma below requests a full memory dump). It might not work well for small values of -s, because writing a dump can take longer than the interval itself.
Example:
procdump -ma -n 10 -s 1 <PID> ./dumps
However, this technique is usually not very helpful, because you now have 10 dumps to analyze instead of just 1 - and analyzing 1 dump is already difficult. AFAIK, there's no tool that would compare two dumps and give you the differences.
Live debugging
IMHO, what you need is live debugging, and that's possible with WinDbg, too. Development debugging (using an IDE) and production debugging are two different skills, and you don't need to install a complete IDE such as Visual Studio on your customer's production environment. In fact, if you copy an existing WinDbg installation onto a USB stick, it will run portably.
Simply start WinDbg, attach to a process (F6), start a log file (.logopen), set up Microsoft symbols, configure exceptions (sx) and let the program run (g).
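A session along those lines might look like this (a sketch; these are commands typed at the WinDbg prompt, and the log path is a placeholder):

.logopen c:\temp\debug.log   $$ record everything that follows into a log file
.symfix                      $$ point the symbol path at the Microsoft public symbol server
.reload                      $$ (re)load symbols for the modules now that the path is set
sxe av                       $$ example sx setting: break when an access violation is raised
g                            $$ let the program run until something happens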
Remote debugging
Perhaps you may even want to have a look into WinDbg's remote debugging capabilities, however, that's a bit harder to set up, usually due to IT restrictions (firewall etc.).
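For reference, the basic TCP variant looks something like this (a sketch; the PID, port, and host name are placeholders):

rem On the customer's machine: attach to the target PID and listen for a debugger client.
windbg -server tcp:port=5005 -p 1234
rem On your machine: connect to that session over the network.
windbg -remote tcp:server=CUSTOMERPC,port=5005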
Visual Studio also offers remote debugging, so you can use VS on your machine and install just a smaller helper on your customer's machine. I have hardly any experience with it, so I can't tell you much.
Logging
the parts of the code that were executed in that time interval?
The most typical approach I see applied at any company is turning on the logging capabilities of your application.
You can also record useful data with WPT (Windows Performance Toolkit), namely WPR (Windows Performance Recorder) and later analyze it with WPA (Windows Performance Analyzer). It will give you call stacks over time.
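For example, a CPU-sampling trace can be recorded from an elevated prompt like this (a sketch using WPR's built-in CPU profile; the output path is a placeholder):

wpr -start CPU
rem ... reproduce the behaviour you want to analyze ...
wpr -stop C:\traces\myapp.etl

Opening the resulting .etl file in WPA then shows the sampled call stacks over time.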

How to prevent script from bringing computer to near standstill

I have a .bat script which is rather complex and runs every 15 minutes. It opens a browser, runs an iMacros macro to sign in and download a file, closes the browser, extracts the file, runs a JavaScript which verifies that the downloaded file is more recent than the one downloaded 15 minutes earlier, opens Excel, imports the downloaded file, triggers a very involved VBScript, exports a csv file, closes Excel, opens a new browser, logs in to a 2nd site, uploads the csv file and closes the browser again.
Meanwhile I'm doing my job, which requires many browser tabs open in several different browsers, and web development software.
While the script is running, my computer will frequently come to a near standstill, preventing me from doing any other work - presumably because the CPU usage is maxed out. Not only can I not do other work, but my script frequently fails to complete because the browser is so slow it times out before the page loads.
Task Manager tells me that my CPU usage while running the script is 98-100% and I'm using 7 out of 8 GB of RAM. Obviously, I'm pushing my computer to its limits. Is there anything I can do to help minimize the slowdown, such as allocate some RAM, partition my hard drive, make a sacrifice to the processor gods, etc.? My computer is a 64-bit machine running Windows 7 Pro with 8 GB of RAM and a 3.00 GHz processor. I can't get a new computer but I can probably ask for additional RAM if it would help.
I don't know very much about performance optimization, so any suggestions are welcome. I can't stop using the script, run it less often, or run it on a different computer.
If a script loops to repeat a task immediately, CPU usage will rise to very high levels.
Using timeout or ping to generate a delay between loop iterations reduces the CPU usage.
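A minimal sketch of the idea, saved as a .bat file (the first rem line stands in for your actual work):

@echo off
:loop
rem ... do the real work here ...
rem Wait 900 seconds (15 minutes) before the next pass instead of looping immediately.
timeout /t 900 /nobreak >nul
goto loop

On systems where timeout is unavailable, ping -n 901 127.0.0.1 >nul gives roughly the same 900-second pause.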

concurrent pipelining in windows with cygwin

Let's say I had a series of operations I wanted to apply to some data. The programs implementing the operations are not necessarily written in the same language, but they all work by reading from STDIN and writing to STDOUT.
In a Unix environment it can be set up as a pipeline like:
cat data.txt | prog1.sh | prog2.pl | prog3.py | prog4 > out.txt
and it will execute the 4 operations concurrently on the stream of data.
Does the same happen in Windows?
I remember testing this out a few years ago with Cygwin on Windows XP, but I only saw a single prog running in the task manager.
Has anything changed with Cygwin, the newer XP service packs, or Windows 7/8 that would allow for concurrent pipelining? Or has it always worked and I just made a silly mistake in my tests?
I don't have access to a Windows machine right now or I'd test it out myself. If someone knows what's going on, I'd appreciate any help.
While the Unix-like layer implemented by Cygwin has many flaws compared to a native POSIX system or to native Windows programming (especially where performance is concerned), the pipes it implements are quite "real." The programs in the pipeline will run concurrently and will process the data they receive in parallel.
However, as with any pipeline, the speed of the entire operation will be determined by the speed of the slowest component. So if one of the programs in the pipeline is markedly less efficient than the others, it will dominate the CPU usage in the process list.
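If you want to verify this yourself, start a long-running pipeline and look at the process list from a second cmd window; every stage should show up as its own live process. A sketch, assuming the script stages run under interpreters named perl and python:

tasklist | findstr /i "perl python"

Each matching line is a separate, concurrently running process, and the same processes are visible in Task Manager while the pipeline is busy.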

Windows 7: poor GUI response in my program while downloading data; is there some way to improve this?

I've written a program that (among other things) downloads multiple large files from a server on the LAN, using TCP. This program runs fine under Linux, MacOS/X, and generally under Windows as well (it uses Qt for the GUI and straight socket calls for networking), but on certain Windows machines the download appears to be too much for the machine to handle, and I'm wondering if anyone has any ideas as to why that is and what can be done about it.
When downloading files, my program spawns a separate I/O thread that basically just sits in a loop, downloading data over TCP and writing it to a file, writing 128KB per call to QFile::write(). Each file is typically several hundred megabytes long, and a typical download session writes out several dozen of these files. Note that the I/O thread runs independently of the GUI thread, so I wouldn't expect it to affect the GUI's performance much if at all -- especially not when running on a multicore PC.
The PC in question is a Core 2 Quad Q6600 running at 2.40GHz, with 4GB of RAM. It's running Windows 7 Ultimate SP1, 32-bit. It is receiving data over a Gigabit Ethernet connection and writing it to files on the NTFS-formatted boot partition of the 232GB internal Hitachi ATA drive.
The symptom is that sometimes during a download (seemingly at random) the program's GUI will become non-responsive for 10 to 30 seconds at a time, and often the title bar of the window will have "(not responding)" appended to it. The symptom will then clear up again and the download will proceed normally again. Another symptom is that the desktop is extremely sluggish during the download... for example, if I click on the "Start" button, the Start menu will take ~30 seconds to populate, instead of being populated near-instantaneously as I would expect.
Note that Task Manager shows plenty of free memory, but it does show short spikes of CPU usage to 100% on one of the 4 cores at the same time the problems are seen.
The data is arriving over Gigabit Ethernet, and if I have my program just receive the data and throw it away (without writing it to the hard drive), the machine can maintain a constant download rate of about 96MB/sec without breaking a sweat. If I write the received data to a file, however, the download rate decreases to about 37MB/sec, and the symptoms described above start to appear.
The interesting thing is that just for curiosity's sake I added this call to my I/O thread's entry function, just before the beginning of its event loop:
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
When I did that, the "(not responding)" symptoms cleared, but then download speed was reduced to only ~25MB/sec.
So my questions are:
Does anyone know what might be causing the sporadic hangups of the GUI when the hard drive is under a heavy write-load?
Why does lowering the I/O thread's priority cause the download rate to drop so much, given that there are three idle cores on the machine? I would think that even a lower-priority thread would have plenty of CPU available in this situation.
Is there any way to get a maximum download rate without causing Windows' desktop responsiveness and/or my app's GUI responsiveness to suffer problems?
Without seeing any code it's hard to answer, but this seems to be related to the processor and to the fact that your download thread is not leaving any room for other threads to perform their operations.
It seems it never waits, and possibly the network card driver is not well written.
Are you sure your thread enters an idle state when there is no incoming data?
On an OS with a single processor, a for (;;) {} busy loop will consume 100% CPU, and if it talks continuously to the kernel it may stall other processes or threads while doing so, especially if there is a bug or very bad behaviour in some network card driver, as may be the case here.
By putting the thread priority below normal, you are probably asking the OS to schedule your thread less often, and by some combination of effects that keeps things from hanging as much.
Check the code; maybe you are forgetting something?
Check whether adding a Sleep(0) to force the OS to yield to another thread now and then makes things better. But this is a temporary fix: you should find out why your thread is consuming 100% CPU, if indeed it is.
