How to prevent script from bringing computer to near standstill - performance

I have a .bat script which is rather complex and runs every 15 minutes. It opens a browser, runs an iMacro to sign in and download a file, closes the browser, extracts the file, runs a JavaScript that verifies the downloaded file is more recent than the one downloaded 15 minutes earlier, opens Excel, imports the downloaded file, triggers a very involved VB script, exports a CSV file, closes Excel, opens a new browser, logs in to a second site, uploads the CSV file, and closes the browser again.
Meanwhile I'm doing my job, which requires many browser tabs open in several different browsers, and web development software.
While the script is running, my computer will frequently come to a near standstill, preventing me from doing any other work, presumably because the CPU usage is maxed out. Not only can I not do other work, but my script frequently fails to complete because the browser is so slow it times out before the page loads.
Task Manager tells me that my CPU usage while running the script is 98-100% and I'm using 7 of my 8 GB of RAM. Obviously, I'm pushing my computer to its limits. Is there anything I can do to help minimize the slowdown, such as allocate some RAM, partition my hard drive, make a sacrifice to the processor gods, etc.? My computer is a 64-bit machine running Windows 7 Pro with 8 GB of RAM and a 3.00 GHz processor. I can't get a new computer, but I can probably ask for additional RAM if it would help.
I don't know very much about performance optimization, so any suggestions are welcome. I can't stop using the script, run it less often, or run it on a different computer.

If a script loops back to repeat a task immediately, with no pause between iterations, CPU usage will rise to very high levels.
Using timeout or ping to generate a delay between loops reduces the CPU usage.
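As a minimal sketch of that idea (the loop body here is just a placeholder for whatever work your script actually does), a batch loop that pauses between iterations could look like this:

@echo off
:loop
REM ... do the real work here (download, convert, upload, etc.) ...
REM Wait 60 seconds without consuming CPU before the next iteration.
timeout /t 60 /nobreak >nul
REM On systems without the timeout command, a common substitute is:
REM ping -n 61 127.0.0.1 >nul
goto loop

timeout /t 60 /nobreak sleeps for 60 seconds while using essentially no CPU; the ping trick works because each echo request after the first adds roughly one second of waiting.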

Related

Access slow launch from .accdb, but not from within Access application

This just started happening for no good reason I can find.
If I launch the MSACCESS.EXE program and then open a database, the database opens within 1 second.
If I launch the same database by double-clicking on the .accdb file's icon, it takes about 40 seconds for the Access window to appear, and then less than 1 second after that for the database to open.
The database is local, and both Access and the DB are on an SSD. The system is an Asus Z97 motherboard, i7-4790K @ 4 GHz (not overclocked), with 32 GB of RAM and about 200 GB of free hard disk space.
In both cases, performance after opening is excellent with no issues. It appears it's only the launching of MSACCESS.EXE by double-clicking a .accdb file that is affected. I double-checked the file association for .accdb and it points to the correct executable.
I captured some data with Performance Monitor during the 40-second pause. MSACCESS.EXE is using about 0.4% CPU, doing almost no disk I/O, and there's no network activity.
I've already tried "Compact and Repair" but that had no effect.
This just started happening, and now seems to be affecting Access on ALL .accdb files. They open instantly from within Access but take 40 seconds to open when double-clicked. I haven't installed any new software or Windows updates recently.
Curiously, if I change the .accdb extension to .accdr (which runs the database in the Access runtime instead of full Access), the database will launch instantly.
What could possibly be going on here? I've searched the web and found some posts having to do with databases on a network share, but that doesn't apply here.
For anyone else encountering this issue, it appears this bug has nothing to do with Access specifically.
I needed to shut down the machine, and when I did so, Windows seemed to completely ignore multiple shutdown requests. As I was googling to troubleshoot, after about 10 minutes, the shutdown did finally start. It took another 10 minutes to shut down.
After rebooting, the slow-launch problem no longer occurs; there's only about a 2-second delay, which I assume is just MSACCESS.EXE loading "cold".
So, the problem is most likely in Windows and not Access.
I spent ages looking for the answers to this on various sites but eventually cobbled together my own fix, so hopefully this saves others some time.
This worked for me and reduced the load time from around 4 minutes (even when just opening a blank .accdb file) to seconds. So it was about 4 minutes when double-clicking an .accdb; once MS Access was open, using File | Open was fast.
I had two instances of MS Access, both on Windows Servers that can see the Internet but go through a corporate proxy, etc.
After getting some hints from Googling this issue, I suspected that the 4 minutes or so was some sort of timeout while trying to access a site or sites (MS Office apps do this), and that once the proxy returned a timeout, Access started responding again. It was quick on the second open because it didn't repeat the request.
Based on this, I tried to divert certain sites to 127.0.0.1 and turn off all the Internet options in Trust Centre | Privacy etc. Nothing worked.
Finally, I got the solution. In Windows Defender Firewall I created a new application rule for MSACCESS.EXE: an outbound rule that blocks all Internet traffic. After this, the first double-click was fast again. I assume that with traffic totally blocked, whatever request goes out to those sites is immediately stopped and returns a "no internet" result to Access, which then carries on executing rather than waiting for the 3-4 minute timeout.
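If you prefer to script the rule rather than click through the firewall UI, something along these lines should work from an elevated command prompt; this is only a sketch, and the path to MSACCESS.EXE is an assumption that depends on your Office version and install location:

REM Block all outbound traffic for Access so it stops waiting on web requests at startup.
netsh advfirewall firewall add rule name="Block MSACCESS outbound" dir=out action=block program="C:\Program Files (x86)\Microsoft Office\root\Office16\MSACCESS.EXE" enable=yes

The rule can be removed later with netsh advfirewall firewall delete rule name="Block MSACCESS outbound" if blocking turns out to break something you need.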

Make Chrome use the memory and CPU

So, yesterday I opened Task Manager in Win 8 (64-bit) and noticed that Chrome (32-bit, for some reason) wasn't using the full power my PC has. I was running an AI JavaScript program and noticed that my CPU was running at 1% and memory usage was only 120 MB, and that made me wonder why I should wait 5 minutes for it to finish instead of somehow boosting it to at least 60%. As far as I know, Windows automatically distributes hardware usage to programs, so I'm asking what the problem is:
Is it because it's x32?
Is it because I should manually configure windows to give it more power?
Note: I did search Google, but all I got is that people actually complain about High CPU usage and I've got the opposite.
32-bit doesn't make a difference here. JavaScript is inherently single-threaded, so by default (not counting web workers) it won't use more than a single core on your machine; it simply cannot. Also, memory usage doesn't necessarily tell you how hard a program is working: some programs need lots of memory, others only a little.
It's up to programs to use the resources of the machine most efficiently; if they don't, there is nothing you can do with Windows to make them run better or faster.

Loading times differ between localhost and remote

I'm running a website (made using Fuel MVC) which is hosted on a computer in my house, so I run it with
http://localhost
and it loads in 200 ms, more or less. This time is only the time the PHP script needs to execute, so the time to transfer across the network is not taken into account. It is just the execution time of the script.
However, when I access the same script from outside my network (from my PC at work, for example), that time measure is higher than one second for the same page, and I don't know why it takes almost 10 times longer to execute the PHP script when it is requested from an outside computer.
The computer I'm running the website on is a Windows 7 Ultimate machine with 3 GB of RAM.
Any idea?

Windows 7: poor GUI response in my program while downloading data; is there some way to improve this?

I've written a program that (among other things) downloads multiple large files from a server on the LAN, using TCP. This program runs fine under Linux, MacOS/X, and generally under Windows as well (it uses Qt for the GUI and straight socket calls for networking), but on certain Windows machines the download appears to be too much for the machine to handle, and I'm wondering if anyone has any ideas as to why that is and what can be done about it.
When downloading files, my program spawns a separate I/O thread that basically just sits in a loop, downloading data over TCP and writing it to a file, writing 128KB per call to QFile::write(). Each file is typically several hundred megabytes long, and a typical download session writes out several dozen of these files. Note that the I/O thread runs independently of the GUI thread, so I wouldn't expect it to affect the GUI's performance much if at all -- especially not when running on a multicore PC.
The PC in question is a Core 2 Quad Q6600 running at 2.40GHz, with 4GB of RAM. It's running Windows 7 Ultimate SP1, 32-bit. It is receiving data over a Gigabit Ethernet connection and writing it to files on the NTFS-formatted boot partition of the 232GB internal Hitachi ATA drive.
The symptom is that sometimes during a download (seemingly at random) the program's GUI will become non-responsive for 10 to 30 seconds at a time, and often the title bar of the window will have "(not responding)" appended to it. The symptom will then clear up again and the download will proceed normally again. Another symptom is that the desktop is extremely sluggish during the download... for example, if I click on the "Start" button, the Start menu will take ~30 seconds to populate, instead of being populated near-instantaneously as I would expect.
Note that Task Manager shows plenty of free memory, but it does show short spikes of CPU usage to 100% on one of the 4 cores, at the same time the problems are seen.
The data is arriving over Gigabit Ethernet, and if I have my program just receive the data and throw it away (without writing it to the hard drive), the machine can maintain a constant download rate of about 96MB/sec without breaking a sweat. If I write the received data to a file, however, the download rate decreases to about 37MB/sec, and the symptoms described above start to appear.
The interesting thing is that just for curiosity's sake I added this call to my I/O thread's entry function, just before the beginning of its event loop:
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
When I did that, the "(not responding)" symptoms cleared, but then download speed was reduced to only ~25MB/sec.
So my questions are:
Does anyone know what might be causing the sporadic hangups of the GUI when the hard drive is under a heavy write-load?
Why does lowering the I/O thread's priority cause the download rate to drop so much, given that there are three idle cores on the machine? I would think that even a lower-priority thread would have plenty of CPU available in this situation.
Is there any way to get a maximum download rate without causing Windows' desktop responsiveness and/or my app's GUI responsiveness to suffer problems?
Without seeing any code it is hard to answer, but this seems to be related to the processor and the fact that your download thread is not leaving any room for other threads to perform other operations.
It seems that it never waits, and possibly that the network card driver is not well written.
Are you sure your thread enters an idle state when there is no data incoming?
On an OS with a single processor, a for (;;) {} loop will consume 100% CPU, and if it talks continuously with the kernel it may stall other processes or threads in doing so, especially if there is a bug or very bad behaviour in a network card driver, as there might be in your case.
By putting the thread priority below normal you are probably asking the OS to schedule your thread less often, and by some combination of factors that keeps things from hanging too much.
Check the code; maybe you are forgetting something?
Check whether adding a sleep(0) to force the OS to yield to another thread occasionally makes things better, but this is a temporary fix; you should find out why your thread is consuming 100% CPU, if it is.

What is causing one Vista machine to be 10 times faster than another machine?

We run a Fortran console program we have run for years. Recently we purchased identical new HP server-class machines (4 processors, 8 GB of RAM, 4 hard drives) for everyone in the office. We configured them as nearly identically as we know how. We can compile the Fortran program on one machine and pass the executable to the different machines, and on two machines it executes painfully slowly, while on two others it has modest performance (but not as good as before we upgraded from the XP machines).
It produces almost no console output (about 40 lines) but writes about 15 MB of files.
We open task manager to see what's going on, and we see that on the slow machines it's loading ONE CPU to about 15%. On the fast machines it's loading ALL CPUs to about 40% (but one of them seems to load more than the others). As I recall, on XP it loaded the CPU to 99%, and ran much faster.
These machines are the employees' general purpose machines, and have lots of company bloatware on them. And there is the possibility they have slightly different directory structures. But what seems totally puzzling to me is why Vista is not giving them more CPU time. If the CPUs were loading up, I might blame the performance variation on different directory structures, but not loading up the CPUs just boggles my mind.
David
If there's a bottleneck in I/O, the CPU wouldn't be loaded as much because it's mostly waiting for the I/O to take place. One could even imagine this causing the one-CPU vs. many-CPUs difference, if there's simply no point in bringing in another CPU because there's plenty of idle time while waiting. What if you take an external HD and test whether the differences still show up when you run the same program from that drive on the different machines?
Please go into Windows Task Manager, Performance tab, select the [Kernel Times] option in the [View] menu, and look at what's displayed on the bars during program execution.
If it's only 15% load on a quad-core + hyperthreading box, that basically says OpenMP, MPI (or whatever it uses) isn't working properly; it is running on 1 of 8 logical cores, hence about 15%. Can you run the MPI test command for your specific system to check for errors in multiprocessing on each box? The question would then be: why does the multiprocessing environment not work?
Regards
rbo
SWAG, but have you checked your virus scanner configuration? If the scanner isn't set to ignore the type of file you're writing on the slow machine, then each write to those files might be getting intercepted and scanned before being written to the disk. This could lead to the process sitting in I/O wait and not getting scheduled as often.
Vista had a problem with some uncontrollable memory leaks; perhaps that is your issue: some conflict in the "bloatware" is causing a memory leak, and that is why your Fortran program is running so much slower?
I assume you have tested this with all other programs closed. It seems unlikely that your console program is the issue. It does sound like there's a memory conflict going on, though.
