I have a set of formulas in Range("E10001:E20000") which I'd like to copy across Columns("F:CZ"), about 1M cells in total. Options:
1. Range("E10001:E20000").Copy followed by Range("F10001:CZ20000").PasteSpecial xlPasteAll
2. Range("F10001:CZ20000").Formula = Range("E10001:E20000").Formula
3. Range("E10001:CZ20000").FillRight
4. Range("E10001:CZ20000").Select followed by Ctrl+R in Excel
At the end, to keep the sheet light, I replace the formulas with values:
Range("F10001:CZ20000").Value = Range("F10001:CZ20000").Value
What I notice is that option 4 is way faster than the rest. Can anybody explain the performance penalties in the first three options? Note: I'm unfamiliar with timing functions, so I measured the elapsed time by hand with a stopwatch.
Short answer: Do not use shortcuts with the SendKeys method!
Longer answer: shortcuts can lead to unexpected results; with my locale settings, "^r" would (start to) insert a table.
Feel free to measure the performance with a timer, though: set starttime = Now at the beginning of your code, and at the end:
timetaken = Now - starttime
MsgBox Hour(timetaken) & ":" & Minute(timetaken) & ":" & Second(timetaken)
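Since Now only resolves to whole seconds, a slightly higher-resolution alternative is VBA's Timer function; a minimal sketch (Timer returns seconds since midnight as a Double and wraps at midnight):

```vba
Dim startTime As Double
startTime = Timer                 ' seconds since midnight, roughly 1/100 s resolution
' ... code being measured ...
Debug.Print Format$(Timer - startTime, "0.00") & " seconds elapsed"
```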
As a rule of thumb, your time is worth way more than system time and corrupting your data by accidentally sending the wrong set of instructions is never worth the risk.
Your operation with 2 × 10,000 cells is not worth optimising.
Related
First question: is this script as fast as it can possibly get?
I'm talking about the commands at the start: are they necessary or unnecessary? Do they help at all when it's just simple key remapping?
I would like it to run as fast as possible, since I do pretty intense and fast stuff with this script.
The second question: how do I prevent this script from spam-clicking?
If I keep "E" or "R" held down, it spam-clicks, which I do not want it to do.
How can I fix this?
#NoEnv
#MaxHotkeysPerInterval 99000000
#HotkeyInterval 99000000
#KeyHistory 0
ListLines Off
Process, Priority, , A
SetBatchLines, -1
SetKeyDelay, -1, -1
SetMouseDelay, -1
SetDefaultMouseSpeed, 0
SetWinDelay, -1
SetControlDelay, -1
SendMode Input
*e::Click
return
*r::Click
return
Ins::Suspend
return
Answer to the second question: add the KeyWait command to your hotkeys (KeyWait - Syntax & Usage | AutoHotkey):
*e::
Click
KeyWait, e
return
*r::
Click
KeyWait, r
return
Answer to the first question: it looks like you may have gotten the header lines from here: How to optimize the speed of a script as much as possible. The post explains each line (see below) and even includes some benchmarks for other things.
In your case, if you're only doing simple key remapping, it's probably not worth including any of these lines except SendMode Input, as that is the most reliable send mode. You mention doing "pretty intense & fast stuff with this script"; what that means exactly will determine whether the header lines are necessary.
Notes
1. #NoEnv is recommended for all scripts; it prevents empty variables from being resolved as environment variables.
2. The default #MaxHotkeysPerInterval and #HotkeyInterval values will interrupt your script with warning dialogs if it contains some kind of rapid autofire loop. Set them to some insanely high, unreachable number to ignore this limit.
3. ListLines and #KeyHistory are logging features that record executed lines and keystrokes. Disable them, as they are only useful for debugging.
4. Setting a higher priority for a Windows program is supposed to improve its performance. Use AboveNormal/A. If it seems to make things worse, comment out or remove this line.
5. The default SetBatchLines value makes your script sleep 10 milliseconds after every line. Set it to -1 to skip the sleep (but remember to include at least one Sleep in your loops, if any!)
6. Even though SendInput ignores SetKeyDelay, SetMouseDelay and SetDefaultMouseSpeed, having these delays at -1 improves SendEvent's speed in case SendInput is unavailable and the script falls back to SendEvent.
7. SetWinDelay and SetControlDelay may affect performance depending on the script.
8. SendInput is the fastest send mode. SendEvent (the default) is second; SendPlay is a distant third (though it is the most compatible). SendInput does not obey SetKeyDelay, SetMouseDelay or SetDefaultMouseSpeed; there is no delay between keystrokes in that mode.
9. If you're not using any SetTimer, the high-precision sleep function below is useful when you need millisecond reliability in your scripts. It may be problematic when used with SetTimer in some situations, because this sleep method pauses the entire script. To make use of it, here's an example that waits 16.67 milliseconds (note that the Win32 Sleep function takes whole milliseconds, so the fractional part is dropped):
DllCall("Sleep",UInt,16.67)
10. When using PixelSearch to scan a single pixel of a single color variation, don't use the Fast parameter. According to my benchmarks, regular PixelSearch is faster than PixelSearch Fast in that case.
11. According to the documentation (this text is found in the setup file), the Unicode 64-bit build of AHK is faster; use it when available.
I am a complete novice! Apologies if this appears utterly basic.
I have a distribution which changes over time (cycles) and I want to show it (a single series) on a dynamic plot. I achieved this by recording a macro while changing the y-values (one row at a time) and then adding a delay, as follows:
Application.Wait (Now + TimeValue("00:00:01"))
However, I don't know how to define the y-values range so that it jumps to the next row of y-values and displays it.
It could be up to 200 cycles.
I would like to use R3C3-style notation to define the y-values and a 'For i = 1 To 200 ... Next i' approach, but I have tried and failed several times.
The specific issue is demonstrated in the attached image: see the range changing from C3:M3 up to C7:M7.
Any tips? Thanks.
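A minimal sketch of the loop the asker describes, assuming an embedded chart on the active sheet whose single series should step through the rows C3:M3, C4:M4, and so on (the chart index, ranges, and cycle count are assumptions):

```vba
Sub AnimateDistribution()
    Dim i As Long
    Dim cht As Chart
    Set cht = ActiveSheet.ChartObjects(1).Chart  ' assumes one embedded chart

    For i = 0 To 199  ' up to 200 cycles
        ' Point the series at the next row of y-values: C3:M3, then C4:M4, ...
        cht.SeriesCollection(1).Values = ActiveSheet.Range("C3:M3").Offset(i, 0)
        DoEvents  ' give Excel a chance to redraw the chart
        Application.Wait Now + TimeValue("00:00:01")
    Next i
End Sub
```

Using Offset against a fixed anchor range avoids having to build R1C1 address strings by hand for each cycle.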
I have a VB script that does row processing on a small Excel file (35 columns, 2000 rows, no hidden content). It takes about 3 seconds to process the file when applied to an .xls file. If I save the .xls as a .xlsx file, the same script takes about 30 to 60 seconds, completely freezing Excel several times in the process (although it finishes and the results are correct).
I suspect some loop parts of the code are responsible, but I need to find out which. I have found answers like https://stackoverflow.com/a/3506143/5064264 which basically amount to timing all parts of the script by hand, similar to printf debugging; this is not very elegant and also consumes a lot of time for large scripts.
Is there a better alternative to semi-manual profiling, like performance analyzers in most IDEs, or exist any other approaches to this problem? I do not want to post the code as it is rather long and I also want to solve it for other scripts in the future.
Yes, kind of. I use this to test how fast things are running and to experiment with different approaches for better timings. I stole it from somewhere here on SO:
Sub TheTimingStuff()
    Dim StartTime As Double
    Dim SecondsElapsed As Double
    StartTime = Timer
    'your code goes here
    SecondsElapsed = Round(Timer - StartTime, 2)
    MsgBox "This code ran successfully in " & SecondsElapsed & " seconds", vbInformation
End Sub
Another EDIT
You could bury Debug.Print lines sporadically through your code to see how long each chunk takes to execute.
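A sketch of that approach, resetting a Timer reading between chunks (the chunk comments are placeholders for the real processing):

```vba
Sub TimeChunks()
    Dim t As Double
    t = Timer
    ' ... chunk 1 of the processing ...
    Debug.Print "Chunk 1: " & Round(Timer - t, 3) & " s"
    t = Timer
    ' ... chunk 2 of the processing ...
    Debug.Print "Chunk 2: " & Round(Timer - t, 3) & " s"
End Sub
```

The output appears in the Immediate window (Ctrl+G in the VBA editor), so it doesn't interrupt the run the way MsgBox does.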
The error in my case was as Comintern's comment suggested: a method was copying whole columns into a dictionary to reorder them, instead of just the cells with values in them. To preserve this for future readers, I am copying his excellent comment here; thanks again!
An xls file has a maximum of 65,536 rows. An xlsx file has a maximum of 1,048,576 rows. I'd start by searching for .Rows. My guess is that you have some code that iterates over the entire worksheet instead of only the rows with data in them.
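A sketch of the kind of fix this comment implies: bound the loop by the last used row instead of iterating a whole column (sheet and column here are illustrative):

```vba
Dim ws As Worksheet, lastRow As Long, r As Long
Set ws = ActiveSheet

' Slow on .xlsx: For Each cell In ws.Columns(1).Cells touches all 1,048,576 rows.
' Faster: find the last row with data in column A and loop only that far.
lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
For r = 1 To lastRow
    ' process ws.Cells(r, 1) here
Next r
```

The same column-bounded loop costs the same on .xls and .xlsx, because it no longer depends on the sheet's maximum row count.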
As for generic profiling, or how I got there: I used a large number of breakpoints in the editor (set or remove them with a left-click in the margin of the code editor):
I first set them about every 20 lines in the main method, ran the program with the large dataset, and after each press of Continue (F5) manually judged whether that section was slow.
Then I removed the breakpoints in the fast regions and added more in the slow regions, narrowing it down first to methods and then to lines inside those methods.
At the end, I could verify the culprit by commenting out the responsible line of code, and then fix it.
We have a VB6 program that does some string processing in a loop (approximately 500 times) and appends info to a textbox. Each iteration on the loop includes basic operations like Trim, Len, Left, Mid, etc, and finally appends the string to a textbox (the whole form is still invisible at this point). Finally, after the loop, the code calls Show on the form.
On Windows XP, these 500 loops take about 4 seconds. On Windows 7, the exact same code runs in about 90 seconds.
Any suggestions on how to fix this?
Thanks.
I'm guessing you append to the text box on each loop iteration. If you can, store everything in a variable and assign it to the TextBox once after the loop is finished. Displaying text in a text box takes a lot of time in VB6.
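A minimal VB6 sketch of that idea (Text1 and ProcessRow are placeholders for the real control and the real per-iteration work):

```vb
Dim buf As String
Dim i As Long
For i = 1 To 500
    buf = buf & ProcessRow(i) & vbCrLf   ' accumulate in memory
Next i
Text1.Text = buf                          ' assign to the control once
```

This replaces 500 control repaints with one; note that repeated string concatenation itself grows quadratically, so for much larger loops a preallocated buffer with Mid$ assignment would be the next step.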
EDIT:
After further investigation and testing, I came to the conclusion that the performance of directly assigning strings to the Text property of a TextBox degrades dramatically once the control's text reaches its maximum length. The maximum on my PC is 65,535 for some reason, even though according to MSDN:
Windows NT 4.0, Windows 2000, Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, Windows XP Home Edition, Windows XP Professional x64 Edition, Windows Server 2003 Platform Note: For single line text box controls, if the MaxLength property is set to 0, the maximum number of characters the user can enter is 2147483646 or an amount based on available memory, whichever is smaller
Basically what seems to be happening is that if you keep adding text to the TextBox each iteration, it isn't that slow until you reach the maximum. What's even more puzzling, when you try to add text beyond the maximum there are no errors, but performance degrades significantly.
In my test loop, which goes from 0 to 12773, I have this:
Text2.Text = Text2.Text + CStr(a) + " "
So when the loop completes in 4 seconds, Text2.Text is 65,534 characters long. Now, when I double the loop to go beyond the maximum allowed length of the TextBox, it takes four times as long:
12773 iterations - 4 seconds
12773 × 2 iterations - 16 seconds
After realizing this, my first thought was to replace the TextBox with a RichTextBox, but the latter's performance is even worse, assuming you update it every iteration.
It seems you are stuck with a dilemma: suffer the slow performance, or change the code to update the text box only once, after the loop completes. Further, given the TextBox's maximum length limit, I recommend switching to a RichTextBox or, depending on the purpose, some other control.
I hope my findings are helpful - it has certainly been fun finding out all these little programming quirks.
Try LockWindowUpdate to switch off updating for your form.
Declare Function LockWindowUpdate Lib "user32" (ByVal hWnd As Long) As Long
'To turn it on just call it like this, passing it the hWnd of the window to lock.
LockWindowUpdate Form1.hWnd
'intensive updating here
'to turn it off just call it and pass it a zero.
LockWindowUpdate 0
From here
I would recommend that you find out exactly what is slow. Redo your timings before and after the string concatenation, and then before and after copying the string into the text box. Have the OLE Automation string operations somehow become slower, or has copying text into a VB text box become slower?
Once you know this, we can continue with phase 2 ... :-)
Does anyone know an elegant way to determine “System Load” preferably using Windows performance counters? In this case I mean “System Load” in the classical (UNIX) sense of the term and not in the commonly confused “CPU Utilization” percentage.
Based on my reading … “System Load” is typically represented as a float, defining the number of processes in a runnable state (i.e. not including the number of processes that are currently blocked for one reason or another) that could be run at a given time. Wikipedia gives a good explanation here - http://en.wikipedia.org/wiki/Load_(computing).
By the way, I'm working in C#, so any examples in that language would be greatly appreciated.
System load, in the UNIX sense (and if I recall correctly), is the number of processes which are able to be run that aren't actually running on a CPU (averaged over a time period). Utilities like top show this load over, for example, the last 1, 5 and 15 minutes.
I don't believe this is possible with the standard Win32 WMI process classes. The process state field (ExecutionState) in WMI Win32_Process objects is documented as not being used.
However, the thread class does provide that information (and it's probably a better indicator as modern operating systems tend to schedule threads rather than processes). The Win32_Thread class has an ExecutionState field which is set to one of:
0 Unknown
1 Other
2 Ready
3 Running
4 Blocked
5 Suspended Blocked
6 Suspended Ready
If you were to query that class and count the instances of type 2 (and possibly type 6; I think suspended here means swapped out), that should give you a load snapshot. You would then have to average the snapshots yourself if you want averages.
Alternatively, there's a ThreadState in that class as well:
0 Initialized (recognized by the microkernel).
1 Ready (prepared to run on the next available processor).
2 Running (executing).
3 Standby (about to run, only one thread may be in this state at a time).
4 Terminated (finished executing).
5 Waiting (not ready for the processor, when ready, it will be rescheduled).
6 Transition (waiting for resources other than the processor).
7 Unknown (state is unknown).
so you could look into counting those in state 1 or 3.
Don't ask me why there are two fields showing similar information, or what the difference is. I've long since stopped second-guessing Microsoft's WMI info; I just have to convince the powers that be that my choice is a viable one :-)
Having just finished developing a Windows client for our own monitoring application, I'd just suggest going for 1-second snapshots and averaging these over whatever time frame you need to report on. VBScript and WMI seem remarkably resilient even at one query per second - it doesn't seem to suck up too much CPU and, as long as you free everything you use, you can run for extended periods of time.
So, every second, you'd do something like (in VBScript, and from memory since I don't have ready access to the code from here):
set objWmi = GetObject("winmgmts:\\.\root\cimv2")
set threadList = objWmi.ExecQuery("select * from Win32_Thread",,48)
sysLoad = 0
for each objThread in threadList
    if objThread.ThreadState = 1 or objThread.ThreadState = 3 then
        sysLoad = sysLoad + 1
    end if
next
' sysLoad now contains the number of threads waiting for a CPU. '
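Since the question asks for C#, here is a rough equivalent of the same snapshot using System.Management (a sketch, not production code; it assumes a project reference to System.Management.dll and sufficient privileges to query WMI):

```csharp
using System;
using System.Management;  // requires a reference to System.Management.dll

class LoadSnapshot
{
    static void Main()
    {
        // Count threads that are Ready (1) or Standby (3):
        // an instantaneous "run queue" size, per the answer above.
        int sysLoad = 0;
        var searcher = new ManagementObjectSearcher(
            @"root\cimv2", "SELECT ThreadState FROM Win32_Thread");
        foreach (ManagementObject thread in searcher.Get())
        {
            var state = Convert.ToUInt32(thread["ThreadState"]);
            if (state == 1 || state == 3)
                sysLoad++;
        }
        Console.WriteLine("Threads waiting for a CPU: " + sysLoad);
    }
}
```

To get a UNIX-style load average, run this snapshot on a timer (e.g. once per second) and keep rolling averages over 1, 5 and 15 minutes.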