How to speed up VBA macros - performance

I am generating new sheets using macros. For each new sheet, data is retrieved from more than 4 MS Access databases, and each database has a minimum of 200 fields. My macro code includes:
1. Cell locking
2. Alignment and formatting
3. Formulas in about one third of the cells in the sheet
4. Cell references to other workbooks
My problem is that every sheet generation takes a minimum of one hour to complete the whole process, and it seems to me it's taking way too long.
I have already added Application.ScreenUpdating = True to speed up the code, but it still takes the same time. How do I speed up the code? If you have any idea, please guide me.
For Ip = 5 To 150
    resp = Range("B" & Ip).Value
    With ActiveSheet.QueryTables.Add(Connection:= _
        "ODBC;DSN=henkel2;DBQ=C:\Hl-RF\RSF-Temp.mdb;DriverId=25;FIL=MS Access;MaxBufferSize=2048;" _
        , Destination:=Range("IV4"))
        .CommandText = "select Vles from " & Shtname & " where cint(PrductID)='" & resp & "' and cint(DepotID) = '" & cnt1 & "' and Mnth = '" & mnths & "' and Type='" & typs & "'"
        .Name = "tab product"
        .FieldNames = True
        .RowNumbers = False
        .FillAdjacentFormulas = False
        .PreserveFormatting = True
        .RefreshOnFileOpen = False
        .BackgroundQuery = True
        .RefreshStyle = xlInsertDeleteCells
        .SavePassword = False
        .SaveData = True
        .AdjustColumnWidth = True
        .RefreshPeriod = 0
        .PreserveColumnInfo = True
        .SourceConnectionFile = "C:\Hl-RF\tabct.odc"
        .Refresh BackgroundQuery:=False
    End With
Next Ip
Is there any way to reduce the loop iteration time?
Thanks in advance.

Surely you mean
Application.ScreenUpdating = False
Apart from that, you could also look at disabling recalculation of the workbook while the macro is running and see if that makes a difference. This of course assumes that the bottleneck is in the spreadsheet part of the process; if it's taking ages to get the data from Access, that might be the area to look at.

Get hold of a copy of Professional Excel Development, which includes an excellent profiling utility called PerfMon. It will allow you to see which parts of the report are taking all the time so you can analyse and rewrite them.

You can try the usual VBA optimization methods of setting calculation to manual and disabling ScreenUpdating.
Dim calc As XlCalculation
calc = Application.Calculation
Application.ScreenUpdating = False
Application.Calculation = xlCalculationManual
Application.ScreenUpdating = True
Application.Calculation = calc
Put your code or function call between Application.Calculation = xlCalculationManual and Application.ScreenUpdating = True.
This is from my previous post.
Note: I couldn't find information on whether you run the code from within Access or Excel. If you create the Excel workbook from Access, you probably have some code like this:
Dim xlApp As Excel.Application
Set xlApp = new Excel.Application
In this case you would have to change Application in the code above to xlApp. For example:
xlApp.Calculation = xlCalculationManual

I'd try to do MORE of the work on the database side. Generate the reports you want on the database side, and then export the results to Excel.
Access is MUCH better at automating reports than Excel is.
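For example, from Access VBA you can push a saved query's results straight into a workbook with DoCmd.TransferSpreadsheet. This is only a minimal sketch; the query name and output path below are assumptions, not taken from the question:
'Hypothetical names: "qryReport" is a saved query, the path is illustrative.
DoCmd.TransferSpreadsheet TransferType:=acExport, _
    SpreadsheetType:=acSpreadsheetTypeExcel9, _
    TableName:="qryReport", _
    FileName:="C:\Reports\Report.xls", _
    HasFieldNames:=True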

There is some discussion of this topic here.
Edit:
OK, then the next step is to identify which parts of your code are taking the longest. The simplest way to do this is to make a copy of your code and start measuring the various parts, like this:
Private Declare Function GetTickCount Lib "kernel32.dll" () As Long
Private mlngStrt As Long
Private mlngEnd As Long
Private Const u As Long = 10000000

Public Sub Example()
    Dim i As Long
    mlngStrt = GetTickCount
    For i = 0 To u
    Next
    mlngEnd = GetTickCount
    Debug.Print "Section1", mlngEnd - mlngStrt
    mlngStrt = GetTickCount
    ExampleSubCall
    mlngEnd = GetTickCount
    Debug.Print "ExampleSubCall", mlngEnd - mlngStrt
    mlngStrt = GetTickCount
    For i = 0 To (u * 1.5)
    Next
    mlngEnd = GetTickCount
    Debug.Print "Section2", mlngEnd - mlngStrt
    Debug.Print "Example Complete"
End Sub

Private Sub ExampleSubCall()
    Dim i As Long
    For i = 0 To (u * 0.75)
    Next
End Sub
This approach is fairly straightforward. The drawback here is that you need to insert all of the timing statements and then turn around and remove them, which is why I would work on a copy.
Once you know which parts are taking the longest, you know where to focus your attention and what to ask for help with.

Take a look at Chris's comments. We believe that your performance bottleneck is likely to be in the way you're querying the database rather than in the VBA code that writes the data to the sheet.
Some simple questions about Access performance:
- Do your tables have indexes?
- Are you using any kind of table join?
- Are the Access databases local to your computer, or are they accessed remotely?
Again, I'm only reinforcing what Chris already commented.
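To illustrate the point, here is a hedged sketch of fetching all product rows for a sheet with one ADO query and a single write, instead of one QueryTable per product row. The connection string, SQL and variables are adapted from the question's code; conn and rs are illustrative names:
Dim conn As Object, rs As Object
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Hl-RF\RSF-Temp.mdb"
'One round trip for all products instead of one query per row of column B.
Set rs = conn.Execute("select PrductID, Vles from " & Shtname & _
    " where cint(DepotID) = '" & cnt1 & "' and Mnth = '" & mnths & _
    "' and Type = '" & typs & "'")
ActiveSheet.Range("IV4").CopyFromRecordset rs
rs.Close: conn.Close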

Yes, make a table in Access to hold your client IDs. Then create the query there and connect to it with the external data connector. After that, refresh it manually or use VBA to refresh the connection whenever you're ready.
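A refresh from VBA can be as small as this (a hedged sketch; it assumes the QueryTable is the first one on the sheet):
'Refresh one QueryTable synchronously...
ActiveSheet.QueryTables(1).Refresh BackgroundQuery:=False
'...or refresh every connection in the workbook at once.
ThisWorkbook.RefreshAll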

Related

Excel VBA - strange behaviour and poor performance when UDF wraps VLOOKUP

I want to write a user defined function that wraps VLOOKUP. All it requires is a reference to the column that data should be imported from, and it will execute a VLOOKUP assuming that the IDs are in column A and there are fewer than 3000 rows to search.
Function AutoVlookup(importFrom As Range) As Variant
    Dim arg1, arg2, arg3, arg4 As Variant
    Dim arg1Str, arg2Str As String
    arg1Str = "$A" & Application.Caller.Row 'get ID
    arg1 = Application.Caller.Parent.Range(arg1Str)
    arg2Str = "$A$1:$" & Split(Cells(1, importFrom.Column).Address, "$")(1) & "$3000"
    arg2 = importFrom.Parent.Range(arg2Str) 'get range to search in (in other workbook)
    arg3 = importFrom.Column 'get column to return
    arg4 = False 'exact match
    AutoVlookup = Application.WorksheetFunction.VLookup(arg1, arg2, arg3, arg4)
End Function
I am running into two problems.
Firstly, the execution time is terrible. It takes several minutes to run this formula 1000 times, whereas the same VLOOKUP not wrapped in a UDF is very fast.
Secondly, when I first fill a column with =AutoVLookup(<column in other workbook>) every row will incorrectly show the same result until something triggers them to recalculate.
What am I doing wrong?
Edit - answer: here is the code I made using advice from Santosh and Charles:
Function EasyLookup(importFrom As Range) As Variant
    Application.Volatile False 'does not recalculate whenever cells on sheet change
    Dim Id As String
    Dim match As Integer
    Dim importColumnAddress As String
    Dim initialCalculationSetting As XlCalculation
    Dim initialScreenUpdateMode As Boolean
    Dim initialEnableEventsMode As Boolean
    'saving the settings, to be reverted later
    initialScreenUpdateMode = Application.ScreenUpdating
    initialCalculationSetting = Application.Calculation
    initialEnableEventsMode = Application.EnableEvents
    'change screen update and calculation settings for performance
    Application.ScreenUpdating = False
    Application.Calculation = xlCalculationManual
    Application.EnableEvents = False
    'find ID on the formula's sheet
    Id = Application.Caller.Parent.Cells(Application.Caller.Row, 1).Value
    'find row with ID in column A of the data source sheet
    match = Application.WorksheetFunction.match(Id, importFrom.Parent.Range("$A$1:$A$4000"), 0) 'assumes no more than 4000 rows
    'retrieve value from importFrom's column, on the row where the ID was found
    importColumnAddress = Split(Cells(1, importFrom.Column).Address, "$")(1)
    importColumnAddress = importColumnAddress & ":" & importColumnAddress
    EasyLookup = Application.WorksheetFunction.Index(importFrom.Parent.Range(importColumnAddress), match)
    'revert performance tweaks
    Application.ScreenUpdating = initialScreenUpdateMode
    Application.Calculation = initialCalculationSetting
    Application.EnableEvents = initialEnableEventsMode
End Function
It is much faster because it does not read in as much data, as it uses INDEX/MATCH rather than VLOOKUP. It also does not recalculate every time a cell in the sheet changes.
Try the code below:
Function AutoVlookup(importFrom As Range) As Variant
    Application.Volatile False
    Application.Calculation = xlCalculationManual
    Application.ScreenUpdating = False
    Application.EnableEvents = False
    Dim arg1, arg2, arg3, arg4 As Variant
    Dim arg1Str, arg2Str As String
    Dim rng As Object
    Set rng = Application.Caller
    arg1Str = "$A" & rng.Row 'get ID
    Set arg1 = Application.Caller.Parent.Range(arg1Str)
    arg2Str = "$A$1:$" & Split(Cells(1, importFrom.Column).Address, "$")(1) & "$3000"
    Set arg2 = importFrom.Parent.Range(arg2Str) 'get range to search in (in other workbook)
    arg3 = importFrom.Column 'get column to return
    arg4 = False 'exact match
    AutoVlookup = Application.VLookup(arg1, arg2, arg3, arg4)
    Application.ScreenUpdating = True
    Application.Calculation = xlCalculationAutomatic
    Application.EnableEvents = True
End Function
The main reasons your UDF is slow are:
1) You are forcing it to import 3000 rows of data from Excel into a VBA variant and then passing those 3000 rows back to VLOOKUP, rather than just using a reference to the range.
2) You are not bypassing the VBE refresh bug.
See the series of posts about building faster lookups at http://fastexcel.wordpress.com/2011/07/20/developing-faster-lookups-part-1-using-excels-functions-efficiently/
Also, your UDF will not work correctly in circumstances where it references cells that are not included in the importFrom range. Finally, I am not sure I understand what you are trying to achieve: would it not be simpler (and much more efficient) to use INDEX or implicit referencing rather than VLOOKUP?

CPU usage goes to 100% while reading from an Excel file?

I created a VB.Net application that reads from an Excel file and puts the data into a table.
I used an Excel sheet which has 3 columns and 65000 rows.
Before it starts reading the Excel file, my machine's CPU usage is around 15%, but during reading the CPU usage jumps up to 95%.
I don't know why this is happening. Can someone help me with this issue?
The following is the code I've written:
Private Sub readFromExcel(ByVal fileName As String, ByVal sheetName As String)
    Dim connString As String = "data source=XE; user=test; password=test"
    Dim con As New OracleConnection(connString)
    Dim str1 As String
    Dim str2 As String
    Dim str3 As String
    Dim xlApp As Excel.Application
    Dim xlWorkBook As Excel.Workbook
    Dim xlWorkSheet As Excel.Worksheet
    xlApp = New Excel.ApplicationClass
    xlWorkBook = xlApp.Workbooks.Open(fileName)
    xlWorkSheet = xlWorkBook.Worksheets(sheetName)
    Dim x As Integer
    Dim y As Integer
    Dim i As Integer
    x = xlWorkSheet.Rows.Count()
    y = xlWorkSheet.Columns.Count()
    Try
        For i = 1 To x - 1
            'MsgBox(xlWorkSheet.Cells(i, 0).value)
            str1 = xlWorkSheet.Cells(i, 1).value
            str2 = xlWorkSheet.Cells(i, 2).value
            str3 = xlWorkSheet.Cells(i, 3).value
            insertData()
        Next
    Catch ex As Exception
        MsgBox(ex.Message())
    Finally
        con.Close()
        xlWorkBook.Close()
    End Try
End Sub

'Note: as posted, str1, str2, str3 and con must be module-level fields
'for insertData to see them.
Private Sub insertData()
    Dim str As String
    str = "insert into test_import values('" + str1 + "'," + str2 + "," + str3 + ")"
    Dim cmd As New OracleCommand()
    cmd.CommandText = str
    cmd.Connection = con
    cmd.ExecuteNonQuery()
End Sub
Thanks in advance.
This is entirely normal. A program only fails to burn 100% of a core when it gets bogged down by I/O: reading from a disk or network card blocks a program while the operating system supplies the data. Your code doesn't bog down like that; you are asking it to do a bunch of work. Getting 195,000 cell values one by one just takes a while. Excel is an out-of-process COM server, so every cell read requires two CPU context switches. You can optimize it a bit by using a Range instead. Or by running it on a machine with a two-core CPU so it only shoots up to 50%.
Feature, not a bug.
It's much faster to read the 195,000 cells into an Object array in one go and then loop over the array. (There is a very high overhead for each .NET call to the Excel object model.)
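A minimal sketch of that idea, meant to replace the cell-by-cell loop in the question (the range address is an assumption matching the 3 columns x 65000 rows mentioned above):
'One COM call pulls all of the cells into a 1-based Object array.
Dim data As Object(,) = CType(xlWorkSheet.Range("A1:C65000").Value2, Object(,))
For i As Integer = 1 To data.GetUpperBound(0)
    str1 = Convert.ToString(data(i, 1))
    str2 = Convert.ToString(data(i, 2))
    str3 = Convert.ToString(data(i, 3))
    insertData()
Next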
Use the get_Range(cell1, cell2) method to get cell values.
You can use it to take cell values by row, by column, or all in one go.
Keep watching the CPU usage as you adjust the code to read cell values by row, by column, or all at once.

Trying to automatically split data in Excel with VBA

I have absolutely no experience programming in Excel VBA, other than a function I wrote a few weeks back, mainly through trial and error, to add a date stamp to a barcode scanned in on our production line.
Anyway, what I need help with right now: inventory is coming up, and every item we have has a barcode that usually gets scanned into Notepad, manually pulled into Excel, and split with "text to columns". I found the Excel Split function and would like a little help getting it to work with my scanned barcodes.
The data comes in the format 11111*A153333*11/30/11 plus a carriage return, where the * would be the delimiter. All the examples I've found don't seem to do anything at all.
For example, here is one I found that splits at " ", but nothing happens if I change it to *.
Sub splitText()
    'splits Text active cell using * char as separator
    Dim splitVals As Variant
    Dim totalVals As Long
    splitVals = Split(ActiveCell.Value, "*")
    totalVals = UBound(splitVals)
    Range(Cells(ActiveCell.Row, ActiveCell.Column + 1), Cells(ActiveCell.Row, ActiveCell.Column + 1 + totalVals)).Value = splitVals
End Sub
And this is applied in the Sheet1 code section, if that helps.
It really can't be this complicated, can it?
Edit: trying to add VLOOKUP to the VBA.
As I said below in the comments, I'm now working on getting the VLOOKUP integrated into this; however, it just returns N/A.
Here is the sub I wrote, based on the link below:
Public Sub vlook(ByRef codeCell As Range)
    Dim result As String
    Dim source As Worksheet
    Dim destination As Worksheet
    Set destination = ActiveWorkbook.Sheets("Inventory")
    Set source = ActiveWorkbook.Sheets("Descriptions")
    result = [Vlookup(destination!(codeCell.Row, D), source!A2:B1397, 2, FALSE)]
End Sub
And I was trying to call it right after the For loop in the Worksheet_Change, and just created another For loop - does this/should this be a nested For loop?
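For reference, a working version of that lookup might look like the sketch below (untested; it reuses the sheet names and lookup range from the snippet above, and uses Application.VLookup so a miss returns an error value instead of raising a run-time error):
Public Sub vlook(ByRef codeCell As Range)
    Dim result As Variant
    Dim source As Worksheet
    Dim destination As Worksheet
    Set destination = ActiveWorkbook.Sheets("Inventory")
    Set source = ActiveWorkbook.Sheets("Descriptions")
    'look up the value in column D of the changed row against Descriptions!A2:B1397
    result = Application.VLookup(destination.Cells(codeCell.Row, "D").Value, _
        source.Range("A2:B1397"), 2, False)
    If IsError(result) Then result = "" 'not found
End Sub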
Just adding the code to the VBA behind the worksheet won't actually cause it to get called. You need to handle the Worksheet_Change event. The following should help:
Private Sub Worksheet_Change(ByVal Target As Range)
    Application.EnableEvents = False
    Dim cell As Range
    For Each cell In Target.Cells
        If cell.Column = 1 Then SplitText cell
    Next
    Application.EnableEvents = True
End Sub

Public Sub SplitText(ByRef codeCell As Range)
    'splits Text active cell using * char as separator
    Dim splitVals As Variant
    Dim totalVals As Long
    splitVals = Split(codeCell.Value, "*")
    totalVals = UBound(splitVals)
    Range(Cells(codeCell.Row, codeCell.Column), Cells(codeCell.Row, codeCell.Column + totalVals)).Value = splitVals
End Sub
If you want to process the barcodes automatically on entering them, you need something like this (goes in the worksheet module).
Private Sub Worksheet_Change(ByVal Target As Range)
    Dim splitVals As Variant
    Dim c As Range, val As String
    For Each c In Target.Cells
        If c.Column = 1 Then 'optional: only process barcodes if in ColA
            val = Trim(c.Value)
            If InStr(val, "*") > 0 Then
                splitVals = Split(val, "*")
                c.Offset(0, 1).Resize( _
                    1, (UBound(splitVals) - LBound(splitVals)) + 1 _
                    ).Value = splitVals
            End If
        End If 'in ColA
    Next c
End Sub

Identical Macros Speed Difference

I have 2 workbooks that contain the same macro. In one workbook the macro runs super fast, less than a second. In the other it takes almost 30 seconds to run. I'm using Excel 2003. The page breaks are off in both workbooks. I don't know what could be causing one to run slower than the other. Any ideas?
Sub viewFirst()
    Dim dataSheet As Worksheet, inputSheet As Worksheet, projectID As Long
    Dim projectRow As Long, lLastRec As Long, inputLastRow As Long, dataLastRow As Long, x As Long, sh As Shape
    Worksheets("Input").Select
    ActiveSheet.Protect "", UserInterfaceOnly:=True
    Range("a1").Select
    ActiveSheet.Pictures.Insert ("working.jpg")
    Application.ScreenUpdating = False
    Application.Calculation = xlCalculationManual
    Application.EnableEvents = False
    Set inputSheet = Worksheets("Input")
    Set dataSheet = Worksheets("Database")
    With inputSheet
        inputLastRow = .Cells(.Rows.Count, "A").End(xlUp).Offset(1, 0).Row - 1
    End With
    With dataSheet
        dataLastRow = .Cells(.Rows.Count, "A").End(xlUp).Offset(1, 0).Row - 1
        lLastRec = dataLastRow - 1
    End With
    With inputSheet
        .Range("currentProject").Value = 1
        projectID = .Range("currentProject").Value
        projectRow = projectID + 1
        For x = 1 To inputLastRow
            If Range("b" & x).HasFormula Then
                x = x + 1
            End If
            If x > inputLastRow Then
                Exit For
            End If
            If Not Range("b" & x).HasFormula Then
                .Range("b" & x).Value = dataSheet.Cells(projectRow, 2 + x)
            End If
        Next x
        .Range("d125").Value = dataSheet.Cells(projectRow, 2 + 149)
        .Range("d128").Value = dataSheet.Cells(projectRow, 2 + 150)
        .Range("d131").Value = dataSheet.Cells(projectRow, 2 + 151)
        .Range("d134").Value = dataSheet.Cells(projectRow, 2 + 152)
        .Range("d137").Value = dataSheet.Cells(projectRow, 2 + 153)
        .Range("d140").Value = dataSheet.Cells(projectRow, 2 + 154)
    End With
    With ActiveSheet
        For Each sh In .Shapes
            If sh.Type = msoPicture Then
                ActiveSheet.Unprotect ""
                sh.Delete
                ActiveSheet.Protect "", UserInterfaceOnly:=True
            End If
        Next sh
    End With
    Range("b5").Select
    Application.EnableEvents = True
    Application.Calculation = xlCalculationAutomatic
    Application.ScreenUpdating = True
End Sub
-EDIT-
osknows, thanks for the response. Just to clarify: the workbooks are never open at the same time, and the workbooks are identical except for the data on the dataSheet - the dataSheet where the macro runs slowly has 35 rows x 204 columns, while the one that runs quickly has 56 rows x 156 columns. I am going to search for hidden columns or non-blank cells on the input sheet.
Without seeing the two workbooks it's difficult to tell. The best advice is to measure exactly the speed of your code.
In a module, declare
Public Declare Function GetTickCount Lib "kernel32" () As Long
then, in your code, place lines like these between the sections you want to time:
dtStart = GetTickCount
dtline2 = GetTickCount
dtline3 = GetTickCount
dtline4 = GetTickCount
...etc.
The number of ticks between dtStart and dtline2 equals dtline2 - dtStart, etc.
There are also a number of factors that could slow things down:
inputLastRow = .Cells(.Rows.Count, "A").End(xlUp).Offset(1, 0).Row - 1
could include many rows that seem blank but aren't.
Set inputSheet = Worksheets("Input") and Set dataSheet = Worksheets("Database") may refer to sheets with massive, complex used ranges.
For Each sh In .Shapes could include many duplicate shapes stacked over each other that look identical.
You have unqualified ranges and sheets: if multiple workbooks are open while the code runs, workbooks/worksheets/ranges that are not explicitly qualified (e.g. Range versus .Range) may point at the wrong objects. Get into the habit of using the full path to a range or cell - workbook/sheet/range - with With statements,
eg
With ThisWorkbook
    With SheetXYZ
        With .Range("XYZ1")
        End With
    End With
End With
or
With ThisWorkbook
    With SheetXYZ.Range("XYZ1")
        .Formula = "=Now()"
    End With
End With
Also check out this handy site: Excel Pages.
On the slower machine, unload any add-ins. If you have an add-in with a global change event, it will fire every time any worksheet changes and could be causing the slowdown. You're writing to the spreadsheet quite a bit, so it would be called a lot.
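To see what is loaded, you can list the add-ins from the Immediate window; a small sketch using the standard AddIns collection:
Dim ai As AddIn
For Each ai In Application.AddIns
    'Installed = True means the add-in is currently loaded
    Debug.Print ai.Name, ai.Installed
Next ai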
Instead of writing cell-by-cell, consider building an array (two-dimensional, lower bound of 1) and writing all the data to the range in one big swoop. Here's an example of how that works:
Sub WriteOnce()
    Dim aReturn() As Double
    Dim i As Long, j As Long
    Const lLASTROW As Long = 10
    Const lLASTCOL As Long = 5
    ReDim aReturn(1 To lLASTROW, 1 To lLASTCOL)
    For i = 1 To lLASTROW
        For j = 1 To lLASTCOL
            aReturn(i, j) = Rnd
        Next j
    Next i
    Sheet1.Range("A1").Resize(UBound(aReturn, 1), UBound(aReturn, 2)).Value = aReturn
End Sub
Since I'm only accessing the worksheet once, any event handlers will only fire once.

Speed up this Find/Filter Operation - (VB6, TextFile, ADO, VFP 6.0 Database)

I'm trying to figure out how to speed up this operation. Before I import a record from the text file, I first need to see if one exists in the database. If it does exist, I perform an update operation on it. If it does not exist, I create a new record.
Running the code you see below, this operation takes somewhere in the neighborhood of 3 hours.
I've tried using ADO's Find method, and it actually appears to be slower than the Filter method.
The database is a Visual FoxPro 6 database. The table does have an index on the item_cd field, but no primary key is established. This is out of my control, since I didn't write the software and I'm trying to stay away from making any structural changes to the database.
There are 46652 rows in the text file and about 650,000 records/rows in the ADO recordset. I think slimming down the recordset would be the biggest step in fixing this, but I haven't come up with any way of doing that. I'm trying to prevent creating duplicate records; since there is no primary key, I really need to have the entire table in my recordset.
Because I'm running this on my local machine, it appears the operation is limited by the power of the CPU. In actuality this might be used across the network, especially if I can get it to go faster.
Dim sFileToImport As String
sFileToImport = Me.lstFiles.Text
If sFileToImport = "" Then
    MsgBox "You must select a file from the listbox to import."
    Exit Sub
End If
If fConnectToDatabase = False Then Exit Sub
With gXRst
    .CursorLocation = adUseClient
    .CursorType = adOpenKeyset
    .LockType = adLockReadOnly
    .Open "SELECT item_cd FROM xmsalinv ORDER BY item_cd ASC", gXCon
End With
Call fStartProgress("Running speed test.")
Dim rstTxtFile As ADODB.Recordset
Set rstTxtFile = New ADODB.Recordset
Dim con As ADODB.Connection
Set con = New ADODB.Connection
Dim sConString As String, sSQL As String
Dim lRecCount As Long, l As Long
Dim s As String
sConString = "DRIVER={Microsoft Text Driver (*.txt; *.csv)};Dbq=" & gsImportFolderPath & ";Extensions=asc,csv,tab,txt;Persist Security Info=False;"
con.Open sConString
sSQL = "SELECT * FROM [" & sFileToImport & "]"
rstTxtFile.Open sSQL, con, adOpenKeyset, adLockPessimistic
If Not (rstTxtFile.EOF And rstTxtFile.BOF) = True Then
    rstTxtFile.MoveFirst
    lRecCount = rstTxtFile.RecordCount
    Do Until rstTxtFile.EOF = True
        'This code appears to actually be slower than the filter method I'm now using
        'gXRst.MoveFirst
        'gXRst.Find "item_cd = '" & fPQ(Trim(rstTxtFile(0))) & "'"
        gXRst.Filter = "item_cd = '" & fPQ(Trim(rstTxtFile(0))) & "'"
        If Not (gXRst.EOF And gXRst.BOF) = True Then
            s = "Item Found - " & Trim(rstTxtFile(0)) 'item found
        Else
            s = "Item Not Found - " & Trim(rstTxtFile(0)) 'item not found
        End If
        l = l + 1
        Call subProgress(l, lRecCount, s)
        rstTxtFile.MoveNext
    Loop
End If
Call fEndProgress("Finished running speed test.")
Cleanup:
rstTxtFile.Close
Set rstTxtFile = Nothing
gXRst.Close
A simple solution to speed up the Yours_Rs.Find response is to use the Yours_Rs.Move statement first, if that is possible for you. What I have done is to use a MyRs.Move statement prior to MyRs.Find, to get into the vicinity of my actual record. It really worked for me, as the response of the Move statement is quite brisk.
I was using it to locate a patient record, so moving the pointer to a record near the actual record made the MyRs.Find statement work with the speed of light.
regards,
MAS.
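Applied to the question's recordset (which is sorted by item_cd), the idea looks roughly like this; the jump distance and item code are illustrative:
gXRst.MoveFirst
gXRst.Move 300000                'land in the general vicinity of the record
gXRst.Find "item_cd = 'ABC123'"  'Find then only scans forward from there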
This doesn't answer your question, and this is a pretty old thread, but why don't you import your text file into a temporary table in your DB and then do a join?
Something like:
SELECT tt.* FROM texttemp tt LEFT OUTER JOIN xmsalinv xal ON tt.field1 = xal.item_cd WHERE xal.item_cd IS NULL
This should return the rows of your imported text file which don't have any item_cd matches in the database. Dealing with a text file directly complicates the query, which is why I'm wondering why you're not importing the contents into a temporary table.
Now, assuming you know the mapping of the fields, you can probably also use this to insert. Assuming your DB accepts INSERT ... SELECT notation, it'd be: insert into xmsalinv (fields) select (matching fields) from (as above...)
This moves your choke point to the import process, which I'm hoping is quick.
The ADO collections seem like they're pretty stupid: they don't benefit from any sort of knowledge about the data, and are kinda slow.
Ah, the next item on a "vb6 filter" Google search: http://www.techrepublic.com/article/why-ados-find-method-is-the-devil/1045830
This response is based on basic SQL knowledge and is not tailored to FoxPro.
Use a firehose cursor for the VFP query's results if you aren't already, and see your other post here for suggestions regarding the text file Recordset.
Perhaps better yet, you might try getting rid of your slow "loop and search" approach.
I would probably create a temporary Jet 4.0 MDB from scratch for each text file you want to look up. Import the text data and index your key field. Use ADOX to define a linked table over in the VFP database, then use a query to do your matching.
Close and dispose of the MDB afterward.
In response to Bob Riemersma's post, the text file is not causing the speed issues. I've changed my code to open a recordset with a query looking for a single item. This code now runs in 1 minute and 2 seconds as opposed to the three to four hours I was looking at the other way.
Dim sFileToImport As String
sFileToImport = Me.lstFiles.Text
If sFileToImport = "" Then
    MsgBox "You must select a file from the listbox to import."
    Exit Sub
End If
If fConnectToDatabase = False Then Exit Sub
Call fStartProgress("Running speed test.")
Dim rstTxtFile As ADODB.Recordset
Set rstTxtFile = New ADODB.Recordset
Dim con As ADODB.Connection
Set con = New ADODB.Connection
Dim sConString As String, sSQL As String
Dim lRecCount As Long, l As Long
Dim sngQty As Single, sItemCat As String
sConString = "DRIVER={Microsoft Text Driver (*.txt; *.csv)};Dbq=" & gsImportFolderPath & ";Extensions=asc,csv,tab,txt;Persist Security Info=False;"
con.Open sConString
sSQL = "SELECT * FROM [" & sFileToImport & "]"
rstTxtFile.Open sSQL, con, adOpenKeyset, adLockPessimistic
If Not (rstTxtFile.EOF And rstTxtFile.BOF) = True Then
    rstTxtFile.MoveFirst
    lRecCount = rstTxtFile.RecordCount
    Do Until rstTxtFile.EOF = True
        l = l + 1
        sItemCat = fItemCat(Trim(rstTxtFile(0)))
        If sItemCat <> "[item not found]" Then
            sngQty = fItemQty(Trim(rstTxtFile(0)))
        End If
        Call subProgress(l, lRecCount, sngQty & " - " & sItemCat & " - " & rstTxtFile(0))
        sngQty = 0
        rstTxtFile.MoveNext
    Loop
End If
Call fEndProgress("Finished running speed test.")
Cleanup:
rstTxtFile.Close
Set rstTxtFile = Nothing
My Functions:
Private Function fItemCat(sItem_cd As String) As String
    'Returns blank if nothing found
    If sItem_cd <> "" Then
        With gXRstFind
            .CursorLocation = adUseClient
            .CursorType = adOpenKeyset
            .LockType = adLockReadOnly
            .Open "SELECT item_cd, ccategory FROM xmsalinv WHERE item_cd = '" & fPQ(sItem_cd) & "'", gXCon
        End With
        If Not (gXRstFind.EOF And gXRstFind.BOF) = True Then
            'An item can technically have a blank category although it never should have
            If gXRstFind!ccategory = "" Then
                fItemCat = "[blank]"
            Else
                fItemCat = gXRstFind!ccategory
            End If
        Else
            fItemCat = "[item not found]"
        End If
        gXRstFind.Close
    End If
End Function

Private Function fIsStockItem(sItem_cd As String, Optional bConsiderItemsInStockAsStockItems As Boolean = False) As Boolean
    If sItem_cd <> "" Then
        With gXRstFind
            .CursorLocation = adUseClient
            .CursorType = adOpenKeyset
            .LockType = adLockReadOnly
            .Open "SELECT item_cd, bal_qty, sug_qty FROM xmsalinv WHERE item_cd = '" & fPQ(sItem_cd) & "'", gXCon
        End With
        If Not (gXRstFind.EOF And gXRstFind.BOF) = True Then
            If gXRstFind!sug_qty > 0 Then
                fIsStockItem = True
            Else
                If bConsiderItemsInStockAsStockItems = True Then
                    If gXRstFind!bal_qty > 0 Then
                        fIsStockItem = True
                    End If
                End If
            End If
        End If
        gXRstFind.Close
    End If
End Function

Private Function fItemQty(sItem_cd As String) As Single
    'Returns 0 if nothing found
    If sItem_cd <> "" Then
        With gXRstFind
            .CursorLocation = adUseClient
            .CursorType = adOpenKeyset
            .LockType = adLockReadOnly
            .Open "SELECT item_cd, bal_qty FROM xmsalinv WHERE item_cd = '" & fPQ(sItem_cd) & "'", gXCon
        End With
        If Not (gXRstFind.EOF And gXRstFind.BOF) = True Then
            fItemQty = CSng(gXRstFind!bal_qty)
        End If
        gXRstFind.Close
    End If
End Function
First, you can try creating an in-memory index on item_cd with gXRst!item_cd.Properties("OPTIMIZE").Value = True, which will speed up both Find and Filter.
For ultimate speed in searching, initialize a helper index Collection like this:
Set cIndex = New Collection
On Error Resume Next
Do While Not gXRst.EOF
    cIndex.Add gXRst.Bookmark, "#" & gXRst!item_cd.Value
    gXRst.MoveNext
Loop
On Error GoTo ErrorHandler
And instead of Find, use a function like this:
Public Function SearchCollection(Col As Object, Index As Variant) As Boolean
    On Error Resume Next
    IsObject Col(Index)
    SearchCollection = (Err.Number = 0)
    On Error GoTo 0
End Function
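Usage in the question's loop would then look something like this (a sketch reusing the variables from the posted code):
If SearchCollection(cIndex, "#" & Trim(rstTxtFile(0))) Then
    s = "Item Found - " & Trim(rstTxtFile(0))
Else
    s = "Item Not Found - " & Trim(rstTxtFile(0))
End If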
3 hours just for a few hundred thousand records!!! You are doing it the wrong way. Simply:
- append the text file to a VFP table,
- then insert the ones that do not exist in the existing table with a single SQL statement,
- and update the ones that exist with another UPDATE SQL statement.
That is all, and it should take less than a minute (even a minute is very slow). You can do all of this using the VFPOLEDB driver, and it doesn't matter that you have a VFP6 database; VFPOLEDB has the VFP9 engine built in.
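A rough sketch of the set-based insert step through VFPOLEDB, run from VB6/ADO (the staging table name "temptext" is an assumption, the data source folder is taken from the question's DBQ path, and the real column list would mirror your text file):
Dim cn As ADODB.Connection
Set cn = New ADODB.Connection
cn.Open "Provider=VFPOLEDB;Data Source=C:\Hl-RF\"
'insert only the item codes that are not already in xmsalinv
cn.Execute "INSERT INTO xmsalinv (item_cd) " & _
           "SELECT t.item_cd FROM temptext t " & _
           "WHERE t.item_cd NOT IN (SELECT item_cd FROM xmsalinv)"
cn.Close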
