I am using the following code to delete a table style from an Excel file:
Set objTS = ActiveWorkbook.TableStyles("MyTableStyle")
If Not objTS Is Nothing Then
objTS.Delete
End If
I am able to delete it using a macro in my local Excel workbook. However, when the above code is encapsulated in a function in an XLA add-in on the server, the objTS.Delete line throws an error.
What additional change should I make so that the table style is deleted without an error?
Thanks :)
Edit: I am unable to even delete the pivot table style from the workbook directly (not just programmatically). Can someone please tell me what is causing this?
I have a large dataset that I validate using SPSS syntax. For each validation a variable is created and set to 1 if there is a problem with the data that I need to check.
For each validation I then create a subset of the data holding only the relevant variables for the relevant cases. Still using syntax, I save these data files to Excel in order to do the checks and correct the data (in a database).
The problem is that not all of my 50+ validations detect problematic data every time I run the check, yet 50+ files are saved because I save one file per validation. I'd like to save a file only if there is data in it.
Current syntax for saving the files is:
DATASET ACTIVATE DataSet1.
DATASET COPY error1.
DATASET ACTIVATE error1.
FILTER OFF.
USE ALL.
SELECT IF (var_error1 = 1).
EXECUTE.
SAVE TRANSLATE OUTFILE='path_error1.xlsx'
/TYPE=XLS
/VERSION=12
/MAP
/REPLACE
/FIELDNAMES
/CELLS=VALUES
/KEEP=var1 var2 var3 var4.
This is repeated for each validation. If no case violates the "error1" validation, I still get an output file (which is empty).
Is there any way to alter the syntax so that it saves the data only if there are in fact cases that violate the validation?
The following syntax will write a new syntax file containing the command that saves the file to Excel, but only if there are actual cases in the data. You will run the new syntax every time, but the Excel file will be created only in the relevant cases:
DATASET ACTIVATE DataSet1.
DATASET COPY error1.
DATASET ACTIVATE error1.
FILTER OFF.
USE ALL.
SELECT IF (var_error1 = 1).
EXECUTE.
do if $casenum=1.
write outfile='path\tmp\run error1.sps' /"SAVE TRANSLATE OUTFILE='path\var_error1.xlsx'"
/" /TYPE=XLS /VERSION=12 /MAP /REPLACE /FIELDNAMES /CELLS=VALUES /KEEP=var1 var2 var3 var4.".
end if.
exe.
insert file='path\tmp\run error1.sps'.
Please edit "path" according to your needs.
Note that the new syntax file will be written in all cases, but when there is no data in the file, the syntax file will be empty, and so the empty data file won't be written to Excel.
When I issue a query like select * from city; using Oracle SQL Developer on a Mac, the output is not aligned and it is very hard to read. How do I get the grid view and set it as the default?
Sounds like you're getting the script output. You can have that formatted nicely by using SET SQLFORMAT ansiconsole, which will make the columns line up as nicely as possible based on the size of the display.
But if you want the data back in a grid, use Ctrl+Enter or F9 or the first execute button in the toolbar to execute the statement.
This will get you the output in a grid.
I talk about both executing as a script or statement here.
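A minimal sketch of the script-output route described above (the city table is the one from the question):

```sql
-- In the worksheet, run these as a script (F5) to get aligned console output
SET SQLFORMAT ansiconsole
SELECT * FROM city;
```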
If your issue is with formatting, you may want to look at this link.
If your issue is with records not getting inserted, please note the following:
Records inserted in one session will not be visible in another session unless they are committed.
If you are checking the count in the same session where you inserted the records, then check for errors in the insert. Add a SHOW ERRORS command at the end of your script, path_to_file.sql, to check whether any errors occurred while inserting the records.
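A sketch of what that check might look like from SQL*Plus or SQL Developer (the script name is the placeholder from above; SHOW ERRORS reports compilation errors from the script, and the COMMIT makes the rows visible to other sessions):

```sql
@path_to_file.sql
SHOW ERRORS
COMMIT;
```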
Hope this helps.
I have two tables: one is named 'ImageDrive' and the other is named 'Person'. In the Person table there is a field called 'Primary Image Path' which stores part of the path, e.g. "\DefaultPicture\default.jpg". The ImageDrive table has a field Drive which stores "C:\MyDocuments\Pictures".
I created a Person form, with a form type of "multiple items form". I then removed the textbox and replaced it with an image control for the Primary Image Path field.
Now the problem is, how do I go about extracting the information from the Drive field in ImageDrive and combining it with the Primary Image path? I need to combine them in order to set the Image control picture.
I have tried using the Expression Builder and came up with the expression [ImageDrive]![Drive]&[Primary Image Drive] for the image control source. However, when I switch to Form View, it shows nothing.
What is the right way to do this?
I'm not sure whether you wanted to do this in VBA, but here is a small bit of code that may help with getting the path. You can combine it with a Recordset to reference your other table of pictures, or use two Recordsets, one for each table.
Dim myR As DAO.Recordset
Dim myR2 As DAO.Recordset
Dim filePath As String
Set myR = CurrentDb.OpenRecordset("ImageDrive", dbOpenDynaset)
Set myR2 = CurrentDb.OpenRecordset("Person", dbOpenDynaset)
'When you open these two Recordsets they will open on the first record of each, FYI
'You can either use myR.FindFirst or a SELECT statement to sort through the records
filePath = myR![Field_name_of_path] & myR2![Field_name_from_other_table]
'Now filePath contains what you need.
'You can also use Environ$("USERPROFILE") & "\My Documents"
'to get something like C:\Users\username\My Documents and set that as a string as well
myR.Close
myR2.Close
Set myR = Nothing
Set myR2 = Nothing
After adding the image control, go to its Control Source and use DLookup to fetch the drive; since the form is built from the fields of the Person table, [Primary Image Path] can be referenced directly. The line to put in the Control Source is:
=DLookUp("[Drive]","ImageDrive","[Drive] IS NOT NULL") & [Primary Image Path]
I'm working with CSV transaction data files that are 350 MB+ and 1,100,000+ lines each.
I was wondering how I can perform some simple, fast queries on these files through the VBA IDE, save the result as a CSV, and then open it as a workbook in Excel.
For example, I want to do this:
Load the CSV into RAM as a table
Remove all rows where the field called transaction_type is recorded as "failed"
Save the result as a new CSV
Open the result as a workbook in Excel
My goal is to do this operation with the highest performance possible. I think this functionality is provided by the Extensible Storage Engine (ESE), but I'm not sure how to use it through the Excel VBA IDE.
Thanks!
You could use a 'text database' and use ADO (or DAO) to query the files. See this article for more information: http://msdn.microsoft.com/en-us/library/ms974559.aspx
That way you can just create a schema.ini file for the file you wish to query, query it using standard SQL, and then simply write your result recordset back to file.
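A sketch of that approach in VBA (the folder, file name, output name, and transaction_type field are assumptions based on the question; with the text driver, the folder acts as the database and each CSV file as a table):

```vba
Sub FilterCsv()
    Dim conn As Object, rs As Object
    Set conn = CreateObject("ADODB.Connection")
    'The folder is the "database"; a schema.ini (if used) sits next to the CSV
    conn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
              "Data Source=C:\Data\;" & _
              "Extended Properties=""Text;HDR=Yes;FMT=Delimited"""
    Set rs = conn.Execute( _
        "SELECT * FROM [transactions.csv] " & _
        "WHERE transaction_type <> 'failed'")
    'Dump the filtered rows into a new workbook, then save it as CSV
    With Workbooks.Add.Worksheets(1)
        .Range("A2").CopyFromRecordset rs
        .Parent.SaveAs "C:\Data\transactions_clean.csv", xlCSV
    End With
    rs.Close
    conn.Close
End Sub
```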
I'm not sure how to use it through the Excel VBA IDE.
Me neither :)
Load the CSV into RAM as a table
Remove all rows where the field called transaction_type is recorded as "failed"
Save the result as a new CSV
Open the result as a workbook in Excel
Here is an alternative.
Load the CSV into an Access database from Excel using ".TransferText"
Example Code
Option Explicit
'~~> Set reference to Microsoft Access Object Library
Sub Sample()
Dim oacApp As Access.Application
Set oacApp = New Access.Application
oacApp.OpenCurrentDatabase "C:\MyDatabase.mdb"
oacApp.DoCmd.TransferText acImportDelim, "", _
"Table1", "C:\Mycsv.csv", True
oacApp.CloseCurrentDatabase
oacApp.Quit acQuitSaveNone
Set oacApp = Nothing
End Sub
Remove all rows where the field called transaction_type is recorded as "failed"
You can do that by running a query from Excel.
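For example (a sketch reusing the oacApp Access object from the code above; the table and field names are assumptions):

```vba
'~~> Remove the failed transactions with a SQL DELETE run through Access
oacApp.DoCmd.RunSQL _
    "DELETE FROM Table1 WHERE transaction_type = 'failed'"
```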
Save the result as a new CSV
Again use ".TransferText" to export it to CSV
Example Code
oacApp.DoCmd.TransferText acExportDelim, "Standard Output", _
"Table1", "C:\MyNewcsv.csv"
Open the result as a workbook in Excel
HTH
Has anyone come across a performance issue when deleting the first row in a 20,000+ row Excel file using the Open XML SDK v2.0?
I am using the row-deletion code suggested in the Open XML SDK documentation. It takes several minutes just to delete the first row using the Open XML SDK, while it takes only a second in the Excel application.
I eventually found out that the bottleneck is the bubble-up approach to row deletion: every row after the deleted one has to be updated, so in my case around 20,000 rows are shifted up one by one.
I wonder if there is a faster way to do the row deletion.
Does anybody have an idea?
Well, the bad news here is: yep, that's the way it is.
You may get slightly better performance by moving outside of the SDK into System.IO.Packaging: create an IEnumerable/List (e.g. with LINQ to XML) of all the rows, copy it to a new IEnumerable/List without the first row, rewrite the r attribute of each <row r="..."/> to be its place in the index, and then write that back inside <sheetData/> over the existing children.
You'd need to do much the same for any strings in the sharedStrings.xml file, i.e. removing the <si> elements (under <sst>) that were used only by the deleted row; but since those are implicitly indexed, you'd be able to get away with just outright removing them.
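A rough sketch of the row-rewriting part with LINQ to XML (the namespace is the standard SpreadsheetML one; the sheet path is an assumption, and note that the r attributes on the cells inside each row would also need rewriting, which is omitted here):

```csharp
using System.Linq;
using System.Xml.Linq;

XNamespace ns =
    "http://schemas.openxmlformats.org/spreadsheetml/2006/main";
XDocument doc = XDocument.Load(@"C:\extracted\xl\worksheets\sheet1.xml");
XElement sheetData = doc.Root.Element(ns + "sheetData");

// Drop the first row, then renumber the survivors from 1
var rows = sheetData.Elements(ns + "row").Skip(1).ToList();
int index = 1;
foreach (XElement row in rows)
    row.SetAttributeValue("r", index++);

sheetData.ReplaceNodes(rows);
doc.Save(@"C:\extracted\xl\worksheets\sheet1.xml");
```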
The approach of unzipping the file, manipulating it and repacking it is very error-prone.
How about this: you say that it works fine in Excel, so have you tried using Interop? It starts a new instance of Excel (either visible or invisible); you can then open the file, delete the line, save, and close the application again.
using System;
using Excel = Microsoft.Office.Interop.Excel;

public void OpenAndCloseExcel()
{
    Excel.Application excelApp = new Excel.Application();
    // Open the workbook (path is an example), grab the first worksheet,
    // delete the first row, save, and quit
    Excel.Workbook wb = excelApp.Workbooks.Open(@"C:\MyFile.xlsx");
    Excel.Worksheet ws = (Excel.Worksheet)wb.Worksheets[1];
    ((Excel.Range)ws.Rows[1]).Delete();
    wb.Save();
    wb.Close();
    excelApp.Quit();
}
The Range object is suited to many purposes, including deleting elements. Have a look at the MSDN Range description. One more hint: Interop drives Excel itself, so all objects have to be addressed with a 1-based index!
For more resources take a look at this StackOverflow-thread.