In our PICK database, SELECT/LIST/SORT statements against one particular file have suddenly become extremely slow. At first I thought it was a file-resize issue, but I've resized the file and had no luck. Any ideas or suggestions?
e.g. SELECT <FILENAME> WITH LOAD.DATE = "TODAY"
This type of select used to be almost instant, but out of nowhere it now takes over 5 minutes to process.
Any ideas will help.
I've noticed that my PowerQuery report can run very slowly, and not only when refreshing: when I launch the PowerQuery Editor, it can take ~30 minutes to get to the last query in the report to see what has been loaded and then calculated.
The report uses 8 different .csv files as inputs (not very large, <1000 rows and <15 columns each).
On these inputs I do joins and groupings multiple times, but apart from that there are no other 'heavy' calculations (only some sums, averages, percentages, and a few nested ifs).
So I would have thought it shouldn't be too complex for PowerQuery to handle, but sometimes (not always!) it takes a really long time to get 'inside'.
Yesterday I worked on it almost all day (I had other jobs to do at work too, of course). In the morning it took <1 min to refresh, and after launching the PowerQuery Editor it was rather quick to get to every query in the report.
In the afternoon, with the same inputs, it took ~3 minutes to refresh, and when I launched the PowerQuery Editor it took almost 30 minutes to get to the last query in the report (my record wait time).
Do you know why this is happening? I have a feeling it may be related to some Excel/PowerQuery settings, but I'm not sure where to start.
I also had a strange situation when turning off the annoying pop-up message about Privacy Levels when using native database queries (Data > Get Data > Query Options > Security section, where the first box needs to be unchecked):
I had it unchecked already, but after I showed it to my colleague it took a very long time for the message about ignoring Privacy Levels to pop up (it shouldn't have popped up at all, as I had the relevant box unchecked). I then had to confirm that I wanted to ignore Privacy Levels, and after that it refreshed in normal time (that was all on the same PowerQuery report, just a few days earlier).
Thanks in advance for your help on this.
Ania
I have come across file corruption issues like this with PowerQuery in the past.
PowerQuery should not take that long to launch. A similar situation happened to me: I performed an MS Office repair and rebuilt the workbook in a new Excel file, which solved the issue.
I thought for sure this would be an easy issue, but I haven't been able to find anything. In SQL Server SSMS, if I run a SQL statement, I get back all the records of that query, but in Oracle SQL Developer I apparently get back at most 200 records, so I cannot really test the speed or look at the data. How can I increase this limit to be as large as I need, to match how SSMS works in that regard?
I thought this would be a quick Google search, but it seems very difficult to find, if it is even possible. I found one article on Stack Overflow that states:
You can also edit the preferences file by hand to set the Array Fetch Size to any value.
Mine is found at C:\Users\<user>\AppData\Roaming\SQL Developer\system4.0.2.15.21\o.sqldeveloper.12.2.0.15.21\product-preferences.xml on Win 7 (x64).
The value is on line 372 for me; I have changed it to 2000 and it works for me.
But I cannot find that location. I can find the SQL Developer folder, but my system folder is 19.xxxx and there is no corresponding file in that location. I searched for "product-preferences.xml" and couldn't find it anywhere in the SQL Developer folder. Not sure if Windows 10 has a different location.
As such, is there any way I can edit a config file of some sort to change this setting, or any other way to do it?
If you're testing execution times, you're already good. Adding more rows to the results grid just adds fetch time.
If you want to include fetch time in your testing, execute the query as a script (F5). However, this still has a maximum number of rows it will print to the screen, also set in Preferences.
Your best bet, I think, is the Autotrace feature. You can tell it to fetch all the rows, and you'll also get a ton of performance metrics and the actual execution plan.
(Screenshots omitted: check that last box in the Autotrace preferences, then use the Autotrace button to run the scenario.)
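If you also want a way to measure outside the GUI, one option (a minimal sketch, assuming you can run the statement yourself and query DBMS_XPLAN; my_table and load_date are made-up names) is the gather_plan_statistics hint plus DBMS_XPLAN.DISPLAY_CURSOR:

    -- Run the statement with row-source statistics enabled
    -- (my_table / load_date are placeholders for your own query).
    SELECT /*+ gather_plan_statistics */ *
    FROM   my_table
    WHERE  load_date = TRUNC(SYSDATE);

    -- Show the actual plan plus actual vs. estimated row counts
    -- for the last statement executed in this session.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

If you run the first statement as a script, the timing will include the fetch as well (up to the script output limit mentioned above).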
I'm trying to tune an existing view. I'm sorry for not posting an example, but I wasn't able to replicate the problem, and I can't understand the behavior.
My view (view A) is based on another view (view B) and a table (table C). The select list uses some fields from these plus some package calls.
The runtime of a specific SELECT is nearly 32 seconds.
I analyzed the statement and started optimizing view B. I dropped all the columns I don't need in view A and reduced the overhead of view B.
After this, the SELECT on view A was 5 seconds faster. I executed the SELECT multiple times to get a valid average execution time, just to be sure.
A few minutes later I executed the statement again and got 32 seconds. I ran it multiple times, but it wouldn't get any faster.
There is no other traffic on this database, and the amount of data didn't change. So far this is the first statement where I have had problems getting reasonable results while analyzing it.
The explain plan, which I checked first, looks fine: no full table scan (I know an FTS is not inherently bad). I have no idea why the execution time is so unstable, which makes it hard to optimize the view and compare results.
I know this sounds like a dumb question, but I can't see the problem or come up with an idea of what to do next.
Thanks, and sorry for my bad English.
Unstable execution time
1) Did you clear the buffer cache between SELECT statements?
2) In some situations you can improve your functions and views with result caching, but you have to figure out whether it is applicable to your problem (there is a longer story about the result cache at http://www.oracle.com/technetwork/articles/datawarehouse/vallath-resultcache-rac-284280.html).
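As a rough sketch of both points (assuming a test instance where you are allowed to use ALTER SYSTEM, and a made-up get_rate function and rates table standing in for your own package call):

    -- Flush the caches between test runs so each execution starts cold
    -- (only do this on a test system).
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH SHARED_POOL;

    -- Hypothetical package-style lookup rewritten as a result-cached function:
    -- repeated calls with the same argument are served from the result cache.
    CREATE OR REPLACE FUNCTION get_rate(p_id NUMBER)
      RETURN NUMBER
      RESULT_CACHE
    IS
      v_rate NUMBER;
    BEGIN
      SELECT rate INTO v_rate FROM rates WHERE id = p_id;
      RETURN v_rate;
    END;
    /

Comparing cold runs against each other (and warm runs against each other) should at least tell you whether caching explains the unstable timings.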
Today I'm wondering why the (AX 2009) LedgerTransVoucher form only seems to load part of the query results at a time. If the results include, say, 35K rows, only 10K are loaded at once. And if the user decides to export the results to Excel, they only get 10K rows.
Why is this? 10K is such a clean number that I'm thinking of a parameter somewhere, but I have no idea where it could be hidden.
And yes, I know they should be using a report instead :)
Alas, this apparently had nothing to do with AX as such, but was some conflict with Citrix. False alarm, it seems.
This is my first question. I've searched for a lot of info on different sites, but none of it was conclusive.
Problem:
Every day I load a flat file with an SSIS package executed by a scheduled job in SQL Server 2005, but it's taking TOO MUCH TIME (about 2 1/2 hours), even though the file only has about 300 rows and is roughly 50 MB. This is driving me crazy, because it's affecting the performance of my server.
This is the scenario:
- My package is just a Data Flow Task with a Flat File Source and an OLE DB Destination, that's all.
- The data access mode is set to fast load.
- The table has just 3 indexes, all nonclustered.
- My destination table has 366,964,096 records so far and 32 columns.
- I haven't set FastParse on any of the output columns yet (I want to try something else first).
So I've started to run some tests:
- Rebuilt/reorganized the indexes on the destination table (they were way too fragmented), but this didn't help much.
- Created another table with the same structure but without any of the indexes, and executed the job with the SSIS package loading into this new table: IT TOOK ABOUT 1 MINUTE!
So I'm confused. Is there something I'm missing?
- Is the SSIS package writing the whole large table to a buffer and then writing it to disk? Or why the BIG difference in time?
- Are the indexes affecting the insertion time?
- Should I load the file into this new table as a staging table and then do an ordered BULK INSERT / INSERT into the destination table (see the sketch below)? I thought the Data Flow Task was much faster than BULK INSERT, but at this point I'm not so sure anymore.
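Something like this is what I have in mind (just a sketch; the names dbo.StageLedger, dbo.FactLedger and LoadDate are made up for illustration):

    -- 1) Let the SSIS Data Flow load into an index-free staging table (dbo.StageLedger).
    -- 2) Then move the rows into the real destination in one ordered insert;
    --    if the destination has a clustered index, ordering by its key can help.
    INSERT INTO dbo.FactLedger WITH (TABLOCK)
    SELECT *
    FROM   dbo.StageLedger
    ORDER BY LoadDate;

    TRUNCATE TABLE dbo.StageLedger;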
Thanks in advance.
One thing I might look at is whether the large table has any triggers that slow down inserts. Also, if the clustered index is on a field that requires a good bit of rearranging of the data during the load, that could cause issues as well.
In SSIS packages, using a Merge Join (which requires sorting) can cause slowness, but from your description it doesn't appear you did that. I mention it only in case you were doing it and didn't mention it.
If it works fine without the indexes, perhaps you should look into those. What are the data types? How many indexes are there? Maybe you could post their definitions?
You could also take a look at the fill factor of your indexes, especially the clustered index. Having a high fill factor can cause excessive I/O on your inserts.
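For example, something along these lines (just a sketch; dbo.FactLedger is a made-up name, substitute your own table):

    -- Any triggers on the destination table that might slow the insert down?
    SELECT name, is_disabled
    FROM   sys.triggers
    WHERE  parent_id = OBJECT_ID('dbo.FactLedger');

    -- Rebuild all indexes on the table with a lower fill factor
    -- to leave free space on the pages and reduce page splits on insert.
    ALTER INDEX ALL ON dbo.FactLedger
    REBUILD WITH (FILLFACTOR = 80);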
Well, I rebuilt the indexes with a different fill factor (80%) like Sam suggested, and the time dropped significantly: it took 30 minutes instead of almost 3 hours!
I will keep testing to fine-tune the DB. I also haven't had to create a clustered index yet; I guess with a clustered index the time would drop a lot more.
Thanks to all. I hope this helps someone in the same situation.