SQL Profiler CPU / duration units

The output from a SQL Server trace in profiler contains the columns CPU and Duration (amongst others). What units are these values in?

CPU is in milliseconds.
In SQL Server 2005 and later, Duration is in microseconds when saved to a file or a table, and in milliseconds in the user interface. In SQL Server 2000, it is always in milliseconds. From MSDN.
User jerryhung gives more accurate, version-specific information in a comment:
Beginning with SQL Server 2005, the server reports the duration of an event in microseconds (one millionth, or 10⁻⁶, of a second) and the amount of CPU time used by the event in milliseconds (one thousandth, or 10⁻³, of a second). In SQL Server 2000, the server reported both duration and CPU time in milliseconds. In SQL Server 2005 and later, the SQL Server Profiler graphical user interface displays the Duration column in milliseconds by default, but when a trace is saved to either a file or a database table, the Duration column value is written in microseconds.

According to the documentation (for SQL Server Profiler 2016), the default unit for the Duration column is milliseconds.
Show values in Duration column in microseconds
Displays the values in microseconds in the Duration data column of traces. By default, the Duration column displays values in milliseconds.
It can be changed to microseconds in the General Options:
Tools->Options
There is nothing wrong with using the 2016 Profiler against older versions of the DBMS.

I found that in SQL Server 2017, Duration showed as milliseconds in the Profiler view, but when I exported the trace to a table it was stored in microseconds. A bit confusing at first.
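For anyone working with a trace that has been saved back to a file or table, a quick way to confirm the units is to load it and convert. A minimal sketch, assuming SQL Server 2005 or later and a placeholder trace file path:
SELECT TextData,
       CPU             AS cpu_ms,       -- CPU is reported in milliseconds
       Duration        AS duration_us,  -- stored value is in microseconds
       Duration / 1000 AS duration_ms   -- what Profiler shows by default
FROM sys.fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT);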

Related

Fetching rows with Snowflake JDBC while the query is running on the server

I have a complex query that runs a long time (e.g. 30 minutes) in Snowflake when I run it in the Snowflake console. I am making the same query from a JVM application using the JDBC driver. What appears to happen is this:
Snowflake processes the query from start to finish, taking 30 minutes.
JVM application receives the rows. The first receive happens 30 minutes after the query started.
What I'd like to happen is that Snowflake starts to send rows to my application while it is still executing the query, as soon as data is ready. This way my application could start processing the rows in the first 30 minutes.
Is this possible with Snowflake and JDBC?
First of all, I would suggest checking the Snowflake warehouse size and tuning it. It is not worth waiting 30 minutes when resizing the warehouse can cut the query time to a quarter or less. With either of the options below, your cost will stay roughly the same or even drop, because query execution time decreases roughly linearly as you increase the warehouse size. Refer to the link.
Scale up by resizing a warehouse.
Scale out by adding clusters to a warehouse (requires Snowflake Enterprise Edition or higher).
Now coming to JDBC, I believe it behaves the same way as it does with other databases.
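For reference, a minimal sketch of the two scaling options in Snowflake SQL (the warehouse name MY_WH is a placeholder):
-- Scale up: resize the warehouse before running the heavy query
ALTER WAREHOUSE MY_WH SET WAREHOUSE_SIZE = 'LARGE';
-- Scale out: allow extra clusters (requires Enterprise Edition or higher)
ALTER WAREHOUSE MY_WH SET MIN_CLUSTER_COUNT = 1, MAX_CLUSTER_COUNT = 3;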

How to validate the results VSTS 2013 ultimate

I am doing load testing using VSTS 2013 and need to validate the results that were received. For example, 800 reports were created for 200 users. Results are displayed with values such as Min (0.009), Avg (1.2), Max (9.5), 95% (7.03), etc. I need to know whether the values shown here are transaction times in seconds for each user.
Visual Studio load test results are displayed in seconds; you can see how to read the results here. These times are calculated across all requests, not per user (min, max, average, 95%, ...). Basically, if your average is 1.2 s, then a user will most likely have to wait 1.2 seconds per request on average.

Azure SQL Data IO 100% for extended periods for no apparent reason

I have an Azure website running about 100K requests/hour and it connects to Azure SQL S2 database with about 8GB throughput/day. I've spent a lot of time optimizing the database indexes, queries, etc. Normally the Data IO, CPU and Log IO percentages are well behaved in the 20% range.
A portion of the recent data throughput is retained to support our customers. I have a nightly maintenance procedure that removes obsolete data to keep the database size under control. This mostly works well, with the exception of removing image blobs stored in a varbinary(max) field.
The nightly procedure has a loop that sets the varbinary(max) field to NULL on 10 records at a time, waits a couple of seconds, then moves on to the next 10. The nightly total for this loop is about 2,000 records.
This loop will run for about 45-60 minutes and then stop, with no return to my remote SQL Agent job and no error reported. A second, and sometimes a third, run of the procedure is necessary to finish setting the desired blobs to NULL.
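For illustration, a rough sketch of the kind of batched loop described above (the table and column names are placeholders, not my actual schema):
DECLARE @batch int = 10;
WHILE EXISTS (SELECT 1 FROM dbo.Documents WHERE IsObsolete = 1 AND ImageBlob IS NOT NULL)
BEGIN
    -- null out 10 blobs at a time, then pause so other work can get through
    UPDATE TOP (@batch) dbo.Documents
    SET ImageBlob = NULL
    WHERE IsObsolete = 1 AND ImageBlob IS NOT NULL;
    WAITFOR DELAY '00:00:02';
END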
In an attempt to alleviate the load on the nightly procedure, I started running a job once every 30 seconds throughout the day - it sets one blob to null each time.
Normally this trickle job is fine and runs in 1-6 seconds. However, once or twice a day something goes wrong and I can find no explanation for it. The Data I/O percentage peaks at 100% and stays there for 30-60 minutes or longer. This causes the database responsiveness to suffer, and the website performance goes with it. The trickle job also reports running for this extended period of time. If I stop the SQL Agent job, it can take a few minutes to stop, but the Data I/O continues at 100% for the 30-60 minute period.
The web service requests and database demands are relatively steady throughout the business day - no volatile demands that would explain this. No database deadlocks or other errors are reported. It's as if the database hits some kind of backlog limit where its ability to keep up suddenly drops and then it can't catch up until something that is jammed finally clears. Then the performance will suddenly return to normal.
Do you have any ideas what might be causing this intermittent and unpredictable issue? Any ideas what I could look at when one of these events is happening to determine why the Data I/O is 100% for an extended period of time? Thank you.
If you are on SQL DB V12, you may also consider using the Query Store feature to root cause this performance problem. It's now in public preview.
In order to turn on Query Store just run the following statement:
ALTER DATABASE your_db SET QUERY_STORE = ON;
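Once Query Store has collected some data, a query along these lines (a sketch, not a definitive diagnostic) can surface the statements responsible for the heaviest logical I/O:
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.avg_logical_io_reads * rs.count_executions) AS total_logical_reads
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_logical_reads DESC;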

Why when using MS Access to query data from remote Oracle DB into local table takes so long

I am connecting to a remote Oracle DB using MS Access 2010 and ODBC for Oracle driver
In MS Access it takes about 10 seconds to execute:
SELECT * FROM SFMFG_SACIQ_ISC_DRAWING_REVS
But takes over 20 minutes to execute:
SELECT * INTO saciq_isc_drawing_revs FROM SFMFG_SACIQ_ISC_DRAWING_REVS
Why does it take so long to build a local table with the same data?
Is this normal?
The first part is reading the data and you might not be getting the full result set back in one go. The second is both reading and writing the data which will always take longer.
You haven't said how many records you're retrieving and inserting. If it's tens of thousands then 20 minutes (or 1200 seconds approx.) seems quite good. If it's hundreds then you may have a problem.
Have a look here https://stackoverflow.com/search?q=insert+speed+ms+access for some hints as to how to improve the response and perhaps change some of the variables - e.g. using SQL Server Express instead of MS Access.
You could also do a quick speed comparison test by trying to insert the records from a CSV file and/or Excel cut and paste.

OpenRowSet, OpenQuery, OpenDataSource - which is better in terms of performance

This can be a debatable answer, but I'm looking for the case where a local Excel file needs to be exported to a local SQL Server 2008 table.
Has anyone ever had the chance to check execution time to compare OpenRowSet/OpenQuery/OpenDataSource for a very large file import in SQL Server 2008?
I'm able to use any of the 3 options, and the query can be executed from anywhere. However, the data source (Excel) is in the same server as the SQL Server.
Any pointers would be helpful.
It's been nearly 12 years since this question was asked, and it has been viewed 2k times, which means people are still interested in the answer... even today, I wanted to get to this answer...
So I created an Excel spreadsheet with 100k rows, registered it as a linked server, and then compared the average duration of four different types of open queries against this data. Here are the results:
There's a bit of setup to do that requires administrator privileges on the SQL Server, registering an OLEDB provider, and acquiring permissions on the file.
This test was run on a 2016 version of SQL Server Enterprise (64-bit).
Each query was run through 12 cycles, then averaged and rounded.
1. Test for OpenRowset:
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 12.0 Xml;Database=C:\temp\sample100k.xlsx;',
[Sample100k$]);
CPU Time: 4705 ms
Elapsed Time: 7894 ms
2. Test for OpenDatasource
SELECT *
FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0',
'Data Source=C:\temp\sample100k.xlsx;Extended Properties=EXCEL 12.0')
...[Sample100k$];
CPU Time: 4794 ms
Elapsed Time: 7918 ms
3. Test for a direct query on a Linked Server
/* Configuration. Only run once for setting up the linked server */
/* Note that this step needs to take place for the third and fourth tests */
EXEC sys.sp_addlinkedserver @server = N'SAMPLE100K',
    @srvproduct = N'Excel',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @datasrc = N'C:\temp\sample100k.xlsx',
    @provstr = N'Excel 12.0';
SELECT * FROM [SAMPLE100K]...[sample100k$];
CPU Time: 4919 ms
Elapsed Time: 7934 ms
4. Test for OpenQuery on a Linked Server
/* Assume linked server has been registered, as mentioned in the third test */
SELECT * FROM OPENQUERY(SAMPLE100K, 'SELECT * FROM [sample100k$]');
CPU Time: 3569 ms
Elapsed Time: 5643 ms
I did not expect these results; it appears that test 4 (SELECT * FROM OPENQUERY...) performed 20% faster than the average and over 25% faster than the linked server query in test 3 (SELECT * FROM SAMPLE100K...).
I'll let the OP and other readers determine whether or not they should really use any of these methods compared to doing a table import, a BCP, an SSIS ETL package or some other method.
I'm simply providing an answer to the question for Stack Overflow visitors who visit this page every other day.
