SQL Server Express performance problems

Situation
I have SQL Server Express 2008 R2 running. Ten users read from and write to the same tables constantly, day and night, through stored procedures.
Problem
The performance of the stored procedures degrades steadily as the database grows.
A stored procedure call takes about 10 ms on average when the database is about 200 MB.
The same call takes about 200 ms on average when the database is about 3 GB.
So we have to clean up the database once a month.
We have already optimized the indexes on some tables, with positive effects, but the problem still exists.
Finally, I'm not a SQL Server expert. Could you give me some hints on where to start in getting rid of this performance problem?

Download and read Waits and Queues
Download and follow the Troubleshooting SQL Server 2005/2008 Performance and Scalability Flowchart
Read Troubleshooting Performance Problems in SQL Server 2005
The SQL Server Express edition limitations (1 GB buffer pool memory, only one CPU socket used, 10 GB database size) are unlikely to be the issue. Application design, bad queries, excessive locking and concurrency contention, and poor indexing are more likely to be the problem. The linked articles (especially the first one) include a methodology for identifying the bottleneck(s).
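As a starting point for the waits-and-queues methodology, here is a rough sketch (the SLEEP filter is only an illustrative exclusion; the DMV itself is standard) that lists the heaviest wait types on the instance:

SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats          -- cumulative waits since the last restart
WHERE wait_type NOT LIKE 'SLEEP%'  -- ignore idle/sleep waits for readability
ORDER BY wait_time_ms DESC;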

This is most likely a simple programming mistake. It sounds like you have one of the following:
Improper indexing on some tables. This is not "optimization": bad indexes are like broken HTML for web people. If you have no index, you are basically not using SQL the way it is supposed to be used; you should always have proper indexes.
Not enough hardware, such as RAM. Yes, Express can manage a 10 GB database, but if your hot set (the stuff accessed all the time) is 2 GB and you only have 1 GB, it will hit disk more often than it needs to.
Slow disks. This is particularly an Express problem, because most people do not bother to set up a proper disk layout. Then they run a SQL database against a slow 200-IOPS end-user disk, whereas, depending on the need, a SQL database wants many spindles or an SSD (a typical SSD these days delivers 40,000 IOPS).
That is it, in the end, plus possibly really bad SQL. A typical filter error is someformula(field) LIKE value, which means "forget your index, please do a table scan and calculate someformula(field) for every row before checking".
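To illustrate that filter error with a hedged sketch (table and column names are made up): wrapping the indexed column in a function forces a scan, while comparing the raw column against a range lets the index be used.

-- Non-sargable: the function on the indexed column forces a table/index scan.
SELECT OrderId FROM dbo.Orders
WHERE CONVERT(varchar(10), OrderDate, 120) = '2011-06-01';

-- Sargable rewrite: compare the raw column against a range instead.
SELECT OrderId FROM dbo.Orders
WHERE OrderDate >= '2011-06-01' AND OrderDate < '2011-06-02';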

First, SQL Server Express is not the best edition for your requirement. Get the Developer edition to test with: it is exactly like Enterprise, but free as long as you don't use it in production.
About the performance: there are many things involved here, and you can improve it with anything from indexes to partitioning. We need more info to provide help.

Before optimizing your SQL queries, you need to find the hotspots among them. Usually you would use SQL Profiler for this on SQL Server. The Express edition has no such tool, but you can work around it with a few queries:
Return all recent queries:
SELECT *
FROM sys.dm_exec_query_stats
ORDER BY total_worker_time DESC;
Return only the top time-consuming queries:
SELECT total_worker_time, execution_count, last_worker_time, dest.TEXT
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY total_worker_time DESC;
Now you should know which query needs to be optimized.

It may be poor indexes, poor database design, missing normalization, unwanted column indexes, or poor queries that take a long time to execute.

SQL Server Express is built for testing purposes and its performance is deliberately limited by Microsoft. If you use it in a production environment, you may want to get a license for SQL Server.
Have a look here SQL Express for production?

Related

How can I Improve Direct Query performance between Microsoft's Power BI Service and our SQL Virtual Machine?

We have a B4ms VM running SQL Server (as well as a web server). We have installed the Power BI Gateway on it to build reports on the on-prem data.
Basically, the user can sign in to the server and view Power BI reports in the browser.
I find it a bit dumb that the user has to query Power BI for the data, which in turn gets it from the machine, but perhaps there is no other way.
The issue we are running into is that some visuals take a huge performance hit when loading. Some even seem to exceed the available resources.
I know it's somewhat of a broad question to ask, but maybe specifically: is there a way to improve the connection between the VM and the Power BI service?
It will depend on the type of query that you are sending down to SQL Server. For a number of projects that I have deployed, I have used DirectQuery over data sources of at least 50-100 GB; however, these have mostly been standard star schema data warehouses or a defined reporting table, both with the relevant indexes, covering indexes, or columnstore indexes to allow more efficient retrieval of data. These have been on Azure SQL and on-prem SQL instances.
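For example, on a defined reporting table the indexes mentioned above might look roughly like this (table and column names are purely illustrative):

-- Covering index for a common report filter, with the remaining columns included.
CREATE NONCLUSTERED INDEX IX_FactSales_SaleDate
    ON dbo.FactSales (SaleDate)
    INCLUDE (CustomerKey, Amount);

-- Or, for large analytical scans, a nonclustered columnstore index over the same columns.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
    ON dbo.FactSales (SaleDate, CustomerKey, Amount);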
DirectQuery mode will slow down because of the number of queries it has to run on the data source based on the measures, the relationships and the connection overhead. Another factor can be the number of visuals on a page, since each visual is a query and each one has to run on the data source.
One other method to increase the speed of DirectQuery is to use aggregations in Power BI, to store an imported subset of the data in Power BI. If a query can be answered by the aggregation layer, it will be answered more quickly. Microsoft demonstrated this with the 'Trillion Row Demo'.
In terms of Power BI DirectQuery issues, among the range of clients that I work with, those that do have issues with DirectQuery tend to have a mash-up of tables in an inefficient schema, run suboptimal queries on the data source, do a number of data transformations in DAX, and have badly written DAX measures, for example lots of DISTINCT COUNTs and SWITCHes.
For the connection, make sure you have the latest data gateway installed and updated, as optimizations to the mashup engine can make it faster. Another option would be to shift the DB to Azure SQL Database and remove the need for the gateway.
For DirectQuery reports you need to examine the generated SQL and evaluate its execution on SQL Server. You can use the Performance Analyzer in Power BI Desktop to capture the DAX and SQL generated as your DirectQuery model interacts with SQL Server, and then use SQL Server Management Studio and the Query Store to examine the execution plans and indexing options.
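If Query Store is available on your SQL Server version (2016 or later), a rough sketch for surfacing the slowest captured statements might look like this (the database name is a placeholder):

ALTER DATABASE YourReportingDb SET QUERY_STORE = ON;  -- placeholder name, run once per database
GO
-- Top statements captured by Query Store, ranked by average duration.
SELECT TOP (10)
    qt.query_sql_text,
    rs.avg_duration,
    rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;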

Microsoft Access equivalent of explain in MySQL

I'm working on a very large query in an inherited application. It is a large insert query that works across 4 tables with well over a million records. I know, I would also rather have this in SQL Server, but there is no infrastructure at this customer to do that :-)
This query has worked for over a year. However, the source tables keep growing, and last week it threw the dreaded 'out of system resources' error. Bummer...!
I think it is possible to optimize this query. Working in MySQL, I would use the EXPLAIN command to see where optimization might be possible. Is there an equivalent of this in Access? I cannot seem to find it....
kind regards,
Paul
Probably Jet ShowPlan is closest to what you want. You will have to set a registry key. Then query plan information gets dumped to a text file named SHOWPLAN.OUT. You can read about the details in this article on TechRepublic: Use Microsoft Jet's ShowPlan to write more efficient queries
Also try the Performance Analyzer wizard. You can ask it to examine your query alone, or also ask it to examine table or other queries used by that query.
If you haven't compacted the database recently, see whether that improves performance. Compacting also updates index statistics which allows the engine to make better decisions for the query plan.

Doesn't Read-Only make a difference for SQL Server?

I've been tasked with optimizing a rather nasty stored procedure in a legacy system. It's a database dedicated to search, and a new copy is generated every day, with a lot of complex joins being de-normalized. No writes are performed, only SELECTs, so I figured some easy improvements could be made by making the whole database read-only and changing the recovery model to "Simple".
Much to my surprise, this didn't help at all! The stored procedure still takes the same amount of time to complete. In fact, I'm so surprised that I figured I did it wrong!
My questions:
Do I need to do anything other than setting “Database read-only” to “true”?
Am I wrong to expect significant performance improvement by making the database read-only?
Same for the recovery model: Shouldn’t “Simple” have some noticeable impact?
Are there other similar database-wide configurations that can improve performance in this scenario?
The stored procedure is huge, with temporary tables, 40+ tables joined in 20+ queries. But I’d like to optimize the database itself before I edit this proc.
Since no writes are performed by your SP, there is no reason to expect a noticeable performance improvement from changing the recovery model or the read-write mode.
As others mentioned, you should look into the query plan and optimize your queries.
Another hint: the indexes in the database might have become fragmented while the database was being filled. Since the data is not going to be modified any more, it might help to rebuild all the indexes with fill factor 100; this can get rid of the fragmentation and compact the data.
Call this for each table in the database: ALTER INDEX ALL ON table_name REBUILD WITH (FILLFACTOR = 100).
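A quick sketch that generates that statement for every user table in the current database (the FILLFACTOR of 100 is taken from the suggestion above):

-- Build one ALTER INDEX ... REBUILD statement per user table and run the whole batch.
DECLARE @sql nvarchar(max);
SET @sql = N'';
SELECT @sql = @sql + N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
            + N' REBUILD WITH (FILLFACTOR = 100);' + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;
EXEC sys.sp_executesql @sql;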
Generally, I wouldn't expect much of a performance improvement from this, but it depends on the particular database.
Speaking of query optimization, there are very useful features in SQL Server 2005 and later: the execution-related and index-related dynamic management views. In particular, sys.dm_exec_query_stats and the missing-index DMVs are of interest.
These give you almost the same information as the Tuning Advisor, but based on your real-life workload, so you don't need to simulate a workload and feed it to the Advisor.
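A hedged sketch of the missing-index side of those DMVs (the ranking expression is just one common way to weigh the suggestions; treat the output as hints, not commands):

-- Missing-index suggestions recorded by the optimizer, ranked by estimated impact.
SELECT TOP (10)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;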
Have you tried using the Database Engine Tuning Advisor included in SQL Server? It will analyze your query and suggest new indexes that will improve the performance of the query. Some of them will be good, some will be bad (for example, I've seen it suggest adding every column in a table to an index, sometimes like 30 of them!), so I don't follow it blindly. Generally I'll add a few indexes and then retest, to find the suggestions that are the most important. I've used it to optimize many queries that I thought I had properly indexed, only to find I could get a lot more performance out of them.
I had a similar setup: large stored procedures with lots of large temp tables.
Our problem was that the joins with and between the temp tables were very slow.
I recommend that you look at your execution plan and try to add relevant indexes to the temp tables too, if you have not already.
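As a minimal sketch (the temp table and columns are made up), an index on a temp table is created just like one on a permanent table, ideally before the big joins run:

-- Temp table used in the procedure, with an index on the join column.
CREATE TABLE #Orders (OrderId int NOT NULL, CustomerId int NOT NULL, Amount money NULL);
CREATE NONCLUSTERED INDEX IX_tmpOrders_CustomerId ON #Orders (CustomerId);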

Oracle metrics monitoring and reporting in real time

I am stress testing a database table.
I am looking for software that can connect to my database and show me some metrics, like the number of rows in a table, time for inserts, inserts/time, table fragmentation [logical/physical], etc.
It would be great if the reporting tool can do the following:
1] Report in real time, or at least after some interval, so that I do not have to wait for the test to finish to get a first look at the data
2] Ability to do stuff with the data later, like getting the 99.99th percentile, averages, etc.
Is mostly freely available :)
Does anyone have any suggestions for something I can use with my Oracle table? Any pointers would be great.
I can actually write scripts to log stuff like select count(*) etc., but then I will have to spend a lot of time parsing and massaging the data for reporting rather than running the tests.
I think something intelligent might already be out there??
Thanks
Edit:
I am looking at a piece of design for a new architecture. The tests are "comparison" tests for different designs, and hence as long as I do them on the same hardware and same schema etc. they are comparable to some granularity. I want to monitor index fragmentation, response times etc. If you think there are other things that can change, please let me know. I am trying to roll back the table to a particular state [basically truncate] for each new iteration of the test.
First, Oracle has built-in functionality for telling you the number of rows in a table (either use count(*), or search for 'gather statistics oracle' for another option).
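For example (the table name is a placeholder), you can either count the rows directly or gather statistics and read the stored row count:

-- Exact count (scans the table or an index).
SELECT COUNT(*) FROM my_table;

-- Or gather optimizer statistics and read the row count recorded by them.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MY_TABLE');
SELECT num_rows FROM user_tables WHERE table_name = 'MY_TABLE';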
But "stress testing a table" sounds to me like you're going down the wrong path. Most of the metrics you're mentioning ("time for inserts , inserts/time, table fragmentation[logical/physical] etc") are highly dependent on many factors:
what OS Oracle's running on
how the OS is tuned (i.e. other services running)
how the specific Oracle instance is configured
what underlying storage architecture Oracle's using (and how tablespaces are configured)
what other queries are being executed in the database at the exact same time as your test
But NONE of them would be related to the table design itself.
Now, if you're wondering if your normalized (or de-normalized) table schema is hurting your application, that's another matter. As is performance being degraded by improper/unneeded/missing indexes, triggers, or a host of other problems.
But if you really want an app that will give you real-time monitoring, check out Quest Software's Spotlight on Oracle. But it's definitely not free.
Just to add to the other comments, I believe what you really want is to stress test the queries you're running and not the table. The table is just a bunch of data blocks on a disk and the query is what will make the difference in performance as far as development is concerned. That will tell you if you need different indexes or need to redesign the query.
On the other hand, if you're looking at it as a DBA or system administrator, you're probably more interested in OS level statistics especially disk latency, memory paging, and CPU utilization.
All of this is available in Enterprise Manager, which is my primary tuning tool for development and DBA work. If you don't have that, read up on using sql_trace to profile your queries, and check your OS-specific documentation for how to get those stats.
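If you do go the sql_trace route, a minimal sketch at the session level looks like this (the resulting trace file lands in the instance's trace directory and can be formatted with the tkprof command-line tool):

-- Enable tracing for the current session, run the workload under test, then disable it.
ALTER SESSION SET sql_trace = TRUE;
-- ... run the queries being tested here ...
ALTER SESSION SET sql_trace = FALSE;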

SQL Server vs MS Access performance

I have an existing application that uses SQL Server 2005 as a back-end. It contains a huge number of records; I need to join tables that contain 50K-70K rows. The client machine has low-end hardware.
So, can I improve performance by using MS Access as the back-end? I also need to run search operations on the Access file. So, which one is better for performance?
Is querying Access faster than querying SQL Server on low-end hardware?
Because SQL Server runs as a separate process, caches results, and uses RAM and processing power even when not being queried, if the other computer has very little RAM or a very slow processor (or, perhaps even more importantly, a single-core processor), I could see a situation where SQL Server is actually SLOWER than using MS Access.
Without information about your hardware setup, approximately what percentage of your application relies on querying the database, etc., I'm not sure this question can be easily answered.
MS SQL Server 2005 Express requires at least 512 MB RAM (see http://www.microsoft.com/sqlserver/2005/en/us/system-requirements.aspx), so if your lower-end hardware doesn't have at least 512MB, I would certainly choose MS Access over SQL Server.
I should also add that you may want to consider SQLite (see http://www.sqlite.org/), which should have significantly less overhead than MS SQL Server. I'm not certain how it would stack up against MS Access over something like Jet; my gut instinct is that it would perform better with less overhead.
70,000 records is really not that big for SQL Server (or Access, for that matter). I would echo what has already been said and say that, all things being equal, SQL Server will outperform Access.
I would go back to your query and look at the execution plan to see why it is so slow; missing indexes, out-of-date statistics or a whole host of other reasons could explain your current performance problems.
SQL Server also gives you the option of using materialised views (indexed views) to help with performance. The trade-off is slower insert/update/delete performance, but if you read more than you write it might be worth it.
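A rough sketch of such an indexed ("materialised") view in SQL Server, with made-up table and column names and assuming Amount is NOT NULL (a requirement for SUM in indexed views):

-- Schema-bound view that pre-aggregates, plus the unique clustered index that materialises it.
CREATE VIEW dbo.vSalesByCustomer
WITH SCHEMABINDING
AS
SELECT CustomerId, COUNT_BIG(*) AS OrderCount, SUM(Amount) AS TotalAmount
FROM dbo.Orders
GROUP BY CustomerId;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByCustomer ON dbo.vSalesByCustomer (CustomerId);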
I think Albert Kallal's comment is right, and the fact is that if you have a single-user app running on a single workstation (the Access client with SQL Server running on the same workstation), it will quite often be slower than if the setup on that workstation were an Access client with a Jet/ACE back end on the same machine. SQL Server adds a lot of overhead that delivers no benefit when there is no network between the client and the SQL Server.
The performance equation flips when there's a network involved, even for a single-user app. If the Access client runs on a workstation, and the SQL Server on a server on the other end of a network connection (even a fast one), it will likely be faster than if the data is stored in a Jet/ACE file on a file server.
But it's not a given, in my opinion. It depends entirely on the engineering of the application and the excellence of the schema.
I tried SQL Server Express 2005 vs MS Access 2010. Many people said SQL Server would run faster than MS Access (I thought so too at first). But what happened then surprised me: running the query with MS Access was significantly faster than with SQL Server (with the same data and structure, because I had converted from Access to SQL Server beforehand).
But I don't know how it performs for other operations like insert, update and delete yet.
Local SQL Server Express 2014: about 2,200 records in 1 minute (2,200x connect to DB and retrieve 1 record)
External SQL Server Express 2014 (different IP): about 2,200 records in 1 minute (2,200x connect to DB and retrieve 1 record)
External SQL Server 2000 (old server): about 10,000 records in 1 minute (10,000x connect to DB and retrieve 1 record)
Local Access database: about 55,000 records in 1 minute (55,000x connect to DB and retrieve 1 record)
We were also surprised.
I'll answer the question directly, but first it is important to know a few things about Access and SQL.
In general, I have found that a small database with up to 10K records will perform equally well on both Access and SQL Server if all machines have reasonable hardware. Access has the benefit of simplicity for a small number of users, up to 4, but also has a size limitation of 2 GB, so you need to be careful that the database size stays below this limit. Some databases start small but then have a way of growing over time; something to keep in mind when planning for the future of your program and/or database. If you might approach the 2 GB limit, one option is to use Microsoft SQL Server 2014 Express edition, which has a database size limit of 10 GB. SQL Express is full SQL Server, but with size limitations. Full-blown SQL Server 2014 has a maximum database size of 524 PB (524,000,000 GB), so it would be fair to say it has no practical limit.
If your database has more than 10K records and especially for larger databases of 100K records or more, SQL can demonstrate significant performance gains.
Some of this performance can be achieved with MS Access by using "pass-through queries", as with any program that sends SQL-optimized queries to the server.
Why? The answer comes from how the technology works under the hood. With Access, if it is not using pass-through queries, it will read an entire table, find which records it needs and then show the result. With a program using SQL-optimized queries, the SQL engine returns just the results, in a very efficient manner.
At the end of the day, if you have a small (<10K record) database used by up to 4 people, MS Access might make sense. If you have plans for the database to grow to more than 10K records or to be used by more than 5 users, SQL Server would be the logical choice.
Specifically, for the question posed about a 50-70K record database: I think that if you have reasonable hardware, SQL Server will generally perform better; if you have a unique situation (such as lower-end hardware on the SQL Server machine), a move to Access could bring some improvement.
My take on this topic is that one should think of payload in terms of pickup truck versus an 18 wheeler. Better/worse/faster/slower somewhat misses the point. It is a matter of choosing the appropriate vehicle for the payload.
70k records is easily handled by today's PCs, so one may as well stick with the pickup truck; unless an organization already has an installed skill set for SQL Server, there would be no reason to use it for an on-premise Windows application of just 70k records. Obviously, if it is a web/mobile app that requires a back-end db technology, then Access isn't a candidate.
SQL Server will always give you better performance because the query is executed on the server. Access on the back-end won't help because your client application will need to pull all the data from the tables, and then perform the join locally.
SQL Server has better indexing options... Filtered indexes, included columns, etc
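For illustration only (made-up names), both features can appear in a single index definition:

-- Filtered index on the 'open' rows that also covers the query via included columns.
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Amount)
    WHERE Status = 'Open';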
There is ZERO chance that an Access query is faster than a properly indexed SQL Server database query.
