I've got an old classic ASP/SQL Server app which is constantly throwing 500 errors/timeouts even though the load is not massive. Some of the DB queries are pretty intensive, but nothing that should be causing it to fall over.
Are there any good pieces of software I can install on my server which will show precisely where the bottlenecks are, in either the ASP or the DB?
Some tools you can try:
HP (formerly Mercury) LoadRunner or Performance Center
Visual Studio Application Center Test (Enterprise Editions only?)
Microsoft Web Application Stress tool (aka WAS, aka "Homer"; predecessor to Application Center Test)
WebLoad
MS Visual Studio Analyzer if you want to trace through the application code. This can show you how long the app waits on DB calls, and what SQL was used. You can then use SQL Profiler to tune the queries.
Where is the timeout occurring? Is it on the lines where ASP is connecting to or executing SQL? If so, your problem is either with the connection to the DB server or at the DB itself. Load up SQL Profiler in MSSQL to see how long the queries take. Perhaps it is due to locks in the database.
Do you use transactions? If so, make sure they do not lock your database for a long time. Make sure you use transactions in ADO and not around the entire ASP page. You can also ignore locks in SQL SELECTs by using the WITH (NOLOCK) hint on tables.
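For example, here is a minimal sketch of a tightly scoped ADO transaction in classic ASP – the connection string, table and column names are placeholders, not from your app:
Dim conn
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open connectionString            ' connectionString assumed to be defined elsewhere
conn.BeginTrans
conn.Execute "UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 42"
conn.Execute "INSERT INTO OrderLog (OrderID, Action) VALUES (42, 'Shipped')"
conn.CommitTrans                      ' commit straight away so locks are held only briefly
conn.Close
Set conn = Nothing
A read that can tolerate dirty data could then use something like SELECT ... FROM Orders WITH (NOLOCK).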
Make sure your database is optimized with indexes.
Also make sure you are connected to the DB for as short a time as possible, i.e. (example, not working code): conn.open; set rs = conn.execute(); rs.close; conn.close. So store recordsets in a variable instead of looping through them while holding the connection to the DB open. A good way is to use the GetRows() function in ADO.
Always explicitly close ADO objects and set them to Nothing; failing to do so can cause the connection to the DB to remain open.
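A rough sketch of the pattern from the previous two tips (the connection string and query are placeholders):
Dim conn, rs, arrData
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open connectionString
Set rs = conn.Execute("SELECT CustomerID, Name FROM Customers")
If Not rs.EOF Then arrData = rs.GetRows()   ' copy the results into a local array
rs.Close
conn.Close                                  ' release the connection as soon as possible
Set rs = Nothing
Set conn = Nothing
' ...now build the page from arrData instead of looping over an open recordset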
Enable connection pooling.
Load ADO constants in global.asa if you are using them
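One way to do that is a METADATA typelib directive in global.asa (adjust the path to wherever msado15.dll lives on your server):
<!--METADATA TYPE="typelib" FILE="C:\Program Files\Common Files\System\ado\msado15.dll" -->
This makes constants such as adOpenForwardOnly and adCmdText available on every page without including adovbs.inc each time.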
Do not store any objects in session or application scopes.
Upgrade to the latest versions of ADO, MDAC, SQL Server service packs, etc.
Are you sure the server can handle the load? Maybe upgrade it? Is it on shared hosting? Maybe your app is not the problem.
It is quite simple to measure a script's performance by timing it from the first line to the last line. This way you can identify slow-running pages.
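A simple sketch of that kind of timing (writing the result as an HTML comment is just one way to surface the number):
Dim tStart
tStart = Timer()    ' very first line of the page
' ... the rest of the page ...
Response.Write "<!-- page took " & FormatNumber(Timer() - tStart, 3) & " seconds -->"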
Have you tried running SQL Server Profiler on the server? It will highlight any unexpected activity hitting the database from the app, as well as help identify badly performing queries.
If you're happy that the DB queries are necessarily intensive, then perhaps you need to set more appropriate timeouts on the pages that use those queries.
Set Server.ScriptTimeout to something larger; you may also need to set the timeout on the ADO Command objects used by the script.
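For example (the values and the procedure name are arbitrary placeholders, so tune them to your queries):
Server.ScriptTimeout = 300               ' seconds the ASP page is allowed to run
Dim cmd, rs
Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn          ' conn assumed to be an open ADODB.Connection
cmd.CommandTimeout = 120                 ' seconds ADO will wait for the query to finish
cmd.CommandText = "EXEC dbo.BigReport"   ' stand-in for one of your intensive queries
Set rs = cmd.Execute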
Here's how I'd approach it.
Look at the running tasks on the server. Which is taking up more CPU time - SQL server or IIS? Most of the time, it will be SQL server and it certainly sounds that way based on your post. It's very rare that any ASP application actually does a lot of processing on the ASP side of things as opposed to the COM or SQL sides.
Use SQL Profiler to check out all the queries hitting the database server.
Deal with the low-hanging fruit first. Generally you will have a few "problem" queries that hit the database frequently and chew up a lot of time. Deal with these. (A truism in software development is that 10% of the code chews up 90% of the execution time...)
In addition to looking at query costs with SQL Profiler and Query Analyzer/SQL Studio and doing the normal SQL performance detective work you might also want to check if your database calls are returning inordinate amounts of data to your ASP code. I've seen cases where innocuous-looking queries returned HUGE amounts of unneeded data to ASP - the classic ("select * from tablename") kind of query written by lazy/inexperienced programmers that returns 10,000 huge rows when the programmer really only needed 1 field from 1 row. The reason I mention this special case is because these sorts of queries often have low execution times and low query costs on the SQL side of things and can therefore slip under the radar.
I have an Access 2010 application. Access is only the frontend; the backend is SQL Server 2008. The connection between them is ODBC. The ODBC driver is "SQL Server" (version 6.01.7601.17514).
In Access there is a table with over 500,000 rows. Every row has 58 columns. So the performance is very, very slow most of the time. Searching on one column is not possible; Access freezes.
I know that's not a new problem...
Now my questions:
Is the driver OK? When I create an ODBC connection locally (Windows 8), I can also choose the "SQL Server" driver, but there the version is 6.03.9600.17415.
Is there a difference in speed? I've got a feeling that when I use Access locally under Win8 with the newer driver, it is faster than on the Terminal Server with the older driver.
Locally under Win8 I can also choose the driver "SQL Server Native Client 10.0" (version 2009.100.1600.01). What is the difference between those Win8 ODBC drivers? Which driver would you use, and why?
What about a newer SQL Server? For example 2014 vs. 2008: is 2014 faster than 2008 over ODBC?
What about the server hardware? What if I use an SSD instead of the HDD? Does an SSD make the ODBC connection faster?
All users work on the Terminal Servers, mainly with Office 2010, but also with proAlpha (an ERP system) and with Access. Now one user told me that sometimes, when there are not many users on the Terminal Servers, Access is much faster. What do you think? If I take one Terminal Server and work on it only with Access and no other applications, is the ODBC connection faster then?
What else can I try?
Thank you very much.
I have noticed some performance improvements with SQL Server Native Client 10.0 (also using SQL Server 2008 with Access 2010) over the original Native Client.
I would question why you need to search/load all 500,000 rows of your table. Assuming this is in a form, it sounds a bit like poor form design. All your forms should only load the records you are interested in, not all records by default. In fact, it's considered reasonably good practice not to load any records on form load, until you know what the user is looking for.
58 columns also sounds a little excessive – are there memo (varchar(max)) fields included in these columns? Those should probably be moved into a separate table. Examine your data structure and see if you have normalised it correctly.
Are your fields indexed correctly? If you are searching on them, an index will considerably improve performance.
Creating views on SQL Server that return only a suitable subset of records, which can then be linked as tables within Access, can also have performance benefits.
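As a rough illustration, once such a view exists on the server it can be linked from Access VBA like this (the DSN and view names are made up):
' link the server-side view dbo.vwActiveCustomers as a local table named vwActiveCustomers
DoCmd.TransferDatabase acLink, "ODBC Database", _
    "ODBC;DSN=MyServerDSN;Trusted_Connection=Yes", acTable, _
    "dbo.vwActiveCustomers", "vwActiveCustomers"
(You can of course also link the view through the External Data ribbon instead of code.)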
A table with 500,000 rows is small – even for Access. Any search you do should give results in WELL UNDER 1 SECOND!
The best way to approach this is to ask a 90 year old lady at a bus stop the following question:
When you use an instant teller machine does it make sense to download EVERY account and THEN ask the user for the account number? Even 90 year old ladies at bus stops will tell you it would be FAR better to ASK for the account number and then download 1 record!
And when you use Google, you don't download the WHOLE internet and THEN ask the user what to search for. Nor do you build one huge, massive web page and then tell the user to hit Ctrl+F to search that huge browser page.
So think about how nearly all software works. That software does not download and prepare all the data locally and THEN ask you what you want to look for. You do the reverse!
So the simple solution here is to ask the user BEFORE you start pulling data from the server. Build a simple search form with an unbound text box (txtLastName in the code below) for the search term.
Then, to match the search (say on LastName), you use this code in the After Update event of the text box:
Dim strSQL As String
strSQL = "select * from tblCustomers where LastName like '" & Me.txtLastName & "*'"
Me.RecordSource = strSQL
That way the form ONLY pulls the data you require – even with 10 million rows this approach will run instantly on your computer. The above uses a "*" wildcard, so only the first few characters of the LastName need to be typed in. The result is a form of "choices". You can then jump to or edit one record by clicking the "glasses" button on that row, which simply launches and opens one detail form. The code is:
DoCmd.OpenForm "frmCustomer", , , "id = " & Me!id
Let’s address a few more of your questions:
Is there a difference in speed? (ODBC drivers)
No, there is really no performance difference between the drivers – they all perform about the same, and users will likely never see or notice a difference in performance when using different drivers.
For example 2014 vs. 2008: is 2014 faster than 2008 over ODBC?
Not usually. Think of ANY experience you have with computers (unless you are new to computers). Every time you upgrade to a new Word or a new accounting program, that program is larger, takes longer to load, uses more memory, uses more disk space, and nearly always uses more processing. So given the last 30 years of desktop computers, in almost EVERY case the next version of a piece of software requires more RAM, more disk and more processing, and thus runs slower than the previous version (I am willing to bet that matches YOUR knowledge and experience). So newer versions tend not to run faster – there are a few rare exceptions in computer history, but later versions of any software tend to require more computer resources, not less.
Now one user told me that sometimes, when there are not many users on the Terminal Servers, Access is much faster. What do you think?
The above has nothing to do with ODBC drivers. In this context, when you are using Terminal Server, both the database server and the front end (Access) are running on the same computer/server. What this means is that data transfer from the server to the application is BLISTERING fast and occurs not at network speed, but at computer speed (since both database and application are running on the SAME server). You could install Access on each computer and then have Access pull data OVER the network from the server to the client workstation – this is slow, since there is a network in between. With TS, the application and server run very fast without a network in between. The massive processing power and speed of the application and server can work together – once the data is pulled and the screen rendered, only the screen data comes down the network wire. The result is thus FAR FASTER than running Access on each workstation.
that sometimes, when there are not many users on the Terminal Servers,
Correct – since the user's application is running on the server, no network exists between the application and the SQL server. However, since each user has their application running on the server (as opposed to on each workstation computer), more load and resources are required on the server. If many users are using the server, the server now has a big workload, since it has to run both SQL Server and also allocate memory and processing for each copy of Access running on that server.
A traditional setup means that Access runs on each computer. So the memory and CPU to run Access are used on each workstation – the server does not have to supply Access with CPU and memory; the server ONLY runs SQL Server and facilitates requests for data from each workstation. However, because networks are FAR slower than processing data on one computer, your bottleneck is not processing but the VERY limited network speed.
Since both Access and SQL Server and all processing occur on the server, it is far easier to overload the resources and capacity of that server. However, the speed of the network is usually the slowest link in a computer setup. Since all processing and data crunching occurs server side, only the RESULTING screens and display are sent down the network wire. If the software has to process 1 million rows of data and then display ONE total result, then only that 1 total result comes down the network wire to be displayed. If you run Access locally on each workstation and process 1 million rows, then 1 million rows of data must come down the network pipe (however, you can modify your Access design to have SQL Server FIRST process the data before it comes down the network pipe, to avoid this issue).
With TS, since Access is not running on your computer, you don't worry about network traffic – but you MUST STILL worry about how much data Access grabs from SQL Server – hence the above tips about ONLY loading the data you require into a form. So don't load huge data sets into the Access form; simply ask the user BEFORE you start pulling that data from SQL Server.
We delivered a successful project a few days back, and now we need to make some performance improvements in our WCF RESTful API.
The project is using the following tools/technologies:
1- LINQ
2- Entity Framework
3- Enterprise library for Logging/Exception handling
4- MS SQL 2008
5- Deployed on IIS 7
A few things to note
1- 10-20 queries have more than 7 table joins in LINQ
2- The current IIS has more than 10 applications deployed
3- The entity framework has around 60 tables
4- The WCF api is using HTTPS
5- All the API calls return JSON responses
The general flow is
1- WCF call is received
2- Session is checked
3- Function from BL layer is called
4- Function from DA layer is called
5- Response returned in JSON
Currently, based on my limited knowledge and research, I think the following might improve performance:
1- Implement caching for reference data
2- Move LINQ queries with more than 3 joins to stored procedure (and use hints maybe?)
3- Database table re-indexing
4- Use performance counters to find the problem areas
5- Move functions with more than 3 update/delete/inserts to stored procedure
Can you point out any issues with the above improvements? And what other improvements can I make?
Your post is missing some background on your improvement suggestions. Are they just guesses or have you actually measured and identified them as problem areas?
There really is no substitute for proper performance monitoring and profiling to determine which area you should focus on for optimizations. Everything else is just guesswork, and although some things might be obvious, it's often the not-so-obvious things that actually improve performance.
Run your code through a performance profiling tool to quickly identify problem areas inside the actual application. If you don't have access to the Visual Studio Performance Analyzer (Visual Studio Premium or Ultimate), take a look at PerfView which is a pretty good memory/CPU profiler that won't cost you anything.
Use a tool such as MiniProfiler to be able to easily set up measuring points, as well as monitoring the Entity Framework execution at runtime. MiniProfiler can also be configured to save the results to a database which is handy when you don't have a UI.
Analyzing the generated T-SQL statements from the Entity Framework, which can be seen in MiniProfiler, should allow you to easily measure the query performance by looking at the SQL execution plans as well as fetching the SQL IO statistics. That should give you a good overview of what can/should be put into stored procedures and if you need any other indexes.
I am currently investigating an ASP.NET MVC web application which is reported to have poor performance under load (but the load is only a few requests per second).
We are using MySQL + NHibernate + Castle ActiveRecord for the mapping. An NHibernate session is opened at the beginning of every request and kept open in the view (open session in view).
I already optimized the data access pattern to avoid Select N+1 problems where possible.
Now what I'm thinking about is this: on each request a database transaction is opened and committed at the end, and in 99% of our requests (MVC actions) no data has to be written to the database.
Is it possible and do you see benefit in closing sessions/transactions earlier or even mark sessions as read-only?
Could database locking be a bottleneck and if so is it possible to explicitly avoid locking at least for the read-only transactions?
You should verify that your application is not loading huge amounts of data from the DB. Even with all select N+1 problems resolved, you can still load millions of records, and that is going to be very slow.
Verify your pages with NHibernate Profiler. It will come up with optimization suggestions. If it doesn't, NHibernate is probably not your bottleneck.
If you only have a few requests per second, then the overhead of opening transactions is not the reason for the poor performance. Try letting NHibernate log all the SQL that is sent to the server. This can give you some idea of why the thing is slow. Perhaps it is sending a huge number of queries for each HTTP request, or else some well-chosen indexes on your tables could help you.
I am now writing a report about MS Access and I can't find any information about its performance compared to alternatives such as Microsoft SQL Server, MySQL, Oracle, etc. It's obvious that MS Access is going to be the slowest among them, but there are no solid documents confirming this other than forum threads, and I don't have the time and resources to do the research myself.
Access isn't always the slowest. For fairly simple queries with one user, it is actually quite fast.
But throw a few extra users in there, or use complex joins, and it will fall apart on you.
Here is what I could find quickly:
http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/
http://www.linuxtoday.com/news_story.php3?ltsn=2001-07-27-006-20-RV-SW
http://swik.net/MySQL/MySQL+vs+MS+SQL+Server
Oddly enough, few people compare access to the "real" databases since the user limit is such a limiting factor.
Here is Microsoft's reasons to upgrade to SQL Server from Access:
http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I've actually seen an Access address book of over 3 million records performing very, very fast while being used by hundreds of users. This is, however, an exception. Access databases decrease in performance and stability as soon as the database is modified while in use, and especially if it is modified by more than a couple of users.
Yes, Access (Jet) is slow. It's best to move to Access Data Projects – this allows you to use Access forms and reports with SQL Server, a REAL database.
SQL Server is the most popular database anywhere – it has been installed more often than any other database, period.
I have an application that talks to several internal and external sources using SOAP, REST services or just using database stored procedures. Obviously, performance and stability is a major issue that I am dealing with. Even when the endpoints are performing at their best, for large sets of data, I easily see calls that take 10s of seconds.
So, I am trying to improve the performance of my application by prefetching the data and storing locally - so that at least the read operations are fast.
While my application is the major consumer and producer of the data, some of the data can also change from outside my application, which I have no control over. If I use caching, I would never know when to invalidate the cache when such data changes from outside my application.
So I think my only option is to have a job scheduler running that consistently updates the database. I could prioritize the users based on how often they login and use the application.
I am talking about 50 thousand users, and at least 10 endpoints that are terribly slow and can sometimes take a minute for a single call. Would something like Quartz give me the scale I need? And how would I get around the scheduler becoming a single point of failure?
I am just looking for something that doesn't require high maintenance and speeds up at least some of the less complicated subsystems – if not most. Any suggestions?
This does sound like you might need a data warehouse. You would update the data warehouse from the various sources, on whatever schedule was necessary. However, all the read-only transactions would come from the data warehouse, and would not require immediate calls to the various external sources.
This assumes you don't need realtime access to the most up to date data. Even if you needed data accurate to within the past hour from a particular source, that only means you would need to update from that source every hour.
You haven't said what platforms you're using. If you were using SQL Server 2005 or later, I would recommend SQL Server Integration Services (SSIS) for updating the data warehouse. It's made for just this sort of thing.
Of course, depending on your platform choices, there may be alternatives that are more appropriate.
Here are some resources on SSIS and data warehouses. I know you've stated you will not be using Microsoft products. I include these links as a point of reference: these are the products I was talking about above.
SSIS Overview
Typical Uses of Integration Services
SSIS Documentation Portal
Best Practices for Data Warehousing with SQL Server 2008