Classic ASP site runs very slow on IIS 10 - performance

I am working on a classic ASP project. I deployed it on IIS 10 (version 10.0.17134.1) on my local Windows 10 computer and it runs fine, but when I deployed it to my test server, which runs IIS 10 (version 10.0.19041.1), it became very slow. I have tried some solutions to fix it, like increasing the maximum worker processes, but it was still slow.
When I disable real-time protection (Windows Defender) it becomes fast. How can I increase its performance without disabling real-time protection?

If the database is an Access database, convert it to MS SQL Server, as Access can cause this kind of slowness on recent versions of Windows/IIS. Access is not recommended as a web backend today anyway because it has security issues.
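As a rough illustration of what the switch looks like from the classic ASP side, often only the connection string has to change (the provider names are real, but the file path, server, database and login below are assumptions, not taken from the question):

<%
' Before: Access database via the Jet OLE DB provider (hypothetical path).
Dim connStr
connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\data\shop.mdb;"

' After: SQL Server via OLE DB (hypothetical server, database and login).
connStr = "Provider=SQLOLEDB;Data Source=MYSERVER;Initial Catalog=ShopDb;User ID=shop_user;Password=change_me;"

Dim conn
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open connStr   ' the rest of the ADO code usually stays the same
%>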
If you are already using MS SQL Server with JOINed tables, find another way, because despite what some say, JOINs can be a resource killer. Run an initial query and then run a separate query based on the first results; in most circumstances this is much quicker.
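Here is a minimal classic ASP sketch of that two-step approach; the Orders/OrderItems tables, columns and connection string are hypothetical and only for illustration:

<%
Dim conn, rsOrders, rsItems, sql
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=MYSERVER;Initial Catalog=ShopDb;Integrated Security=SSPI;"

' First query: pull only the parent rows you actually need.
Set rsOrders = conn.Execute("SELECT TOP 50 OrderID, CustomerName FROM Orders ORDER BY OrderDate DESC")

Do While Not rsOrders.EOF
    Response.Write rsOrders("CustomerName") & "<br>"

    ' Second query: fetch the related rows for this parent only.
    sql = "SELECT ProductName, Qty FROM OrderItems WHERE OrderID = " & CLng(rsOrders("OrderID"))
    Set rsItems = conn.Execute(sql)
    Do While Not rsItems.EOF
        Response.Write "&nbsp;&nbsp;" & rsItems("ProductName") & " x " & rsItems("Qty") & "<br>"
        rsItems.MoveNext
    Loop
    rsItems.Close

    rsOrders.MoveNext
Loop

rsOrders.Close
conn.Close
%>

Whether this actually beats a single JOIN depends heavily on your data and indexes, so measure both before committing to one approach.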

Related

Fetching operations from NCache server are taking more time than previously

In my office we have a server that has NCache installed for storing and retrieving data, and our applications are also hosted there.
There was an issue where the application was timing out. Digging deeper, I found that the cache get calls to NCache were taking 8-9 seconds, where they previously took about 0.5 seconds. The application has not changed recently and it was working fine before; this issue appeared all of a sudden. Someone told me that at one point all the clustered caches were suddenly deleted from NCache Manager and we recovered by re-applying the basic values from the tutorial available online, but this issue never seems to get solved. Can anyone shed some light on what we can do to overcome this timeout issue?
This seems like an application/environment related issue, where a working application is now showing slow fetch times even though it was working fine previously. Also, if your console app gets results in less than a second, that again suggests the issue is not on the NCache server end but isolated to the application.
I would suggest starting by reviewing what has changed in the application. You can also profile your application to see which calls are taking more time now. The NCache client-side Windows performance counters can also be reviewed to rule out whether it is slow because of NCache or because of something application related.
Moreover, caching objects that are huge in size is generally not recommended. You should break your bigger objects into smaller objects and then cache those; this reduces network and storage overhead for your application. If you have to use bigger objects, consider using compression.
NCache default settings are already tuned for optimal performance and should not slow things down. You should also check the firewall between the client and the NCache server to rule out any environmental issues.

ODBC performance: Access 2010 and SQL Server 2008 R2

I have an Access 2010 application. Access is only the frontend; the backend is SQL Server 2008, and the connection between them is ODBC. The ODBC driver is "SQL Server" (version 6.01.7601.17514).
In Access there is a table with over 500,000 rows. Every row has 58 columns. So performance is very, very slow most of the time. Searching on a single column is not possible; Access freezes.
I know that's not a new problem...
Now my questions:
Is the driver OK? When I create an ODBC connection locally (Windows 8), I can also choose the driver "SQL Server", but there the version is 6.03.9600.17415.
Is there a difference in speed? I have the feeling that when I use Access locally under Windows 8 with the newer driver, it is faster than on the terminal server with the older driver.
Locally under Windows 8 I can also choose the driver "SQL Server Native Client 10.0" (version 2009.100.1600.01). What is the difference between these Windows 8 ODBC drivers? Which driver would you use, and why?
What about a newer SQL Server, for example 2014 vs. 2008? Is 2014 faster than 2008 over ODBC?
What about the server hardware? What if I use an SSD instead of an HDD - does an SSD make the ODBC connection faster?
All users work on the terminal servers, mainly with Office 2010, but also with proAlpha (an ERP system), and also with Access. Now one user told me that sometimes, when there are not many users on the TS, Access is much faster. What do you think? If I take one TS and work on it only with Access, and no other applications, is the ODBC connection faster then?
What else can I try?
Thank you very much.
I have noticed some performance improvements with SQL Server Native Client 10.0 (also using SQL Server 2008 with Access 2010) over the original Native Client.
I would question why you need to search/load all 500,000 rows of your table. Assuming this is in a form, it sounds a bit like poor form design. Your forms should only load the records you are interested in, not all records by default. In fact it's considered reasonably good practice not to load any records on form load, until you know what the user is looking for.
58 columns also sounds a little excessive - are there memo (varchar(max)) fields included in these columns? These should probably be moved into a separate table. Examine your data structure and see if you have normalised it correctly.
Are your fields indexed correctly? If you are searching on them, an index will considerably improve performance.
Creating views on SQL Server that return only a suitable subset of records, which can then be linked as tables within Access, can also have performance benefits.
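As a rough sketch of those last two suggestions, you can run the server-side DDL from Access through a DAO pass-through query (the table, index and view names below are assumptions, not taken from the question):

Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("")                          ' temporary, unnamed pass-through query
qdf.Connect = CurrentDb.TableDefs("dbo_tblCustomers").Connect   ' reuse the linked table's ODBC connect string
qdf.ReturnsRecords = False

' Index the column you actually search on.
qdf.SQL = "CREATE INDEX IX_tblCustomers_LastName ON dbo.tblCustomers (LastName);"
qdf.Execute dbFailOnError

' A view that returns only a suitable subset of rows and columns; link it in Access like any other table.
qdf.SQL = "CREATE VIEW dbo.vwRecentCustomers AS " & _
          "SELECT ID, LastName, FirstName, City FROM dbo.tblCustomers " & _
          "WHERE LastUpdated >= DATEADD(year, -1, GETDATE());"
qdf.Execute dbFailOnError

You could of course create the index and the view directly in SQL Server Management Studio instead; the point is simply that the filtering and indexing happen on the server, not in Access.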
A table with 500,000 rows is small – even for Access. Any search you do should give results in WELL UNDER 1 SECOND!
The best way to approach this is to ask a 90 year old lady at a bus stop the following question:
When you use an instant teller machine does it make sense to download EVERY account and THEN ask the user for the account number? Even 90 year old ladies at bus stops will tell you it would be FAR better to ASK for the account number and then download 1 record!
And when you use Google, you don't download the WHOLE internet and THEN ask the user what to search for. Nor do you build one huge massive web page and then tell the user to hit Ctrl+F to search that huge browser page.
So think about how nearly all software works: it does not download and prepare all the data locally and THEN ask you what you want to look for. You do the reverse!
So the simple solution here is to ask the user BEFORE you start pulling data from the server. Build a simple search form with an unbound text box, say txtLastName.
Then, to match the search (say on LastName), you put this code in the After Update event of that text box:
Dim strSQL As String
' Pull only the rows that match what the user typed; the trailing * means "starts with".
strSQL = "SELECT * FROM tblCustomers WHERE LastName LIKE '" & Me.txtLastName & "*'"
Me.RecordSource = strSQL
That way the form ONLY pulls the data you require - even with 10 million rows this approach will feel instant on your computer. The above uses a "*" so only the first few characters of the LastName need be typed in. The result is a form of "choices". You can then jump to or edit a single record by clicking the "glasses" button, which simply launches and opens one detail form. The code for that is:
DoCmd.OpenForm "frmCustomer", , , "id = " & Me!id
Let’s address a few more of your questions:
Is there a difference between the speed? (ODBC drivers)
No, there really is no performance difference between the drivers - they all perform about the same, and users will likely never see or notice a difference when switching drivers.
For example 2014 vs 2008. Is 2014 faster than 2008 with ODBC?
Not usually. Think of any experience you have with computers (unless you are new to computers). Every time you upgrade to a new version of Word or a new accounting program, that program is larger, takes longer to load, uses more memory, uses more disk space, and nearly always needs more processing. So over the last 30 years of desktop computing, in almost every case the next version of a piece of software requires more RAM, more disk and more CPU, and thus runs slower than the previous version (I am willing to bet that matches your own knowledge and experience). Newer versions tend not to run faster; there are a few rare exceptions in computing history, but later versions of any software tend to require more computer resources, not fewer.
Now one user told me, that sometimes, if not many users on the TS‘, Access is much faster. What do you mean?
The above has nothing to do with ODBC drivers. In this context, when you are using Terminal Server, both the database engine and the front end (Access) are running on the same computer/server. That means data transfer from SQL Server to the application is BLISTERING fast, because it occurs not at network speed but at local machine speed (database and application are on the SAME server). You could instead install Access on each computer and have Access pull data OVER the network from the server to the client workstation - this is slow because there is a network in between. With TS, the application and the server work together without a network in between; once the data is pulled and the screen rendered, only the screen output comes down the network wire. The result is thus FAR faster than running Access on each workstation.
that sometimes, if not many users on the TS‘,
Correct. Since each user's copy of Access is running on the server, there is no network between the application and SQL Server. However, because each user's application runs on the server (instead of on each workstation), more load and resources are required on the server. If many users are on the server, it now has a big workload, since it has to run SQL Server and also allocate memory and processing for each copy of Access running on it.
A traditional setup means Access runs on each computer. The memory and CPU to run Access are supplied by each workstation; the server does not have to supply Access with CPU and memory, it only runs SQL Server and services requests for data from each workstation. However, because networks are FAR slower than processing data on one computer, your bottleneck is then not processing power but the very limited network speed.
With TS, since Access, SQL Server and all processing run on the server, it is far easier to overload the resources and capacity of that server. On the other hand, the network is usually the slowest link in a computer setup, and with TS all processing and data crunching happen server side, so only the resulting screens are sent down the network wire. If the software has to process 1 million rows of data and then display ONE total, only that one total travels over the network. If you run Access locally on each workstation and process 1 million rows, then 1 million rows of data must come down the network pipe (although you can design your Access application so that SQL Server processes the data FIRST, before it comes down the pipe, to avoid this issue - see the sketch below).
With TS, since Access is not running on your computer, you do not have to worry about network traffic between Access and the user, but you MUST still worry about how much data Access grabs from SQL Server - hence the tips above about only loading the data you require into a form. So do not load huge data sets into an Access form; ask the user what they want BEFORE you start pulling that data from SQL Server.
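Here is a small sketch of that "let SQL Server do the work first" idea, again using a DAO pass-through query (the table and column names are hypothetical):

Dim qdf As DAO.QueryDef
Dim rs As DAO.Recordset
Set qdf = CurrentDb.CreateQueryDef("")                       ' temporary pass-through query
qdf.Connect = CurrentDb.TableDefs("dbo_tblOrders").Connect   ' reuse a linked table's ODBC connect string
qdf.ReturnsRecords = True

' SQL Server crunches the million rows and sends back a single total.
qdf.SQL = "SELECT SUM(Amount) AS TotalSales FROM dbo.tblOrders WHERE OrderDate >= '2014-01-01';"
Set rs = qdf.OpenRecordset()
MsgBox "Total sales: " & rs!TotalSales
rs.Close

Only one row comes back to Access (and, under TS, only one row's worth of screen output goes down the wire), instead of the whole table.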

IIS maximum worker processes

I've been researching this quite a bit and just wanted some professional opinions on it. I am working on an eCommerce site that is really slow at submitting orders. Would creating a web farm be beneficial? If not, what would help - server-wise or network-wise (load balancers, etc.)?
Assume the app is optimized as much as it can be for now and we need to look at other alternatives.
Environment:
Windows Server 2008 R2
IIS 7.5
SQL Server 2008
Ideas
IIS and database on separate server
Load balancers
Maybe we can find a cheaper solution.
If it is only the submit process that is slow, perhaps you can just improve the code and use some background workers so the main thread isn't blocked.
Otherwise, what about just upgrading your server?
Static file compression.
Create separate application pools for static and dynamic pages within the website.
Move the heavy feature onto a separate app pool and check if that helps.
Do some optimization on the SQL Server end - indexing, for example.
Other than that, I would look into the code; IIS performance is highly dependent on the pages being executed.
You may also want to check in the browser developer tools which part of the request is actually clocking the most time. That will give you a better idea of which aspect of performance you should really be concerned about.

Zend Framework Application Runtime Benchmarking

I have been trying to understand the overall performance of our application by comparing the benchmarks we get from our dev environment and from our prod environment.
Interestingly, in our dev environment, which is our local machine, we get an application run time as fast as 98 ms.
The same application runs at around 400 ms on average on our production server, which is a VPS running CentOS 5.8.
I'm assuming this increase must be because of network lag between the web server and the database server, since we don't have this gap in the dev environment, where everything is local.
We're using Doctrine 2.0 as the ORM for our application; we haven't really gotten into optimizing it with caching yet.
Is there a way to optimize this lag time? Or am I completely wrong about the cause?
Your best bet to measure the actual database and query time used is to set up a database profiler.
You can read this: Profiling Doctrine 2 with Zend Framework
Just as a note, a profiler should only be run for testing. You shouldn't run it all the time, especially if your production server is high volume. It will add some processing time, but it will give you more information about your queries and connection time.
Your assumption that it is latency between the boxes could be true, and this will verify it for you.

Session variables for classic ASP running on IIS7

It seems that a lot of people already know about this issue, but I cannot find a solution.
We transferred our web app from IIS 6 to IIS 7. For authentication purposes and some other functions we use session variables.
On IIS 6 we did not have any problems, but now all users are losing their time and patience because session variables are being lost somewhere between page submissions, and as a result users get kicked out of the app.
The server is Windows Server 2008 R2 with a 64-bit OS.
It is a default installation by Dell, so it should be running in 64-bit mode.
We do not have any third-party elements or modules. Everything was developed in-house.
The database is MS SQL 2008 as well, obviously on the same server (I know it is bad, but we are limited in resources and money).
So does anyone know what is going on and how to fix this?
The resolution to this problem turned out to be simple: do not use any port other than 80.
As soon as I moved the site to a separate IP on port 80 (using a host header on the same IP was not tested), all the problems were gone.
Have you deployed your application as a new website or as a virtual directory in IIS?
Remember that for the methods in global.asa to be executed by the server, you need to deploy your application as a new website, not just as a virtual directory under an existing website.
I think the best way forward is to add logging code that traces exactly when a session variable is lost (after a postback to the server, after a redirection, etc.) to try to narrow down the cause, for example:
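A minimal classic ASP tracing sketch along those lines; the log path and helper name are assumptions for illustration, and it assumes simple string/numeric session values. Include it at the top of each page:

<%
Sub TraceSession(pageName)
    ' Append the current SessionID and every session variable to a log file.
    ' If the SessionID changes between pages, the session itself is being dropped.
    Dim fso, ts, key, line
    line = Now() & " " & pageName & " SessionID=" & Session.SessionID
    For Each key In Session.Contents
        line = line & " | " & key & "=" & Session.Contents(key)
    Next
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\logs\session-trace.log", 8, True)   ' 8 = ForAppending, create if missing
    ts.WriteLine line
    ts.Close
End Sub

TraceSession Request.ServerVariables("SCRIPT_NAME")
%>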
Good luck.
Look here for a solution...
the intrinsic checkbox fixed my problem
