Do I have a memory leak in my website? [closed] - asp.net-mvc-3

I have a website built with ASP.NET MVC 3 and Entity Framework 4.1. The image below shows my server's perfmon logs.
My problem is that w3wp.exe's memory usage grows every minute and is never released. I'm using LINQ to Entities for my queries, and all my Entity Framework code is inside using blocks.
I think it's a garbage collection problem, but I'm not sure. What is my problem and how can I fix it?

It all depends on what your web site is doing. We have .NET servers running with 32 GB of RAM, and the worker process gladly takes all it wants. It really does need it.
Are you running a lot of background threads, or storing an exorbitant amount of data in the session, global application state, or static fields?
Are connections, readers, file I/O handles, etc. being closed properly?
.NET garbage collection works great, but you have to do your part. Garbage collectors (waste disposal engineers) are not going to go into your house and collect the trash for you; you have to at least walk it to the curb... or close/null/dispose of the objects you're not using.
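For the Entity Framework case specifically, "doing your part" usually looks like the pattern below. This is a minimal sketch, not the asker's actual code - the context and query here are hypothetical:

using System.Linq;

public class SalesService
{
    public decimal GetTotalSales()
    {
        // The using block guarantees Dispose() runs, which releases the
        // underlying database connection deterministically.
        using (var db = new ShopContext()) // hypothetical EF 4.1 DbContext
        {
            // Materialise results with ToList() before the context is
            // disposed; returning a lazy IQueryable from inside a using
            // block is a common way to keep contexts alive by accident.
            return db.Orders.Select(o => o.Total).ToList().Sum();
        }
    }
}

If all of that is already in place, the growth you're seeing may simply be the buffering behaviour described in the update below.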
Update 1:
What is happening is that the ASP.NET worker process builds up a buffer. When it hits a certain amount of memory used, it will decrease over time, but it likes to hold on to allocated memory so it doesn't have to go out of its way to fetch more when it needs it.

You should run a profiler to see which processes and threads are getting out of hand. Visual Studio 2010 has some nice profiling tools, and I'm sure there are third-party ones as well.
It very well could be poor coding from the developer, and garbage collection might not be doing its job, as Ryan mentioned above - especially if your application uses multiple threads.


How to use MiniProfiler to help us crush loading speeds?

As a developer and constant user of MiniProfiler, I use Stack Overflow as the benchmark for my .NET sites, because the entire Stack Exchange network is just blazingly fast.
I know MiniProfiler is used on Stack Exchange. There is a whole developer view that can be used on the Stack sites, but can we enable the stats to see how fast it really is?
I might be a bit over-obsessive here, but I am looking to improve performance in milliseconds, and the only viable benchmark is a large and complex site like Stack Exchange.
I know it might be a security issue to see live data, but I just really want a benchmark (screenshot / guidelines) to see how far I can optimize my .NET MVC web application.
My actual IIS and MVC performance is fantastic, and I think I am more concerned about server replies and client-side stuff. So can I (and should I) put more effort into smashing down this response time?
This site is hosted in an Azure Cloudapp and uses Azure DB - I know about 60~180 ms goes to connection times that are out of my control.
How can I improve times between Paint, Load and Complete?
I find that I answer my own questions on Stack Exchange more often nowadays. Not sure what that means. But it is interesting what I found while dealing with other Q&As (and it answered this question):
"Yes, you should avoid the obvious beginner mistakes of string concatenation, the stuff every programmer learns their first year on the job. But after that, you should be more worried about the maintainability and readability of your code than its performance. And that is perhaps the most tragic thing about letting yourself get sucked into micro-optimization theater -- it distracts you from your real goal: writing better code."
Posted by Jeff Atwood
There is no real problem in performance, and no serious delays. It's just an obsession that won't lead to much satisfaction.
The dude's got a point. As long as my code is readable and it runs fast, what the heck more do I want?
PERFECTION! - Waste of time, lol#me!
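For what it's worth, getting MiniProfiler's per-request timings on your own MVC site takes only a few lines. A minimal sketch of the classic Global.asax wiring of that era (the local-only guard is just one sensible choice, not a requirement):

using StackExchange.Profiling;

protected void Application_BeginRequest()
{
    // Profile only local requests so ordinary visitors never see the widget.
    if (Request.IsLocal)
        MiniProfiler.Start();
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop();
}

And in _Layout.cshtml, just before the closing body tag:

@StackExchange.Profiling.MiniProfiler.RenderIncludes()

That gives you the same kind of per-page timing breakdown on your own site, which is a far more useful benchmark than someone else's numbers.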

Memory Leak Issue in Windows Phone Development - Silverlight Framework

I am creating a game for Windows Phone using C# and the Silverlight platform. I am new to this technology and am currently facing a memory leak issue.
Based on the research and study I have done, I have tried everything, including detaching events, string handling, and use of the garbage collector.
Can anyone please give common tips on how best to use the garbage collector and manage memory, since that seems to be the issue right now? When my garbage collector reaches a size of 5 lakh (500,000), it stops collecting new things and the application crashes.
I also tried emptying the garbage collector by passing 0 to GC.Collect, but that crashes the app too.
Can you please guide me on the basic things to take care of, the process to follow to avoid such issues, and the best use of GC.Collect?
Thanks in advance,
Jacob
In general, you should never have to call GC.Collect yourself as unused objects will be automatically collected every few seconds.
As for what can prevent objects from being collected, it comes down to them being "rooted". Roots include:
Any static references
Any references held by the run loop (your Application is the closest thing here)
Anything being displayed on the current page or any page behind it
Anything referenced by any of the above (including UI events), or referenced by anything that is referenced by any of the above (etc).
In the above scenarios, those objects, and any objects they hold a reference to, cannot be GC'd.
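The first root type is the classic trap: a page that subscribes to a static event stays reachable for the lifetime of the app. A minimal hypothetical sketch (all names made up):

using System;
using Microsoft.Phone.Controls;

public static class GameEvents
{
    // A static event is a GC root.
    public static event EventHandler ScoreChanged;
}

public partial class ScorePage : PhoneApplicationPage
{
    public ScorePage()
    {
        InitializeComponent();
        // Subscribing hands the static event a reference to this page;
        // until the handler is detached (-=), the page and everything
        // it displays can never be collected.
        GameEvents.ScoreChanged += OnScoreChanged;
    }

    private void OnScoreChanged(object sender, EventArgs e) { /* ... */ }
}

So as for advice: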
Avoid defining anything as static
Be careful how many objects are held by Application
Avoid a navigation model that allows your back stack to grow to unlimited levels
Potentially look at setting references to large data sets to null in your page/viewmodel's OnNavigatedFrom method and re-initialise them in OnNavigatedTo (see the sketch below)
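A minimal sketch of that last point, assuming a hypothetical page, data set, and loader (the OnNavigatedTo/OnNavigatedFrom overrides themselves are the standard PhoneApplicationPage hooks):

using System.Collections.Generic;
using System.Windows.Navigation;
using Microsoft.Phone.Controls;

public partial class GamePage : PhoneApplicationPage
{
    private List<LevelData> _levels; // hypothetical large data set

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);
        // Re-load the data whenever the user arrives at this page.
        if (_levels == null)
            _levels = LevelLoader.LoadAll(); // hypothetical loader
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);
        // Drop the reference so the GC can reclaim the memory while
        // this page sits in the back stack.
        _levels = null;
    }
}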
I'd recommend using the Windows Phone Profiler, which comes with the 7.1 SDK. It will tell you what objects are in memory and why.
Without seeing any of your code, it is difficult to give specific advice.
However, I strongly suggest you run a memory profiling tool like ANTS Memory Profiler or .NET Memory Profiler. These tools will show you which objects are never released and are very helpful in making the adjustments that you need.

Measuring Web Application Performance (Stress-Testing) and Bandwidth Requirements [closed]

I am in the process of measuring a Web application's bandwidth requirements and how it behaves in terms of response time and memory requirements as the number of users increases.
Is there a particularly good tool that can help us here? I believe JMeter is the standard tool, but are there other tools, considering that the site is IE-only?
Your answers are greatly appreciated.
Well, the question is how do you want to profile? Do you want to simulate real-world activity? Or do you just want to bombard the heck out of the site?
Load Testing Individual Pages
I don't think you can go wrong with using Apache Bench (ab). It's dirt simple to use, and can really stress your application. My typical usage is:
ab -c 10 -n 1000 http://www.example.com/path/to/page
The -c parameter is the number of simultaneous requests to issue. I would suggest starting low (like 5 to 10) and working your way up. Watch the output for failed requests and a falling response rate. You're limited to about 1000 connections on most Linux machines, so don't go too crazy.
The -n parameter is how many requests to issue. I would suggest doing at least 100 times the number of concurrent requests to get a good average...
Another great use for apache bench is to benchmark individual database queries. Just create a simple script that runs the query, and load away. This can be a really good way to detect fast but expensive queries that will take your server down in production yet seem fine in testing.
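For instance, in an ASP.NET MVC app you can expose a single query behind a throwaway action and point ab at it. A minimal sketch - the controller, context, and query here are all hypothetical:

using System.Linq;
using System.Web.Mvc;

public class BenchController : Controller
{
    // Wraps one expensive query so it can be hammered in isolation.
    public ActionResult SlowQuery()
    {
        using (var db = new OrdersContext()) // hypothetical EF context
        {
            int count = db.Orders.Count(o => o.Total > 100m);
            return Content(count.ToString());
        }
    }
}

Then hit it with something like ab -c 10 -n 1000 http://localhost/bench/slowquery and compare response times across queries.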
Load Testing The Whole Application
I've had good luck with WebLoad. There's an open-source version that will get you started if you don't have a big budget, but I'd suggest springing for the pro version. With it, you can set up a distributed test environment (as simple as installing the client on every machine in the office, or as complex as spinning up a bunch of VMs for it).
The cool thing is that you can program it in JavaScript, so you can tell it to take random click paths through the site with random delays. This should simulate a user far better than you could manually. Then, once you have it set up, push the tests to the distributed engine and hit go.
It supports many different load profiles (stair-step, where it adds load little by little for the duration of the test, etc.), so you can simulate a slashdot-effect profile, normal day-to-day usage, and so on.
The reports it generates are immensely useful. They show you the slow URLs, where the bottlenecks are, etc.
There are plenty of other test platforms and systems out there. This was just one that I felt worked pretty well at the time (I did a comparison about 2 to 3 years ago). I am not affiliated with the company in any way.
Load Testing parts of the application
This is a really useful technique called profiling. The how-to and tools are fairly language-specific, so I won't go into too much detail here (since you don't have a language tag on your question). But the point is that once you find a slow page, you'll need to profile it to figure out what's slowing it down, then fix the low-hanging fruit (the parts that are slowest), then re-test to see whether you made a difference...
Conclusion
Since it's almost impossible to simulate real-world load, this is really more of an art than a science. Have at it, and have fun. Don't take the results too seriously though; even with the best testing, you're likely to miss something... So I wouldn't take them as gospel and go telling the CEO that you tested that it's capable of 100k concurrent users. Because when the day comes that it crashes (and if you are unlucky, it will crash), he will blame you, since you told him it would work...
Just a thought: you say IE only, so is it hosted on IIS? If so, you might want to look at Microsoft's WCAT (Web Capacity Analysis Tool); more information is available here:
http://support.microsoft.com/kb/231282
It isn't open source, but it is free - and do you really need the source?

Best practices and literature for web application load testing [closed]

As a web developer I've been asked (a couple of times in my career) about the performance of sites that we've built.
Sometimes you'll get semi-vague questions like "will the site continue to perform well, even during product launch week?", "can the site handle a million users?", and even "how is the site doing?"
Of course, these questions are very legitimate, and I have always tried to answer them to the best of my ability, using a combination of:
historic data (google analytics / IIS logs)
web load test tools
server performance counters
experience
gut feeling
common sense
a little help from our sysadmins
my personal understanding of the software architecture in question
I have usually been able to come up with reasonable answers to these questions.
However, web app performance can be influenced by many things (database dependencies, caching strategies, concurrency issues, user behaviour, etcetera).
I'm a programmer, not a statistician, and my approach to this problem has always felt deeply unscientific. So I did a little more research... and all of my Google results seem to focus on tools and features and metrics (and MORE metrics), when I am really looking for a way to make sense of these things.
The question:
What are some good resources (books?) on best practices in web load testing that will help a developer answer these types of questions?
First, your question proves you understand the problem. It can sometimes be tricky enough to create the tools, scripts, etc. to generate the load, but the real challenge lies in evaluating the results and knowing what to monitor.
A very easy answer to your question could be: generate load on a production-like environment that is similar to current or expected usage. If it runs OK without any crashes or slow performance, that is usually good enough. After that, increase the load to see where your limits are.
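If you want to script that ramp-up yourself rather than reach for a tool, a deliberately naive sketch looks something like this (the URL and step sizes are hypothetical; real load tools do all of this far better):

using System;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

class NaiveRampUp
{
    static void Main()
    {
        // .NET throttles connections per host to 2 by default.
        ServicePointManager.DefaultConnectionLimit = 100;
        var url = "http://localhost/mysite/"; // hypothetical target

        // Step the concurrency up until errors appear or times degrade.
        foreach (var concurrency in new[] { 5, 10, 25, 50 })
        {
            var sw = Stopwatch.StartNew();
            var tasks = Enumerable.Range(0, concurrency)
                .Select(_ => Task.Run(() =>
                {
                    using (var client = new WebClient())
                        client.DownloadString(url);
                }))
                .ToArray();
            Task.WaitAll(tasks);
            Console.WriteLine("{0} concurrent requests: {1} ms total",
                concurrency, sw.ElapsedMilliseconds);
        }
    }
}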
When you reach your limit, my experience is that it becomes purely a project budget question: will we invest more time/money/resources etc. to evaluate the cause?
I work as a test professional, and I do recommend treating load testing as a vital part of the development process, but unfortunately that is not always in line with what management decides.
So the answer to your question is that almost everyone needs to be involved in this process:
developers to monitor their code; system admins to monitor CPU, memory usage, etc.; DBAs; networking guys; and so on. They all probably need their own sources of knowledge to be able to get all this info recorded and analysed.
A few book tips:
The Art of Application Performance Testing: Help for Programmers and Quality Assurance
http://www.amazon.com/exec/obidos/ASIN/0596520662/
The Art of Capacity Planning: Scaling Web Resources
http://www.amazon.com/exec/obidos/ASIN/0596518579/
Performance Testing Guidance for Web Applications
http://www.amazon.com/exec/obidos/ASIN/0735625700/
Have you seen:
Performance Testing Guidance for Web Applications by J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
It's even available on the web for free.
http://msdn.microsoft.com/en-us/library/bb924375.aspx
You could emulate typical user behaviour and use one of the cloud services to simulate a huge number of users hitting the website, to see how well it copes. I heard Amazon's service is decent.
I can recommend two books published in 2010:
The first is "ASP.NET Site Performance Secrets" by Matt Perdeck, published in late fall 2010. It is written more from a performance-optimization standpoint, but also has detailed material on load testing. It is a free PDF eBook.
The second is ".NET Performance Testing and Optimization - The Complete Guide" by Paul Glavich and Chris Farrell. It is a pretty complete source on performance / load testing.

Axosoft OnTime vs Countersoft Gemini [closed]

We are "upgrading" the systems at the company, moving from SourceSafe/BugNet/... (yeahy!) to some more serious systems. TFS is too expensive. We have come down to comparing OnTime vs Gemini. They both seem OK with an "OK" price-tag. We will of-course download and try them out both, but it would be nice with comments from experienced users. To me, they seem quite equal.
Has anyone used both, and can compare the two against each-other?
If you would recommend one of these, which one, and why?
Any other experiences with these systems? (Especially Gemini, seems hard to find reviews regarding this-one..?)
(We are talking about a smaller dev-team, max 8 dev in a project at a time, a couple of testers and some stakeholders/managers etc... Several projects running simultaneously. Need to be able to integrate to Visual Studio, Subversion with feed-back to the issue tracker etc)
Thanks for your time!
We use OnTime here - it has a good workflow environment, but be warned that it does not scale well at all. We currently have 35 licenses with only 10 to 15 users online at any one time, and it struggles.
Also be careful of the sales pitch for using the web or "remote" servers for distributed environments - it works fine with the demo/eval database but slows to a crawl once you get a decent number of items in the database. All you have to do is look at a SQL profiler and you'll see the number of calls made to the DB.
If you profile the web services, you'll also see that the web and remote environments have not been optimized at all to batch calls, so as soon as you move to an environment with any kind of communication latency, it crawls.
Axosoft's support has been less than helpful on this - strangely, they do not view these as bugs and instead see this as something we should expect in these environments. We have contacted their support about a number of other things too, and it is surprising how poor it has been across the board.
Axosoft's excuse is that we should have found these things out during the eval period, but I don't know how they expected us to scale the data to production levels within a 30-day eval...
We have been forced to revert to using the WinForm client over Citrix for our distributed teams.
Overall - it is a nice application if you have a small team in a single location. But if you have larger teams or people spread out in multiple locations I would avoid it at all costs.
Gemini is great; we selected Gemini over many other bug-tracking systems.
The main features we liked:
user interface
extensibility (APIs, REST-based)
add-on products (Visual Studio and Outlook plugins)
source control integration (Subversion)
source code available (ASP.NET C#), easy to set up, and great support.
For a broader comparison of the Gemini bug tracker with others in the same space, Wikipedia's Bug tracking comparison page might be of use, although I don't know of a direct, in-depth Gemini / OnTime comparison.
Take a look at Project Kaiser (a short demo is available). It is fast, web-based, and supports embedded wikis, forums, and chats. And it is free for 5 users :)
