Performance Testing in VS2010 - visual-studio-2010

I have created a performance test in VS2010 for a SharePoint web application.
I got a few indicators like Key Indicators, Page Response Time, etc.
Question 1: Please briefly explain what those graphs are and how we can assess the performance of a SharePoint site from them.
Question 2: I ran the test on my local machine and got the CPU utilization of my machine. How can I get the utilization of the server where the SharePoint site is hosted?

Question 1: Page response time is exactly what the name implies: how long the page took to respond. You'd be best off reading the user manual to get definitions and examples of the metrics that are displayed and how to get the ones you really want.
Question 2: You'll need to talk to the sysadmins who control the SharePoint server. They can either give you access to monitoring tools or give you the log data.
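If the admins will let you run a small script on the SharePoint server for the duration of the test, a minimal sketch along these lines could capture the server-side utilization (this is only an illustration, assuming the cross-platform psutil package; the file name and sampling window are arbitrary):

```python
# sample_server_utilization.py - run on the SharePoint server during the load test.
# Assumption: psutil is installed (pip install psutil); the CSV path is arbitrary.
import csv
from datetime import datetime

import psutil

with open("server_utilization.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    for _ in range(600):                      # ~10 minutes at one sample per second
        cpu = psutil.cpu_percent(interval=1)  # blocks for 1 s and averages over it
        mem = psutil.virtual_memory().percent
        writer.writerow([datetime.now().isoformat(), cpu, mem])
        f.flush()
```

Alternatively, Windows Performance Monitor (perfmon) can collect the same counters from a remote machine without any code, provided the admins grant you the necessary permissions.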

Related

Measure average web app response time from the client side during a long period of time

My company has over a hundred users of a specific CRM web application, which is provided to us as a service by another company.
The users of this application are very dissatisfied with its average response time, and I need to find a way to gather metrics over a certain period of time (let's say a week) to prove to the service provider that they are really providing a bad service.
If the application were mine, I would get the metrics from New Relic or some other equivalent monitoring service, but since it is not, I'm looking for something that could do some sort of client side monitoring.
I already checked Page Speed from Google and YSlow from Yahoo, but both are only useful when you want to test the application for a few seconds. They are not meant for the long-term monitoring I need.
Would anybody know a way to get this kind of monitoring from a client side perspective?
LoadRunner is free of charge for up to 50 users, but what you really need is not a test tool but a synthetic user monitor which runs every n minutes and pulls the stats. You can build it yourself using LoadRunner 12, JMeter, or any other HTTP sampling technology. You could also use a service like Gomez for sampling, or mPulse from SOASTA for tracking every page component across all users.
Keep in mind that your browser's developer tools will time all of the components of a request to give you page times, as will Dynatrace for the web client.
If you have access to the web server, then consider configuring the web server logs to capture the W3C time-taken field, which will track every request. Depending on the server, the granularity can be down to the millionth of a second for each and every request.
You could also look at a service like LiteSquare, which can process those web logs and provide ammunition for changes to the server to improve performance on a no-gain, no-charge model.
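As a rough idea of what can be pulled out of those logs once time-taken is enabled, here is a minimal sketch (assuming an IIS-style W3C log with the standard #Fields header and time-taken recorded in milliseconds; the file name is a placeholder):

```python
# slowest_requests.py - rough sketch: summarize the W3C time-taken field from an IIS-style log.
# Assumptions: time-taken logging is enabled, the file uses the standard #Fields header,
# and time-taken is in milliseconds (as IIS records it). The file name is made up.
from collections import defaultdict

totals = defaultdict(lambda: [0, 0.0])   # url -> [hit count, total time-taken]
fields = []

with open("u_ex230101.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]    # column names for the data lines that follow
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(fields, line.split()))
        url = row.get("cs-uri-stem", "?")
        taken = float(row.get("time-taken", 0))
        totals[url][0] += 1
        totals[url][1] += taken

# Print the ten slowest URLs by average response time.
slowest = sorted(totals.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)[:10]
for url, (hits, total) in slowest:
    print(f"{url}: {total / hits:.0f} ms average over {hits} requests")
```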
One (expensive) solution would be to use LoadRunner's endurance test feature. Check here for a demonstration.
Another tool is Oracle OATS.
JMeter is a free tool, though I'm not sure whether it's reliable enough to run for a whole week.
These are load-generation tools, so if you are testing as a single client, you should choose your load amount carefully (e.g. one user).
Last but not least, you could create your own web service client, set up a cron job to run it at your chosen times of day, and log the access time.
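A minimal sketch of that last option, assuming the requests package is available and using a made-up URL and log path (cron would call the script every few minutes):

```python
# probe.py - minimal synthetic probe: time one request and append the result to a CSV.
# Schedule with cron, e.g. "*/5 * * * * python3 /opt/probe/probe.py".
# Assumptions: the requests package is installed; the URL and file path are placeholders.
import csv
from datetime import datetime, timezone

import requests

URL = "https://crm.example.com/login"      # hypothetical page to sample
LOG = "/var/log/crm_probe.csv"

start = datetime.now(timezone.utc)
try:
    response = requests.get(URL, timeout=60)
    elapsed_ms = response.elapsed.total_seconds() * 1000   # time until response headers arrived
    status = response.status_code
except requests.RequestException as exc:
    elapsed_ms, status = -1, f"error: {exc}"

with open(LOG, "a", newline="") as f:
    csv.writer(f).writerow([start.isoformat(), status, round(elapsed_ms, 1)])
```

At one sample every five minutes, a week of monitoring yields roughly 2,000 data points, which is usually enough to show the provider a trend.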
If what you want is to get data from their servers, that is impossible without hacking into them. All you can do is monitor the website as a client using some of the tools above, make a report, and present that to them. But even then they could challenge your bandwidth, your test method, etc.
I recommend that you negotiate with them to give you their logs and to prove that their system can support a certain amount of load. If you are a customer of theirs, you can file a complaint or evaluate other offers.
Dynatrace was already mentioned in combination with load testing. Since you said that you want to monitor your live system, I want to bring Dynatrace up again. Most of the time it is used for live system monitoring to understand what end users are actually doing. It is also available as a 30-day trial, so there is no need to buy it; use it for your sanity check: http://bit.ly/dttrial

how does one identify why a website is slow? [closed]

I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic. Maybe it's a DoS attack or exceptionally high traffic. The interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought about this problem. I have almost no idea how running a server/website works, so if someone could highlight a few approaches, that would be nice.
While googling around, I could find only this relevant, wonderful article. That article is a bit too technical for me right now, but I'm slowly breaking it down and understanding it.
Since you already said that when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should figure out at this point is what the difference is between you and the user who reports that your site is slow. It might be a lot of different things:
Is the user on a slow network connection (mobile, for example)?
Does the user experience the same problems with other websites hosted at the same web hoster? If so, this could indicate a network problem. It could also indicate a resource problem on the web server, but in that case the site would be slow for you as well.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser and OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
This is a great tool to find the speed of web pages, and it tells you what makes them slow: https://developers.google.com/speed/pagespeed/insights
I think one important thing missing from the answers above is server location, which can play a vital role in web performance.
When someone says that a web page takes a long time to open, that means high latency, and high latency can be caused by server location.
Let's assume that you, the owner of the web page, are co-located with the server; then you will see low latency.
But if the client is across the border, latency increases drastically, and hence performance feels slow.
Another factor is caching, which drastically affects latency.
Taking Facebook as an example: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on disk, so it takes longer to load an older photo or post.
So the user might have complained because they were trying to load some cold data.
I can think of these few reasons (the first two are already mentioned above):
High latency due to the location of the client
The server memory might need to be increased
The number of service calls made by the page
If a service was down at the time of the complaint, it could prevent the page from loading.
The server load might have been too high at the time of the poor experience. The server might need more resources (e.g. adding another server/web server to the cluster).
Check whether any background job was running on the server at that time.
It is important to check the logs and the schedules of batch jobs to determine what was running at that time.
Hope this helps.
Normally users take the page loading time as their measure of whether the site is slow. But if you really want to know what is taking the most time, you can open the browser debugger by pressing F12. If your browser is Chrome, click on Network and see which calls your application is making and which take the most time. If you are using Firefox, you need to install Firebug; once you have it, press F12 again and click on Net.
One reason could be that the user's role is different from yours. You might have, say, an administrator privilege (something like a superuser role), and the code might simply allow everything for such a role, meaning it does not do much conditional checking of what is allowed or not. Sometimes it is a considerable overhead to fetch all the privileges of a user and perform those checks, depending of course on how the authorization is implemented. That means the page might be slow only for specific roles, so you should find out the role of the user and see whether that is the reason.
It could simply be an issue with the connection of the person visiting your site, or it's possible it was a temporary issue and by the time you checked the site yourself, everything was fine. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is usually a memory issue and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of the popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code structure of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem a rather simple answer, but the less code you run at the server and the more code you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application which represents W3C best practices and runs the minimum amount of code at the server and the maximum amount of code at the client: http://developersfound.com/W3C_MVC_EX.zip This codebase is also MVC-compliant.
The codebase comes with a MySQL database dump, PHP, and client-side code. To see the code in action you will need to restore the SQL dump to a MySQL instance (the dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), setting the user to have execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC-compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Most popular Node frameworks likewise run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, as Yii 2 does.
Two of the most important rules for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good excuse, such as SEO), and religiously never write code to run at the server if it can run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong: MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give much control over what code runs at the client or the server and are very inefficient, but you can get prototypes up and running more quickly with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional bit of optimisation advice, avoid pass-through queries and triggers and instead opt for stored procedures. Stored procedures hadn't been invented when MVC first emerged as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
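As a rough illustration of that last point (this is not from the linked codebase; the table, procedure name, and connection details are made up, and it assumes the mysql-connector-python package):

```python
# Pass-through query vs. stored procedure call - an illustrative sketch only.
# Assumes a procedure such as
#   CREATE PROCEDURE get_orders_for_customer(IN p_customer_id INT) ...
# already exists in the schema and the connecting user has EXECUTE rights on it.
import mysql.connector

conn = mysql.connector.connect(user="app", password="secret", database="shop")
cur = conn.cursor()

# Pass-through query: the full SQL text is sent, parsed, and planned on every call.
cur.execute("SELECT id, total FROM orders WHERE customer_id = %s", (42,))
rows = cur.fetchall()

# Stored procedure: only the call crosses the wire; the query logic lives in the database.
cur.callproc("get_orders_for_customer", (42,))
for result in cur.stored_results():        # mysql-connector exposes procedure result sets this way
    rows = result.fetchall()

cur.close()
conn.close()
```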
Hope this advice helps.

can I monitor performance in an Azure application after it is deployed (something like profiling)?

I have a REST API.
It offers the services get person, get price, and get route.
How can I determine how long each call to each of these services takes?
For example, get person is very fast (5 ms); get route takes 2 seconds because it needs to make a remote call to the Google API.
I could get the time at the beginning of the request and just before the response is submitted, compute the difference, and log that to a database.
But that seems like a lot of overhead, so how would you do it? Would you do it at all, or just rely on on-machine profiling? What tools would you use that minimize overhead?
What I want is to determine whether there is any component that could have low availability in production.
Thank you
So it looks like you want two things:
Minimal impact on your production environment
Figuring out how long each request takes
In that case I would go for the IIS logs. With Windows Azure Diagnostics you get these out of the box by adding the module and configuring it. As a result, your IIS logs will be stored in your storage account.
After that you can download the logs and use Log Parser to run some interesting queries which let you find the slowest pages, the pages with the most hits, the pages with the most exceptions, and so on. Log Parser can be a little hard to work with if you have never used it before; take a look at Scott Hanselman's blog post covering the Log Parser Lizard GUI tool: Analyze your Web Server Data and be empowered with LogParser and Log Parser Lizard GUI.
This powerful tool can give you all the information you need with minimal impact on your production instances.
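For comparison, the in-application timing the asker described really is just two clock reads and a log write per request. Purely as an illustration of the idea (sketched in Python as WSGI middleware; a real ASP.NET service would use an HTTP module or action filter instead, and all names here are made up):

```python
# timing_middleware.py - sketch of per-request timing as WSGI middleware (names are made up).
# It measures the time until the handler returns its response iterable and logs one line
# per request, so the overhead is negligible.
import logging
import time

logging.basicConfig(filename="request_times.log", level=logging.INFO)

class TimingMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s %s took %.1f ms",
                         environ.get("REQUEST_METHOD"),
                         environ.get("PATH_INFO"),
                         elapsed_ms)

# Usage: wrap the application object, e.g. app = TimingMiddleware(app)
```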

Very high page load times?

I have a Drupal site built on a shared host and I'm finding that the site is very slow to respond. I suspect it's the host and not my Drupal/database configuration, but I don't know how to decipher the results from Pingdom.
I have also read Explanation of Pingdom Results but am unsure of how to resolve my problems.
Pingdom results show a Load Time of 60 seconds.
Performance Grade tab shows results of all items at or near 100.
According to the Page Analysis tab, most of the time is spent in the Wait state.
Does the above indicate a problem with my hosting or perhaps my domain name provider, or is there something that I can do to improve the performance of my website?
I should also mention that I've used other tools like Google's Page Speed Chrome plugin and Firefox's YSlow plugin, and both give an above-average rating to my web pages, which leads me to believe it's an issue with my host.
Drupal has this issue of abusing database queries, especially if you use a lot of modules on one page and do not cache anything, which can slow down your site considerably. I use the Pressflow Drupal profile to reduce load times, I add Varnish to the server (you can look at Memcache too), and I add the Boost module to the site itself. But the most important thing is to get the number of queries per page load right. If you have written some custom code, optimize it. Look for ways to get the same data without sending queries to the server; maybe some data was already loaded onto the page and you do not need those queries.
In your particular case I suspect some loose loop which does not end on its own but has a safety trigger that kills it after a certain amount of time. I would bet the reason is somewhere in your custom code or in some underdeveloped module. Try enabling the display of all errors.
P.S. An example of such a page would be the best way to determine what is wrong.
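If the Pingdom numbers need a sanity check from your own machine, one quick way to separate server think time (the Wait state) from transfer time is to time the response headers and the body separately. A minimal sketch, assuming the requests package and using a placeholder URL:

```python
# wait_vs_transfer.py - rough check of where the 60 seconds go: server "wait" (time to first
# byte) versus transferring the page body. Assumes the requests package; the URL is a placeholder.
import time

import requests

URL = "https://www.example.com/"

start = time.perf_counter()
response = requests.get(URL, stream=True, timeout=120)   # returns once headers have arrived
ttfb = time.perf_counter() - start                       # roughly DNS + connect + server think time
body = response.content                                  # now pull the body and time the transfer
total = time.perf_counter() - start

print(f"status {response.status_code}: first byte after {ttfb:.2f}s, "
      f"full page ({len(body)} bytes) after {total:.2f}s")
```

A long wait followed by a fast transfer points at the host or at uncached Drupal page builds rather than at heavy page content.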

Best traffic / performance / usage monitoring module?

Are there any open source (or, I guess, commercial) packages that you can plug into your site for monitoring purposes? I'd like something that we can hook up to our ASP.NET site and use to provide reporting on things like:
performance over time
current load
page traffic
SQL performance
CPU time monitoring
Ideally in C# :)
With some sexy graphs.
Edit: I'd also be happy with a package that I can feed statistics and views of data to, and which would analyse trends, spot abnormal behaviour (e.g. "no one has logged in for the last hour, is this OK?", "high traffic levels detected", "low number of API calls detected") and generally be very useful indeed. Does such a thing exist?
At my last office we had a big screen which showed us loads and loads of performance counters over a couple of time ranges, and we could spot weird stuff happening, but the data was not stored and there was no way to report on it. It's a package for doing this that I'm after.
It should be noted that Google Analytics is not an accurate representation of web site usage. This is because the web beacon (web bug) used on the page does not always load, for these reasons:
Google Analytics servers are called by millions of pages every second and cannot always process the requests in a timely fashion.
Users often browse away from a page before it has fully loaded, so there is not enough time to load Google's web beacon and record a hit.
Google Analytics requires JavaScript, which can be disabled.
Quite a few (though not a substantial number of) people block google-analytics.com in their browsers, myself included.
The physical log files are the best 'real' representation of site usage, as they record every request. Alternatively, there are far better 'professional' packages, of which Omniture is my favourite, which have much better response times, alternative methods for recording actions, and more functionality.
If you're after things like server data, would RRDtool be something you're after?
It's not really a web server stats program though, and I have no idea how it would scale.
Edit:
I've also just found Splunk Swarm, if you're interested in something that looks "cool".
Google Analytics is free (up to 50,000 hits per month, I think), is easy to set up with just a little JavaScript snippet to insert into your header or footer, and has great detailed reports with some very nice graphs.
Google Analytics is quick to set up and provides more sexy graphs than you can shake a stick at.
http://www.google.com/analytics/
Not invented here, but it's on my to-do list to set up.
http://awstats.sourceforge.net/
#Ian
Looks like they've raised the limit. Not very surprising, it is Google after all ;)
This free version is limited to 5 million pageviews a month; however, users with an active Google AdWords account are given unlimited pageview tracking.
http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55543
http://www.serverdensity.com/
One option is to use external monitoring tools, which monitor web performance from outside the firewall by simulating end-user activities.
Catchpoint Systems has an interesting approach that requires very little coding and gives you performance stats from outside the datacenter as well as from inside ASP.NET (such as processing time).
http://www.catchpoint.com/products.html
