My friend and I are trying to figure out why we consistently see over 15 seconds of idle time on page load, as you can see in the screenshot.
We are using Joomla 3.5.1 with the Land Box theme. Any idea why the idle time on load is so large?
Here's a WebPageTest view of the same page.
There's a roughly 10-second wait until the first byte; my first suspicion is that the server is overloaded.
Then it takes close to another 10 seconds until the page finishes loading, and the reason for that is the large number of files: over 100 in total (40 JS files, 37 images, and roughly 40 CSS and other files).
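If you want to double-check the time to first byte yourself, a minimal sketch along these lines works (plain Java; the URL here is a placeholder, substitute the page you are testing):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TtfbCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; pass the real page as the first argument.
        URL url = new URL(args.length > 0 ? args[0] : "http://example.com/");

        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        try (InputStream in = conn.getInputStream()) {
            if (in.read() != -1) {                   // blocks until the first byte arrives
                long firstByte = System.nanoTime();

                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain the rest of the body */ }
                long done = System.nanoTime();

                System.out.printf("Time to first byte: %d ms%n", (firstByte - start) / 1_000_000);
                System.out.printf("Total download:     %d ms%n", (done - start) / 1_000_000);
            }
        }
    }
}
```

Note this only times the HTML document itself; the 100-odd sub-resources are what make up the rest of the load time.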
Hope this helps!
I'm load testing a system with 500 virtual users. I've set the "Ramp-Up Period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong here.
Now, the summary report shows the average response time for the first page is ~100 seconds, which is more than a minute and a half of waiting. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait anywhere near that long; the page responded almost immediately for me.
My question is: is there any known issue with the average response time of the first page? Or is it JMeter itself that is taking so long to start up that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first-page response time.
The Summary Report shows all response-time details in milliseconds; for the "100 seconds" figure, have you converted the value from milliseconds to seconds?
Also, in order to make sure that all 500 users hit concurrently, use a Synchronizing Timer.
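The Synchronizing Timer works like a barrier: it holds threads until the configured group size has arrived, then releases them all at once. Conceptually it behaves like this plain-Java sketch (an illustration of the idea only, not JMeter's implementation):

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final int users = 500;                 // the "group size" in the Synchronizing Timer
        CyclicBarrier barrier = new CyclicBarrier(users);

        for (int i = 0; i < users; i++) {
            new Thread(() -> {
                try {
                    barrier.await();           // every thread waits here...
                    fireFirstSample();         // ...then all 500 fire at (almost) the same instant
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    // Placeholder for the actual HTTP request a JMeter sampler would make.
    private static void fireFirstSample() {
        System.out.println(Thread.currentThread().getName() + " sent its request");
    }
}
```

In the actual test plan you would simply add the Synchronizing Timer as a child of the first sampler and set its group size to 500.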
Hope this will help.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connection capacity, it will spin up 500 threads of its own as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until all the threads have started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before it fires its first sample. You can then choose not to ramp over time, but you should still expect client resources to come under load suddenly, and change the settings if you hit limits. This makes the entire run much more representative of typical behaviour; however, you need to determine whether your use cases are typical.
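A minimal sketch of that idea, assuming each virtual user is a plain Java thread that sleeps a random amount before its first request (in JMeter itself, a Uniform Random Timer scoped to the first sampler is one way to get the same effect):

```java
import java.util.concurrent.ThreadLocalRandom;

public class RandomStartDelay {
    public static void main(String[] args) {
        int users = 500;
        long maxInitialDelayMs = 30_000;       // spread the first samples over up to 30 s

        for (int i = 0; i < users; i++) {
            new Thread(() -> {
                try {
                    // Random offset so the 500 first requests do not all land in the same instant.
                    Thread.sleep(ThreadLocalRandom.current().nextLong(maxInitialDelayMs));
                    fireFirstSample();
                    // ...the remaining samples of the use case would follow here...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    // Placeholder for the real request logic.
    private static void fireFirstSample() {
        System.out.println(Thread.currentThread().getName() + " started its first sample");
    }
}
```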
Although the heap size was increased, I noticed the measured time was still longer than the actual response time. Later I realised it was the probe effect (the extra time the tool itself adds because of the overhead of test execution).
My understanding of the parse.com API rate limit is that it isn't a concurrent-job limit; it's just the number of requests started in a given second. So if a user is, say, uploading a file from a slow network and it takes 30 seconds, that doesn't occupy one of my 30 req/s for the whole 30 seconds. It's just one request, counted in the second it starts.
On my team, though, is a wonderful security guy whose job it is to worry. He thinks that if 30 users each upload a file that takes 30 seconds, at a 30 req/s limit, no one else will be able to use our app until they are done.
Which one is correct?
Your understanding is correct. It's the number of requests started per second; the duration of the request does not come into play.
Source: I work at Parse.
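A tiny plain-Java model of that counting rule may make it concrete (the numbers are invented purely for illustration): 30 slow uploads all start in second 0 and each takes 30 seconds, yet only the second in which they started is charged against the limit.

```java
import java.util.HashMap;
import java.util.Map;

public class RateLimitModel {
    public static void main(String[] args) {
        int limitPerSecond = 30;
        Map<Integer, Integer> startsPerSecond = new HashMap<>();

        // 30 slow uploads, all started in second 0, each taking 30 s to finish.
        for (int i = 0; i < 30; i++) {
            startsPerSecond.merge(0, 1, Integer::sum);
        }

        // A request from another user arriving in second 1.
        startsPerSecond.merge(1, 1, Integer::sum);

        for (int second = 0; second <= 1; second++) {
            int started = startsPerSecond.getOrDefault(second, 0);
            System.out.printf("second %d: %d requests started -> %s%n",
                    second, started,
                    started <= limitPerSecond ? "within limit" : "over limit");
        }
        // Second 0 is exactly at the limit; second 1 only counts 1 request,
        // even though the 30 uploads are still in flight.
    }
}
```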
I think you are right. I've done some experiments with Parse; for example, I reloaded a UITableView 10 or 20 times in one second (can't remember exactly) for 3-4 minutes and checked the requests in the admin panel. The maximum value was always less than 30, but that doesn't really matter; the point is that you can test it this way and get more information.
Just create a test project and reload the SampleViewController.m (which contains a Parse query) 30 times in one second; afterwards you can check the data browser, which displays the traffic in req/sec.
As a second option, you can upload a bunch of images as the current user every second; since the upload takes longer than one second, you can see what happens when you start uploading a bunch of images (or other data) every second.
I'm trying to test some custom page timing code which times PHP execution time, SQL query time and the time the browser actually takes to render the page to the client.
How can I slow XAMPP down so page loads take a few seconds, so I can more easily measure timing?
I am trying to increase performance of my website.
Looking at the IE Network tab, I see:
wait: < 1 ms
start: 31 ms
request: 390 ms
response: 31 ms
gap: 472 ms
I'm especially confused about the gap. What's going on here? Is this the actual time to render the page once everything has been received? It's hard to improve performance when I don't know what each time represents.
MSDN says:
Gap: The offset value that is taken when the response has been received. The duration is the time between that start time and when the end of the last request is associated with the original HTTP request.
That does not help me at all.
It's about as clear as mud, but what it means is that the end of that particular request occurred 472 ms before the page was considered loaded. This is usually because there are resources loaded after that one, taking up the remaining time.
A simplification to illustrate it: say I have a page whose HTML loads in 5 ms and which then loads four resources sequentially, each taking 5 ms. The page is considered loaded at 25 ms, so the gap for the initial page request will be 4 × 5 = 20 ms, the next request will have a gap of 15 ms, the next 10 ms, and so on. I'm not sure how useful a metric it is, though...
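That arithmetic can be written out as a short sketch, purely as an illustration of how the gap value is derived (the timings are the made-up numbers from the example above, not real measurements):

```java
public class GapExample {
    public static void main(String[] args) {
        // Simplified timeline: the HTML finishes at 5 ms, then four resources
        // load sequentially, 5 ms each, so the page is "loaded" at 25 ms.
        long[] requestEndTimes = {5, 10, 15, 20, 25};   // ms
        long pageLoadComplete = 25;

        for (int i = 0; i < requestEndTimes.length; i++) {
            long gap = pageLoadComplete - requestEndTimes[i];
            System.out.printf("request %d: ends at %d ms, gap = %d ms%n",
                    i + 1, requestEndTimes[i], gap);
        }
        // Prints gaps of 20, 15, 10, 5 and 0 ms, matching the description above.
    }
}
```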
I have the exact same HTML sitting on two different servers. Both pages pull things like stylesheets and images from the same servers (not each from its local server). In other words, these pages are identical except that they live on two different servers. It's all static HTML. The only DNS lookups are for images.
On one server it takes 25 seconds to load, and it appears most of that is spent waiting on the HTML page itself:
http://tools.pingdom.com/fpt/#!/CmGSycTZd/http://205.158.110.184/contents/mylayout/2
On the other server it takes under 2 seconds to load:
http://tools.pingdom.com/fpt/#!/rqg73fi7V/http://socialmediaphyte.com/TEST/image-dns-testing-ImageON.html
The only difference I can identify from Pingdom is "Connection": the slow server responds with "close" and the fast server responds with "Keep-Alive". Is THAT the most likely issue, or is it possibly something else? (And if you know the remedy for your suspected cause, that would be wonderful.)
Thanks!
Not using keep-alive will slow the overall load time somewhat, because you incur the extra overhead of establishing a new TCP connection for each resource rather than re-using one or more connections. That shouldn't amount to a 23-second difference, though.
The Firebug Net panel for Firefox can be of great assistance in seeing what is taking so long. It shows you how long each resource requested by the page took to load, and how long each phase of the request took.
Other factors could include one server using gzip compression on its pages while the other is not, or the slow server could simply be overloaded.
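If you want to get a feel for how much the missing keep-alive actually costs from your location, a rough sketch like the following compares re-used connections with per-request "Connection: close" (the URL is a placeholder; substitute a small resource on the slow server):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveCompare {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; pass a real resource on the server you are testing.
        URL url = new URL(args.length > 0 ? args[0] : "http://example.com/style.css");

        System.out.printf("connection reuse (default):  %d ms%n", fetchMany(url, 20, false));
        System.out.printf("'Connection: close' per req: %d ms%n", fetchMany(url, 20, true));
    }

    private static long fetchMany(URL url, int count, boolean forceClose) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            if (forceClose) {
                // Disable keep-alive for this request, so every fetch pays for a new TCP connection.
                conn.setRequestProperty("Connection", "close");
            }
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain so the connection can be reused */ }
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

If the slow server turns out to be Apache, keep-alive is controlled by its KeepAlive configuration directive, but whether that is the whole story here depends on what else that server is doing.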