I am developing an Angular 4 application. One of the HTTP requests returns a large amount of data, approximately 15 MB. I have noticed that downloading it takes more than 20 seconds on IE11 (sometimes more than 35 seconds), whereas on Chrome it happens in under 500 ms.
I copied the HTTP response to local .json files and tried pointing my code at those.
The result is the same.
Please see attached screenshots.
Moreover, if I debug with Fiddler, I see that Fiddler receives the response nearly 20 seconds before the code in IE is invoked: the alert "mapper invoked" is shown 20 seconds after ClientDoneResponse in Fiddler. See below.
Any idea why IE11 takes so long when Chrome finishes in a fraction of a second?
When a site receives high traffic, the tab in Mozilla Firefox stops loading and displays the error message "PR_CONNECT_RESET_ERROR", as shown in the screenshot.
What I want is for the tab to keep loading until it receives the response from the site's server, without displaying this error; alternatively, I want to increase the time (or number of attempts) for which Firefox keeps trying to connect to that site's server by default.
I don't know whether this is possible, but what I tried is changing some of the settings in about:config. I increased network.http.keep-alive.timeout from 120 to 215 and network.http.max-persistent-connections-per-server from 6 to 20, but I didn't notice any change.
Is there a better way to achieve this?
Note: I make approximately 30-40 requests to the same URL (www.example.com) from different tabs with different sessions.
I have a scenario wherein I have to export values (the details of a group of people) from a webpage to my desktop, in either CSV or PDF format. The response was successful when I recorded this scenario in JMeter. When I added the recorded samplers to the thread group and ran them, I received a successful response with a 302 response code, but the sample time is very low compared with the F12 time (captured manually using the F12 developer tools).
It was a POST request when I recorded it. In the results it shows 3 different child samples: 1 POST and 2 GET requests, plus 1 additional request with a blank body.
Below is the structure:
1 /WebPages/Common/abc.aspx?mhsghgsjfgjsdg
- child1 (POST request)
- child2 (GET request)
- child3 (GET request)
1 /WebPages/Common/abc.aspx?mhsghgsjfgjsdg (blank request)
It's a C# application. Even for some other requests I am getting a 302 response code with the correct sample time, so I have no issues with those samples.
Could someone help me find out what could be causing the incorrect sample time, and how I can resolve it to get the correct sample time? I'd appreciate any input or resolution.
Most probably you are not handling the so-called "embedded resources". Almost every HTML page contains some images, styles, scripts, fonts, etc.
In the "Advanced" tab of the HTTP Request Defaults configuration element tick:
Retrieve All Embedded Resources
Parallel Downloads
This way you "tell" all JMeter HTTP Request samplers to download images, scripts, styles, etc. as browsers do, so you should start getting comparable response times.
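If you prefer to edit the .jmx test plan directly, the same two options live on the HTTP Request Defaults element. A minimal fragment, with the property names as I recall them from JMeter's file format (double-check against a plan saved from the GUI):

    <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
      <!-- Retrieve All Embedded Resources -->
      <boolProp name="HTTPSampler.image_parser">true</boolProp>
      <!-- Parallel Downloads, with a browser-like pool of 6 connections -->
      <boolProp name="HTTPSampler.concurrentDwn">true</boolProp>
      <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
    </ConfigTestElement>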
For more information on tuning JMeter so your test is more realistic, check out the How to make JMeter behave more like a real browser guide.
I have a question about Varnish serving expired "graced" items. Suppose the following scenario:
My backend takes 5 seconds to generate index.php
I set beresp.ttl to 1 minute
I set beresp.grace to 1 hour
When the first client fetches index.php, he will wait 5 seconds: because there is no cached index.php object, the client must wait until the backend server generates the content.
For the following minute, subsequent clients will not wait at all for index.php; the cached version will be served.
After the minute passes, the next client will again wait 5 seconds. (All other requests during this 5-second window will get the cached content, thanks to the 1-hour grace period.)
Rather than letting that client wait 5 seconds while the content is generated, is it possible for Varnish to serve the expired (graced) index.php while it fetches the new content? That way index.php would always be refreshed every minute without ever making clients wait.
Update
I found this: http://lassekarstensen.wordpress.com/2012/10/11/varnish-trick-serve-stale-content-while-refetching/
Seems a bit ugly to me though.
As far as I know this isn't possible in the current stable version, but Varnish 4 will support background fetches. You can find more information about Varnish 4 in the keynote slides of VUG8.
You seem to be right, Arjan.
From: https://www.varnish-cache.org/releases/varnish-cache-4.0.0-tp1
Full streaming support, including asynchronous backend fetches. This enables Varnish to serve stale objects while it is fetching or revalidating objects from the backend.
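For completeness, here is roughly what that should look like once Varnish 4 is out: a minimal VCL 4.0 sketch using the numbers from the question (the backend address is a placeholder):

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";    # placeholder backend
        .port = "8080";
    }

    sub vcl_backend_response {
        set beresp.ttl = 1m;     # serve fresh for one minute...
        set beresp.grace = 1h;   # ...then serve stale for up to an hour
                                 # while a background fetch refreshes it
    }

Within the grace window Varnish delivers the stale object immediately and revalidates asynchronously, so only the very first client ever waits the 5 seconds.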
I'm currently using Team Foundation Server and WSS 3.0 as the Team Portal. After installing and configuring it, I noticed the application was sometimes very slow, taking minutes to load a page. I googled it and found numerous solutions; none solved my problem.
Using Firebug I noticed I was getting a lot of 401 errors, mostly in the _layouts and _themes folders.
Error image: http://i.stack.imgur.com/SmurI.jpg
Authentication method is NTLM
Any clue on what's happening? The page loads, it just takes forever before showing up.
EDIT: Here are the Fiddler statistics:
Request Count: 161
Bytes Sent: 144.851 (headers:133249; body:11602)
Bytes Received: 400.222 (headers:69769; body:330453)
ACTUAL PERFORMANCE
Requests started at: 09:47:55.449
Responses completed at: 09:50:03.311
Aggregate Session time: 00:03:11.542
Sequence (clock) time: 00:02:07.8627850
TCP/IP Connect time: 239ms
RESPONSE CODES
HTTP/401: 84
HTTP/200: 74
HTTP/302: 2
HTTP/404: 1
RESPONSE BYTES (by Content-Type)
application/x-javascript: 218.048
~headers~: 69.769
text/html: 37.837
image/gif: 31.467
text/css: 27.506
image/png: 10.133
image/jpeg: 3.937
text/javascript: 1.007
text/xml: 518
We have had exactly this problem with a SharePoint site.
The root cause is the way NTLM works. The NTLM handshake is a 401.2, then a 401.1, followed by a 200: always 3 requests for each file.
For each of those requests, the web server sends a request to the AD server. The problem is that by default there are only 2 connections to the AD server (the Netlogon MaxConcurrentApi limit), so the requests get backed up and retried.
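If the Netlogon channel really is the bottleneck, that limit can be raised through the registry; a sketch (the value 5 is illustrative, not a recommendation, and the Netlogon service must be restarted for it to take effect):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters]
    "MaxConcurrentApi"=dword:00000005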
There are two things that you can do:
Make sure that you are caching the gif files (then you will not have to request them on every page view); see the config sketch below
Switch to Kerberos
Edit
For setting up Kerberos have a look at this blog post http://blogs.msdn.com/b/martinkearn/archive/2007/04/23/configuring-kerberos-for-sharepoint-2007-part-1-base-configuration-for-sharepoint.aspx
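On the caching point: if the site runs on IIS 7 or later, you can have IIS send far-future Cache-Control headers for static content in web.config; a sketch (the one-day max-age is arbitrary, and on IIS 6 you would set this in the IIS manager instead):

    <configuration>
      <system.webServer>
        <staticContent>
          <!-- Send Cache-Control: max-age so gifs/css/js are not
               re-requested, and re-authenticated, on every page view -->
          <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>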
Did you look at this common SharePoint performance fix?
Can you verify this happens on all clients? If you access the page from a browser on the server itself, do you still get this result?
If you haven't yet, turn off IPv6 in your network settings. Also, verify your DNS settings. Slow AD authentication + RPC Server Unavailable leads me to believe you may have addressing issues. Does everything seem responsive from a ping? When you log on to your machine, does it take a long time to log in (another symptom of DNS setting problems)?
If you are getting 401 errors on CSS / JS / images / .axd files with SharePoint and NTLM authentication, you must configure anonymous access on the web application, and if you have a publishing portal, activate anonymous access on:
- Style Library
- SiteCollectionImages
I came across this problem with CSS files I had downloaded. For some reason, the Windows setting "Encrypt contents to secure data" was checked for some files. After removing this setting, everything worked fine. Be sure to unblock the files too if necessary.
(Just to be sure: I'm talking about File -> Properties -> Advanced -> "Encrypt contents to secure data".)
I'm seeing odd behavior in WebKit (on Android): my server process sends a response that the client needs to handle immediately (rather than waiting for readyState 4). In Firefox and Safari this works as expected, but in WebKit not only does the client not respond to the readyState change, it appears to fire off a repeat request to the server!
This only seems to happen when the server takes a while to respond to the request. I'm still poking around to determine the exact circumstances that bring this about, but I'm curious whether this is a known bug and what, if anything, is a workaround.
[EDIT] This is just getting weirder and weirder. As long as the server responds within about 10 seconds, everything is fine; if it takes longer than that, the request is resubmitted. However, the browser appears to be unaware of this re-submission, or if it is aware, it doesn't report it in any way. I attached a unique ID to the request, and when it arrives at the server the second time the ID is the same, yet something is definitely spawning an additional call to the server. I'm at a loss as to how to debug this further.
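For context, here is a stripped-down sketch of the pattern I'm describing: reading partial data at readyState 3 instead of waiting for readyState 4 (the URL and handler names are placeholders, not my actual code):

    var xhr = new XMLHttpRequest();
    var seen = 0;  // characters already consumed from the stream

    xhr.onreadystatechange = function () {
        // readyState 3 fires repeatedly as data arrives; grab only the new part
        if (xhr.readyState === 3 || xhr.readyState === 4) {
            var chunk = xhr.responseText.substring(seen);
            seen = xhr.responseText.length;
            if (chunk) {
                handleChunk(chunk);  // placeholder for the real processing
            }
        }
    };
    xhr.open('GET', '/slow-endpoint', true);  // placeholder URL
    xhr.send();

It is during exactly this kind of long-lived request that the phantom duplicate shows up after about 10 seconds.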
Since no one has piped in: I fixed the problem by closing the connection from the server side.
So, my solution:
a) the client makes a call to the server (which is a Perl CGI script)
b) the server code does:
    print "Content-Type: application/json\n\n";  # CGI header must come first
    print $json_for_browser;     # some JSON for the browser (placeholder variable)
    close(STDOUT);               # this sends a readyState 4 to the browser and closes the connection
    &methodThatTakesAWhile();    # the slow work continues after the client has its response
This doesn't explain WHY the browser is misbehaving, but it does get around this particular bug.