We're having a really strange problem with IE7, and only IE7. It only occurs in native IE7, not when running IE7 mode in IE8/9, so please keep that in mind if you try to replicate the issue.
The problem is the following:
We're polling the server for a response with AJAX. The user posts something that the server may have to work on for a while, so every 5 seconds or so a request is sent to check whether the server is done. This works fine in every browser except native IE7, where the page never stops "loading". Watching the requests with Fiddler2, we see that IE7 makes two requests and then nothing more; it simply stops while the status is still "PENDING". A sane browser keeps polling and stops when it gets "CONFIRMED". The really weird thing is that IE7 makes its final request and finishes as normal only, and really only, when you open a new tab.
It's not that the page needs focus or anything; clicking around randomly does nothing. I'm asking here because I can't even reproduce the issue with a local instance of the project.
Here are the headers sent back by the server:
These are the headers for the response on the production machine; this was the last response received. The page loads indefinitely until you open a new tab (just an empty new tab!), then the final request is made and everything works out.
HTTP/1.1 200 OK
Accept-Ranges: bytes
Age: 0
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Tue, 04 Oct 2011 07:37:45 GMT
ETag: "867dafc628c43b6ca8a73d1977669250"
P3P: CP="ALL DSP COR CURa ADMa DEVa OUR IND COM NAV"
Server: nginx/1.0.6
Set-Cookie: _web_session=COOKIE; path=/; expires=Tue, 04-Oct-2011 10:37:45 GMT; HttpOnly
Vary: Accept-Encoding
Via: 1.1 varnish
X-Cache: MISS
X-Runtime: 0.062794
X-UA-Compatible: IE=Edge,chrome=1
X-Varnish: 55900984
Content-Length: 145
Connection: keep-alive
{"direct_publishing_settings_id":9970,"confirmed":"PENDING","errors":{},"username":"************","url":"","blog_id":44606,"platform":"blogg_se"}
These are the headers for the same request against my local server, which does not stall the requests.
HTTP/1.1 200 OK
X-Ua-Compatible: IE=Edge
Etag: "253c934246a69c9ca821464f80f400b3"
P3p: CP="ALL DSP COR CURa ADMa DEVa OUR IND COM NAV"
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8
Date: Tue, 04 Oct 2011 07:34:22 GMT
Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-01-10)
X-Runtime: 0.459232
Content-Length: 137
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: _web_session=COOKIE; path=/; expires=Tue, 04-Oct-2011 10:34:22 GMT; HttpOnly
{"direct_publishing_settings_id":10,"confirmed":"PENDING","url":"","blog_id":29,"errors":{},"username":"fsasaffas","platform":"blogg_se"}
If you want to try it, go to videofy.me and create a new account (it's really easy). When logged in, go to videofy.me/blogger/settings/direct_publishing, choose a blog platform in the first dropdown, press "activate", fill in the username/password fields that appear, then press the green button and wait forever. After 45 seconds, or some random amount of time, open a new tab and watch the request magically finish.
I'm guessing it has something to do with IE7 caching the request, and that something is released when a new tab is opened. But it's just a guess, and googling turns up nothing related. I'm posting here because the problem is so obscure that I hope someone knows something about it.
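For what it's worth, the workaround usually suggested for IE caching XHR responses is either to make each polling URL unique with a throwaway query-string parameter (a hypothetical ?_=<timestamp> appended on every poll), or to have the server send explicit anti-cache headers along these lines:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
I can't say whether that applies here, though, since the response above already carries max-age=0, must-revalidate.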
I'm a bit unsure what is happening here, but I'll try to explain it and maybe write a better question once I've figured out what I'm actually asking.
I have just installed Varnish, which seems awesome for my request times. It is a Magento 2 store, and I have followed the default Varnish configuration from the dev docs.
My Issue
Currently my issue is that the browser seems to cache the page until I click refresh. I believe I am successfully flushing/purging the cache with Magento/Varnish: when requesting the page with curl I can see that a new page is generated each time I flush the cache, and the cached page is served if I don't.
In Chrome and Firefox on my client PC, however, the whole page markup seems to be cached (when clicking a link to the page or pasting the URL into the browser) until I click refresh, which loads the real page from the server. When deploying new static files, the old resource references are still in the cached markup; since the new resource locations are version-signed (e.g. version1234/styles.css) and no longer match the markup, I get CSS-less pages until the client clicks refresh and loads the actual markup.
How can I set up caching so that this does not happen?
curl -IL result for the URL:
HTTP/1.1 200 OK
Date: Fri, 24 Nov 2017 12:08:32 GMT
Strict-Transport-Security: max-age=63072000; includeSubdomains
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Expires: Sun, 26 Nov 2017 15:55:17 GMT
Cache-Control: max-age=186400, public, s-maxage=186400
Pragma: cache
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Vary: Accept-Encoding
X-UA-Compatible: IE=edge
Content-Type: text/html; charset=UTF-8
X-Magento-Cache-Control: max-age=186400, public, s-maxage=186400
X-Magento-Cache-Debug: HIT
Grace: none
age: 0
Accept-Ranges: bytes
Connection: keep-alive
Browser caching takes place because these headers are being sent:
Expires: Sun, 26 Nov 2017 15:55:17 GMT
Cache-Control: max-age=186400, public, s-maxage=186400
You should adjust your server configuration so that those headers are not sent for PHP responses. Most likely you have a configuration block in nginx or .htaccess that applies to the whole website rather than just static files.
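For example, with nginx the long-lived caching directives can be scoped to static assets only. A minimal sketch, assuming PHP-FPM (the socket path and extension list are placeholders):
# Cache static assets aggressively; their paths are version-signed anyway.
location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
    expires 7d;
    add_header Cache-Control "public";
}
# No "expires" or Cache-Control override here: let Magento and Varnish
# control caching for the generated pages.
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;  # placeholder socket path
    include fastcgi_params;
}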
Issue:
We were having an issue with one of our AJAX requests in non-IE browsers (Chrome, Firefox, and Safari).
Working scenarios:
1. If we configure Chrome/Firefox to go through Fiddler and enable HTTPS decryption in Fiddler, it works as expected.
2. It works properly in Internet Explorer.
We could see the response on the page, but if you go to the network tab and check the response, it is null and the request never completes; it just keeps spinning. We were thinking it was some kind of browser decoding issue.
Please share any ideas about what might be causing this or how to fix it; any input will be appreciated.
Raw HTTP Header
HTTP/1.1 200 OK
content-encoding: gzip
content-language: en-US
content-type: application/json;charset=UTF-8
date: Wed, 19 Aug 2015 18:38:23 GMT
p3p: CP="NON CUR OTPi OUR NOR UNI"
vary: X-Forwarded-Host
transfer-encoding: chunked
server-name: app2
cache-control: private, must-revalidate, max-age: 0
x-powered-by: Servlet/3.0
x-ua-compatible: IE=edge
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-frame-options: SAMEORIGIN
expires: -1
The issue was server-side: I was getting extra data in the JSON response, and the data binding was not happening properly in Chrome, Firefox, and Safari.
IE has a looser parsing policy, which I guess is why it kept working there.
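Since the backend is a Java servlet stack (per the x-powered-by header), a trailing-data check can be added server-side before the response is written. A minimal sketch, assuming Jackson is on the classpath (JsonSanity and isCleanJson are hypothetical names, not from the original code):
import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonSanity {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // True only if body is exactly one JSON value with nothing after it.
    static boolean isCleanJson(String body) {
        try (JsonParser parser = MAPPER.getFactory().createParser(body)) {
            parser.readValueAsTree();          // consume the first JSON value
            return parser.nextToken() == null; // leftover tokens mean extra data
        } catch (IOException e) {
            return false;                      // malformed JSON
        }
    }

    public static void main(String[] args) {
        System.out.println(isCleanJson("{\"ok\":true}"));          // true
        System.out.println(isCleanJson("{\"ok\":true}{\"x\":1}")); // false
    }
}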
I am working on a web application which has user management in place. I have found a concerning issue in Firefox related to Work Offline. The following steps describe the scenario:
User logs in to the application
User performs some action and logs out of the application
If the user now enables Work Offline mode in Firefox, he/she can use the browser's Back button to access the last page, even though that page is supposed to be secure.
In my opinion this is a data security issue, as any other user could apply this technique to view sensitive information belonging to the previous user.
I have used cache-control headers to tell the browser that the HTML content should not be cached. The following are the response headers used:
HTTP/1.1 200 OK
Date: Tue, 05 May 2015 10:39:30 GMT
Server: Apache/2.4.9 (Unix) OpenSSL/0.9.8za
Cache-Control: no-cache, no-store
Expires: Wed, 31 Dec 1969 23:59:59 GMT
Content-Type: text/html;charset=UTF-8
Content-Language: en
Vary: Accept-Encoding
Content-Encoding: gzip
X-Frame-Options: SAMEORIGIN
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Specifically, I have used:
Cache-Control: no-cache, no-store
Expires: Wed, 31 Dec 1969 23:59:59 GMT
I have noticed this vulnerability in applications like Facebook as well. Is this resolvable? Thank you.
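A minimal sketch of how such headers can be applied to every secured page at once, assuming a Java servlet stack (NoCacheFilter is a hypothetical name; the equivalent can be configured in Apache itself):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Stamps anti-cache headers on every response passing through the filter.
public class NoCacheFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "no-cache, no-store");
        response.setDateHeader("Expires", 0); // a date in the past
        chain.doFilter(req, res);
    }
    public void init(FilterConfig config) {}
    public void destroy() {}
}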
I am not using the SBT, but making direct REST calls with Abdera to the current version of Connections on IBM SmartCloud. REST URL in question: https://apps.na.collabserv.com/search/serviceconfigs
Observations
When testing from my laptop (using Firefox and the REST Client add-on), this works as expected. I get back an ATOM feed.
When testing from a server (on a different network), using the same method (Firefox + REST Client), I get back HTML that is a log-in page.
In addition, I get this same result when I call the URL from a Java program on the same server.
In all cases, I am using the same credentials with basic authentication.
Update: If I log in to SmartCloud first in a separate Firefox tab on the server, then call the URL as before from another tab, it works. I get the ATOM feed as desired. Naturally, this is unsuitable as a solution, but I present it as additional information that could lead to an actual solution.
Update: Further testing shows that the local (laptop) log-in exhibits the same behavior as the server: a form-based log-in is required from the same browser, after which subsequent REST calls work.
Update: Here is a relevant simplified code snippet:
import org.apache.abdera.Abdera;
import org.apache.abdera.protocol.client.AbderaClient;
import org.apache.abdera.protocol.client.ClientResponse;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;

// Shared Abdera engine and client for all REST calls.
private static Abdera ABDERA = new Abdera();
private static AbderaClient ABDERA_CLIENT = new AbderaClient(ABDERA);
...
// Register basic-auth credentials for the SmartCloud host, for any realm.
String host = "https://apps.na.collabserv.com";
ABDERA_CLIENT.addCredentials(host, AuthScope.ANY_REALM, "basic",
        new UsernamePasswordCredentials("user", "password"));
...
// This GET comes back with the HTML log-in page instead of the ATOM feed.
ClientResponse response = ABDERA_CLIENT.get("https://apps.na.collabserv.com/search/serviceconfigs");
Summary
It appears that something about the originating server or the call is causing SmartCloud to respond with a log-in page, whereas the same call and credentials from my laptop work as expected.
Question
Where should I start to troubleshoot this? How can I build the client credentials to allow programmatic log-in?
Response Headers
If it helps, here are the response headers that I get back in each case.
Unsuccessful
Status Code: 200 OK
Cache-Control: no-cache
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 1850
Content-Type: text/html
Date: Tue, 08 Oct 2013 14:15:03 GMT
Pragma: no-cache
Server: WebSEAL/6.1.1.3 (Build 110428)
Set-Cookie: PD-H-SESSION-ID=4_0_IR4***masked***oRKlJI;secure; Path=/; HttpOnly BIGipServerE3A-WebSEAL-80-fe=2132806922.20480.0000;secure; path=/
Vary: Accept-Encoding
p3p: CP="NON CUR OTPi OUR NOR UNI"
Successful
Status Code: 200 OK
Cache-Control: public, max-age=86400, s-maxage=86400, no-cache=set-cookie, private, must-revalidate
Content-Encoding: gzip
Content-Language: en-US
Content-Length: 1164
Content-Type: application/atom+xml; charset=UTF-8
Date: Mon, 07 Oct 2013 17:21:12 GMT
Expires: Tue, 08 Oct 2013 17:21:12 GMT
Server: WebSphere Application Server/8.0
Vary: Accept-Encoding
p3p: CP="NON CUR OTPi OUR NOR UNI"
x-lconn-auth: true
x-powered-by: Servlet/3.0
@Grant, is your login using SAML? I could see this redirect happening. It could also be TFIM-related... maybe you should grab the auth on a different page, store the cookies, and then try connecting to the endpoint above.
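A standalone sketch of that idea in plain JDK, so the cookie handling is explicit. The pkmslogin.form path and form-field names follow the usual WebSEAL form-login convention (the unsuccessful response above is served by WebSEAL), but verify them against what the browser actually posts:
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;

public class FormLoginSketch {
    public static void main(String[] args) throws Exception {
        // Keep cookies across requests, the way a browser tab would.
        CookieHandler.setDefault(new CookieManager());

        // 1. Post credentials to the assumed WebSEAL form-login endpoint.
        URL login = new URL("https://apps.na.collabserv.com/pkmslogin.form");
        HttpURLConnection post = (HttpURLConnection) login.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        String form = "username=user&password=password&login-form-type=pwd";
        post.getOutputStream().write(form.getBytes("UTF-8"));
        System.out.println("login status: " + post.getResponseCode());

        // 2. The session cookies are now stored, so the REST call should
        //    return the ATOM feed instead of the log-in page.
        URL feed = new URL("https://apps.na.collabserv.com/search/serviceconfigs");
        HttpURLConnection get = (HttpURLConnection) feed.openConnection();
        System.out.println("feed status: " + get.getResponseCode());
    }
}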
I'm building a webpage and I'm getting the following warning (the page loads, but when I change section it never loads again and just displays the warning).
The warning:
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Wed, 10 Mar 2010 12:04:11 GMT
Content-Length: 4022
I don't think it happened on my computer at home. The problem seems to occur only occasionally (apparently at random). I'm using cookies and sessions (PHP) on this page.
This is getting very strange: I just came back to my house and the problem disappeared (could it be because the other computer was running Vista?).
Is this a problem with the webpage or the server?
Everything seems normal here.
Do you have a proxy in that environment? In rare cases, a proxy can cause that kind of issue.