I have a question involving a "timeout" when sending an HTTPS "GET" request using the ServerXMLHTTP object.
In order to fool the object into sending the request with the logged-in user's ID and password, I set it up to use a dummy proxy and then excluded the domain of the URL (which is on the intranet). So the variable url_to_get contains .mydomain.com, while the proxy address is actually "not.used.com".
// JScript source code
var HTTP_RequestObject = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");
// Using logged-in username authentication
HTTP_RequestObject.open("GET", url_to_get, false);
HTTP_RequestObject.setProxy(2, "not.used.com", "*.mydomain.com");
try
{
    HTTP_RequestObject.send();
}
catch (e)
{
    // log the exception (see below)
}
In the catch block, I log an exception of "(0x80072EE2) The operation timed out". This is timestamped 1 to 2 seconds after a log message written right before the open().
A retry works as expected, and this can be repeated over and over. Is this something on the server side? Or is it a result of the proxy?
Well, this was painful and embarrassing. I figured out the root cause of the timeout I was seeing: I had set the timeouts to "defaults" that were 3 orders of magnitude smaller than the real defaults. So even when I increased them to what I thought were really large values, I was still below the defaults.
I had skimmed the wording on the Microsoft page describing the setTimeouts() method and misinterpreted the time base of the parameters: I assumed seconds, while it is actually milliseconds.
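For anyone making the same mistake, here is a minimal sketch of a corrected call. The four parameters are the resolve, connect, send, and receive timeouts, in that order, all in milliseconds; the values below are only illustrative:
// JScript sketch: setTimeouts() takes milliseconds, not seconds,
// so a 30-second receive timeout must be passed as 30000, not 30.
HTTP_RequestObject.setTimeouts(
    5000,    // resolveTimeout: 5 s
    60000,   // connectTimeout: 60 s
    30000,   // sendTimeout:    30 s
    30000);  // receiveTimeout: 30 s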
During the process of debugging this issue, I duplicated the code using the alternative COM object, "WinHttp.WinHttpRequest.5.1", and in verifying the equivalent API, SetTimeouts(), I discovered the faux pas.
I did learn a few things in the process, so all is not lost: "WinHttp.WinHttpRequest.5.1" has a SetAutoLogonPolicy() [3] method that lets me skip the hokey "proxy" foolishness required to force "Msxml2.ServerXMLHTTP.6.0" to send the user's credentials to an intranet server. I also fiddled with Fiddler [4] and learned enough to be dangerous!
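As a sketch of that alternative (the policy constants are from the WinHTTP documentation, where 0 means "always send the logged-in user's credentials", and url_to_get is the same intranet URL as above):
// JScript sketch using WinHttp.WinHttpRequest.5.1 instead of the proxy trick
var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
WinHttpReq.SetAutoLogonPolicy(0); // 0 = AutoLogonPolicy_Always
WinHttpReq.Open("GET", url_to_get, false);
WinHttpReq.Send();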
Hopefully someone else can learn from my mistake and find this useful in debugging their own issue in the future.
Here are some inline links since I don't have enough rep to post more than two:
[3] : msdn.microsoft.com/en-us/library/windows/desktop/aa384050%28v=vs.85%29.aspx
[4] : fiddler2.com
Msxml2.ServerXMLHTTP.6.0 can be used via a proxy. It took me a while, but persistence paid off. I moved from WinHttp.WinHttpRequest.5.1 to Msxml2.ServerXMLHTTP.6.0 on the advice of Microsoft.
Where varHTTP is your object reference to Msxml2.ServerXMLHTTP.6.0, you can use
varHTTP.setProxy 2, ProxyServerName, "BypassList"
Hope this helps, and that you press onwards with Msxml2.ServerXMLHTTP.6.0.
I placed this code inside a Route::get() callback just to test it more quickly. This is how it looks:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    $lock = Cache::lock('test', 4);

    if ($lock->get()) {
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        // dump('inside get');
    } else {
        dump('locked');
    }

    // $lock->release();
});
If you hit this route from two browsers at (almost) the same time, they both respond with the result of dump(Cache::get('name'));. Shouldn't the second browser's response be "locked"? When it calls $lock->get(), that should return false, because the lock should still be set when the second browser reaches the route.
The same code works just fine if the code after $lock = Cache::lock('test', 4) takes less than 4 seconds to execute. If you use sleep($sec) with $sec < 4, you will see that the first browser to reach the route responds with the result of Cache::get('name') and the second browser responds with "locked", as expected.
Can anyone explain why this is happening? Isn't every get() call on that lock, except the first one, supposed to return false for as long as the lock is set? I used 2 different browsers, but it behaves the same with 2 tabs of the same browser.
Quote from the 5.6 docs https://laravel.com/docs/5.6/cache#atomic-locks:
To utilize this feature, your application must be using the memcached or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 5.8 docs https://laravel.com/docs/5.8/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, dynamodb, or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 8.0 docs https://laravel.com/docs/8.x/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, redis, dynamodb, database, file, or array cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Apparently, they have been adding support for more drivers for this lock functionality over time. Check which cache driver you are using and whether it is on the supported list for your Laravel version.
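As a quick, hypothetical way to check that (the default driver is read from config/cache.php; runnable in php artisan tinker, for example):
// Atomic locks on Laravel 5.6 require this to be memcached or redis
dump(config('cache.default'));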
There is likely an atomicity issue here, where the cache driver you are using is not able to lock a file atomically. What should happen is that while one process (i.e. one PHP request) is writing to the lock file, all other processes requiring the lock should at least wait until the lock file is available to be read again. If not, they read the lock file before it has been written to, which obviously causes a race condition.
Coming back to this question I asked: I can now say that the problem I was trying to solve here was not caused by the atomic lock. The problem is the sleep() call. If the time given to sleep() is longer than the lock's time to live, then by the time the next request is able to hit the route, the lock will have expired (been released). To see why, say you have defined a route like this:
Route::get('case/{value}', function ($value) {
    if ($value) {
        dump('hit-1');
    } else {
        sleep(5);
        dump('hit-0');
    }
});
And you open two browser tabs with URLs that hit this route, something like:
127.0.0.1:8000/case/0
and
127.0.0.1:8000/case/1
You will see that the first request takes 5 seconds to finish, and even if the second request is sent at almost the same time, it still waits for the first one to finish before it runs (this serialization is typical of, for example, PHP's single-threaded built-in development server). So the second request lasts the 5 seconds of the first request plus its own execution time.
Back to the original question: the lock will already have expired by the time the second request gets to run its $lock->get() statement.
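In other words, keeping the lock's TTL longer than the work it protects, and releasing it explicitly, avoids the confusion. A sketch with illustrative values, assuming the same Cache facade import as above:
Route::get('/cache', function () {
    $lock = Cache::lock('test', 10); // TTL longer than the 5 s of work below

    if ($lock->get()) {
        sleep(5);          // simulated work, shorter than the TTL
        $lock->release();  // release explicitly instead of waiting for expiry
        dump('did the work');
    } else {
        dump('locked');
    }
});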
I am using Nancy.Authentication.Forms. The users are vociferously complaining that their sessions time out too quickly (in less than 5 minutes). I've searched high and low but cannot find how to set the session timeout to one hour. Any help is greatly appreciated.
Thanks,
Ross
I'm not 100% sure what could be causing it, given the little information provided, but I think your problem is probably due to the encryption/decryption of the cookie.
Cookie encryption keys differ across application restarts
By default, a new key is created every time you start the application. This causes decryption of the cookie to fail, which causes authentication to fail and requires you to re-authenticate to create a new cookie.
By setting your own keys, decryption works the same after every restart. This is VERY important if you have a load balancer.
https://github.com/NancyFx/Nancy/wiki/Forms-Authentication#a-word-on-cryptography
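Along the lines of that wiki page, a minimal sketch of fixing the keys; the passphrases and salts here are placeholders (generate your own), and RedirectUrl/UserMapper are just the usual forms-auth settings:
var cryptographyConfiguration = new CryptographyConfiguration(
    new RijndaelEncryptionProvider(
        new PassphraseKeyGenerator("SuperSecretPass", new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 })),
    new DefaultHmacProvider(
        new PassphraseKeyGenerator("UberSuperSecure", new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 })));

var formsAuthConfiguration = new FormsAuthenticationConfiguration
{
    CryptographyConfiguration = cryptographyConfiguration,
    RedirectUrl = "~/login",
    UserMapper = container.Resolve<IUserMapper>(),
};

FormsAuthentication.Enable(pipelines, formsAuthConfiguration);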
Since I also did not find an answer, I would like to post my research result:
WithCookie accepts an expiration DateTime.
https://github.com/NancyFx/Nancy/blob/a9989a68db1c822d897932b2af5689284fe2ceb8/src/Nancy/ResponseExtensions.cs
public static Response WithCookie(this Response response, string name, string value, DateTime? expires);
In my case it would be:
return Response.AsRedirect(redirectUrl)
.WithCookie(SecurityExtensions.CookieName, newSession.Id.ToString(), DateTime.Now.AddHours(1));
For the session duration itself, I haven't found anything yet.
I have a 100% HTTPS web server that does TLS renegotiation. This is very useful so that users can come to the site and get some nice pages before clicking the login button and being asked for their client certificate. Below is the part of the code that does the renegotiation (lines 213-236 of the X509Cert class):
import org.jboss.netty.handler.ssl.SslHandler

val sslh = r.underlying.context.getPipeline.get(classOf[SslHandler])
trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq) orElse {
  if (!fetch) None
  else {
    sslh.setEnableRenegotiation(true) // todo: does this have to be done on every request?
    r match {
      case UserAgent(agent) if needAuth(agent) => sslh.getEngine.setNeedClientAuth(true)
      case _ => sslh.getEngine.setWantClientAuth(true)
    }
    val future = sslh.handshake()
    future.await(30000) // that's certainly way too long
    if (future.isDone && future.isSuccess)
      trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq)
    else
      None
  }
}
Now I was expecting that once someone authenticates with an X.509 client certificate, the session would last a little while and remember that certificate chain: say 10 minutes or more, and in any case at the very least 1 minute. Indeed, that is why I have the option of calling the above method with the variable "fetch" set to false: I am hoping that once someone has authenticated, the connection does not need to be renegotiated.
But I noticed that with most browsers it looks like I need to call sslh.handshake() every time if I want to get the session and return the X.509 certificates. If "fetch" is set to false, I mostly get None back.
Is this normal, or is there something I am doing wrong?
PS.
the code above is part of an implementation of the WebID protocol
This is using Netty 3.2.5.Final. I also tried 3.2.7.Final with no more luck.
So I had to change the code of the service running the above so that it always forces a handshake (see the commit), but this does not give me as much flexibility as I had hoped.
It turns out I was having a problem with unfiltered's use of netty. I found this when implementing the same on Play 2.0.
See bug report 215 on Play 2.0, and unfiltered bug 111: secure netty creates new TLS contexts on each connection.
Using MS Visual Studio 2008 C++ for Windows 32 (XP brand), I am trying to construct a POP3 client managed from a modeless dialog box.
The first step is to create a persistent object (say pop3) with all that Boost.Asio stuff for asynchronous connections, in the WM_INITDIALOG message of the dialog box procedure. Something like:
case WM_INITDIALOG:
    return (iniPop3Dlg (hDlg, lParam));
Here we assume that iniPop3Dlg() creates the pop3 heap object, say pointed to by pop3p. It then connects to the remote server, and a session is initiated with the client's ID and password (the USER and PASS commands). At this point the server is in the TRANSACTION state.
Then, in response to some user input, the dialog box procedure calls the appropriate function. Say:
case IDS_TOTAL:   // get how many emails are on the server
    total (pop3p);
    return FALSE;
case IDS_DETAIL:  // get date, sender and subject for each email on the server
    detail (pop3p);
    return FALSE;
Note that total() uses POP3's STAT command to get how many emails are on the server, while detail() uses two commands consecutively: first STAT to get the total, and then a loop with the RETR command to retrieve the content of each message.
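For reference, here is a sketch of what that exchange looks like on the wire in the TRANSACTION state, per RFC 1939 (the message counts and sizes are illustrative):
C: STAT
S: +OK 2 320
C: RETR 1
S: +OK 120 octets
S: <the server sends message 1, terminated by a line containing only ".">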
As an aside: detail() and total() share the same subroutines (the STAT handling routine), and when finished, both leave the session as-is, that is, without closing the connection; the socket remains open and the server stays in the TRANSACTION state.
When either option is selected for the first time, things run as expected and the desired results are obtained. But on the second try, the connection hangs.
A closer inspection shows that the first time the statement
socket_.get_io_service().run();
is used, it never returns.
Note that all asynchronous write and read routines use the same io_service, and each routine calls socket_.get_io_service().reset() prior to any run().
Note also that all R/W operations use the same timer, which is reset to a zero wait after each operation completes:
dTimer_.expires_from_now (boost::posix_time::seconds(0));
I suspect that the problem is in the io_service or in the timer, and in the fact that subsequent executions occur in a different invocation of the routine.
As a first approach to my problem, I hope that someone can shed some light on it, prior to a more detailed exposition of the (very few and simple) routines involved.
Have you looked at the Asio examples and studied them? There are several asynchronous examples that should help you understand the basic control flow. Pay particular attention to the main event loop started by invoking io_service::run(): it is important to understand that control is not expected to return to the caller until the io_service has no more work to do.
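Here is a minimal sketch of those run() semantics with the io_service API of that Boost era (names are illustrative; io_service::work is the standard way to keep run() from returning while more operations are still expected):
#include <boost/asio.hpp>
#include <cstddef>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    // With no pending work, run() returns immediately.
    std::size_t handlers = io.run();
    std::cout << "handlers executed: " << handlers << '\n'; // prints 0

    // A work object keeps run() busy until the work object is destroyed;
    // this is one way to reuse a single io_service across many operations
    // instead of wrapping each one in its own reset()/run() pair.
    boost::asio::io_service::work work(io);
    // io.run(); // would now block until `work` goes away and the queue drains

    return 0;
}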
It appears that when you make an XMLHttpRequest from a script in a browser, if the browser is set to work offline or if the network cable is pulled out, the request completes with an error and with status = 0. 0 is not listed among permissible HTTP status codes.
What does a status code of 0 mean? Does it mean the same thing across all browsers, and for all HTTP client utilities? Is it part of the HTTP spec or is it part of some other protocol spec? It seems to mean that the HTTP request could not be made at all, perhaps because the server address could not be resolved.
What error message is appropriate to show the user? "Either you are not connected to the internet, or the website is encountering problems, or there might be a typing error in the address"?
I should add that I see this behavior in Firefox when set to "Work Offline", but not in Microsoft Internet Explorer when set to "Work Offline". In IE, the user gets a dialog giving the option to go online. Firefox does not notify the user before returning the error.
I am asking this in response to a request to "show a better error message". What Internet Explorer does is good: it tells the user what is causing the problem and gives them the option to fix it. In order to give an equivalent UX in Firefox, I need to infer the cause of the problem and inform the user. So what in total can I infer from status 0? Does it have a universal meaning or does it tell me nothing?
Short Answer
It's not an HTTP response code, but it is documented by the WHATWG as a valid value for the status attribute of an XMLHttpRequest or a Fetch response.
Broadly speaking, it is a default value used when there is no real HTTP status code to report and/or an error occurred sending the request or receiving the response. Possible scenarios where this is the case include, but are not limited to:
The request hasn't yet been sent, or was aborted.
The browser is still waiting to receive the response status and headers.
The connection dropped during the request.
The request timed out.
The request encountered an infinite redirect loop.
The browser knows the response status, but you're not allowed to access it due to security restrictions related to the Same-origin Policy.
Long Answer
First, to reiterate: 0 is not an HTTP status code. There's a complete list of them in RFC 7231 Section 6.1, which doesn't include 0, and the intro to Section 6 states clearly that
The status-code element is a three-digit integer code
which 0 is not.
However, 0 as a value of the .status attribute of an XMLHttpRequest object is documented, although it's a little tricky to track down all the relevant details. We begin at https://xhr.spec.whatwg.org/#the-status-attribute, documenting the .status attribute, which simply states:
The status attribute must return the response’s status.
That may sound vacuous and tautological, but in reality there is information here! Remember that this documentation is talking about the .status attribute of an XMLHttpRequest, not of a response, so it tells us that the definition of status on an XHR object is deferred to the definition of a response's status in the Fetch spec.
But what response object? What if we haven't actually received a response yet? The inline link on the word "response" takes us to https://xhr.spec.whatwg.org/#response, which explains:
An XMLHttpRequest has an associated response. Unless stated otherwise it is a network error.
So the response whose status we're getting is by default a network error. And by searching for everywhere the phrase "set response to" is used in the XHR spec, we can see that it's set in five places:
To a network error, when:
the open() method is called, or
the response's body's stream is errored (see the algorithm described in the docs for the send() method)
the timed out flag is set, causing the request error steps to run
the abort() method is called, causing the request error steps to run
To the response produced by sending the request using Fetch, by way of either the Fetch process response task (if the XHR request is asynchronous) or the Fetch process response end-of-body task (if the XHR request is synchronous).
Looking in the Fetch standard, we can see that:
A network error is a response whose status is always 0
so we can immediately tell that we'll see a status of 0 on an XHR object in any of the cases where the XHR spec says the response should be set to a network error. (Interestingly, this includes the case where the body's stream gets "errored", which the Fetch spec tells us can happen during parsing the body after having received the status - so in theory I suppose it is possible for an XHR object to have its status set to 200, then encounter an out-of-memory error or something while receiving the body and so change its status back to 0.)
We also note in the Fetch standard that a couple of other response types exist whose status is defined to be 0, whose existence relates to cross-origin requests and the same-origin policy:
An opaque filtered response is a filtered response whose ... status is 0...
An opaque-redirect filtered response is a filtered response whose ... status is 0...
(various other details about these two response types omitted).
But beyond these, there are also many cases where the Fetch algorithm (rather than the XHR spec, which we've already looked at) calls for the browser to return a network error! Indeed, the phrase "return a network error" appears 40 times in the Fetch standard. I will not try to list all 40 here, but I note that they include:
The case where the request's scheme is unrecognised (e.g. trying to send a request to madeupscheme://foobar.com)
The wonderfully vague instruction "When in doubt, return a network error." in the algorithms for handling ftp:// and file:// URLs
Infinite redirects: "If request’s redirect count is twenty, return a network error."
A bunch of CORS-related issues, such as "If httpRequest’s response tainting is not "cors" and the cross-origin resource policy check with request and response returns blocked, then return a network error."
Connection failures: "If connection is failure, return a network error."
In other words: whenever something goes wrong other than getting a real HTTP error status code like a 500 or 400 from the server, you end up with a status attribute of 0 on your XHR object or Fetch response object in the browser. The number of possible specific causes enumerated in the spec is vast.
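To make this concrete, here is a minimal sketch (browser JavaScript; the URL is a placeholder) of where that 0 surfaces in practice:
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api"); // placeholder URL
xhr.onload = function () {
    // A real HTTP status (200, 404, 500, ...) came back from the server.
    console.log("HTTP status:", xhr.status);
};
xhr.onerror = function () {
    // Network failure, CORS block, etc.: no real status exists, so it reads 0.
    console.log("status after network error:", xhr.status); // 0
};
xhr.send();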
Finally: if you're interested in the history of the spec for some reason, note that this answer was completely rewritten in 2020, and that you may be interested in the previous revision of this answer, which parsed essentially the same conclusions out of the older (and much simpler) W3C spec for XHR, before those were replaced by the more modern and more complicated WHATWG specs this answer refers to.
Status 0 appears when an Ajax call is cancelled before getting the response, e.g. by refreshing the page, or when requesting a URL that is unreachable.
This status is not documented there, but it occurs for Ajax and for makeRequest calls from gadget.io.
I know it's an old post, but these issues still exist.
Here are some of my findings on the subject, broadly explained.
"Status" 0 means one of 3 things, as per the XMLHttpRequest spec:
DNS name resolution failed (for instance, when the network plug is pulled out)
the server did not answer (i.e. unreachable or unresponsive)
the request was aborted because of a CORS issue (the abort is performed by the user agent and follows a failed OPTIONS preflight)
If you want to go further, dive deep into the innards of XMLHttpRequest. I suggest reading about the ready-state update sequence ([0,1,2,3,4] is the normal sequence, [0,1,4] corresponds to status 0, and [0,1,2,4] means no content was sent, which may or may not be an error). You may also want to attach listeners to the XHR (onreadystatechange, onabort, onerror, ontimeout) to figure out the details; see the sketch after the spec excerpt below.
From the spec (XHR Living spec):
const unsigned short UNSENT = 0;
const unsigned short OPENED = 1;
const unsigned short HEADERS_RECEIVED = 2;
const unsigned short LOADING = 3;
const unsigned short DONE = 4;
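A minimal sketch of wiring up those listeners (the URL, timeout value, and handler bodies are illustrative):
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    // Normal sequence: 0 (UNSENT) -> 1 (OPENED) -> 2 (HEADERS_RECEIVED)
    // -> 3 (LOADING) -> 4 (DONE); jumping from 1 straight to 4 goes with status 0.
    console.log("readyState:", xhr.readyState, "status:", xhr.status);
};
xhr.onabort = function () { console.log("aborted"); };
xhr.onerror = function () { console.log("network error"); };
xhr.ontimeout = function () { console.log("timed out"); };
xhr.open("GET", "https://example.com/resource"); // placeholder URL
xhr.timeout = 5000; // milliseconds
xhr.send();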
From the documentation (http://www.w3.org/TR/XMLHttpRequest/#the-status-attribute): a status of 0 means the request was cancelled before it went anywhere.
Since iOS 9, you need to add "App Transport Security Settings" to your Info.plist file and enable "Allow Arbitrary Loads" before making requests to a non-secure HTTP web service. I had this issue in one of my apps.
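For reference, a minimal sketch of that entry as it appears in the Info.plist XML source (the key names are Apple's; enabling arbitrary loads globally weakens security, so scope it to specific domains if you can):
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>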
Yes, somehow the Ajax call was aborted. The cause may be one of the following:
The user navigated to another page before the Ajax request completed.
The Ajax request timed out.
The server was unable to return any response.