I'm getting an "incomplete" HTTP_REQUEST when calling HttpReceiveHttpRequest in synchronous mode (i.e., pOverlapped = NULL) and a complete one if I instead wait, say, 100 milliseconds, before reading the structure.
For instance, HTTP_COOKED_URL has a FullUrlLength of 52, but all 52 bytes pointed to by pFullUrl are zero. However, if I wait some time after calling HttpReceiveHttpRequest, the returned structure is complete, meaning that pFullUrl, as well as the rest of the fields, comes back with data.
I've tried with both HTTP_RECEIVE_REQUEST_FLAG_FLUSH_BODY and HTTP_RECEIVE_REQUEST_FLAG_COPY_BODY.
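For reference, this is roughly the call pattern (a simplified sketch, not my exact code; hReqQueue is the request queue handle created earlier, and error handling is trimmed):

// Simplified sketch of the synchronous receive described above.
ULONG bufferSize = sizeof(HTTP_REQUEST) + 4096;
std::vector<BYTE> buffer(bufferSize);
PHTTP_REQUEST pRequest = reinterpret_cast<PHTTP_REQUEST>(buffer.data());
ULONG bytesReturned = 0;

ULONG rc = HttpReceiveHttpRequest(
    hReqQueue,        // request queue handle created earlier
    HTTP_NULL_ID,     // receive any new request
    0,                // flags (also tried FLUSH_BODY / COPY_BODY)
    pRequest,
    bufferSize,
    &bytesReturned,
    NULL);            // NULL pOverlapped => synchronous call

if (rc == NO_ERROR) {
    // In synchronous mode I expect the structure to be fully populated
    // here, including the bytes pointed to by CookedUrl.pFullUrl.
    wprintf(L"%.*s\n",
            (int)(pRequest->CookedUrl.FullUrlLength / sizeof(WCHAR)),
            pRequest->CookedUrl.pFullUrl);
}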
Is this behavior normal or unexpected?
I am currently learning to use Go as a server-side language. I'm learning how to handle forms, and I wanted to see how I could prevent a malicious client from sending a very large file (in the case of a multipart/form-data form) and causing the server to run out of memory. For now this is my code, which I found in a question here on Stack Overflow:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
In my code, r is a *http.Request. I thought that code would work, but when I send a file of any size (according to my code, the maximum should be 8 MB), my code still receives the entire file, so I doubt that it actually works. So my questions are: does my code really work incorrectly? Is there a concept I am missing that explains this behavior? How can I limit the size of an HTTP request correctly?
Update
I tried running the code shown in the answers, namely:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(bytes.NewReader(part))
But when I run that code and send a file larger than 8 MB, I get this message from my web browser:
The connection was reset
The connection to the server was reset while the page was loading.
How can I solve that? How can I read at most 8 MB without getting that error?
I would ask the question: "How is your service intended/expected to behave if it receives a request greater than the maximum size?"
Perhaps you could simply check the ContentLength of the request and immediately return a 400 Bad Request (or, arguably more accurate, a 413 Request Entity Too Large) if it exceeds your maximum? Bear in mind that ContentLength is -1 when the length is unknown (e.g. a chunked request body), so you may want to reject that case as well:
func MyHandler(rw http.ResponseWriter, rq *http.Request) {
    if rq.ContentLength > 8388608 {
        rw.WriteHeader(http.StatusBadRequest)
        rw.Write([]byte("request content limit exceeded"))
        return
    }
    // ... normal processing
}
This has the advantage of not reading anything and of deciding not to proceed at the earliest possible opportunity (short of throttling on the ingress itself), minimising CPU and memory load on your process.
It also simplifies your normal processing, which then does not have to cater for circumstances where a partial request might be involved, or abort and possibly clean up processing if the request content limit is reached before all content has been processed.
Your code reads:
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
This means that you are assigning a new io.MultiReader to your body that:
reads at most 8388608 bytes from a byte slice in memory
and then reads the rest of the body after those 8388608 bytes
To ensure that you only read 8388608 bytes at most, replace that line with:
r.Body = ioutil.NopCloser(bytes.NewReader(part))
I'm making an asynchronous request with WinINet. When the status callback function is called with INTERNET_STATUS_REQUEST_COMPLETE, I get the HTTP status code.
result = HttpQueryInfo(
    this->requestHandle,
    HTTP_QUERY_STATUS_CODE | HTTP_QUERY_FLAG_NUMBER,
    &value,
    &sizeofDword,
    &index);
The status code returned is 200. After that, I call InternetReadFile().
result = InternetReadFile(
    this->requestHandle,
    ((char*)(this->buffer)) + this->totalBytesReceived,
    this->bufferSize - this->totalBytesReceived,
    &bytesRead);
this->totalBytesReceived += bytesRead;
It returns TRUE and sets lpNumberOfBytesRead to zero. GetLastError() returns ERROR_IO_PENDING, so I wait until the callback function is called again with INTERNET_STATUS_REQUEST_COMPLETE.
When that occurs, InternetReadFile() returns TRUE and again sets lpNumberOfBytesRead to zero.
If I debug the application, I can see after the first InternetReadFile() that the response data is already in lpBuffer. Moreover, if I call Sleep() for one second before InternetReadFile(), InternetReadFile() works correctly.
Sleep(1000);
result = InternetReadFile( ...
Am I missing any step?
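For reference, this is the overall read loop I'm trying to arrive at (a sketch; ReadLoop is a hypothetical helper, and my understanding from the documentation is that ERROR_IO_PENDING only applies when InternetReadFile returns FALSE, while TRUE with zero bytes read signals the end of the response):

void ReadLoop()
{
    for (;;) {
        DWORD bytesRead = 0;
        BOOL ok = InternetReadFile(
            this->requestHandle,
            ((char*)(this->buffer)) + this->totalBytesReceived,
            this->bufferSize - this->totalBytesReceived,
            &bytesRead);
        if (!ok) {
            if (GetLastError() == ERROR_IO_PENDING)
                return;  // wait for INTERNET_STATUS_REQUEST_COMPLETE,
                         // then call ReadLoop() again
            return;      // a real error: handle/report it
        }
        if (bytesRead == 0)
            return;      // TRUE with zero bytes read: end of response
        this->totalBytesReceived += bytesRead;
    }
}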
I've encountered a similar problem, and it persisted even when using Sleep(1000). I was connecting to a third-party camera stream; the debug build worked perfectly, but when I went back to the release build, it just wouldn't work.
I fixed it by changing HttpOpenRequest's lplpszAcceptTypes argument to NULL, and then everything started working.
It seems that WinINet behaves differently depending on the environment (device type, OS, debug or release build) in ways we cannot easily predict.
I'm looping through several items and making an ajax request for each of them (using jQuery). I want them to execute independently, but populate into the DOM in the order they were called, not the order they are returned (for some reason some requests are taking longer than others). Any tips on the best practice for this type of thing?
Well, the results can come back in any order; they are asynchronous and subject to the vagaries of the internet and of servers.
What you can do is deal with the problem the same way TCP does over unordered IP: use sequence identifiers.
Keep a sequence identifier going, and increment it every time you send out a request. As responses come back, record each one against its identifier and only process them in order. Keep an ordered list of what has returned, together with its data, and have a routine check that list after each update; when the first expected response is in, process the whole list down to the first gap.
Bear in mind that you could lose a request, so a suitable timeout, after which you give up on a given sequence identifier, would be in order.
The answer to this ended up being a jQuery plugin called ajaxManager. This did exactly what I needed:
https://github.com/aFarkas/Ajaxmanager
You could send all the success result objects to a queue. Have an index that was sent with the original request, and continually check that queue for the next index.
But generally browsers only allow two simultaneous ajax requests, so it might be worth it to just send the next ajax request on success of the previous request.
Here's a start at the code:
var results = {}, lastProcessedIndex = 0;
var totalLength = $('a.myselector').each(function(index){
    // note: jQuery's each() passes (index, element), not (element, index)
    $.ajax({
        url: $(this).attr('href'),
        success: function(result){
            results[index] = result; // keyed by request order, not arrival order
        }
    });
}).length;
var intervalId = setInterval(function(){
    // process any contiguous run of results that have arrived, in order
    while(results[lastProcessedIndex] !== undefined){
        // use results[lastProcessedIndex] here
        lastProcessedIndex++;
    }
    if(totalLength === lastProcessedIndex){
        clearInterval(intervalId); // all responses processed
    }
}, 1000); // poll every second
I'll be taking a stab in the dark with this one, but it might help. Maybe you could create a global buffer array and then, whenever an AJAX request returns, add the result to the buffer. You could then set up a timer that, when triggered, checks the contents of the buffer; if the results are in order, it outputs them accordingly.
I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of the QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:
QFile compressedFile("temp");
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), &compressedFile);
What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to be sitting there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.
From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data.
I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way that the post() method of QNetworkAccessManager expects a QIODevice to behave. My questions are:
Is this some sort of limitation with the way that QFile works under Windows?
Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
Is this not going to work at all, and will I have to find some other way to upload my file?
Any suggestions or hints would be appreciated.
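For what it's worth, the EOF check mentioned above was along these lines (a sketch, not my exact test code):

QFile f("temp");
f.open(QIODevice::ReadOnly);
f.seek(f.size());                      // position at end of file
char buf[64];
qint64 n = f.read(buf, sizeof(buf));
// n is 0 here rather than -1, even though we are at end of file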
Update: It turns out that I had two different problems: one on the client side and one on the server side. On the client side, I had to ensure that my QFile object stayed around for the duration of the network transaction. The post() method of QNetworkAccessManager returns immediately, but the HTTP transaction isn't actually finished until later. You need to attach a slot to the finished() signal of QNetworkAccessManager to determine when the POST is actually finished. In my case it was easy enough to keep the QFile around more or less permanently, but I also attached a slot to the finished() signal in order to check for error responses from the server.
I attached the signal to the slot like this:
connect(&netManager, SIGNAL(finished(QNetworkReply*) ), this, SLOT(postFinished(QNetworkReply*) ) );
When it was time to send my file, I wrote the post code like this (note that compressedFile is a member of my class and so does not go out of scope after this code):
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl(httpDestination.getCString() ) ), &compressedFile);
The finished(QNetworkReply*) signal from QNetworkAccessManager triggers my postFinished(QNetworkReply*) method. When this happens, it's safe for me to close compressedFile and to delete the data file represented by compressedFile. For debugging purposes I also added a few printf() statements to confirm that the transaction is complete:
void CL_QtLogCompressor::postFinished(QNetworkReply* reply)
{
    QByteArray response = reply->readAll();
    printf("response: %s\n", response.data() );
    printf("reply error %d\n", reply->error() );
    reply->deleteLater();
    compressedFile.close();
    compressedFile.remove();
}
Since compressedFile isn't closed immediately and doesn't go out of scope, the QNetworkAccessManager is able to take as much time as it likes to transmit my file. Eventually the transaction is complete and my postFinished() method gets called.
My other problem (which also contributed to the behavior I was seeing where the transaction never completed) was that the Python code for my web server wasn't fielding the POST correctly, but that's outside the scope of my original Qt question.
You're creating compressedFile on the stack, and passing a pointer to it to your QNetworkRequest (and ultimately your QNetworkAccessManager). As soon as you leave the method you're in, compressedFile is going out of scope. I'm surprised it's not crashing on you, though the behavior is undefined.
You need to create the QFile on the heap:
QFile *compressedFile = new QFile("temp");
You will of course need to keep track of it and then delete it once the post has completed, or set it as the child of the QNetworkReply so that it gets destroyed when the reply gets destroyed later:
QFile *compressedFile = new QFile("temp");
compressedFile->open(QIODevice::ReadOnly);
QNetworkReply *reply = netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), compressedFile);
compressedFile->setParent(reply);
You can also schedule automatic deletion of a heap-allocated file using signals/slots
QFile* compressedFile = new QFile(...);
QNetworkReply* reply = Manager.post(...);
// This is where the trick is
connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
connect(reply, SIGNAL(destroyed()), compressedFile, SLOT(deleteLater()));
IMHO, it is much more localized and encapsulated than having to keep around your file in the outer class.
Note that you must remove the first connect() if you have your own postFinished(QNetworkReply*) slot; in that case, you must not forget to call reply->deleteLater() inside the slot for the above to work.
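In other words, that variant would look roughly like this (a sketch; MyClass stands in for whatever class owns the slot):

void MyClass::postFinished(QNetworkReply* reply)
{
    // ... inspect the server's response here ...
    reply->deleteLater();   // destroying the reply then emits destroyed(),
                            // and the second connect() deletes the file too
}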
I have created a UI thread. I'm posting messages to the UI thread, which will write data to a file.
I am using the PostThreadMessage API to post the messages to the user thread. My problem is that not all of the data I post gets written. For instance, if I post 100 items, the number actually written varies with every execution, say 3 or 98. The handler for the posted data is not getting called for every message.
CWriteToFile *m_pThread = (CWriteToFile *)AfxBeginThread(RUNTIME_CLASS(CWriteToFile));
PostThreadMessage(m_pThread->m_nThreadID, WM_WRITE_TO_FILE, (WPARAM)pData, NULL);
WaitForSingleObject(m_pThread->m_hThread, INFINITE);
The return value of PostThreadMessage indicates success.
The PostMessage family of functions can fail if the message queue is full. You should check whether or not the function call succeeds.
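For example, a guarded post might look roughly like this (a sketch; WM_WRITE_TO_FILE and pData come from the question, and the retry policy is an assumption, not a fixed recipe):

// PostThreadMessage fails with ERROR_INVALID_THREAD_ID until the target
// thread has created its message queue, and with ERROR_NOT_ENOUGH_QUOTA
// when the queue is full, so both cases are worth retrying briefly.
while (!PostThreadMessage(m_pThread->m_nThreadID, WM_WRITE_TO_FILE,
                          (WPARAM)pData, 0))
{
    DWORD err = GetLastError();
    if (err != ERROR_INVALID_THREAD_ID && err != ERROR_NOT_ENOUGH_QUOTA)
        break;   // some other failure: log it and give up
    Sleep(10);   // back off, then retry the post
}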