I am using libcurl (via the easy interface) for this.
I am going to leave out most of the code for now, as I do not think it is necessary for explaining the issue. I am using C++ and libcurl to retrieve a serialized Google protobuf message from a server. The serialized protobuf contains an integer and a static array of objects. The compiled protobuf struct looks like this:
typedef struct _ExperimentRunner_ExperimentList_RES {
    int32_t pollFrequency;
    pb_size_t activeExperiments_count;
    ExperimentRunner_ExperimentInfo activeExperiments[5];
/* @@protoc_insertion_point(struct:ExperimentRunner_ExperimentList_RES) */
} ExperimentRunner_ExperimentList_RES;
When tested, everything works fine: the protobuf is retrieved from the server and parsed correctly. The GET request is for data, not a file, from the server.
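For context, the decode step looks roughly like this; a sketch assuming the nanopb runtime (which the pb_size_t field suggests) and the CURL_DATA_BUFF capture struct shown further down:

// Hypothetical decode of the bytes captured by the write callback below.
pb_istream_t stream = pb_istream_from_buffer((const pb_byte_t*)curlData.buff,
                                             curlData.amountWriten);
ExperimentRunner_ExperimentList_RES res = ExperimentRunner_ExperimentList_RES_init_zero;
if (!pb_decode(&stream, ExperimentRunner_ExperimentList_RES_fields, &res))
{
    // decode failed; PB_GET_ERROR(&stream) describes why
}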
The code is set up so that the experiment list is retrieved at every poll interval. The issue is the following scenario:
1. The app starts and retrieves the experiment list, which currently has one entry.
2. I remove the entry from the server database and wait for the app to re-poll the server.
The app sees the server response still containing the entry that was removed. I confirm it has been removed by doing a curl from the command line.
There seems to be an issue with the curl library caching the data result from the server and returning it when I make a request, since when I restart the application it gets the correct data. I have implemented CURLOPT_DEBUGFUNCTION and see the old data being returned by the request when I know the server has deleted it from the database. Any suggestions as to what options or caching might be going on to cause this?
This ended up being a dumb mistake on my part, caused by misunderstanding how libcurl works (I think). I was using CURLOPT_WRITEFUNCTION to capture the data in a char array, but I was not fully clearing this buffer between requests: I assumed libcurl would terminate the newly received data with a "\0", invalidating the old data. That assumption is untrue. Once I cleared the entire buffer before the next request, everything worked great. Below is the data capture function, in case it helps:
size_t CURL_RECIEVE_DATA_BUFF(void *buffer, size_t size, size_t nmemb, void *userp)
{
    CURL_DATA_BUFF* curlData = (CURL_DATA_BUFF*)userp;
    if (curlData)
    {
        // Reject the chunk if it would overflow the fixed-size buffer.
        if (curlData->amountWriten + size * nmemb > curlData->maxSize)
        {
            LogIt.Add(Error, "%s:%s Server sending more data than expected, max is: %d bytes\n",
                      __FILE__, __FUNCTION__, curlData->maxSize);
            return 0; // returning less than size * nmemb makes libcurl abort the transfer
        }
        memcpy(&(curlData->buff[curlData->amountWriten]), buffer, size * nmemb);
        curlData->amountWriten += size * nmemb;
        return size * nmemb;
    }
    return 0;
}
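For completeness, this is roughly how I now reset the buffer before each transfer (a sketch; CURL_DATA_BUFF and its fields are from the function above, the rest of the wiring is illustrative):

// Clear the capture buffer so stale bytes from a previous (longer)
// response can never survive into the next parse.
memset(curlData.buff, 0, curlData.maxSize);
curlData.amountWriten = 0;

curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CURL_RECIEVE_DATA_BUFF);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &curlData);
curl_easy_perform(curl);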
I have a protobuf message like this
message ImgReply {
bytes data = 1;
}
And I want to assign its contents with the set_allocated method:
string *buf = new string();

GRPC_CALL_BACK_FUNCTION() {
    .....
    reply->set_allocated_data(buf);
    return Status::OK;
}
Now each time the gRPC function is called, buf is released automatically. I would like to reuse it so that I do not need to reallocate the memory each time. I tried calling reply->release_data(), but that just clears the data field and the client receives no data at all. How can I reuse this buf variable and keep protobuf from deleting it automatically?
The gRPC C++ sync API doesn't provide any feature for custom memory allocation. The callback API has been planned with a message allocator feature, but that hasn't been de-experimentalized yet, so it isn't ready to use publicly. That should be available within the next month or two.
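In the meantime, a common workaround is to keep a reusable scratch buffer in your service object and copy it into the message's own string, so only the message's string may reallocate; a minimal sketch (ImgReply is from the question, while the method name, request type, FillImageBytes, and the scratch_ member are illustrative):

// Hypothetical sync-API handler; scratch_ is a std::string member that
// keeps its capacity across calls, avoiding a fresh allocation per request.
Status GetImage(ServerContext* ctx, const ImgRequest* req, ImgReply* reply) {
    FillImageBytes(&scratch_);  // illustrative: produce the payload
    // Copy into the string owned by the message; protobuf manages its
    // lifetime, so nothing is deleted out from under you.
    reply->mutable_data()->assign(scratch_.data(), scratch_.size());
    return Status::OK;
}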
I'm trying to use Schannel SSPI to send/receive data over SSL connection, using sockets.
I have some questions on DecryptMessage()
1) MSDN says that sometimes the application will receive data from the remote party, then successfully decrypt it using DecryptMessage() but the output data buffer will be empty. This is normal and the application must be able to deal with it. (As I understand, "empty" means SecBuffer::cbBuffer==0)
How should I deal with it? I'm trying to create a (secure) srecv() function, a replacement for the Winsock recv() function. Therefore I cannot just return 0, because the calling application will think that the remote party has closed the connection. Should I try to receive another encrypted block from the connection and try to decrypt it?
2) And another question. After successfully decrypting data with DecryptMessage (return value = SEC_E_OK), I'm trying to find a SECBUFFER_DATA type buffer in the output buffers.
PSecBuffer pDataBuf = NULL;
for (int i = 1; i < 4; ++i) { // should I always start with 1?
    if (NULL == pDataBuf && SECBUFFER_DATA == buffers[i].BufferType) {
        pDataBuf = buffers + i;
    }
}
What if I don't find a data buffer? Should I consider it as an error? Or should I again try to receive an encrypted block to decrypt it? (I saw several examples. In one of them they were retrying to receive data, in another one they were reporting an error)
It appears that you are attempting to replicate a blocking recv() function for your Schannel secure socket implementation. In that case, you have no choice but to return 0. An alternative would be to pass a callback to your implementation and only call it when SecBuffer.cbBuffer > 0.
Yes, always start at index 1 for checking the remaining buffers. Your for loop is missing a check for SECBUFFER_EXTRA. If there is extra data and the cbBuffer size > 0, then you need to call DecryptMessage again with the extra data placed into index 0. If your srecv is blocking and you don't implement a callback function (for decrypted data sent to application layer), then you will have to append the results of DecryptMessage for each SECBUFFER_DATA received in the loop before returning the aggregate to the calling application.
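A sketch of that post-DecryptMessage() scan, including the SECBUFFER_EXTRA check (the four-buffer layout matches the question's loop; AppendPlaintext, recvBuf, and bytesBuffered are illustrative names):

// After DecryptMessage() returns SEC_E_OK, scan the output buffers,
// starting at index 1 as described above.
PSecBuffer pDataBuf  = NULL;
PSecBuffer pExtraBuf = NULL;
for (int i = 1; i < 4; ++i) {
    if (NULL == pDataBuf && SECBUFFER_DATA == buffers[i].BufferType)
        pDataBuf = buffers + i;   // decrypted plaintext (may be 0 bytes)
    if (NULL == pExtraBuf && SECBUFFER_EXTRA == buffers[i].BufferType)
        pExtraBuf = buffers + i;  // start of the next, still-encrypted record
}
if (pDataBuf && pDataBuf->cbBuffer > 0)
    AppendPlaintext(pDataBuf->pvBuffer, pDataBuf->cbBuffer);
if (pExtraBuf && pExtraBuf->cbBuffer > 0) {
    // The extra bytes belong to the next TLS record: move them to the
    // front of the receive buffer and call DecryptMessage() again.
    memmove(recvBuf, pExtraBuf->pvBuffer, pExtraBuf->cbBuffer);
    bytesBuffered = pExtraBuf->cbBuffer;
}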
I am using G-WAN (v4.3.14) and facing a strange issue. I am trying to pass some long text in the query string. I have figured out that G-WAN does not allow me to pass query parameters beyond a total request size of 537 characters.
It responds with a 400 Bad Request
An example string is:
http://xxx.xxx.xxx.xxx:yyyy/?t.cpp&c=DbE9kdOJGMm9yr7aypGlQBY1a9rZuiaMDAAnTJSbOBRJZo45YHbpAO5VENLa6IcmlSadZnTucpKBKb0E0G15pFHCgB4oNxqQ3m1K0CX8K15RQkawb8MThuoIHKp02vk9WwJFU5NkBJtwu80onudOkwWPUiGxKKcJiSwJJNcgDY1LQIJ1GnvgRGgomthoxppsZ1cl7zxIf5CjWggzsbUnADDTq5W4pBXveVnugOBHryqdTylhI4tudeae2jUnswezxtQM1qKG3ezGkM2dN68R7YxpCEfZ2N1nXggUkYdGn6em7veq5G5LpTVrdexn0fSozGbeNfHXS2OLjWGhffcEdGeu1dFKnFxNac6IETbIiVvTjv55wcZI7WBiTA0r60KJkUZYNn59W6XhnAwTk0zCYN2Rq8LraOjHzjXHjcyL9Sk6jw4D9K0wWLsiZHDfTOlnPr9jYp2SesyHlUJsCHPiHOR4fCBVwQMwh5YOddcpl2Kbr6CjSjWabaac
The code in my C++ file is:
# include "gwan.h"
# include <iostream>
using namespace std;
int main (int argc, char * argv[])
{
if(argc)
{
cout<<argv[0];
xbuf_cat(get_reply(argv), argv[0]);
}
else
{
xbuf_cat(get_reply(argv), "pass something to me to see it on your screen.");
}
return 200;
}
Can someone help me to make GWAN accept a query parameter of 1000 characters or more?
The error with G-WAN v4.5+ is "414: Request URI too large".
Many production HTTP servers disable PUT/POST Entities to avoid abuse.
G-WAN first used a limit slightly larger than 4KiB, but most requests do not need so much room so we have made it possible for developers to decide.
The example below (see entity_size.c for a working example) shows how to modify the G-WAN (server-global) PUT/POST Entity size limit from a servlet, but this can also be done in the init() or main() calls of a connection handler, and from the gwan/init.c script available in v4.10+:
u32 *max_entity_size = (u32*)get_env(argv, MAX_ENTITY_SIZE);
*max_entity_size = 200 * 1024; // new size in bytes (200 KiB)
You can change the limit at any time (even while a given user is connected) by using IP filtering in a connection handler.
Your servlets will decide what to do with the entity anyway so you can dispose or store on disk or do real-time processing, see the entity.c example.
Beyond this, there are a few things to keep in mind:
to avoid DoS attacks letting everybody send huge entities to your server (in the GBs), you might enlarge the request size for authorized users only;
when dealing with requests without a PUT/POST Entity, you may also dynamically enlarge the read buffer by allocating more memory to the READ_XBUF using xbuf_growto(), as sketched below.
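Something along these lines, presumably from a connection handler or servlet (the 8 KiB figure is only an example):

// Enlarge the buffer G-WAN reads the request into (READ_XBUF and
// xbuf_growto() are declared in gwan.h).
xbuf_t *read_xbuf = (xbuf_t*)get_env(argv, READ_XBUF);
xbuf_growto(read_xbuf, 8 * 1024); // example: grow to 8 KiB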
Now you know how to accept requests of any length. Make sure you do it only when needed.
You may want to check other related values like:
KALIVE_TMO // time-out in ms for HTTP keep-alives
REQUEST_TMO // time-out in ms waiting for request
MIN_SEND_SPEED // send rate in bytes/sec (if < close)
MIN_READ_SPEED // read rate in bytes/sec (if < close)
All of them can be set up from the gwan/init.c script, before any request can hit the server. This can also be done from G-WAN handlers and servlets, as shown in the examples cited above.
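Following the same get_env() pattern as the MAX_ENTITY_SIZE example above, adjusting one of these would look like this sketch (the value is illustrative):

// In gwan/init.c: raise the keep-alive time-out to 5 seconds.
u32 *kalive_tmo = (u32*)get_env(argv, KALIVE_TMO);
*kalive_tmo = 5 * 1000; // time-out in ms (example value)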
Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code; not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and debugging reveals that the writes go successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this: when I send a 1 from the client, I receive nothing from the server via echo, even though the server is definitely writing to the buffer using evbuffer_add (I have also tried this using bufferevent_write_buffer).
From the client, when I send a 2, I then receive the 1 from the previous send. It's like my writes are being cached. I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own; I have even put in delays. Replacing evbuffer_add with send() works perfectly every time.
Most likely you are affected by the Nagle algorithm: it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example of how to disable it:
int flag = 1;
int result = setsockopt(sock,          /* socket affected */
                        IPPROTO_TCP,   /* set option at TCP level */
                        TCP_NODELAY,   /* name of option */
                        (char *)&flag, /* the cast is historical cruft */
                        sizeof(int));  /* length of option value */
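If the connection is wrapped in a libevent bufferevent, you can fetch the underlying descriptor first; a sketch (bev would be the bufferevent from your accept callback):

#include <event2/bufferevent.h>
#include <event2/util.h>
#include <netinet/tcp.h> // TCP_NODELAY (winsock2.h on Windows)

// Disable Nagle on the socket behind an existing bufferevent.
evutil_socket_t fd = bufferevent_getfd(bev);
int flag = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char *)&flag, sizeof(flag));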
I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of the QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:
QFile compressedFile("temp");
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), &compressedFile);
What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to be sitting there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.
From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data.
I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way that the post() method of QNetworkAccessManager expects a QIODevice to behave. My questions are:
Is this some sort of limitation with the way that QFile works under Windows?
Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
Is this not going to work at all, and will I have to find some other way to upload my file?
Any suggestions or hints would be appreciated.
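(For reference, the EOF test mentioned above was essentially this sketch; "temp" is the file from the earlier example:)

QFile f("temp");
f.open(QIODevice::ReadOnly);
char chunk[4096];
qint64 n;
while ((n = f.read(chunk, sizeof(chunk))) > 0) {
    // consume the data
}
// Here n comes back as 0 at end-of-file, not -1; -1 is reserved for errors.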
Update: It turns out that I had two different problems: one on the client side and one on the server side. On the client side, I had to ensure that my QFile object stayed around for the duration of the network transaction. The post() method of QNetworkAccessManager returns immediately but isn't actually finished immediately. You need to attach a slot to the finished() signal of QNetworkAccessManager to determine when the POST is actually finished. In my case it was easy enough to keep the QFile around more or less permanently, but I also attached a slot to the finished() signal in order to check for error responses from the server.
I attached the signal to the slot like this:
connect(&netManager, SIGNAL(finished(QNetworkReply*)), this, SLOT(postFinished(QNetworkReply*)));
When it was time to send my file, I wrote the post code like this (note that compressedFile is a member of my class and so does not go out of scope after this code):
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl(httpDestination.getCString())), &compressedFile);
The finished(QNetworkReply*) signal from QNetworkAccessManager triggers my postFinished(QNetworkReply*) method. When this happens, it's safe for me to close compressedFile and to delete the data file represented by compressedFile. For debugging purposes I also added a few printf() statements to confirm that the transaction is complete:
void CL_QtLogCompressor::postFinished(QNetworkReply* reply)
{
    QByteArray response = reply->readAll();
    printf("response: %s\n", response.data());
    printf("reply error %d\n", reply->error());
    reply->deleteLater();
    compressedFile.close();
    compressedFile.remove();
}
Since compressedFile isn't closed immediately and doesn't go out of scope, the QNetworkAccessManager is able to take as much time as it likes to transmit my file. Eventually the transaction is complete and my postFinished() method gets called.
My other problem (which also contributed to the behavior I was seeing where the transaction never completed) was that the Python code for my web server wasn't fielding the POST correctly, but that's outside the scope of my original Qt question.
You're creating compressedFile on the stack, and passing a pointer to it to your QNetworkRequest (and ultimately your QNetworkAccessManager). As soon as you leave the method you're in, compressedFile is going out of scope. I'm surprised it's not crashing on you, though the behavior is undefined.
You need to create the QFile on the heap:
QFile *compressedFile = new QFile("temp");
You will of course need to keep track of it and then delete it once the post has completed, or set it as the child of the QNetworkReply so that it gets destroyed when the reply gets destroyed later:
QFile *compressedFile = new QFile("temp");
compressedFile->open(QIODevice::ReadOnly);
QNetworkReply *reply = netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload")), compressedFile);
compressedFile->setParent(reply);
You can also schedule automatic deletion of a heap-allocated file using signals/slots:
QFile* compressedFile = new QFile(...);
QNetworkReply* reply = Manager.post(...);
// This is where the trick is
connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
connect(reply, SIGNAL(destroyed()), compressedFile, SLOT(deleteLater()));
IMHO, it is much more localized and encapsulated than having to keep around your file in the outer class.
Note that if you use your own postFinished(QNetworkReply*) slot, you must remove the first connect() above and instead call reply->deleteLater() inside the slot for this to work.
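Put together, that variant would look roughly like this sketch (the class and member names are illustrative):

// Keep only the destroyed() connection; delete the reply from the slot.
connect(&netManager, SIGNAL(finished(QNetworkReply*)),
        this, SLOT(postFinished(QNetworkReply*)));
connect(reply, SIGNAL(destroyed()), compressedFile, SLOT(deleteLater()));

void MyClass::postFinished(QNetworkReply* reply)
{
    // ... inspect the response ...
    reply->deleteLater(); // emits destroyed() later, deleting the file too
}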