408 error when sending large email through Gmail API with libcurl

I am trying to send an email through Gmail from a C++ app, using libcurl to access the REST API.
Most of the time this works flawlessly, but with big emails (>2MB) I randomly get an oddly formatted "Error 408 (Request Timeout)!!1" response. The bigger the email, the more likely the error seems to be: at 2MB it happens about 60% of the time, and with 7MB almost no mails get through.
I am wondering how best to debug this issue and what could cause it, as libcurl doesn't report any timeout and appears to finish sending the email data successfully. The error response arrives about a minute after libcurl finishes sending, so it looks like the server is waiting for more data. The amount I send matches what I declare in the Content-Length header, and I always test with exactly the same email data (which randomly works/fails).
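For what it's worth, this is roughly how I capture what libcurl puts on the wire (a minimal sketch; names are illustrative and error handling is omitted):

#include <curl/curl.h>
#include <cstdio>

// Print outgoing/incoming headers and the size of each body chunk libcurl
// sends, to check whether all Content-Length bytes go out before the 408.
static int debug_cb(CURL*, curl_infotype type, char* data, size_t size, void*)
{
    if (type == CURLINFO_HEADER_OUT || type == CURLINFO_HEADER_IN)
        std::fwrite(data, 1, size, stderr);
    else if (type == CURLINFO_DATA_OUT)
        std::fprintf(stderr, "[sent %zu body bytes]\n", size);
    return 0;
}

// after curl_easy_init():
// curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_cb);
// curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);  // required for the callback to fire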
The response is not JSON (despite my "Accept: application/json" header), but somewhat minimalistic HTML (missing body and head tags) that reads as if it was not intended for end users:
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 408 (Request Timeout)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>408.</b> <ins>That’s an error.</ins>
<p>Your client has taken too long to issue its request. <ins>That’s all we know.</ins>
I am sending my message/rfc822 data to the endpoint at
https://www.googleapis.com/upload/gmail/v1/users/me/messages/send
and have tried both the simple and the resumable upload flow.
The only way I have found to improve the situation is to reduce the amount of data that libcurl sends at a time, but this seems to only make the issue less likely, not avoid it completely.
Sending the HTTP request 1000 bytes at a time makes the issue almost disappear...
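For reference, the throttling lives in the read callback. A simplified sketch of what I mean (names like Upload and rfc822_data are illustrative; auth and header setup are omitted):

#include <curl/curl.h>
#include <algorithm>
#include <cstring>
#include <string>

struct Upload {
    const std::string* body;   // the full message/rfc822 payload
    size_t offset = 0;         // how much has been handed to libcurl so far
};

// Hand libcurl at most 1000 bytes per call instead of filling its whole buffer.
static size_t read_cb(char* dest, size_t size, size_t nmemb, void* userp)
{
    auto* up = static_cast<Upload*>(userp);
    size_t chunk = std::min<size_t>(size * nmemb, 1000);
    size_t left  = up->body->size() - up->offset;
    size_t n     = std::min(chunk, left);
    std::memcpy(dest, up->body->data() + up->offset, n);
    up->offset += n;
    return n;  // returning 0 tells libcurl the upload is complete
}

// setup (error handling omitted):
// Upload upload{&rfc822_data};
// curl_easy_setopt(curl, CURLOPT_URL,
//     "https://www.googleapis.com/upload/gmail/v1/users/me/messages/send");
// curl_easy_setopt(curl, CURLOPT_POST, 1L);
// curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
// curl_easy_setopt(curl, CURLOPT_READDATA, &upload);
// curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE,
//     (curl_off_t)rfc822_data.size());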

Related

varnish cache real (body) size vs content-length

Sometimes, when an object is not in the cache, Varnish will send an object whose real size is smaller than the size declared in the Content-Length header, for example only part of a picture.
Is it possible to construct such a rule...?
if (beresp.http.content-length != real_object_body_size) { return(retry); }
I wrote a script that tests the same request against Varnish and the backend. It compares the downloaded size with the Content-Length header. The backend, unlike Varnish, sometimes ends with a timeout, but the size is always fine. The problem is rare but annoying, because the objects are set to a long user cache time.
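(In libcurl terms, the comparison amounts to something like the sketch below; this is an illustration, not the actual script.)

#include <curl/curl.h>

// Fetch a URL, discard the body, then compare the byte count actually
// received with the size declared in Content-Length (libcurl 7.55+).
static size_t discard(char*, size_t size, size_t nmemb, void*) { return size * nmemb; }

static bool length_matches(const char* url)
{
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
    curl_easy_perform(curl);  // error handling omitted

    curl_off_t declared = 0, received = 0;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &declared);
    curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T, &received);
    curl_easy_cleanup(curl);
    return declared < 0 || declared == received;  // -1 means no Content-Length was sent
}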
After a few days I can say that the problem was a combination of occasional backend problems and Varnish's behaviour of sending a chunked transfer when the object is not in the cache.
Thank you @Thijs Feryn for pointing this out. I knew about that property, but until I read it here I didn't connect it to my problem at all.
It seems that "set beresp.do_stream = false;" solved the problem.

ChromeCast - Stream Calls Failing when Stalled for long time

I'm attempting to play a live stream on ChromeCast. The stream is cast fine and playback starts appropriately.
However, when I play the stream for longer (somewhere between 2 and 15 minutes), the player stops playing and I get MediaStatus.IDLE_REASON_ERROR in my RemoteMediaClient.Callback.
Looking at the console logs from the ChromeCast, I see that 3-4 calls have failed. Here are the logs:
14:50:26.931 GET https://... 0 ()
14:50:27.624 GET https://... 0 ()
14:50:28.201 GET https://... 0 ()
14:50:29.351 GET https://... 0 ()
14:50:29.947 media_player.js:64 [1381.837s] [cast.player.api.Host] error: cast.player.api.ErrorCode.NETWORK/3126000
Looking at Cast MediaPlayer.ErrorCode, error 312.* is:
Failed to retrieve the media (bitrated) playlist m3u8 file with three retries.
Developers need to validate that their playlists are indeed available. It could be the case that a user cannot reach the playlist as well.
I checked: the playlist was available. So I thought perhaps the server wasn't responding, and looked at the response logs of the network calls.
[screenshot: timing of a successful request]
[screenshot: timing of a stalled request]
Note that the stall time far exceeds the usual stall time.
ChromeCast isn't actually making these calls at all; the requests are simply stalled for a long time until they are cancelled. All the successful requests are stalled for less than 14ms (mostly under 2ms).
The Network Analysis Timing Breakdown provides three reasons for stalling:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
While I do believe the first one should not be the case, the latter two could be. However, in both cases I believe the fault lies with cast.player.
Am I doing something wrong?
Has anyone else faced the same issue? Is there any way to either fix it or come up with a workaround?

AJAX request abort on large query string Elixir Plug

I am sending 2 large query strings in AJAX requests, which are basically Base64 encodings of JPEG(s). When the camera is not a high-resolution one, the AJAX request doesn't abort.
At first I thought it was an Nginx issue, because I was getting a "request entity too large" error. I resolved that, then changed my Plug to:
plug Plug.Parsers,
  parsers: [
    :urlencoded,
    {:multipart, length: 20_000_000},
    :json
  ],
  pass: ["*/*"],
  query_string_length: 1_000_000,
  json_decoder: Poison
After defining query_string_length I no longer get errors like the above, but the AJAX request still aborts.
The Base64-encoded string is 546,591 bytes at most.
I have also tried increasing the AJAX request timeout to a very large timespan, but it still fails, and I don't have any clue where the problem is right now.
How can we receive long strings in Plug?
Some answers on StackOverflow about this issue, where people used AJAX and PHP, suggest changing post_max_size. How can we do that in Elixir Plug?
As you are sending an AJAX request with JSON data, you should put the length config of the JSON parser in the plug:
plug Plug.Parsers,
  parsers: [
    :urlencoded,
    {:multipart, length: 20_000_000},
    {:json, length: 80_000_000}
  ],
  pass: ["*/*"],
  json_decoder: Poison
I suppose you will not put the data in the query string of the POST, so query_string_length (the maximum allowed size for query strings) is not needed.
---Original answer---
For Plug versions around 1.4.3, which have no query_string_length option:
When you post the data as a string, you are using Plug.Parsers.
If you are willing to process larger requests, please give a :length
to Plug.Parsers.
You should change query_string_length: 1_000_000 to length: 20_000_000.

Firefox ignoring response header content-range and playing only the sample sent

I have built an audio stream for mp3 files, and each time the client hits the audio it receives something like this:
[screenshot of the request/response headers; the relevant values are quoted in the answer below]
But it just plays a 1-minute sample instead of the 120 minutes.
What am I doing wrong here?
Not 100% sure because you didn't provide code or an example stream to test, but your handling of HTTP range requests looks broken.
In your example request, the client sends Range: bytes=0-, and your server responds with a 1MiB response:
Content-Length: 1048576 (aka. 1 MiB)
Content-Range: 0-1048575/...
This is wrong; the client did not request that! It requested bytes=0-, meaning all data from position 0 to the end of the entire stream (see the HTTP/1.1 RFC), i.e. a response equal to one without any Range header. (IIRC, Firefox still sends Range: bytes=0- to detect whether the server handles ranges in the first place.)
This, combined with the Content-Length, leads the client (Firefox) to think the whole resource is just 1 MiB in size, instead of the real size. I'd imagine the first 1 MiB of your test stream comes out as 1:06 of audio.
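For illustration, assuming the full file were 126,000,000 bytes (a made-up figure), a consistent reply to Range: bytes=0- would cover the whole resource:

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-125999999/126000000
Content-Length: 126000000

whereas a 1 MiB answer would only be appropriate for a request like Range: bytes=0-1048575:

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1048575/126000000
Content-Length: 1048576

Note the bytes unit and the real total length after the slash, which is what lets the client see how big the resource actually is.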
PS: The Content-Duration header (RFC 3803) is something browsers don't usually implement at all and just ignore.
Just an idea: did you try responding with some other HTTP status codes, like '308 Resume Incomplete', or '503 Service Temporarily Unavailable' plus 'Retry-After: 2', or '413 Request Entity Too Large' plus 'Retry-After: 2'?

COSM feed receiving updates but graph is flatlined at zero

context:
My first project with COSM is recording datapoints from my electric meter. When I look at the graph of the feed, it's flatlined at zero even though the datapoints appear to be correctly received.
Any idea what's wrong, or things I should look for in order to debug it?
more info:
When I debug my feed, I see it receiving approximately eight API requests per minute, as expected.
Here's an instance of a received datapoint as viewed in COSM's 'debug feed' interface. Note in particular that the response is 200 (OK), and the request body has a sensible timestamp and a non-zero value:
200 POST /api/v2/feeds/129722/datastreams/1/datapoints 06-05-2013 | 08:16:54 +0000
Request Headers
Version HTTP/1.0
Host api.cosm.com
X-Request-Start 1367828214422267
X-Apikey <expunged>
Accept-Encoding gzip, deflate, compress
Accept */*
User-Agent python-requests/1.2.0 CPython/2.7.3 Linux/3.6.11+
Origin
Request Body
{"at": "2013-05-06T08:16:57", "value": 164.0}
Response Headers
X-Request-Id 245ee3ca6bd99efd156bff2416404c33f4bb7f0f
Cache-Control max-age=0
Content-Type application/json; charset=utf-8
Content-Length 0
Response Body
[No Body]
update
Even though the docs specify that JSON is the default, I explicitly added a ".json" to the POST URL (/api/v2/feeds/129722/datastreams/1/datapoints.json) but that didn't appear to make any difference.
update 2
I enclosed the "value" value in strings, so the request body now reads (for example):
{"at": "2013-05-06T15:37:06", "value": "187.0"}
Still behaving the same: I see updates in the debug view, but only zeros are reported in the graph view.
update 3
I tried looking at the data using the API rather than the COSM-supplied graph. My guess is that the datapoints are not being stored for some reason (despite the 200 OK return status). If I put this URL in the web browser:
http://api.cosm.com/v2/feeds/129722.json?interval=0
I get this in response:
{"id":129722,
"title":"Rainforest Automation RAVEn",
"private":"false",
"tags":["power"],
"feed":"https://api.cosm.com/v2/feeds/129722.json",
"status":"frozen",
"updated":"2013-05-06T05:07:30.169344Z",
"created":"2013-05-06T00:16:56.701456Z",
"creator":"https://cosm.com/users/fearless_fool",
"version":"1.0.0",
"datastreams":[{"id":"1",
"current_value":"0",
"at":"2013-05-06T05:07:29.982986Z",
"max_value":"0.0",
"min_value":"0.0",
"unit":{"type":"derivedSI","symbol":"W","label":"watt"}}],
"location":{"disposition":"fixed","exposure":"indoor","domain":"physical"}
}
Note that the status is listed as "frozen" (last update received > 15 minutes ago) despite the fact that the debug tool is showing seven or eight updates per minute. Where are my datapoints going?
Resolved. As @Calum at cosm.com support kindly pointed out, I wasn't sending a properly formed request. I was sending the following JSON:
{"at": "2013-05-06T08:16:57", "value": 164.0}
when I should have been sending:
{
  "datapoints": [
    {"at": "2013-05-06T08:16:57", "value": 164.0}
  ]
}
Calum also points out that I could batch up several points at a time to cut down the number of transactions. I'll get to that, but for now, suffice it to say that fixing the body of the request made everything start working.
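For example, a batched body would presumably look like this (timestamps and values made up):

{
  "datapoints": [
    {"at": "2013-05-06T08:16:57", "value": 164.0},
    {"at": "2013-05-06T08:17:03", "value": 158.0},
    {"at": "2013-05-06T08:17:09", "value": 171.0}
  ]
}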
That sounds like a bug in the graphs; I have seen something very similar a few times.
I often use the Cosm Feed Viewer Chrome extension, which displays the latest values in real time using the WebSocket endpoint.
It should not be too hard to put together custom graphs with Rickshaw and CosmJS.
