Xively server error when issuing <img> URL - cosm

I've used Pachube/Cosm for non-commercial home sensor data collection and have been quite happy with the service. Now I'm getting a Xively image indicating a server error when issuing this from a web page:
http://api.pachube.com/v2/feeds/9709/datastreams/0.png?width=730&height=250&colour=%23f75a22&duration=1day&legend=Temperature&title=Back%20Porch%20Temp&show_axis_labels=true&detailed_grid=true&scale=auto&min=40&max=90&timezone=Pacific%20Time%20(US%20&%20Canada)
I'm also seeing only 6 hours of my data in the charts. After reading the statement below, I would expect to still have access to my full (unlimited) history and to see it charted as requested in the URL above.
What am I missing?
From the Xively blog:
http://blog.xively.com/2013/05/15/accounts-data-devices-an-explanation/
"To reiterate, if you were a Cosm user your ‘legacy feeds’ are different to Development Devices, Production Devices and Channels, and therefore are not subject to these limitations, or having any history truncated. So, existing Cosm users have both ‘legacy feeds’ with unlimited history as well as up to 30 Channels of Production Devices with unlimited history."

I believe there's a problem with URL encoding of the Time Zone parameter.
Try:
http://api.xively.com/v2/feeds/9709/datastreams/0.png?width=730&height=250&colour=%23f75a22&duration=1day&legend=Temperature&title=Back%20Porch%20Temp&show_axis_labels=true&detailed_grid=true&scale=auto&min=40&max=90&timezone=Pacific%20Time%20(US%20%26%20Canada)
Ampersand encoding: Pacific%20Time%20(US%20&%20Canada) versus Pacific%20Time%20(US%20%26%20Canada). The raw & inside the timezone value terminates the parameter early, so the server receives a truncated timezone plus a stray Canada) parameter, which is a plausible cause of the server error.
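To avoid this class of problem entirely, encode each query value programmatically rather than by hand. A minimal C# sketch (the value is the one from the URL above; only a few of the query parameters are shown):

using System;

// Percent-encode the whole value so the embedded '&' becomes %26
// and can no longer terminate the timezone parameter early.
string timezone = Uri.EscapeDataString("Pacific Time (US & Canada)");
string url = "http://api.xively.com/v2/feeds/9709/datastreams/0.png"
    + "?width=730&height=250&duration=1day&timezone=" + timezone;
// The space and '&' are escaped (%20 and %26); whether the parentheses
// are escaped too varies by runtime, but both forms are valid in a query string.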

Related

Google Drive API Console: Error saving Drive UI integration page

I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings in Drive interaction but I can't save.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Inspecting the Network console shows an Internal Server Error on a POST call.)
I have been sending feedback for months: nobody answers and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone have the same problem?
Is there a way to get a reply from Google? Is there a workaround?
Thank you.
I think the problem is the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs and edit your Client ID. Add your production site URL to both Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID in the Drive UI integration page.
While getting the Drive UI configured myself, I noticed a couple of failure modes that don't come with any specific error message:
When adding an Open URL, it has to be a valid domain. I tried to test it out with localhost, to no avail: something like https://devbox.app.com worked, but https://localhost:8888 did not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on; not sure about other apps), localhost doesn't work as an Open URL.
When adding MIME types, they need to be in the format */* and can include custom types like application/custom+xml and application/custom-name+json. I'm not sure about custom types that don't follow a particular format like xml or json, nor about wildcards.
When adding file extensions, do not include the '.'; just the name of the extension.
The app icon only failed to upload when the image wasn't the exact required dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That worked for me to get it to save and I tested it with a file that had a custom mimeType (application/custom-name+xml specifically) and custom file extension!

Firefox Extension : Unable to parse JSON data for extension storage

I have written a Firefox extension using the WebExtension APIs. It has passed the preliminary review, but the reviewer said that he cannot proceed with the full review because, when he installs it, he gets the following error -
"Unable to parse JSON data for extension storage"
After inspecting for quite some time, I figured out that Firefox creates a file called "storage.js" in the profile folder for each extension, where it writes and reads all the local storage data for that particular extension. If the extension tries to write to this file before it has been created, the error "Unable to write JSON data to extension storage" is thrown; if the extension tries to read from it before it has been created, the error "Unable to parse JSON data for extension storage" is thrown.
Now, my concern is how do I know for sure that the file has been created and that it can be written to or read from?
PS: This happens only when the extension has just been installed. In subsequent sessions the error doesn't appear, as the file is no longer missing.
This seems to be a bug in the current Firefox implementation, and your assessment is spot on:
The underlying ExtStorage module will always call read before get, set, etc., and even before write and clear.
read unconditionally tries to access the underlying, extension-specific storage file, which may not exist yet for freshly installed add-ons using the storage API for the first time.
This will therefore result in the logging of one such Unable to parse JSON data for extension storage message, no matter what you do with the storage API.
Therefore triggering the message cannot be avoided.
I suggest you do the following:
Contact the editors team, requesting they re-evaluate your add-on based on:
The message in question is really only a warning (when it appears after the first access of the storage API by your add-on).
Even if the message indicated an actual error (corrupt storage), it would still not be your error: the storage API implementation by Mozilla would need to be more resilient, and there is nothing you could do about it anyway.
The message being issued on first regular use of the storage API, regardless of which WebExtensions add-on uses that API and how, is a Mozilla bug, not something you caused or can fix yourself or even work around.
Therefore denying a full review just because a Mozilla bug erroneously logs a spurious message once, without any other severe effects, is... questionable.
File a bug about this so Mozilla developers can address the issue. You'll want to CC at least Bill McCloskey (:billm), since he wrote that code ;)

YouTube iframe API official example gives errors

As of 2013-04-16 (WEST), the official getting-started example of the YouTube iframe API produces error messages when run in Chrome 25 on WinXP/SP3.
I am accessing the example hosted on a web server, i.e. not via the local file system.
Am I doing something wrong? Is the example / the API broken? Can you reproduce the issue?
This error message has been around for quite a while -- as this Google Group post explains, it's due to a bug in WebKit that has gone unresolved for four years:
https://groups.google.com/forum/?fromgroups=#!topic/youtube-api-gdata/VKGl4ahBGyk
It shouldn't affect anything -- it certainly hasn't for our projects, and visiting the demo URL you posted, it doesn't look like it affects the API functionality there either (i.e. in that demo, the video still stops after 6 seconds, as the code specifies). There was an SO question a while back suggesting it might affect analytics:
youtube "Unable to post message to..." causing video analytics to not be tracked
But it was never confirmed as being relevant.
It will be interesting to see if Chrome's move to the new Blink fork of WebKit leads to the bug itself being resolved.

How can I scrape an image that doesn't have an extension?

Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I request it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, a lot of implementations rely on state anyway. I could configure my webserver to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in; if not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the page in Chrome and look at the Network view in the developer tools (CTRL+SHIFT+J) to see all the headers supplied to the website. You may even need to perform a full navigation in the same session before you are allowed to see the image; this is certainly the case in many web applications I have developed :-)
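If headers turn out to be the difference, you can mimic them from your C# code. A minimal sketch, assuming the 403 is caused by a missing browser-like header (the User-Agent and Referer values here are illustrative, not known requirements of that server):

using System.IO;
using System.Net;

string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// Present ourselves like a browser; keep any session cookies around.
request.UserAgent = "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.11";
request.Referer = "https://plumdistrict.com/";
request.CookieContainer = new CookieContainer();
using (WebResponse response = request.GetResponse())
using (Stream body = response.GetResponseStream())
using (FileStream file = File.Create("original.jpg"))
{
    body.CopyTo(file); // Stream.CopyTo requires .NET 4; otherwise loop over a byte buffer
}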
Well, it looks like the image is being generated by a script (possibly retrieved from a database). The server should be sending a Content-Type header along with it... but it doesn't seem to be, which I believe is a violation of the standard.
My Linux box knows full well that it's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same on Windows?
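I don't know of a built-in Windows tool, but sniffing the header bytes yourself takes only a few lines of C#. A minimal sketch covering the common image signatures:

using System.IO;

// Identify an image by its leading "magic" bytes instead of its file extension.
static string SniffImageType(string path)
{
    byte[] h = new byte[4];
    using (FileStream f = File.OpenRead(path))
        f.Read(h, 0, h.Length);
    if (h[0] == 0xFF && h[1] == 0xD8 && h[2] == 0xFF) return "jpeg";                  // FF D8 FF
    if (h[0] == 0x89 && h[1] == 0x50 && h[2] == 0x4E && h[3] == 0x47) return "png";   // .PNG
    if (h[0] == 0x47 && h[1] == 0x49 && h[2] == 0x46) return "gif";                   // "GIF"
    return "unknown";
}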
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.

Team test: Failing load. Request failed: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF

The folks in the QA department use Visual Studio Team Test (2008, IIRC) to run load tests against our web application.
The latest set of tests have failed on several pages. The error reported is
Request failed: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF
Searching for this with Google yields quite a few results. It would appear that this error message is generated by the .NET Framework WebRequest class (i.e. it is not a Visual Studio-specific message). The most useful result is this one, which details my exact problem and how to suppress the error.
But of course, I want to get to the bottom of why this error occurs in the first place. Here are some more facts:
This error never used to occur when the tests were run against an older version of the web app. The web app host OS and web server (Win 2003 and IIS 6) are identical in both cases.
Not all the pages generate this error - only some.
The only significant change to these pages (that I can think of) is that they now use some AJAX, whereas before they did not (IIRC).
In order to narrow down the problem, I created the simplest page I could that replicates the problem. Luckily, that was not too hard. I then inspected the bytes of the header using Fiddler, but I could not find an occurrence of a CR (0x0D) that was not followed by a LF (0x0A).
The raw HTTP response (stored from Fiddler by saving the response bytes, so its encoding should not have been altered during the save) is here as text if you don't believe me!
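For what it's worth, the eyeball check can also be scripted; a small C# sketch that scans the saved response bytes for a bare CR (the file name is illustrative):

using System;
using System.IO;

byte[] bytes = File.ReadAllBytes("response.bin");
for (int i = 0; i < bytes.Length; i++)
{
    // Flag any CR (0x0D) that is not immediately followed by LF (0x0A).
    if (bytes[i] == 0x0D && (i + 1 >= bytes.Length || bytes[i + 1] != 0x0A))
        Console.WriteLine("Bare CR at offset " + i);
}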
So now I am left thinking that the supposed error might be a false alarm. Does anyone else have experience of this, or can anyone help shed light?
This is definitely not a false alarm - I've been getting this error in my app a lot while trying to communicate with Facebook API.
I've just stumbled upon this response from Steven Cheng - http://www.velocityreviews.com/forums/t302174-why-do-i-get-the-server-committed-a-protocol-violation.html - and let me quote him:
From your description, you're using the HttpWebRequest component to send an HTTP request to an external web resource from your ASP.NET web application. However, you always get the "The server committed a protocol violation. Section=ResponseStatusLine" error unless you set the following section in the web.config file:
<system.net>
  <settings>
    <httpWebRequest useUnsafeHeaderParsing="true" />
  </settings>
</system.net>
And you're wondering about the cause of this behavior, correct?
As for this issue, I've done some research and found that the problem is actually caused by the strict HTTP header parsing/validation of the HttpWebRequest component. According to the HTTP specification (HTTP/1.1), HTTP header keys specifically should not include any spaces in their names. However, some web servers do not fully respect the standards they're meant to. Applications running on the .NET Framework and making heavy use of HTTP requests usually use the HttpWebRequest class, which encapsulates everything a web-oriented developer could dream of. With all the recent issues related to security, the HttpWebRequest class provides a self-protection mechanism that prevents it from accepting HTTP responses which do not fully conform to the specification. The common case is a space in the "Content-Length" header key: the server actually returns a "Content Length" key which, assuming no spaces are allowed, is treated as an attack vector (an HTTP response splitting attack), thus triggering an "HTTP protocol violation" exception.
I will try this right now and post the results later.
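As an aside: if editing web.config isn't practical (for instance in a harness you don't control), a commonly cited workaround flips the same switch at runtime via reflection. It pokes at non-public framework internals, so treat it as a fragile sketch rather than a supported API:

using System;
using System.Reflection;

// Runtime equivalent of <httpWebRequest useUnsafeHeaderParsing="true" />.
static bool SetUseUnsafeHeaderParsing(bool value)
{
    Assembly netAssembly = Assembly.GetAssembly(typeof(System.Net.Configuration.SettingsSection));
    Type settingsType = netAssembly.GetType("System.Net.Configuration.SettingsSectionInternal");
    if (settingsType == null) return false;
    object section = settingsType.InvokeMember("Section",
        BindingFlags.Static | BindingFlags.GetProperty | BindingFlags.NonPublic,
        null, null, new object[0]);
    FieldInfo field = settingsType.GetField("useUnsafeHeaderParsing",
        BindingFlags.NonPublic | BindingFlags.Instance);
    if (section == null || field == null) return false;
    field.SetValue(section, value);
    return true;
}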
