Firefox Extension: Unable to parse JSON data for extension storage

I have written a Firefox Extension using the WebExtension APIs. It has passed the preliminary review, but the reviewer said that he cannot proceed with the full review because, when he installs it, he gets the following error:
"Unable to parse JSON data for extension storage"
After inspecting for quite some time, I figured out that Firefox creates a file called "storage.js" in the profile folder for each extension, where it reads and writes all the local storage data for that particular extension. If the extension tries to write to this file before it has been created, the error "Unable to write JSON data to extension storage" is thrown, and if the extension tries to read from it before it has been created, the error "Unable to parse JSON data for extension storage" is thrown.
Now, my concern is how do I know for sure that the file has been created and that it can be written to or read from?
PS: This happens only when the extension has just been installed. In subsequent sessions the error does not appear, as the file is no longer missing.
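For context, even a minimal first-run use of the storage API, along the lines of the sketch below, is enough to run into this; the onInstalled hook and the "settings" key are purely illustrative:

// Illustrative first-run access; "settings" and its default value are placeholders.
browser.runtime.onInstalled.addListener(async () => {
  try {
    const stored = await browser.storage.local.get("settings");
    if (!stored.settings) {
      await browser.storage.local.set({ settings: { enabled: true } });
    }
  } catch (e) {
    // On a brand-new profile the backing storage file does not exist yet;
    // the "Unable to parse JSON data for extension storage" line is logged
    // by Firefox itself, usually without rejecting the promise seen here.
    console.warn("storage.local not ready yet:", e);
  }
});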

This seems to be a bug in the current Firefox implementation, and your assessment is spot on:
The underlying ExtStorage module will always call read before get, set, etc., and even before write and clear.
read will unconditionally try to access the underlying, extension-specific storage file, which may not exist yet for freshly installed add-ons using the storage API for the first time.
This will therefore result in the logging of one such Unable to parse JSON data for extension storage message, no matter what you do with the storage API.
Therefore triggering the message cannot be avoided.
I suggest you do the following:
Contact the editors team, requesting they re-evaluate your add-on based on:
The message in question is really only a warning (when it appears after the first access of the storage API by your add-on).
Even if the message indicated an actual error (corrupt storage), it would still not be your error, as the storage API implementation by Mozilla would need to be more resilient, and there is nothing you could do about it anyway.
The message being issued on first regular use of the storage API, regardless of which WebExtensions add-on uses that API and how, is a Mozilla bug, and not something you caused or can fix or even work around yourself.
Therefore, denying a full review just because a Mozilla bug erroneously logs a spurious message once, without any other severe effects, is... questionable.
File a bug about this so Mozilla developers can address the issue. You'll want to CC at least Bill McCloskey (:billm), since he wrote that code ;)

Related

Google Drive API returns 403 Forbidden when downloading file content

I am trying to download the content of a previous revision of a file using the Google Drive API for Ruby (Google::Apis::DriveV3::DriveService#get_revision):
require 'tempfile'

# Download the revision's content into a temp file via download_dest.
Tempfile.create('file_revision-', encoding: 'ASCII-8BIT') do |temp_file|
  drive_api_service.get_revision(file_id, revision_id, download_dest: temp_file)
  # work with the content
end
I keep getting Google::Apis::ClientError with status code 403 and reason phrase "Forbidden". The scope https://www.googleapis.com/auth/drive is enabled, and the error occurs only when I try to download the content of the revision. If I run the code without the download_dest parameter, the request succeeds without any problems. I am also able to fetch any metadata or revision, export files, upload files, etc.; the problem only occurs when I try to fetch the content, either via get_file or get_revision.
Has anyone encountered this issue?
Is there an additional permission that specifically needs to be added to allow content downloading?
I have also looked at other questions on Stack Overflow and checked every Google Drive API configuration option I could find; everything seems to be enabled.
Right now, the Google Drive API does not support content downloading for native Google Drive files (Google Docs, Sheets, etc.), so you can't download their content (current or from an old revision); you can only export it to a different MIME type.
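If exporting is acceptable, a rough sketch of that route looks like the following (shown with the Node.js googleapis client purely for illustration; the Ruby client exposes an analogous export_file method, and auth/fileId are assumed to be set up elsewhere):

// Sketch: export a native Google file (Doc, Sheet, ...) to another MIME type
// instead of trying to download its raw content.
const { google } = require('googleapis');

async function exportAsPdf(auth, fileId) {
  const drive = google.drive({ version: 'v3', auth });
  const res = await drive.files.export(
    { fileId, mimeType: 'application/pdf' },   // a conversion, not a raw download
    { responseType: 'arraybuffer' }
  );
  return Buffer.from(res.data);
}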

Google Drive API Console: Error saving Drive UI integration page

I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings of the Drive UI integration, but I can't save them.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Looking at the Network console, I can see an Internal Server Error on a POST call.)
I have tried sending feedback for months: nobody answers and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone else have the same problem?
Is there a way to receive a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then select and edit your Client ID. After that, add your production site URL to Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID on the Drive UI integration page.
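For example (with purely illustrative values), the edited Client ID would list something like https://yourapp.example.com under Authorized JavaScript origins and https://yourapp.example.com/oauth2callback under Authorized redirect URIs; that same Client ID is then pasted into the Drive UI integration page.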
While trying to get the Drive UI configured myself, I noticed a couple of failure causes (that don't produce any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail: something like https://devbox.app.com worked, but something like https://localhost:8888 does not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on, not sure about other apps), localhost doesn't work as an Open URL.
When adding the MIME types, they need to be in the format */* and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about other custom types that are not in a particular format like XML or JSON, and also not sure about wildcards.
When adding file extensions, do not include the '.', just the name of the file extension.
For the app icon, I found that the upload only failed when the image wasn't the exact required dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That got it to save for me, and I tested it with a file that had a custom MIME type (application/custom-name+xml specifically) and a custom file extension!
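Putting those gotchas together, a configuration along these lines should save (the values are purely illustrative): Open URL https://devbox.app.com, default MIME types application/custom-name+xml, default file extensions xml (no leading dot).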

Chromecast sample sender application CastHelloText-chrome ends with error when trying to get session

I have a problem with launching a Google Cast application similar to the sample CastHelloText-chrome. I slightly modified the example code for my specific purposes. The goal of this application is to send image data and show it directly on the Chromecast device.
In particular, the difference between the official sample and my code is in the format and content of the message sent by the sender application. The sender application takes a PNG image encoded as base64 and sends it through the message bus with a custom namespace. The receiver application gets this message and uses it as the data source for an HTML <img> element.
The error appears when I do these steps:
1. Reload the sender page, checking the console to see whether any device is found.
2. Submit the form by just pressing Enter in the input box (the text is ignored).
3. Now a popup from the Chromecast extension shows. Next there are two scenarios:
3a) I confirm casting by choosing a device from the list; then I get this error message in the console:
onError: {"code":"channel_error","description":"Error: Timeout","details":null}
3b) I just click somewhere else, and I get this error:
onError: {"code":"cancel","description":"User closed popup menu","details":null}
Both errors are caused by calling the function chrome.cast.requestSession in chromehellotext.html at line 161, but I don't know what's really wrong.
When I step through the sender script, I realize that the function sessionListener is never called. Something goes wrong when the code calls chrome.cast.requestSession, where the described error is raised. So I need help: either I missed the right way to use the Google Cast API, or this problem has something to do with networking issues.
The receiver application is registered in the Google Cast SDK Developer Console, and I'm testing on a device registered by its serial number. I'm using Google Chrome version 42.0.2300.2 canary (64-bit) and Chrome version 40.0.2214.111 (the current stable, I suppose). For testing I also tried turning off the Windows Firewall entirely, but with no luck.
Edit:
There were some syntax errors that caused the error message described above.
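For reference, the usual Chrome sender bootstrapping with the Cast v1 API looks roughly like the sketch below; APP_ID and the listener names are placeholders, and requestSession should only be called after initialize has succeeded:

// Rough sketch of the Cast Chrome sender (v1) setup; APP_ID stands in for the
// application ID registered in the Cast SDK Developer Console.
var session = null;

function initializeCastApi() {
  var sessionRequest = new chrome.cast.SessionRequest(APP_ID);
  var apiConfig = new chrome.cast.ApiConfig(sessionRequest, sessionListener, receiverListener);
  chrome.cast.initialize(apiConfig, onInitSuccess, onError);
}

function sessionListener(s) {
  // Only invoked once a session has actually been established.
  session = s;
}

function receiverListener(availability) {
  if (availability === chrome.cast.ReceiverAvailability.AVAILABLE) {
    console.log('Receiver found');
  }
}

function onInitSuccess() { console.log('Cast API initialized'); }
function onError(e) { console.log('onError: ' + JSON.stringify(e)); }

// Later, from a user gesture (for example submitting the form):
//   chrome.cast.requestSession(sessionListener, onError);

In the official samples, initializeCastApi is called from the window['__onGCastApiAvailable'] callback once the extension API has finished loading.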
It seems like you are trying to use the data/control channel to send an image; please don't do that. That channel is not meant for large data transfers; in fact, it cannot send anything that approaches or exceeds 64 KB. If your goal is to send images from your local machine, you would need to run a local web server on that machine and serve the images through it.
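A minimal sketch of that approach, assuming a custom namespace and a session obtained as in the sample, is to send only a URL over the message bus and let the receiver point its <img> element at it:

// Sketch: send just an image URL over the custom namespace instead of base64 data.
// NAMESPACE is a placeholder; session comes from sessionListener/requestSession.
var NAMESPACE = 'urn:x-cast:com.example.image';

function sendImageUrl(session, url) {
  session.sendMessage(NAMESPACE, JSON.stringify({ type: 'image', url: url }),
      function () { console.log('message sent'); },
      function (e) { console.log('sendMessage error: ' + JSON.stringify(e)); });
}

// On the receiver, the message handler would do something like
//   document.getElementById('image').src = parsedMessage.url;
// where the URL points at a web server reachable from the Chromecast,
// e.g. http://<your-machine-ip>:8000/image.png served from the local machine.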
For the easiest introduction, you can have a look at this tutorial; it is well explained there.
Chromecast Sender application
There is no need to maintain the session yourself.
Just add the button and enjoy casting:
mCastManager.addMediaRouterButton(mediaRoutebtn);
I found the source of my problem. There was something wrong in the receiver code (syntax and runtime errors), so I must admit my code wasn't functional. Now it is working in terms of launching the application and getting a session.
The unfortunate thing is that the error message generated by the Chromecast extension didn't match the actual error; it was confusing, since I didn't know what was really happening on the receiver side and had no way to debug the code.

Cannot re-create app due to error "This Firebase URL is not available"

I decided to try out Firebase hosting and wanted to start fresh so I deleted my one and only app, but when I tried to create a new app with the same name I was unable to due to the error:
"This Firebase URL is not available"
I can only guess this is because of caching of app names/URLs? Hopefully it will become available (unless someone else beats me to it) after some timeout? Any info from others who have experience with this issue or otherwise know the answer is appreciated!
Not sure whether this is the right place to ask, although Firebase suggests coming to Stack Overflow because, according to their website, they monitor Firebase-related questions closely.
Thanks!
Once you delete a Firebase URL, it is permanently unavailable. It cannot be recovered.
During confirmation, you should see a message that explains this in detail:
This stems from a number of abuse vectors that are possible by misappropriating a project id that the prior owner believes is deleted and could still have apps/releases in the wild attached to the defunct backend. Since compliance requires that we purge all data related to the project, including information about ownership, there's not even a way to restore one you personally deleted.

MOSS search crawl fails with "Access is denied ..."

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
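For reference, the same change can be made from an elevated command prompt with: reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f (standard reg.exe syntax), followed by a reboot.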
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Apparently you are not supposed to be able to access a site from within the server using the same URL that you use to reach it from the outside, at least in pre-SP1 MOSS. But I had been doing exactly that for about two years. MS Support tells me they don't quite understand how it was ever working. So it looks like I ran into an issue that should have been manifesting all along; I'm not sure what caused it to appear suddenly, maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.
