I'm wondering where Windows Explorer gets its error messages from. My situation is quite specific, in that I'm using a custom WebDAV implementation, but I think the question could apply more broadly (to any mapped drive).
So let's say I've got a mapped drive to my WebDAV share. I open the mapped drive window in Windows Explorer and from there I try to create a new folder. In my custom WebDAV implementation, I'm looking for the MKCOL WebDAV verb (which creates folders), and in this case I want to prevent the folder from being created, so I'm returning a 400 (Bad Request) as the HTTP response.
The problem is, no matter how I handle this, Windows Explorer will pop up an error message that says:
File Too Large. The file '<%1 NULL:NameDest>' is too large for the
destination file system.
What file is too large when the request is attempting to create a folder?
What I'm trying to figure out is where Windows Explorer got that message. I can see all the details of how I'm handling the response using Fiddler (for example, I can return custom exception details in the 400 response), so how does Explorer connect my 400 to the message above? Is there any way I can format the HTTP response so that Windows Explorer will take the details I provide and use them in the error message?
Explorer treats ERROR_INVALID_PARAMETER as "file too large" since that's how some file systems report that error condition.
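If the goal is simply a less misleading dialog, one hedged workaround is to answer the MKCOL with 403 (Forbidden) instead of 400; the WebDAV redirector generally surfaces that as an access-denied style error rather than ERROR_INVALID_PARAMETER, and in neither case does Explorer appear to read the response body. A minimal sketch, assuming the custom WebDAV back end is a Java servlet (the question doesn't say what it's built on, and WebDavServlet is a hypothetical name):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical front controller for the custom WebDAV share.
public class WebDavServlet extends HttpServlet {

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        if ("MKCOL".equals(req.getMethod())) {
            // Folder creation is not allowed on this share. The status code is
            // what the Windows WebDAV redirector translates into a Win32 error;
            // the message string below shows up in tools like Fiddler but not
            // in the Explorer dialog.
            resp.sendError(HttpServletResponse.SC_FORBIDDEN,
                    "Creating folders is disabled on this share");
            return;
        }
        super.service(req, resp); // let GET/HEAD/etc. fall through to the defaults
    }
}

In other words, the custom exception details in the response body are useful for debugging in Fiddler, but Explorer only ever sees the translated Win32 error code and picks its canned message from that.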
Please start the command prompt (cmd.exe) with admin rights [1] and run sfc [2]:
sfc.exe /scannow
I am using VB6 and playing with msinet.ocx these days. All is fine: I can upload a file, download a file, get the size of a file on a remote FTP server, etc.
Yesterday, I intentionally used a wrong path in an "Inet1.Execute ... PUT" command. I was expecting to get an error message or error code (Inet1.ResponseCode / Inet1.ResponseInfo) in the StateChanged event routine, but everything goes through without any error, as if the path were valid. Of course nothing happens on the server side; no file appears.
Is this the expected Inet behaviour, or am I missing something?
Thanks,
I have a webapp in production that interacts with Google Drive through the Google Drive API.
I need to change some settings in the Drive integration, but I can't save them.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Spying on the Network console, I can see an Internal Server Error on a POST call.)
I have been trying to send feedback for months: nobody answers, and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone else have the same problem?
Is there a way to receive a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then select and edit your Client ID. After that, add your production site URL to the Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID on the Drive UI integration page.
While trying to get the Drive UI configured myself, I noticed a couple of failure modes (that don't produce any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail: something like https://devbox.app.com worked, but something like https://localhost:8888 does not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on; not sure about other apps), localhost doesn't work as an Open URL.
When adding the MIME types, each needs to be in the format */* (type/subtype) and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about custom types that aren't based on a standard format like XML or JSON, and I'm not sure about wildcards either.
When adding file extensions, do not include the '.', just the name of the extension.
For the app icon, the upload only failed when the image wasn't the exact required dimensions; I actually ended up editing some icons in Photoshop to adjust the pixel dimensions as a quick workaround during development.
That worked for me to get it to save, and I tested it with a file that had a custom MIME type (application/custom-name+xml, specifically) and a custom file extension!
I've got a basic working Java app that uploads some data to a Google Sheets file of mine.
I pushed it to a git repository, pulled it onto my other computer, and there it fails with a 401:
Exception in thread "main" com.google.api.client.auth.oauth2.TokenResponseException: 401 Unauthorized
at com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
at com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:307)
at com.google.api.client.auth.oauth2.Credential.executeRefreshToken(Credential.java:570)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.api.client.auth.oauth2.Credential.intercept(Credential.java:217)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:868)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at App.main(App.java:71)
Any idea what could be different between the two machines? My understanding is that if I'm using the same client_secret.json, it should be irrelevant which machine I'm on?
UPDATE 1:
OK, some extra info: I just tried my project on my work laptop and it worked fine! On first run it opened a browser window and asked me which Google account I wanted to use; I chose the correct one, and that worked. On the laptop where it didn't work, I wasn't given that option (as far as I remember), so how can I reset the Google account that has been used to authenticate?
I saw this on my command line:
Please open the following address in your browser:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=blah-notputtingmyrealid.apps.googleusercontent.com&redirect_uri=http://localhost:42299/Callback&response_type=code&scope=https://www.googleapis.com/auth/spreadsheets
Attempting to open that address in the default browser now...
Since it's working on your previous computer, the issue might be the location of your client_secret.json. If you check the Java Quickstart setup, there's a step where you need to download the JSON file and place it in your working directory. Since you're on a new machine, that file is now missing.
g. Click the file_download (Download JSON) button to the right of the
client ID.
h. Move this file to your working directory and rename it
client_secret.json.
Or the access token has expired.
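For reference, this is roughly what the quickstart-style authorization looks like and where the relevant files live. This is a minimal sketch, not the asker's actual code: the class name AuthHelper and the tokens directory are illustrative, and the rest follows the Google Sheets Java Quickstart pattern:

import java.io.File;
import java.io.InputStreamReader;
import java.util.Collections;
import java.util.List;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.extensions.java6.auth.oauth2.AuthorizationCodeInstalledApp;
import com.google.api.client.extensions.jetty.auth.oauth2.LocalServerReceiver;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.store.FileDataStoreFactory;
import com.google.api.services.sheets.v4.SheetsScopes;

public class AuthHelper {
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
    private static final List<String> SCOPES =
            Collections.singletonList(SheetsScopes.SPREADSHEETS);

    public static Credential authorize() throws Exception {
        NetHttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();

        // client_secret.json has to be present on *this* machine; it will not
        // come along with a git pull if it is (sensibly) excluded from the repo.
        GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(
                JSON_FACTORY,
                new InputStreamReader(
                        AuthHelper.class.getResourceAsStream("/client_secret.json")));

        // The granted refresh token is persisted as <dataStoreDir>/StoredCredential.
        // Deleting that file (or the whole directory) forces the browser consent
        // screen to appear again, which is how you pick a different Google account.
        File dataStoreDir = new File("tokens");
        GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
                httpTransport, JSON_FACTORY, clientSecrets, SCOPES)
                .setDataStoreFactory(new FileDataStoreFactory(dataStoreDir))
                .setAccessType("offline")
                .build();

        return new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver())
                .authorize("user");
    }
}

So on the machine that fails, the 401 usually comes down to one of the two files above: a missing client_secret.json, or a stored credential that was copied over and no longer matches (or has been revoked or expired). Removing the data-store directory and re-running the app brings back the account chooser.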
I have written a Firefox extension using the WebExtensions APIs. It has passed the preliminary review, but the reviewer said that he cannot proceed with the full review because, when he installs it, he gets the following error:
"Unable to parse JSON data for extension storage"
After inspecting for quite some time, I figured out that Firefox creates a file called "storage.js" in the profile folder for each extension, where it writes and reads all of the local storage data for that particular extension. If the extension tries to write to this file before it has been created, the error "Unable to write JSON data to extension storage" is thrown, and if the extension code tries to read from this file before it has been created, the error "Unable to parse JSON data for extension storage" is thrown.
Now, my concern is: how do I know for sure that the file has been created and that it can be written to or read from?
PS: This happens only when the extension has just been installed. In subsequent sessions the error doesn't appear, as the file is no longer missing.
This seems to be a bug in the current Firefox implementation, and your assessment is spot on:
The underlying ExtStorage module will always call read before get, set, etc., and even before write and clear.
read will unconditionally try to access the underlying, extension-specific storage file, which may not exist yet for freshly installed add-ons using the storage API for the first time.
This will therefore result in the logging of one such Unable to parse JSON data for extension storage message, no matter what you do with the storage API.
Therefore triggering the message cannot be avoided.
I suggest you do the following:
Contact the editors team, requesting they re-evaluate your add-on based on:
The message in question is really only a warning (when it appears after the first access of the storage API by your add-on).
Even if the message were an actual error (i.e. the storage is corrupt), it would still not be your error, as the storage API implementation by Mozilla needs to be more resilient in that case, and there is nothing you could do about it anyway.
The message being issued on first regular use of the storage API, regardless of which WebExtensions add-on uses that API and in what way, is a Mozilla bug, and not something you caused, can fix yourself, or can even work around.
Therefore, denying a full review just because a Mozilla bug erroneously logs a spurious message once, without any other severe effects, is... questionable.
File a bug about this so Mozilla developers can address the issue. You'll want to CC at least Bill McCloskey (:billm), since he wrote that code. ;)
Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to fetch it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, a lot of implementations rely on it not being stateless. I could configure my webserver to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in; if not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the web page in Chrome and look at the Network view in the developer tools (Ctrl+Shift+J) to see all the headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed. :-)
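If it does turn out to be cookie- or header-dependent, a quick experiment is to replay the request with the browser-like headers copied from that Network view. A rough sketch (written in Java with java.net.HttpURLConnection; the header values shown are placeholders to be replaced with whatever the browser actually sent, and on .NET's HttpWebRequest the same idea maps to the UserAgent, Accept and Referer properties):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ImageFetch {
    public static void main(String[] args) throws IOException {
        String url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        // Placeholder headers -- copy the real ones from the browser's Network view,
        // including any Cookie header if the image turns out to be session-gated.
        conn.setRequestProperty("User-Agent", "Mozilla/5.0");
        conn.setRequestProperty("Accept", "image/*,*/*;q=0.8");
        conn.setRequestProperty("Referer", "https://plumdistrict.com/");

        int status = conn.getResponseCode();
        System.out.println("HTTP " + status + ", Content-Type: " + conn.getContentType());

        if (status == HttpURLConnection.HTTP_OK) {
            try (InputStream in = conn.getInputStream()) {
                // The URL has no extension, so choose one based on the Content-Type
                // (or on the file's magic bytes, as the other answer suggests).
                Files.copy(in, Paths.get("original.jpg"), StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}

If the request still comes back 403 with the full browser header set, the block is more likely tied to the session cookies or to the bucket's own access rules than to anything a simple header change can fix.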
Well, it looks like it's being generated from a script (possibly being retrieved from a database). The server should be sending a file/content type to go along with that... but it doesn't seem to be, which I believe is a violation of standards.
My Linux box knows full well that that's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.