I am having an intermittent problem using Azure blob storage where my images do not load consistently. Sometimes when I load a web page the browser will not show an image, but if I refresh the page it loads correctly.
When the image doesn't load, the browser shows the default broken-image placeholder.
If I check the hyperlink behind the broken-image placeholder, I find that it is the same as when the image loads successfully, except that the Shared Access Signature is different.
Sometimes the same image will fail to load for one link but load successfully for another link, even on the same page and in the same page load. The only difference in the URLs is the Shared Access Signature.
Here is my code to build the URL with the shared access signature:
// Get a reference to the blob (file) that is to be downloaded
blob = blobContainer.GetBlobReference(blobURL.ToString());
// Get a shared access signature to download the file from Azure blob storage (valid for 60 minutes from now)
signature = blob.GetSharedAccessSignature(new SharedAccessPolicy()
{
    SharedAccessStartTime = null,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(60),
    Permissions = SharedAccessPermissions.Read
});
// Append the signature query string to the blob / file URL that is to be downloaded
downloadURL = string.Format("{0}{1}", blob.Uri.AbsoluteUri, signature);
This is the final HTML image link on the web page, i.e. what I see when I view the page source in the browser:
<img alt="Profile Picture" src="https://mystorageaccount2.blob.core.windows.net/abcdefg1-hi23-40b5-86de-a20b568f5626/1601/1234d664d1b74ce1aebf4403e5b74af7.jpg?se=2015-10-31T11%3A38%3A39Z&sr=b&sp=r&sig=SaiUToJg%5Ab3zcdef8EeOq84urHf6HQqS%2BAFt1dEQMNI%3D">
Has anyone else seen this problem? Any recommendations on what I might be doing wrong?
I suspect this could be related to the expiry period which you have set on your image blob's shared access signature (SAS). Is there any good reason why you need to set the SAS expiry to 1 minute when you have set its permission to read-only?
I want to get the image base64 from the CachedNetworkImageProvider widget without re-downloading it, so I can share it in my app.
Or is there a way to share the image as a URL that could be saved to the device in Flutter?
I used the flutter_cache_manager plugin to get the cached data through this function:
var file = await DefaultCacheManager().getSingleFile(url);
It works fine for cached images, and if the URL is new it downloads the file and then returns the file object.
I'm creating a web extension and porting from XUL. I used to be able to easily read files with:
var dJsm = Components.utils.import("resource://gre/modules/Downloads.jsm").Downloads;
var tJsm = Components.utils.import("resource://gre/modules/Task.jsm").Task;
var fuJsm = Components.utils.import("resource://gre/modules/FileUtils.jsm").FileUtils;
var nsiPromptService = Components.classes["@mozilla.org/embedcomp/prompt-service;1"].getService(Components.interfaces.nsIPromptService);
....
NetUtil.asyncFetch(file, function(inputStream, status) {
    if (!Components.isSuccessCode(status)) {
        return;
    }
    var data = NetUtil.readInputStreamToString(inputStream, inputStream.available());
    data = window.btoa(data);
    var encoded_data_to_send_via_xmlhttp = encodeURIComponent(data);
    ...
});
The above will be deprecated.
I can use downloads.download() to know what the last download was, but I can NOT read the file and then get the equivalent of encoded_data_to_send_via_xmlhttp.
Also, from Firefox 57 onwards,
Access to file:// URLs or reading files without any explicit user input
is no longer allowed, which means I would have to fake a user action with a button click or something, or upload a file.
Isn't there an easy way to read the last downloaded file?
The WebExtension API won't allow extensions to read local files anymore. You could let the extension get cross-origin (CORS) privileges and read the content directly from the URL via fetch() or XMLHttpRequest() as a Blob, store it directly in IndexedDB or in memory, then encode it and send it to the server. This comes with many restrictions and limitations, such as which origins you can read from, and so forth.
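For completeness, a rough sketch of what that fetch-as-Blob route could look like (assuming the extension has host permissions for the download URL in its manifest; downloadUrl and the upload endpoint below are made up for illustration):
// Sketch only - downloadUrl is assumed to be known, e.g. from downloads.search()
fetch(downloadUrl)
    .then(function (response) { return response.blob(); })
    .then(function (blob) {
        var reader = new FileReader();
        reader.onload = function () {
            // reader.result is a data: URI; strip the prefix to get plain base64
            var base64 = reader.result.split(',')[1];
            // Send it to your server (hypothetical endpoint)
            fetch('https://example.com/upload', { method: 'POST', body: base64 });
        };
        reader.readAsDataURL(blob);
    });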
Also, this approach would add potentially many unneeded steps. If the purpose is, as it seems to be in the question, to share the downloaded file with a server, I would instead suggest that you obtain the last DownloadItem object, extract the URL (.url) from it, and send that URL back to the server.
This way the server can load the file directly from that URL (and encode it on the server if needed). The network load will be about the same (a little less, actually, since there is no Base64 encoding involved, which adds about 33% to the size), and there is much less load on the client. The server would read the data as a binary/byte stream, about the same as if the data was sent directly from the extension.
To obtain the last downloaded file you would do the following from a privileged script:
browser.downloads.search({
    limit: 1,
    orderBy: ["-startTime"]
})
.then(getLastDownload);

function getLastDownload(downloads) {
    if (downloads.length) {
        var url = downloads[0].url;
        // ... send url to the server and let the server fetch the data from it directly
    }
}
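The "send url to the server" step in the comment above could then be something as small as this (the endpoint is hypothetical):
// Inside getLastDownload(), where the comment says "send url to the server"
fetch('https://example.com/api/fetch-remote', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: url })
});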
According to this Mozilla support question:
(2) Local file security
Firefox limits access from pages on web servers to pages on local disk or UNC paths. [...]
Which solution?
Use the local-filesystem-links Firefox add-on (not tested)
and/or
run a small local web server on the client side; supposing the server is run with sufficient privileges, you can finally access any local content via http:// (but still not via file:///).
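If you go down the local web server route, even a few lines of Node.js are enough. This is only a sketch; the port and the downloads directory are assumptions you would adjust:
// serve-downloads.js - minimal read-only server over a local folder
var http = require('http');
var fs = require('fs');
var path = require('path');

var ROOT = '/path/to/Downloads'; // assumption: point this at the real downloads folder

http.createServer(function (req, res) {
    // Naive path handling: fine for a local sketch, not hardened for production use
    var file = path.join(ROOT, decodeURIComponent(req.url));
    fs.readFile(file, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
        } else {
            res.writeHead(200);
            res.end(data);
        }
    });
}).listen(8080, '127.0.0.1');
// Files are then reachable as http://127.0.0.1:8080/<filename>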
I am automating Internet Explorer using SHDocVW.dll and MSHTML with C#, and I wish to save an image from the page to the disk (JPEG format).
I can't use the WebClient class to download the image; if I do, I end up downloading the site's login page instead. I can't take a screenshot either, because the browser has to remain invisible during this process, running in the background.
I have tried to do the following:
IHTMLImgElement imgElement = ...;
IHTMLControlRange imgRange = ...;
imgRange.add(imgElement as IHTMLControlElement);
imgRange.execCommand( "copy", false, null );
This does nothing; I am not able to extract anything from the clipboard. None of the solutions I found worked for me.
Your WebClient approach is probably missing cookies; see How do I log into a site with WebClient? for an example that handles cookies.
Your code looks fine, except that the user has to change a security setting to enable clipboard access. If the image is cached on disk, you can dig through the WinInet cache after parsing the page for the image location.
I was able to store png images in MongoDB by using a method similar to the following code:
// server.js
q.desc.data = fs.readFileSync(__dirname + '/logo.png');
q.desc.contentType = 'image/png';
However, I could not make the image show up in a web page controlled by AngularJS. The image data can be retrieved inside the page, but the following line shows up as text "", presumably because the BSON format is different from the raw data.
// core.js (on the client Angular side)
$scope.q.desc.data = '<img src="data:image/png;base64,' + $.base64.encode($scope.q.desc.data) + '" />';
The only way I have found so far to display the image in a browser is to send it standalone from the backend:
// server.js
res.contentType(doc.desc.contentType);
res.send(doc.desc.data);
But how do we embed this image inside a web page? The web browser cannot download/cache the image directly because the image doesn't have a URL; it lives inside MongoDB. Thanks!
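One possible approach, sketched below under the assumption of an Express-style route and the doc.desc structure shown above (the Question model and the route path are made up for illustration): encode the stored Buffer to base64 on the server and hand the client a complete data URI it can bind straight into an <img> src.
// server.js (illustrative sketch only)
app.get('/api/questions/:id', function (req, res) {
    Question.findById(req.params.id, function (err, doc) {
        if (err || !doc) return res.send(404);
        // doc.desc.data is assumed to come back as a Node Buffer here
        res.json({
            desc: {
                contentType: doc.desc.contentType,
                dataUri: 'data:' + doc.desc.contentType + ';base64,' +
                         doc.desc.data.toString('base64')
            }
        });
    });
});
On the AngularJS side the template can then simply bind it, e.g. <img ng-src="{{q.desc.dataUri}}">, instead of trying to base64-encode the raw BSON data in the browser.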
I want to add validation to a filefield in ExtJS 4, so that the user can only browse .png and .jpeg image files. How should I do it?
{
    xtype: 'filefield',
    id: 'photoUpload',
    buttonOnly: true,
    buttonText: 'Photo'
}
I think it is important to understand how file upload works, to save yourself from trouble in the future...
For security reasons, the following applies:
Browsers cannot access the file system unless the user has explicitly clicked on an upload field.
The browser has minimal access to the file being uploaded; in particular, your JS code may be able to see the file name (the browser has to display it in the field), but nothing else (the path reported by most browsers is not the real one).
The upload process itself happens in these steps:
The user clicks on an upload field, initiating the file select dialog.
The browser implements access to the file system through the dialog, allowing the user to select a file.
Upon OK click, the browser sends the file to the server.
The server places the file in its temp directory (configured per server).
Once upload is complete, the upload script on the server is called with the file details, and that script will have full access to the uploaded file.
The last step is the only point where you have full access to the file details, including the real actual name, its size, and its content.
Anything the browser gives JavaScript is browser dependent. Even the file name may vary between browsers; although all the browsers I know of do keep the actual file name (but not the real path), you cannot rely on this to work in future versions. The reason for this is that the file name is displayed on the client side.
So the recommendation is this:
Do all file upload checks on the server side.
Again, you may get away with checking the file name on the JS client side, particularly if you know and can test which browsers your clients will use, but I'd strongly recommend doing this check on the server.
The last thing you have to remember is that users might upload a file ending with .png when the file itself is actually a .zip with the extension changed, so to really confirm that the file is a .png you need to actually look into the file data, which only the server can do.
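As a minimal server-side sketch (assuming a Node.js backend purely for illustration; the same idea applies in any server language), you would check the file's magic bytes rather than trusting its extension:
// check-upload.js - illustrative only, not tied to any particular upload framework
var fs = require('fs');

// Every PNG file starts with these eight bytes
var PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function looksLikePng(tempFilePath) {
    var header = Buffer.alloc(PNG_SIGNATURE.length);
    var fd = fs.openSync(tempFilePath, 'r');
    fs.readSync(fd, header, 0, header.length, 0);
    fs.closeSync(fd);
    return header.equals(PNG_SIGNATURE);
}

// e.g. in the upload handler, after the file has landed in the server's temp dir:
// if (!looksLikePng(uploadedFile.path)) { /* reject the upload */ }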
{
    xtype: 'filefield',
    id: 'photoUpload',
    buttonOnly: true,
    vtype: 'fileUpload',
    buttonText: 'Photo'
}
And the VType which I have used:
Ext.apply(Ext.form.VTypes, {
    fileUpload: function(val, field) {
        var fileName = /^.*\.(gif|png|bmp|jpg|jpeg)$/i;
        return fileName.test(val);
    },
    fileUploadText: 'Image must be in .gif,.png,.bmp,.jpg,.jpeg format'
});
Try the following snippet in your 'filefield' xtype config:
regex: /\.(gif|jpg|jpeg|png)$/i,
regexText: 'Only image files allowed for upload',
msgTarget: 'under'