Firefox web extension - read local file (last downloaded file)

I'm creating a web extension, porting it from XUL. I used to be able to easily read files with:
var dJsm = Components.utils.import("resource://gre/modules/Downloads.jsm").Downloads;
var tJsm = Components.utils.import("resource://gre/modules/Task.jsm").Task;
var fuJsm = Components.utils.import("resource://gre/modules/FileUtils.jsm").FileUtils;
var nsiPromptService = Components.classes["@mozilla.org/embedcomp/prompt-service;1"].getService(Components.interfaces.nsIPromptService);
....
NetUtil.asyncFetch(file, function(inputStream, status) {
    if (!Components.isSuccessCode(status)) {
        return;
    }
    var data = NetUtil.readInputStreamToString(inputStream, inputStream.available());
    data = window.btoa(data);
    var encoded_data_to_send_via_xmlhttp = encodeURIComponent(data);
    ...
});
The code above is deprecated. I can use downloads.download() to know what the last download was, but I can NOT read the file and then get the equivalent of encoded_data_to_send_via_xmlhttp.
Also, Firefox 57 onwards disallows "Access to file:// URLs or reading files without any explicit user input", which means I have to fake a user action with a button click or something, or upload a file.
Isn't there an easy way to read the last downloaded file?

The WebExtension API won't allow extensions to read local files anymore. You could give the extension CORS privileges and read the content directly from the URL via fetch() or XMLHttpRequest() as a blob, store it in IndexedDB or memory, then encode it and send it to the server. This comes with many restrictions and limitations, such as which origins you can read from, and so forth.
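A minimal sketch of that approach, assuming the extension has host permissions for the download's origin (the function name is mine, and the last line just mirrors the question's encoded_data_to_send_via_xmlhttp):

// Assumption: the extension's manifest grants access to url's origin.
async function fetchAndEncode(url) {
    const response = await fetch(url);
    const blob = await response.blob();
    const dataUrl = await new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result);
        reader.onerror = reject;
        reader.readAsDataURL(blob);
    });
    // dataUrl looks like "data:<mime>;base64,<payload>"; keep only the payload.
    return encodeURIComponent(dataUrl.split(",")[1]);
}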
Also, this would add potentially many unneeded steps. If the purpose is, as it seems to be in the question at the moment, to share the downloaded file with a server, I would instead suggest that you obtain the last DownloadItem object, extract the URL (.url) from it, and send that URL back to the server.
This way the server can load the file directly from that URL (and encode it on the server if needed). The network load will be about the same (a little less, actually, since there is no Base64 encoding involved, which adds 33% to the size), and there is much less load on the client. The server would read the data as a binary/byte data stream; about the same as if the data was sent directly from the extension.
To obtain the last downloaded file you would do the following from a privileged script:
browser.downloads.search({
    limit: 1,
    orderBy: ["-startTime"]
})
.then(getLastDownload);

function getLastDownload(downloads) {
    if (downloads.length) {
        var url = downloads[0].url;
        // ... send url to the server and let the server fetch the data from it directly
    }
}
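For completeness, a hedged sketch of the "send url to the server" step; the endpoint below is a placeholder, not a real API:

function sendUrlToServer(url) {
    // Hypothetical endpoint; the server fetches the file from the
    // forwarded URL itself and encodes it there if needed.
    return fetch("https://example.com/api/fetch-download", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ url: url })
    });
}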

According to this Mozilla support question:

(2) Local file security

Firefox limits access from pages on web servers to pages on local disk or UNC paths. [...]

Which solution?

Use the local-filesystem-links Firefox add-on (not tested)
and/or
run a small local webserver on the client side; assuming the server is run with sufficient privileges, you can finally access any local content via http:// (but still not via file:///)
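A minimal sketch of such a local server in Node.js (port and directory are placeholders; there is no path sanitization here, so treat it as illustration only):

const http = require("http");
const fs = require("fs");
const path = require("path");

const ROOT = "/path/to/served/folder"; // placeholder

http.createServer(function(req, res) {
    // Naively map the URL path to a file under ROOT (illustration only).
    const file = path.join(ROOT, decodeURIComponent(req.url));
    fs.readFile(file, function(err, data) {
        if (err) {
            res.writeHead(404);
            res.end("Not found");
        } else {
            res.writeHead(200);
            res.end(data);
        }
    });
}).listen(8080, "127.0.0.1");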

Related

Calls From External Web Components in PWAs [duplicate]

We are running 2 servers. Server 1 hosts a react application. Server 2 hosts a webcomponent exposed as a single javascript bundle along with some assets such as images. We are dynamically loading the webcomponent Javascript hosted on Server 2 in our react app hosted on Server 1. The fact that it is a webcomponent might or might not affect the issue.
What's happening is that the webcomponent makes use of assets such as images that are located on Server 2. But when the react app loads the webcomponent, the images are not found, as it's looking for the images locally on Server 1.
We can fix this in many ways. I am looking for the simplest fix. Since Server 1 app and Server 2 apps are being developed by different teams both should be able to develop in the most natural way possible without making allowances for their app being potentially loaded by other apps.
The fixes that I could think of were:
Making use of absolute URLs to load assets - need to know the deployed location in advance.
Adding a reverse proxy to Server 1 that redirects to Server 2 whenever a particular path is hit - need to configure this, and since the React app could load hundreds of webcomponents, we would need to add a lot of reverse proxies.
Embedding all assets into the single javascript bundle on Server 2, e.g. inlining SVGs into the javascript - too limiting; if the SVGs are huge, they will make the bundle size bigger.
I was hoping to implement something like -
Since the react app knows where to hit Server 2, can't we write some clever javascript that will make the browser go to Server 2 whenever assets are requested by a Javascript loaded by Server 2.
If you download your Web Component via a classic script (<script> with default type="text/javascript") you can retrieve the URL of the loaded file by using document.currentScript.src.
If you download the file as a module script (<script> with type="module"), you can retrieve the URL by using import.meta.url.
Then parse that property to extract the base path to the Web Component.
Example of Web Component Javascript file:
( function ( path ) {
    var base = path.slice( 0, path.lastIndexOf( '/' ) )
    customElements.define( 'my-comp', class extends HTMLElement {
        constructor() {
            super()
            this.attachShadow( { mode: 'open' } )
                .innerHTML = `<img src="${base}/image.png">`
        }
    } )
} ) ( document.currentScript ? document.currentScript.src : import.meta.url )
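A hedged usage sketch from the Server 1 side (the URL is a placeholder), showing the dynamic load that makes document.currentScript.src, or import.meta.url in the module case, point at Server 2:

// Classic script: inside my-comp.js, document.currentScript.src
// will be the Server 2 URL used here.
const script = document.createElement("script");
script.src = "https://server2.example.com/components/my-comp.js"; // placeholder
document.head.appendChild(script);

// Module alternative: inside the module, import.meta.url is set instead.
// import("https://server2.example.com/components/my-comp.js");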
How about uploading all required assets to a third location, such as an AWS S3 bucket, Google Drive, Dropbox, etc.? That way those assets always have a unique, known URL, and both teams can access them independently. As long as the names remain the same, so will the URLs.

Prevent external files in src of iframe from being cached (with CloudFlare)

I am trying to make a playground like Plunker. I just noticed very odd behavior in production (with active mode in Cloudflare), whereas it works well on localhost.
Via an iframe, the playground previews index_internal.html, which may contain links to other files (e.g., .html, .js, .css). The iframe is able to interpret external links such as <script src="script.js"></script>.
So each time a user modifies their file (e.g., script.js) in the editor, my program saves the new file into a temporary folder on the server, then refreshes the iframe by iframe.src = iframe.src. It works well on localhost.
However, I realized that, in production, the browser always keeps loading the initial script.js, even though users modify it in the editor and a new version is written to the folder on the server. For example, what I see in Dev Tools ==> Network is always the initial version of script.js, whereas I can check with less that the new version of script.js is saved on the server.
Does anyone know why it is like this? And how to fix it?
Edit 1:
I tried the following, which did not work with script.js:
var iframe = document.getElementById('myiframe');
iframe.contentWindow.location.reload(true);
iframe.contentDocument.location.reload(true);
iframe.contentWindow.location.href = iframe.contentWindow.location.href
iframe.contentWindow.src = iframe.contentWindow.src
iframe.contentWindow.location.replace(iframe.contentWindow.location.href)
I tried adding a version; it worked with index_internal.html, but did not reload script.js:
var newSrc = iframe.src.split('?')[0]
iframe.src = newSrc + "?uid=" + Math.floor((Math.random() * 100000) + 1);
If I turn Cloudflare to development mode, script.js is reloaded, but I do want to keep Cloudflare in active mode.
I found it.
We can create a custom rule for caching in CloudFlare:
https://support.cloudflare.com/hc/en-us/articles/200168306-Is-there-a-tutorial-for-Page-Rules-#cache
For example, I could set Bypass as Cache Level for the folder www.mysite.com/tmp/*.
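As a complement (my own suggestion, not part of the fix above): if you control the origin server, you can also send Cache-Control headers for the temp folder, which Cloudflare respects by default. A minimal Express sketch with placeholder paths:

const express = require("express");
const app = express();

// Serve the playground's temp folder without caching;
// "/tmp" and the directory are placeholders.
app.use("/tmp", express.static("/var/www/playground/tmp", {
    setHeaders: function(res) {
        res.set("Cache-Control", "no-store");
    }
}));

app.listen(3000);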

How to convert IHTMLImgElement to image

I am automating Internet Explorer using SHDocVW.dll and MSHTML with C#, and I wish to save an image from the page to the disk (JPEG format).
I can't use the WebClient class to download the image; if I do it, I end up downloading the site's login page. I can't print the screen either, because the browser has to remain invisible during this process, running in the background.
I have tried to do the following:
IHTMLImgElement imgElement = ...;
IHTMLControlRange imgRange = ...;
imgRange.add(imgElement as IHTMLControlElement);
imgRange.execCommand( "copy", false, null );
This does nothing. I am not able to extract anything from the clipboard. Every solution I found didn't work for me.
Your WebClient approach is probably missing cookies... see "How do I log into a site with WebClient?" for an example that handles cookies.
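A hedged sketch of that idea in C# (the class name is mine; the pattern is the usual GetWebRequest override, so that the login request and the image download share one cookie jar):

using System;
using System.Net;

// Hypothetical helper: a WebClient that carries cookies across requests.
class CookieAwareWebClient : WebClient
{
    public CookieContainer Cookies { get; } = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        var http = request as HttpWebRequest;
        if (http != null)
            http.CookieContainer = Cookies;
        return request;
    }
}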
Your code looks fine, except the user has to change a security setting to enable clipboard access. If the image is cached on disk, you can dig through the WinInet cache after parsing the page for the image location.

Detect url the user is viewing in chrome/firefox/safari

How can I detect the URL the user is viewing in Chrome/Safari/Firefox via Cocoa (desktop app)?
As a side but related note, are there any security restrictions when developing a desktop app that the user will be alerted and asked if they want to allow? e.g. if the app accesses their contact information etc.
Looking for a cocoa based solution, not javascript.
I would do this as an extension, and because you would like to target Chrome, Safari, and Firefox, I'd use a cross-browser extension framework like Crossrider.
So go to crossrider.com, set up an account and create a new extension. Then open the background.js file and paste in code like this:
appAPI.ready(function($) {
    appAPI.message.addListener({channel: "notifyPageUrl"}, function(msg) {
        // Do something, like send an XHR post somewhere
        // notifying you of the pageUrl that the user visited.
        // The url is contained within msg.pageUrl
    });

    var opts = { listen: true };

    // Note: When defining the callback function, the first parameter is an object that
    // contains the page URL, and the second parameter contains the data passed
    // to the context of the callback function.
    appAPI.webRequest.onBeforeNavigate.addListener(function(details, opaqueData) {
        // Where:
        //  * details.pageUrl is the URL of the tab requesting the page
        //  * opaqueData is the data passed to the context of the callback function
        if (opaqueData.listen) {
            appAPI.message.toBackground({
                msg: details.pageUrl
            }, {channel: "notifyPageUrl"});
        }
    }, opts); // opts is the opaque parameter that is passed to the callback function
});
Then install the extension! In the example above, nothing is done with the detected pageUrl, but you can do whatever you like here: send a message to the user, restrict access using the cancel or redirectTo return parameters, log it locally using the Crossrider appAPI.db API, or send the notification elsewhere, cross-domain, to wherever you like via an XHR request from the background directly.
Hope that helps!
And to answer the question on security issues desktop-side: desktop applications have the permissions of the user under which they run. So if you are thinking of providing a desktop app that your users will run locally - say, something that detects the URLs they access by tapping into the network stream using something like WinPcap on Windows or libpcap on *nix varieties - just be aware of that, and also that libpcap and friends need access to a network card that can be placed in promiscuous mode in the first place, by the user in question.
The pcap / installed desktop app solutions are pretty invasive - most folks don't want you listening in on literally everything, and it may actually violate some security policies depending on where your users work; their network administrators may not appreciate you "sniffing", whether that is the actual purpose or not. Security guys can get real spooky, so to speak, on these kinds of topics.
The extension via Crossrider is probably the easiest and least intrusive way of accomplishing your goal if I understand the goal correctly.
One last note, you can get the current tab urls for all tabs using Crossrider's tabs API:
// retrieves the array of tabs
appAPI.tabs.getAllTabs(function(allTabInfo) {
    // Display the array
    for (var i = 0; i < allTabInfo.length; i++) {
        console.log(
            'tabId: ' + allTabInfo[i].tabId +
            ' tabUrl: ' + allTabInfo[i].tabUrl
        );
    }
});
For the tab API, refer to:
http://docs.crossrider.com/#!/api/appAPI.tabs
For the background navigation API:
http://docs.crossrider.com/#!/api/appAPI.webRequest.onBeforeNavigate
And for the messaging:
http://docs.crossrider.com/#!/api/appAPI.message
And for the appAPI.db stuff:
http://docs.crossrider.com/#!/api/appAPI.db
Have you looked into the Scripting Bridge? You could have an app that launches, say, an AppleScript which checks whether any of the well-known browsers are open and asks them which documents (URLs) they are viewing.
Note: it doesn't necessarily need to be an AppleScript; you can access the Scripting Bridge through Cocoa.
It would, however, require the browser to support it. I know Safari supports it, but I don't know if the others do.
Just as a quick note:
There are ways to do it via AppleScript, and you can easily wrap this code into NSAppleScript calls.
Here's a gist with AppleScript commands for Safari and Chrome. Firefox seems not to support Apple Events.
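A hedged sketch of that wrapping in Swift (the AppleScript line is the commonly cited command for Safari; for Chrome the equivalent is "get URL of active tab of front window"):

import Foundation

// Ask Safari for the front tab's URL via NSAppleScript.
let source = "tell application \"Safari\" to get URL of current tab of front window"
if let script = NSAppleScript(source: source) {
    var error: NSDictionary?
    let result = script.executeAndReturnError(&error)
    if error == nil {
        print(result.stringValue ?? "no URL")
    }
}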
Well, obviously this is what I had come across on Google:
chrome.tabs.getSelected(null, function(tab) {
    alert(tab.url);
});
In pure javascript we can use
alert(document.URL);
alert(window.location.href);
to get the current url.

Validation Of Uploadfield in extjs4

I want to add validation to the filefield of ExtJS 4, so that the user can only browse .png and .jpeg image files. How should I do it?
{
    xtype: 'filefield',
    id: 'photoUpload',
    buttonOnly: true,
    buttonText: 'Photo'
}
I think it is important to understand how file upload works, to save yourself from trouble in the future...
For security reasons, the following applies:
Browsers cannot access the file system unless the user has explicitly clicked on an upload field.
The browser has minimal access to the file being uploaded; in particular, your JS code may be able to see the file name (the browser has to display it in the field), but nothing else (on most browsers, the path it reports is not the real one).
The upload process itself happens in these steps:
The user clicks on an upload field, initiating the file select dialog.
The browser implements access to the file system through the dialog, allowing the user to select a file.
Upon OK click, the browser sends the file to the server.
The server places the file in its temp directory (configured per server).
Once upload is complete, the upload script on the server is called with the file details, and that script will have full access to the uploaded file.
The last step is the only point where you have full access to the file details, including the real actual name, its size, and its content.
Anything the browser gives javascript is browser dependent. Even the file name will vary between browsers; although all the browsers I know do keep the actual file name (but not the real actual path), you cannot rely on this to work with future versions. The reason for this is that the file name is displayed on the client side.
So the recommendation is this:
Do all file upload checks on the server side.
Again, you may get away with the file name on the JS client side, particularly if you know and can test what browsers your clients will use, but I'd strongly recommend doing this test on the server.
The last thing you have to remember is that users might upload a file ending with .png where the file itself is a .zip with the extension changed - so to really confirm that the file is a .png you need to actually look into the file data, which only the server can do.
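A hedged server-side sketch in Node.js of that last check (the PNG signature bytes are standard; the file path is a placeholder):

const fs = require("fs");

// Every valid PNG file starts with these 8 bytes: 89 50 4E 47 0D 0A 1A 0A.
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function isPng(filePath) {
    const fd = fs.openSync(filePath, "r");
    const header = Buffer.alloc(8);
    fs.readSync(fd, header, 0, 8, 0);
    fs.closeSync(fd);
    return header.equals(PNG_SIGNATURE);
}

// Usage with a placeholder path: isPng("/tmp/upload_123") is true only for real PNG data.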
{
    xtype: 'filefield',
    id: 'photoUpload',
    buttonOnly: true,
    vtype: 'fileUpload',
    buttonText: 'Photo'
}
And the VType which I have used:
Ext.apply(Ext.form.VTypes, {
    fileUpload: function(val, field) {
        var fileName = /^.*\.(gif|png|bmp|jpg|jpeg)$/i;
        return fileName.test(val);
    },
    fileUploadText: 'Image must be in .gif,.png,.bmp,.jpg,.jpeg format'
});
Try the following snippet in your 'filefield' xtype config:
regex: /\.(gif|jpg|jpeg|png)$/i,
regexText: 'Only image files allowed for upload',
msgTarget: 'under'
