I'm updating some old CasperJS code that downloads a CSV report. The web interface recently changed. The old version had a link tag whose href I could grab and pass to casper.download() to retrieve the file.
However, the new version appears to be an Angular app and the download button triggers a handleDownload() function that does something under the hood, which results in a popup dialog in my browser.
Is there some way to intercept this dialog or otherwise extract the URL of the actual file?
A few options:
You can see what URL is requested when you click the button (F12 > Network in Chrome). You could then try to deduce the download URL and fetch it directly.
Look at what handleDownload() does; the client-side logic should be available to you, so you may be able to pull the URL (or the data itself) from there.
Beyond that, it's hard to help without seeing the code.
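For the first option, here is a minimal sketch of how you might capture the requested URL from within CasperJS itself (the button selector and the ".csv" pattern are assumptions; adjust them to your page):
var casper = require('casper').create();
var csvUrl = null;

// Remember any request that looks like the CSV download.
casper.on('resource.requested', function(requestData) {
    if (/\.csv/i.test(requestData.url)) {
        csvUrl = requestData.url;
    }
});

casper.start('http://example.com/report', function() {
    // Assumed selector for the Angular download button.
    this.click('#download-btn');
});

casper.then(function() {
    if (csvUrl) {
        this.download(csvUrl, 'report.csv');
    }
});

casper.run();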
I've added Roxy Fileman to my project and tied it in to CKEditor. It's a standard Durandal project with an MVC controller for routing and a web api controller for ajax/json data calls.
A typical working URL for a web api call in my app is http://localhost:63093/api/DurandalApi/getAssessmentQuestionnairePushMenu?id=1
When I try and upload a file from within CKEditor, I get:
Request URL:http://localhost:63093/fileman/index.html?type=image&CKEditor=ckeditor&CKEditorFuncNum=1&langCode=en
Request Method:POST
Status Code:405 Method Not Allowed
Remote Address:[::1]:63093
If, however, I directly go to http://localhost:63093/fileman/index.html?type=image&CKEditor=ckeditor&CKEditorFuncNum=1&langCode=en in my browser, the file upload works perfectly and I can then browse to the image from FileMan inside CKEditor.
The network tab in Chrome dev tools indicates that the successful upload is done using this URL: http://localhost:63093/fileman/asp_net/main.ashx?a=UPLOAD, which is significantly different from the one that CKEditor attempts to use, but that may be because in the second example index.html is already loaded?
I'm not completely up to speed with what's going on, but the fact that the same URL works perfectly outside of Durandal if I go directly to the URL seems to indicate the FileMan plugin is working just fine and all permissions are set accordingly. Furthermore the CKEditor config is also fine as it can see the images I upload in the directory, but for some reason it's unable to "post" from within CKEditor (which is embedded in a standard Durandal view).
I'm trying to read up on routing to see if I need to do some kind of exception mapping in Durandal to tell it to let the 3rd party .ashx handler deal with the POST request and I'm not even sure if this problem is indicative of Durandal getting in the way or something else. Any suggestions gratefully received!
Ah. All has become clear. This is a half-and-half answer really, as it doesn't really solve the problem; but equally, the problem doesn't really exist!
The issue is that Roxy Fileman does NOT use CKEditor's built-in "Upload" tab in the popup. It expects the user to "Browse Server" only and use the "Add file" link inside Roxy instead.
I was confused by the instructions, but now I understand!
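For anyone hitting the same confusion, this is roughly what the CKEditor side looks like when Roxy is wired up this way (a sketch using Roxy's default paths; treat them as assumptions for your setup):
CKEDITOR.replace('editor1', {
    // "Browse Server" opens Roxy Fileman (default install path assumed).
    filebrowserBrowseUrl: '/fileman/index.html?integration=ckeditor',
    filebrowserImageBrowseUrl: '/fileman/index.html?integration=ckeditor&type=image'
    // Note: no filebrowserUploadUrl is set -- uploads go through Roxy's
    // own "Add file" button, not CKEditor's built-in "Upload" tab.
});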
I am developing an add-on using Firefox's Add-on SDK (v. 1.11). My extension dynamically creates an iframe on each website and then loads an HTML file which includes other resources such as images, font files, etc. from the add-on's local directory.
Problem
When loading any such local resource (i.e., via the "resource://" scheme), the iframe fails to display it and a message is thrown:
Security Error: Content at http://www.XXX may not load or link to resource://XXX
This is a security measure introduced in Firefox 3. When developing without the Add-on SDK, the way around it is to declare a directory with "contentaccessible=yes", making the directory's contents accessible to anyone, including my add-on. However, I have not been able to find similar functionality in the Add-on SDK. Is there a better way of using local data in an iframe that my add-on creates and inserts into a page?
I don't think you can directly load an iframe that points to a resource inside your add-on. The browser complains because it's breaking either the same-origin policy or a cross-site scripting protection; I can't remember which one right now.
If it is HTML content you want to load, you can always inject it into the DOM and then send a message to the document object using the events API to display your custom HTML. I've done this in the past and it works.
So from main.js, send a message to the content script, which will then inject your iframe HTML into the DOM; you can then send the document object a message to display it.
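A minimal sketch of that flow, assuming SDK 1.x module names (the file names and the "show-frame" message name are made up for illustration):
// main.js -- attach a content script and send it the HTML to display.
var pageMod = require("page-mod");
var data = require("self").data;

pageMod.PageMod({
    include: "*",
    contentScriptFile: data.url("content.js"),
    onAttach: function(worker) {
        worker.port.emit("show-frame", "<p>Hello from the add-on</p>");
    }
});

// content.js -- build the iframe in the page and fill it directly,
// instead of pointing its src at a resource:// URL.
self.port.on("show-frame", function(html) {
    var frame = document.createElement("iframe");
    document.body.appendChild(frame);
    frame.contentDocument.body.innerHTML = html;
});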
I hope this helps.
Not sure if this was the case when you posted the question, but it appears that "resource://" should no longer be used with the Addon SDK.
If you're using the resource inside an HTML file in the extension, you can reference it locally; otherwise you should use data.url('whatever.jpg') and pass around that value as needed.
Full info is here: http://blog.mozilla.org/addons/2012/01/11/sdk-1-4-known-issue-with-hard-coding-resource-uris/
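For example, a minimal sketch (the file name is just the placeholder from above):
var data = require("self").data;
// Resolves to the add-on's data/whatever.jpg; pass this string around
// instead of hard-coding a resource:// URI.
var imageUrl = data.url("whatever.jpg");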
I have never written a Firefox add-on, so I am wondering if this can be done. Is it possible to continually scan a webpage for certain text, and then, if that text appears, capture it and save it to a file?
For example
Say a user is on amazon and adds a few items to their shopping cart.
They click checkout and fill in their details and click submit order.
When the order is processed the user is shown the text 'Order complete' and given a receipt of their purchase.
In this example I would like to keep scanning the webpage until 'Order complete' appears. Then I want to capture the html of the receipt and save it to a file.
Is this possible with a Firefox add-on?
From my experience as a Firefox user, this is definitely possible. As a matter of fact there are add-ons that do far more than that.
For example, Greasemonkey can actually act as a filter and change the content of a viewed webpage as specified by a user script. Zotero and AlertBox are able to selectively watch specific HTML elements for interesting information and act upon it.
It is also quite possible that there is an existing add-on that either does what you need already, or can be used as a basis for a custom add-on of your own - what you are asking for is not all that unusual...
You probably want to create your add-on with the Add-on SDK. You can then use the page-mod package to attach a content script to Amazon pages. The content script should check whether it got loaded into an order confirmation page and, if so, send the HTML code of that page (probably document.body.innerHTML) back to the extension.
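A rough sketch of that wiring (the "order-complete" message name, the include pattern, and the text check are illustrative, not Amazon-specific API):
// main.js -- attach a content script to Amazon pages.
var pageMod = require("page-mod");
var data = require("self").data;

pageMod.PageMod({
    include: "*.amazon.com",
    contentScriptFile: data.url("scan.js"),
    onAttach: function(worker) {
        worker.port.on("order-complete", function(html) {
            // html is the receipt markup; write it to a file here (see below).
        });
    }
});

// scan.js (content script) -- look for the confirmation text.
if (document.body.textContent.indexOf("Order complete") !== -1) {
    self.port.emit("order-complete", document.body.innerHTML);
}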
The extension then needs to write the data to a file. You need to use an internal API for that, something like this:
// The SDK's file module wraps low-level file I/O.
var file = require("file");
// Open the target file for writing (text mode).
var writer = file.open("c:\\foo\\bar.html", "w");
// writeAsync() writes the data and closes the stream when done.
writer.writeAsync(data);
If you want the user to choose the file name, you can do this using the chrome authority and the nsIFilePicker component.
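A sketch of that route (the window-utils usage reflects the old SDK and is an assumption; verify it against your SDK version):
// Let the user pick the output file with nsIFilePicker.
var { Cc, Ci } = require("chrome");
var fp = Cc["@mozilla.org/filepicker;1"].createInstance(Ci.nsIFilePicker);
// A parent window is required; the old SDK exposed one via window-utils.
var win = require("window-utils").activeBrowserWindow;
fp.init(win, "Save receipt", Ci.nsIFilePicker.modeSave);
fp.appendFilters(Ci.nsIFilePicker.filterHTML);
if (fp.show() == Ci.nsIFilePicker.returnOK) {
    var path = fp.file.path; // pass this to file.open() as above
}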
I am cleaning up my website and I would like to see HTML errors and warnings on each page automatically. I used to use the Html Validator add-on for Firefox, but it doesn't appear to validate automatically anymore. I don't know if it's because of the add-on version or the fact that I use Firefox 4.
I need to check every page request until I get through the entire site without errors. What add-on/tool might I use?
Try this tool
Specify your website address and enable the "Validate entire site" checkbox.
I'm trying to download some data from a webpage that is dynamically generated, so using wget doesn't work. The page is http://gaceta.diputados.gob.mx/SIL/Legislaturas/Listados.html. I want to download the list shown for each of the options that can be selected in the "Legislatura" field; once downloaded, I can process the data in Ruby.
I just wanted to know the best way to download this and, if possible, how to select each of the options and download its list.
You can use the Web Inspector in Safari or Chrome, or the Firebug extension in Firefox, to look at how the data is loaded. The page is doing an AJAX POST request to a Perl script on this website, and the data is returned as XML.
I would use cURL to grab the data.
You could use http://watir.com/ or Webrat to simulate what you would do to view the data, then use Nokogiri to parse the HTML.