Firefox addon - is it possible to capture text/html from the webpage?

I have never written a Firefox add-on, so I am wondering if this can be done. Is it possible to continually scan a webpage for certain text, and then if that text appears, capture it and save it to a file?
For example:
Say a user is on Amazon and adds a few items to their shopping cart.
They click checkout, fill in their details and click submit order.
When the order is processed, the user is shown the text 'Order complete' and given a receipt of their purchase.
In this example I would like to keep scanning the webpage until 'Order complete' appears. Then I want to capture the HTML of the receipt and save it to a file.
Is this possible with a Firefox add-on?

From my experience as a Firefox user, this is definitely possible. As a matter of fact, there are add-ons that do far more than that.
For example, Greasemonkey can actually act as a filter and change the content of a viewed webpage as specified by a user script. Zotero and AlertBox are able to selectively watch specific HTML elements for interesting information and act upon it.
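To give a flavor of the approach, here is a minimal Greasemonkey-style user script sketch. The @include pattern and the 'Order complete' marker are assumptions taken from the question, and note that a plain user script cannot write to disk, so this only detects and logs the receipt:

// ==UserScript==
// @name     Order receipt watcher (sketch)
// @include  http*://www.amazon.com/*
// ==/UserScript==

// Runs on every matching page load; check for the confirmation text
if (document.body.textContent.indexOf('Order complete') !== -1) {
  // Capture the receipt HTML; actually saving it needs add-on privileges
  console.log(document.body.innerHTML);
}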
It is also quite possible that there is an existing add-on that either does what you need already, or can be used as a basis for a custom add-on of your own - what you are asking for is not all that unusual...

You probably want to create your add-on with the Add-on SDK. You can then use the page-mod package to attach a content script to Amazon pages. The content script should check whether it got loaded into an order confirmation page and, if so, send the HTML code of that page (probably document.body.innerHTML) back to the extension.
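A minimal sketch of that wiring, using the SDK's old-style require() paths; the include pattern, the 'Order complete' marker and the "receipt" port name are all assumptions:

// main.js of the extension
var pageMod = require("page-mod");

pageMod.PageMod({
  include: "*.amazon.com",
  contentScriptWhen: "ready",
  // The content script runs inside the page and reports back over a port
  contentScript:
    'if (document.body.textContent.indexOf("Order complete") !== -1)' +
    '  self.port.emit("receipt", document.body.innerHTML);',
  onAttach: function(worker) {
    worker.port.on("receipt", function(data) {
      saveReceipt(data); // hypothetical helper doing the file writing shown below
    });
  }
});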
The extension then needs to write the data to a file. You need to use an internal API for that, something like this:
var file = require("file");
// Open the destination file for writing (this truncates any existing content)
var writer = file.open("c:\\foo\\bar.html", "w");
// Write asynchronously; the SDK closes the stream when the write completes
writer.writeAsync(data);
If you want the user to choose the file name, you can do that using the chrome authority and the nsIFilePicker component.
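That part might look roughly like this - a sketch assuming the synchronous picker.show() API of that era, with a made-up dialog title:

var {Cc, Ci} = require("chrome");

// Find the most recent browser window to act as the dialog's parent
var wm = Cc["@mozilla.org/appshell/window-mediator;1"]
           .getService(Ci.nsIWindowMediator);
var win = wm.getMostRecentWindow("navigator:browser");

var picker = Cc["@mozilla.org/filepicker;1"].createInstance(Ci.nsIFilePicker);
picker.init(win, "Save receipt as...", Ci.nsIFilePicker.modeSave);
picker.appendFilters(Ci.nsIFilePicker.filterHTML);

var result = picker.show();
if (result == Ci.nsIFilePicker.returnOK || result == Ci.nsIFilePicker.returnReplace) {
  // picker.file.path holds the chosen destination
  var writer = require("file").open(picker.file.path, "w");
  writer.writeAsync(data);
}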

Related

Ajax feedback from newsletter

I have created the email contents as an HTML page.
I have placed 4 checkboxes, and every time one of them is checked I want to run an Ajax call to a URL with parameters. Is such a thing possible, or will the email client refuse to run it for security and/or other reasons?
I can do it with a link to a page with the same contents, but the recipient may not bother to click the link.
The whole idea is: can we run Ajax calls in HTML email contents as if the contents were in an autonomous web page?
No, you can't. All JavaScript is stripped from emails for security reasons. The best you can do, as you've listed, is to have a link in the email with parameters that allow the landing page to do it.
You can achieve some interactivity with checkboxes, however - such as hiding/showing content that is already placed in the email. If you're already inserting the parameters, you presumably have the information, so place that content hidden in the email and reveal it when the box is checked. Mark Robbins shows how it's done: https://www.webdesignerdepot.com/2015/10/punched-card-coding-the-secret-of-interactive-email/ (if that's what you wanted, let us know in the comments and/or include your code and we can give a tailored example)
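The trick is CSS-only: a checked box reveals a sibling element via the :checked selector. A minimal sketch (the id and class names are invented, and support varies widely between email clients):

<style>
  .details { display: none; }
  #offer:checked ~ .details { display: block; }
</style>
<input type="checkbox" id="offer">
<label for="offer">Show offer details</label>
<div class="details">Content that appears when the box is checked.</div>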

Create a Link To An External Page On Order Confirmation Page in Prestashop

I'm very new to this. I am not able to create an external hyperlink (let's say to http://google.com) on the order confirmation page.
I've modified the bank transfer module to create a manual payment method, but I am not able to add a simple link on the confirmation page. The screenshot depicts it well.
I am also attaching a screenshot of the backend where I input the text. I've tried editing the .tpl files, but it doesn't change the actual code in the browser. I have even tried disabling the HTML filter and typing the href in the text box itself; again, the code just disappears in the browser.
The images should make things clear.
Thanks.
The solution is simple: as ébewè said, go to Advanced Settings > Performance and clear the cache, or select 'No' from the drop-down menu next to the cache option.
Then you can just use high-school-grade HTML in the .tpl file to wrap the requisite text in a link. It works!
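For instance, something as simple as this should survive once the cache is cleared - the file name and wording are assumptions; in a bank-transfer-style module the confirmation template is typically payment_return.tpl:

<p>
  Your order is complete.
  <a href="http://google.com" target="_blank">Continue to the external page</a>
</p>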
Thanks ébewè!

Download a file without URL

I'm updating some old CasperJS code that downloads a CSV report. The web interface recently changed. The old version had a link tag I could grab and then use casper.download() to retrieve the file.
However, the new version appears to be an Angular app, and the download button triggers a handleDownload() function that does something under the hood, which results in a popup dialog in my browser.
Is there some way to intercept this dialog or otherwise extract the URL of the actual file?
A few options:
1. Watch what URL is requested when the button is clicked (F12 > Network in Chrome). You could then try to deduce the URL - see the sketch below.
2. Look at what handleDownload does - the logic should be available to you. You may be able to pull the data there.
Hard to help without seeing the code.
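For the first option, you can often capture the URL from inside CasperJS itself by listening for the page's network requests - a sketch, where the page address, button selector, URL pattern and file name are all assumptions:

var casper = require('casper').create();
var csvUrl = null;

// Remember any network request that looks like the CSV report
casper.on('resource.requested', function(requestData) {
  if (/\.csv/i.test(requestData.url)) {
    csvUrl = requestData.url;
  }
});

casper.start('https://example.com/reports');   // hypothetical page
casper.thenClick('#download-report');          // hypothetical button selector

casper.then(function() {
  if (csvUrl) {
    this.download(csvUrl, 'report.csv');       // fetch it the old way
  }
});

casper.run();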

How can I do a Directory Listing of Websites with eZ?

I need to do a specific task with eZ Publish, but I don't have a clue how to do it.
What I need is a list of websites (a website directory). I will basically need to add two kinds of data:
Website name
URL of the website
Then, when I click on the website link, it should redirect to a page where I have an IFRAME with the link (the URL of the website).
Can someone give me a clue about how to do this with eZ Publish? I'm a beginner.
Best Regards,
You haven't said anything about where you're getting the list from, so I assume you already have the list and just want to know the correct way to input this type of content.
Log in to the site admin area and browse to the part of the site you want the list to live in (usually a folder). You'll be adding content items of type 'Link' below that folder.
Select the 'Link' content type and click the 'Create here' button. Enter the content (including the link URL to the page containing the IFRAME), then send it for publishing.
I'm assuming the sites are your own, since many sites now take steps to prevent others placing their page in an IFRAME.
If you want this page to not just link to the IFRAME page but to actually display the IFRAME content, you'll need to override the default link template (copy it and tell eZ to use your version instead) and add a bit of HTML for the IFRAME.
If you're the main user inputting this content, you could also just allow eZ to accept literal HTML in the main description text areas (XML Block) and paste in your IFRAME HTML. Ugly, but quick to set up.
You should be able to find many examples of entering literal html at the community web forums http://share.ez.no/forums
You should create a specific class for that, with the two fields you need:
one for the name and another one for the URL.
Then you'll just have to make an override of node/view/full.tpl for your new class, where you will display the name and an IFRAME with the URL that has been typed into your class instance.

Ajax generated content, crawling and black listing

My website uses Ajax.
I've got a user list page which lists users in an Ajax table (with paging, 'more information' details and so on).
The URL of this page is:
/user-list
The user list is created by Ajax. When the user clicks on a user, he is redirected to a page whose URL is /member/memberName.
So we can see here that Ajax is used to generate content and not to manage navigation (with the # character).
I want to detect bots so that all pages get indexed.
So, normally I display an Ajax table with paging and cool Ajax effects (more info...), and when I detect a bot I want to display all users (without paging) with a link to each member page, like this:
<a href="/member/JohnBob">JohnBob</a>...
Do you think I can get blacklisted for this technique? If you think so, could you please suggest an alternative solution that keeps these clean URLs and doesn't require redeveloping the user-list page (without Ajax)?
Google supports a specification for making AJAX crawlable:
http://code.google.com/web/ajaxcrawling/docs/specification.html
I did an experiment and it works:
http://seo-website-designer.com/SEO-Ajax-Google-Solution
As this is a Google specification, you won't get penalised (unless you abuse it).
That said, only Google supports it at the moment (AFAIK).
Also, I believe following the concept of Progressive Enhancement is a better approach: create a working HTML website, then let the JavaScript enhance it.
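The scheme's core idea: the client navigates with hash-bang URLs (/user-list#!page=2), and the crawler requests the same state as /user-list?_escaped_fragment_=page=2, which your server must answer with a full HTML snapshot. A client-side sketch, where loadUserList() is a hypothetical AJAX loader:

// React to #! navigation, e.g. /user-list#!page=2
window.onhashchange = function() {
  var state = location.hash.replace(/^#!/, '');  // -> "page=2"
  loadUserList(state);  // hypothetical: fetch and render that slice via AJAX
};

// The crawler never runs this; it requests
// /user-list?_escaped_fragment_=page=2 and must get plain HTML back.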
Maybe use the URLs with an onclick to trigger your AJAX scripting? Like:
<a href="/some-url" onclick="yourAjaxCall(); return false;">Some URL</a>
(with yourAjaxCall() standing in for your AJAX handler). I don't think Google would punish you for this: you primarily use JavaScript, but you do provide a fallback for their bot, so your site doesn't get any less accessible.
EDIT
OK, I misunderstood. Then my guess would be that you basically have two options:
1. Write a different part of your site where bots end up, or,
2. Rewrite your current site to, for example, always serve a 'full' page, with an option to fetch only, say, the content div. Then you can load just the content with JavaScript, while bots always get a complete page - see the sketch below.
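A sketch of option 2, assuming a #content container and plain hrefs in the markup; the server always renders the full page, and the script swaps in just the content area:

function loadPage(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function() {
    // Parse the complete page the server returned...
    var doc = document.implementation.createHTMLDocument('');
    doc.documentElement.innerHTML = xhr.responseText;
    // ...but keep only the content area; bots following the plain link
    // still receive the full page
    document.getElementById('content').innerHTML =
        doc.getElementById('content').innerHTML;
  };
  xhr.send();
}

// e.g. loadPage('/user-list');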
