Google Sheets Import XML [duplicate] - xpath

This question already has answers here:
Scraping data to Google Sheets from a website that uses JavaScript
(2 answers)
Closed 4 months ago.
I would like to import the "Major Market Sectors" table from the following webpage into Google Sheets. I have tried using the Chrome inspector tool, as well as the XPath reference, without any luck. Any help is much appreciated. Thank you for your time.
webpage: https://fundresearch.fidelity.com/mutual-funds/composition/316389303

Google Sheets' IMPORTXML function can only get data that is included in the source code of the page, but the table you would like to import is added programmatically, so you will need to use another tool to do that.
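One quick way to confirm this: Apps Script's UrlFetchApp retrieves the same raw HTML that IMPORTXML parses, so if the table's text isn't in the fetched source, no XPath will ever match it. A minimal sketch (run from Extensions > Apps Script; the search string is just an example):

function checkRawHtml() {
  // Fetch the same raw HTML that IMPORTXML sees (no JavaScript is executed).
  var url = "https://fundresearch.fidelity.com/mutual-funds/composition/316389303";
  var html = UrlFetchApp.fetch(url).getContentText();
  // Logs -1 if the table's text never appears in the page source.
  Logger.log(html.indexOf("Major Market Sectors"));
}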

Related

Finding a similar feature/plugin for creating Laravel Blade components

I need assistance in locating a plugin that can generate Blade components similar to the one shown in the image.
I vaguely recall seeing it in a video a few months ago and was wondering if anyone is aware of this feature, or whether it is a plugin.
I tried Google and several Blade extensions in VS Code, but none of them worked.
Thanks in advance.

Joomla and Google Analytics with advanced options [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I want to insert the Google Analytics tracking code in my Joomla site,
so I registered on the official Google site and saw that there is an advanced tab with three more options than the standard one.
Do I have to check "I want to track dynamic pages" and "I want to track PHP pages"?
Do these options give me better results, or are they necessary for a dynamic site based on PHP like Joomla?
Also, where do I place the tracking code? Because of some bugs, some say it is better just after the opening <body> tag, whereas others say just before the closing </body> tag.
Thank you
General SEO advice: with Joomla you don't need to track pages dynamically. If you want, you can turn on SEF and use robots.txt, ror.xml, and sitemap.xml (the first and last files are very important to Google).
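For reference, a minimal Joomla-style robots.txt along those lines might look like the sketch below; the Sitemap line is an assumption, so point it at wherever your site actually publishes its sitemap:

User-agent: *
Disallow: /administrator/
Disallow: /cache/
Sitemap: http://www.example.com/sitemap.xml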
I also recommend using Google Webmaster Tools to update Google whenever you post a new article, as well as to check for crawling errors and remove "bad" URLs from Google.
Like I commented on the other answer, the tracking code should be located just before the closing </body> tag of your web page. I recommend placing the tracking code in the template! (You can copy and paste it separately into each article, but that should be done only in rare cases where you need to pass different parameters to GA from different articles.)
Update:
Regarding your comment: yes, if you go to the "admin" section, then to "tracking code" you'll see the following option:
All it does is provide a different way of including the tracking code in your pages. I have to admit that I didn't use this option with a few Joomla and WP websites I've dealt with, and they still work totally fine. But if Google recommends doing so, by all means go ahead and do it!
Judging by this source on setting up GA, it is important for PHP websites that you include the Google tracking script at the bottom of the page, before the </body> tag. I'm struggling to find any information that would give relevance to the questions you have mentioned above, beyond the fact that choosing different checkboxes shows you different instructions for setting up your script.
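For reference, the classic synchronous ga.js snippet from that era looked like this (UA-XXXXX-X is a placeholder for your own web property ID); this is the version that went just before the closing </body> tag:

<script type="text/javascript">
  var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
  document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
  try {
    var pageTracker = _gat._getTracker("UA-XXXXX-X"); // your web property ID
    pageTracker._trackPageview();                     // record the pageview
  } catch(err) {}
</script>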

DOTNETNUKE development [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm developing websites using ASP.NET. Now I'm interested in developing a website using DotNetNuke. The big question is:
When using DotNetNuke, do I have to develop a module for every little thing that is going to be part of the site content (for instance, a text form and button, a datetimepicker, or a datagrid showing some data from a database)?
As far as I can see, you can add content like text, images, and video using the DotNetNuke control panel, but what if I want to put in an image gallery that uses jQuery, or just a div element containing a few controls?
P.S.: When I create a new website using the DotNetNuke control panel, where can I find the HTML code of that site (is it possible to edit it in Visual Studio)? I'm able to open the whole DotNetNuke website and run it, but I can see only Default.aspx.
In short, yes and no.
You can put HTML and jQuery code into a variety of the modules that come with DotNetNuke, primarily the HTML module.
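For example, the jQuery image gallery from the question could be pasted into an HTML module as markup plus a script block. A rough sketch (the /Portals/0/ paths follow DNN's usual file layout, and the lightBox() call is a hypothetical stand-in for whatever gallery plugin you load on the page):

<div id="myGallery">
  <a href="/Portals/0/Images/photo1.jpg"><img src="/Portals/0/Images/thumb1.jpg" alt="Photo 1" /></a>
  <a href="/Portals/0/Images/photo2.jpg"><img src="/Portals/0/Images/thumb2.jpg" alt="Photo 2" /></a>
</div>
<script type="text/javascript">
  jQuery(function ($) {
    $("#myGallery a").lightBox(); // hypothetical plugin call; swap in your gallery plugin
  });
</script>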
You can also "code" things using the Razor Host module if you want to add custom functionality to a page that isn't easily done with HTML or jQuery.
The HTML code for a DNN site is stored in a database; depending on the modules you use on a page, that code could be in any number of database tables.
I would recommend taking a look at some of the "basic" webinars on our training page; they will give you a general overview of things and of how you do development within the platform. http://www.dotnetnuke.com/Resources/Training.aspx#basicWebinars
Also check out the Wiki for more specific development questions and tutorials.
You don't have HTML code for every page in DNN, but if you want, you can create skins for pages and add HTML modules for the content on the respective pages.
You can create an image gallery that uses jQuery; for that, you need to create a visualizer for that section of images. You need to use the concept of Liquid Content, which allows you to use jQuery, CSS, and HTML (a visualizer template) for that section.

No Tabs-Urls added [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
How to add tab application to a page with the “new auth dialog”
I have just "programmed" a new tab for my fan page as described in the help for developers.
(https://developers.facebook.com/docs/appsonfacebook/pagetabs/)
I have got an app ID, but no App URLs. I can't reach the App Page to add it to my fan page.
Why doesn't the app work? Where can I get help? It looks so easy, but I have been at it for hours now.
Thanks a lot.
Wolfgang
From the Facebook Documentation:
https://www.facebook.com/dialog/pagetab?app_id=YOUR_APP_ID&next=YOUR_URL
YOUR_URL seems to need to be the website URL associated with your app.
Finally, it works.
If anybody has the same problem, here is a hint:
YOUR_URL is the path to the app hosted on your server (e.g. https://www.example.com/app/index.html).
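So, with hypothetical values filled in, the dialog URL from the answer above would look like:

https://www.facebook.com/dialog/pagetab?app_id=123456789&next=https://www.example.com/app/index.html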

Saving PDF files with Chickenfoot

I'm writing a web-crawler using Chickenfoot and need to save PDF files. I can either click the link on the page or grab the PDF's URL and use
go("http://www.whatever.com/file.pdf")
and I get the firefox "Opening file.pdf" dialog box, but can't click the "OK" button to actually save the file.
I've tried using other means to download the files (wget, python's urllib2, twill), but the PDF files are gated so none of those will work.
Any help is appreciated.
This example of how to save a target in the Mozilla developer documentation looks like it should do exactly what you want. I've tested a very similar Chickenfoot example that gets the TEMP environment variable, and it worked well for me in Chickenfoot.
https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIWebBrowserPersist#Example
You might have to play with the application associations in Tools, Options, Applications to make sure the action is set to Save File, but those settings might not apply to these functions.
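In case the MDN link goes stale, the example boils down to roughly this (a Firefox 3-era XPCOM sketch using the old six-argument saveURI signature; the URL and save path are placeholders):

const Cc = Components.classes, Ci = Components.interfaces;
// Create the persist object that performs the actual download.
var persist = Cc["@mozilla.org/embedding/browser/nsWebBrowserPersist;1"]
                .createInstance(Ci.nsIWebBrowserPersist);
// Target file on disk (placeholder path).
var file = Cc["@mozilla.org/file/local;1"].createInstance(Ci.nsILocalFile);
file.initWithPath("C:\\temp\\file.pdf");
// URI of the PDF to save (placeholder URL).
var ios = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService);
var uri = ios.newURI("http://www.whatever.com/file.pdf", null, null);
// Old (pre-Gecko 2) signature: saveURI(uri, cacheKey, referrer, postData, extraHeaders, file)
persist.saveURI(uri, null, null, null, null, file);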
End Answer, begin related grumblings...
I sure wish someone would fix the many bugs in Chickenfoot and write a nice Cookbook programming guide. I've been using it for years, and there are still many basic things I've not been able to figure out how to do. I finally broke down and subscribed to the mailing list, as the archives have some decent script examples. It takes a lot of searching through the PDF references, blogs, etc., as the web API reference is very sparse.
I love how simple Chickenfoot can make automating some tasks, but it takes me days of searching javascript, DOM, and Firefox documents to find ways to do some of the things it can't, since I'm not really a web programmer. The goal of Chickenfoot seems to be that I shouldn't have to be, but unfortunately few are refining the proof of concept, as MIT has dropped the project.
I tried to do this several ways using only Chickenfoot commands and confirmed they don't work with the latest Firefox 3 and Chickenfoot 1.0.7.
I hope this helps! Good luck. Sorry I only ran across your question yesterday, but found it too interesting to leave alone.
You won't be able to click on Firefox dialogs, for security reasons.
The best way to download the content of a URL is to read then write the content of the URL.
// Chickenfoot 1.0.7 JavaScript code to download the content of a URL.
include( "fileio.js" ); // enables the write() function
var url = "http://google.com",                 // URL to fetch
    saveFileTo = "c://chickenfoot-google.com"; // local file to write the content to
write( saveFileTo, read( url ) );              // read the URL and save its content to disk
You might find it helpful to use jQuery with Chickenfoot.
http://groups.csail.mit.edu/uid/chickenfoot/scripts/index.php?title=Using_jQuery,_jQuery_UI_and_similar_libraries
This has worked for me to save Excel files from the NCES portal.
http://muaz-khan.blogspot.com/2012/10/save-files-on-disk-using-javascript-or.html
I was using Firefox 3.0 and the "old syntax" version of the code. I also stripped out code intended for IE, as well as the line "(window.URL || window.webkitURL).revokeObjectURL(save.href);", which generated an error.
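For context, the core of that blog's trick looks roughly like this (a sketch, not the post's exact code; for Firefox 3.0 the "old syntax" event creation shown here is what worked, and the revokeObjectURL line is omitted as noted above):

// Wrap the data in a Blob, point a temporary <a> at it, and synthesize a click.
var blob = new Blob([data], { type: "application/vnd.ms-excel" }); // 'data' holds the file contents
var save = document.createElement("a");
save.href = (window.URL || window.webkitURL).createObjectURL(blob);
save.download = "report.xls"; // suggested file name
var evt = document.createEvent("MouseEvents"); // "old syntax" event creation
evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0,
                   false, false, false, false, 0, null);
save.dispatchEvent(evt);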
