Hoping someone smarter than me can help me sort this out! I've been stumped for a few days now trying to pull some data from a website into a Google Sheet using IMPORTXML, with no luck.
I'm looking to import the average odds for various sporting events from Oddsportal.com, which update throughout the day. I'd like my sheet to update these odds as well, similar to stock prices.
For example:
http://www.oddsportal.com/search/San+Jose+Sharks/
I would like to pull the average odds for Team "1" (+136), Tie "X" (+277), and Team "2" (+161) into individual cells, just the odds portion. If they can't be pulled from that page, they are also listed at the bottom of http://www.oddsportal.com/hockey/usa/nhl/san-jose-sharks-nashville-predators-6cPaAHOM/ in the Average Odds row.
This seems simple enough, but I just can't get the IMPORTXML query right without an error.
I've looked at the page's source code (Ctrl-U). The original HTML does not contain the needed values; they are most likely loaded later through an XHR (AJAX) call.
So a plain HTML request will most likely not succeed.
You need to explore the Network tab in the browser's DevTools to find out which request is initiated (by the JS files) to fetch the needed data. It may even be a unique one containing a hash signature, in which case you won't be able to reproduce it for future use.
I recommend turning to scraping tools to retrieve that info.
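If you do find a stable JSON endpoint in the Network tab, one way to pull it into a sheet is a Google Apps Script custom function. The sketch below assumes such an endpoint exists; the URL and the field names in it are hypothetical placeholders, not a real Oddsportal API:

function GET_AVERAGE_ODDS() {
  // Hypothetical XHR endpoint copied from the DevTools Network tab.
  var url = 'https://example.com/ajax/odds/6cPaAHOM.json';
  var response = UrlFetchApp.fetch(url, {
    // Some endpoints check the Referer header; copy headers from the real request.
    headers: {'Referer': 'http://www.oddsportal.com/'},
    muteHttpExceptions: true
  });
  var data = JSON.parse(response.getContentText());
  // 'avgOdds' is a made-up field name; inspect the real JSON to find yours.
  return [[data.avgOdds['1'], data.avgOdds['X'], data.avgOdds['2']]];
}

Entering =GET_AVERAGE_ODDS() in a cell would then spill the three odds across one row. But as noted above, if the request carries a hash signature, this approach falls apart.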
I am trying to use Octoparse to extract the podcast details from Marie Brown's "Beyond the kitchen table" website. https://beyondthekitchentable.co.uk/podcast/
I'm using Octoparse's free version, which allows for scraping locally. The problem is that while Octoparse will automatically auto-detect the Title, Title_URL, and Content webpage data and correctly set up the Pagination, Scroll Page, and Loop Item workflow to extract those fields, it does not auto-detect the 'Date' and 'Podcast time duration' fields of each individual podcast, as these pieces appear to be embedded from an iframe. I am able to custom-add Date and Podcast time duration using an absolute XPath, i.e. //div[@class="cfm-episodes-list"]/div[1]/div[2]/div[1]/iframe[1], but this results in the same value being copied for every record. When I attempt to fix this by using the Relative XPath setting in Octoparse to loop each item with //span[@class="cp-episode-date"] in order to gather the individually unique values, it does not get any values, even though this relative XPath finds all the items when I search with Chrome's DevTools. I saw what might be another helpful post on Stack Exchange about this, but I was not able to make sense of it.
//span[@class="cp-episode-date"] is a relative XPath, as it finds multiple Date items in Chrome's DevTools, but it is not complete, and I am not sure how to implement the iframe traversal for the custom-added Date and Podcast time duration fields that Octoparse's Relative XPath setting is looking for. I even tried installing the SelectorsHub Chrome browser extension, but it didn't pull up the nested SelectorsHub to query the XPath the way the SelectorsHub YouTube video demonstrates; it only showed me the relative XPath I already have below.
Please have a look at this site using Octoparse and see if it is possible. If so, how can I do it?
When Absolute Path is used - //div[@class="cfm-episodes-list"]/div[1]/div[2]/div[1]/iframe[1]
vs.
When Relative Path is used - //span[@class="cp-episode-date"]
There are plenty of iframes inside the webpage. I don't know if Octoparse could handle this. Choose another starting point.
For example, use Apple Podcasts:
https://podcasts.apple.com/gb/podcast/the-website-coach/id1587503231
Dates could be recovered with the following XPath:
//div[@class="l-row"]//time[@class]/@aria-label
Another possibility: scrape the following page:
https://feeds.captivate.fm/the-website-coach/
Dates could be recovered with the following XPath:
//h4/text()
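If you would rather pull these into Google Sheets than Octoparse, the feeds page appears to be plain HTML (though that could change), so something like this might work:

=IMPORTXML("https://feeds.captivate.fm/the-website-coach/", "//h4/text()")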
Even easier, get the data directly from this URL (a .json file):
https://itunes.apple.com/lookup?id=1587503231&media=podcast&entity=podcastEpisode&limit=100
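Since that URL returns JSON, which IMPORTXML can't parse, a small Apps Script custom function could read it instead. A sketch, assuming the usual iTunes lookup response shape (a results array whose episode entries carry kind, trackName, releaseDate, and trackTimeMillis; verify against the actual response):

function PODCAST_EPISODES() {
  var url = 'https://itunes.apple.com/lookup?id=1587503231&media=podcast&entity=podcastEpisode&limit=100';
  var data = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  // The first result is usually the podcast itself, so keep only episode entries.
  return data.results
    .filter(function(r) { return r.kind === 'podcast-episode'; })
    .map(function(r) {
      // Title, release date, and duration in minutes (trackTimeMillis may be absent).
      return [r.trackName, r.releaseDate,
              r.trackTimeMillis ? Math.round(r.trackTimeMillis / 60000) : ''];
    });
}

Entering =PODCAST_EPISODES() in a cell would then spill one row per episode.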
I am trying to import, into google sheets, the last quarter's research and development expense for a few thousand companies from their financial statements. While I want to import several different elements from financial statements, the last quarter R&D expense is currently pertinent (and potentially the previous 3 quarters).
I have tried several different sites (Yahoo Finance, Bloomberg, etc.), but the simplest URL seems to be from stockrow.com, because I can simply automate the substitution of the stock ticker in the URL.
To get the XPath, I inspect the element and copy the XPath using the browser (I have tried Chrome and Firefox).
I am using IMPORTXML in Google Sheets and, on my last attempt, used the following input: =IMPORTXML("https://stockrow.com/JNJ/financials/income/quarterly","/html/body/div[1]/div/div/section/div/div[2]/div[1]/section[4]/div/div[3]/div/div/div[3]/div/div/div[11]/div/span")
I have attempted all sorts of combinations of sites, browsers, and XPaths related to the element, but no matter what I do, I always get the same error: "Imported content is empty."
I read "xpath google sheet importxml" but can't make sense of what is happening in the change to the XPath or how to solve this particular challenge.
Because I want this to be repeatable across multiple stock tickers in Google Sheets, I am hoping that the "location" of the R&D expense (and of the other elements in the financial statements) is consistent across all pages, so that this is not just a specific solution to this one challenge.
Looking forward to receiving guidance. Thanks!!
You need some other source. Google Sheets does not support scraping of JavaScript elements. You can test JS dependency simply by disabling JS for a given site; whatever is left can be scraped. In your case, it's nothing.
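If you want to run that test without touching browser settings, you can also fetch the raw HTML (which is all IMPORTXML ever sees) and check whether the figure you are after appears in it. A minimal Apps Script sketch; the search string is just a placeholder for whatever value you expect to see rendered on the page:

function isScrapable() {
  // IMPORTXML only ever sees this pre-JavaScript HTML, never the rendered page.
  var html = UrlFetchApp.fetch('https://stockrow.com/JNJ/financials/income/quarterly',
                               {muteHttpExceptions: true}).getContentText();
  // Replace the search string with the exact figure you expect to find.
  Logger.log(html.indexOf('Research') > -1 ? 'present in raw HTML' : 'JS-rendered; not scrapable');
}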
I'm new to Google Sheets, so I have a lot to learn. I want to scrape the latest horse odds for daily races from https://www.racenet.com.au
An example URL would be
https://www.racenet.com.au/racing-form-guide/ararat-20191210/all-races
I am having trouble getting IMPORTXML to pick up any useful data no matter what parameters I try. Can anyone suggest the correct syntax and parameters to get the horse names and odds from this site?
The first step is to block JavaScript on the website to see what's left to be scraped:
Google Sheets does not support importing JS elements.
Then the next step is to run an IMPORTXML formula to see what can be scraped:
=IMPORTXML("URL_here", "//*")
I am trying to write a function that will give me the annual dividend payout for a given stock. The website I am using is www.seekingalpha.com.
So I understand that the function is =IMPORTXML(URL, xpath_query).
In that case, my URL is: https://seekingalpha.com/symbol/VOO/dividends/scorecard
but the problem I am having is figuring out the correct XPath to acquire the dividend value.
I currently have this as my function:
=IMPORTXML(CONCATENATE("https://www.seekingalpha.com/symbol/", $B2, "/dividends/scorecard"), "//body")
$B2 is a cell that holds the ticker symbol, in case you are wondering. Anyway, I right-clicked the number I wanted on the website, followed it downstream, and tried to see what it is nested under, but I keep running into the wrong "directory," so to speak, because I am usually left with an "Empty" error.
I have also tried copying the XPath directly:
/html/body/div[2]/div[1]/div/div[1]/div/div/div[2]/section/section[1]/table/tbody/tr/td[1]
but am greeted with another empty field error.
Could anyone point me in the right direction? I've been researching this for a while and figured this would be a great way to learn. Thank you in advance!
You need some other source. Google Sheets does not support scraping of JavaScript elements. You can test JS dependency simply by disabling JS for a given site; whatever is left can be scraped. In your case, it's nothing.
UPDATE:
=INDEX(IMPORTXML("https://stocknews.com/stock/"&A15&"/dividends/",
 "//div[@class='grade-cat-ytd']"), 2)
I have a formula that fetches names of books from goodreads.com:
=IMPORTXML("https://www.goodreads.com/book/show/" & gr_id; "//*[#id='bookTitle']")
where gr_id is a column containing ids of the books. For example when gr_id=23848607, it fetches from URL https://www.goodreads.com/book/show/23848607 and the result is "Warheart".
The formula worked fine some time ago. I did not change anything, and now I noticed it has stopped working for some of the books (while still working for others). Instead of the name of the book, it now gives N/A with an "Import Internal Error" hint. The IDs that do not work are:
48332548
35906922
How do I make it work for all books?
There were many questions posted about "Import Internal Error" problems. I tried some solutions including copying the formula to a fresh sheet, but it did not work.
Update: I tried the following different XPath formulas instead of "//*[@id='bookTitle']".
"//h1[@id='bookTitle']"
"//h1"
Those different XPath formulas worked the same as the original XPath formula. They worked correctly for the same ids that the original one did and produced N/As for the same ids that the original one did.
Update: I just re-checked, and all my formulas worked correctly for all gr_ids (I had not changed anything since the time when they did not work). Maybe someone knows how to prevent them from breaking again in the future.
Update: I re-checked once again. Of all the gr_ids, only this one was showing N/A now: 35906922. I created an example spreadsheet, because my working spreadsheet contains too many unrelated details, but the problem did not appear in the example spreadsheet. I went back to my working spreadsheet and reloaded it, and the problem disappeared there too. Then I added more test data in the example spreadsheet, and the following new example gr_ids showed N/A:
48213012
48213092
I tried making a copy of the example spreadsheet to see if that would fix the problem. The behavior in the copied example spreadsheet was identical to the original: the problem appeared only with the two gr_ids specified above.
If you run a full IMPORTXML on those two IDs, you can see it won't return anything at all:
=IMPORTXML("https://www.goodreads.com/book/show/48213012-fathers-and-sons", "//*")
which means that Google Sheets can't reach the XML content for some reason (it could be something similar to https://stackoverflow.com/a/24891676/5632629).
Therefore, we can try to read the source code directly with IMPORTDATA, where we can find around 70 elements containing the same information, so we pick one, isolate it, and remove the HTML tags. Then we just wrap the prior formula in IFERROR and force the formula to take a second look if it fails the first time. The result looks like this:
=IFERROR(IMPORTXML("https://www.goodreads.com/book/show/"&A:A, "//*[@id='bookTitle']"),
REGEXEXTRACT(QUERY(ARRAY_CONSTRAIN(
IMPORTDATA("https://www.goodreads.com/book/show/"&A:A), 100, 1),
"select Col1 where Col1 contains '</title>'"), ">(.*) by"))
IMPORTXML() seems to be unreliable, and I decided not to use it, because I did not find an acceptable solution to my problem. Instead of using IMPORTXML(), I exported my books from goodreads.com to a CSV file (goodreads.com has such a feature) and then imported the CSV file into my spreadsheet. This is not a perfect solution, because I need to re-import every time I want to update the books, but at least it works.