I am trying to pull information from a table of businesses (company names, addresses, phone numbers) that are all formatted similarly. I'm able to pull that information through IMPORTHTML (and IMPORTXML) for the first page of results when I load the URL. However, the table spans multiple pages (tabs) under the same URL.
How do I write the IMPORTHTML formula so it will pull in relevant information from the other table tabs?
URL (in A2 of Google Sheets): https://www.tcia.org/TCIA/Directories/FindQualifiedTreeCare.aspx?State=MD
Formula:
=IMPORTHTML(A2,"table",3)
Unfortunately, that is not possible in Google Sheets, because the URL is the same for all page views.
As mentioned by @player0, this is not possible because the URL is the same for every page.
Take a look at the JavaScript function behind each pagination link.
For page 1:
__doPostBack(
'ctl01$TemplateBody$WebPartManager1$gwpste_container_MemberResults$ciMemberResults$gvSearchResults',
'Page$1')
For page 2:
__doPostBack(
'ctl01$TemplateBody$WebPartManager1$gwpste_container_MemberResults$ciMemberResults$gvSearchResults',
'Page$2')
For page 3:
__doPostBack(
'ctl01$TemplateBody$WebPartManager1$gwpste_container_MemberResults$ciMemberResults$gvSearchResults',
'Page$3')
So when you run one of these functions in your browser's console, you will be taken to the corresponding page.
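In the page source, these calls come from the pager links of the ASP.NET GridView; reconstructed for illustration (the exact markup may differ), each link looks roughly like:
<a href="javascript:__doPostBack('ctl01$TemplateBody$WebPartManager1$gwpste_container_MemberResults$ciMemberResults$gvSearchResults','Page$2')">2</a>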
Is it possible to add this function to the location bar so we can get a direct URL?
It is not possible for security reasons; browsers generally block running JavaScript typed into the location bar.
Is there any way to work around it?
Here are the steps I would take:
Use a Python browser driver (or a Node.js one such as Puppeteer; see the sketch below) to simulate user behavior in a browser
Web-scrape the data to your local machine
Upload the data using the Google Sheets API
Parse it however you want with Apps Script
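As a minimal sketch of the first two steps using Puppeteer (Node.js) rather than a Python driver, reusing the __doPostBack call shown above (the 'table tr' selector is an assumption to verify against the real markup):

// npm install puppeteer
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.tcia.org/TCIA/Directories/FindQualifiedTreeCare.aspx?State=MD');

  // Trigger the same ASP.NET postback the pagination links use, and wait for the reload
  await Promise.all([
    page.waitForNavigation(),
    page.evaluate(() => __doPostBack(
      'ctl01$TemplateBody$WebPartManager1$gwpste_container_MemberResults$ciMemberResults$gvSearchResults',
      'Page$2')),
  ]);

  // Collect every table row as an array of cell texts ('table tr' is a guess at the markup)
  const rows = await page.evaluate(() =>
    Array.from(document.querySelectorAll('table tr'), tr =>
      Array.from(tr.cells, td => td.innerText.trim())));
  console.log(rows);

  await browser.close();
})();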
I hope it helps
Related
I have made several attempts to collect the data within this table:
I've tried the simple approaches with the two functions I've commented on, but without success.
I would like to know if anyone knows another way to collect this data in Google Sheets.
Site Link:
https://www.onlinebettingacademy.com/stats/team/brazil/operrio-pr/13217#tab=t_squad
The table you want to scrape is under JavaScript control; therefore, it can't be scraped with the built-in functions.
All you can get from that site into Google Sheets is:
=ARRAY_CONSTRAIN(IMPORTDATA(
"https://www.onlinebettingacademy.com/stats/team/brazil/operrio-pr/13217#tab=t_squad&team_id=13217"); 10000; 10)
Because the page you are trying to scrape is rendered using JavaScript (i.e. the content you are looking to scrape is not in the markup), you will not be able to use a tool like Google Sheets.
However... you can easily scrape this by using a "headless browser": a browser without a UI that renders your URL, executes its JavaScript, and then, once the page is loaded, lets you query the data using XPath, CSS selectors, etc.
Check out Puppeteer for an example of a JS framework that you can use for this task.
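A rough Puppeteer sketch of that idea, assuming the page settles on network idle (the '//table//tr' XPath is a placeholder to replace after inspecting the rendered DOM):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the page's JavaScript has (mostly) finished its network activity
  await page.goto('https://www.onlinebettingacademy.com/stats/team/brazil/operrio-pr/13217#tab=t_squad',
    { waitUntil: 'networkidle2' });

  // Query the rendered DOM with XPath; '//table//tr' is a placeholder expression
  const rows = await page.evaluate(() => {
    const result = document.evaluate('//table//tr', document, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
    const texts = [];
    for (let i = 0; i < result.snapshotLength; i++) {
      texts.push(result.snapshotItem(i).textContent.trim());
    }
    return texts;
  });
  console.log(rows);

  await browser.close();
})();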
I am trying to import data from the following website to Google Sheets. I want to import all the matches for the day.
https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches
I have tried importxml and importhtml, but it seems this does not work as the website uses JavaScript. I have also tried to use Apipheny without any success.
When using Apipheny, the error message is
'Failed to fetch data - please verify your API Request: {DNS error'
TL;DR
Adapted from my answer to How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website? (also posted by me)
Please spend some time learning how to use your browser's developer tools so you will be able to identify:
whether the data is already included in the source code of the webpage as JSON / a literal JavaScript object, or in another form
whether the webpage is doing GET or POST requests to retrieve the data, and when those requests are made (i.e. at some point during page parsing, or on an event)
whether the requests require data from cookies
A brief guide to using the web browser to find useful details about the webpage / data to import
Open the source code and check whether the required data is included. Sometimes the data is included as JSON and added to the DOM using JavaScript. In that case it might be possible to retrieve it by using the Google Sheets functions or the URL Fetch Service from Google Apps Script.
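A minimal Apps Script sketch of that idea; the URL, the pageData variable name, and the regular expression are assumptions to adapt to the page you inspect:

function importEmbeddedJson() {
  // Fetch the raw page source; this only works if the JSON is embedded in the markup
  const html = UrlFetchApp.fetch('https://example.com/page').getContentText();

  // 'pageData' is a hypothetical variable name; adapt the pattern to the real page
  const match = html.match(/var pageData = (\{.*?\});/s);
  if (!match) return;

  const data = JSON.parse(match[1]);
  // Write the raw JSON into A1 of the active sheet as a quick sanity check
  SpreadsheetApp.getActiveSheet().getRange('A1').setValue(JSON.stringify(data));
}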
Let's say that you use Chrome. Open the Dev Tools, then look at the Elements tab. There you will see the DOM. It might be helpful to identify whether the data that you want to import, besides being in visible elements, is also included in hidden / not-visible elements like <script> tags.
Look at the Sources tab; there you might be able to see the JavaScript code. It might include the data that you want to import as a JavaScript object (commonly referred to as JSON).
There are a lot of questions about google-sheets + web-scraping that mention problems using importhtml and/or importxml and that already have answers; many even include code (JavaScript snippets, Google Apps Script functions, etc.) that might save you from having to use a specialized web-scraping tool with a steeper learning curve. At the bottom of this answer there is a list of questions about using Google Sheets built-in functions, including annotations of the workarounds proposed.
Is there a way to get a single response from a text/event-stream without using event listeners? asks about using EventSource. While that can't be used in server-side code, the answer shows how to use the HtmlService to use it in client-side code and retrieve the result to Google Sheets.
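A rough sketch of that pattern, assuming a placeholder stream endpoint that allows cross-origin requests; the client page takes the first event and hands it back to the server side:

// Code.gs
function showSidebar() {
  SpreadsheetApp.getUi().showSidebar(HtmlService.createHtmlOutputFromFile('Stream'));
}

function storeResult(text) {
  // Called from the client with the first streamed message
  SpreadsheetApp.getActiveSheet().getRange('A1').setValue(text);
}

<!-- Stream.html -->
<script>
  // 'https://example.com/stream' is a placeholder text/event-stream endpoint
  const es = new EventSource('https://example.com/stream');
  es.onmessage = (event) => {
    es.close(); // take a single message, then stop listening
    google.script.run.storeResult(event.data); // hand the result back to Apps Script
  };
</script>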
As you already realized, the Google Sheets built-in functions importhtml(), importxml(), importdata() and importfeed() only work with static pages that do not require signing in or other forms of authentication.
When the content of a public page is created dynamically by using JavaScript, it cannot be accessed with those functions; on the other hand, the website's webmaster may also have purposefully prevented web scraping.
How to identify if content is added dynamically
To check whether the content is added dynamically, using Chrome:
Open the URL of the source data.
Press F12 to open Chrome Developer Tools
Press Control+Shift+P to open the Command Menu.
Start typing javascript, select Disable JavaScript, and then press Enter to run the command. JavaScript is now disabled.
JavaScript will remain disabled in this tab so long as you have DevTools open.
Reload the page to see whether the content that you want to import is shown. If it is shown, it can be imported by using the Google Sheets built-in functions; otherwise it's not possible with them, but might be possible by other means of web scraping.
According to Wikipedia,
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.
Use of robots.txt to block Web crawlers
Webmasters can use a robots.txt file to block crawler access to a website. In such a case the result will be #N/A Could not fetch URL.
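For reference, a robots.txt that blocks all crawlers from the entire site looks like this:

User-agent: *
Disallow: /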
Use of the user agent
The webpage could be designed to return a special custom message instead of the data.
Below are more details about how the Google Sheets built-in "web-scraping" functions work.
IMPORTDATA, IMPORTFEED, IMPORTHTML and IMPORTXML are able to get content from resources hosted on websites that are:
Publicly available. This means that the resource doesn't require authorization / being logged in to any service to access it.
The content is "static". This means that if you open the resource using the view-source option of modern web browsers it will be displayed as plain text.
NOTE: Chrome's Inspect tool shows the parsed DOM; in other words, the actual structure/content of the web page, which could be dynamically modified by JavaScript code or browser extensions/plugins.
The content has the appropriate structure.
IMPORTDATA works with structured content such as CSV or TSV, regardless of the file extension of the resource.
IMPORTFEED works with marked-up content such as ATOM/RSS.
IMPORTHTML works with marked-up content such as HTML that includes properly marked-up lists or tables.
IMPORTXML works with marked-up content such as XML or any of its variants, like XHTML. (Illustrative calls for each function are shown after this list.)
The content doesn't exceed the maximum size. Google hasn't disclosed this limit, but the error below will be shown when the content exceeds the maximum size:
Resource at url contents exceeded maximum size.
Google servers are not blocked by means of robots.txt or the user agent.
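Illustrative calls for each function (the example.com URLs and the XPath expression are placeholders):

=IMPORTDATA("https://example.com/data.csv")
=IMPORTFEED("https://example.com/feed.xml")
=IMPORTHTML("https://example.com/page.html", "table", 1)
=IMPORTXML("https://example.com/page.xhtml", "//h1")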
The W3C Markup Validator offers several tools to check whether resources have been properly marked up.
Regarding CSV, check out Are there known services to validate CSV files?
It's worth noting that the spreadsheet
should have enough room for the imported content; Google Sheets has a 10 million cell limit per spreadsheet, a limit of 18,278 columns according to this post, and a 50,000-character limit on cell content, whether a value or a formula.
doesn't handle large in-cell content well; the "limit" depends on the user's screen size and resolution, since it's now possible to zoom in/out.
References
https://developers.google.com/web/tools/chrome-devtools/javascript/disable
https://en.wikipedia.org/wiki/Web_scraping
Related
Using Google Apps Script to scrape Dynamic Web Pages
Scraping data from website using vba
Block Website Scraping by Google Docs
Is there a way to get a single response from a text/event-stream without using event listeners?
Software Recommendations
Web scraping tool/software available for free?
Recommendations for web scraping tools that require minimal installation
Web Applications
The following question is about a different result, #N/A Could not fetch URL
Inability to use IMPORTHTML in Google sheets
Similar questions
Some of these questions might be closed as duplicates of this one
Importing javascript table into Google Docs spreadsheet
Importxml Imported Content Empty
scrape table using google app scripts
One answer includes Google Apps Script code using the URL Fetch Service
Capture element using ImportXML with XPath
How to import Javascript tables into Google spreadsheet?
Scrape the current share price data from the ASX
One of the answers includes Google Apps Script code to get data from a JSON source
Guidance on webscraping using Google Sheets
How to Scrape data from Indiegogo.com in google sheets via IMPORTXML formula
Why importxml and importhtml not working here?
Google Sheet use Importxml error could not fetch url
One answer includes Google Apps Script code using the URL Fetch Service
Google Sheets - Pull Data for investment portfolio
Extracting value from API/Webpage
IMPORTXML shows an error while scraping data from website
One answer shows the XHR request found using the browser developer tools
Replacing =ImportHTML with URLFetchApp
One answer includes Google Apps Script code using the URL Fetch Service
How to use IMPORTXML to import hidden div tag?
Google Sheet Web-scraping ImportXml Xpath on Yahoo Finance doesn't works with french stock
One of the answers includes Google Apps Script code to get data from a JSON source. As of January 4th, 2023, it's no longer working, very likely because Yahoo! Finance is now encrypting the JSON. See Tainake's answer to How to pull Yahoo Finance Historical Price Data from its Object with Google Apps Script? for a script using Crypto.js to handle this.
How to fetch data which is loaded by the ajax (asynchronous) method after the web page has already been loaded using apps script?
One answer suggests reading the data from the server instead of scraping it from a webpage.
Using ImportXML to pull data
Extracting data from web page using Cheerio Library
One answer suggests using an API and Google Apps Script
ImportXML is good for basic tasks, but it won't get you far if you are serious about scraping:
The approach only works with the most basic websites (no SPAs rendered in browsers can be scraped this way; any basic web-scraping protection or connectivity issue breaks the process, and there is no control over HTTP request geolocation or the number of retries), and Yahoo Finance is not a simple website
If the target website data requires some cleanup post-processing, it gets very complicated, since you are now "programming with Excel formulas", a rather painful process compared to regular code writing in conventional programming languages
There isn't any proper launch and cache control, so the function can be triggered occasionally, and if the HTTP request fails, cells will be populated with ERR! values
I recommend using proper tools (an automation framework and a scraping engine that can render JavaScript-powered websites) and using Google Sheets just for basic storage purposes:
https://youtu.be/uBC752CWTew (Pipedream for automation and ScrapeNinja engine for scraping)
Let's say I am creating a webapp for a library. My base url is http://mylibrary.com. I want to use "pretty" URLs as follows:
http://mylibrary.com/books (list all books)
http://mylibrary.com/books/book1 (details of a particular book)
At present my approach is to create a single-page app and use the History API to manage the URLs, i.e. I load all CSS and JS files when the user visits the home page. From then on I just get data from the server using AJAX, in JSON format, and then create the required HTML using JavaScript.
But I have learnt that this is not so good from an SEO point of view. If a crawler were to visit http://mylibrary.com/books, it would not see the book list at all, because the AJAX calls would not take place.
My question is: what is the alternative approach to designing this kind of app? Specifically:
Should the server create the entire web page and send it to the browser? I mean, will the response from the server include everything from <html> to </html>, or only the required parts?
Do programming languages like PHP efficiently manage to send the HTML to clients? I would rather have the web server do it...
It appears to me that in this scenario AJAX would have very little role to play, other than maybe changing minor parts of the page. Is that a correct understanding? ...and here I was thinking AJAX is the modern way of doing things
A library would have many books, so the list would be long.
Using AJAX allows you to fetch only the part of it the user is trying to read, without having to retrieve the entire list or navigate by reloading.
So for low-bandwidth and impatient users, AJAX is a godsend; for crawlers that need the entire page to collect data from, not so much.
So really you want to provide different content depending on the visitor.
How to identify web-crawler?
IMHO: serve the page from PHP; if the user agent is a robot, provide the full list, otherwise provide the fancy AJAX-based site that shows only what you want, when you want. A sketch of this idea follows.
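The answer above suggests PHP; here is a hedged sketch of the same idea in Node.js/Express instead (the bot patterns and the two renderers are illustrative placeholders):

const express = require('express');
const app = express();

// Illustrative, non-exhaustive crawler patterns
const BOT_PATTERN = /googlebot|bingbot|slurp|duckduckbot/i;

// Hypothetical renderers standing in for real templates
const renderFullBookListHtml = () =>
  '<html><body><ul><li>Book 1</li><li>Book 2</li></ul></body></html>';
const renderAjaxShellHtml = () =>
  '<html><body><div id="app"></div><script src="/app.js"></script></body></html>';

app.get('/books', (req, res) => {
  // Serve pre-rendered HTML to crawlers, and the AJAX shell to everyone else
  if (BOT_PATTERN.test(req.get('User-Agent') || '')) {
    res.send(renderFullBookListHtml());
  } else {
    res.send(renderAjaxShellHtml());
  }
});

app.listen(3000);

Keep in mind, as a later answer on this page notes, that serving crawlers different content than users see can be considered cloaking, so the pre-rendered version should contain the same content as the AJAX version.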
I'm just starting to play around with Orchard CMS. I like what I see so far, but I need to be able to create pages that display record details for data stored in another system. Does anyone know if that is possible?
I have a SQL Server database that holds real estate property record information. This information gets displayed on the web. On that same website are informational content pages (FAQs, Contact Us, Home, etc.). What I would like to do is leverage the CMS portion of Orchard for the content pages. Then I would like to write an Orchard module that would get the real estate info, allow users to search parcels, and display detail pages for each parcel.
If you view the site http://www.sc-pa.com/search you can search by last name "smith" and select one record. That may help illustrate what I need Orchard to do.
Yes, that is possible, but your scenario is way too vague to get into any specifics. Can you elaborate on exactly what you are trying to do: what does the external data look like, where is it stored, how do you want to integrate it into Orchard, do you need any integration with content types and parts, or with search, etc.?
One alternative is to expose your data as a web service or OData endpoint and then use jQuery to make an async call to get the JSON data. Then you're home free.
Create a page and put the JavaScript in it, or include a reference to a JS file.
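A minimal sketch of that approach (the /api/parcels endpoint and the field names are placeholders):

<div id="parcels"></div>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
  // '/api/parcels' is a placeholder endpoint exposing the external data as JSON
  $.getJSON('/api/parcels', function (data) {
    data.forEach(function (parcel) {
      // 'owner' and 'address' are placeholder field names
      $('#parcels').append($('<p>').text(parcel.owner + ' - ' + parcel.address));
    });
  });
</script>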
I am interested in the Google AdSense bot's algorithm and its behavior with a website. I have not worked with AdSense and I do not have an account, so I need your help to understand:
1) Googlebot downloads all pages from a website from time to time. Am I right?
2) Googlebot does not understand dynamic content (loaded by AJAX), so I must generate static content and return it within the HTML page, and these pages must show identical content to all users and to Googlebot?
3) Because of (1) and (2), I cannot use only the root path http://example.com with some "main" widget; I must generate unique pages, for example http://example.com/thread?id=101?
4) Googlebot downloads pages (see 1) to grab (index) keywords from them, and then stores this information on its servers, for example as key/value pairs (where the key is the page path and the value is a tag cloud). Am I right?
5) When a user opens the website in a browser, the integrated AdSense HTML code loads some JavaScript. As I understand from googling, this JavaScript does not index the page, but makes a call (with some parameter, key == page_path) to Google's server and gets the appropriate ad links, then shows these ad links in its frame. Is that the right behavior? Or maybe the JavaScript does some local indexing of the page's content?
6) How do Googlebot and AdSense's JavaScript work with cookies? As I understand it, AdSense can use cookies to show appropriate ad links. If that is right, please give me some use cases ;)
I know that the "true" algorithm is known only to engineers at Google, but some of you have experience with AdSense and its HTML/JavaScript. Please correct my vision of it ;)
Thank you very much for any advice!
P.S. This question is very important to me. It is not a question for fun, so please do not close it ;)
1) Yes, if Googlebot can access the pages and if it knows about them through links, XML sitemaps, Google +1, etc.
2) Googlebot will now make AJAX / XHR requests to understand AJAX content (http://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html).
Yes, you should show the same content to Googlebot as you would to users; otherwise this would be considered cloaking, which is against their guidelines.
3) This question isn't clear, but basically it's preferable to have the URL change, because Google will then know how to index the content separately. If you're using AJAX, then you might want to consider permalinks like you suggested, or you can use the HTML5 History API (pushState/popstate); a short sketch appears at the end of this answer.
4) Yes, Google will index the words on the page. I'm not certain they store it as a key/value pair. I'm not even sure if they're still using Bigtable (http://labs.google.com/papers/bigtable.html)... but it's likely they use Bigtable or a similar system to store the inverted index.
5) The AdSense code is embedded JavaScript. For new webpages that Google hasn't seen before, it tries to deliver the most relevant ads based on the information it has found on the web about the site, or possibly through the anchor text of links pointing to that page. However, to get a more accurate understanding of the content of the page, Google will send an AdSense-specific bot to crawl your page; sometimes you'll see it come very fast, even as soon as you load the page for the first time. It uses a different user agent than the traditional Googlebot; you can find all the user agents from Google here (http://www.google.com/support/webmasters/bin/answer.py?answer=1061943).
6) Google's crawlers don't accept cookies and won't pass cookies back to your server. This has to do with the massively distributed nature of Google's crawlers, which makes maintaining cookies or sessions extremely difficult.
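A short illustration of the History API approach mentioned in (3) above (loadThread is a hypothetical AJAX loader):

// Hypothetical loader that would fetch and render the thread via AJAX
function loadThread(id) {
  // fetch the thread data and render it (details omitted)
}

// Update the address bar without a full page reload when the user opens a thread
function showThread(id) {
  history.pushState({ id: id }, '', '/thread?id=' + id);
  loadThread(id);
}

// Re-render when the user navigates with the back/forward buttons
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.id) {
    loadThread(event.state.id);
  }
});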