I have developed code with the help of the Google API mail merge sample. Link: https://developers.google.com/docs/api/samples/mail-merge
It works great, but the problem is that it creates a different sheet for each row. How can I make it put all the rows in the same sheet?
Template link: https://docs.google.com/document/d/1eSt7mK17zwze8UcM65zA8OOGI0kPK6n-QaTZ6Rra1r4/edit?usp=sharing
Sheet link: https://docs.google.com/spreadsheets/d/1zTZRO27k_f5YsX9zkUXjsmuHsGvYdjDntUk0cJHLDog/edit#gid=1089207158
Full code of my project: https://github.com/EyadZaeim/MDS
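For context, the behaviour I'm after is roughly this (a minimal Apps Script sketch, not my actual code; it assumes {{Column Name}} placeholders in the template and merges only the template's plain text, one section per row, into a single output Doc):
function mergeAllRowsIntoOneDoc() {
  var template = DocumentApp.openById('1eSt7mK17zwze8UcM65zA8OOGI0kPK6n-QaTZ6Rra1r4'); // template Doc from the link above
  var data = SpreadsheetApp.openById('1zTZRO27k_f5YsX9zkUXjsmuHsGvYdjDntUk0cJHLDog')   // sheet from the link above
      .getSheets()[0].getDataRange().getValues();
  var headers = data.shift();                                      // first row holds the column names
  var templateText = template.getBody().getText();                 // plain text only; formatting is not copied
  var body = DocumentApp.create('Merged output').getBody();
  data.forEach(function (row) {
    var section = templateText;
    headers.forEach(function (header, i) {
      section = section.split('{{' + header + '}}').join(row[i]);  // swap each placeholder for the row value
    });
    body.appendParagraph(section);                                 // every row lands in the same document
    body.appendPageBreak();                                        // one page per row
  });
}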
I am trying to reproduce an RSS reader like Feedly with Google Sheets, displayed with Glide as an app on my mobile phone.
Everything is fine with the IMPORTFEED() function for titles, descriptions, and URLs.
But it seems this function doesn't allow pictures to be displayed, even when they are in the feed (which is not always the case).
So I am looking for a way to extract the main image from a blog post: the one displayed when you hover over a link in a Google Sheets cell.
I would like to get the link of the image displayed in the link preview and put that link in another cell.
Here is an example:
I tried IMAGE(),
and also IMPORTXML when there is an image in the RSS feed (but not all of them have one, so I stopped).
Is it possible in Google Sheets to get the main image from the one displayed in the link preview?
For instance, one of the blogs whose articles I want to pull the main picture from is Creajv (URL: https://creajv.com/ ; feed: https://creajv.com/feed/).
So the IMPORTFEED() formula I wrote in Google Sheets was:
=IMPORTFEED("https://creajv.com/feed/";"items";FALSE;3)
Which stands for:
=IMPORTFEED(...) the function to import feeds from a URL
"items" the way to pull all the data there is in the feed (you can use other parameters and see all the possibilities in the Google Sheets formula documentation)
FALSE because I don't want the headers to be included
and the number 3 because I want only the last 3 results displayed.
And it displays perfectly: author, description, URL, date.
But I did a little digging in Google and found that IMPORTFEED() basically cannot get images from feeds, even if the image is added by the author of the blog (they would have to add a specific feature for it).
So I am now trying to see if there is another way, other than IMPORTFEED(), to get the main image of a blog post every time.
And I saw that Google Sheets is able to pull it instantly when I copy-paste the URL of a blog article into a cell, for instance for Creajv:
Screenshot: the preview image I get when I click the cell which contains the post URL.
So my thought is that I could pull the author, date, description, etc. with IMPORTFEED (which works perfectly every time) and use a formula on the cell containing the URL to get, in another cell, the URL of the picture shown in the link preview.
Two other possibilities might be with Google Apps Script:
creating a custom function with Apps Script (see the sketch after this list)
or creating a script that pulls the image into a cell every time a new row is added via the IMPORTFEED() function.
Functions only, as Apps Script doesn't run in the mobile apps.
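Roughly, the custom-function idea would look like this (a minimal, untested sketch: it assumes the post page exposes its main image in an og:image meta tag, as many WordPress themes do, and MAINIMAGE is just a made-up name):
function MAINIMAGE(url) {
  // fetch the post page and pull the og:image meta tag
  // (simplified: assumes the property attribute comes before content)
  var html = UrlFetchApp.fetch(url).getContentText();
  var match = html.match(/property=["']og:image["'][^>]*content=["']([^"']+)["']/i);
  return match ? match[1] : '';
}
It would be used as =MAINIMAGE(B2), where B2 holds a post URL returned by IMPORTFEED().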
How about this solution? I checked the website and inspected the image from the thumbnail.
Luckily, the structure is simple:
<div class="article-image">
<img src="https://creajv.com/wp-content/uploads/2020/11/HighresScreenshot00000.png" alt="Concours de Level Design avec Unreal Engine, du 11/11 au 05/12/2020">
</div>
You can get the URL with IMPORTXML and apply IMAGE to it:
=IMAGE(ImportXml("https://creajv.com/2020/11/08/concours-de-level-design-avec-unreal-engine/", "//div[#class='article-image']//img/#src"))
Since you are already retrieving the post URL with your previous formula, replace the source URL with the corresponding cell:
=IMAGE(IMPORTXML(C1, "//div[@class='article-image']//img/@src"))
There is a Google Sheet containing a list of MPNs (manufacturer part numbers). I am trying to scrape a site called WikiArms for the UPC codes when I have the MPN for an item.
I have the correct formula for doing this on another site:
=IMPORTXML("http://gun.deals/search/apachesolr_search/"&B1,"//dd/a[../../dt[contains(text(),'UPC')]]|//dd/span[../../dt[contains(text(),'UPC')]]")
I am trying to figure out the correct XPath to complete this formula. Some videos I have watched said to open the page in Chrome and use the inspector to select and copy the XPath to complete the IMPORTXML function. I tried this with no luck.
Sample
Visit https://www.wikiarms.com/guns?q=20071
In the table there is a button "available in 6 stores"; click that to reveal the list. The UPC should be listed after the MPN.
If I copy the XPath in Chrome, this is the result:
/html/body/div[1]/div/div/div[2]/div/div/div[2]/div[2]/table/tbody/tr[2]/td[5]
=IMPORTXML("https://www.wikiarms.com/guns?q="&B2,"xpath here")
What do I have to add at the end of this formula to pull in the UPC code? I will be using this formula to pull in the UPC code for about 1000 items.
Thank you for your help.
Using your sample link, try
=IMPORTXML("https://www.wikiarms.com/guns?q=20071","//td[#class='upc']/a/#title")
and see if it works for you.
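If IMPORTXML turns out to be slow or unreliable across ~1000 rows, a custom function doing the same lookup is one alternative. A minimal sketch (assumptions: the UPC really is in the title attribute of the link inside <td class="upc">, which is what the XPath above targets, and that markup stays stable; UPCFROMMPN is a made-up name):
function UPCFROMMPN(mpn) {
  // fetch the search page for this MPN and pull the title of the link in the UPC column
  var html = UrlFetchApp.fetch('https://www.wikiarms.com/guns?q=' + encodeURIComponent(mpn)).getContentText();
  var match = html.match(/<td class=["']upc["']>\s*<a[^>]*title=["']([^"']+)["']/i);
  return match ? match[1] : '';
}
Then =UPCFROMMPN(B2) can be filled down the column.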
I am trying to take data from https://financials.morningstar.com/ratios/r.html?t=0P0000032S&culture=en&platform=sal and use the values of the table in a Google Sheet. It shows as table 1 when I inspect the element, but when I use:
=IMPORTHTML("https://financials.morningstar.com/ratios/r.html?t=0P0000032S&culture=en&platform=sal","table",1)
in Google Sheets, it says the imported content is empty. Any help on how to import this data?
I've tried IMPORTHTML using the table number references found when I inspected the page.
Unfortunately, that won't be possible, because the site is controlled by JavaScript and Google Sheets can't understand/import JS. You can test this simply by disabling JS for the given link; you will then see a blank page.
Today, when experimenting with importXML in Google Sheets, I ran into a problem. I was attempting to import the title header of a USTA tournament page into the Google Sheet; however, this did not work, as it just resulted in the HTML title of the webpage being displayed ('TournamentHome'). Below are the Google Sheet and the website being used:
Google Sheet and Function:
=IMPORTXML(F2, "//html//body[@id='thebody']//div[@id='content']//div[@id='pagetitle']")
Website and Section of Source Code Being Used
The title that I am trying to extract from the website is TOWPATH 24th ANNUAL THANKSGIVING JR SINGLES.
The link to the website is https://m.tennislink.usta.com/tournamenthome?T=225779
Update: this greps the raw page source (via IMPORTDATA) for the line containing 'escape' and extracts the quoted value inside it:
=REGEXEXTRACT(QUERY(ARRAY_CONSTRAIN(IMPORTDATA(
"https://m.tennislink.usta.com/tournamenthome?T=225779"), 555, 1),
"where Col1 contains 'escape'"), "\(""(.*)""\)")
Unfortunately, that won't be possible the way you are trying, because the field you are attempting to scrape is controlled by JavaScript and Google Sheets can't understand/import JS. You can test this simply by disabling JS for the given link and you will see exactly what can be imported into Google Sheets.
How about this sample formula? In this formula, the title value is retrieved directly from the script before the value is put into #pagetitle. Please think of this as just one of several possible answers.
Sample formula:
=REGEXEXTRACT(IMPORTXML(A1,"//div[@class='tournament_search']/script"),"escape\(""([\w\s\S]+)""")
Result:
When https://m.tennislink.usta.com/TournamentHome/tournament.aspx?T=38079 and https://m.tennislink.usta.com/tournamenthome?T=225779 are put in "A1" and "A2", the results are as follows.
Reference:
REGEXEXTRACT
Is my expectation valid? If yes, please guide me.
Local machine -> local server process -> I generate a BIRT report, which contains hard-coded hyperlinks, for example: http://www.ip_one.com/birtserver/parameters... (this is fine; it points to another report and also gets me that report when I hit the URL from inside the generated PDF report).
Now, what I need is to change ip_one to, say, ip_two when I hit the hyperlink inside the PDF (on the fly), keeping all the other parts of the URL intact.
I am using birt-rcp-report-designer-4_2_2.
Thanks in advance.
It sounds like you are trying to dynamically create a hyperlink at report run time.
In the properties editor of the report item that has your link (i.e. a label), edit the hyperlink. In the Hyperlink Options pop-up, to the left of the "Location" field is the ab| button; click it and select 'JavaScript' syntax.
You will be able to create the URL using JavaScript.
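For example, a hedged sketch of such an expression ("targetHost" is a hypothetical report parameter that holds the host you want, e.g. the ip_two host, chosen at run time; everything after the host stays exactly as in your report):
// BIRT hyperlink expressions are JavaScript; "targetHost" is a hypothetical report parameter
"http://" + params["targetHost"].value + "/birtserver/..."  // keep your original path and query here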