I created a Teams app and want to put a link in the description. Is it possible? I tried wrapping the link in [a]link[/a] tags and pasting it as a plain link; both options are escaped.
The description field supports simple Markdown syntax. I haven't tried a link, but you can easily test it on your side. See here for details on links and other Markdown syntax: https://www.markdownguide.org/cheat-sheet/
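For reference, a standard Markdown link (untested in the Teams description field, as noted above) looks like this:

```
See our [documentation](https://example.com/docs) for details.
```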
I have a problem displaying doc/docx/PDF files given an online link to them (e.g. http://my.site.com/files/file_id_123423234).
The two alternatives I know of, both provided by Google, are the following links:
http://docs.google.com/gview?url=[link_to_file]
Example:
http://docs.google.com/gview?url=http://my.site.com/files/file_id_123423234&embedded=true
https://docs.google.com/viewerng/viewer?url=[link_to_file]
Example:
https://docs.google.com/viewerng/viewer?url=http://my.site.com/files/file_id_123423234&embedded=true
Both alternatives do not always manage to display the document successfully; the HTTP call sometimes ends with "no preview available".
Can you please provide a solution / alternative?
You just need to use embed links. To do this, you have to publish the files to the web, as described in Publish and embed Google Docs, Sheets, Slides & Forms.
File -> Publish to the Web. Get the Embed Link.
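As a side note on the gview links from the question: the url query parameter should be percent-encoded, since unescaped characters such as & or ? in the file link can break the query string and produce "no preview available". A minimal Python sketch, using the hypothetical file URL from the question:

```python
from urllib.parse import quote

# Percent-encode the whole file URL so it survives as a single query parameter.
file_url = "http://my.site.com/files/file_id_123423234"
viewer_url = "https://docs.google.com/gview?embedded=true&url=" + quote(file_url, safe="")
print(viewer_url)
```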
I'm totally stumped on this and reaching out for help!
I'm using Import.io crawler to extract reviews from TripAdvisor. However when I am training the crawler, the "more" button is inactive.
Here's an example of the page: http://www.tripadvisor.co.uk/Hotel_Review-g295424-d306662-Reviews-Hilton_Dubai_Jumeirah_Resort-Dubai_Emirate_of_Dubai.html#REVIEWS
Here is the XPath to the review in full: //*[@id="UR288083139"]/div[2]/div/div[3]
And to the More button:
//*[@id="review_288083139"]/div[1]/div[2]/div/div/div[3]/p/span
Is it possible to write an XPath so the full review is included in Import.io?
One way you can do this is by using a Crawler and then an Extractor, which splits the process into two parts.
Create a crawler that you train to capture the links to every review on the page. Make sure that you select "link" for the column.
[Screenshot: sample review from the website]
Create an Extractor to capture the full review from the links you got from the crawler.
Voila! You got all reviews!
Note: if you already have all the links to the pages you need reviews from, build an Extractor instead of a Crawler. That way, you can chain the API to the other Extractor. You only need a Crawler if you don't know all the links.
Hope this helps!
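Outside Import.io, the same two-step idea (first collect the per-review links, then extract each full review) can be sketched in plain Python. The HTML structure and class names below are hypothetical stand-ins, not TripAdvisor's real markup:

```python
from xml.etree import ElementTree

def extract_review_links(html: str) -> list:
    """Step 1 (the 'crawler'): collect per-review links from a listing page."""
    root = ElementTree.fromstring(html)
    # ElementTree supports a limited XPath subset, enough for attribute filters.
    return [a.attrib["href"] for a in root.findall(".//a[@class='review-link']")]

# Hypothetical listing-page snippet standing in for the real site.
listing = """
<div>
  <a class="review-link" href="/Review-1.html">More</a>
  <a class="review-link" href="/Review-2.html">More</a>
</div>
"""
print(extract_review_links(listing))  # step 2 would then fetch each link
```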
It looks like the HTML is NOT on the page before you click that button, and there isn't a URL which exposes that data, so you may be out of luck.
You could try playing around with the developer console to see if you can find the full reviews buried in an XML file or dynamic URL somewhere, though I'm not sure how.
I have a question: suppose I comment on a blog that doesn't accept HTML URLs. Is there any chance I will get backlinks for my site if I edit my Disqus profile and enter my URL there?
In short: how can I get backlinks from Disqus, without spamming?
There is no way to get direct backlinks through the Disqus comment system other than pasting them into the comment content.
As you mentioned, you can add your URL to your Disqus profile page, and each comment you make will link to that page, but with a no-follow link. The URL on the profile itself is also no-followed, so it's hardly a worthwhile way of building backlinks. It is, however, an extremely good way of getting involved in communities and networking, so use it for that instead.
In short: You cannot indirectly build backlinks to your chosen url with Disqus.
If you mean getting a backlink using anchor text, it's simply not possible.
I have read a few posts saying that you do get a backlink from WordPress sites using Disqus, but that those backlinks are nofollow. On other sites with Disqus, Google does not index the comments at all, so you don't get a link.
I was hoping someone could help me fix an issue. When someone posts a link to my Joomla-created website, they get the heading "Whats New?", which is my default article page for the site (the current blog articles).
For example, if someone posted my link on facebook, it would look like this:
Whats New?
MyDomain.com
Description of website goes here...
Everything looks great except for the "Whats New?". Is there a way to show my website's name instead of the name of the default page? And how about showing an image? When the link is posted on Facebook, there is just text and no image.
Thanks, any help would be greatly appreciated
Facebook uses Open Graph data to build those posts. If Facebook isn't offered Open Graph data, it will use its own methods to try to find the information it needs, sometimes with useless results. There are a lot of options to fix this: the Joomla Extensions directory has a few Open Graph extensions for you to install, and some of those should work fine. You can always write something yourself or add the data in your template. But don't expect results right away, because Facebook caches those media objects for some time.
Open graph: https://developers.facebook.com/docs/opengraph/
Joomla Extensions: http://extensions.joomla.org/extensions/site-management/seo-a-metadata/open-graph
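If you end up adding the data in your template yourself, the basic Open Graph tags Facebook reads look roughly like this (all values below are placeholders, not your site's real ones):

```html
<!-- Inside <head>; every value here is a placeholder -->
<meta property="og:title" content="MyDomain.com" />
<meta property="og:description" content="Description of website goes here..." />
<meta property="og:image" content="https://mydomain.com/images/preview.jpg" />
<meta property="og:url" content="https://mydomain.com/" />
```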
There are more ways to fix this, but this is probably the easiest for you. Hope it helps.
Good Luck.
In the Joomla backend, do the following:
Open the menu item that the article is assigned to.
On the right-hand side, open the Page Display Options panel.
Add whatever you like to the Browser Page Title parameter.
Hope this helps
First time posting here, and I'm a newbie at Google Apps. I am putting together a URL in a spreadsheet for a LinkedIn company, for example: http://www.linkedin.com/company/National-Renewable-Energy-Laboratory
Can I use =importXML from a Google spreadsheet plus XPath to get the website URL that is listed on each company page?
I have gotten to the point where I can extract all the hrefs from the page, and the link I need is among them, but I only want the website URL.
Here is what I am using so far:
=importXML(R2, "//*[@href]")
Here is a link to my spreadsheet: https://docs.google.com/spreadsheet/ccc?key=0AheVK6uxf6AvdHhILTFrR1k4Wl9tWW5OVWpRRUJKMlE
The code is in S2
Appreciate your response.
//*[@href] matches elements that have an href attribute, not the href attributes themselves. Try //@href instead.
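A concrete version of that suggestion for cell S2, plus a narrowed variant using the XPath contains() function to filter the attribute values (the 'http' filter below is just an illustration; adjust it to match the link you want):

```
=importXML(R2, "//@href")
=importXML(R2, "//a[contains(@href, 'http')]/@href")
```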
It's more complicated, but a good solution would be to use the LinkedIn API, which you can access using UrlFetchApp.