Does Google read a CodeIgniter view displayed as a sitemap? - codeigniter-2

I have created a view that renders a sitemap with CodeIgniter when you access www.domain.com/sitemap.xml, but there is no real sitemap.xml file.
My question is whether Google will read this, or whether I need to create a real sitemap.xml file on my site.
Thank you very much.

You can try Google Webmaster Tools:
https://support.google.com/webmasters/answer/158587?hl=en
There you'll find a feature ("Fetch as Google") that shows you your page the way Google sees it.

Related

Can I set cookies for discord.Embed.set_thumbnail in discord.py?

I am making an online encyclopedia search bot with discord.py. The encyclopedia site is open only to students of my school, so I had to use cookies to fetch the documents' contents. But I ran into a problem: the site shows a different logo for each document, and I want to put that logo into discord.Embed.thumbnail. As I mentioned, the site is open only to my school's students, and so is the logo file.
Can I use my cookies to access the file link for discord.Embed.set_thumbnail? If so, how? Thanks for your help.
You could use the requests module to fetch the image, setting your cookies on the request. Then take the URL of the image and put it into the thumbnail field.
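A small sketch of that idea using only the standard library (the requests version is analogous). The URL and cookie names below are made up for illustration. One caveat: Discord's own servers fetch embed thumbnails without your cookies, so a cookie-protected URL usually won't render; the downloaded bytes typically have to be re-uploaded as an attachment instead.

```python
from http.cookies import SimpleCookie
from urllib.request import Request, urlopen

def cookie_header_to_dict(raw_header):
    """Parse a Cookie header copied from the browser's dev tools
    into a plain name -> value dict."""
    jar = SimpleCookie()
    jar.load(raw_header)
    return {name: morsel.value for name, morsel in jar.items()}

def fetch_logo(url, cookies):
    """Fetch an access-restricted image by replaying the browser's
    cookies (hypothetical URL; assumes the site accepts cookie auth)."""
    header = "; ".join(f"{k}={v}" for k, v in cookies.items())
    req = Request(url, headers={"Cookie": header})
    with urlopen(req) as resp:
        return resp.read()

# Because Discord fetches thumbnail URLs itself (without your cookies),
# the usual workaround is to upload the bytes and reference the attachment:
#   file = discord.File(io.BytesIO(image_bytes), filename="logo.png")
#   embed.set_thumbnail(url="attachment://logo.png")
#   await channel.send(file=file, embed=embed)
cookies = cookie_header_to_dict("sessionid=abc123; csrftoken=xyz")
```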

embedding Google My Business reviews onto a webpage

Before Google+ got shut down I was able to embed reviews from my Google+ business page directly into my website by using the following code:
<div class="g-post" data-href="https://plus.google.com/+myCompanyName/posts/C5mXxBfvuyQ"></div>
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
The link in the data-href attribute was obtained directly from the Google+ my business page.
Now that Google+ is gone, the Google My Business page provides a differently formatted link for reviews, such as:
https://business.google.com/reviews/l/13034364536743825118/r/AIe9_BFhqAtkXvUqdYNeMuBBGjaAo-4Nzsp8GqZodh3JinpksaIs5fbp68A98KcYqF2nBVn5d98tYmQEc0S_NHIm8awwzKlOh216MBgrXUXucioaxZb60DA
When I place the new link into my old code it does not work and I receive the following error in the console:
Failed to load resource: the server responded with a status of 404 ()
Does anyone know the way to embed My Business reviews now?
If anyone finds this, the best answer I could find is here:
How To Show Google Reviews On Website
It's much more technical than it used to be, but I suppose that's expected... lol
There is a WordPress widget I recently reviewed that displays your Google My Business reviews really nicely and lets you use your own CSS styles.
It's called "Google Reviews Widget":
https://richplugins.com/documentation

Display File from Google Drive in our Website in gmail-styled UI

I want to display a file from Google Drive to the user on my website. I have already searched and found that we can use the link of a Google Drive file with the corresponding {FILE_ID} to use it on our site. But I want to display the file in a Gmail-styled UI, where the file contents can be viewed on the page itself. Does Google provide any API for this kind of display, or is there any other way to do it? Please help me; any suggestions would be highly helpful.
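One commonly used approach (not an official "Gmail-style" API, as far as I know) is Drive's built-in preview endpoint, which renders many file types inline in an iframe. FILE_ID below is a placeholder, and the file must be shared so that anyone with the link can view it:

```html
<!-- Embeds Google Drive's own inline viewer for the file.
     Replace FILE_ID with the file's actual ID. -->
<iframe
  src="https://drive.google.com/file/d/FILE_ID/preview"
  width="640" height="480"></iframe>
```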

Hashbang URLs make the website difficult to crawl by Google?

Our agency built a dynamic website that uses a lot of AJAX interactions and #! (hashbang) URLs: http://www.gunlawsbystate.com/
It's a long book which you can scroll through, and the URL in the address bar changes dynamically. We have to support IE, so please don't advise using pushState; hashbang is the only option for us for now.
There's a navigation in the left sidebar which contains links to all chapters in the book.
An example of a link:
http://www.gunlawsbystate.com/#!/federal-properety/national-parks-and-wildlife-refuges/
We are expecting Google to crawl this:
http:// www.gunlawsbystate.com/?_escaped_fragment_=/federal-properety/national-parks-and-wildlife-refuges/
which is a complete HTML snapshot of the section. (There are also links to subsections, like www.gunlawsbystate.com/#!/federal-properety/national-parks-and-wildlife-refuges/ii-change-in-the-law/ => www.gunlawsbystate.com/?_escaped_fragment_=/federal-properety/national-parks-and-wildlife-refuges/ii-change-in-the-law/ .)
It all looks complete according to Google's specification ( developers.google.com/webmasters/ajax-crawling/docs/specification ).
The site has been running for about 3 months now. The homepage gets re-indexed every 10-15 days.
The problem is that for some reason Google doesn't crawl hashbang URLs properly. It seems like Google just "doesn't like" those URLs.
www.google.ru/search?&q=site%3Agunlawsbystate.com :
Just 67 pages are indexed. Notice that most of the pages Google has indexed have "normal" URLs (mostly WordPress blog posts, categories, and tags), and only 5-10% of the result pages are hashbang URLs, although there are more than 400 book sections with unique content which Google should really like if it crawled them properly.
Could someone advise me on why Google does not crawl our book pages properly? Any help will be appreciated.
P.S. I'm sorry for the non-clickable links; Stack Overflow doesn't let me post more than two.
UPD. The sitemap was submitted to Google a while ago. Google Webmaster Tools says that 518 URLs were submitted and just 62 URLs indexed. Also, on the "Index Status" page of Webmaster Tools, I see 1196 pages "Ever crawled" and 1071 pages "Not selected". This clearly points to the fact that, for some reason, Google doesn't index the #! pages that it visits frequently.
You are missing a few things.
First, you need a meta tag to tell Google that the hash URLs can be accessed via a different URL:
<meta name="fragment" content="!">
Next, you need to serve a mapped version of each of the URLs to Googlebot.
When Google visits:
http://www.gunlawsbystate.com/#!/federal-regulation/airports-and-aircraft/ii-boarding-aircraft/
It will instead crawl:
http://www.gunlawsbystate.com/?_escaped_fragment_=/federal-regulation/airports-and-aircraft/ii-boarding-aircraft/
For that to work, you need something like PHP or ASP to serve up the correct page. ASP.NET routing would also work if you can get the plumbing correct. There are also services that will create these "snapshot" versions for you; in that case your meta tag will point to their servers.
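The URL mapping itself is mechanical. A minimal sketch of the rewrite, per Google's (now historical) AJAX crawling specification, assuming everything after #! is the fragment and slashes may stay literal:

```python
from urllib.parse import quote

def hashbang_to_escaped_fragment(url):
    """Rewrite a #! URL into the _escaped_fragment_ form that the
    (now deprecated) AJAX crawling scheme had crawlers request."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hashbang: nothing to rewrite
    joiner = "&" if "?" in base else "?"
    # The spec percent-encodes the fragment value; "/" can stay as-is.
    return f"{base}{joiner}_escaped_fragment_={quote(fragment, safe='/')}"
```

The server then has to answer the rewritten request with the HTML snapshot of the section, since that is the URL the crawler actually fetches.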
The AJAX crawling scheme is now deprecated by Google, and Google no longer accesses content under hashbang URLs.
Based on my research, Google now avoids escaped-fragment URLs and suggests creating separate pages rather than using hashbangs.
So I think pushState is the remaining option in this case.

Is my AJAX content already crawlable?

I have built a site based on AJAX navigation.
I built it so that whenever someone without JavaScript visits my site, the nav links, which usually load content via AJAX, act like normal links, and the user can browse through the pages as usual.
Since Googlebot doesn't run JavaScript, it should theoretically be able to follow all links and reach the corresponding pages as usual, right? They are valid links with the href attribute pointing to the corresponding page.
Now I am wondering if that's sufficient, or if I need to implement this method from Google too, to make sure Google sees all my content.
Thanks for your insights, and excuse my poor English!
If you can navigate your site by viewing the source (Ctrl-U in Chrome), Google can also crawl your site. Yes, it's that simple.
