My site title is not appearing in Google search results.
How can I fix this?
Google search: https://i.stack.imgur.com/Fv8Oc.png
Site title: https://i.stack.imgur.com/FfJqo.png
Code: https://i.stack.imgur.com/oQqLM.png
When did you make this change? It takes time for Google to update a page title (usually a few days).
You may want to take a look at Google's documentation.
You can also add your website to Google Search Console and ask Google to recrawl your URLs.
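If you want to nudge Google programmatically as well, one option available at the time of writing was the sitemap ping endpoint. A minimal sketch in Python, assuming a hypothetical sitemap at https://example.com/sitemap.xml:

# Minimal sketch: ping Google with a sitemap URL so it gets re-fetched sooner.
# The sitemap URL below is a hypothetical placeholder; use your own.
import urllib.parse
import urllib.request

sitemap_url = "https://example.com/sitemap.xml"  # placeholder
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")

with urllib.request.urlopen(ping) as resp:
    # A 200 response only means the ping was received,
    # not that indexing is guaranteed or immediate.
    print(resp.status)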
My question is: when I search my website's name on Google Search, how do I get star ratings to show up under my site's result?
If you use WordPress, you can use the WP-PostRatings plugin, which makes it very easy to enable ratings for articles, and it will show the total rating in Google results.
Or, if you're not using a CMS, read this crash tutorial about adding stars: https://www.webpagefx.com/blog/seo/how-to-get-stars-search/
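For context, the stars come from structured data (schema.org rating markup) that Google reads from your pages. A minimal sketch, using Python here just to emit the JSON-LD script tag; the product name and rating values are made-up placeholders:

# Minimal sketch: generate a JSON-LD <script> tag with schema.org
# AggregateRating markup, which is what Google reads to show stars.
# All values below are made-up placeholders.
import json

data = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",          # placeholder
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",           # placeholder average rating
        "reviewCount": "89",            # placeholder review count
    },
}

print('<script type="application/ld+json">')
print(json.dumps(data, indent=2))
print("</script>")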
I am trying to set up a process where I can query a Google API with a text string and get the first image result.
I have set up a Google Custom Search Engine, but I can't seem to replicate a standard Google search. My settings:
Image Search: ON
Domains to include: google.com. Include all sites this page links to
My hope was that the above settings would imitate a standard Google search. But when I try the term "poker", I get no results. I'm assuming this has something to do with my CSE settings, but I'm not sure how to adjust them.
You need to choose the option to search the entire web:
In the left-hand menu, under Control Panel, click Basics.
In the Search Preferences section, select Search the entire web but emphasize included sites.
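Once that option is set, you can query the engine programmatically via the Custom Search JSON API. A minimal sketch in Python, assuming you already have an API key and your engine's cx ID from the control panel (both placeholders below):

# Minimal sketch: fetch the first image result for a query via the
# Custom Search JSON API. API_KEY and CX are placeholders you must supply.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
CX = "YOUR_ENGINE_ID"     # placeholder

params = urllib.parse.urlencode({
    "key": API_KEY,
    "cx": CX,
    "q": "poker",
    "searchType": "image",  # restrict results to images
    "num": 1,               # we only want the first hit
})

url = "https://www.googleapis.com/customsearch/v1?" + params
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

items = data.get("items", [])
print(items[0]["link"] if items else "no results")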
I am using "Open SEO stats Plugin" to check the page rank of any domain. I have also a custom google script which shows page rank in google sheets. It seems "toolbarqueries not working" (). Can anyone suggest any other sources where I can get the official page rank other than hitting "http://toolbarqueries.google.com/tbr"
Google has removed external PageRank access. There is no longer a method to get it.
Google had a beautiful API which you could use to search for large images, but unfortunately they decided to disable it. Now you can use their Custom Search Engine, but it doesn't come even close to what that old API could do. For a start, the results you get are not the same as if you search on the common search page in your browser, and you can't specify the size of the images you are searching for.
Is there any programmatic way I can get a list of the URLs of the images I would find on the common Google search page, size included?
You can scrape the Google image search results and parse out the links to the images. The urllib2 library in Python can help you here (see the sketch below).
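A minimal sketch of that approach, assuming Google's result markup still embeds direct image URLs; the markup changes frequently and scraping may violate Google's terms of service, so treat this as illustrative only. (urllib2 in Python 2 became urllib.request in Python 3.)

# Minimal sketch: fetch a Google Images results page and pull out
# anything that looks like a direct image URL.
import re
import urllib.parse
import urllib.request

query = "poker"
url = "https://www.google.com/search?tbm=isch&q=" + urllib.parse.quote(query)

req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="ignore")

# The exact markup Google uses has changed many times,
# so this pattern is a guess, not a stable contract.
links = re.findall(r'https?://[^"\']+?\.(?:jpg|jpeg|png|gif)', html)
for link in links[:10]:
    print(link)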
Our agency built a dynamic website that uses a lot of AJAX interactions and #! (hashbang) URLs: http://www.gunlawsbystate.com/
It's a long book which you can scroll through, and the URL in the address bar changes dynamically. We have to support IE, so please don't advise using pushState; hashbang is the only option for us for now.
There's a navigation in the left sidebar which contains links to all chapters in the book.
An example of a link:
http://www.gunlawsbystate.com/#!/federal-properety/national-parks-and-wildlife-refuges/
We are expecting Google to crawl this:
http://www.gunlawsbystate.com/?_escaped_fragment_=/federal-properety/national-parks-and-wildlife-refuges/
which is a complete HTML snapshot of the section. (There are also links to subsections, e.g. www.gunlawsbystate.com/#!/federal-properety/national-parks-and-wildlife-refuges/ii-change-in-the-law/ => www.gunlawsbystate.com/?_escaped_fragment_=/federal-properety/national-parks-and-wildlife-refuges/ii-change-in-the-law/.)
It all looks complete according to Google's specification (developers.google.com/webmasters/ajax-crawling/docs/specification).
The site has been running for about 3 months now. The homepage gets re-indexed every 10-15 days.
The problem is that for some reason Google doesn't crawl hashbang URLs properly. It seems like Google just "doesn't like" those URLs.
www.google.ru/search?&q=site%3Agunlawsbystate.com :
Just 67 pages are indexed. Notice that most of the pages Google indexed have "normal" URLs (mostly WordPress blog posts, categories, and tags), and only 5-10% of the result pages are hashbang URLs, even though there are more than 400 book sections with unique content which Google should really like if it crawled them properly.
Could someone give me some advice on this: why does Google not crawl our book pages properly? Any help will be appreciated.
P.S. I'm sorry for the non-clickable links; Stack Overflow doesn't let me post more than 2.
UPD. The sitemap was submitted to Google a while ago. Google Webmaster Tools says that 518 URLs were submitted and just 62 URLs indexed. Also, on the 'Index Status' page of Webmaster Tools I see 1196 pages "Ever crawled" and 1071 pages "Not selected". This clearly points to the fact that for some reason Google doesn't index the #! pages that it visits frequently.
You are missing a few things.
First, you need a meta tag to tell Google that the hash URLs can be accessed via a different URL.
<meta name="fragment" content="!">
Next, you need to serve a mapped version of each of the URLs to Googlebot.
When google visits:
http://www.gunlawsbystate.com/#!/federal-regulation/airports-and-aircraft/ii-boarding-aircraft/
It will instead crawl:
http://www.gunlawsbystate.com/?_escaped_fragment_=/federal-regulation/airports-and-aircraft/ii-boarding-aircraft/
For that to work, you either need to use something like PHP or ASP to serve up the correct page. ASP.NET routing would also work if you can get the plumbing correct. There are services which will actually create these "snapshot" versions for you, and then your meta tag will point to their servers.
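For illustration, here is a minimal sketch of that idea in Python (Flask) rather than PHP or ASP; render_snapshot is a hypothetical helper standing in for whatever produces your HTML snapshots:

# Minimal sketch of _escaped_fragment_ handling, in Python/Flask for
# illustration (the PHP/ASP approach the answer mentions is the same idea).
from flask import Flask, request

app = Flask(__name__)

def render_snapshot(path):
    # Hypothetical helper: return the full HTML snapshot for a section,
    # e.g. by pre-rendering it server-side or reading a cached file.
    return "<html><body>Snapshot for %s</body></html>" % path

@app.route("/")
def index():
    fragment = request.args.get("_escaped_fragment_")
    if fragment is not None:
        # Googlebot requested /?_escaped_fragment_=...: serve static HTML.
        return render_snapshot(fragment)
    # Normal visitors get the AJAX application shell.
    return "<html><body><!-- AJAX app bootstraps here --></body></html>"

if __name__ == "__main__":
    app.run()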
The AJAX crawling scheme has since been deprecated by Google, and Google no longer accesses content under hashbang URLs this way.
Based on research, Google now avoids _escaped_fragment_ URLs and suggests creating separate pages rather than using hashbangs.
So I think pushState is the other option which can be used in this case.