Add blocked (via robots.txt) URLs in sitemap?

In my sitemap there are some links that I don't want Google to index, so I blocked them using robots.txt.
Now Google Webmaster Tools is showing warnings for those URLs. Will this adversely impact my website in Google?

It's better to remove these URLs from the XML sitemap. A sitemap should only list URLs you want crawled and indexed; listing URLs that robots.txt blocks sends contradictory signals, which is exactly what the warning is flagging.
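A minimal sketch of a consistent setup, assuming a hypothetical /private/ section:

    # robots.txt -- block the section you don't want crawled
    User-agent: *
    Disallow: /private/

Then make sure no URL under /private/ appears in a <url> entry in sitemap.xml, so the two files never contradict each other.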

Related

How to make robots.txt and sitemap.xml only accessible to search engine bots

Several websites, such as Quora, Stack Exchange, and Stack Overflow (https://stackoverflow.com/sitemap.xml), make these files accessible only to search engine crawlers (Google, Yahoo, Bing, etc.).
* How can I do the same for my website's robots.txt and sitemap.xml?
* What user-agents do these crawlers use, and where can I find a list?
* Google and Bing crawlers do not use static IPs; they crawl from a large, changing pool of addresses. How do big sites like Stack Overflow manage to whitelist crawler IPs?
* How does content on big sites get indexed by Google almost instantly? This question, for example, will be indexed right after it is published, whereas my website usually takes 2-7 days.
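On the IP question: neither Google nor Bing publishes a fixed IP whitelist. What Google documents instead is verifying a claimed Googlebot with a forward-confirmed reverse DNS lookup. A minimal sketch in Python (the function name is mine; the googlebot.com/google.com host suffixes are the ones Google documents):

    import socket

    def is_verified_googlebot(ip: str) -> bool:
        """Forward-confirmed reverse DNS check for a claimed Googlebot IP."""
        try:
            # Reverse lookup: IP -> hostname
            hostname, _, _ = socket.gethostbyaddr(ip)
        except OSError:
            return False
        # Genuine Googlebot hosts end in googlebot.com or google.com
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        try:
            # Forward-confirm: the hostname must resolve back to the same IP
            forward_ips = socket.gethostbyname_ex(hostname)[2]
        except OSError:
            return False
        return ip in forward_ips

A web server can then serve robots.txt and sitemap.xml only when this check (plus a matching user-agent string) passes, and return a 404 otherwise. Cache the result per IP, since doing two DNS lookups on every request is slow.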

Google doesn't index my site with sitemap.xml

I have a problem with indexing my website. I submitted my sitemap to Google, and Google then removed all of my indexed pages. My sitemap is here: enter link description here
What is wrong with this sitemap?
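Hard to say without seeing the file, but for comparison, this is the minimal structure a valid sitemap needs (example.com is a placeholder):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
      </url>
    </urlset>

Common failures are a wrong or missing namespace, listed URLs that do not match the host the sitemap is served from, or the file being served with a non-XML Content-Type.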

Google Not Indexing AJAX URLs

I have submitted a sitemap for my AJAX web application to Google via their Webmaster Tools. The submitted URLs are of the form:
http://www.mysite.com/#!myscreen;id=object-id
http://www.mysite.com/#!myotherscreen;id=another-id
However, even though more than a week has passed since the sitemap submission, Google has not indexed the URLs. Google reports that the sitemap has been processed, that 60 URLs were detected, and that no errors occurred, but it does not index any of the URLs.
I have already implemented the AJAX crawlability contract on the server side, where requests containing an _escaped_fragment_ are responded to with a snapshot.
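For reference, under that contract the crawler requests each hash-bang URL in rewritten form (special characters in the fragment may additionally be percent-encoded), for example:

    http://www.mysite.com/#!myscreen;id=object-id
    is fetched by the crawler as:
    http://www.mysite.com/?_escaped_fragment_=myscreen;id=object-id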
Any help/info regarding why Google is not indexing the URLs would be greatly appreciated.
See also: GWT SE friendly application.
Suggestions include following the guide at http://code.google.com/web/ajaxcrawling/.
Nowadays you don't need to do anything specific for Google anymore; the AJAX crawling scheme has been deprecated by Google.
Just make sure that your website is easy to use for your users, and Google will be able to crawl it properly.
If you want to go the extra mile, however, you can check this article:
* https://moz.com/blog/optimizing-angularjs-single-page-applications-googlebot-crawlers

Merging UserVoice and Google Analytics JS into application.js

Both UserVoice and Google Analytics provide a small JavaScript snippet that internally loads a bigger JS file from their servers during page load. I am looking to reduce HTTP requests. Can I download these files and let the Rails asset pipeline merge them into one at precompile time? Or do you think this will cause issues?
You can't do that. Both UserVoice and Google Analytics need to make AJAX requests back to their own sites.
Under the cross-domain request policy for AJAX, those scripts have to be served from the same domain as the endpoints they call. If you packed them into your own JavaScript, they would break.

How to prevent Googlebot from overwhelming site?

I'm running a site with a lot of content, but little traffic, on a middle-of-the-road dedicated server.
Occasionally, Googlebot will stampede us, maxing out Apache's memory and crashing the server.
How can I avoid this?
* Register at Google Webmaster Tools, verify your site, and throttle Googlebot down.
* Submit a sitemap.
* Read the Google guidelines (in particular on the If-Modified-Since HTTP header).
* Use robots.txt to restrict the bot's access to some parts of the website (see the sketch after this list).
* Make a script that changes robots.txt each $[period of time] to make sure the bot is never able to crawl too many pages at the same time, while making sure it can crawl all the content overall.
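A sketch of the robots.txt approach (the paths are hypothetical; note that, as pointed out below, Google ignores Crawl-delay, so Googlebot itself has to be throttled via Webmaster Tools):

    User-agent: *
    # Seconds between requests; honored by Bing and Yandex, ignored by Googlebot
    Crawl-delay: 10
    # Keep bots out of expensive, low-value sections
    Disallow: /search/
    Disallow: /calendar/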
You can set how your site is crawled using Google's Webmaster Tools. Specifically, take a look at this page: Changing Google's crawl rate.
You can also restrict the pages Googlebot crawls using a robots.txt file. There is a Crawl-delay directive available, but it appears that it is not honored by Google.
Register your site with Google Webmaster Tools, which lets you set how often, and at how many requests per second, Googlebot should try to index your site. Google Webmaster Tools can also help you create a robots.txt file to reduce the load on your site.
Note that you can set the crawl speed via Google Webmaster Tools (under Site Settings), but they only honour the setting for six months! So you have to log in every six months to set it again.
This has since changed: the setting is now only saved for 90 days (3 months, not 6).
You can configure the crawling speed in Google's Webmaster Tools. To limit the crawl rate:
1. On the Search Console home page, click the site that you want.
2. Click the gear icon, then click Site Settings.
3. In the Crawl rate section, select the option you want and limit the crawl rate as desired.
The new crawl rate will be valid for 90 days.
