We have 3 eshops for different countries, each on a different URL, and the products automatically share stock and translations.
We have around 6,000 products now, and Google Search Console has discovered most of the pages on the 1st and 2nd eshop, but somehow it has discovered only 6 pages on our latest eshop, which we started a few months ago. The last load of the sitemap was on 14.8.2022.
There are 19 indexed URLs and only 145 non-indexed right now.
Where could the problem be? Can I somehow force the sitemap to be loaded again manually? The sitemap is in the same format as for the other eshops and is generated the same way.
I haven't found a way to contact Google about this issue.
Here is the link to our sitemap: https://www.super-parts.eu/sitemap.xml
I have tried contacting our eshop provider and they told me that the feed is okay and there is no problem with it.
I have also tried deleting the sitemap from Search Console, but it was not loaded again; it just says that it was already loaded successfully, and the last load date stays on 14.8.2022.
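One thing worth trying (a hedged suggestion, assuming the sitemap URL above): besides resubmitting in Search Console, Google has documented a plain HTTP "ping" that asks it to re-fetch a sitemap. Opening this URL in a browser or any HTTP client should queue a fresh fetch:
https://www.google.com/ping?sitemap=https://www.super-parts.eu/sitemap.xml
If the fetch succeeds but coverage still stays at a handful of pages, it is also worth checking that the URLs in the sitemap return 200, are not blocked by robots.txt, and do not carry a noindex tag.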
Related
Two of our products, www.nbook.in and www.a4auto.com, have been removed from all search engines. These projects have sub-domains, and the links from the subdomains are still available. The domain is not blacklisted. A custom sitemap was created, and it is still being indexed even now. I analyzed the URL with Google Search Console and it seems fine. 'site:nbook.in' in Google didn't produce any results. We actually had more than 75,000 links available on Google; it was working fine until last week. This is affecting our products' Alexa rank and reach. There is no issue with the robots.txt file in the server document root folder, and there is no rule set to deny any bot or user agent. It is not just Google; all search engines have removed our links, and this is confusing me. We have other products that are designed and developed the same way, but there is absolutely no issue with them. I think there is nothing more to do with Google Search Console/Webmaster Tools. I'm ready to provide more data on request. Please help.
Create a robots.txt file in the document root and put this in it (note that an empty Disallow value allows all crawling, whereas Disallow: / would block everything):
User-agent: *
Disallow:
I'm currently trying to upload an HTML+CSS page to my hosting service (1&1 Internet).
Unfortunately, I have very little experience with this and would like to ask the following question:
How is it possible to update the existing homepage on a site that has already been uploaded to the web?
I'm using WebspaceExplorer. When I open it (in the Hosting section), there are a lot of folders and files, with a few options like download, delete and so on.
I can't figure out how to tell the server which file should be the homepage. The page I uploaded is online and accessible, but it is not the first page the user sees! How can I change that?
It may be confusing because of my poor description, so here are the URLs of both:
velten-berlin.org (current Homepage)
velten-berlin.org/Upload (desired homepage)
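In case it helps, here is a hedged sketch of how this usually works on Apache-style shared hosting (1&1/IONOS plans are typically set up this way, but the exact details are an assumption): the server serves a directory index file, normally index.html, from the web root, so the layout and an optional .htaccess override look like this:

# File layout (which URL each file answers to):
#   /index.html          -> served at velten-berlin.org/
#   /Upload/index.html   -> served at velten-berlin.org/Upload/
# To make the desired page the homepage, copy or move it to /index.html,
# or tell Apache which file to serve first via an .htaccess file in the web root:
DirectoryIndex index.html

So the practical fix is usually to replace the index file at the top level of the webspace with the page currently sitting in /Upload.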
I am working with Umbraco 7.5.2 and I am trying to set a custom 404 page.
I am hosting 2 totally different sites in the same Umbraco installation.
I've created my page types and pages in the back office and in umbracoSettings.config I have added this:
<errors>
  <error404>//errorListFolder/page[@nodeName = '404']</error404>
</errors>
This finds the page named '404' of type page directly below errorListFolder.
But it resolves to the same page for both of my sites (it shows the one from the first site). How can I fix this so that each site shows its own error page?
I have tried
<error404>$site//errorListFolder/page[@nodeName='404']</error404>
But it doesn't find anything, and I end up with Umbraco's basic error page.
This is my structure (I removed the other nodes for simplicity):
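The screenshot of the content tree isn't reproduced here; based on the description, the structure is presumably something like this (the node names are assumptions for illustration):

Content
  Site 1 (homePage)
    errorListFolder
      404 (page)
  Site 2 (homePage)
    errorListFolder
      404 (page)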
Comment
Thanks to Marcin, I was able to fix it with Nibble.Umbraco.PageNotFoundManager. It is a wonderful tool! But I need to mention that you have to set umbracoSettings.config manually back to normal, otherwise it won't work.
I had the XPath in that file, and no matter what I selected in the back office, I couldn't see my page. Then I changed that value back to 1 and it worked like a charm :)
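For reference, my reading of "back to normal" is that the error404 entry goes back to holding a plain content node id instead of an XPath (the id 1 is simply the value mentioned above, so treat it as an example):

<errors>
  <error404>1</error404>
</errors>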
You can access specific trees by starting from the $root element. Here is my solution from one of our projects:
<error404>
<errorPage culture="pl-PL">$root/homePage[#nodeName='PL']/hTMLContentPage[#nodeName='content not found TERROR - 808']</errorPage>
<errorPage culture="en-US">$root/homePage[#nodeName='EN']/hTMLContentPage[#nodeName='content not found TERROR - 808']</errorPage>
<errorPage culture="default">$root/homePage[#nodeName='PL']/hTMLContentPage[#nodeName='content not found TERROR - 808']</errorPage>
</error404>
It takes the first node with the name 'content not found TERROR - 808' under the node of type 'homePage' with the specific nodeName, as you can see above.
For your scenario, you should try the Umbraco Page Not Found Manager by Tim Geyssens: https://our.umbraco.org/projects/backoffice-extensions/umbraco-page-not-found-manager/ (uHangout: https://www.youtube.com/watch?v=bFL0xUhRerI). It's probably what you're looking for. It may be hard or even impossible to arrange this with the basic error404 settings in the config file, especially if you want to keep culture-dependent 404s underneath previously accessed tree nodes.
Hope that helps!
I have been developing a website, directorioelectronico.com, and I have a specific issue now; I would be very grateful if someone could help me.
The site has been submitted to Google, and now all the links on the homepage are listed in search results, BUT some links, e.g. google.com/maps, are not listed on the home page (because they appear only when you select your municipality). How can I tell Google that they exist (maybe without a sitemap.xml), given that my links have metadata that is very important for Google to know?
Thank you very much in advance for your help.
My solution was to create a /sitemap.xml route and reference it in robots.txt as well; I also created an HTML page with the whole sitemap, covering cities, companies and all important links.
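A minimal sketch of that setup (the exact paths are placeholders for illustration): robots.txt points crawlers at the sitemap, and the sitemap lists the deep links that are not reachable from the homepage.

robots.txt:
User-agent: *
Disallow:
Sitemap: https://directorioelectronico.com/sitemap.xml

sitemap.xml:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://directorioelectronico.com/</loc></url>
  <url><loc>https://directorioelectronico.com/municipio-ejemplo</loc></url>
</urlset>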
I am trying to migrate a WordPress site from wordpress.com to a self-hosted site.
I export the data from wordpress.com to the XML file, as one might expect. I then try to import the data into my WordPress installation on the self-hosted site. I check off 'import media' when importing, and everything seems to work after a couple of tries (there are a LOT of images, so Varnish seems to choke somewhere in the middle, but eventually everything comes over). All the images are in the new database, but if I look at the Media tab in the Dashboard, there are no relationships between the images and any of the posts. Consequently, none of the galleries defined in the original posts show up, although the directly linked image files in the posts show up fine.
So I think the relationships between the images and the posts are getting dropped somewhere during the import. I've looked through the XML file and can't seem to find where exactly these relationships between images (media) and posts are. I've also looked through the database using phpMyAdmin and can't seem to find where they're related either.
I'm hoping that if I can find where these relationships are stored in the XML file, I might be able to find a way to get them imported into the new database.
Would appreciate any help with this.
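For orientation, in a standard WXR export (so verify against your own file), each attachment is its own <item>, and the link back to the post is carried by wp:post_parent, while a gallery in the post body references attachment IDs. The IDs and URL below are made-up examples:

<item>
  <title>some-image</title>
  <wp:post_id>123</wp:post_id>
  <wp:post_parent>45</wp:post_parent>
  <wp:post_type>attachment</wp:post_type>
  <wp:attachment_url>https://yoursite.files.wordpress.com/2022/08/some-image.jpg</wp:attachment_url>
</item>

A gallery inside the parent post's content would then look like [gallery ids="123,124,125"]. If either the post_parent values or the attachment IDs change or get lost during import, the galleries stop resolving.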
What are the URLs of the galleries that are missing on the new self-hosted site? Are they still "yoursite.wordpress.com/wp-content/..."?
You need to find/replace in the database using phpMyAdmin to change URLs that are still pointing to yoursite.wordpress.com. See "How to Move WordPress Blog to New Domain or Location » My Digital Life" and/or use Search Regex, a good plugin that allows searching and replacing with grep through all posts and pages.
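A minimal sketch of that find/replace as run in phpMyAdmin, assuming the default wp_ table prefix and placeholder domains (serialized values, e.g. widget and theme settings in wp_options, should not be edited with a plain REPLACE, which is where a plugin like Search Regex is safer):

-- rewrite old-domain URLs inside post content
UPDATE wp_posts
SET post_content = REPLACE(post_content, 'yoursite.wordpress.com', 'www.newdomain.com');

-- attachment URLs also live in the guid column; rewriting guids is debated, so treat this step as optional
UPDATE wp_posts
SET guid = REPLACE(guid, 'yoursite.wordpress.com', 'www.newdomain.com');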