How to clear the LinkedIn share cache?

I have a new description on the page, but when I share the page it still uses the old description, which no longer exists. I'm after something similar to the Facebook linter.
Any ideas?

You can append a dummy query-string value to your URL to make it look like a new URL, and LinkedIn will fetch it again. I've tried it and it works.
For example:
https://www.codeproof.com/?refid=LinkedIn
where refid=LinkedIn is just a dummy value.
If your URL already contains a query string, just append "&refid=LinkedIn" to the end of the URL.
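A minimal sketch of that trick in JavaScript, using the standard URL API (the refid name is arbitrary):

    // Append a dummy "refid" parameter so LinkedIn treats the URL as new.
    // URL.searchParams handles both cases: no query string yet, or an existing one.
    function addCacheBuster(pageUrl, value = 'LinkedIn') {
      const url = new URL(pageUrl);
      url.searchParams.set('refid', value); // emits ?refid=... or &refid=... as needed
      return url.toString();
    }

    console.log(addCacheBuster('https://www.codeproof.com/'));
    // -> https://www.codeproof.com/?refid=LinkedIn
    console.log(addCacheBuster('https://example.com/page?id=42'));
    // -> https://example.com/page?id=42&refid=LinkedIn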

Unfortunately, appending a query string to the URL no longer works.
From the following Stack Overflow post:
LinkedIn's content cache presently stores website information for approximately 7 days before the crawler will revisit the site.
It looks like there is no instant way to clear the cache other than to wait seven days, remove the media, and re-add it.

Appending a query string to the URL no longer works, so you'll have to wait 7 days.
But if you really need to share your URL with the media you want, you'll have to make a custom API call.
From the LinkedIn developer docs:
The first time that LinkedIn's crawlers visit a webpage when asked to
share content via a URL, the data it finds (Open Graph values or our
own analysis) will be cached for a period of approximately 7 days.
This means that if you subsequently change the article's description,
upload a new image, fix a typo in the title, etc., you will not see
the change represented during any subsequent attempts to share the
page until the cache has expired and the crawler is forced to revisit
the page to retrieve fresh content.
If you make API calls that directly provide the content to be shared
rather than by a URL that requires analysis, LinkedIn will always use
the values you provide.
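As a hedged sketch, a call along these lines against LinkedIn's v2 ugcPosts endpoint provides the content directly; the field names follow the v2 docs but should be verified against the current developer documentation, and the author URN, token, and URLs below are placeholders:

    // Share with explicit title/description, so LinkedIn uses the values given
    // here instead of its ~7-day cached crawl of the URL.
    const payload = {
      author: 'urn:li:person:YOUR_MEMBER_ID', // placeholder
      lifecycleState: 'PUBLISHED',
      specificContent: {
        'com.linkedin.ugc.ShareContent': {
          shareCommentary: { text: 'Check out the updated page' },
          shareMediaCategory: 'ARTICLE',
          media: [{
            status: 'READY',
            originalUrl: 'https://www.example.com/page', // the page being shared
            title: { text: 'Fresh title' },              // used as-is, no crawl
            description: { text: 'Fresh description' },
          }],
        },
      },
      visibility: { 'com.linkedin.ugc.MemberNetworkVisibility': 'PUBLIC' },
    };

    fetch('https://api.linkedin.com/v2/ugcPosts', {
      method: 'POST',
      headers: {
        Authorization: 'Bearer YOUR_ACCESS_TOKEN', // placeholder OAuth 2.0 token
        'X-Restli-Protocol-Version': '2.0.0',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    }).then((res) => console.log(res.status));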

Step 1: Visit https://www.linkedin.com/post-inspector/
Step 2: Enter your URL and click Inspect; you will see the updated preview image
Step 3: Now try sharing your URL on LinkedIn

I've just found a way to force LinkedIn to fetch a fresh version of the page: create a redirect to your destination page and share the redirect page.
For example:
If the page you want to share is: http://stackoverflow.com
Create a redirect page, e.g. https://stackoverflow.com/share-li, that goes to http://stackoverflow.com
Then share https://stackoverflow.com/share-li on LinkedIn. This way LinkedIn will treat it as a new page and fetch a fresh version.
It's easy to do if you're using WordPress: just install a redirection plugin, for example https://wordpress.org/plugins/redirection/
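If you're not on WordPress, here is a minimal sketch of the same redirect in an Express app (the /share-li path and the destination are illustrative):

    // Serve a throwaway redirect path so LinkedIn sees a "new" URL and crawls
    // the destination fresh when it follows the redirect.
    const express = require('express');
    const app = express();

    app.get('/share-li', (req, res) => {
      res.redirect(302, 'https://www.example.com/'); // your real destination page
    });

    app.listen(3000);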

For WordPress, these steps worked for me:
On the home page, I removed the featured image and added it as a simple image in the header of the page.
I created a redirect page on my blog (mydomain.com/social) that redirects all requests to my blog (mydomain.com).
Share the blog again on the social networks and everything will be OK.
It's done =D

Unfortunately, there is none as of now. We are investigating what it would take to expose a similar feature. Please stay tuned. We'll announce it on the developer site at http://developer.linkedin.com.

Related

URL Re-writes and Google Indexing

I was asked to perform some URL re-writes for a new site with numerous dynamic pages, and this has all worked fine.
However, when I look at the URLs that Google has indexed, it has indexed the non-rewritten URLs, so all the '?', '&', etc. are still being used.
What do you have to do to force Google to index your re-written URLs?
I just assumed it would do this automatically and never expected it to be an issue.
All help is greatly appreciated.
Thanks.
Steps
1) Make sure that expired pages are no longer publicly accessible.
2) Flag anything you do not wish bots to crawl with the appropriate "nofollow" meta tags.
3) Submit a new sitemap via your Google Webmaster Tools account.
4) Make sure your website returns a 404 status when a page isn't found. It is always a good idea to make a splash page for a 404 error that links back to your home page (see the sketch below; this is accomplished in different ways across server-side languages).
Google will automatically remove indexed pages if they no longer exist, so be patient.
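As an illustration of step 4, a minimal sketch of such a 404 handler in Express (any server-side stack works the same way):

    const express = require('express');
    const app = express();

    // ...your normal routes go here...

    // Catch-all: return a real 404 status plus a splash page linking back home,
    // rather than a "soft 404" that answers 200 and pollutes the index.
    app.use((req, res) => {
      res.status(404).send('<h1>Page not found</h1><a href="/">Back to home</a>');
    });

    app.listen(3000);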

AJAX URLs & URL Rewriting

I am starting to set up a personal website, and I would like its layout to look something like
-------------------------------
- Page Header & Menus Go Here -
-------------------------------
- Main Contents -
-------------------------------
- Footers -
-------------------------------
The main question is that I would like it to be a single-page interface in which the main contents are loaded and displayed with a combination of AJAX and jQuery to produce a nice effect. However, I would of course like the contents to be bookmarkable and indexed by search engines. I have skimmed through the Single Page Interface Manifesto, which explains some nice ways of achieving this, but I wouldn't really like to have my URLs look like
http://www.mysite.com/index.php#!section=section1
http://www.mysite.com/index.php#!section=section2
I would, of course, like to re-write them as
http://www.mysite.com/section1
http://www.mysite.com/section2
My questions are whether this approach is correct/doable and whether AJAX URLs are compatible with URL rewriting. Which URLs would be indexed by, say, Google anyway?
If you want your page to work without reloading and update the page's URL at the same time, the only way to achieve this is by changing the hash in the URL (location.hash = 'whatever').
URL rewriting cannot be used, since the hash is never sent to the server; it's only available in the browser's scope.
Check Facebook or Twitter URLs. They are prettier than #!section=section1 but still need the hash.
Cheers.
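A minimal sketch of that hash-based approach (the fragment endpoint and element ID are illustrative):

    // Re-render the main content whenever the hash changes; the server never
    // sees anything after "#", so this routing lives entirely in the browser.
    window.addEventListener('hashchange', () => {
      const section = location.hash.replace(/^#!?\/?/, '') || 'home';
      fetch('/fragments/' + section + '.html') // hypothetical AJAX endpoint
        .then((r) => r.text())
        .then((html) => { document.querySelector('#main').innerHTML = html; });
    });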
If you want to load different content/tabs/parts of a page without reloading the browser, it is now possible with pjax.
You can try something like http://padrino-pjax.heroku.com/
Go to the link and click on any of the links (home, dinosaurs, aliens) and you will see that it changes the URL and some of the content without reloading the full page.
It is achieved using AJAX plus pushing/popping the URL in the browser history.
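A hedged sketch of that AJAX-plus-push/pop pattern (the element ID and state shape are illustrative):

    // Fetch a fragment, swap it into the page, and push a clean URL onto the
    // history stack; popstate restores content on back/forward without re-pushing.
    function loadInto(url) {
      return fetch(url)
        .then((r) => r.text())
        .then((html) => { document.querySelector('#main').innerHTML = html; });
    }

    function navigate(url) {
      loadInto(url).then(() => history.pushState({ url: url }, '', url));
    }

    window.addEventListener('popstate', (e) => {
      if (e.state && e.state.url) loadInto(e.state.url);
    });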
I'm looking for a solution myself for a similar problem (I have a client site with an AJAXed WordPress theme, and that dreadful #! stuff in the URL prevents all the social sharing plugins I have tried so far from working correctly).
Apparently, there is a solution (with some drawbacks, of course). I found out about it here: http://moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
I know it's been like two years since you asked, but it could be helpful for someone else, or you may want to check it out just for the sake of curiosity itself! :-)

Is my AJAX content already crawlable?

I have built a site based on AJAX navigation.
I have built it in such a way that whenever someone without JavaScript visits my site, the nav links, which usually load content via AJAX, act like normal links and the user can browse through the pages as usual.
Since Googlebot doesn't run JavaScript, it should theoretically be able to go through all links and the corresponding pages as usual, right? Since they are valid links with the href attribute pointing to the corresponding page.
Now I am wondering whether that's sufficient, or whether I need to implement this method from Google too, to make sure Google sees all my content.
Thanks for your insights, and excuse my poor English!
If you can navigate your site by viewing the source (Ctrl-U in Chrome), Google can also crawl your site. Yes, it's that simple.
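A minimal sketch of the progressive-enhancement setup the question describes, assuming nav links with real hrefs and a #main content container:

    // Crawlers and no-JS visitors follow the real href; JS users get AJAX loading.
    document.querySelectorAll('a.nav').forEach((link) => {
      link.addEventListener('click', (event) => {
        event.preventDefault(); // only runs when JavaScript is available
        fetch(link.href)
          .then((r) => r.text())
          .then((html) => { document.querySelector('#main').innerHTML = html; });
      });
    });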

Facebook like or share with dynamic document title

I found this problem all over the net, but no answer yet, so maybe someone here has solved it...?
I built a page relying heavily on jquery.address. It's got one index page, and the rest loads dynamically via AJAX, following Google's /#!/ scheme for crawlable pages. Now I want to add Facebook's Like or Share button, but I can't get it to grab the actual page title or URL.
Whatever I do, it always falls back to the title and URL of the index page. I tried:
(obviously) changing the title and Open Graph meta on load of the new parts.
"linking" the crawler page (?_escaped_fragment_=xyx) but specifying the #! page in the meta.
"sharing" with a given title and URL.
I never get anything but a link to the index page, or a blank "share" to the right URL with title and thumbnail ignored.
Has anyone got a similar setup working?
Thanks for any hints,
thomas
Facebook is actually using #! now and it works! If you build your site so that http://site.de/?_escaped_fragment_=something is identical to http://site.de/#!/something, all you have to do is "share" the #! URL and it'll display the info from the escaped-fragment page.
Use this URL to check: http://developers.facebook.com/tools/debug
But: a much cleaner solution to the problem can be found here: http://github.com/browserstate/history.js/wiki/Intelligent-State-Handling
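A hedged Express sketch of that equivalence, serving the same markup at ?_escaped_fragment_=... that the JS app shows behind #! (renderPage is a hypothetical stand-in for your server-side renderer):

    const express = require('express');
    const app = express();

    // Hypothetical renderer: returns the same title/og: tags/content the
    // client-side app displays at #!/<fragment>.
    function renderPage(fragment) {
      return '<html><head><title>' + fragment + '</title></head>' +
             '<body>Server-rendered content for ' + fragment + '</body></html>';
    }

    app.get('/', (req, res, next) => {
      const fragment = req.query._escaped_fragment_;
      if (fragment === undefined) return next(); // normal visitors: JS app shell
      res.send(renderPage(fragment));            // crawler sees real markup
    });

    app.listen(3000);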
My guess would be that Facebook's crawler doesn't run JavaScript and will always display whatever is actually in the page it gets from the server.
Facebook share has a BRUTAL cache; last time I checked, it was impossible to change the title/description data once it was scraped :(
The issue I had was that the og:url and the actual URL of the page did not match. I also read a number of comments about the og data needing to be just after the title element, but I don't think that solved anything.
With regard to caching: it is true that Facebook's caching is "brutal", but it does not cache anything for the lint tool: http://developers.facebook.com/tools/debug
I use no-hashbang URLs when sharing links. I process the hard links and redirect them to a hashbang client side using JavaScript (see the sketch below). That way, if a crawler goes to the hard-linked page, it will display the information just as it would if JavaScript were enabled.
Compare:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Flikeapage.com%2F%23!%2FChristmas%2Fvs%2FBacon
and
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Flikeapage.com%2FChristmas%2Fvs%2FBacon
Hope this helps.
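A minimal sketch of that client-side bounce from a hard link into the hashbang app (the path handling is illustrative):

    // On a hard-linked page such as /Christmas/vs/Bacon, JS-enabled browsers are
    // bounced into the single-page app at /#!/Christmas/vs/Bacon; crawlers that
    // don't run JS simply stay on the server-rendered hard link.
    if (location.pathname !== '/' && !location.hash) {
      location.replace('/#!' + location.pathname);
    }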

Rewrite URL à la Google Instant?

I have an e-commerce website built with AJAX and JS. When the user types a search keyword, the list is pulled via AJAX, but the browser URL in my case doesn't change, so if the user reloads or simply bookmarks the address he'll have to start from scratch, losing the keyword input.
I noticed that Google instead rewrites the URL with the complete query, no hashtag or complex workaround... apparently.
How can I achieve that? Consider that I have complete control over my server, so I can set up Apache any way I want.
Thanks!!
See this question, almost the same except they used Facebook as an example.
How does facebook rewrite the source URL of a page in the browser address bar?
If you watch the URL in Google Instant, it doesn't change until you hit "Search" or pause for a set period of time (2 seconds, I think).
After this delay, Google refreshes the page with those search queries.
I'm not sure what browser you're using, but I get all the search terms after a hashtag in Chrome (e.g., http://www.google.com/#sclient=psy&hl=en&q=test+test+sibilance&aq=3&...). I don't think what you think is occurring is actually happening. It could be done in Chrome and other HTML5 browsers using history.pushState(), but I don't see Google Instant using that method.
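A hedged sketch of how it could be done with history.pushState(), as that answer suggests (the selector, URL pattern, and delay are illustrative):

    // Update results as the user types, but only rewrite the address bar with a
    // clean query URL after a short pause, roughly like Google Instant's delay.
    let timer;
    const input = document.querySelector('#search');
    input.addEventListener('input', () => {
      clearTimeout(timer);
      timer = setTimeout(() => {
        const q = encodeURIComponent(input.value);
        history.pushState({ q: q }, '', '/search?q=' + q); // no hash needed
        // ...fetch and render the results for q here...
      }, 2000);
    });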
Then it is not instant. Without reloading the page, you can only change the fragment identifier in the URL.
My experience is that after you change the search, the Google URL is no longer "correct", i.e. it does not represent the latest query.
