Does the Pinterest validator work?

I'm trying to add rich pins to our website. I used Schema.org markup for Product. While Google validates the page correctly, whenever I try the Pinterest validator I get the following error:
The data we scraped from your site could not be validated. Please make sure all required tags are present and you aren't serving different pages depending on different user agents.
Has anybody had the same experience and managed to solve this?

For me the answer was simply to use "http://www.example.com" instead of "http://example.com".
That was it; I didn't need to change anything else.
I notice you're also using:
https://sealsmile.com/products/search?keyphrase=smart&category=&resultType=product
try using:
https://www.sealsmile.com/products/search?keyphrase=smart&category=&resultType=product
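
If you can't update every link by hand, another way to make sure every client sees a single host is a server-side redirect. A minimal PHP sketch, assuming the sealsmile.com host from the URLs above:

    <?php
    // Redirect bare-domain requests to the www host with a permanent 301,
    // so every user agent (Pinterest's scraper included) gets one canonical URL.
    if ($_SERVER['HTTP_HOST'] === 'sealsmile.com') {
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: https://www.sealsmile.com' . $_SERVER['REQUEST_URI']);
        exit;
    }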

Related

AJAX content not displayed in SoapUI using REST

I'm not entirely sure I am asking the question correctly, but here goes.
I am trying to view customer data via the REST service. I've gotten past the login and can view the servlet (Response as HTML); at least I thought it was the servlet, but I just realized it's just the path to the servlet/start.
I think what I am seeing is just the hardcoded HTML messages that get displayed depending on customer data. Viewing as JSON doesn't work either; I get "The content you are trying to view cannot be viewed as JSON".
I'm sorry, I don't know enough to ask this properly.
Well, it was a n00b mistake. I didn't call the action prior to loading the page, so there was no data to ever load.

MEAN-SEO not working as expected

I have a project in meanjs.
It has html5mode disabled, so my URLs look like this:
http://localhost:3000/#!/products
I am trying to implement AJAX snapshots in order to allow Google's crawlers to see content generated by JavaScript on the client side.
I installed a module called MEAN-SEO:
http://blog.meanjs.org/post/78474995741/mean-seo
Now when I access the following URL:
http://localhost:3000/?_escaped_fragment_=
I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/
And when I click on "products", or when I access it directly, I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/products
After reading the Google specification detailed here https://developers.google.com/webmasters/ajax-crawling/docs/getting-started , what I need to get is something without hashbangs, like the following:
http://localhost:3000/?_escaped_fragment_=/products
What am I doing wrong?
Kind Regards.
Any specific reason why you want html5mode off?
Here is something a lot of people have missed: search engines (both Google and Bing) can now handle AJAX-based content.
Their crawlers now understand pushState, so if you just turn html5mode on you don't need any special handling to get your SEO working. You can load your content via AJAX, you can set title tags and meta tags with JavaScript, and so on, and the crawlers will understand your content the same as if you had rendered it server-side. There is no need to do HTML snapshotting or escaped_fragment handling for SEO anymore.
This has been announced on their developer blogs, but unfortunately most of the documentation hasn't been updated with this information, so it has gone under the radar for a lot of people.
One word of warning, though: Facebook does not handle pushState, so if you want to support the Facebook crawler you still need to handle that separately.

Learning Yii: checking with ajax won't work

I have a small problem that seems to be big enough to keep me from my work.
As I said in the title, I am learning Yii, and after I developed my project I realized that I don't have AJAX validation.
I tried to solve this by setting enableAjaxValidation to true, and it didn't work. I tried to make use of the method performAjaxValidation and, again, it didn't work. The third way was to copy the content of performAjaxValidation and paste it inside my own method (as in the documentation, and identical to the code generated by Yii).
I checked my JS files and they are loaded.
What could it be? How can I solve this? The problem is that I need my fields to validate while the user is completing the form.
Thank you!
PS: I checked some topics on Stack Overflow, but the only one that was related was Yii - Ajax Form with validations.
Make sure the form that is being validated has the same ID that is used in the performAjaxValidation function. For example, if your form has the id product-form, the if statement should look like this: if (isset($_POST['ajax']) && $_POST['ajax'] === 'product-form')
If possible, I recommend using Firefox with the Firebug extension so that you can debug whether the AJAX call is even being made, and what is being returned.
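
For reference, here is a minimal sketch of how those pieces fit together in a Yii 1.x controller; the Product model and the product-form ID are assumptions standing in for your own names:

    <?php
    // Sketch of a Yii 1.x controller action with AJAX validation wired up.
    class ProductController extends Controller
    {
        public function actionCreate()
        {
            $model = new Product;
            $this->performAjaxValidation($model); // must run before rendering

            if (isset($_POST['Product'])) {
                $model->attributes = $_POST['Product'];
                if ($model->save()) {
                    $this->redirect(array('view', 'id' => $model->id));
                }
            }
            $this->render('create', array('model' => $model));
        }

        protected function performAjaxValidation($model)
        {
            // The 'ajax' POST value is the ID of the CActiveForm widget, so
            // the widget in the view needs 'id' => 'product-form' and
            // 'enableAjaxValidation' => true for this branch to ever fire.
            if (isset($_POST['ajax']) && $_POST['ajax'] === 'product-form') {
                echo CActiveForm::validate($model);
                Yii::app()->end();
            }
        }
    }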

Facebook like or share with dynamic document title

I found this problem all over the net but no answer yet, so maybe someone here has solved it?
I built a page relying heavily on jquery.address. It's got one index page and the rest loads dynamically via AJAX, following Google's /#!/ scheme for crawlable pages. Now I want to add Facebook's Like or Share button, but I can't get it to grab the actual page title or URL.
Whatever I do, it always falls back to the title and URL of the index page. I tried:
(obviously) changing the title and Open Graph meta tags on load of the new parts;
"linking" the crawler page (?_escaped_fragment_=xyx) but specifying the #! page in the meta tags;
"sharing" with a given title and URL.
I never get anything but a link to the index page, or a blank "share" to the right URL with the title and thumbnail ignored.
Has anyone got a similar setup working?
Thanks for any hints,
thomas
Facebook is actually using #! now and it works! If you build your site so that http://site.de/?_escaped_fragment_=something is identical to http://site.de/#!/something, all you have to do is "share" the #! URL and it'll display the info from the escaped-fragment page.
Use this URL to check: http://developers.facebook.com/tools/debug
But: A much cleaner solution to the problem can be found here: http://github.com/browserstate/history.js/wiki/Intelligent-State-Handling
My guess would be that Facebook's crawler doesn't run JavaScript and will always display whatever's actually in the page it gets from the server.
Facebook share has a BRUTAL cache; last time I checked, it was impossible to change the title/description data once it was scraped :(
The issue I had was that the og:url and the actual URL of the page did not match. I also read a number of comments about the og tags needing to be just after the title element, but I don't think that solved anything.
With regard to the caching issues: it is true that Facebook's caching is "brutal", but it does not cache anything for the lint tool: http://developers.facebook.com/tools/debug.
I use no-hashbang URLs when sharing links. I process the hard links and redirect them to the hashbang version client-side using JavaScript. That way, if a crawler goes to the hard-linked page, it will display the information just as it would if JavaScript were enabled.
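In case it helps, a rough PHP sketch of such a hard-link page; the /products path, the title, and the site.de host are placeholders borrowed from the examples in this thread:

    <?php
    // products.php — hypothetical hard-link endpoint for the page normally
    // served at /#!/products. Crawlers read the real Open Graph tags here;
    // browsers run the script below and get bounced to the hashbang URL.
    $title = 'Products'; // assumption: looked up from your data layer
    ?>
    <!DOCTYPE html>
    <html>
    <head>
      <title><?php echo htmlspecialchars($title); ?></title>
      <meta property="og:title" content="<?php echo htmlspecialchars($title); ?>">
      <meta property="og:url" content="http://site.de/products">
      <script>
        // Crawlers don't execute this; human visitors do.
        window.location.replace('http://site.de/#!/products');
      </script>
    </head>
    <body><!-- server-rendered fallback content for the crawler --></body>
    </html>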
Compare:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Flikeapage.com%2F%23!%2FChristmas%2Fvs%2FBacon
and
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Flikeapage.com%2FChristmas%2Fvs%2FBacon
Hope this helps.

HTML codes showing in viewpage HTML data

I'm new to CodeIgniter; I've been using it in my project for the last 2 months. I have a comment section in my project where anyone can leave comments. Everything is going perfectly, except that whenever anyone puts HTML content (images/videos) in a comment, the raw HTML code shows up in the comment page rather than the rendered HTML content (images/videos).
For example: when I save an embedded YouTube video code in the comment box, the output comes back as the raw embed code rather than the YouTube video.
I feel like it must be a minor thing, but I really can't see where the fault is occurring. Please, if anybody has the solution, reply as soon as possible.
Couldn't one devise a system where somebody just posts the YouTube link itself, and through a combination of regular expressions your own system generates the object/embed code, so there's no security risk possible?
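For instance, a minimal PHP sketch of that idea (the URL pattern and the iframe dimensions are assumptions; treat it as a starting point rather than a hardened implementation):

    <?php
    // Accept only a bare YouTube URL and build the embed markup ourselves,
    // so no user-supplied HTML is ever trusted.
    function youtube_embed($url)
    {
        // Match watch?v= and youtu.be/ style links; capture the 11-char video ID.
        if (preg_match('%(?:youtube\.com/watch\?v=|youtu\.be/)([A-Za-z0-9_-]{11})%', $url, $m)) {
            $id = htmlspecialchars($m[1], ENT_QUOTES);
            return '<iframe width="560" height="315" src="https://www.youtube.com/embed/'
                 . $id . '" frameborder="0" allowfullscreen></iframe>';
        }
        return false; // not a recognizable YouTube link
    }

    echo youtube_embed('https://www.youtube.com/watch?v=dQw4w9WgXcQ');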
I had a similar problem a while back - wanting to give end users the ability to post YouTube videos, but not allow them to just post anything without some sort of XSS protection.
I ended up using HTML Purifier - http://htmlpurifier.org/ - to filter the contents being submitted in the form.
There is a modification that can be made to the whitelist that allows YouTube code through the purifier.
http://htmlpurifier.org/docs/enduser-youtube.html
So far, that's working well, but my system is still in development.
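As a rough sketch of what that whitelist change can look like with a recent HTML Purifier (the SafeIframe directives assume version 4.4+, and the regexp is a starting point, not gospel):

    <?php
    // Let YouTube embed iframes through HTML Purifier; strip everything else.
    require_once 'HTMLPurifier.auto.php';

    $config = HTMLPurifier_Config::createDefault();
    $config->set('HTML.SafeIframe', true);
    // Whitelist only the YouTube embed host.
    $config->set('URI.SafeIframeRegexp', '%^(https?:)?//www\.youtube(?:-nocookie)?\.com/embed/%');

    $purifier = new HTMLPurifier($config);
    $clean = $purifier->purify($_POST['comment']); // assumed form field name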
As a quick hack you can run htmlspecialchars_decode when displaying the comment in your view. This is very dangerous, though, without sanitization when you receive the comment (search for xss_clean on this page). You should also use strip_tags to remove all the HTML tags you don't need (everything except the video tags) prior to inserting the comment into the database.
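
Put together, that quick hack might look like the following in CodeIgniter (a sketch; the field, table, and allowed-tag names are assumptions):

    <?php
    // On the way in: the second argument TRUE runs CodeIgniter's xss_clean.
    $comment = $this->input->post('comment', TRUE);
    // Drop every tag except the embed-related ones before storing.
    $comment = strip_tags($comment, '<iframe><object><embed><param>');
    $this->db->insert('comments', array('body' => $comment)); // assumed table/column

    // In the view: decode entities so the surviving tags render as markup
    // instead of showing up as literal text.
    echo htmlspecialchars_decode($comment);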
