Facebook Like Button Comments Not Working - https issue

I'm using FBML and everything was working before; then I noticed that when you click Like, the comment box no longer shows up. I checked out the dev site and saw that the way it's done has changed.
I updated my code, but it still doesn't show the comment box, although the actual liking action works fine.
Here is a link: http://fez.nu/Oniir
EDIT 2: I'm using XFBML, I don't know if that makes a difference. I've heard that FBML is deprecated.
EDIT 3: I browse Facebook over https. I turned that off and the comment boxes show up on my site. So that problem is solved, but how do I make it work for users who browse Facebook securely when my site is not served over https?

You can use an HTML inline frame.
<iframe src="https://mytab.example.com/tabs/"></iframe>
This question has produced an accepted solution to what appears to be your problem.
Abstract
You can avoid SSL warnings for domains that support SSL by not being specific about the transport protocol: instead of including http:// or https://, use //.
<!-- Instead of this: -->
<iframe src="http://www.facebook.com/plugins/like.php?params"></iframe>
<!-- Do this: -->
<iframe src="//www.facebook.com/plugins/like.php?params"></iframe>
Security considerations
Please note that there are some security considerations to this approach. I recommend you read the articles and questions below.
Are there security issues with embedding an https iframe on an http page (SE question)
HTTP and HTTPS iframe (SO question)
Other considerations when serving mixed (http/https) content (external site)
Edit 1
In the FAQ for the Like Button, it's stated that it needs 400 pixels in width to give the user the option to add a comment. If you don't have 400 pixels available, I think you would have to sacrifice the option to post a comment or use a popup window instead.
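For illustration, a protocol-relative embed that leaves room for the comment flyout might look like the sketch below. The href/layout/width query parameters are assumptions based on the Like Button documentation, not taken from your page:
<!-- Sketch only: protocol-relative src plus enough width for the comment flyout. -->
<iframe src="//www.facebook.com/plugins/like.php?href=http%3A%2F%2Fexample.com%2Fpage&amp;layout=standard&amp;width=450"
        scrolling="no" frameborder="0"
        style="border:none; width:450px; height:80px;"></iframe>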

This problem started last night.
I thought it was my code at first, but a quick rollback to code I KNOW worked (it was QA tested) still showed the issue.
I went to the source itself, Facebook's OWN Like button generation tool, and THEY have the same problem.
Then I picked a random CNN article and a random Yahoo news article, and that's what you call a trifecta: all three sites could not properly render the comment UI element for the Like button.
If it walks, quacks and moves its head like a duck... WTF? :))

Related

Why does my LinkedIn share button not work?

I want to create a share button for LinkedIn. The GUI button is all set up, but nothing happens when I click it. I researched a bit and concluded that other sites work with the same sharing mechanism, but mine doesn't.
I narrowed the problem down and now I'm trying to figure out why google.com works but my site doesn't. I'm not using my real company website here because it's personal information, but it's a website that has been on the internet for more than 10 years (in case that's useful). When I go to the links below, my website throws an error, but Google works fine.
Ⓧ https://www.linkedin.com/cws/share/?url=https://www.my-company-website.com
〇 https://www.linkedin.com/cws/share/?url=https://www.google.com
Is there any prerequisite I'm missing that makes my site not work?
I realized my server was blocking LinkedIn (to reduce traffic from LinkedIn bots). That's why it wasn't working.
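For context, LinkedIn fetches the shared page with a crawler that identifies itself as LinkedInBot, so a blanket bot filter is enough to break the share. A purely illustrative sketch, assuming a Node/Express server rather than the asker's actual setup:
const express = require('express');
const app = express();

// Illustrative only: a rule like this blocks LinkedIn's crawler (user agent
// "LinkedInBot"), which is exactly what stops the share from working.
app.use((req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  if (/LinkedInBot/i.test(ua)) {
    return res.status(403).end();
  }
  next();
});

app.get('/', (req, res) => res.send('Hello'));
app.listen(3000);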
As a hint: I was working on a webpage closed to outside users, and that also caused problems with the LinkedIn share button.
Hint 2: My website uses a Let's Encrypt SSL certificate, and the claim made here that this breaks the share button is false: https://wordpress.org/support/topic/linkedin-share-button-not-working-3/ It works fine!
If you ever get stuck on trying to figure out why your page simply doesn't populate nice preview data on your LinkedIn share page, then check out the LinkedIn Post Inspector.
Insert the URL of your page (i.e., example.com), not the URL you are using to share (i.e., linkedin.com/share?url=example.com). You'll get detailed information on how your site will appear and why, as when sharing wikipedia.org, for instance...
Hope this helps someone else with a LinkedIn share issue!

SEO with AngularJS and an ASP.NET RESTful service

I have developed a website using AngularJS and Web API.
The problem is that the AJAX-rendered content is not crawlable by Google, and no one can find the website using Google search.
After reading many articles regarding this issue, including:
this one with all its outgoing explanation links,
the Google AJAX crawling protocol, and also a Stack Overflow question, I couldn't find a proper solution. The articles that mention ASP.NET solutions talk about MVC, and I only need simple REST via Web API; the others don't cover ASP.NET at all.
Is there any simple explanation?
I'm the one who asked this same question long ago, so I will answer from my experience:
Firstly, if all your content is accessible via unique URIs (including the hashbang, if you use it), modern search engines should index it just fine. In fact, Google can index JavaScript-generated content now. You can try that via the Google Webmaster tools and see how your site is indexed.
Secondly, there are libraries that help you serve pre-rendered content to search engines if you need to, but in my case I didn't bother much with it, since Google is indexing the JS content nicely.
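For reference, the AJAX crawling protocol mentioned in the question relied on an opt-in meta tag; Google has since deprecated the whole scheme, so this is only a sketch of how it used to work:
<!-- Now-deprecated AJAX crawling scheme: this tag told the crawler to re-request the
     page as ?_escaped_fragment_=... and expect a pre-rendered HTML snapshot. -->
<meta name="fragment" content="!">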
I've seen others ask this question, and maybe I'm missing something or this is outdated, but I don't see why AngularJS needs to be an issue with SEO.
Say you have a landing page and it has a bunch of links. Assuming you're using html5 mode in AngularJS (and I'm not sure that's 100% necessary) and something like ng-route, then the links on the landing page can work both as "angular" (JavaScript) links and "old school" (full-page-load) links.
If you're a human user you can click a link and it will do angular magic and adjust the content without loading the full page. Ok, all fine.
But if you instead copy the link and paste it in a new tab or new browser, it will still work - assuming you've set up routes correctly.
I'm not an SEO expert by any stretch of the imagination, but as I understand it, having links that load pages and having those pages have real and useful content is the core of SEO, and done this way, AngularJS should work fine. The key thing to check is if you copy and paste the link (not just click it) that it works.
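A minimal sketch of the setup described above, using html5 mode plus ngRoute; the route paths and template URLs are made up for illustration:
// Requires angular and angular-route to be loaded, plus <base href="/"> in the page head.
angular.module('app', ['ngRoute'])
  .config(function ($locationProvider, $routeProvider) {
    $locationProvider.html5Mode(true); // clean URLs instead of #! hashbangs
    $routeProvider
      .when('/', { templateUrl: '/partials/home.html' })
      .when('/about', { templateUrl: '/partials/about.html' })
      .otherwise({ redirectTo: '/' });
  });
// With this in place, /about works both as an in-app route (no full reload) and as a
// direct link pasted into a new tab, provided the server also serves the app shell for /about.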

Get FeedBurner feed over HTTPS

We are grabbing our feed at FeedBurner by using the jQuery jGFeed plugin.
This works great until the moment our users are on an https:// page.
When we try to load the feed on that page, the user gets a message that there is mixed content, protected and unprotected, on the page.
A solution would be to load the feed over https, but Google doesn't allow that; the certificate doesn't work.
$.jGFeed('httpS://feeds.feedburner.com/xxx')
Does anyone know a workaround for this? The way it functions now, we simply cannot serve the feed in our pages when on https.
At this time Feedburner does not offer feeds over SSL (https scheme). The message that you're getting regarding mixed content is by design; in fact, any and all content that is not being loaded from a secured connection will trigger that message, so making sure that all content is loaded over SSL is really your only alternative to avoid that popup.
As I mentioned, Feedburner doesn't offer feeds over SSL, so realistically you'll need to look into porting your feed to another service that DOES offer feeds over SSL. Keep in mind what I said above, however, with respect to your feed's content as well. If you have any embedded content that is not delivered via SSL then that content will also trigger the popup that you're trying to avoid.
This comes up from time to time with other services that don't have an SSL cert (Twitter's API is a bit of a mess that way too.) Brian's comment is correct about the nature of the message, so you've got a few options:
If this is on your server, and the core data is on your server too, then you've got end to end SSL capabilities; just point jGFeed to the local RSS feed that FeedBurner's already importing.
Code up a proxy on your server to marshal the call to FeedBurner and return the response over SSL (see the sketch after this list).
Find another feed service that supports SSL, and either pass it the original feed or the Feedburner one.
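A rough sketch of option 2, assuming a Node/Express server; the /feed-proxy route name and the feed address ending in xxx are placeholders:
const express = require('express');
const fetch = require('node-fetch');

const app = express();

// Proxy the FeedBurner feed so the browser only ever talks to our own HTTPS origin.
app.get('/feed-proxy', async (req, res) => {
  try {
    const upstream = await fetch('http://feeds.feedburner.com/xxx'); // server-side plain-HTTP fetch
    const xml = await upstream.text();
    res.type('application/rss+xml').send(xml); // returned to the browser over our own (SSL) connection
  } catch (err) {
    res.status(502).send('Upstream feed unavailable');
  }
});

app.listen(3000);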
I have started using the paid WordPress theme Schema for several of my blogs. In general, it is a nice theme, fast and SEO friendly. However, since my blogs are all on HTTPS, I noticed that if I had a (Google FeedBurner) widget in the sidebar, Chrome would show a security error for any secure page with an insecure form call on it.
To fix this, it is really simple:
you just need to edit the file widget-subscribe.php located at /wp-content/themes/schema/functions/ and replace every "http://feedburner.google.com" with "https://feedburner.google.com".
Save the file and clear the cache, and your browser will show a green padlock.
I fixed this on my blog, www.androidloud.com.
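For illustration, the change amounts to swapping the scheme in the widget's subscribe form. The actual markup in widget-subscribe.php will differ, so treat this as a sketch:
<!-- Before: -->
<form action="http://feedburner.google.com/fb/a/mailverify" method="post" target="popupwindow">
<!-- After: -->
<form action="https://feedburner.google.com/fb/a/mailverify" method="post" target="popupwindow">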

How does Facebook grab the text of the article when pasting the url?

I'm a bit curious about this useful Facebook functionality. When I paste a URL into the 'What's on your mind?' box, it almost perfectly gets the body of the article. How does Facebook do this?
Thanks!
It's part of how Facebook Share works.
The URL Linter is pretty helpful as well. For example, if we test it with this very question, you can scroll down and see where it's getting the data from:
"Hello, Im a bit curious about this Facebook's useful functionality. When I paste a URL on the 'What's on your mind?' box, it almost perfectly gets the body of the article. How does Facebook do this?" extracted from <description> or first <p>
I can't speak for Facebook specifically, but there are entire companies dedicated to providing that kind of service. For example, Reddit recently outsourced preview generation to a 3rd party.
So, essentially, there's a certain amount of automation and a large amount of manual tweaking and configuration.
You might also look at the Readability tool, which extracts the main content of a web page - that might provide some insight into the processes involved.
You can put your own entries into the shared content by using the tags described in the Open Graph protocol on the Facebook developer website.
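For example, a few Open Graph tags in the page's <head> control the title, description, and image that get picked up; the values below are placeholders:
<!-- Sketch with placeholder values; put these in the page's <head>. -->
<meta property="og:title" content="My article title" />
<meta property="og:description" content="A short summary shown in the share preview." />
<meta property="og:image" content="http://example.com/preview.jpg" />
<meta property="og:url" content="http://example.com/article" />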
It basically goes to the page and begins sniffing for IDs in the HTML marked as Content or Main, and probably a few other common terms people use when building a site to specify where things like menus, the main body, the right menu, the top menu, the main article, etc. are placed in the page, whether it's pulled in dynamically or not.
For example, look at the source of this page itself. You'll see an area that begins with div id="content".
Bingo, that's where the Facebook sniffer starts. It then grabs probably the first picture it finds within that area, as well as the first bit of text in that area.

Facebook Connect XFBML not working

I'm making a website using Facebook Connect and decided to use Facebook's XFBML tags like "fb:profile-pic" since they are so easy to use.
I haven't been able to make them work no matter how hard I look online, but then I noticed that they work in every browser except Firefox.
I also realized that even on Facebook's own "The Run Around" sample app they don't work! You can check it out here: http://www.somethingtoputhere.com/therunaround/index.php
If you log in with Firefox, your picture is not shown, but if you use another browser it is. This happens with the fb:profile-pic tag and any other tag like fb:name.
I haven't found any information online, so I'm asking people who have worked with this: Are these tags simply not compatible with Firefox? Do they have outages or something like that? Has this happened to anyone before? Any ideas on how to resolve this?
I guess they do have "outages". I've spent the whole weekend trying to resolve this, and now they've posted that they had a problem and have resolved it.
From the Platform Live Status website:
http://developers.facebook.com/live_status.php#msg_497
We are experiencing a possible config problem with api.connect.facebook.com. If you are including the Connect JS library through http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php, all API requests through JavaScript would fail. This affects rendering of XFBML tags (such as fb:name and fb:profile-pic) as well. While we are fixing this issue, you can work around the problem by changing http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php to http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php. It's also safe to keep the url change permanently because connect.facebook.com is just an alias to facebook.com.
I wish they had updated that sooner; now I'm looking for a place to find out about this stuff before I spend days working on something, only to realize it's not a problem with my code!
Open up Firefox > Preferences > Privacy and make sure "Accept third party cookies" is checked. This is needed for Facebook Connect to work. Also, when using Connect, make sure all your tags are fully closed, i.e. <fb:profile-pic></fb:profile-pic> and not <fb:profile-pic/>. From the docs:
The user's browser must be set to accept 3rd Party Cookies in order for it to stay connected between clicks.
Source: http://wiki.developers.facebook.com/index.php/Logging_In_And_Connecting
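For reference, a minimal page using the old Connect library with a fully closed XFBML tag looked roughly like the sketch below; YOUR_API_KEY and xd_receiver.htm are placeholders, and this API has long since been retired:
<html xmlns:fb="http://www.facebook.com/2008/fbml">
  <body>
    <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
    <fb:profile-pic uid="loggedinuser" facebook-logo="true"></fb:profile-pic>
    <script type="text/javascript">
      FB.init("YOUR_API_KEY", "xd_receiver.htm"); // old-style init: API key + cross-domain receiver file
    </script>
  </body>
</html>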
FWIW, I wouldn't use "the run around" as a sample app. That thing has been the same since they introduced Connect and is pretty hacky.
Also check the Connect section under the Canvas option; there should be a link to your physical file there.
