Opening Google Play Store with iframe - Joomla

I have to integrate an external link (https://play.google.com/store) into my Joomla site. I used an iframe (wrapper module) to load the external page, but it doesn't open the site. Does Google restrict its Play Store site from being embedded?

Does your domain have HTTPS? If not, try your link with "http" instead of "https".
Some services have a special way of being embedded in an iframe, and embedding can also be prohibited. Check whether the service provides an integration code.

Whether a link can be opened in an iframe is decided by the target site, not the embedding page: the server's X-Frame-Options (or Content-Security-Policy frame-ancestors) response header allows or denies framing, and Google Play denies it.
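For reference, the markup a wrapper module effectively produces is just an iframe like the one below. Google serves play.google.com with a framing-denial response header, so the browser refuses to render the frame no matter how the embedding page is written:

    <!-- what the Joomla wrapper module boils down to -->
    <iframe src="https://play.google.com/store"
            width="100%" height="600"></iframe>
    <!-- the response carries an X-Frame-Options header that forbids
         cross-origin framing, so the browser shows a blocked/empty
         frame and logs a framing error in the console -->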

Related

Opening HTTP content within HTTPS

We have an HTTPS website, and I need to display an HTTP website (any external website) in my page. We used an iframe to display it, and realised that it doesn't work in Mozilla Firefox: we get a "mixed content" error. I am now searching for an alternative to iframes. I understand that it makes no sense to bypass the security warning. We also do not want to change any browser settings, as some users may not have permission to change them. Using tags like <embed>, or rendering inside a <div> tag, gives the same problem.
Is there any way to do this in C# code rather than in HTML and scripting?
Response.Redirect() does not work in our application. I do not have a problem with the page being redirected, but I would prefer a dialog/popup window to display the external website.
This is simply a security consideration. Your HTTPS site is not truly safe when using mixed content.
Use HTTPS for your external site, period.
As Mozilla suggests:
The best strategy to avoid mixed content blocking is to serve all the content as HTTPS instead of HTTP.
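As a concrete illustration of that advice (with example.com standing in for the external site), the first frame below is what triggers the mixed-content blocker on an HTTPS page, and the second is the fix:

    <!-- on a page served over https://, this is mixed active content
         and Firefox blocks it -->
    <iframe src="http://example.com/page"></iframe>

    <!-- the same resource requested over HTTPS loads without errors -->
    <iframe src="https://example.com/page"></iframe>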

CreateJS CDN link that uses https? For use in DoubleClick and other Ad networks

I like that Flash CC 2015 Canvas uses CreateJS; however, it's not working in DoubleClick, as the CDN serving the .js files serves them over HTTP and DoubleClick needs them served over HTTPS.
Is CreateJS aware of this, and do they have updated CDN links we can use when uploading HTML5 creatives to DoubleClick, Sizmek, or other ad networks?
Asset is not SSL-compliant. The following resources are
non-compliant: http://code.createjs.com/easeljs-0.8.1.min.js
http://code.createjs.com/tweenjs-0.6.1.min.js
Did you try removing the http scheme? All that should be left is //code.createjs.com/easeljs-0.8.1.min.js. I got a similar complaint.
The security trick is to turn all the http:// calls into https://. Just add the s.
DoubleClick now hosts CreateJS on its own CDN: https://support.google.com/richmedia/answer/6307288
With the rise of RTB and big ad inventories, non-secure protocol URLs are not allowed.
So, as said, you can use either form: // or https://.
Also, many ad servers do not accept a folder structure. CreateJS creates an "images" folder for the assets; it is better to have every asset at root level.
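Putting these answers together, either of the following script tags passes the SSL check (file names copied from the error message above; the protocol-relative form simply inherits the https:// scheme of the page it sits on):

    <!-- protocol-relative: resolves to https:// inside an HTTPS creative -->
    <script src="//code.createjs.com/easeljs-0.8.1.min.js"></script>

    <!-- explicit HTTPS -->
    <script src="https://code.createjs.com/tweenjs-0.6.1.min.js"></script>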

SEO-friendly static snapshots of single-page web apps without using a dynamic server

Scenario:
I'm hosting a single-page application on GitHub Pages. When accessed via browser, JavaScript processes #! URLs and updates the UI.
I also want my app's content to appear in search results. Since I'm using GulpJS already, I want to add a task to create and save pre-rendered HTML snapshots of my page to accommodate web crawlers.
Problem:
My content is served by GitHub Pages, so I don't have any way to handle the _escaped_fragment_ parameter that web crawlers send. Every tutorial I've found on AJAX SEO assumes you are hosting your content on e.g. a NodeJS or Apache server and have a way to process such URL parameters.
Question:
Can a single-page web app be SEO-friendly when hosted on a static file server (e.g. GitHub Pages)? Is there a special directory I can use? Some special Sitemap or robots.txt configuration? Something else?
What I've found already:
Frequently Asked Questions (via Google Webmasters)
How do I create an HTML snapshot? (via Google Webmasters)
AngularJS and SEO (via Yearofmoo)
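No answer is recorded here, but the Gulp task the question describes could be sketched as below. This is a minimal sketch, not a known-good recipe: renderRoute is a hypothetical helper (it might wrap PhantomJS or jsdom) that returns the rendered HTML for one #! route, and the resulting files are committed so GitHub Pages serves them as plain static pages:

    // gulpfile.js (sketch)
    var gulp = require('gulp');
    var fs = require('fs');
    var path = require('path');

    // hypothetical helper: renders the app for one #! route and
    // returns the resulting HTML as a string
    var renderRoute = require('./render-route');

    var routes = ['home', 'about', 'contact']; // example routes

    gulp.task('snapshots', function () {
      if (!fs.existsSync('snapshots')) fs.mkdirSync('snapshots');
      routes.forEach(function (route) {
        var html = renderRoute('#!/' + route);
        fs.writeFileSync(path.join('snapshots', route + '.html'), html);
      });
    });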

How does the download-them-all plugin work?

I was using the download-them-all plugin in Mozilla Firefox to download the videos on a page in one go. It is very good at what it does.
But I could not understand how it does this on HTTPS websites.
I first thought that it parses the page's HTML content and then hits those URLs.
I tried to do the same in a separate Python application with the URL, but could not. The issue is probably session cookies and other such state.
Is it possible only through a plugin?
Can it be done with a web application, i.e. a website where users give us the URL and we act on it?
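The plugin is not doing anything a standalone program cannot: HTTPS is no obstacle to an ordinary HTTP client, and the parse-then-fetch guess above is essentially how such tools work. What a browser extension gets for free is the browser's logged-in session cookies; a separate application (or a website acting on a user-supplied URL) must obtain and send those itself, which is the likely reason the Python attempt failed. A minimal Node sketch of the idea (global fetch, so Node 18+; the URL, the cookie, and the .mp4 pattern are placeholders):

    const fs = require('fs');

    // fetch the page, scrape media links out of it, download each one
    async function downloadAll(pageUrl, cookie) {
      const page = await fetch(pageUrl, { headers: { Cookie: cookie } });
      const html = await page.text();

      // naive extraction; a real tool would walk the parsed DOM instead
      const links = [...html.matchAll(/href="([^"]+\.mp4)"/g)]
        .map(function (m) { return m[1]; });

      for (const href of links) {
        const fileUrl = new URL(href, pageUrl).toString();
        const res = await fetch(fileUrl, { headers: { Cookie: cookie } });
        fs.writeFileSync(fileUrl.split('/').pop(),
                         Buffer.from(await res.arrayBuffer()));
      }
    }

    // hypothetical usage: the cookie is copied from a logged-in browser
    downloadAll('https://example.com/videos', 'sessionid=PLACEHOLDER');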

Google crawl ajax / dynamically generated content - SEO

I've got a fairly unique situation that I don't believe any of the other topics here address.
I have an ecommerce module that is dynamically loaded / embedded into third-party sites: no iframe, just JSON sent to the web client and rendered into content. I have no access to these third-party sites at all, other than my JavaScript file being loaded from their page and dynamically generating the content.
I'm aware of the #! method, but that's no good here: my JS does generate "URLs" within the embedded platform, but they're fake and for the address bar only, and I don't believe Google's crawlers can reach that far.
So my question is: is there a meta tag we can set that points outside the URL, i.e. back to my server with static crawlable content? For example, pointing the canonical to my server... but again, I don't think that would work.
If you implement #!, you have to make sure the URL you're embedded in supports the _escaped_fragment_ parameter versions, which you probably can't. It's server-side stuff.
You probably can't influence the canonical tag of the page either. That again has to be done server side. Any meta tag you set via JavaScript will not be seen by a bot.
Disqus solved the problem by providing an API so the embedding websites could fetch their comments server side and render them in plain HTML. WordPress has a plugin to do this. Disqus is also one of the few systems whose AJAX pages Google has worked out how to crawl.
Some plugins ask people to also include a plain link with the JavaScript. Be careful with this, as you may break Google's guidelines if you do it wrong. But you may be able to integrate the plain link with your plugin so that it directs bots and users to a crawlable version of the content; a sketch of that pattern follows below.
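A sketch of that plain-link pattern, with a hypothetical widget domain standing in for the real one: the script renders the interactive module for users, while the ordinary anchor gives bots (and no-JS visitors) a crawlable, server-rendered version of the same content:

    <!-- what the third-party site pastes in; domains are hypothetical -->
    <div id="shop-module"></div>
    <script async src="https://widgets.example-shop.com/embed.js"></script>
    <noscript>
      <a href="https://www.example-shop.com/catalog">Browse the catalog</a>
    </noscript>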
Look into Google's crawlable AJAX standard (and why it's a bad idea) and canonical URLs.
Now you can actually do this. A complete guide and examples can be found here: https://github.com/kubrickology/Logical-escaped_fragment
