I was using the DownloadThemAll plugin in Mozilla Firefox to download all the videos on a page in one go. It is very good at what it does.
But I could not understand how it does this on HTTPS websites.
I first thought that it parses the page's HTML content and then hits the URLs it finds there.
I tried to do the same in a separate Python application, given the URL, but could not get it to work. I suspect the problem is session cookies and other request state.
Is this possible only through a browser plugin?
Or can it be done with a web application, i.e. a website where users give us the URL and we act on it?
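For illustration, here is a minimal sketch of the approach described above (fetch the page server-side, parse the HTML, then resolve the video URLs), written for Node.js 18+ so that fetch is available globally. The page URL and cookie value are placeholders, and a real site may well need more session state than this:

```javascript
(async () => {
  const pageUrl = 'https://example.com/videos';   // placeholder URL supplied by the user

  // Fetch the page server-side; sites behind a login may require forwarding
  // the user's session cookies (the value here is a placeholder).
  const res = await fetch(pageUrl, {
    headers: { cookie: 'sessionid=PLACEHOLDER' }
  });
  const html = await res.text();

  // Naive extraction of src attributes from <video> and <source> tags,
  // resolving relative URLs against the page URL.
  const videoUrls = [...html.matchAll(/<(?:video|source)[^>]+src=["']([^"']+)["']/gi)]
    .map(m => new URL(m[1], pageUrl).href);

  console.log(videoUrls);
})();
```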
We have an HTTPS website and I need to display an HTTP website (any external website) on my page. We used an iframe to display it, and realised that this doesn't work in Mozilla Firefox: we get a "mixed content" error. I am now looking for an alternative to the iframe. I understand that it makes no sense to bypass the security warning, and we also do not want to change any browser settings, since not all users may have permission to change them. Tags like <embed>, or loading the site into a <div>, give the same problem.
Is there any way to do this in C# code, without using HTML and scripting?
Response.Redirect() does not work in our application. I do not mind if the page is redirected, but I would prefer a dialog/popup window to display the external website.
This is simply a security consideration. Your HTTPS site is not truly safe when using mixed content.
Use HTTPS for your external site, period.
As Mozilla suggests:
The best strategy to avoid mixed content blocking is to serve all the content as HTTPS instead of HTTP.
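One thing worth noting, since the question prefers a dialog/popup: mixed content blocking applies to embedded subresources (iframes, scripts, images), not to a top-level navigation opened in a new window. So, as a rough sketch (the URL below is a placeholder), you could open the external HTTP-only site in a popup from your HTTPS page, even though embedding it inline would still require an HTTPS URL:

```javascript
// Sketch: open the external HTTP-only site in a popup instead of an <iframe>.
// A top-level navigation in a new window is not blocked as mixed content;
// only embedded subresources on the HTTPS page are.
function openExternalSite() {
  window.open('http://www.example.com/', 'externalSite',
              'width=900,height=600,resizable=yes,scrollbars=yes');
}
```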
I have a project in meanjs.
It has html5mode disabled, so my URLs look like this:
http://localhost:3000/#!/products
I am trying to implement AJAX snapshots in order to allow Google's crawlers to see content generated by JavaScript on the client side.
I installed a module called MEAN-SEO:
http://blog.meanjs.org/post/78474995741/mean-seo
Now when I access the following URL:
http://localhost:3000/?_escaped_fragment_=
I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/
And when I click on "products", or when I access it directly, I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/products
After reading the Google specification detailed here https://developers.google.com/webmasters/ajax-crawling/docs/getting-started , what I need to get is something without hashbangs, like the following:
http://localhost:3000/?_escaped_fragment_=/products
What am I doing wrong?
Kind Regards.
Any specific reasons why you want html5mode off?
Here is something a lot of people have missed: search engines (both Google and Bing) can now handle AJAX-based content.
Their crawlers now understand pushState, so if you just turn html5mode on you don't need any special handling to get your SEO working. You can load your content via AJAX, you can set title tags and meta tags with JavaScript, and so on, and the crawlers will understand your content the same as if you had rendered it server-side. There is no need to do HTML snapshotting or _escaped_fragment_ handling for SEO anymore.
This has been announced on their developer blogs but unfortunately most of the documentation hasn't been updated with this information, so it's gone under the radar for a lot of people.
One word of warning though: Facebook's crawler does not handle pushState, so if you want to support it you still need to handle that separately.
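For reference, turning html5mode on, as suggested above, looks roughly like this in an AngularJS client such as a MEAN.js app (the module name 'myApp' is a placeholder). You also need a <base href="/"> tag in index.html and a server-side rewrite of unknown paths to index.html so that deep links like /products still load:

```javascript
// Enable HTML5 mode so URLs become /products instead of /#!/products.
// 'myApp' stands in for your own application module.
angular.module('myApp').config(['$locationProvider', function ($locationProvider) {
  $locationProvider.html5Mode(true);
}]);
```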
I've got a fairly unique situation that I don't believe any of the other topics here cover.
I have an ecommerce module that is dynamically loaded/embedded into third-party sites: no iframe, just JSON delivered straight to the web client and rendered into the page content. I have no access to these third-party sites at all, other than my JavaScript file being loaded from their page and dynamically generating the content.
I'm aware of the #! method, but that's no good here: my JS does generate "URLs" within the embedded platform, but they're fake, for the address bar only, and I don't believe Google's crawlers can reach that far.
So my question is: is there a meta tag we can set that points outside the URL, i.e. back to my server, which holds static crawlable content? For example, pointing the canonical at my server... but again, I don't think that would work.
If you implement #!, you have to make sure the URL you're embedded in supports the _escaped_fragment_ versions, which you probably can't guarantee. It's server-side stuff.
You probably can't influence the canonical tag of the page either. It again has to be done server side. Any meta tag you set via JavaScript will not be seen by a bot.
Disqus solved the problem by providing an API so the embedding websites could fetch their comments server-side and render them in plain HTML; WordPress has a plugin that does this. Disqus is also one of the few systems whose AJAX pages Google has worked out how to crawl.
Some plugins ask people to also include a plain link alongside the JavaScript. Be careful with this, as you may break Google's guidelines if you do it wrong, but you may be able to integrate the plain link with your plugin so that it directs bots and users to a crawlable version of the content, as sketched below.
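As a rough sketch of that plain-link idea (the URLs, element id, and file name below are placeholders, not part of any particular plugin): the embedding page carries a normal anchor next to your loader script, and the script swaps it for the dynamic widget, so bots and no-JS visitors still reach a crawlable page on your own server.

```javascript
// The embedding page would contain something like:
//   <a id="shop-fallback" href="https://shop.example.com/catalog">Browse the catalog</a>
//   <script src="https://shop.example.com/embed.js" async></script>

// Inside embed.js: replace the plain link with the dynamically generated widget.
document.addEventListener('DOMContentLoaded', function () {
  var fallback = document.getElementById('shop-fallback');
  if (!fallback) return;

  var container = document.createElement('div');
  container.id = 'shop-widget';
  fallback.parentNode.replaceChild(container, fallback);

  // ...fetch the JSON and render the ecommerce content into #shop-widget here
});
```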
Look into Google's crawlable AJAX standard (and why it's a bad idea) and canonical URLs.
Now you can actually do this. A complete guide and examples can be found here: https://github.com/kubrickology/Logical-escaped_fragment
Here's how I develop a bookmarklet that gets an input control's value on a web page:
I write a JavaScript function, add the bookmarklet to my browser, load my test web page, and test the bookmarklet; the result is fine.
But when I test the bookmarklet on an HTTPS website, it cannot get the input control's value. Why doesn't the bookmarklet work on the HTTPS website? Is there any way to make bookmarklets work on HTTPS sites?
Three questions:
Why can't you get the input value: there is no inherent reason why this should fail; almost certainly you are looking for the wrong id.
Do bookmarklets work on HTTPS: absolutely, HTTPS itself is not the problem.
Can I make it work on HTTPS sites: if you provide a code sample, we might be able to tell you what is wrong with it.
I know this is a pretty old question, but since I came across it while searching for a similar problem, I will add my thoughts. If you wrote your own bookmarklet, this is most likely caused by the bookmarklet trying to access insecure content. If your bookmarklet references other static content on your own server, such as HTML, JS, CSS, or image files, the browser will block that content from loading, because it is mixed content on an HTTPS page. A similar problem is discussed in this related question. If you, or someone else reading this, has the same problem, serve your content over HTTPS or reference only content that is already served over HTTPS.
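For anyone hitting the same wall, here is a minimal bookmarklet sketch along those lines (the input id and the helper-script URL are placeholders): it reads the value directly from the page, and any extra assets it needs are loaded over HTTPS so they are not blocked as mixed content.

```javascript
javascript:(function () {
  // 'q' is a placeholder id; use the real id of the input on the target page.
  var input = document.getElementById('q');
  alert('Input value: ' + (input ? input.value : 'input not found'));

  // Extra assets must be served over https, otherwise the browser blocks
  // them as mixed content when the page itself is https.
  var s = document.createElement('script');
  s.src = 'https://example.com/bookmarklet-helper.js';
  document.body.appendChild(s);
})();
```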
I am developing a website, which is currently running on my test server (IIS7). I can access the website from any browser (including different versions of Firefox), but one specific Firefox installation does this:
http://www.mysite.com/www.mysite.com
I have no clue what to look for… Has anyone had such a problem?
Your links have to use either an absolute URL with a scheme, such as <a href="http://www.mysite.com/page">link</a>, or a root-relative path, such as <a href="/page">link</a>, but not a scheme-less form like <a href="www.mysite.com/page">link</a>. The last form is treated as a path relative to the current page, which is exactly what produces URLs like http://www.mysite.com/www.mysite.com.
Some browsers do "smart" things to correct such URLs, but relying on that is bad practice.