We have an ASP.NET website on IIS with a Lead Forensics tracking script, which had been working fine prior to switching to require SSL on all pages. It is something similar to:
<script type="text/javascript" src="http://lead-123.com/js/8303.js"></script>
Since requiring SSL, however, the tracking no longer seems to be working.
Obviously this is caused by requesting an http URL from a page served over https. But the following two attempts are also failing:
src="//lead-123.com/js/8303.js"
src="https://lead-123.com/js/8303.js"
Visiting the https URL for the tracking script shows that it is being served (albeit with security errors).
I'm sure Lead Forensics have considered this. Does anyone know if there are any conventions or workarounds that can somehow be used so that security errors aren't reported on the site and for tracking to work? I can't find any documentation on this, and attempts to contact them haven't proven successful to date.
Update
I'm not sure the script is hosted at the https link after all (it only responds in my browser after I have successfully received a response from the http link). Nevertheless, I am still looking for a convention on how to handle this situation, whether a separate link is provided when using SSL, or indeed whether the technology is even capable of working over SSL.
Call Lead Forensics support. They can configure a secure endpoint for the tracker upon request:
<script type="text/javascript" src="https://secure.leadforensics.com/js/XXXXX.js"></script>
<noscript>
<img src="https://secure.leadforensics.com/XXXXX.png" style="display:none;" />
</noscript>
There is nothing you can do about this on your side. The CN (Common Name) assigned to this certificate is *.leadforensics.com, yet they keep handing out tracker URLs on other domains bound to this same certificate.
ERR_CERT_COMMON_NAME_INVALID is the error we get as a result.
Because this entire process runs in the background, the browser never surfaces a certificate prompt; it simply doesn't load the JS and PNG files, and tracking doesn't happen.
I am not sure how Lead Forensics can even do this!
We could easily create a workaround by using the HttpWebRequest class and overriding the X509 certificate validation callback to always return true, but such a workaround would violate security norms and may mask other vulnerabilities.
So I've asked Lead Forensics to correct it.
Interestingly, this is apparently the official way to reach Google API support (akin to Microsoft/SO's documentation partnership?), but obviously this limits the private information that I can include in my "support request"...
I have added and then verified 400+ domains (with each of their http/https/www/no-www variations, for 800+ total) on Google Search Console via the related APIs, without issue.
One domain is giving me a problem with verification via 'HTML File Upload', even though it's been triple-checked to be set up the same as the other 825 that verified without issue.
I compared WHOIS records and the intodns.com DNS Health report, and I also cleared the DNS cache and waited a couple of days to see whether it was a caching issue.
I've tried multiple verification methods, but this error persists on both the http:// and http://www. versions of the one site. The site itself works fine and I can't see any anomalies with it on my end.
I'm not sure if this could be related, but the webmaster site list does include one strange property that is apparently verified (in addition to the two unverified versions of the problem domain):
(I've masked the ID number since I have no idea what it represents.)
How can I get my ownership of this site verified on Google Search Console?
You can verify your site ownership by an alternate method: inserting an HTML tag, which you can get from Search Console. Other ways to verify ownership are Google Tag Manager and Google Analytics.
A sample HTML tag: <meta name="google-site-verification" content="String_we_ask_for">
Is there a way to make sure Magento calls secure URLs when it's in the checkout process? The problem is the web browser complains over HTTPS because not all resources are secure. In the source I have things like <script type="text/javascript" src="http://something"> which triggers this error. I'm afraid customers won't think the site is secure.
I know I can use <?php $this->getUrl('something/', array('_secure'=>true)) ?>. However, I don't want all my JavaScript resources to be secure all the time, just in the checkout process.
It seems Magento should handle this automatically when you configure it use frontend SSL, but apparently not.
So my question is what is the best way to handle this?
Thanks
The customer would be correct - the page content is not secure.
If you hardcode protocols in markup or incorrectly specify protocols in code, the system delivers what you ask. It's incumbent on the implementer to make sure the markup is correct.
That said, asset sources can use protocol-relative URLs in markup:
<script src="//cdn.com/some.js"></script>
Also, the secured/non-secured status can be passed dynamically to the URL-generating arguments, for example by keying _secure off whether the current request is secure.
Magento serves everything it controls securely. The problems usually come from scripts that load content from other sites, over which Magento has no control; it would literally have to rewrite such a script to fix it.
It's your responsibility to see that the scripts are properly written or else banished to pages where they belong so the browser doesn't complain about insecure content.
A case where relative protocols did not work: we took on Authorize.Net and chewed them out because their security badge caused Internet Explorer to pop up the insecure-content warning during cart operations, the very place you want the badge to show so the customer knows their credit card info is being handled properly. They had the problem fixed within two weeks after we told them people were not ordering and were actually complaining about site security when we showed their badge in the cart.
It was caused by the script they supplied at the time, which we tried to modify for a relative protocol; it then turned around and called yet another script that retrieved plain old port-80 insecure content.
Facebook can go like itself on another page, it doesn't belong in cart operations (another script menace we had to deal with).
I had some time off recently and thought it would be a neat exercise to see how quickly I could put together a working program to automatically retrieve '.torrent' files for me. I'm aware there are existing solutions, but this was more of a programming exercise.
All was well: it ran, checked the sites for new torrents, and attempted to download them. But this is where I'm running into a problem; one of the sites I'm trying to download the .torrent file from is giving me a file containing this instead of the torrent file:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.2.3 (CentOS) Server at forums.mvgroup.org Port 80</address>
</body></html>
My first thought was a broken link, but I went and successfully downloaded the file in my browser, so it's not a broken link. My next thought was that maybe I'm not downloading the file correctly. This is the example that I used, and this is the actual code that's doing the downloading in my program.
I have a sneaking suspicion this is going to turn out to be one of those brain-dead simple gotchas, but I'm having a heck of a time figuring it out. Does anyone know why I'm getting a 400, or how to fix this?
A broken link should return a 404 Not Found error. Because you can retrieve the file with a browser, I see two other possible issues: either your code is missing redirect handling that the browser performs automatically, or it is missing a needed session ID, cookie, or other state value. Again, a browser will handle those, but your code will not unless you write it in or take advantage of the right gem.
The sample code you link to at http://snippets.dzone.com/posts/show/2469 is rudimentary and is not wired to follow redirects, which is what I suspect you need. I glanced at your code and it doesn't handle them either. The "Following Redirection" sample code in the docs for Net::HTTP shows how to do it, along the lines of the sketch below.
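A minimal version of that redirect-following fetch, adapted from the "Following Redirection" example in the Net::HTTP docs (the torrent URL is a placeholder for yours):

require 'net/http'
require 'uri'

# Follow up to `limit` redirects, then return the final response body.
def fetch(uri_str, limit = 10)
  raise ArgumentError, 'Too many HTTP redirects' if limit == 0

  response = Net::HTTP.get_response(URI.parse(uri_str))
  case response
  when Net::HTTPSuccess     then response.body
  when Net::HTTPRedirection then fetch(response['location'], limit - 1)
  else response.error!  # raises for 4xx/5xx, e.g. the 400 you are seeing
  end
end

# 'wb' keeps the torrent's binary payload intact.
File.open('file.torrent', 'wb') do |f|
  f.write fetch('http://forums.mvgroup.org/some/file.torrent')
end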
Rather than writing the code to retrieve the URL yourself, amounting to reinventing the wheel, I recommend using Ruby's OpenURI (require 'open-uri'), because it handles redirects automatically. It's easy to use and a good workhorse for those normal "get a URL" jobs.
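For example, a sketch with a placeholder URL (URI.open is the Ruby 2.5+ spelling; older Rubies used the bare open provided by open-uri):

require 'open-uri'

# OpenURI follows redirects on its own; binary mode keeps the bytes intact.
data = URI.open('http://forums.mvgroup.org/some/file.torrent', 'rb', &:read)
File.binwrite('file.torrent', data)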
If you want to have a gem that handles redirects and cookies and session IDs, look at Mechanize. It's a very good gem for general purpose tasks, though it is really designed for navigating web sites.
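A Mechanize sketch of the same download (placeholder URL again); because the agent carries cookies and session state between requests, it also covers the second failure mode mentioned above:

require 'mechanize'

agent = Mechanize.new
# Non-HTML responses come back as a Mechanize::File, which can be saved directly.
agent.get('http://forums.mvgroup.org/some/file.torrent').save('file.torrent')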
For more robust tasks, Curb and Typhoeus are good because they can handle multiple requests, though you'll need to write a bit more code for managing the files and navigating sites. For a file download they'd be fine.
You need a logging proxy in between, so you can see which bytes go over the wire.
If you use Eclipse, it has an HTTP proxy available. I believe it is part of the Eclipse Java EE download.
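If setting up a proxy is overkill, Net::HTTP can also dump the wire traffic itself. A quick sketch (placeholder URL; set_debug_output is meant for debugging only, never production):

require 'net/http'
require 'uri'

uri = URI.parse('http://forums.mvgroup.org/some/file.torrent')
http = Net::HTTP.new(uri.host, uri.port)
http.set_debug_output($stderr)  # prints every request and response byte
http.get(uri.request_uri)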
I'm totally stumped here, so any ideas would be appreciated.
I have a RichFaces application, which recently became non-functional when used from IE6. The problem began when I included the following line in my main template:
<a4j:loadScript src="resource://jquery.js"/>
This results in the following generated HTML:
<script src="/AgriShare/a4j/g/3_3_3.Finaljquery.js.jsf" type="text/javascript"></script>
By "non-functional" I mean that pages no longer load, b/c the first page appears to hang the browser for a long time, and then all references to jQuery say that the object was not defined. Eventually this appears to put IE6 in a state where further clicks do nothing.
After a lot of trial and error I have established the following:
The app still works in Chrome, Firefox and IE8
The app still works in IE6 if I switch to HTTP. So the problem appears to be related to HTTPS, which I can't do without.
I further narrowed down the problem by trying to manually request 3_3_3.Finaljquery.js.jsf in IE6 address bar. It asks me if I want to save the file (so it can see it is there), but when I say 'Save', it hangs for about 5 seconds and then says:
Internet Explorer cannot download 3_3_3.Finaljquery.js.jsf from [host_name].
The connection with the server was reset.
Doing the same download over HTTP succeeds.
Gradually reducing the size of the file, I noticed that the download eventually succeeds over HTTPS if I get the file size below ~110 KB. There is no specific size at which it works, though; I tried the same trick with prototype.js and it worked at a different size value.
I can't trace the SSL session, because I cannot get access to the certificate's private key, so now I have absolutely no clue what to try next.
Any ideas would be greatly appreciated.
Try using Fiddler for debugging. It can handle SSL.
You might also want to consider hosting the server yourself and taking a look at the server log.
The problem was solved by turning off compression of JavaScript files in Web Cache.
Sounds like the problem might be related to this: http://support.microsoft.com/default.aspx?scid=kb;en-us;327286
We are grabbing our feed from FeedBurner using the jQuery jGFeed plugin.
This works great until the moment our users are on an https:// page.
When we try to load the feed on that page, the user gets a message that there is mixed content, protected and unprotected, on the page.
A solution would be to load the feed over https, but Google doesn't allow that; the certificate isn't working.
$.jGFeed('https://feeds.feedburner.com/xxx')
Does anyone know a workaround for this? The way it functions now, we simply cannot serve the feed in our pages when on https.
At this time Feedburner does not offer feeds over SSL (https scheme). The message that you're getting regarding mixed content is by design; in fact, any and all content that is not being loaded from a secured connection will trigger that message, so making sure that all content is loaded over SSL is really your only alternative to avoid that popup.
As I mentioned, Feedburner doesn't offer feeds over SSL, so realistically you'll need to look into porting your feed to another service that DOES offer feeds over SSL. Keep in mind what I said above, however, with respect to your feed's content as well. If you have any embedded content that is not delivered via SSL then that content will also trigger the popup that you're trying to avoid.
This comes up from time to time with other services that don't have an SSL cert (Twitter's API is a bit of a mess that way too.) Brian's comment is correct about the nature of the message, so you've got a few options:
1. If this is on your server, and the core data is on your server too, then you've got end-to-end SSL capabilities; just point jGFeed to the local RSS feed that FeedBurner's already importing.
2. Code up a proxy on your server to marshal the call to FeedBurner and return the response over SSL (see the sketch below).
3. Find another feed service that supports SSL, and either pass it the original feed or the FeedBurner one.
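For option 2, a minimal sketch of such a proxy, assuming a Ruby stack with Sinatra; the route name and feed URL are placeholders, and the same few lines translate to any server platform:

require 'sinatra'
require 'open-uri'

# Fetch the feed server-side and re-serve it from our own origin,
# so the browser only ever sees it arrive over our SSL connection.
get '/feed.rss' do
  content_type 'application/rss+xml'
  URI.open('http://feeds.feedburner.com/xxx', &:read)
end

The page then points jGFeed (or any client) at https://yoursite.example/feed.rss, and no insecure request ever reaches the browser.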
I have started using the paid WordPress theme Schema for several of my blogs. In general, it is a nice theme, fast and SEO friendly. However, my blogs are all on HTTPS, and I noticed that if I had a Google FeedBurner widget in the sidebar, Chrome would show a security error for any secure page with an insecure form call on the page.
To fix this, it is really simple:
you just need to edit the file widget-subscribe.php located at /wp-content/themes/schema/functions/ and replace all occurrences of "http://feedburner.google.com" with "https://feedburner.google.com".
Save the file and clear the cache, and your browser will show a green padlock.
I fixed this on my blog, www.androidloud.com.