I'm fixing some really old software.
It's written in VFP9 and uses MSXML2.XMLHTTP for accessing web APIs.
It works fine against unsecured (HTTP) sites, but not against HTTPS sites, which is what's required.
My assumption is that MSXML2.XMLHTTP only supports obsolete SSL or something along those lines. I tried MSXML2.XMLHTTP.6.0 and got the same result.
Are there updated XMLHTTP COM objects available that can talk to modern HTTPS servers?
Or is there a better option for VFP9 consuming HTTPS?
I just tried West Wind's wwclient.zip, which was updated this year, and it doesn't do modern HTTPS either, so that's out. HTTP works fine; HTTPS gets nothing.
Also, I'm testing on Windows XP. That might be an issue.
Update: It's definitely WinXP, because the same code works on newer Windows. So the question is really how to get MSXML2.XMLHTTP on WinXP to work with modern HTTPS.
This is a known issue with old versions of Windows and MSXML2.XMLHTTP; thanks to Rick Strahl for the tip:
https://west-wind.com/wconnect/weblog/ShowEntry.blog?id=937
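If you want to double-check what the server demands before blaming the OS, OpenSSL (run from any machine, not necessarily the XP box) will show which protocol versions the endpoint accepts. A sketch, where api.example.com is a placeholder for your endpoint (note that recent OpenSSL builds may refuse the -tls1 handshake unless you lower their security level):

openssl s_client -connect api.example.com:443 -tls1
openssl s_client -connect api.example.com:443 -tls1_2

If the first handshake is rejected and the second succeeds, the server requires TLS 1.1 or newer, which XP's Schannel (capped at TLS 1.0 out of the box) cannot speak, no matter which XMLHTTP COM object you create on top of it.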
Related
I have a C++ Windows application, developed with Visual Studio 2017 on a Windows 10 system. The application uses cpprestsdk to POST requests to a REST server. It works perfectly fine on the Windows 10 machine, posting requests over SSL to the REST server; I did not have to create any local certificate to make it work on Windows 10.
However, the same application, ported to Windows 7 (64-bit), is not able to POST requests over SSL to the REST server.
The same request works without SSL (http://HOST/API works), but https://host/api fails with the following error:
WinHttpSendRequest: 12029: A connection with the server could not be established.
From the same Windows 7 machine, Postman can successfully post HTTPS requests.
I have no clue what could be wrong with the implementation.
Can anyone share what could be the reason the POST request fails on Windows 7?
I'm a bit late with the answer, but I hope it might help others who face the same problem...
I think your server insists on a TLS version higher than 1.0, and TLS 1.0 is the default on Windows 7. Unfortunately, cpprestsdk cannot be configured to use a specific TLS version. On Windows, cpprestsdk uses WinHTTP, which exposes two handles: a session handle and a request handle. The TLS version (WINHTTP_OPTION_SECURE_PROTOCOLS) can only be set on the session handle, and the native handle cpprestsdk gives you access to is the request handle, so it cannot be used to configure TLS.
The only workaround is to configure Windows 7 (and indirectly WinHTTP) to use a specific TLS version as the default. Instructions on how to do that can be found here: https://support.microsoft.com/en-ca/help/3140245/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-wi.
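For reference, the change that article describes boils down to one registry value (after installing the update itself). A sketch, assuming you want TLS 1.1 + TLS 1.2 (0x00000A00) as the WinHTTP default; the Wow6432Node key covers 32-bit processes on 64-bit Windows:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp" /v DefaultSecureProtocols /t REG_DWORD /d 0x00000A00 /f
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp" /v DefaultSecureProtocols /t REG_DWORD /d 0x00000A00 /f

Restart the application afterwards so its WinHTTP session picks up the new default.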
Today when I woke up to continue development, I got a Firefox update, and then I couldn't reach my localhost websites anymore: they were being redirected to HTTPS.
We all know Google did the same a while back, but since many of us mostly use Firefox, we (at least I) didn't care and kept working. Now that Firefox has decided to play with us developers too, here are some questions that remain unanswered for me:
Questions
How do we add HTTPS to our localhost?
Should we buy an SSL certificate for our local environment?
How do I add SSL to my Laravel project on localhost?
What will happen if I develop the application with SSL and then move it to a host where my domain doesn't have SSL (will there be any conflict)?
Concerns
My main concerns are:
What if I don't want to buy an SSL certificate for my local environment and don't want to share my projects' data (names and so on) with others (basically the SSL companies)?
What if I develop with HTTPS and my live site is HTTP?
UPDATE
As I'm working on Windows and using Laragon (I don't know about MAMP, XAMPP, etc.), here is how I solved my issue, but I'm still looking for answers to my other questions.
First of all I turned on Laragon's SSL certificate, then I changed my domain suffix to .pp; now my sites load as domain.pp.
PS: I also tested the same way with .local, .test, and .app; those didn't work, but .pp did. (.app, like .dev, is a real TLD on the browsers' HSTS preload list, which is why HTTPS is forced for it.)
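On the buying question: you don't need to buy anything for local work, and nothing about your project has to be shared with a certificate vendor. You can self-sign. For example, with OpenSSL, generate a key and certificate for localhost (the file names are your choice):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=localhost" -keyout localhost.key -out localhost.crt

then trust it for the current Windows user so browsers stop warning:

certutil -user -addstore Root localhost.crt

and point your web server at the key/certificate pair (Laragon's SSL switch automates essentially this). As for developing on HTTPS and deploying to an HTTP-only host: nothing conflicts as long as your URLs aren't hard-coded to a scheme, but it's the live site that really needs the certificate, not the dev box.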
You can also change the domain suffix to one of the reserved names, such as:
.localhost
.invalid
.test
.example
The folks that created DesktopServer (which I **highly** recommend over MAMP/XAMPP) registered the domain .dev.cc for local development use when Google did its thing with .dev, which, as we all know, now requires HTTPS even for local work in Chrome or Firefox. When you use DesktopServer to install a new instance of a site locally, DS appends the .dev.cc TLD, which only exists on your local computer. DesktopServer rewrites all instances of .dev.cc to the correct production domain when you push your site live. But even if you don't use DS, you can use the .dev.cc domain.
We have a product which relies on a thin client installed on the user's machine. We make an AJAX GET request to a domain pointing to localhost, which has a real SSL certificate. This fails in Edge but works in every other browser, including IE11. Note that the same thing works if there is no SSL involved. It also works on Windows 10 Home edition.
Adding a datatype, content-type, or request method does not resolve this. The only way to fix it seems to be running the following command:
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version when it works on Home edition?
Microsoft Edge, and Windows 10 apps in general, use AppContainer Isolation:
Isolating the application from network resources beyond those specifically allocated, AppContainer prevents the application from 'escaping' its environment and maliciously exploiting network resources. Granular access can be granted for Internet access, Intranet access, and acting as a server.
Your thin client is running in Edge on Win10 Enterprise against an intranet SSL service (localhost), so access is restricted by default by this mechanism. With the command
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
you are exempting MS Edge from network isolation on the loopback adapter (localhost) on that host, so your app client (and any other locally sourced app) can run against any localhost service without restriction.
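The same tool shows and removes exemptions, so you can limit the hole to development time. To list the current exemptions:

CheckNetIsolation LoopbackExempt -s

and to remove the Edge exemption again when you are done:

CheckNetIsolation LoopbackExempt -d -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"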
This fails in Edge but works in every other browser, including IE11.
They clearly wanted to improve on the default security policy of previous versions. It's never too late, MS :) There is actually an Enhanced Protected Mode (EPM) that could prevent your app from running on IE too. Chrome has its Google Chrome Sandbox, which can also be tuned like this. Safari and Firefox have sandboxing features as well, although I am not familiar with their particulars.
Note that the same thing works if there is no SSL involved.
Typically, if you are using SSL it is because you are dealing with sensitive data and/or a critical service; if you are not, it is OK to be more lax. Again, it's just a matter of security policy.
It also works on Windows 10 Home edition. If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version when it works on Home edition?
Enterprise versions of any product are known to be more restrictive, since their target users are more security-conscious (IT people typically don't want to expose their company's intranet payroll DB service to external attackers, and the like). Also, in this case the default behavior can easily be defined or altered by experts in the IT department (check out domain security policies), so it's better to leave the default settings in "paranoid" mode and let the experts tweak them according to the company's needs.
Note there are other mechanisms at work when you are running a thin client in the browser that make this kind of protection look redundant (same-origin policy, XSS protection, and so on). Nevertheless, one can never be too safe: there are ways to work around those defenses, such as Self-XSS, that require isolation between the browser and the local network to avoid compromising the system. In the end, less exposed surface means fewer attack vectors, so isolation is good if you can afford it :)
I am writing a WebDAV server for an embedded system. Everything went fine until I tested it with the Windows client, the MiniRedir.
Access became extremely slow through MiniRedir. I captured the network traffic and found that every time I made a move, MiniRedir tried to connect to the server via SMB first (SYN packets sent to ports 137, 138, 139, and 445), and the Explorer view would not appear until the SMB requests had failed a few times, which takes more than 20 seconds.
I also tried MiniRedir against Apache + mod_dav and observed the same delay (having made sure the SMB service was disabled on the server machine).
Has anyone solved this problem, or is there any workaround for XP?
BTW: after a few days of debugging, I now believe the MS MiniRedir is not a qualified WebDAV client. A lot of bugs and shortcomings have been reported, but MS hasn't done much to improve it: http://www.greenbytes.de/tech/webdav/webdav-redirector-list.html
Significant delays can be encountered when accessing WebDAV resources if Internet Explorer is configured to auto-detect proxy servers. Try disabling proxy auto-detection (the "Automatically detect settings" checkbox in IE's LAN settings) and see if that helps.
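Another workaround that is commonly suggested (I haven't verified it on every XP service pack, so treat it as a sketch): map the share with an explicit port in the UNC path, which produces a name only the WebDAV redirector can claim and keeps SMB out of the probing:

net use Z: \\server@80\path

or, against an SSL server:

net use Z: \\server@SSL@443\path

where server and path are placeholders for your host and DAV root.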
After a few days of debugging, I now believe the MS MiniRedir is not a qualified WebDAV client.
I think this is an overstatement. The only documented issue in XP/SP3 is a by-default lack of support for basic auth, and there is a workaround for this. "When you hear hoofbeats, look for horses, not zebras."
I heard that on Windows you can log in from a web browser to a web server without going through the usual username-and-password login, using your Windows credentials directly via the NTLM protocol.
How is this achieved? Does the web server need to support some additional authentication?
Update: I'm asking about a generic web server, not just IIS. How would you do that on Apache, for instance?
The web server just needs to be configured to support Windows authentication (which will be NTLM or, better, Kerberos if both client and server are W2K or later). I believe both IIS and Apache can be configured to do that.
The browser also has to support this; at least IE does (not sure about the others, though it may be possible). Edit: it looks like Firefox has some support for this too, as does Safari on macOS.
Edit: for details on Apache, Google for NTLM authentication modules; Kerberos modules also exist. As per the other answers, this really only works on an intranet: not just because the site needs to be in the browser's Intranet zone (that only applies to IE), but because any intervening firewall will typically stop it from working, and because the necessary inter-domain trusts will probably not exist. It's also a bit trickier to get working if the Apache server is on UNIX, especially if you also have Kerberos servers on UNIX in the mix, but it is still possible.
It will only be seamless in a specific situation; namely, the web server needs to support NTLM (for example, IIS), and it needs to be in a zone that the client is configured to trust (the "Intranet Zone" in IE parlance, unless the end user has tweaked their settings).
If your web server and client PCs are on a network secured by Active Directory or similar, you can enable "Integrated Windows Authentication" in IIS on the web server for the website, which automatically logs in all IE clients (that are allowed).
As stated previously, NTLM is typically used if your back end is Windows-managed (MS Active Directory). However, there are also modules available for Apache that tie into this: mod_ntlm.
Since this is its own protocol, the browser must be able to understand it and reply to the authentication challenges. I don't know offhand which browsers support this, but my assumption would be that most do.
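If you want to watch that challenge-response exchange outside a browser, curl can perform the same handshake with the logged-on user's credentials, assuming a curl build with SSPI or GSS-API support (the host below is a placeholder; -u : with an empty name and password tells curl to use the current user, and -v prints the 401 challenge and the Negotiate/NTLM response headers):

curl --negotiate -u : -v http://intranet.example.local/

or, forcing plain NTLM:

curl --ntlm -u : -v http://intranet.example.local/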
In my experience, Kerberos is the preferred method, but I have not worked with it much, so unfortunately I don't have much advice on that front.
On a side note, I recall reading somewhere that the JRE also has ways of tying into NTLM on your web server in order to obtain identity information for the authenticated user. As stated previously, .NET has support for this as well.
Also, Firefox does not support NTLM by default, but it can be configured to, using the following tutorial: http://www.crossedconnections.org/w/?p=89
If you configure IIS to require authentication, then your users will need to log in to access the page. They then have the same rights (if not an interface) to anything on that server that they would have if they logged in the normal way (at the console).
Other than this, I am not sure what you are referring to.
Yes, this is possible. It is often used in intranet applications where users are already logged in to a Windows domain. Windows uses NTLM or Kerberos to authorize the user against a central service, typically Active Directory on the Windows platform. On the .NET platform, the current user's information can be accessed through the System.Threading.Thread.CurrentPrincipal.Identity instance.
You might also want to look into Jespa. It seems a little more straightforward than Kerberos but provides good NTLM SSO capabilities.
I was looking for more information about Kerberos (because NTLM, even v2, became deprecated with AD 2008) and found this article explaining how to make it work with Apache (as you mentioned):
http://blog.scottlowe.org/2006/08/10/kerberos-based-sso-with-apache/
This question is probably outdated (or at least solved), but if it can help someone...