AppAuth loopback authentication fails on macOS with Chrome

We're using AppAuth for a macOS application to authenticate Google accounts. This has been working for years, but recently Chrome has started to block all HTTP connections by default. The loopback server in AppAuth is hard-coded to work with HTTP connections only. The following issue also seems to have gone unanswered: https://github.com/openid/AppAuth-iOS/issues/624
What other options do we have for running an HTTPS loopback server on macOS for OAuth2 authentication? We need the loopback server to be able to extract the parameters Google sends back after authentication. Asking users to switch away from Chrome is not desirable.

Interesting - with loopback desktop logins there are two URLs involved:
The URL in the desktop app, which is meant to be HTTP according to the OAuth standards (RFC 8252), since it runs on end-user PCs. Using HTTPS would require the entire user base to host SSL certificates, which is highly impractical. Typically a loopback URL is a value such as http://localhost:8000, where the port number is often calculated at runtime.
The URL used to invoke the system browser is a value such as https://myauthserver/authorize?client_id=xxx&redirect_uri=http://localhost:8000..., and this should be HTTPS of course.
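To make that concrete, here is a minimal sketch of the flow in TypeScript/Node. AppAuth does the equivalent natively, so this is only an illustration of the protocol; the client_id, scope and state values are placeholders, and the Google endpoint is the standard authorization endpoint:

import * as http from "http";
import { AddressInfo } from "net";

// Loopback listener: plain HTTP on 127.0.0.1, as the standard allows.
const server = http.createServer((req, res) => {
  // Google redirects the browser here with the result in the query string.
  const url = new URL(req.url ?? "/", "http://127.0.0.1");
  const code = url.searchParams.get("code");
  const state = url.searchParams.get("state");
  res.end("You may close this window.");
  server.close();
  console.log("authorization code:", code, "state:", state);
});

// Port 0 asks the OS for a free ephemeral port at runtime.
server.listen(0, "127.0.0.1", () => {
  const port = (server.address() as AddressInfo).port;
  // The URL opened in the system browser is HTTPS; only the
  // redirect_uri inside it points at the local HTTP listener.
  const authorize = new URL("https://accounts.google.com/o/oauth2/v2/auth");
  authorize.searchParams.set("client_id", "YOUR_CLIENT_ID"); // placeholder
  authorize.searchParams.set("redirect_uri", `http://127.0.0.1:${port}`);
  authorize.searchParams.set("response_type", "code");
  authorize.searchParams.set("scope", "openid email");
  authorize.searchParams.set("state", "RANDOM_STATE"); // placeholder anti-CSRF value
  console.log("Open in the system browser:", authorize.toString());
});

Note also that browsers normally treat http://localhost and http://127.0.0.1 as secure contexts, so a blanket HTTP block should not apply to the loopback redirect.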
PROBLEM DIAGNOSIS
I'd be very surprised if Google had blocked this for standard desktop logins, since it has been referenced on their Native Apps page for years.
Are you sure something else is not the cause? One possibility might be a lack of a user gesture in the system browser. Is the problem consistent, and are there any differences in these cases:
Make Safari Browser the default before login
Make Chrome Browser the default before login
Make Chrome Browser the default before login and clear browser cache
Let me know and I may be able to suggest some next steps ...

Related

Windows authentication box pops up with integrated authentication on web page

I am running two Windows Server 2016 machines with IIS 10.0.14393, one server for staging purposes and one for production.
The application has one "front-end app" and one "back-end REST API" running on the same IIS server. The front end communicates with the back end (surprise!). The difficulty I am facing is that the staging server works as expected, i.e. no "Sign in" box appears when entering the front-end web page (React). However, on the production server this box pops up.
When the page is loaded, there is JavaScript that fetches some information from the API, and it seems that this async fetch is causing the pop-up to occur (the request is in pending mode until login).
I have studied the configuration of IIS on the two servers but can't seem to find any obvious differences.
Both instances have both Windows Authentication and Anonymous Authentication turned on for both the front end and the back end. I need this as the API has different types of authentication for its endpoints.
Has anyone solved a similar issue?
Thanks
If someone experiences a similar issue the following link may help: https://support.microsoft.com/en-us/help/258063/internet-explorer-may-prompt-you-for-a-password
In my case I was sending the request to the API with the full domain URL. The problem was fixed by just using the machine name (and port, in my case) when sending the request. If the fully qualified domain name (anything containing dots) is used, the system believes that the request is meant for the Internet and not the intranet, and will not include any credentials.
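As a hedged illustration (the host names here are made up), the difference is just the shape of the URL the front end calls:

// Bare machine name => Local intranet zone: Windows credentials sent silently.
const apiBase = "http://appserver01:8080"; // hypothetical machine name
// Dotted FQDN => Internet zone: credentials withheld, sign-in box appears.
// const apiBase = "http://appserver01.corp.example.com:8080";

fetch(`${apiBase}/api/info`, { credentials: "include" })
  .then((res) => res.json())
  .then((data) => console.log(data));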
Another, and probably more robust, solution is to add the site in question to: Internet Properties -> Security -> Local intranet -> Sites -> Advanced.

Edge AJAX calls fail to a domain with SSL pointing to localhost

We have a product which relies on a thin client installed on users' machines. We make an AJAX GET request to a domain pointing to localhost which has a real SSL certificate. This fails in Edge but works in every other browser, including IE11. Note that the same works if there is no SSL involved. It also works on Windows 10 Home edition.
Adding a datatype, content-type or request method does not resolve this. The only way to fix this seems to be running the following command:
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version but it works on Home edition?
Microsoft Edge, and Windows 10 apps in general, use AppContainer Isolation:
Isolating the application from network resources beyond those specifically allocated, AppContainer prevents the application from 'escaping' its environment and maliciously exploiting network resources. Granular access can be granted for Internet access, Intranet access, and acting as a server.
Your thin client is running on Windows 10 Enterprise Edge against an intranet SSL service (localhost), so access is restricted by default by this mechanism. With the command
CheckNetIsolation LoopbackExempt -a -n="Microsoft.MicrosoftEdge_8wekyb3d8bbwe"
you are exempting MS Edge from network isolation on the loopback adapter (localhost), so your app client (and any other locally sourced app) can run against any localhost service without restriction.
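As a side note, the same tool can list the current exemptions with
CheckNetIsolation LoopbackExempt -s
and -d in place of -a removes an exemption again.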
This fails in Edge but works in every other browser, including IE11.
They clearly wanted to improve on the default security policy of previous versions. It's never too late, MS :) There is actually an Enhanced Protected Mode (EPM) that could prevent your app from running on IE too. Chrome has its Google Chrome Sandbox, which can also be tuned like this. Safari and Firefox also have sandboxing features, although I am not familiar with their particularities.
Note that the same works if there is no SSL involved.
Typically, if you are using SSL it is because you are dealing with sensitive data and/or a critical service; if you are not, it is OK to be more lax. Again, just a matter of security policy.
It also works on Windows 10 Home edition. If this is expected behavior, can someone explain why Microsoft would block this on an Enterprise version but it works on Home edition?
Enterprise versions of any product are known to be more restrictive, since their target users are more security-conscious (IT people typically don't want to expose their company's intranet payroll DB service to external attackers, and things like that). Also, in this case the default behavior can easily be defined/altered by experts in the IT department (check out domain security policies), so it's better to leave the default settings in "paranoid" mode and let the experts tweak them according to the company's needs.
Note there are other mechanisms at work when you are running a thin client in the browser that make this kind of protection redundant (same-origin policy, XSS protection and so on). Nevertheless, one can never be too safe: there are ways to work around those defenses, such as Self-XSS, that require isolation between the browser and the local network to avoid compromising the system. In the end, less exposed surface means fewer attack vectors, so isolation is good if you can afford it :)

Safari won't load Windows authentication on a specific network

I have an intranet web application which uses Windows Authentication. All Windows/PC users can log in fine.
There happen to be two Mac users who use Safari as their default browser and prefer it over Chrome for Mac.
When trying to access the intranet web application on the network, nothing happens. But when accessing it from another network or internet source outside the network, the user is able to log in.
Is there something I have missed? Any thoughts on this please.
Converting them to Chrome is the last resort.
Thanks in advance
If anyone stumbles across this, the answer was simple: in IIS, under Authentication > Windows Authentication > Providers, I removed Negotiate and kept NTLM. Apple seems to accept this security method.
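For reference, the resulting configuration fragment looks roughly like this in web.config (assuming the windowsAuthentication section is unlocked at the site level):

<system.webServer>
  <security>
    <authentication>
      <windowsAuthentication enabled="true">
        <providers>
          <clear />
          <add value="NTLM" /> <!-- Negotiate removed; NTLM only -->
        </providers>
      </windowsAuthentication>
    </authentication>
  </security>
</system.webServer>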

Kerberos SSO authentication in browser on Windows

In a company network there is a web page which uses Kerberos single sign-on. I am connecting to this network via VPN.
When using a Mac, I can just write kinit username@REALM.LOCAL in the console, I get the ticket, and after this I can open the web page in a browser and it works.
The other story happens on Windows. I have my PC, and I don't want it to become a member of the company domain. Via the MIT Kerberos client I can get a Kerberos ticket, but of course no browser is aware of its existence.
Is there a way to feed this ticket to a browser on Windows?
Safari is very friendly: it will give your Kerberos tickets to anybody. IE and Firefox need to be configured to do this, and I'm not sure they will have access to the Kerberos tickets unless your Windows box is in the AD domain.
Basically, you need to configure your browser to support SPNEGO. With Firefox, you need to tweak some variables in about:config; see
http://www.microhowto.info/howto/configure_firefox_to_authenticate_using_spnego_and_kerberos.html
for the exact details. IE is a whole lot trickier.
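For the MIT-Kerberos-without-domain-membership case specifically, the about:config entries involved are roughly the following; the site URL and DLL path are placeholders for your own setup (trusted-uris whitelists the sites allowed to receive SPNEGO, use-sspi = false makes Firefox use GSSAPI instead of Windows SSPI, and gsslib points it at the MIT Kerberos library):

network.negotiate-auth.trusted-uris = https://intranet.example.com
network.auth.use-sspi = false
network.negotiate-auth.gsslib = C:\path\to\gssapi64.dll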

How to identify a computer which I have redirected

I have the following problem to solve:
A few months ago I started a website where you can watch YouTube videos which aren't available in your country. Everything works fine, but now I want to offer a new method where I route all the requests directly over my server. For that I will later use a custom DNS server; right now I use the hosts file for testing. But I really have no idea how I can identify the user. I can prompt the user to log in on a website, but I want it to work system-wide, so if he uses a YouTube downloader, for example, it has to work there as well and not only in the browser, where I could use a session system with cookies. I want a solution where the user can identify himself once in a while, like on a website, but how can my server detect whether a request comes from a user who is logged in or not?
There are several ways that this could be accomplished, with varying levels of difficulty:
A standard proxy server over HTTPS. Your service could simply be a proxy server, and every "client" would update their browser to point to your proxy server. You could also simplify this by using a proxy PAC file (proxy auto-config); a minimal PAC sketch follows this list.
An anonymizing interface. The end user would not be able to use their standard search tools etc.; instead they would have to use a web page, much like what Google Translate does.
A browser plugin. There are already Firefox plugins which do something similar to this; they change the way the browser resolves DNS. This may be the best bet for you but would require development work.
An actual install utility that you have your users install on their machines, which updates the DNS servers.
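A PAC file is just a JavaScript callback the browser evaluates for every request. A minimal sketch, where the proxy host and the exact video domains are placeholders to adapt:

function FindProxyForURL(url, host) {
  // Send only video traffic through the (hypothetical) unblocking proxy;
  // everything else goes to the network directly.
  if (dnsDomainIs(host, ".youtube.com") || dnsDomainIs(host, ".googlevideo.com")) {
    return "PROXY proxy.example.com:8080";
  }
  return "DIRECT";
}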
