If a site is secured via SSL, can a network sniffer still read the URLs being requested? - ssl-security

Can URLs be sniffed even though a client communicates with a server over SSL? I'm asking because I'm doing remote login & redirect to a physically different server via URL, and wondered if securing the communication via SSL would prevent replay attacks and the like.

The sniffer will know the IP (and probably hostname) of the server you're requesting from, and the timing/quantity of information transferred, but nothing else.
Yes, replay and man-in-the-middle attacks are prevented by SSL, as long as no compromised root certificate is trusted.

An attacker can observe both the hostname (by watching your DNS traffic) and the IP address you're connecting to. The username, password and path part of the URL should not be available, however.
Of course, the client themselves always has access to this information.
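To make that split concrete, here is a small sketch (plain Python; `sniffer_view` is a hypothetical helper name) dividing an https URL into what a passive observer can see versus what travels inside the encrypted tunnel:

```python
from urllib.parse import urlsplit

def sniffer_view(url):
    """Split an https URL into the parts a passive network observer
    can and cannot see. The hostname leaks via DNS lookups and the
    TLS SNI extension; everything after the host rides inside the
    encrypted connection."""
    parts = urlsplit(url)
    return {
        "visible": {"host": parts.hostname, "port": parts.port or 443},
        "encrypted": {"path": parts.path, "query": parts.query,
                      "userinfo": parts.username},
    }

view = sniffer_view("https://alice@example.com/login?token=abc123")
print(view["visible"])    # only host and port are observable on the wire
print(view["encrypted"])  # path, query string and username stay hidden
```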

The network sniffer would need the server's private key (or the negotiated session keys) to decrypt the SSL traffic.

SSL sets up an encrypted session between the two machines and then runs ordinary HTTP over that encrypted connection. An observer can therefore see which physical machine you are connected to, but beyond that can see nothing of the connection's contents.
As others have said they can look at the DNS requests most likely to determine the hostname.
Also, there are products that bypass this protection in a business environment. They install a new root certificate on the client machine and have a proxy server make the connection on your behalf; the proxy then generates a "fake" certificate for the site, signed with that root key, to establish the session with the browser. You appear to have a secure SSL connection to the server, but in fact it is only to the proxy. You can look at the certificate chain for the connection to determine whether this is happening, but few people bother.
So to answer your question: no, the full URL can't be sniffed off the wire, but with access to the client machine it is partly possible.
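Inspecting the certificate chain, as suggested above, can be scripted. A sketch (hypothetical helper names) that fetches the certificate a server actually presents and reports its issuer; behind a TLS-intercepting corporate proxy the issuer will be the proxy's own root rather than a public CA:

```python
import socket
import ssl

def flatten_name(name):
    """Collapse getpeercert()'s nested RDN tuples into a flat dict."""
    return {key: value for rdn in name for (key, value) in rdn}

def certificate_issuer(host, port=443, timeout=5):
    """Return the issuer of the certificate the server presents.
    On a clean connection this is a public CA; behind a
    TLS-intercepting proxy it is the proxy's own root instead."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return flatten_name(tls.getpeercert()["issuer"])

# certificate_issuer("example.com")  # inspect the organizationName key
```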

Related

How can I ensure my domain is accessible from public networks the same as it is from private networks? (Linux - Apache stack)

The Problem
Browsers have no problem accessing my domain's website (https protocol) from a private network (home WiFi, personal hotspot), but if I try to access it from a public network (my university's WiFi or Ethernet network, Costco WiFi, etc.), I get the following error message. This happens on Chrome, Firefox, Edge, and Safari (the error messages differ a bit on Safari and Firefox):
NET::ERR_CERT_AUTHORITY_INVALID
terp.app normally uses encryption to protect your information. When
Chrome tried to connect to terp.app this time, the website sent
back unusual and incorrect credentials. This may happen when an
attacker is trying to pretend to be terp.app, or a Wi-Fi sign-in
screen has interrupted the connection. Your information is still
secure because Chrome stopped the connection before any data was
exchanged.
You cannot visit terp.app right now because the website uses HSTS.
Network errors and attacks are usually temporary, so this page will
probably work later.
(By the way, it would give me the "because the website uses HSTS" message even before I implemented HSTS)
Background
I have set up an Apache2 web-server on my Linux VPS (Ubuntu 20.04). I recently configured everything so a domain I've purchased is accessible and working on this server:
DNS records point to the server (domain is with Google Domains; VPS is hosted by Hostinger)
Redirect is achieved with custom records, which are two A records:
1st A record: Host Name: terp.app | Data: <VPS IP address>
2nd A record: Host Name: www.terp.app | Data: <VPS IP address>
Apache server set up with virtual hosts
set up an SSL certificate through letsencrypt.org (certbot)
I selected the option during this process to redirect http to https
configured DNS certificate authority authorization (CAA) so only letsencrypt.org-issued certificates are accepted
Achieved with one CAA record:
CAA record: Host Name: terp.app | Data: 0 issue "letsencrypt.org"
I enabled Strict Transport Security (HSTS)
Not sure if the fact that it is a .app domain has some sort of an impact
Bottom line: when I use ssllabs.com/ssltest/, I get an A+ grade and everything checks out. The https connection has zero issues on private networks. I can confirm the problem on public networks isn't caused by a captive portal, since I'm able to access any other website from these networks.
Looking for a Solution
Yes, I know that I can navigate to chrome://net-internals/#hsts, and enter my desired domain name into "Delete Domain Security Policies." This is unequivocally not the solution I'm looking for. I want my site to work for users without them needing to do something like that. Other high-profile websites obviously don't have this issue, so...
How can I ensure my users will be able to access my site even when they're on public networks? Could it be because I'm using letsencrypt.org?
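One machine-dependent cause of NET::ERR_CERT_AUTHORITY_INVALID worth ruling out is a server that sends only its leaf certificate: clients that already have the Let's Encrypt intermediate cached succeed, while fresh ones fail. With certbot this usually means pointing Apache's SSLCertificateFile at fullchain.pem rather than cert.pem. A small diagnostic sketch (hypothetical helper names); Python's ssl module does no AIA fetching, so a strict handshake failing here while some browsers still load the site points at a missing intermediate:

```python
import socket
import ssl

# OpenSSL verify error 20 = "unable to get local issuer certificate",
# the classic symptom of a server that omits the intermediate.
UNABLE_TO_GET_ISSUER = 20

def diagnose(verify_code):
    """Map an OpenSSL verify error code to a likely cause (sketch)."""
    if verify_code == UNABLE_TO_GET_ISSUER:
        return "server is probably not sending the intermediate certificate"
    return "other verification problem (code %d)" % verify_code

def check_chain(host, port=443, timeout=5):
    """Strict TLS handshake against host, reporting why it failed."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "chain verifies"
    except ssl.SSLCertVerificationError as exc:
        return diagnose(exc.verify_code)
```

If `check_chain("terp.app")` reports a missing intermediate while browsers on some machines still load the site, the Apache certificate-file configuration is the place to look.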

How does a DNS proxy work? (smart DNS)

I am trying to build a new DNS server that acts as a proxy for certain domain names and uses a public DNS as upstream.
My understanding of DNS:
Client asks DNS (x.x.x.x) about example.com
DNS will look up inside its zones (or parent and root) and find example.com can be found at i.i.i.i
DNS will send i.i.i.i to the client.
Now, the client asks for the IP address of restricted.test, and the DNS server knows it is a restricted website, so instead of giving the website's real IP, it gives its own proxy address p.p.p.p to the client.
Please correct me if I'm wrong so far, but when the client tries to connect to p.p.p.p, how does the proxy server know which website the client wants to reach?
I really want to know how these work under the hood
Thanks in advance.
The mechanism you are asking about is the Proxy Auto-Configuration (PAC) file.
Read more about it here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Proxy_servers_and_tunneling/Proxy_Auto-Configuration_PAC_file
And here:
https://www.websense.com/content/support/library/web/v76/pac_file_best_practices/PAC_explained.aspx
Essentially, in corporate networks a PAC file is pushed out to every computer, and browser settings are configured to enable it. But it can also be done manually. Just check your browser proxy settings to see the location of the PAC file it points to.
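Independently of PAC files, a transparent proxy sitting at p.p.p.p can recover the destination the client wants from the request itself: the plaintext HTTP Host header, or for HTTPS the SNI field of the TLS ClientHello. A minimal sketch of the Host-header case (hypothetical helper name):

```python
def destination_from_request(raw):
    """Extract the destination hostname from the first bytes of a
    plain-HTTP request, the way a transparent proxy does: the client
    connected to the proxy's IP, but the Host header still names the
    site it actually wants. (For HTTPS the same role is played by the
    SNI field of the TLS ClientHello.)"""
    for line in raw.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode()
    return None

request = b"GET / HTTP/1.1\r\nHost: restricted.test\r\n\r\n"
print(destination_from_request(request))  # -> restricted.test
```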

Script for automatic proxy http traffic authorization

I have a virtual machine (a local program) that works through a proxy, but it does not support entering a login and password for the proxy, and my proxy requires both. I need an intermediate service to log in to the proxy with those credentials, so that my virtual machine receives ready-made authorized traffic from a local port; in other words, an analogue of a router on a local port.
I found this, but it does not work for me:
https://github.com/sjitech/proxy-login-automator
Can anyone suggest a solution?
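In case it helps, the general shape of such a tool is a small local relay that injects a Proxy-Authorization header into each request before forwarding it upstream. A rough, untested sketch (all names and the port layout are assumptions, not a drop-in replacement for the linked project):

```python
import base64
import socket
import threading

def proxy_auth_header(user, password):
    """Build the Proxy-Authorization value a basic-auth proxy expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def _pump(src, dst):
    """Shuttle bytes one way until the source closes."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    finally:
        dst.close()

def _handle(client, proxy_host, proxy_port, auth):
    upstream = socket.create_connection((proxy_host, proxy_port))
    head = client.recv(65536)
    # Insert the auth header right after the request line, then relay.
    line, _, rest = head.partition(b"\r\n")
    upstream.sendall(line + b"\r\nProxy-Authorization: " + auth + b"\r\n" + rest)
    threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=_pump, args=(upstream, client), daemon=True).start()

def forward(listen_port, proxy_host, proxy_port, user, password):
    """Local relay: programs connect to 127.0.0.1:listen_port with no
    credentials; each connection is forwarded to the real proxy with
    the Proxy-Authorization header injected. Blocks forever."""
    auth = proxy_auth_header(user, password).encode()
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        threading.Thread(target=_handle,
                         args=(client, proxy_host, proxy_port, auth),
                         daemon=True).start()
```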

How to use direct connection applications behind a kerberos proxy

I have a corporate proxy using Squid and Kerberos for authentication. The proxy is configured for standard use, i.e. it allows HTTP, HTTPS and a few others, and blocks everything else. Many applications support basic proxy authentication but not Kerberos-based authentication, and many others connect directly to the internet. I used Proxifier before the upgrade to Kerberos to make my applications use the proxy, but I cannot do so now. I then installed an application called Px to create a local proxy that handles the Kerberos side, but the proxy it creates is a simple HTTP proxy and Proxifier doesn't work correctly with it. Does anyone have a setup for a situation like this? I use Windows 10 and I obviously don't have access to the server where Squid is configured. The application I need to connect to the internet uses standard HTTPS ports; it's not a torrent application nor anything that uses the ports blocked by Squid. Thanks in advance.
Ok, for this particular case I've found the following setup to solve 99% of my problems.
First get Px here https://github.com/genotrance/px
Next get Fiddler: http://www.getfiddler.com/dl/Fiddler4BetaSetup.exe
Configure Px with your user and your domain and run it. By default it creates a running proxy on 127.0.0.1:3128.
Configure your system proxy to use the proxy supplied by Px.
Execute Fiddler; it should create ANOTHER proxy at 127.0.0.1:8888.
Use this proxy in your apps. Proxifier should work as well.
Why use Fiddler and not 127.0.0.1:3128 directly? Px creates a pure HTTP proxy, while Fiddler can tunnel HTTPS and CONNECT requests through it.
Any request will pass through Fiddler, which will redirect it to the Px proxy, which will redirect it to the Squid proxy (so expect very slow speeds).
In the end, since you're just redirecting your apps towards your proxy, if your proxy bans regular-expression matches or direct IP connections, some apps will NOT work; in those cases using Tor or a VPN is the only real solution. Hope this helps someone avoid all the headaches I went through.
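To sanity-check the chain end-to-end from a script, you can point Python's stdlib at the local Fiddler endpoint (127.0.0.1:8888, its default port as in the steps above), which forwards to Px, which performs the Kerberos handshake with Squid:

```python
import urllib.request

# Route stdlib HTTP(S) requests through the local Fiddler proxy,
# which in turn chains to Px and then to the Kerberos-authenticated
# Squid proxy. Ports are the defaults from the setup described above.
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
})
opener = urllib.request.build_opener(proxy)
# Uncomment on the proxied machine to verify the chain works:
# print(opener.open("https://example.com", timeout=10).status)
```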

Is self-signed HTTPS + whitelisted IP safe for RPC?

There is an HTTPS server with a self-signed certificate on IP A and an HTTPS client on IP B. The server only allows access from IP B via iptables. The client accesses the server with the correct domain name and IP (defined in the local hosts file).
Is this a safe pattern? I want to use it for remote procedure calls between two hosts with public IPs. Are there any security problems? Can it prevent man-in-the-middle attacks?
MitM attacks are still possible as long as the HTTPS client doesn't verify the certificate somehow (e.g. by comparing its fingerprint).
Man-in-the-Middle means that an attacker is between A and B: For A it seems as if A is talking directly to B and for B vice versa, but in reality both are talking with the attacker.
By verifying the SSL certificate (e.g., by trusting a CA or checking a fingerprint), B (the client) can confirm that it is really talking to A and not to an attacker.
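One way to do that fingerprint check in practice is certificate pinning. A sketch in Python (hypothetical helper names; the pin is whatever SHA-256 you computed out of band from the server's certificate):

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(der_bytes).hexdigest()

def connect_pinned(host, port, pinned_sha256):
    """Open a TLS connection that trusts exactly one certificate: the
    self-signed one whose DER fingerprint was pinned out of band.
    CA validation is disabled because the pin check replaces it."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the pin below replaces CA validation
    sock = socket.create_connection((host, port), timeout=10)
    tls = ctx.wrap_socket(sock, server_hostname=host)
    der = tls.getpeercert(binary_form=True)
    if fingerprint(der) != pinned_sha256.lower():
        tls.close()
        raise ssl.SSLError("certificate fingerprint mismatch -- possible MitM")
    return tls
```

With this check in place, an attacker between A and B would have to present the exact pinned certificate, which they cannot do without the server's private key.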
