HTTPS config doesn't work in Zuul routing

I have a router application built with Zuul and many services running in the backend; client requests are routed to those services by Zuul.
Everything works well over HTTP, but when I configure the router and all the services for HTTPS, the following error is raised:
javax.net.ssl.SSLPeerUnverifiedException: Certificate for <127.0.0.1> doesn't match any of the subject alternative names: []
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:467) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:397) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.3.jar:4.5.3]
The Zuul yml file:
zuul:
  ignoredPatterns: /reza,/we
  routes:
    trp:
      path: /micro1/**
      sensitiveHeaders:
      url: https://127.0.0.1:8080/micro1
server:
  compression:
    enabled: true
  port: 80
  ssl:
    key-store: classpath:keystore.jks
    key-store-password: password
    key-password: matin1234
And the yml file of one of those services:
server:
  compression:
    enabled: true
  port: 8080
  ssl:
    key-store: classpath:keystore.jks
    key-store-password: password
    key-password: matin1234
First, I want to know whether the concept of HTTPS over Zuul works properly, and second, I want to know how to fix my problem.
Note: I am not using Eureka server registration.

To consume an HTTPS service you have to trust the certificate provided by the service if it is using a self-signed certificate. For that you have to configure a trust-store, not a key-store.
If the self-signed certificate's host name (its CN/SAN entries) does not match the host name used to reach the service, you have to configure Zuul to disable hostname validation - but please note that this is not recommended in production.
The configuration is given below:
zuul.sslHostnameValidationEnabled=false
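For the trust-store part, a minimal sketch (the alias, file names and jar name are placeholders; this assumes the gateway's HTTP client uses the default SSL context, which honours the standard javax.net.ssl system properties):
# export the service's self-signed certificate from its keystore
keytool -exportcert -alias micro1 -keystore keystore.jks -storepass password -file micro1.crt
# import it into a trust store for the gateway's JVM
keytool -importcert -noprompt -alias micro1 -file micro1.crt -keystore truststore.jks -storepass password
# start the gateway pointing at that trust store
java -Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=password -jar gateway.jar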

This happens because your certificates are self-signed and contain no entry matching 127.0.0.1 (the error shows an empty Subject Alternative Name list). You have two options here:
Create a certificate that matches 127.0.0.1, e.g. with a SAN entry for that IP (see the keytool sketch below)
Disable hostname verification
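For the first option, a keytool sketch (alias, validity and passwords are placeholders) that creates a self-signed certificate whose Subject Alternative Name matches the loopback address the gateway connects to:
keytool -genkeypair -alias micro1 -keyalg RSA -keysize 2048 -validity 365 -dname "CN=127.0.0.1" -ext SAN=ip:127.0.0.1 -keystore keystore.jks -storepass password -keypass matin1234
The gateway still has to trust this certificate (see the trust-store configuration in the answer above).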
UPDATE:
Just noticed your question:
I want to know that the concept of https over zuul
Usually, nobody runs HTTPS between Zuul and the underlying microservices, because:
It affects performance. Imagine all your microservices using HTTPS for internal communication: the encryption, decryption, handshakes, etc. consume far more resources than plain HTTP.
Supporting HTTPS for all the microservices will make you cry. In large systems with hundreds (or thousands) of microservices, rotating certificates whenever some of them expire becomes a headache.
A common setup is to have your API gateway running over HTTPS, while communication from the gateway to the underlying services, as well as communication between microservices, stays over HTTP. The point is that you should focus on securing the network around your microservices rather than the communication between them: the system behind the gateway should sit in a private network that nobody else can access.
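A minimal sketch of that common setup, reusing the property names from the question above (the gateway terminates TLS on its public port, while the route points at a plain-HTTP service inside the private network):
server:
  port: 443
  ssl:
    key-store: classpath:keystore.jks
    key-store-password: password
    key-password: matin1234
zuul:
  routes:
    trp:
      path: /micro1/**
      url: http://127.0.0.1:8080/micro1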
There are cases where you must have HTTPS between microservices as one more layer of security, but they are uncommon and mostly related to banking. Alternatively, you could add HTTPS only to the services handling very sensitive data while the rest stay on HTTP. It's mostly a question of the requirements you have.

Related

Kong - connect to upstream via HTTP/2

I am trying to solve this use case:
A browser client connects to the Kong API Gateway over HTTP/2.
Kong proxies the HTTP/2 connection to the backend microservice and keeps it open.
The desired result is that the client is connected to the microservice via HTTP/2.
It seems Kong accepts the HTTP/2 call from the client, but then calls the microservice over plain HTTP.
Is there any solution for this case? I know Kong should be able to keep such a connection to the upstream in the gRPC case.
Setup in Docker Compose:
#In docker-compose.yml
....
# I call the running container at localhost:9081 with HTTP/2
KONG_PROXY_LISTEN: 0.0.0.0:9081 http2, 0.0.0.0:9082 http2 ssl
Setup in the Kong configuration file (using DB-less mode):
#In Kong.yml
services:
- name: target-service
  host: target-api-test # docker container name
  port: 9000
  routes:
  - name: target-api-route
    paths:
    - /microservice-api
I think this is primarily because nginx doesn't support HTTP/2 towards upstream servers, as stated here:
Q: Will you support HTTP/2 on the upstream side as well, or only support HTTP/2 on the client side?
A: At the moment, we only support HTTP/2 on the client side. You can’t configure HTTP/2 with proxy_pass.
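For the gRPC case mentioned in the question, recent Kong versions can talk HTTP/2 to the upstream when the service is declared as gRPC. A rough sketch in the same DB-less format (names reused from the question; exact fields may vary by Kong version):
services:
- name: target-grpc-service
  protocol: grpc            # Kong speaks gRPC (HTTP/2) to this upstream
  host: target-api-test
  port: 9000
  routes:
  - name: target-grpc-route
    protocols:
    - grpc
    - grpcs
    paths:
    - /microservice-api
For plain (non-gRPC) HTTP/2 to the upstream, though, the nginx limitation quoted above still applies.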

Restrict insecure web socket protocol connections in PCF

We are hosting an application in the preprod Azure PCF environment which exposes WebSocket endpoints for client devices to connect to. Is there a prescribed methodology to secure the said WebSocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
I am having trouble interpreting this information: are we supposed to expose port 4443 on the server, and will PCF by default treat it as a secure port that ensures unsecured connections cannot be established? Or does it require some configuration on HAProxy?
Is there a prescribed methodology to secure the said WebSocket endpoint using TLS/SSL when hosted on PCF and running behind the PCF HAProxy?
A few things:
You don't need to configure certs or anything like that when deploying your app to PCF. The platform takes care of all that. In your case, it'll likely be handled by HAProxy, but it could be some other load balancer or even Gorouter, depending on how your platform operations team installed PCF. The net result is that TLS is terminated before it hits your app, so you don't need to worry about it.
Your app should always force users to HTTPS. How you do this depends on the language/framework you're using, but most have some functionality for this.
This process generally works by checking to see if the incoming request was over HTTP or HTTPS. If it's HTTP, then you issue a redirect to the same URL, but over HTTPS. This is important for all apps, not just ones using WebSockets. Encrypt all the things.
Do keep in mind that you are behind one or more reverse proxies, so if you are doing this manually you'll need to look at x-forwarded-proto or x-forwarded-port, not just the immediate upstream connection, which would be Gorouter rather than your client's browser.
https://docs.pivotal.io/platform/application-service/2-7/concepts/http-routing.html#http-headers
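As an illustration of that check, here is a minimal sketch (not PCF-specific; the class name and header handling are just an example) of a servlet filter that looks at x-forwarded-proto and redirects plain-HTTP requests to HTTPS:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HttpsRedirectFilter implements Filter {

    public void init(FilterConfig filterConfig) {
        // no initialization needed
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The reverse proxy tells us which scheme the client actually used.
        String proto = request.getHeader("x-forwarded-proto");
        if ("http".equalsIgnoreCase(proto)) {
            // Rebuild the same URL over HTTPS and redirect the client.
            String query = request.getQueryString();
            String target = "https://" + request.getServerName()
                    + request.getRequestURI()
                    + (query != null ? "?" + query : "");
            response.sendRedirect(target);
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() {
        // nothing to clean up
    }
}
Frameworks such as Spring Security provide a built-in equivalent for forcing HTTPS, which is usually preferable to a hand-written filter.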
If you are forcing your users to HTTPS (#1 above), then your users will be unable to initiate an insecure WebSocket connection to your app. Browsers like Chrome & Firefox have restrictions to prevent an insecure WebSocket connection from being made when the site was loaded over HTTPS.
You'll get a message like The operation is insecure in Firefox or Cannot connect: SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS. in Chrome.
I am having trouble interpreting this information: are we supposed to expose port 4443 on the server, and will PCF by default treat it as a secure port that ensures unsecured connections cannot be established? Or does it require some configuration on HAProxy?
From the application perspective, you don't do anything different. Your app is supposed to start and listen on the assigned port, i.e. what's in $PORT. This is the same for HTTP, HTTPS, WS & WSS traffic. In short, as an app developer you don't need to think about this when deploying to PCF.
The only exception would be if your platform operations team uses a load balancer that does not natively support WebSockets. In this case, to work around the issue, they need to separate traffic: HTTP and HTTPS go on the traditional ports 80 and 443, and WebSocket traffic is routed on a different port. The PCF docs recommend 4443, which is probably where you're seeing that port. I can't tell you if your platform is set up this way, but if you know that you're using HAProxy, it probably is not.
https://docs.pivotal.io/platform/application-service/2-8/adminguide/supporting-websockets.html
At any rate, if you don't know, just push an app and try to initiate a secure WebSocket connection over port 443 and see if it works. If it fails, try 4443 and see if that works. That, or ask your platform operations team.
For what it's worth, even if you need to use port 4443, there is no difference to your application running on PCF. The only difference would be in your JavaScript code that initiates the WebSocket connection: it would need to use port 4443 instead of the default 443.

How to configure HTTPS services/api on kong

Previously I configured the APIs/services on Kong as HTTP and it was working fine. Now I have switched the backend APIs/services to HTTPS and changed the protocol from http to https for all APIs/services on Kong. But after changing the protocol I am unable to access the APIs.
Can you please tell me what I have to do?
Here is my services configuration on Kong, and the corresponding route:
Please help me.
HTTPS is used to protect data exchanges from anyone looking into them.
You are configuring data exchange between your gateway and upstream servers.
Your microservices are most likely deployed in the same closed virtual private network where the Kong gateway is located.
It is unlikely that anyone could sniff the traffic between the API gateway and your microservices.
Setting up encryption inside your virtual private network will just waste computational resources that you could allocate to extra workers doing useful things.
What you probably need is to configure an SSL certificate on the Kong gateway's public interface.
To do this you can add your SSL certificate in the Konga GUI, in the CERTIFICATES section.
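If you manage Kong declaratively rather than through Konga, the equivalent is roughly the following (a sketch only; the certificate and key are truncated, the SNI is a placeholder, and the exact field layout can vary between Kong versions):
certificates:
- cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  snis:
  - name: api.example.com   # hostname clients use to reach the gateway
Kong then serves this certificate on its HTTPS proxy port for that SNI, while the services behind it can stay on plain HTTP as described above.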

How to handle HTTPS with spring-boot after google load balancer has been configured to handle https?

I have gotten the impression that if the Google load balancer has been configured to handle HTTPS (by adding an SSL certificate), I don't need an SSL certificate on my Compute Engine instances. From my understanding, the load balancer receives the secure request and then just forwards it over HTTP to an instance.
Now the frontend of the load balancer is configured for two ports: 8080 for regular HTTP and 443 for HTTPS. If I only want to handle HTTPS, is setting the Spring Boot application to listen on port 443 the only thing I have to do to make it work? Simply adding the following to application.properties:
server.port = 443
Or is more configuration needed on the Spring side? I'm genuinely interested in learning this and have tried researching and reading up on it, but I can't seem to find any good resources doing something similar. I get the feeling that a lot of the knowledge around these kinds of problems comes from practical experience.
If you want the Google load balancer to terminate HTTPS and forward HTTP to your backend services, simply configure the load balancer with an HTTP backend. If you're using an HTTPS backend, you'll have to listen for and handle HTTPS traffic in your app.
The difference is whether the traffic between the load balancer and your backend (inside GCP) is encrypted or not. Usually, HTTPS termination at the load balancer level is enough.
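As a sketch in application.properties (values are placeholders): with an HTTP backend the app only listens on a plain port, e.g.
server.port=8080
With an HTTPS backend the app itself has to terminate TLS, e.g.
server.port=443
server.ssl.key-store=classpath:keystore.jks
server.ssl.key-store-password=changeit
server.ssl.key-password=changeit
Also note that binding directly to port 443 usually requires elevated privileges on the instance, which is another reason to keep the app on a high port behind the load balancer.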

Azure: security between web roles

In Azure, if you choose to use an internal endpoint (instead of an input endpoint), HTTPS is not an option; HTTP and TCP are the only options. Does that mean an internal endpoint is 100% secure and you don't need encryption?
That leads to another question. If I choose to use an input endpoint between the MVC application and the WCF service, is it really necessary to have HTTPS between them? Is it OK if I have two input endpoints for WCF: one with HTTP on port 80, which is supposed to be used by the MVC application, and another with HTTPS on port 443, which can be used by somebody else (not our own application)?
Do you need to encrypt internal endpoints?
No, a web/worker role cannot connect to an internal endpoint in another deployment. The Azure network prevents this, so man-in-the-middle attacks shouldn't be possible. Therefore, it's not necessary to enable SSL on internal endpoints.
Is it necessary to enable HTTPS on WCF endpoints?
It's certainly possible to configure your application in that way. Why not make the port 80 endpoint on the WCF service an internal one? Or why not host the WCF application in the same role, so you can just use the loopback address?
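As a sketch of that first suggestion in a classic cloud service ServiceDefinition.csdef (role, endpoint and certificate names are placeholders), the WCF role would expose HTTP only internally and keep HTTPS as the sole public endpoint:
<WebRole name="WcfServiceRole" vmsize="Small">
  <Endpoints>
    <!-- reachable only from other roles in the same deployment -->
    <InternalEndpoint name="InternalHttp" protocol="http" port="80" />
    <!-- public endpoint for external callers -->
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
  </Endpoints>
  <Certificates>
    <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>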
You need to think about the security requirements of your application and go from there.
