I am trying to solve this use case:
A browser client connects to the Kong API Gateway over HTTP/2.
Kong proxies the HTTP/2 connection to the backend microservice and keeps it open.
The desired result is that the client is connected to the microservice via HTTP/2 end to end.
It seems Kong accepts the HTTP/2 call from the client, but then calls the microservice over plain HTTP.
Is there any solution for this case? I know Kong should be able to keep an HTTP/2 connection to the upstream in the gRPC case.
Setup in Docker Compose:
# In docker-compose.yml
....
# I call the running container at localhost:9081 with HTTP/2
KONG_PROXY_LISTEN: 0.0.0.0:9081 http2, 0.0.0.0:9082 http2 ssl
Setup in the Kong configuration file (using DB-less mode):
# In Kong.yml
services:
  - name: target-service
    host: target-api-test # docker container name
    port: 9000
    routes:
      - name: target-api-route
        paths:
          - /microservice-api
I think this is primarily because nginx doesn't support HTTP/2 for upstream servers, as specified here:
Q: Will you support HTTP/2 on the upstream side as well, or only support HTTP/2 on the client side?
A: At the moment, we only support HTTP/2 on the client side. You can’t configure HTTP/2 with proxy_pass.
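If the upstream speaks gRPC, however, Kong can keep HTTP/2 towards it, because nginx proxies gRPC with grpc_pass, which does use HTTP/2. A minimal DB-less sketch under that assumption (the service and route names are hypothetical, and the upstream has to be an actual gRPC server):

# In Kong.yml
services:
  - name: target-grpc-service
    protocol: grpc
    host: target-api-test
    port: 9000
    routes:
      - name: target-grpc-route
        protocols:
          - grpc
        paths:
          - /microservice-api

For a plain REST upstream, though, the proxied hop stays on HTTP/1.1, exactly as the FAQ answer above states.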
I am aware that Azure Application Gateway supports WebSockets. However, I can't figure out from the samples and documentation how WebSocket access is reflected in the access logs.
I have been going over the Azure Application Gateway documentation for access logs here:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-diagnostics#for-application-gateway-and-waf-v2-sku
There is no protocol field, which would usually carry ws or wss to indicate WebSocket access.
Thanks for your help in advance.
There is no user-configurable setting to selectively enable or disable WebSocket support in Application gateway. WebSocket protocols are designed to work over traditional HTTP ports of 80 and 443. You can continue using a standard HTTP listener on port 80 or 443 to receive WebSocket traffic. WebSocket traffic is then directed to the WebSocket enabled backend server using the appropriate backend pool as specified in application gateway rules.
Here is clear documentation explaining the WebSocket workflow in Application Gateway.
Can I have Google send HTTP/2 requests to my server in Cloud Run?
I am not sure how Google would know my server supports it, since Google terminates SSL on the load balancer and sends HTTP to the stateless servers in Cloud Run.
If possible, I am thinking of grabbing a few pieces from webpieces and creating a pure HTTP/2 server with no HTTP/1.1 for microservices that I 'know' will only be doing HTTP/2.
Also, if I have a pure HTTP/2 server, is there a way that Google translates HTTP/1 requests to HTTP/2 when needed, so I could host websites as well?
The only info I could find was a great FAQ, which seems to be missing whether HTTP/2 is supported on the server side (rather than the client side)...
https://github.com/ahmetb/cloud-run-faq
thanks,
Dean
The Cloud Run container contract requires your application to serve on an unencrypted HTTP endpoint. However, this can be either HTTP/1 or HTTP/2.
Today, gRPC apps work on Cloud Run, and gRPC actually uses HTTP/2 as its transport. This works because gRPC servers (unless configured with TLS certificates) use the h2c protocol (HTTP/2 over unencrypted cleartext).
So, if your application can actually serve traffic unencrypted using the h2c protocol, the traffic between the Cloud Run load balancer <=> your application can be HTTP/2, without ever being downgraded to HTTP/1.
For example, in Go, you can use the https://godoc.org/golang.org/x/net/http2/h2c package to automatically detect and handle h2c connections.
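A minimal sketch of such a server, assuming a hypothetical handler on port 8080; the h2c.NewHandler wrapping is the part that makes it serve HTTP/2 without TLS:

package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	// A plain handler; wrapping it with h2c.NewHandler lets it speak HTTP/2 in cleartext.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports "HTTP/2.0" when the client connected via h2c.
		fmt.Fprintf(w, "Hello over %s\n", r.Proto)
	})

	server := &http.Server{
		Addr:    ":8080",
		Handler: h2c.NewHandler(handler, &http2.Server{}),
	}
	log.Fatal(server.ListenAndServe())
}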
To test whether your application implements h2c correctly, you can run this locally:
curl -v --http2-prior-knowledge http://localhost:8080
and check for a < HTTP/2 200 response.
(I'll make sure to add this to the FAQ repo.)
I have a router application with Zuul and many services running in the backend; requests from the client are routed to those services by Zuul.
Everything works well over HTTP, but when I configure the router and all the services to use HTTPS, the following error is raised:
javax.net.ssl.SSLPeerUnverifiedException: Certificate for <127.0.0.1> doesn't match any of the subject alternative names: []
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:467) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:397) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355) ~[httpclient-4.5.3.jar:4.5.3]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.3.jar:4.5.3]
The Zuul yml file:
zuul:
  ignoredPatterns: /reza,/we
  routes:
    trp:
      path: /micro1/**
      sensitiveHeaders:
      url: https://127.0.0.1:8080/micro1
server:
  compression:
    enabled: true
  port: 80
  ssl:
    key-store: classpath:keystore.jks
    key-store-password: password
    key-password: matin1234
And the yml file of one of those services:
server:
  compression:
    enabled: true
  port: 8080
  ssl:
    key-store: classpath:keystore.jks
    key-store-password: password
    key-password: matin1234
First, I want to know whether the concept of HTTPS behind Zuul works properly, and secondly, how I can fix my problem.
Note: I don't have Eureka server registration.
To consume an HTTPS service, you have to trust the certificate provided by the service if it uses a self-signed certificate. For that you have to configure a trust store, not a key store.
If the self-signed certificate's DN and the domain name of the service do not match, you also have to configure Zuul to disable hostname validation, but please note that this is not recommended in production.
The configuration is given below:
zuul.sslHostnameValidationEnabled=false
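A minimal sketch of both steps, with placeholder file names and passwords: the service's self-signed certificate is imported into a trust store, one common way to make the gateway use it is the standard JVM trust-store system properties, and the hostname check is relaxed via the property above (shown here in application.yml form).

# Import the service's certificate into a trust store (placeholder names and passwords)
keytool -importcert -alias micro1 -file micro1.crt -keystore truststore.jks -storepass changeit -noprompt

# Start the Zuul gateway with that trust store
java -Djavax.net.ssl.trustStore=truststore.jks \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar gateway.jar

# application.yml equivalent of the property above
zuul:
  sslHostnameValidationEnabled: false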
This happens because your certificate is self-signed and its subject name (DN) / subject alternative names do not include 127.0.0.1. You have two options here:
Create a certificate whose subject alternative names (or CN) include 127.0.0.1 (a keytool sketch follows this list)
Disable hostname verification
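For the first option, a keytool sketch that puts 127.0.0.1 into the certificate's subject alternative names (the alias and validity are placeholders; the passwords mirror the keystore settings from the question):

keytool -genkeypair -alias micro1 -keyalg RSA -keysize 2048 \
  -dname "CN=127.0.0.1" -ext "SAN=IP:127.0.0.1" \
  -validity 365 -keystore keystore.jks -storepass password -keypass matin1234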
UPDATE:
Just noticed this part of your question:
I want to know whether the concept of HTTPS behind Zuul works properly
Usually, nobody runs HTTPS between Zuul and the underlying microservices, because:
It affects performance. Imagine all your microservices using HTTPS for internal communication: the encryption, decryption, handshakes, and so on consume far more resources than plain HTTP.
Supporting HTTPS for all the microservices will make you cry. In large systems with hundreds (or thousands) of microservices, rotating certificates because some of them have expired is a headache.
A common setup is to have your API gateway running over HTTPS, while communication from the gateway to the underlying services, as well as communication between the microservices themselves, runs over HTTP. The point is that you should focus on securing the network your microservices run in rather than the communication between them: the system behind the gateway should sit on a private network that nobody outside can access (a sketch of this layout follows below).
There might be cases where you must have HTTPS between microservices for one more layer of security, but they are uncommon and mostly related to banking. On the other hand, you could add HTTPS only to services with very sensitive data while the rest stay on HTTP. It's more a question of the requirements you have.
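A minimal Docker Compose sketch of that layout, with hypothetical image names and ports: only the gateway publishes a port to the outside world, while the services are reachable only on the internal Compose network over plain HTTP.

# docker-compose.yml (hypothetical images and ports)
services:
  gateway:
    image: my-zuul-gateway      # terminates HTTPS
    ports:
      - "443:8443"              # the only port published to the outside world
    networks:
      - backend
  micro1:
    image: my-micro1            # plain HTTP on 8080, no ports published
    networks:
      - backend
networks:
  backend: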
I would like to replace my node-http-proxy module with nginx's proxy_pass module. Is this possible with the newly released nginx version? I have read that it supports HTTP/1.1 out of the box. I saw some threads struggling with the problem that WebSockets are not supported by nginx.
In my case I'm running several Node projects in the background and want to route my WebSocket connections from port 80 to ports 8000-8100, depending on the domain. Is there a native way to do WebSocket proxying/reverse proxying without using the tcp_module addon?
I tried to set up an upstream in nginx.conf and proxy_pass to it, but if I try to connect to port 80 over WebSocket, I get a 502 Bad Gateway error.
Is anyone facing the same problem?
Does anyone have a working example for nginx + socket.io, proxying over port 80?
No, this is not yet possible; nginx 1.2 incorporates work from the 1.1.x development branch, which indeed includes HTTP/1.1 reverse proxying. WebSocket connections are established using the HTTP/1.1 "Upgrade" header, but the fact that nginx now supports this kind of header does not mean it supports WebSockets (WebSockets are a different protocol, not HTTP).
(I tried this myself using the 1.1.x branch, which I found stable enough for my purpose, and it doesn't work without the tcp_module.)
WebSockets will probably be supported in 1.3.x (http://trac.nginx.org/nginx/roadmap).
Your alternatives are:
keep using node-http-proxy
use nginx without the tcp module; socket.io won't use WebSockets but will fall back to something else (e.g. long polling)
use nginx with the tcp module: in this case I think you need an additional port for this module (I've never tried this myself)
put something else in front as a reverse proxy: I use HAProxy (which supports WebSockets) in front of nginx and Node; nginx then simply acts as a static file server, not a proxy (a minimal sketch follows this list). Varnish is another option if you want additional caching.
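A minimal sketch of that HAProxy setup, with assumed ports, backend names, and a single Node server: requests carrying an Upgrade: websocket header go to the Node backend, everything else goes to nginx.

defaults
    mode http
    timeout connect 5s
    timeout client  1h    # generous timeouts so idle WebSocket connections are not cut
    timeout server  1h

frontend public
    bind *:80
    acl is_websocket hdr(Upgrade) -i websocket
    use_backend node_ws if is_websocket
    default_backend nginx_static

backend node_ws
    server node1 127.0.0.1:8000

backend nginx_static
    server nginx1 127.0.0.1:8080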
In relation to nginx with the TCP module, there are a few problems I have encountered.
The trickiest one is trying to run your WebSockets with nginx on port 80 on an EC2 instance.
I described the whole configuration here.
I have an EventMachine WebSocket application (using the em-websocket gem) and it runs fine. The problem is that I need to deploy it on port 80 through nginx (I can't compile nginx with the TCP proxy module). Is it possible to use a simple nginx proxy_pass pointing to a Thin server and have the Thin server pass the requests on to my WebSocket server?
From what I understand, you can't proxy WebSocket traffic with proxy_pass.
Since WebSockets are done over HTTP/1.1 connections (where the handshake and upgrade are completed), your backend needs to support HTTP/1.1, and from what I have researched, they break the HTTP/1.0 spec...
I've seen some people try to do the same thing with socket.io and HAProxy (see the links below). I would guess that you could swap socket.io out for em-websocket and expect similar results.
1: http://www.letseehere.com/reverse-proxy-web-sockets
2: HAProxy + WebSocket Disconnection