How to handle compression of static resources over HTTPS in Quarkus

Problem
I want to compress static resources in Quarkus such as JS, CSS, and images. I activated the compression configuration quarkus.http.enable-compression: true. It works perfectly in HTTP mode but does not work over HTTPS.
Expected behavior
Content will be compressed as GZIP over HTTPS
Actual behavior
No GZIP compression over HTTPS
To Reproduce
Steps to reproduce the behavior:
Clone the quarkus-demo repository I made earlier
Create a certificate to enable SSL on localhost with mkcert
Compile with mvn clean package -Dquarkus.profile=prod to run over HTTPS. If you want to test over HTTP, run mvn quarkus:dev instead
Run the Quarkus app with java -jar target\quarkus-app\quarkus-run.jar
Finally, open https://localhost or http://localhost:8080 in your browser and inspect the loaded resources in the Network tab of the developer tools
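To check without opening a browser, a curl request along these lines should also show whether the body is gzip-encoded (a sketch; the asset path is taken from the header traces below):

curl -sk -H "Accept-Encoding: gzip" -o /dev/null -D - https://localhost/js/chunk-vendors.e96189d0.js | grep -i content-encoding
curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - http://localhost:8080/js/chunk-vendors.e96189d0.js | grep -i content-encoding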
application.yml
quarkus:
  application:
    version: 1.0.0-SNAPSHOT
  http:
    port: 8080
    enable-compression: true
application-prod.yml
quarkus:
  http:
    port: 8080
    ssl-port: 443
    ssl:
      certificate:
        file: D:\system\server\localhost.pem
        key-file: D:\system\server\localhost-key.pem
    insecure-requests: redirect
    enable-compression: true
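With insecure-requests: redirect, plain-HTTP requests should be redirected to the SSL port; a quick sanity check with curl (the exact Location value shown is my assumption):

curl -sI http://localhost:8080/ | grep -i '^location'
# expect something like: location: https://localhost/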
HTTP
Request URL: http://localhost:8080/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200 OK
Remote Address: 127.0.0.1:8080
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
content-encoding: gzip
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:41:46 GMT
transfer-encoding: chunked
HTTPS
Request URL: https://localhost/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200
Remote Address: [::1]:443
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
cache-control: public, immutable, max-age=86400
content-length: 880682
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:45:07 GMT
last-modified: Thu, 10 Mar 2022 07:45:07 GMT
vary: accept-encoding
FYI: I've tried using a Vert.x filter, but it doesn't help :(
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import io.quarkus.vertx.http.runtime.filters.Filters;

@ApplicationScoped
public class FilterRegistrator {

    void setUpFilter(@Observes Filters filters) {
        filters.register(rc -> {
            rc.next();
            if (rc.normalizedPath().matches("^.*\\.(js|css|svg|png)$")) {
                // Note: this only sets the header; it does not actually gzip the body
                rc.response().headers().add("content-encoding", "gzip");
            }
        }, 0);
    }
}

This issue was resolved in Quarkus 2.7.5.Final.
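In practice that means bumping the Quarkus platform version; a sketch of the relevant pom.xml property, assuming the standard quarkus.platform.version property from the Quarkus Maven archetype:

<properties>
    <quarkus.platform.version>2.7.5.Final</quarkus.platform.version>
</properties>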

Related

How do I adjust Go's http Client (or Transport) to mimic curl's --http2-prior-knowledge flag?

I'm interacting with a server (out of my control) that does not perform the protocol upgrade if a request contains content (POST, PUT, PATCH with a payload). It's unclear exactly what the issue with the server is, but I noticed that when I query with --http2-prior-knowledge, the protocol is upgraded:
❯ curl -i -PUT --http2-prior-knowledge http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/2 200
date: Tue, 08 Nov 2022 13:26:50 GMT
content-type: application/json;charset=utf-8
vary: Accept-Encoding
content-length: 78
The same request without --http2-prior-knowledge is stuck at HTTP/1.1. This seems closer to the default behaviour of Go's HTTP client:
❯ curl -i -PUT --http2 http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/1.1 200 OK
Date: Tue, 08 Nov 2022 01:37:17 GMT
Content-Type: application/json;charset=utf-8
Vary: Accept-Encoding
Content-Length: 78
When I call this same API using Go's default client, the protocol is not upgraded. I've tried setting ForceAttemptHTTP2: true on the transport, but each http.Response contains a .Proto of HTTP/1.1.
I think what I need to understand is how I can mimic curl's prior-knowledge flag in Go. Is this possible?
I solved this issue by specifying a custom http2.Transport which skips the TLS dial. The ideal solution, in retrospect, is to use an SSL certificate (self-signed is sufficient), which would better guarantee the use of HTTP/2. Leaving some links for posterity.
import (
    "crypto/tls"
    "net"
    "net/http"

    "golang.org/x/net/http2"
)

c := &http.Client{
    // Skip TLS Dial: AllowHTTP plus a plaintext dial gives h2c
    Transport: &http2.Transport{
        AllowHTTP: true,
        DialTLS: func(netw, addr string, cfg *tls.Config) (net.Conn, error) {
            return net.Dial(netw, addr)
        },
    },
}
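The trick is that AllowHTTP lets the transport accept http:// URLs, while the custom DialTLS opens a plain TCP connection instead of negotiating TLS, so the client starts speaking HTTP/2 immediately, which is the same thing curl's --http2-prior-knowledge flag does.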
And links:
Why do web browsers not support h2c (HTTP/2 without TLS)?
https://github.com/golang/go/issues/14141

Nginx Brotli header not added

I've been pulling my hair out for days trying to serve Brotli-compressed files through my local nginx install.
My configuration:
macOS 12.6, Homebrew, Laravel Valet for managing sites and SSL
default nginx install replaced with the nginx-full Homebrew formula, which allows recompiling nginx with modules -> installed with the Brotli module
I have tried different nginx Brotli configurations, like this one.
I don't think I should have to do this, but I still tried adding specific location blocks for the files I want served with Brotli:
location ~ [^/]\.data\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/octet-stream;
}
location ~ [^/]\.js\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/javascript;
}
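For reference, the ngx_brotli module is normally switched on with its own directives rather than hand-written Content-Encoding headers; a minimal sketch, assuming both the filter and the static module were compiled in:

# Compress responses on the fly (ngx_brotli filter module)
brotli on;
brotli_comp_level 6;
brotli_types text/css application/javascript application/json image/svg+xml;

# Serve pre-compressed .br files when the client sends Accept-Encoding: br
brotli_static on;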
In the end, the HTTP response does not contain content-encoding: br.
nginx shows the module is installed:
$ nginx -V 2>&1 | tr ' ' '\n' | egrep -i 'brotli'
--add-module=/usr/local/share/brotli-nginx-module
When testing with curl, it works for gzip but not for Brotli:
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:20 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
content-encoding: gzip
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:21 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
HERE IT SHOULD BE "content-encoding: br" BUT IT'S NOT
Any idea is welcome; I don't understand what is going on... Cheers.

Trying to deregister a service from Consul not working?

I am using a Consul client to deregister a service from my JUnit tests. I am using vertx-consul-client. The Consul version I am using is 1.11.1. The service is not registered with Consul; I am just testing what happens if we try to deregister a service that is not registered.
From the logs I get this error:
Status message: 'Not Found'. Body: 'Unknown service "BadService"'
Strangely, I don't get this error when testing with Consul version 1.10.6.
I'd appreciate any help.
Thanks.
"Strangely, I don't get this error when testing with Consul version 1.10.6."
Consul recently changed the HTTP response code that is sent when an attempt is made to deregister a non-existent service.
Prior to Consul 1.11.0, and when ACLs were disabled, Consul would respond with an HTTP 200 response code and no response body when deregistering a non-existent service.
$ curl --include \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 200 OK
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: allow
Date: Wed, 05 Jan 2022 03:07:35 GMT
Content-Length: 0
This behavior was changed in Consul 1.11.0 by PR hashicorp/consul#10632, wherein Consul now returns an HTTP 404 response code if a service does not exist, regardless of whether ACLs are enabled. (See diff of consul/agent/acl.go.)
$ curl --include \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 404 Not Found
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: allow
Date: Wed, 05 Jan 2022 03:24:21 GMT
Content-Length: 22
Content-Type: text/plain; charset=utf-8
Unknown service "test"
You're obviously not seeing an error in vertx-consul-client when communicating with Consul 1.10.6 because the HTTP 200 code indicates that the request was successful, whereas the HTTP 404 response correctly signals that the resource does not exist, and an error is correctly raised (see vert-consul-client/src/main/java/io/vertx/ext/consul/impl/ConsulClientImpl.java#L1320-L1333).
Interestingly, in Consul 1.10.x, when ACLs are enabled on the cluster, Consul replies with an HTTP 500 error code and a corresponding error message instead of the 200 response code. This is because when ACLs are enabled, the vetServiceUpdateWithAuthorizer function does not return prematurely (if authz == nil { return nil }) and proceeds to check whether the service exists, raising an error because it does not (see consul/agent/acl.go#L96-L104).
$ curl --include \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: deny
Date: Wed, 05 Jan 2022 03:14:52 GMT
Content-Length: 22
Content-Type: text/plain; charset=utf-8
Unknown service "test"
If you had tested 1.10.6 with ACLs enabled, you would also have received an error similar to the one you are seeing with 1.11.1.
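If the tests should tolerate deregistering a service that may not exist, one option is to swallow the 404 on the client side; a sketch assuming Vert.x 4's future-returning vertx-consul-client API (the service id and the error-message check are taken from the log above, not a guaranteed contract):

import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.ext.consul.ConsulClient;
import io.vertx.ext.consul.ConsulClientOptions;

public class DeregisterIfPresent {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        ConsulClient client = ConsulClient.create(vertx,
                new ConsulClientOptions().setHost("localhost").setPort(8500));

        // Since Consul 1.11.0, deregistering an unknown service yields a 404,
        // so treat "Unknown service" as success instead of failing the test.
        client.deregisterService("BadService")
                .recover(err -> err.getMessage() != null
                        && err.getMessage().contains("Unknown service")
                        ? Future.<Void>succeededFuture()
                        : Future.<Void>failedFuture(err))
                .onComplete(ar -> vertx.close());
    }
}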

Why do I see nginx headers when ddev is configured to use apache?

I updated ddev to version 1.3.0 and ran ddev config. After that I changed the configuration from nginx-fpm to apache-fpm. After starting ddev and checking the HTTP headers, I see nginx/1.15.3 in use. Is there something else to do to get Apache working?
My config.yaml:
APIVersion: v1.3.0
name: example
type: typo3
docroot: public
php_version: "7.2"
webserver_type: apache-fpm
router_http_port: "8080"
router_https_port: "8443"
xdebug_enabled: true
additional_hostnames: []
additional_fqdns: []
provider: default
hooks:
  post-start:
    - exec: composer install -d /var/www/html
    - exec: ../vendor/bin/typo3cms cache:flush
    - exec: ../vendor/bin/typo3cms database:updateschema
    - exec: yarn --cwd typo3conf/ext/theme/Resources/Private install
That is such a good question! I know because I already got stumped by it myself when writing tests.
The answer is: Apache runs in the web container, but when you use the http://*.ddev.local URL, it goes through ddev-router, which is an nginx reverse proxy, and that's why you see the nginx headers. But rest assured you are using Apache. You can confirm it in these ways:
ddev ssh and ps -ef to see what's running
Hit the 127.0.0.1 URL reported by ddev start and ddev describe. That URL goes directly to the web container, for example http://127.0.0.1:33221 - you'll see the Apache headers on that one.
Your question is so good - Could you please edit the title to something like "Why do I see nginx headers when ddev is configured to use apache?" - I think other people will find it that way.
$ curl -I http://127.0.0.1:33224
HTTP/1.1 200 OK
Date: Fri, 12 Oct 2018 02:18:26 GMT
Server: Apache/2.4.25 (Debian)
Cache-Control: must-revalidate, no-cache, private
X-Drupal-Dynamic-Cache: HIT
X-UA-Compatible: IE=edge
Content-language: en
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Vary:
X-Generator: Drupal 8 (https://www.drupal.org)
X-Drupal-Cache: MISS
Content-Type: text/html; charset=UTF-8

Varnish won't cache - Age 0

I seem to be having some problems with my Varnish setup. I have a clean install of Varnish and Nginx running on Ubuntu; everything seems to be running, but I don't seem to be actually caching anything.
This is what I'm seeing:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/5.5.9-1ubuntu4.14
Cache-Control: no-cache
Date: Tue, 02 Feb 2016 10:15:17 GMT
Content-Encoding: gzip
X-Varnish: 196655
Age: 0
Via: 1.1 varnish-v4
Accept-Ranges: bytes
Connection: keep-alive
I'm almost certain the problem has to do with the "Age" response header being 0. I have read that the Cache-Control header can be the culprit and have spent some time configuring both nginx and my VCL file with solutions I have read online, none of which have worked.
I'm open to any ideas, even ones I have tried before (hence why I'm not listing the steps I have already taken).
Thanks in advance for any thoughts you might have.
Remove "no-cache" and set "max-age=120" (in seconds) in the Cache-Control header instead.
Also note that if the request contains any cookies or if the response sets any cookies than by default varnish is not gonna cache.
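If the application keeps sending Cache-Control: no-cache, you can also override the TTL and strip cookies for static assets in VCL; a sketch of fragments for a Varnish 4 default.vcl, with the URL pattern as an assumption:

sub vcl_recv {
    # Cookies prevent caching by default, so drop them for static assets
    if (req.url ~ "\.(css|js|png|jpg|svg)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    if (bereq.url ~ "\.(css|js|png|jpg|svg)$") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 120s;  # cache for two minutes despite no-cache
    }
}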
