How do I adjust Go's http.Client (or Transport) to mimic curl's --http2-prior-knowledge flag?

I'm interacting with a server (that is out of my control) which does not perform the protocol upgrade when a request contains a body (POST, PUT, or PATCH with a payload). It's unclear exactly what the issue with the server is, but I noticed that when I query with --http2-prior-knowledge, the protocol is upgraded:
❯ curl -i -X PUT --http2-prior-knowledge http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/2 200
date: Tue, 08 Nov 2022 13:26:50 GMT
content-type: application/json;charset=utf-8
vary: Accept-Encoding
content-length: 78
The same request without --http2-prior-knowledge is stuck at HTTP/1.1. This seems closer to the default behaviour of Go's HTTP client:
❯ curl -i -X PUT --http2 http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/1.1 200 OK
Date: Tue, 08 Nov 2022 01:37:17 GMT
Content-Type: application/json;charset=utf-8
Vary: Accept-Encoding
Content-Length: 78
When I call this same API using Go's default client, the protocol is not upgraded. I've tried setting ForceAttemptHTTP2: true on the transport, but each http.Response still reports a .Proto of HTTP/1.1.
I think what I need to understand is how I can mimic curl's prior-knowledge flag in Go. Is this possible?

I solved this issue by specifying a custom http2.Transport that skips the TLS dial. The ideal solution, in retrospect, is to use an SSL certificate (self-signed is sufficient), which would better guarantee the use of HTTP/2. Leaving some links for posterity.
import (
    "crypto/tls"
    "net"
    "net/http"

    "golang.org/x/net/http2" // golang.org/x/net is required for h2c
)

// Client that speaks HTTP/2 over plaintext TCP (h2c), skipping the TLS dial.
c := &http.Client{
    Transport: &http2.Transport{
        AllowHTTP: true, // permit URLs with the "http" scheme
        // Pretend to dial TLS, but return a plain TCP connection.
        DialTLS: func(netw, addr string, cfg *tls.Config) (net.Conn, error) {
            return net.Dial(netw, addr)
        },
    },
}
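For completeness, a minimal sketch of driving this client with the same PUT as the curl example above (endpoint and payload are the ones from the question; assumes additional imports of fmt, log, and strings):

req, err := http.NewRequest(http.MethodPut,
    "http://localhost:8081/document/v1/foo",
    strings.NewReader(`{"fields": {"docid": "123"}}`))
if err != nil {
    log.Fatal(err)
}
resp, err := c.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()
fmt.Println(resp.Proto) // expect "HTTP/2.0" when h2c is negotiated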
And links:
Why do web browsers not support h2c (HTTP/2 without TLS)?
https://github.com/golang/go/issues/14141

Related

Nginx Brotli header not added

I've been pulling my hair out for days trying to serve brotli-compressed files through my local nginx install.
My configuration:
macOS 12.6, Homebrew, Laravel Valet for managing sites and SSL
default nginx install replaced with the nginx-full Homebrew formula, which allows recompiling nginx with modules; installed with the brotli module
I have tried different nginx brotli configurations, like this one.
I think I shouldn't have to do this, but I still tried to add specific proxy configurations for the files I want served with brotli:
location ~ [^/]\.data\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/octet-stream;
}

location ~ [^/]\.js\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/javascript;
}
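For reference, a minimal sketch of the ngx_brotli directives (from the google/ngx_brotli module) that typically have to be present in the http or server block for both on-the-fly compression and pre-compressed .br serving; whether they resolve this particular Valet setup is unverified:

brotli on;             # on-the-fly brotli compression
brotli_comp_level 6;
brotli_types text/css application/javascript application/json image/svg+xml;
brotli_static on;      # serve pre-existing .br files when the client accepts br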
In the end, the HTTP response does not contain content-encoding: br.
nginx shows the module is installed:
$ nginx -V 2>&1 | tr ' ' '\n' | egrep -i 'brotli'
--add-module=/usr/local/share/brotli-nginx-module
When testing with curl, it works for gzip but not for brotli:
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:20 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
content-encoding: gzip
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:21 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
HERE IT SHOULD BE "content-encoding: br" BUT IT'S NOT
Any idea is welcome; I don't understand what is going on. Cheers.

How to handle compression of static resources over HTTPS in Quarkus

Problem
I want to compress static resources in Quarkus, like JS, CSS, and images. I activated the compression configuration quarkus.http.enable-compression: true. It works perfectly over HTTP but does not work over HTTPS.
Expected behavior
Content will be compressed as GZIP over HTTPS
Actual behavior
No GZIP compression over HTTPS
To Reproduce
Steps to reproduce the behavior:
Git pull from the quarkus-demo I made earlier
Create a certificate to enable SSL on localhost with mkcert
Compile with mvn clean package -Dquarkus.profile=prod to run over HTTPS. If you want to test over HTTP, run mvn quarkus:dev instead
Run the Quarkus app with java -jar target\quarkus-app\quarkus-run.jar
Finally, open https://localhost or http://localhost:8080 in your browser and inspect the loaded resources in the Network tab
application.yml
quarkus:
  application:
    version: 1.0.0-SNAPSHOT
  http:
    port: 8080
    enable-compression: true
application-prod.yml
quarkus:
  http:
    port: 8080
    ssl-port: 443
    ssl:
      certificate:
        file: D:\system\server\localhost.pem
        key-file: D:\system\server\localhost-key.pem
    insecure-requests: redirect
    enable-compression: true
HTTP
Request URL: http://localhost:8080/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200 OK
Remote Address: 127.0.0.1:8080
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
content-encoding: gzip
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:41:46 GMT
transfer-encoding: chunked
HTTPS
Request URL: https://localhost/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200
Remote Address: [::1]:443
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
cache-control: public, immutable, max-age=86400
content-length: 880682
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:45:07 GMT
last-modified: Thu, 10 Mar 2022 07:45:07 GMT
vary: accept-encoding
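As a quick check outside the browser, compression can be requested explicitly with curl (URL taken from the report above; --compressed makes curl send an Accept-Encoding header, and -k skips verification in case the mkcert CA is not in curl's trust store):

$ curl -sk --compressed -o /dev/null -D - https://localhost/js/chunk-vendors.e96189d0.js | grep -i content-encoding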
FYI: I've tried using a Vert.x filter but it doesn't help :(
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import io.quarkus.vertx.http.runtime.filters.Filters;

@ApplicationScoped
public class FilterRegistrator {
    void setUpFilter(@Observes Filters filters) {
        filters.register((rc) -> {
            rc.next();
            // This only adds a header; it does not actually compress the body.
            if (rc.normalizedPath().matches("^.*\\.(js|css|svg|png)$")) {
                rc.response().headers().add("content-encoding", "gzip");
            }
        }, 0);
    }
}
This issue was solved in Quarkus 2.7.5.Final.

Trying to deregister a service from Consul not working?

I am using a Consul client to deregister a service from my JUnit tests. I am using vertx-consul-client. The Consul version I am using is 1.11.1. The service is not registered with Consul; I am just testing what will happen if we try to deregister a service that is not registered.
From the logs I get this error:
Status message: 'Not Found'. Body: 'Unknown service "BadService"'
Strangely, I don't get this error when testing with Consul version 1.10.6.
Appreciate it if you can help.
Thanks
Strangely, I don't get this error when testing with Consul version 1.10.6.
Consul recently changed the HTTP response code that is sent when an attempt is made to deregister a non-existent service.
Prior to Consul 1.11.0, when ACLs were disabled, Consul would respond with an HTTP 200 response code and no response body when deregistering a non-existent service.
$ curl --include \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 200 OK
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: allow
Date: Wed, 05 Jan 2022 03:07:35 GMT
Content-Length: 0
This behavior was changed in Consul 1.11.0 by PR hashicorp/consul#10632, wherein Consul now returns an HTTP 404 response code if a service does not exist, regardless of whether ACLs are enabled. (See the diff of consul/agent/acl.go.)
$ curl --include \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 404 Not Found
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: allow
Date: Wed, 05 Jan 2022 03:24:21 GMT
Content-Length: 22
Content-Type: text/plain; charset=utf-8
Unknown service "test"
You're not seeing an error in vertx-consul-client when communicating with Consul 1.10.6 because the HTTP 200 code indicates that the request was successful, whereas the HTTP 404 response correctly signals that the resource does not exist, and an error is correctly raised (see vert-consul-client/src/main/java/io/vertx/ext/consul/impl/ConsulClientImpl.java#L1320-L1333).
Interestingly, in Consul 1.10.x, when ACLs are enabled on the cluster, Consul replies with an HTTP 500 error code and a corresponding error message instead of the 200 response code. This is because when ACLs are enabled, the vetServiceUpdateWithAuthorizer function does not return early (if authz == nil { return nil }) and proceeds to check whether the service exists, then raises an error because it does not (see consul/agent/acl.go#L96-L104).
$ curl --include \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
--request PUT http://localhost:8500/v1/agent/service/deregister/test
HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
X-Consul-Default-Acl-Policy: deny
Date: Wed, 05 Jan 2022 03:14:52 GMT
Content-Length: 22
Content-Type: text/plain; charset=utf-8
Unknown service "test"
If you had tested 1.10.6 with ACLs enabled, you would've also received a similar error as you are seeing with 1.11.1.
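If the tests just need to tolerate deregistering a service that may not exist, here is a hypothetical sketch with the Vert.x 4 Future-style API (the "Unknown service" string match is an assumption based on the error body shown above):

import io.vertx.core.Vertx;
import io.vertx.ext.consul.ConsulClient;

ConsulClient consul = ConsulClient.create(Vertx.vertx()); // defaults to localhost:8500
consul.deregisterService("BadService")
    .onSuccess(v -> System.out.println("deregistered"))
    .onFailure(err -> {
        // Consul >= 1.11 answers 404 ("Unknown service"); the desired end
        // state (service not registered) already holds, so treat it as OK.
        if (err.getMessage() != null && err.getMessage().contains("Unknown service")) {
            System.out.println("service was not registered to begin with");
        } else {
            err.printStackTrace();
        }
    });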

Varnish won't cache - Age 0

I seem to be having some problems with my Varnish setup. I have a clean install of Varnish and Nginx running on Ubuntu; everything seems to be running, but I don't seem to be actually caching anything.
This is what I'm seeing:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/5.5.9-1ubuntu4.14
Cache-Control: no-cache
Date: Tue, 02 Feb 2016 10:15:17 GMT
Content-Encoding: gzip
X-Varnish: 196655
Age: 0
Via: 1.1 varnish-v4
Accept-Ranges: bytes
Connection: keep-alive
I'm almost certain the problem has to do with the "Age" response header being 0. I have read that the Cache-Control header can be the culprit, and I have spent some time configuring both nginx and my VCL file with solutions I have read online, none of which have worked.
I'm open to any ideas, even ones I have tried before (hence why I'm not listing the steps I have already taken).
Thanks in advance for any thoughts you might have.
Remove "no-cache" and set "max-age=120" (in seconds) in the Cache-Control header instead.
Also note that if the request contains any cookies or if the response sets any cookies than by default varnish is not gonna cache.
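A minimal VCL 4.0 sketch of that cookie caveat, assuming the site works without cookies (stripping them unconditionally would break sessions otherwise):

sub vcl_recv {
    unset req.http.Cookie;        # drop client cookies so the request is cacheable
}

sub vcl_backend_response {
    unset beresp.http.Set-Cookie; # drop backend cookies so the response is cacheable
}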

Marathon REST API returns no data

I have 3 Marathon servers running in HA. When I hit the REST API on the leader, it returns good data. But when I try it against one of the non-leader nodes, I do not get any data back: no strings at all. The headers say 200, but no data. Has anybody experienced this before?
Here is what I see on the leader:
# curl -i http://10.0.0.1:8080/v2/apps
HTTP/1.1 200 OK
X-Marathon-Leader: http://x1-master-0:8080
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Content-Type: application/json; qs=2
Connection: close
Server: Jetty(8.y.z-SNAPSHOT)
{"apps":[]}
Here is the data from the non-leader:
# curl -i http://10.0.0.2:8080/v2/apps
HTTP/1.1 200 OK
Connection: close
Server: Jetty(8.y.z-SNAPSHOT)
The problem was that the Marathon servers could not resolve each other by name. Adding the hostnames of the other Marathon servers to each server's /etc/hosts file fixed the problem.
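For illustration, each Marathon host's /etc/hosts would gain entries like the following (10.0.0.1 and 10.0.0.2 are the nodes from the question, and x1-master-0 is the leader name from the response headers; the other hostnames and the third address are hypothetical):

10.0.0.1  x1-master-0
10.0.0.2  x1-master-1
10.0.0.3  x1-master-2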