Can't send HTTP/2 request in Gatling - http2

I've got a problem using the Gatling tool when trying to send an HTTP/2 request.
I've enabled the HTTP/2 setting in the protocol settings, and even added a mapping to make sure the client will communicate with the server over HTTP/2, but the request is still sent using HTTP/1.1.
In the console output you can see that the server can communicate using HTTP/2, but for some reason requests are sent only with HTTP/1.1.
All headers are written according to the ones from the browser.
Could someone please help me with this issue?
package test
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._
class RecordedSimulation extends Simulation {
  val httpProtocol = http
    .baseUrl("https://sitename")
    .inferHtmlResources(BlackList(""".*\.js""", """.*\.css""", """.*\.gif""", """.*\.jpeg""", """.*\.jpg""", """.*\.ico""", """.*\.woff""", """.*\.woff2""", """.*\.(t|o)tf""", """.*\.png""", """.*detectportal\.firefox\.com.*"""), WhiteList())
    .acceptHeader("*/*")
    .acceptEncodingHeader("gzip, deflate")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:85.0) Gecko/20100101 Firefox/85.0")
    .enableHttp2
    .http2PriorKnowledge(Map("sitename" -> true))
    .disableCaching
    .disableWarmUp

  val headers_0 = Map(
    "Accept" -> "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Cache-Control" -> "max-age=0",
    "sec-fetch-dest" -> "document",
    "sec-fetch-mode" -> "navigate",
    "sec-fetch-site" -> "none",
    "sec-fetch-user" -> "?1",
    "Upgrade-Insecure-Requests" -> "1")

  val scn = scenario("RecordedSimulation")
    .exec(http("request_0")
      .get("/auth?r=%2F&m=NOT_AUTHENTICATED")
      .headers(headers_0))
    .pause(2)

  setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}
In the console I see the following:
DEBUG io.gatling.http.client.impl.DefaultHttpClient - ALPN led to HTTP/2 with remote sitename
DEBUG io.gatling.http.client.impl.Http2AppHandler - Write request WritableRequest{request=DefaultFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 0, cap: 0))
GET https://sitename/auth?r=%2F&m=NOT_AUTHENTICATED HTTP/1.1
sec-fetch-site: none
Upgrade-Insecure-Requests: 1
If-Modified-Since: Fri, 29 Jan 2021 06:40:06 GMT
sec-fetch-dest: document
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Cache-Control: max-age=0
accept-encoding: gzip, deflate
If-None-Match: "1d6f60994776ee0"
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:85.0) Gecko/20100101 Firefox/85.0
accept-language: en-US,en;q=0.5
sec-fetch-mode: navigate
sec-fetch-user: ?1
accept: */*

Those are internal Gatling logs. We use Netty in a way that converts HTTP/1.1 payloads into HTTP/2 ones.
From the logs you provided, it looks like Gatling is correctly using HTTP/2 here.
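If you want a second opinion outside of Gatling, you can check which protocol the server actually negotiates over ALPN with the JDK 11+ HttpClient. This is only a minimal sketch, not part of the original post; sitename is the question's placeholder and must be replaced with the real host.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Check {
    public static void main(String[] args) throws Exception {
        // Ask for HTTP/2; the client silently falls back to HTTP/1.1 if ALPN does not offer h2.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sitename/auth?r=%2F&m=NOT_AUTHENTICATED"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Prints HTTP_2 when the server really served this exchange over HTTP/2.
        System.out.println(response.version() + " -> " + response.statusCode());
    }
}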

Related

Jersey: How to mask the password from the request

I am using Spring Boot with the Jersey implementation, and we are logging every request and response. It is logging everything, but I need to mask the password in the request. In the configuration file, we are using this:
register(new LoggingFeature(java.util.logging.Logger.getLogger(LoggingFeature.DEFAULT_LOGGER_NAME),
        Level.INFO, LoggingFeature.Verbosity.PAYLOAD_ANY, 10000));
I am not able to mask that specific field in the POST request. Please help.
SAMPLE LOGS:
2 > POST http://localhost:9092/sampleapi/login
2 > accept: */*
2 > accept-encoding: gzip, deflate, br
2 > connection: keep-alive
2 > content-length: 69
2 > content-type: application/json
2 > host: localhost:9092
2 > postman-token: 24c72655-6f97-4e19-9503-6f013c859e5f
2 > user-agent: PostmanRuntime/7.26.8
{
"userId": "1111",
"password": "TEST#1",
"yob": "1997"
}
In these logs, I need to mask only the password field.
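A rough sketch of one possible direction (not from the original post): log a redacted copy of the body from a ContainerRequestFilter and drop PAYLOAD_ANY from the LoggingFeature so the raw body is not logged as well. It assumes Jackson on the classpath, Java 9+ (for readAllBytes), a field literally named password, and a hypothetical class name.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Hypothetical filter: logs the JSON body with "password" replaced, then restores
// the entity stream so the resource method can still read the original request.
@Provider
public class MaskingRequestLoggingFilter implements ContainerRequestFilter {

    private static final Logger LOG = Logger.getLogger("request-log");
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        if (!ctx.hasEntity()) {
            return;
        }
        byte[] body = ctx.getEntityStream().readAllBytes();
        LOG.info(ctx.getMethod() + " " + ctx.getUriInfo().getRequestUri() + " "
                + mask(new String(body, StandardCharsets.UTF_8)));
        // Put the original bytes back so downstream code sees the unmodified request.
        ctx.setEntityStream(new ByteArrayInputStream(body));
    }

    private String mask(String json) {
        try {
            JsonNode node = MAPPER.readTree(json);
            if (node instanceof ObjectNode && node.has("password")) {
                ((ObjectNode) node).put("password", "*****");
            }
            return MAPPER.writeValueAsString(node);
        } catch (IOException e) {
            return "<unparseable body>";
        }
    }
}
Such a filter would be registered in the same ResourceConfig as the existing LoggingFeature, e.g. register(MaskingRequestLoggingFilter.class).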

ElasticSearch.net/NEST SniffingConnectionPool switches to port 9200 when using custom port behind proxy

When using the SniffingConnectionPool it seems that Elasticsearch.net switches to port 9200 after the initial http.settings request?
I'm setting up the ConnectionPool with an IEnumerable as follows:
var nodes = cfg.Nodes.Select(x => x.Uri);
var pool = new SniffingConnectionPool(nodes);
The URIs passed use port 92. When debugging the requests, I can see that the first request is made correctly and we get 200 OK. However, the following HEAD request uses port 9200?
11 200 HTTP X.X:X.X:92 /_nodes/http,settings?flat_settings&timeout=500ms 5 121 application/json; charset=UTF-8
12 502 HTTP X.X.X.X:9200 / 512 no-cache, must-revalidate text/html; charset=UTF-8
Am I missing something? Worth noting is that our cluster is reverse-proxied by Nginx and uses 9200/9300 to communicate internally.
Edit: The http property of http.settings looks like the following:
"http" : {
"bound_address" : [
"[::]:9200"
],
"publish_address" : "X.X.X.X:9200",
"max_content_length_in_bytes" : 104857600
}
Maybe the SniffingConnectionPool parses that content and starts using 9200?

CXF JAX-RS client always sends empty PUT requests in chunking mode regardless of AllowChunking setting

We perform a PUT request to the other party using the CXF JAX-RS client. The request body is empty.
A simple request invocation leads to a server response with code 411.
Response-Code: 411
"Content-Length is missing"
The other party's REST server requires the Content-Length HTTP header to be set.
We switched chunking off according to the note about chunking, but this did not solve the problem. The REST server still answers with a 411 error.
Here is our conduit configuration from the cxf.xml file:
<http-conf:conduit name="{http://myhost.com/ChangePassword}WebClient.http-conduit">
    <http-conf:client AllowChunking="false"/>
</http-conf:conduit>
A line in the log confirms that the execution of our request is bound to our conduit configuration:
DEBUG o.a.cxf.transport.http.HTTPConduit - Conduit '{http://myhost.com/ChangePassword}WebClient.http-conduit' has been configured for plain http.
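(As an aside, the same AllowChunking policy can also be set programmatically on the conduit. The sketch below is an illustration rather than our code, and it assumes a CXF WebClient; the portable JAX-RS Client used later in this question would need its underlying CXF configuration looked up differently.)
import org.apache.cxf.jaxrs.client.WebClient;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public class NoChunkingClient {
    public static void main(String[] args) {
        // Programmatic equivalent of AllowChunking="false" in cxf.xml.
        WebClient webClient = WebClient.create("http://myhost.com/ChangePassword");
        HTTPConduit conduit = WebClient.getConfig(webClient).getHttpConduit();
        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setAllowChunking(false);
        conduit.setClient(policy);
        // webClient can now issue the PUT without chunked transfer encoding.
    }
}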
Adding the Content-Length header explicitly also did not help:
Invocation.Builder builder = ...
builder = builder.header(HttpHeaders.CONTENT_LENGTH, 0);
A CXF client log entry confirms the header is set; however, when we sniffed packets, we surprisingly found that the header setting was completely ignored by the CXF client. The Content-Length header was not sent.
Here is the log. The Content-Length header is present:
INFO o.a.c.i.LoggingOutInterceptor - Outbound Message
---------------------------
ID: 1
Address: http://myhost.com/ChangePassword?username=abc%40gmail.com&oldPassword=qwerty123&newPassword=321ytrewq
Http-Method: PUT
Content-Type: application/x-www-form-urlencoded
Headers: {Accept=[application/json], client_id=[abcdefg1234567890abcdefg12345678], Content-Length=[0], Content-Type=[application/x-www-form-urlencoded], Cache-Control=[no-cache], Connection=[Keep-Alive]}
--------------------------------------
DEBUG o.apache.cxf.transport.http.Headers - Accept: application/json
DEBUG o.apache.cxf.transport.http.Headers - client_id: abcdefg1234567890abcdefg12345678
DEBUG o.apache.cxf.transport.http.Headers - Content-Length: 0
DEBUG o.apache.cxf.transport.http.Headers - Content-Type: application/x-www-form-urlencoded
DEBUG o.apache.cxf.transport.http.Headers - Cache-Control: no-cache
DEBUG o.apache.cxf.transport.http.Headers - Connection: Keep-Alive
And here is the output of the packet sniffer. The Content-Length header is not present:
PUT http://myhost.com/ChangePassword?username=abc%40gmail.com&oldPassword=qwerty123&newPassword=321ytrewq HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Accept: application/json
client_id: abcdefg1234567890abcdefg12345678
Cache-Control: no-cache
User-Agent: Apache-CXF/3.1.8
Pragma: no-cache
Host: myhost.com
Proxy-Connection: keep-alive
Does anyone know how to actually disable chunking?
Here is our code:
public static void main(String[] args)
{
    String clientId = "abcdefg1234567890abcdefg12345678";
    String uri = "http://myhost.com";
    String user = "abc#gmail.com";
    Client client = ClientBuilder.newBuilder().newClient();
    WebTarget target = client.target(uri);
    target = target.path("ChangePassword").queryParam("username", user).queryParam("oldPassword", "qwerty123").queryParam("newPassword", "321ytrewq");
    Invocation.Builder builder = target.request("application/json").header("client_id", clientId).header(HttpHeaders.CONTENT_LENGTH, 0);
    Response response = builder.put(Entity.form(new Form()));
    String body = response.readEntity(String.class);
    System.out.println(body);
}
Versions:
OS: Windows 7 Enterprise SP1
Arch: x86_64
Java: 1.7.0_80
CXF: 3.1.8
I had a very similar issue that, like you, I was not able to solve by trying to turn off chunking.
What I ended up doing was setting the Content-Length to 1 and adding some whitespace " " as the body. In my case it seemed that the proxy servers in front of the server application were rejecting the request; doing this got me past the proxies, and the server was able to process the request since it was only operating based on the URL.
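Applied to the code from the question, this workaround amounts to replacing the empty form with a one-character body, so the client computes Content-Length: 1 on its own and the explicit CONTENT_LENGTH header is no longer needed. A minimal sketch under that assumption, reusing the question's values:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class ChangePasswordClient {
    public static void main(String[] args) {
        String clientId = "abcdefg1234567890abcdefg12345678";
        Client client = ClientBuilder.newClient();
        WebTarget target = client.target("http://myhost.com")
                .path("ChangePassword")
                .queryParam("username", "abc#gmail.com")
                .queryParam("oldPassword", "qwerty123")
                .queryParam("newPassword", "321ytrewq");
        // A single-space body gives the request a real entity, so the client can
        // send Content-Length: 1 instead of an empty body that ends up chunked.
        Response response = target.request("application/json")
                .header("client_id", clientId)
                .put(Entity.entity(" ", MediaType.APPLICATION_FORM_URLENCODED_TYPE));
        System.out.println(response.getStatus() + " " + response.readEntity(String.class));
    }
}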

WebSocket handshake error with Caddy proxy

I'm trying to initiate a WebSocket connection between a Chrome browser client and the server.
Overview of my implementation:
There is a set of different up-and-running projects. The main project is the hub for all the other projects, and it handles all HTTP requests, routing, and proxying to the other sub-projects. All these projects use load balancers. My attempt is to create a WebSocket connection from the Chrome browser to one sub-project.
Caddy version: 0.9.3
WebSocket library: github.com/gorilla/websocket
The main project's Caddy config:
https://{$DOMAIN_NAME}/analytics/ {
    tls ../resources/security/server.pem ../resources/security/server.key
    proxy / https://localhost:8107/analytics {
        websocket
        insecure_skip_verify
    }
}
The sub-project's Caddy config:
localhost:{$ANALYTICS_CADDY_PORT}/analytics {
    root webapps/analytics
    gzip
    ext .html
    tls {$ANALYTICS_CERTIFICATE_FILE} {$ANALYTICS_KEY_FILE}
    proxy /api https://localhost:{$ANALYTICS_HTTPS_PORT} {
        websocket
        insecure_skip_verify
    }
}
Inside the analytics sub-project, "/api/ws" triggers the CreateSocketConnection() method.
//Starting the API server
router := routes.NewRouter()
http.Handle("/", router)
http.HandleFunc("/api/ws", api.CreateSocketConnection)
CreateSocketConnection implementation:
func CreateSocketConnection(w http.ResponseWriter, r *http.Request) {
    var upgrader = websocket.Upgrader{
        ReadBufferSize:  1024,
        WriteBufferSize: 1024,
    }
    _, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Fatal("upgrader failed :", err.Error())
    }
    //controllers.HandleSocket(ws)
}
Client-side implementation:
conn = new WebSocket("wss://xxxx.com/analytics/api/ws");
The issue is that I'm not getting any error log in the backend, but the socket connection fails in the browser.
WebSocket connection to 'wss://xxxx.com/analytics/api/ws' failed: Error during WebSocket handshake: Unexpected response code: 502
Request headers:
Accept-Encoding:gzip, deflate, sdch, br
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:Upgrade
Cookie:username=admin; tenantid=1; tenantdomain=super.com;
DNT:1
Host:xxxx.com
Origin:https://xxxx.com
Pragma:no-cache
Sec-WebSocket-Extensions:permessage-deflate; client_max_window_bits
Sec-WebSocket-Key:O/DS1lRHzXptoWz5WR131A==
Sec-WebSocket-Version:13
Upgrade:websocket
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36
But the response headers are as follows:
Content-Encoding:gzip
Content-Length:40
Content-Type:text/plain; charset=utf-8
Date:Sat, 29 Oct 2016 03:13:23 GMT
Server:Caddy
Vary:Accept-Encoding
X-Content-Type-Options:nosniff
Please note that I'm receiving the request headers inside the CreateSocketConnection method as follows:
map[
Connection:[Upgrade]
X-Forwarded-For:[127.0.0.1, 127.0.0.1]
Dnt:[1]
Origin:[https://xxxx.com]
Pragma:[no-cache]
Sec-Websocket-Extensions:[permessage-deflate; client_max_window_bits]
Sec-Websocket-Version:[13]
Accept-Encoding:[gzip]
User-Agent:[Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36]
Cache-Control:[no-cache]
Sec-Websocket-Key:[O/DS1lRHzXptoWz5WR131A==]
Upgrade:[websocket]
Cookie:[username=admin; tenantid=1; tenantdomain=super.com; ]
Accept-Language:[en-US,en;q=0.8]]
Am I missing something in my implementation?
Thanks in advance
I had a similar issue; what I was missing was the transparent preset.
Ex.
https://{$DOMAIN_NAME}/analytics/ {
    tls ../resources/security/server.pem ../resources/security/server.key
    proxy / https://localhost:8107/analytics {
        transparent
        websocket
        insecure_skip_verify
    }
}
transparent specifies that the host information from the original request should be passed through to the backend, which matters if you have authentication.
transparent:
Passes thru host information from the original request as most backend apps would expect. Shorthand for:
header_upstream Host {host}
header_upstream X-Real-IP {remote}
header_upstream X-Forwarded-For {remote}
header_upstream X-Forwarded-Port {server_port}
header_upstream X-Forwarded-Proto {scheme}
Source: https://caddyserver.com/docs/proxy

Tyrus websocket server handshake issue?

I followed the User Guide available here. I added this in my pom:
<dependency>
    <groupId>org.glassfish.tyrus</groupId>
    <artifactId>tyrus-server</artifactId>
    <version>1.2</version>
</dependency>
<dependency>
    <groupId>org.glassfish.tyrus</groupId>
    <artifactId>tyrus-container-grizzly</artifactId>
    <version>1.2</version>
</dependency>
I wrote this in my main class:
Server server = new Server("localhost", 8624, "/", EchoEndPoint.class);
try
{
    server.start();
    BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
    System.out.print("Please press a key to stop the server.");
    reader.readLine();
}
catch(Exception ex) { ex.printStackTrace(); }
finally
{
    server.stop();
}
The content of my EchoEndPoint class is the same as described in the guide.
I tried to connect to this with an HTML5 WebSocket:
var ws = new WebSocket("ws://localhost:8624/echo");
It seems that, browser-side, it doesn't connect (it calls the onClose callback directly). And, server-side, I get this in the console:
SEVERE: Invalid Connection header returned: 'keep-alive'
org.glassfish.tyrus.websockets.HandshakeException: Invalid Connection header returned: 'keep-alive'
at org.glassfish.tyrus.websockets.HandShake.validate(HandShake.java:254)
at org.glassfish.tyrus.websockets.HandShake.checkForHeader(HandShake.java:246)
at org.glassfish.tyrus.websockets.HandShake.<init>(HandShake.java:97)
at org.glassfish.tyrus.websockets.draft06.HandShake06.<init>(HandShake06.java:63)
[...]
org.glassfish.grizzly.filterchain.DefaultFilterChain execute
WARNING: Exception during FilterChain execution
java.lang.ClassCastException: org.glassfish.grizzly.http.HttpContent cannot be cast to org.glassfish.tyrus.websockets.DataFrame
at org.glassfish.tyrus.container.grizzly.WebSocketFilter.handleWrite(WebSocketFilter.java:330)
If it's of any help, here are the request headers captured with the browser inspector:
GET /echo HTTP/1.1
Host: localhost:8624
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:22.0) Gecko/20100101 Firefox/22.0 FirePHP/0.7.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Sec-WebSocket-Version: 13
Origin: null
Sec-WebSocket-Key: yhGPwJ26c5fYEZ5/abvtqw==
x-insight: activate
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket
Is this a handshake problem?
EDIT: I've tried in Chrome (28.0.1500.72) and it's working. Maybe the issue comes from Firefox when it builds the header?
Tyrus is complaining about the Connection: keep-alive, Upgrade header.
Firefox isn't doing anything wrong here.
Tyrus is being too restrictive and is not following the WebSocket spec (RFC 6455) with regard to how to handle the Connection header.
The RFC states in Section 4.1:
6. The request MUST contain a |Connection| header field whose value
MUST include the "Upgrade" token.
and
3. If the response lacks a |Connection| header field or the
|Connection| header field doesn't contain a token that is an
ASCII case-insensitive match for the value "Upgrade", the client
MUST _Fail the WebSocket Connection_.
This seems like a bug in Tyrus.
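To make the spec text concrete, a compliant handshake check treats the Connection header as a comma-separated token list and matches "Upgrade" case-insensitively, instead of comparing the whole raw value. A small illustrative check (my own sketch, not Tyrus code):
import java.util.Arrays;

public class ConnectionHeaderCheck {

    // RFC 6455, section 4.1: the Connection header value MUST *include* the
    // "Upgrade" token (ASCII case-insensitive); it need not be exactly "Upgrade".
    static boolean containsUpgradeToken(String connectionHeader) {
        return Arrays.stream(connectionHeader.split(","))
                .map(String::trim)
                .anyMatch(token -> token.equalsIgnoreCase("upgrade"));
    }

    public static void main(String[] args) {
        System.out.println(containsUpgradeToken("Upgrade"));             // true  (Chrome)
        System.out.println(containsUpgradeToken("keep-alive, Upgrade")); // true  (Firefox)
        System.out.println(containsUpgradeToken("keep-alive"));          // false (must fail)
    }
}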
