Unable to upgrade HTTPS to WebSocket in Envoy (https to ws upgrade)

I'm trying to upgrade an HTTPS connection to a WebSocket connection with the configuration below, but I'm receiving a 403 error.
Configuration:
match:
  safe_regex:
    google_re2: {}
    regex: "/api/1/(web|rest)/ws.*"
upgrade_configs:
- upgrade_type: websocket
The access log shows the request as upgrade_failed with a 403 code, and the debug logs show the following:
':authority', 'localhost'
':path', '/api/1/rest/ws'
':method', 'GET'
'sec-websocket-version', '13'
'sec-websocket-key', 'winoU3MVKB9q2s02lVj7ug=='
'connection', 'Upgrade'
'upgrade', 'websocket'
'sec-websocket-extensions', 'permessage-deflate; client_max_window_bits'
[2022-11-01 15:48:09.757][21026][debug][http] [external/envoy/source/common/http/filter_manager.cc:883] [C2183][S8415942046573715112] Sending local reply with details upgrade_failed
[2022-11-01 15:48:09.757][21026][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1400] [C2183][S8415942046573715112] closing connection due to connection close header
[2022-11-01 15:48:09.757][21026][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1455] [C2183][S8415942046573715112] encoding headers via codec (end_stream=true):
':status', '403'
'date', 'Tue, 01 Nov 2022 15:48:09 GMT'
'server', 'envoy'
'connection', 'close'
Any insights into what might be the issue?
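For comparison, a minimal sketch of where upgrade_configs normally sits: in Envoy's route configuration it is a field of the RouteAction (route:), not a sibling of match: (it can also be declared once on the HttpConnectionManager). The cluster name below is hypothetical:
match:
  safe_regex:
    google_re2: {}
    regex: "/api/1/(web|rest)/ws.*"
route:
  cluster: ws_backend            # hypothetical upstream cluster
  upgrade_configs:               # RouteAction.upgrade_configs
  - upgrade_type: websocket
    enabled: true
If WebSocket upgrades are not enabled for the matched route, Envoy rejects the Upgrade request with exactly this local reply: a 403 with the upgrade_failed detail.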

Related

Quarkus grpc is throwing start up error: Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException

I am trying to start the gRPC server with the property
quarkus.grpc.server.use-separate-server=true
In that case, I am getting the below error during server startup:
2023-01-19 13:12:51,762 WARN [io.qua.grp.run.GrpcServerRecorder] (main) Using legacy gRPC support, with separate new HTTP server instance. Switch to single HTTP server instance usage with quarkus.grpc.server.use-separate-server=false property
2023-01-19 13:12:51,824 INFO [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Registering gRPC reflection service
2023-01-19 13:12:51,934 ERROR [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:330)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:141)
But when I start the gRPC server with the property
quarkus.grpc.server.use-separate-server=false
the gRPC server starts, but the client is not able to access it. I am getting the below error on the client side:
13:54:28 ERROR line=111 traceId=, parentId=, spanId=, sampled= [qu.ms.of.OfferResource] (executor-thread-0) Exception: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111: io.grpc.StatusRuntimeException: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
How do we overcome this issue?
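A hedged sketch of configuration for both symptoms; the values and the client name (offer) are assumptions, not taken from the question. UnresolvedAddressException at bind time usually means the configured host cannot be resolved, and with use-separate-server=false the gRPC services are served on the main HTTP port (8080 by default), so a client still pointing at the dedicated gRPC port will get UNAVAILABLE:
# server: bind to an address the JVM can resolve
quarkus.grpc.server.host=0.0.0.0
quarkus.grpc.server.port=9000
# client, when use-separate-server=false: target the HTTP port instead
quarkus.grpc.clients.offer.host=localhost
quarkus.grpc.clients.offer.port=8080
quarkus.grpc.clients.offer.plain-text=true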

NiFi InvokeHTTP POST invalid request

I am trying to execute a simple POST request from NiFi using the InvokeHTTP processor. The target server responds with "error: invalid request". I am able to POST successfully with curl from NiFi's host.
I have set the processor's HTTP method to "POST", and the URL to "http://myhost:1234", other fields are set to the defaults. The incoming flowfile's mime.type is application/json and the flowfile content is valid json.
Here's what I tried (server names etc. were replaced with fake names):
I confirmed that the target server is fine with curl, using seemingly identical parameters to the InvokeHTTP processor (with and without the --http2 flag; likewise I tried the InvokeHTTP "HTTP/2 Disabled" property both true and false):
curl -v --http2 -X POST -H "content-type: application/json" http://myhost:1234/ -d '[{"key":"value"}]'
For the data I used the actual content of the flowfile used by InvokeHTTP.
* Trying <ip>...
* TCP_NODELAY set
* Connected to myhost (<ip>) port 1234 (#0)
> POST / HTTP/1.1
> Host: myhost:1234
> User-Agent: curl/7.61.1
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> content-type: application/json
> Content-Length: 17
>
* upload completely sent off: 17 out of 17 bytes
< HTTP/1.1 201 Created
< Server: <servername>
< Content-Length: 0
<
* Connection #0 to host myhost left intact
With InvokeHTTP, the response is routed to the NoRetry output with the following attributes added to the flowfile:
invokehttp.response.body: error: invalid request
invokehttp.response.url: http://myhost:1234/
invokehttp.status.code: 400
invokehttp.status.message: Forbidden
I tried logging the request by setting the org.apache.nifi.processors.standard.InvokeHTTP logger to DEBUG. The resulting logs:
2022-11-17 11:22:03,384 DEBUG [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=<guid>]
Request to remote service:
http://myhost:1234/
date: Thu, 17 Nov 2022 11:22:03 GMT
user-agent:
2022-11-17 11:22:03,384 DEBUG [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=<guid>]
Request to remote service:
http://myhost:1234/
date: Thu, 17 Nov 2022 11:22:03 GMT
user-agent:
2022-11-17 11:22:03,391 DEBUG [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=<guid>]
Response from remote service:
http://myhost:1234/
content-length: 23
server: <servername>
2022-11-17 11:22:03,391 DEBUG [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=<guid>]
Response from remote service:
http://myhost:1234/
content-length: 23
server: <servername>
At this point I don't know what to do. I don't know if the logged requests are purposefully limited to these fields or if there's actually a lot of information missing from the requests themselves such as the payload and the content type. I'm also wondering why the requests are logged twice, or whether they're actually sent twice (I'm on a single node environment).
I expect this processor to be able to perform such a simple request without much trouble, and have confirmed that the target server is not the issue. Did I miss something? How can I debug this further (e.g. see the actual raw request sent by InvokeHTTP)?
Thank you.
Check the mime.type attribute. It is automatically translated to a header for you, and you could be sending something like form-encoded parameters as the mime type instead of the expected JSON.
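Building on that: InvokeHTTP's Content-Type property defaults to the expression ${mime.type}, so whatever mime.type the flowfile carries goes straight into the request header. A minimal check, assuming a stock processor configuration, is to pin it explicitly:
Content-Type (InvokeHTTP property): application/json
or to normalize the attribute upstream with UpdateAttribute (mime.type = application/json). For the raw-request question: since the connection is plain HTTP, capturing it on the NiFi host with something like tcpdump -A -s0 host myhost and port 1234 shows exactly what InvokeHTTP sends.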

Apache2/Laravel responding 301 on some agents

I have wifidog installed on a TP-LINK (OpenWrt 18.06.2), and wifidog-auth-laravel (github.com/wifidog/wifidog-auth-laravel) installed on an OVH Debian box.
If I use curl, Chrome or wget, I get the pong response for the authentication URL. But if wifidog attempts to get the pong response, I get a 301 Moved Permanently response.
How can that be?
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:302) Level 1: Connecting to auth server example.com:80
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:331) Level 1: Successfully connected to auth server example.com:80
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:141) Unlocking config
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:141) Config unlocked
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:147) Connected to auth server
[6][Mon May 20 13:44:54 2019][5977](wd_util.c:116) AUTH_ONLINE status became ON
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:77) Sending HTTP request to auth server: [GET /ping/?gw_id=EC086B35444C&sys_uptime=1820&sys_memfree=6096&sys_load=0.70&wifidog_uptime=3 HTTP/1.0
User-Agent: WiFiDog 1.2.1
Host: example.com
]
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:87) Reading response
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:111) Read 725 bytes
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:124) HTTP Response from
Server: [HTTP/1.1 301 Moved Permanently
Date: Mon, 20 May 2019 13:44:54 GMT
Server: Apache/2.4.25 (Debian)
Location: http://example.com/ping?gw_id=EC086B35444C&sys_uptime=1820&sys_memfree=6096&sys_load=0.70&wifidog_uptime=3
Content-Length: 415
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at example.com Port 80</address>
</body></html>
]
[4][Mon May 20 13:44:54 2019][5977](ping_thread.c:191) Auth server did NOT say Pong!
[7][Mon May 20 13:44:54 2019][5977](firewall.c:140) Marking auth server down
I found the solution: the WiFiDog user agent forms the URL differently from the other clients. It requests /ping/?gw_id=... with a trailing slash, which the server 301-redirects to /ping?gw_id=..., and WiFiDog does not follow redirects. A workaround is in wifidog.conf:
AuthServer {
    Hostname example.com
    SSLAvailable yes
    Path /
    PingScriptPathFragment ping?
    LoginScriptPathFragment login?
    PortalScriptPathFragment portal?
    MsgScriptPathFragment gw_message.php?
    AuthScriptPathFragment auth?
}
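An alternative fix on the server side, assuming the 301 comes from the stock Laravel public/.htaccess (it ships a "Redirect Trailing Slashes If Not A Folder" rewrite that turns /ping/?... into /ping?... with a 301), is to comment that rule out; a sketch, untested:
# public/.htaccess
# RewriteCond %{REQUEST_FILENAME} !-d
# RewriteCond %{REQUEST_URI} (.+)/$
# RewriteRule ^ %1 [L,R=301]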

Chrome closes itself just after it loads for mobile web automation

The Chrome browser opens and then closes automatically during mobile web automation; the code doesn't proceed any further. I'm using a real device, Appium version 1.6, and Chrome version 54.
Desired Capabilities :
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.BROWSER_NAME, "chrome");
capabilities.setCapability("deviceName", "Testing device");
capabilities.setCapability("platformVersion", "5.0");
capabilities.setCapability("platformName", "Android");
capabilities.setCapability(MobileCapabilityType.NEW_COMMAND_TIMEOUT, 7200);
Driver Initialisation :
AndroidDriver<AndroidElement> aDriver = new AndroidDriver<AndroidElement>(new URL(
"http://127.0.0.1:4723/wd/hub"), setCapabilities());
System.out.println(aDriver.getSessionId());
aDriver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
aDriver.get("https://www.google.com");
Appium log :
[debug] [UiAutomator] Moving to state 'online'
[AndroidBootstrap] Android bootstrap socket is now connected
[AndroidDriver] Starting a chrome-based browser session
[debug] [Chromedriver] Changed state to 'starting'
[Chromedriver] Set chromedriver binary as: /usr/local/lib/node_modules/appium/node_modules/appium-android-driver/node_modules/appium-chromedriver/chromedriver/mac/chromedriver
[Chromedriver] Killing any old chromedrivers, running: pkill -15 -f "/usr/local/lib/node_modules/appium/node_modules/appium-android-driver/node_modules/appium-chromedriver/chromedriver/mac/chromedriver.*--port=9515"
[AndroidBootstrap] [BOOTSTRAP LOG] [debug] json loading complete.
[AndroidBootstrap] [BOOTSTRAP LOG] [debug] Registered crash watchers.
[AndroidBootstrap] [BOOTSTRAP LOG] [debug] Client connected
[Chromedriver] No old chromedrivers seemed to exist
[Chromedriver] Spawning chromedriver with: /usr/local/lib/node_modules/appium/node_modules/appium-android-driver/node_modules/appium-chromedriver/chromedriver/mac/chromedriver --url-base=wd/hub --port=9515 --adb-port=5037
[Chromedriver] [STDOUT] Starting ChromeDriver 2.21.371459 (36d3d07f660ff2bc1bf28a75d1cdabed0983e7c4) on port 9515
Only local connections are allowed.
[JSONWP Proxy] Proxying [GET /status] to [GET http://127.0.0.1:9515/wd/hub/status] with no body
[Chromedriver] [STDERR] [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0
[JSONWP Proxy] Got response with status 200: "{\"sessionId\":\"\",\"stat...
[JSONWP Proxy] Proxying [POST /session] to [POST http://127.0.0.1:9515/wd/hub/session] with body: {"desiredCapabilities":{"ch...
[JSONWP Proxy] Got response with status 200: {"sessionId":"e1cc8c5acefcf...
[debug] [Chromedriver] Changed state to 'online'
[Appium] New AndroidDriver session created successfully, session 4cd47498-3a35-495e-8cdd-1fd02e69427c added to master session list
[MJSONWP] Responding to client with driver.createSession() result: {"platform":"LINUX","webSto...
[HTTP] <-- POST /wd/hub/session 200 21605 ms - 875
[HTTP] --> POST /wd/hub/session/4cd47498-3a35-495e-8cdd-1fd02e69427c/timeouts {"type":"implicit","ms":10000}
[MJSONWP] Driver proxy active, passing request on via HTTP proxy
[JSONWP Proxy] Proxying [POST /wd/hub/session/4cd47498-3a35-495e-8cdd-1fd02e69427c/timeouts] to [POST http://127.0.0.1:9515/wd/hub/session/e1cc8c5acefcf0ef250abfcf6b59b61d/timeouts] with body: {"type":"implicit","ms":10000}
[JSONWP Proxy] Got response with status 200: {"sessionId":"e1cc8c5acefcf...
[JSONWP Proxy] Replacing sessionId e1cc8c5acefcf0ef250abfcf6b59b61d with 4cd47498-3a35-495e-8cdd-1fd02e69427c
[HTTP] <-- POST /wd/hub/session/4cd47498-3a35-495e-8cdd-1fd02e69427c/timeouts 200 12 ms - 220
Could it be a version problem between the installed Chrome and the ChromeDriver?
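Quite possibly: the log shows Appium spawning its bundled ChromeDriver 2.21, which predates Chrome 54 (per the ChromeDriver release notes it targets roughly Chrome 46-50). A hedged sketch of pointing Appium at a newer ChromeDriver instead; the path is an assumption:
appium --chromedriver-executable /usr/local/bin/chromedriver
or per session via the capability:
// hypothetical path to a ChromeDriver build that supports Chrome 54 (2.25 or newer)
capabilities.setCapability("chromedriverExecutable", "/usr/local/bin/chromedriver");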

Magento Nginx 502 Gateway Error

After updating Magento with Patch 6788 and updating the Intenso theme, I am getting the following error on some of the product URLs:
2015/11/14 16:55:12 [notice] 1594#0: signal process started
2015/11/14 16:56:10 [error] 1602#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 69.204.XXX.XXX, server: localhost, request: "GET /houseware/karu-non-stick-bowl-w-g-16cm.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:7161", host: "107.170.XXX.XXX", referrer: "http://107.170.XXX.XXX/houseware.html"
Basically I am getting a 502 on some of the product pages. Does anyone know how to resolve this?
I have an NGINX + PHP-FPM + MySQL setup.
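"recv() failed (104: Connection reset by peer) while reading response header from upstream" generally means the PHP-FPM worker died mid-request, often a segfault or an exhausted memory limit on heavy Magento pages, so the PHP-FPM and PHP error logs are the first place to look. A hedged sketch of settings commonly raised for Magento; the numbers are assumptions, not a known fix:
; php.ini / php-fpm pool
memory_limit = 512M
max_execution_time = 300
# nginx server block
fastcgi_read_timeout 300;
fastcgi_buffer_size 32k;
fastcgi_buffers 16 16k;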
