Why is the following sequence of events resulting in a protocol error? - http2

I am playing around with an HTTP2 client/server implementation and I'm running into a protocol_error but I'm not sure why.
Received frame: {:length=>18, :type=>:settings, :flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 128], [:settings_initial_window_size, 65536], [:settings_max_frame_size, 16777215]]}
Sent frame: {:type=>:settings, :stream=>0, :payload=>[], :flags=>[:ack]}
Received frame: {:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>2147418112}
Sent frame: {:type=>:headers, :flags=>[:end_headers, :end_stream], :payload=>{":scheme"=>"https", ":method"=>"GET", ":path"=>"/index", ":authority"=>"www.example.com"}, :stream=>1}
Received frame: {:length=>8, :type=>:goaway, :flags=>[], :stream=>0, :last_stream=>0, :error=>:protocol_error}
I suspect this is a problem with stream IDs, but I'm new to the HTTP/2 protocol, so I'm not sure what's actually going wrong or why I'm getting the protocol error.

I would guess it is because you have not sent your own Settings frame; you have only acknowledged the server's Settings frame.
The spec could be clearer on this:
A SETTINGS frame MUST be sent by both endpoints at the start of a connection
Does an acknowledgement Settings frame count?
However, the section on the connection preface states:
This sequence MUST be followed by a SETTINGS frame (Section 6.5), which MAY be empty.
...
The SETTINGS frames received from a peer as part of the connection preface MUST be acknowledged (see Section 6.5.3) after sending the connection preface.
So I read that as: you must send your own Settings frame and then acknowledge the server's Settings frame.
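For illustration, here is a minimal Python sketch (using only `struct`, not a real HTTP/2 library) of the bytes a client is expected to send at the start of a connection: the 24-octet preface, its own SETTINGS frame (an empty one is fine), and then, once the server's SETTINGS arrives, an ACK:

```python
import struct

# RFC 7540 client connection preface: a fixed 24-octet magic string,
# which MUST be followed by a SETTINGS frame (possibly empty).
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def frame(length, ftype, flags, stream_id, payload=b""):
    """Encode an HTTP/2 frame header: 24-bit length, 8-bit type,
    8-bit flags, 31-bit stream id (high bit reserved)."""
    header = struct.pack(">I", length)[1:]        # 3-byte length
    header += struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

SETTINGS, ACK_FLAG = 0x4, 0x1

# What the client should send, in order:
opening = (
    PREFACE
    + frame(0, SETTINGS, 0x0, 0)       # our own (empty) SETTINGS frame
    # ... then, after receiving the server's SETTINGS:
    + frame(0, SETTINGS, ACK_FLAG, 0)  # ACK of the server's SETTINGS
)
```

The trace in the question sends only the ACK, which matches the theory above.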

Try launching Chrome with the flag "chrome.exe --disable-http2"; if the error then vanishes, the problem is related to the HTTP/2 protocol.

Related

RabbitMQ operation basic.ack caused a channel exception precondition_failed: unknown delivery tag 3 - Golang [duplicate]

We have a PHP app that forwards messages from RabbitMQ to connected devices down a WebSocket connection (PHP AMQP pecl extension v1.7.1 & RabbitMQ 3.6.6).
Messages are consumed from an array of queues (1 per websocket connection), and are acknowledged by the consumer when we receive confirmation over the websocket that the message has been received (so we can requeue messages that are not delivered in an acceptable timeframe). This is done in a non-blocking fashion.
99% of the time this works perfectly, but very occasionally we receive the error "PRECONDITION_FAILED - unknown delivery tag". This closes the channel. As I understand it, this exception is a result of one of the following conditions:
The message has already been acked or rejected.
An ack is attempted over a channel the message was not delivered on.
An ack is attempted after the message timeout (ttl) has expired.
We have implemented protections for each of the above cases but yet the problem continues.
I realise there are a number of implementation details that could impact this, but at a conceptual level, are there any other failure cases that we have not considered and should be handling? Or is there a better way of achieving the functionality described above?
"PRECONDITION_FAILED - unknown delivery tag" usually happens because of double ack-ing, ack-ing on wrong channels or ack-ing messages that should not be ack-ed.
So in the same vein, you are trying to execute basic.ack twice, or calling basic.ack on another channel.
(Solution below)
Quoting Jan Grzegorowski from his blog:
If you are struggling with the 406 error message which is included in
title of this post you may be interested in reading the whole story.
Problem
I was using amqplib for connecting a Node.js-based message processor with
the RabbitMQ broker. Everything seemed to be working fine, but from time to
time a 406 (PRECONDITION-FAILED) message showed up in the log:
"Error: Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - unknown delivery tag 1"
Solution
Keeping things simple:
You have to ACK messages in the same order as they arrive in your system
You can't ACK messages on a different channel than the one they arrived on
If you break either of these rules you will face the 406 (PRECONDITION-FAILED) error message.
Original answer
It can happen if you set the no-ack option of a consumer to true, which means you shouldn't call the ack function manually:
https://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume.no-ack
The solution: set no-ack flag to false.
If you acknowledge the same message twice, you can get this error.
A variation of the double-acking described above: there is an "obscure" situation in which you ack a message more than once, namely when you ack a message with the multiple parameter set to true. That acks all messages delivered before the one you are acking as well.
So if you then try to ack one of the messages that was already implicitly acked by that multiple=true call, you are acking it a second time, hence the error. It is confusing, but I hope it makes sense after a few reads.
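The multiple flag's semantics can be seen in a toy model of the broker's per-channel bookkeeping (this is plain Python, not pika or the AMQP wire protocol; names are made up for illustration):

```python
# A toy model of how a broker tracks outstanding delivery tags per
# channel, showing why re-acking raises "unknown delivery tag".
class ToyChannel:
    def __init__(self):
        self.next_tag = 1
        self.unacked = set()

    def deliver(self):
        """Deliver a message and record its tag as outstanding."""
        tag = self.next_tag
        self.next_tag += 1
        self.unacked.add(tag)
        return tag

    def basic_ack(self, delivery_tag, multiple=False):
        if delivery_tag not in self.unacked:
            raise RuntimeError(
                f"PRECONDITION_FAILED - unknown delivery tag {delivery_tag}")
        if multiple:
            # acks this tag AND every earlier outstanding tag
            self.unacked -= {t for t in self.unacked if t <= delivery_tag}
        else:
            self.unacked.remove(delivery_tag)

ch = ToyChannel()
t1, t2, t3 = ch.deliver(), ch.deliver(), ch.deliver()
ch.basic_ack(t2, multiple=True)   # implicitly acks t1 as well
# ch.basic_ack(t1)                # would now raise: t1 is unknown
```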
Make sure you have the correct application.properties:
If you use the RabbitTemplate without any channel configuration, use "simple":
spring.rabbitmq.listener.simple.acknowledge-mode=manual
If your app actually uses a direct container, setting the "simple" property has no effect and you will see the same error; the direct variant looks like this:
spring.rabbitmq.listener.direct.acknowledge-mode=manual

pyav / libav / ffmpeg: what happens when frames from a live source are not processed fast enough

I am using pyav to process a live RTSP stream:
import av
import time

URL = "RTSP_url"

container = av.open(
    URL, 'r',
    options={
        'rtsp_transport': 'tcp',
        'stimeout': '5000000',
        'max_delay': '5000000',
    }
)

for packet in container.demux(video=0):
    for frame in packet.decode():
        # do something
        time.sleep(10)
What happens if I do something too slow? Are frames / packets dropped or are they buffered?
I guess the same question would apply to libav or ffmpeg.
TCP is a guaranteed delivery protocol with built-in flow control. If you do not process the incoming data as fast as it is received, the TCP stack will buffer the data until its buffers are full, at which point the TCP protocol will let the sender know that it cannot receive any more data. If this continues, the sender's output buffers will eventually fill up, and then it is up to the sender to decide what to do.
An IP camera at that point may throw frames away or it may even drop the connection. Most IP cameras also use a keep-alive mechanism typically via RTCP packets sent over the RTSP stream. The camera may send Sender Reports and the receiver should send back Receiver Reports. If the camera does not get a Receiver Report within a timeout, it will drop the connection. I would have to assume that either the av library or ffmpeg is doing that.
You probably do not want to do time.sleep(10).
If you really feel that you need to discard packets, then you could examine your packets before calling decode to see if you are falling behind. If you are getting too far behind, you can discard packets that are not key frames until you catch up. The effect will be that the video will have jumps in it.
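The catch-up strategy in the last paragraph can be sketched like this (packets are modeled as plain (is_keyframe, data) tuples for illustration; with pyav you would instead inspect the packets yielded by container.demux()):

```python
# A sketch of the catch-up strategy: when the consumer falls behind,
# discard packets until the next key frame, since delta frames cannot
# be decoded without the key frame they reference.
def drain_to_keyframe(packets, behind):
    """Yield packets, skipping delta frames while we are behind."""
    catching_up = behind
    for is_keyframe, data in packets:
        if catching_up and not is_keyframe:
            continue              # drop frames we cannot decode in time
        catching_up = False       # resume normal decoding at a key frame
        yield (is_keyframe, data)

# Catching up mid-stream: delta frames are dropped until I3 arrives.
stream = [(False, "P1"), (False, "P2"), (True, "I3"), (False, "P4")]
caught_up = list(drain_to_keyframe(stream, behind=True))
# caught_up == [(True, "I3"), (False, "P4")]
```

As the answer notes, the visible effect is that the video jumps forward at each catch-up.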
In my experience gstreamer could store in a buffer old frames and return them even minutes later. Not sure if PyAv would do the same.

How are HTTP/2 frames sent over a socket?

After sending the HTTP/2 connection preface I am getting the following error:
First received frame was not SETTINGS. Hex dump for first 5 bytes
I do not know how to send HTTP/2 frames over a socket.

Am I doing something wrong, or does echo.websocket.org not echo back empty payloads?

According to the spec for websockets protocol 13 (RFC 6455), the payload length for any given frame can be 0.
frame-payload-data ; n*8 bits in
; length, where
; n >= 0
I am building a websocket client to this spec, but when I send echo.websocket.org a frame with an empty payload, I get nothing back. I experience the same using their GUI.
This is troublesome for me, since the way I'm building my client somewhat requires me to send empty frames when I FIN a multi-frame message.
Is this merely a bug in the Echo Test server? Do a substantial number of server implementations drop frames with empty payloads?
And if this is a bug in Echo Test, does anyone know how I might get in touch with them? The KAAZING site only has tech support contact info for their own products.
If you send a data frame with no payload, there is nothing to echo back, so this behaviour is entirely correct. However, it would also be standard-conformant to send back a data frame with zero payload. The main question is whether the application layer is informed at all when a data frame with no payload is received; in most implementations it probably is not.
With TCP this is similar: a TCP keepalive is a segment with zero payload. It is ACKed by the remote TCP stack, but the application layer is not informed about it (i.e. select() does not return and a read() syscall remains blocked), which is the expected behaviour.
An application-layer protocol should not rely on frame boundaries to structure its data; it should expect a stream of bytes without regard to how they are transported.
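For reference, the empty final frame in question only takes a few bytes to build; here is a minimal sketch per RFC 6455 (client-to-server frames must carry a masking key even when the payload is empty):

```python
import os
import struct

# RFC 6455: a final (FIN=1) continuation frame with an empty payload,
# as sent by a client to close out a fragmented message.
def empty_final_frame():
    fin_and_opcode = 0x80 | 0x0   # FIN=1, opcode 0x0 (continuation)
    mask_and_len = 0x80 | 0       # MASK=1, payload length 0
    masking_key = os.urandom(4)   # required for client frames by the spec
    return struct.pack("!BB", fin_and_opcode, mask_and_len) + masking_key

frame = empty_final_frame()   # 6 bytes: 2 header bytes + 4-byte mask
```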
I just tried the echo test on websocket.org with empty payloads and it seems to work fine using Chrome, Safari and Firefox (latest versions of each). Which browser are you using?
Btw, that demo program doesn't abide by any "echo protocol" (afaik), so there's no formal specification that dictates what to do on empty data in a WebSocket set of frames.
If you need help using WebSocket, there are Kaazing forums: http://developer.kaazing.com/forums.

Need Help Understanding X11 Protocol Errors

I've just started building a minimal X server for Windows from scratch. I'm sure I'll run into all kinds of errors and glitches as I work the bugs out and learn more about the protocol.
Here's an example of an error I have seen printed by a client:
X Error of failed request: 0
Major opcode of failed request: 0 ()
Serial number of failed request: 0
Current serial number in output stream: 3
The major opcode meaning seems pretty obvious, but where are the "X Error" codes defined?
What are the serial numbers of the failed request and output stream? Are these supposed to match each other? By output stream, does that mean what was sent to the xserver or what was sent to the xclient? Is this related to sequence numbers?
grep the source...
in libX11, XlibInt.c, _XPrintDefaultError() you can find this error message.
Most of what's printed is from the error event, which is presumably sent by your server.
The current serial is dpy->request which is in Xlibint.h:
unsigned long request; /* sequence number of last request. */
i.e. the last X request that was sent. This may or may not be the same as the request causing the error. (event->serial is supposed to be the request that caused the error, but your server may not have gotten this right)
To write an X server I think you'll be digging into the source code a lot; the docs are not precise or thorough enough. Really, you may as well reuse some of the existing code, since the license is liberal enough.
Error codes are defined in the X Protocol specification chapter called Errors.
The other items in an error response are defined in the first chapter Protocol Formats. The actual values and layout of the error messages are found in the Errors section of the Protocol Encoding appendix.
From the contents of that message, though, it appears you're sending a response filled with zeros when the client isn't expecting one; most requests to the X server should not get a response back through the protocol unless they failed.
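The wire layout described in those chapters can be sketched as a small parser (a hypothetical helper, assuming a little-endian connection; the actual byte order is negotiated at connection setup):

```python
import struct

# Per the X11 protocol encoding appendix, every error message is
# exactly 32 bytes:
#   byte 0: 0 (identifies an Error), byte 1: error code,
#   bytes 2-3: sequence number of the failed request,
#   bytes 4-7: bad resource id / value, bytes 8-9: minor opcode,
#   byte 10: major opcode, bytes 11-31: unused padding.
def parse_x11_error(data):
    assert len(data) == 32 and data[0] == 0
    _, code, sequence, bad_value, minor, major = struct.unpack(
        "<BBHIHB21x", data)
    return {"code": code, "sequence": sequence,
            "bad_value": bad_value, "minor": minor, "major": major}

# Example: error code 2 (BadValue), sequence 3, on major opcode 1
msg = struct.pack("<BBHIHB21x", 0, 2, 3, 0xDEADBEEF, 0, 1)
parse_x11_error(msg)["code"]   # -> 2
```

An error report of all zeros, as in the question, decodes to error code 0 and sequence 0, which is why the client prints those values.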
