NATS does not raise an exception when disconnecting

I'm using an almost standard example of NATS with Python asyncio. I want to receive a message, process it, and send the result
back to the queue, but when NATS disconnects (e.g. when the gnatsd server is restarted), no exception is raised. I even added await asyncio.sleep(1, loop=loop) to switch contexts
so that the disconnect -> reconnect exception would be thrown, but this does not happen. What am I doing wrong? Could this be a bug?
import asyncio
import time

from nats.aio.client import Client as NATS

async def run(loop):
    nc = NATS()
    await nc.connect(io_loop=loop)

    async def message_handler(msg):
        subject = msg.subject
        reply = msg.reply
        data = msg.data.decode()
        print("Received a message on '{subject} {reply}': {data}".format(
            subject=subject, reply=reply, data=data))
        # Working
        time.sleep(10)
        # If NATS disconnects at this point, no exception is raised,
        # and nc.publish below still attempts to send the message
        await asyncio.sleep(2, loop=loop)
        print("UNSLEEP")
        await nc.publish("test", "test payload".encode())
        print("PUBLISHED")

    # Simple publisher and async subscriber via coroutine.
    await nc.subscribe("foo", cb=message_handler)
    while True:
        await asyncio.sleep(1, loop=loop)
    await nc.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run(loop))
    loop.close()

NATS is built on top of TCP.
TCP has no reliable disconnection signal by definition.
To solve this, any messaging system has to use some kind of ping message and drop the connection when a timeout occurs.
Strictly speaking you will get a disconnection event eventually, but it may take up to 2 hours (depending on your OS's TCP keepalive settings).
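Concretely, the asyncio NATS client lets you tighten this detection window and react to connection-state changes. A minimal sketch, assuming a nats-py client from the same era as the question (the exact keyword names can vary between versions, so check your client's connect() signature):

import asyncio
from nats.aio.client import Client as NATS

async def run(loop):
    nc = NATS()

    async def disconnected_cb():
        print("Lost connection to the NATS server")

    async def reconnected_cb():
        print("Reconnected to the NATS server")

    # A shorter ping interval means a dead TCP connection is noticed sooner:
    # after max_outstanding_pings unanswered PINGs, the client drops the
    # connection and fires disconnected_cb.
    await nc.connect(io_loop=loop,
                     ping_interval=5,
                     max_outstanding_pings=2,
                     disconnected_cb=disconnected_cb,
                     reconnected_cb=reconnected_cb)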

Related

Asyncio: FastAPI with aio-pika, consumer ignores await

I am trying to hook my websocket endpoint up to RabbitMQ (aio-pika). The goal is to have a listener in that endpoint and, on any new message from the queue, pass the message to the browser client over websockets.
I tested the consumer in a standalone script with an asyncio loop, and it works when I follow the aio-pika documentation (source: https://aio-pika.readthedocs.io/en/latest/rabbitmq-tutorial/2-work-queues.html, worker.py).
However, when I use it in a FastAPI websockets endpoint, I can't make it work. Somehow the listener:
await queue.consume(on_message)
is completely ignored.
This is my attempt (I put it all in one function so it's more readable):
import asyncio

from aio_pika import ExchangeType, IncomingMessage, connect
from fastapi import WebSocket, WebSocketDisconnect

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    print("Entering websockets")
    await manager.connect(websocket)
    print("got connection")

    # params
    queue_name = "task_events"
    routing_key = "user_id.task"
    con = "amqp://rabbitmq:rabbitmq@rabbit:5672/"

    connection = await connect(con)
    channel = await connection.channel()
    await channel.set_qos(prefetch_count=1)
    exchange = await channel.declare_exchange(
        "topic_logs",
        ExchangeType.TOPIC,
    )
    # Declaring queue
    queue = await channel.declare_queue(queue_name)
    # Binding the queue to the exchange
    await queue.bind(exchange, routing_key)

    async def on_message(message: IncomingMessage):
        async with message.process():
            # here the message would be passed over websockets to the browser client
            print("sent", message.body)

    try:
        ######### Not working as expected ###########
        # the await returns immediately and the endpoint finishes,
        # as there is no loop keeping it alive
        await queue.consume(on_message)
        #############################################

        ########## This alternative at least receives some messages ##########
        # With this part I at least get some messages when I trigger a backend
        # task that publishes new messages to the queue. The messages seem to
        # get stuck: a new task releases the previously stuck ones, but not
        # the newest one.
        while True:
            await queue.consume(on_message)
            await asyncio.sleep(1)
        #######################################################################
    except WebSocketDisconnect:
        manager.disconnect(websocket)
I am quite new to async in Python. I am not sure where the problem is, and I can't figure out how to implement an async consuming loop like the one in aio-pika's worker.py.
You could use an async iterator, which is the second canonical way to consume messages from a queue.
In your case, this means:
async with queue.iterator() as queue_iter:
    async for message in queue_iter:
        async with message.process():
            ...  # do something with the message
It suspends as long as no message is available, and after processing a message it waits for the next one.
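Applied to the endpoint from the question, it could look roughly like this (a sketch that reuses the question's manager and connection string; the exchange binding is omitted and error handling is kept minimal):

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
    connection = await connect("amqp://rabbitmq:rabbitmq@rabbit:5672/")
    channel = await connection.channel()
    queue = await channel.declare_queue("task_events")
    try:
        # The iterator keeps this coroutine alive: it suspends until a
        # message arrives, forwards it to the browser, then waits again.
        async with queue.iterator() as queue_iter:
            async for message in queue_iter:
                async with message.process():
                    await websocket.send_text(message.body.decode())
    except WebSocketDisconnect:
        manager.disconnect(websocket)
    finally:
        await connection.close()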
The solution was simple.
aio-pika's queue.consume is non-blocking even though we await it, so we consume like this:
consumer_tag = await queue.consume(on_message, no_ack=True)
and at the end of the connection we cancel it:
await queue.cancel(consumer_tag)
The core of the solution, for me, was to have something blocking in the asyncio sense, so I used this part of the code after the consume:
while True:
    data = await websocket.receive_text()
    x = await manager.send_message(data, websocket)
I don't actually use this code for anything else, but it is useful because it waits for websocket responses from the frontend. If this part is missing, the client connects only to get disconnected right away (the websocket endpoint finishes successfully), as there is nothing blocking.
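Put together, the whole endpoint might look like this (again a sketch under the question's assumptions about manager; consume() returns immediately, and the receive loop is what keeps the endpoint alive):

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
    connection = await connect("amqp://rabbitmq:rabbitmq@rabbit:5672/")
    channel = await connection.channel()
    queue = await channel.declare_queue("task_events")

    async def on_message(message):
        # Push queue messages out to the browser client.
        await websocket.send_text(message.body.decode())

    consumer_tag = await queue.consume(on_message, no_ack=True)
    try:
        while True:
            # Blocks (in the asyncio sense) until the frontend sends something.
            data = await websocket.receive_text()
            await manager.send_message(data, websocket)
    except WebSocketDisconnect:
        manager.disconnect(websocket)
    finally:
        await queue.cancel(consumer_tag)
        await connection.close()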

How to combine a callback-based library with an asyncio library in Python?

I have the following issue: I want to read keystrokes with the pynput library and send them over websockets. pynput proposes the following usage:
from pynput import keyboard

def on_press(key):
    try:
        print('alphanumeric key {0} pressed'.format(key.char))
    except AttributeError:
        print('special key {0} pressed'.format(key))

def on_release(key):
    print('{0} released'.format(key))
    if key == keyboard.Key.esc:
        # Stop listener
        return False

# Collect events until released
with keyboard.Listener(
        on_press=on_press,
        on_release=on_release) as listener:
    listener.join()

# ...or, in a non-blocking fashion:
listener = keyboard.Listener(
    on_press=on_press,
    on_release=on_release)
listener.start()
(taken from https://pynput.readthedocs.io/en/latest/keyboard.html)
In contrast to that, the websockets library is used as follows:
import asyncio
import websockets

async def hello():
    uri = "ws://localhost:8765"
    async with websockets.connect(uri) as websocket:
        name = input("What's your name? ")
        await websocket.send(name)
        print(f"> {name}")
        greeting = await websocket.recv()
        print(f"< {greeting}")

asyncio.get_event_loop().run_until_complete(hello())
(taken from https://websockets.readthedocs.io/en/stable/intro.html).
I am now struggling with how to combine the two, as the websockets library is asynchronous while pynput is synchronous. I somehow have to inject a websocket.send() into on_press/on_release, but so far without success.
Note that your pynput example contains two different variants of using pynput, from which you need to choose the latter because it is easier to connect to asyncio. The keyboard listener will allow the program to proceed with the asyncio event loop, while invoking the callbacks from a separate thread. Inside the callback functions you can use call_soon_threadsafe to communicate the key-presses to asyncio, e.g. using a queue. For example (untested):
import asyncio

import pynput.keyboard
import websockets

def transmit_keys():
    # Start a keyboard listener that transmits keypresses into an
    # asyncio queue, and immediately return the queue to the caller.
    queue = asyncio.Queue()
    loop = asyncio.get_event_loop()

    def on_press(key):
        # this callback is invoked from another thread, so we can't
        # just queue.put_nowait(key.char), we have to go through
        # call_soon_threadsafe
        loop.call_soon_threadsafe(queue.put_nowait, key.char)

    pynput.keyboard.Listener(on_press=on_press).start()
    return queue

async def main():
    key_queue = transmit_keys()
    async with websockets.connect("ws://localhost:8765") as websocket:
        while True:
            key = await key_queue.get()
            await websocket.send(f"key pressed: {key}")

asyncio.run(main())

When using asyncio.Queue() how do I cancel the gets?

I'm writing a client in asyncio and using q.get() to wait for responses from the server. When I receive a response from the server, I put it on the queue. If the server connection is lost, I will no longer be doing those puts and could have any number of await q.get() calls hanging around.
How should I cancel them? I noticed that even when I delete the queue, the awaited gets are still waiting.
Does this look like what you are trying to do? I think you have two options:
If you keep a count of outstanding gets, then when you are done with the queue you can just put(None) that many times.
Or, if None is a valid response, keep a list of the outstanding futures and call cancel on them yourself.
import asyncio

async def qget(q):
    try:
        x = await q.get()
        q.task_done()
        print("qget done ", x)
    except asyncio.CancelledError as e:
        print("qget cancel exception ", e)
    except Exception as e:
        print("qget exception ", e)

async def run():
    q = asyncio.Queue()
    futs = []
    futs.append(asyncio.ensure_future(qget(q)))
    futs.append(asyncio.ensure_future(qget(q)))
    num = 2
    await asyncio.sleep(0.1)
    # Keep the number of outstanding gets and put None for each one
    if 1:
        for x in range(num):
            q.put_nowait(None)
    # Or keep the futures in a list and cancel them
    if 0:
        for f in futs:
            f.cancel()
    await asyncio.sleep(1)
    print("run loop done")

asyncio.run(run())
If you look at the Python source for Queue, it does keep a list called _getters, but there is no public API for accessing it.
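A third option, not shown above, is to bound each get with asyncio.wait_for(), so a get abandoned by the server simply times out instead of hanging forever:

import asyncio

async def get_with_timeout(q: asyncio.Queue, timeout: float = 5.0):
    # Wait for an item, but give up after `timeout` seconds.
    try:
        return await asyncio.wait_for(q.get(), timeout)
    except asyncio.TimeoutError:
        return None  # the caller decides whether to retry or shut down

async def demo():
    q = asyncio.Queue()
    print(await get_with_timeout(q, timeout=0.1))  # prints None: nothing was put

asyncio.run(demo())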

Asyncio wait not running all coroutines

I have a speech-to-text client set up as follows. The client sends audio packets to the server, and the server returns the text results, which the client prints on stdout.
async def record_audio(websocket):
    # Audio recording parameters
    rate = 16000
    chunk = int(rate / 10)  # 100ms
    with BufferedMicrophoneStream(rate, chunk) as stream:
        audio_generator = stream.generator()  # the buffer is an asynchronous queue instance
        for message in audio_generator:
            await websocket.send(message)

async def collect_results(websocket):
    async for message in websocket:
        print(message)

async def combine():
    uri = "ws://localhost:8765"
    async with websockets.connect(uri) as websocket:
        await asyncio.wait([
            record_audio(websocket),
            collect_results(websocket)
        ])

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(combine())
    loop.close()
As you can see, both coroutines are infinite loops.
When I run the program, either of the awaited coroutines works on its own: if I pass only record_audio or only collect_results, I can confirm that they work individually, but not simultaneously.
However, if I put an asyncio.sleep(10) inside record_audio's loop, then I do see output from collect_results, and audio chunks are sent to the server in a burst every 10 seconds.
What gives?
Thanks.
Update #1:
I replaced the above code with the following, still to no avail:
async with websockets.connect(uri) as websocket:
    futures = [
        await loop.run_in_executor(executor, record_audio, websocket),
        await loop.run_in_executor(executor, collect_results, websocket)
    ]
    await asyncio.gather(*futures)
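For what it's worth, run_in_executor expects a plain blocking function, not a coroutine, and awaiting each call inside the list runs them one after another rather than concurrently. One plausible repair (a sketch only; BufferedMicrophoneStream and collect_results are assumed to behave as in the question) is to move just the blocking generator into a worker thread and feed an asyncio.Queue that an async sender drains:

import asyncio
import websockets

def pump_audio(loop, stream, queue):
    # Runs in a worker thread, so the blocking iteration over the
    # microphone generator no longer starves the event loop.
    for message in stream.generator():
        asyncio.run_coroutine_threadsafe(queue.put(message), loop).result()

async def send_audio(websocket, queue):
    while True:
        await websocket.send(await queue.get())

async def combine():
    loop = asyncio.get_event_loop()
    queue = asyncio.Queue()
    async with websockets.connect("ws://localhost:8765") as websocket:
        with BufferedMicrophoneStream(16000, 1600) as stream:
            loop.run_in_executor(None, pump_audio, loop, stream, queue)
            await asyncio.gather(
                send_audio(websocket, queue),
                collect_results(websocket),
            )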

pynng: how to setup, and keep using, multiple Contexts on a REP0 socket

I'm working on a "server" thread, which takes care of some IO calls for a bunch of "clients".
The communication is done using pynng v0.5.0; the server has its own asyncio loop.
Each client "registers" by sending a first request, and then loops receiving the results and sending back READY messages.
On the server, the goal is to treat the first message of each client as a registration request, and to create a dedicated worker task which will loop doing IO stuff, sending the result and waiting for the READY message of that particular client.
To implement this, I'm trying to leverage the Context feature of REP0 sockets.
Side notes
I would have liked to tag this question with nng and pynng, but I don't have enough reputation.
Although I'm an avid consumer of this site, it's my first question :)
I do know about the PUB/SUB pattern, let's just say that for self-instructional purposes, I chose not to use it for this service.
Problem:
After a few iterations, some READY messages are intercepted by the registration coroutine of the server, instead of being routed to the proper worker task.
Since I can't share the code, I wrote a reproducer for my issue and included it below.
Worse, as you can see in the output, some result messages are sent to the wrong client (ERROR:root:<Worker 1>: worker/client mismatch, exiting.).
It looks like a bug, but I'm not entirely sure I understand how to use the contexts correctly, so any help would be appreciated.
Environment:
winpython-3.8.2
pynng v0.5.0+dev (46fbbcb2), with nng v1.3.0 (ff99ee51)
Code:
import asyncio
import logging
import threading

import pynng

NNG_DURATION_INFINITE = -1
ENDPOINT = 'inproc://example_endpoint'

class Server(threading.Thread):
    def __init__(self):
        super(Server, self).__init__()
        self._client_tasks = dict()

    @staticmethod
    async def _worker(ctx, client_id):
        while True:
            # Remember, the first 'receive' has already been done by self._new_client_handler()
            logging.debug(f"<Worker {client_id}>: doing some IO")
            await asyncio.sleep(1)
            logging.debug(f"<Worker {client_id}>: sending the result")
            # I already tried sending synchronously here instead, just in case
            # the issue was related to that (but it's not)
            await ctx.asend(f"result data for client {client_id}".encode())
            logging.debug(f"<Worker {client_id}>: waiting for client READY msg")
            data = await ctx.arecv()
            logging.debug(f"<Worker {client_id}>: received '{data}'")
            if data != bytes([client_id]):
                logging.error(f"<Worker {client_id}>: worker/client mismatch, exiting.")
                return

    async def _new_client_handler(self):
        with pynng.Rep0(listen=ENDPOINT) as socket:
            max_workers = 3 + 1  # Try setting it to 3 instead, to stop creating new contexts => now it works fine
            while await asyncio.sleep(0, result=True) and len(self._client_tasks) < max_workers:
                # The issue is here: at some point, the existing client READY messages get
                # intercepted here, instead of being routed to the proper worker context.
                # The intent here was to open a new context only for each *new* client, I was
                # assuming that a 'recv' on older worker contexts would take precedence.
                ctx = socket.new_context()
                data = await ctx.arecv()
                client_id = data[0]
                if client_id in self._client_tasks:
                    logging.error(f"<Server>: We already have a task for client {client_id}")
                    continue  # just let the client block on its 'recv' for now
                logging.debug(f"<Server>: New client : {client_id}")
                self._client_tasks[client_id] = asyncio.create_task(self._worker(ctx, client_id))
            await asyncio.gather(*list(self._client_tasks.values()))

    def run(self) -> None:
        # The "server" thread has its own asyncio loop
        asyncio.run(self._new_client_handler(), debug=True)

class Client(threading.Thread):
    def __init__(self, client_id: int):
        super(Client, self).__init__()
        self._id = client_id

    def __repr__(self):
        return f'<Client {self._id}>'

    def run(self):
        with pynng.Req0(dial=ENDPOINT, resend_time=NNG_DURATION_INFINITE) as socket:
            while True:
                logging.debug(f"{self}: READY")
                socket.send(bytes([self._id]))
                data_str = socket.recv().decode()
                logging.debug(f"{self}: received '{data_str}'")
                if data_str != f"result data for client {self._id}":
                    logging.error(f"{self}: client/worker mismatch, exiting.")
                    return

def main():
    logging.basicConfig(level=logging.DEBUG)
    threads = [Server(),
               *[Client(i) for i in range(3)]]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == '__main__':
    main()
Output:
DEBUG:asyncio:Using proactor: IocpProactor
DEBUG:root:<Client 1>: READY
DEBUG:root:<Client 0>: READY
DEBUG:root:<Client 2>: READY
DEBUG:root:<Server>: New client : 1
DEBUG:root:<Worker 1>: doing some IO
DEBUG:root:<Server>: New client : 0
DEBUG:root:<Worker 0>: doing some IO
DEBUG:root:<Server>: New client : 2
DEBUG:root:<Worker 2>: doing some IO
DEBUG:root:<Worker 1>: sending the result
DEBUG:root:<Client 1>: received 'result data for client 1'
DEBUG:root:<Client 1>: READY
ERROR:root:<Server>: We already have a task for client 1
DEBUG:root:<Worker 1>: waiting for client READY msg
DEBUG:root:<Worker 0>: sending the result
DEBUG:root:<Client 0>: received 'result data for client 0'
DEBUG:root:<Client 0>: READY
DEBUG:root:<Worker 0>: waiting for client READY msg
DEBUG:root:<Worker 1>: received 'b'\x00''
ERROR:root:<Worker 1>: worker/client mismatch, exiting.
DEBUG:root:<Worker 2>: sending the result
DEBUG:root:<Client 2>: received 'result data for client 2'
DEBUG:root:<Client 2>: READY
DEBUG:root:<Worker 2>: waiting for client READY msg
ERROR:root:<Server>: We already have a task for client 2
Edit (2020-04-10): updated both pynng and the underlying nng.lib to their latest version (master branches), still the same issue.
After digging into the sources of both nng and pynng, and confirming my understanding with the maintainers, I can now answer my own question.
When using a context on a REP0 socket, there are a few things to be aware of.
As advertised, send/asend() is guaranteed to be routed to the same peer you last received from.
The data from the next recv/arecv() on this same context, however, is NOT guaranteed to be coming from the same peer.
Actually, the underlying nng call to rep0_ctx_recv() merely reads the next socket pipe with available data, so there's no guarantee that said data comes from the same peer as the last recv/send pair.
In the reproducer above, I was concurrently calling arecv() both on a new context (in the Server._new_client_handler() coroutine), and on each worker context (in the Server._worker() coroutine).
So what I had previously described as the next request being "intercepted" by the main coroutine was merely a race condition.
One solution would be to only receive from the Server._new_client_handler() coroutine, and have the workers only handle one request. Note that in this case, the workers are no longer dedicated to a particular peer. If this behavior is needed, the routing of incoming requests must be handled at application level.
import random

class Server(threading.Thread):
    @staticmethod
    async def _worker(ctx, data: bytes):
        client_id = int.from_bytes(data, byteorder='big', signed=False)
        logging.debug(f"<Worker {client_id}>: doing some IO")
        await asyncio.sleep(1 + 10 * random.random())
        logging.debug(f"<Worker {client_id}>: sending the result")
        await ctx.asend(f"result data for client {client_id}".encode())

    async def _new_client_handler(self):
        with pynng.Rep0(listen=ENDPOINT) as socket:
            while await asyncio.sleep(0, result=True):
                ctx = socket.new_context()
                data = await ctx.arecv()
                asyncio.create_task(self._worker(ctx, data))

    def run(self) -> None:
        # The "server" thread has its own asyncio loop
        asyncio.run(self._new_client_handler(), debug=False)
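If dedicated per-client workers are still needed, the application-level routing mentioned above can be sketched as follows (building on the reproducer's imports and ENDPOINT; it assumes a context can simply be closed once its reply has been sent): a single receive loop dispatches each (context, request) pair to a per-client queue, and each worker replies on whichever context its request arrived on.

class RoutingServer(threading.Thread):
    async def _worker(self, client_id, requests: asyncio.Queue):
        while True:
            ctx, data = await requests.get()
            logging.debug(f"<Worker {client_id}>: doing some IO")
            await asyncio.sleep(1)
            # Reply on the context this request arrived on, then discard it.
            await ctx.asend(f"result data for client {client_id}".encode())
            ctx.close()

    async def _receive_loop(self):
        queues = {}
        with pynng.Rep0(listen=ENDPOINT) as socket:
            while True:
                ctx = socket.new_context()
                data = await ctx.arecv()
                client_id = data[0]
                if client_id not in queues:
                    queues[client_id] = asyncio.Queue()
                    asyncio.create_task(self._worker(client_id, queues[client_id]))
                await queues[client_id].put((ctx, data))

    def run(self) -> None:
        asyncio.run(self._receive_loop())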
