How to check for a websocket bottleneck in a Python project

I currently have a script that connects to a server, opens a websocket connection, and receives high-frequency messages.
I am quite sure that the processing on my client end cannot keep up with the messages, so I fall behind after a short period of time.
My understanding is that messages are queued in both the server's send buffer and my client's receive buffer, and that if I do not process them quickly enough the buffers will eventually fill up and I will lose messages, causing an out-of-sequence issue. Is my assumption correct?
My question is: what is the best way (which tools) to trace possible bottlenecks and track down whether the issue is on the server or the client? I am working with Python in Visual Studio and, for now, have a single process running under PM2.
I am looking for advice on ways to trace low-level bottlenecks, even if it means using tools like Wireshark.
Thanks.
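(A minimal first check, before reaching for Wireshark: time the receive loop itself. If the wait for recv() is near zero while processing dominates, messages are already queued locally and the client is the bottleneck. ws and process() below are hypothetical stand-ins for the real connection and handler.)

import time

def timed_loop(ws, process):
    # log how long we wait for data vs. how long we spend handling it
    while True:
        t0 = time.monotonic()
        message = ws.recv()      # blocking receive (websocket-client style)
        t1 = time.monotonic()
        process(message)         # stand-in for the real per-message work
        t2 = time.monotonic()
        print("wait=%.6fs proc=%.6fs" % (t1 - t0, t2 - t1))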

My advice is to use gevent and gevent-websocket so that all the connections are async; then you can handle multiple connections asynchronously.
With gipc, you could additionally launch an instance per CPU core and load-balance between ports (a sketch of that idea follows the example below).
example:
from gevent import monkey, socket, Timeout, sleep
monkey.patch_all()

import sys
pyver = sys.version_info[0]
if pyver == 3:
    import signal
    from gevent import signal_handler as sig
else:
    from gevent import signal

import bottle
from bottle import route, request, response, abort
import ujson as json
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler
from geventwebsocket import WebSocketError
import traceback

@route('/ws/app')
def handle_websocket():
    ws = request.environ.get('wsgi.websocket')
    if not ws:
        abort(400, 'Expected WebSocket request.')
    while True:
        message = None
        try:
            # bounded receive so a silent connection cannot block this greenlet forever
            with Timeout(2, False):
                message = ws.receive()
            if message:
                message = json.loads(message)
                # process message, report back with ws.send()
        except WebSocketError:
            break  # client disconnected
        except Exception:
            traceback.print_exc()
            sleep(1)

if __name__ == '__main__':
    print(socket.gethostname())
    print('Started...')
    botapp = bottle.app()
    server = WSGIServer(("0.0.0.0", 80), botapp, handler_class=WebSocketHandler)

    def shutdown():
        print('Shutting down ...')
        server.stop(timeout=60)
        exit(signal.SIGTERM)

    if pyver == 3:
        sig(signal.SIGTERM, shutdown)
        sig(signal.SIGINT, shutdown)      # Ctrl-C
    else:
        signal(signal.SIGTERM, shutdown)
        signal(signal.SIGINT, shutdown)   # Ctrl-C

    server.serve_forever()
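A rough sketch of the per-core gipc idea (hypothetical: wsapp is assumed to be a module holding the @route handler above, base_port is arbitrary, and you would put something like nginx or haproxy in front to balance across the ports):

import multiprocessing
import bottle
import gipc  # pip install gipc
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler
import wsapp  # hypothetical module whose import registers the @route handler above

def serve(port):
    # each child process runs its own copy of the server on its own port
    server = WSGIServer(("0.0.0.0", port), bottle.app(),
                        handler_class=WebSocketHandler)
    server.serve_forever()

if __name__ == '__main__':
    base_port = 8000  # hypothetical base port
    procs = [gipc.start_process(target=serve, args=(base_port + i,))
             for i in range(multiprocessing.cpu_count())]
    for p in procs:
        p.join()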

Related

Inter-process communication between async and sync tasks using PyZMQ

In a single process I have a task running on a thread that produces values and broadcasts them, and several consumer async tasks that run concurrently in an asyncio loop.
I found an issue on PyZMQ's GitHub asking about async <-> sync communication with inproc sockets, which is what I also wanted, and the answer was to use .shadow(ctx.underlying) when creating the async ZMQ Context.
I prepared this example and it seems to be working fine:
import signal
import asyncio
import zmq
import threading
import zmq.asyncio
import sys
import time
import json

def producer(ctrl):
    # delay first push to give the asyncio loop time to start
    time.sleep(1)
    ctx = ctrl["ctx"]
    s = ctx.socket(zmq.PUB)
    s.bind(ctrl["endpoint"])
    v = 0
    while ctrl["run"]:
        payload = {"value": v, "timestamp": time.time()}
        msg = json.dumps(payload).encode("utf-8")
        s.send(msg)
        v += 1
        time.sleep(5)
    print("Bye")

def main():
    endpoint = "inproc://testendpoint"
    ctx = zmq.Context()
    actx = zmq.asyncio.Context.shadow(ctx.underlying)
    ctrl = {"run": True, "ctx": ctx, "endpoint": endpoint}
    th = threading.Thread(target=producer, args=(ctrl,))
    th.start()
    try:
        asyncio.run(amain(actx, endpoint))
    except KeyboardInterrupt:
        pass
    print("Stopping thread")
    ctrl["run"] = False
    th.join()

async def amain(ctx, endpoint):
    s = ctx.socket(zmq.SUB)
    s.subscribe("")
    s.connect(endpoint)
    loop = asyncio.get_running_loop()

    def stop():
        try:
            print("Closing zmq async socket")
            s.close()
        except:
            pass
        raise KeyboardInterrupt

    loop.add_signal_handler(signal.SIGINT, stop)
    while True:
        event = await s.poll(1000)
        if event & zmq.POLLIN:
            msg = await s.recv()
            payload = json.loads(msg.decode("utf-8"))
            print("%f: %d" % (payload["timestamp"], payload["value"]))

if __name__ == "__main__":
    sys.exit(main())
Is it safe to use inproc://* between a thread and asyncio task in this way? The 0MQ
context is thread safe and I'm not sharing sockets between the thread and the
asyncio task, so I would say in general that this is thread safe, right? Or am I
missing something that I should consider?
Q : "Is it safe to use inproc://* between a thread and asyncio task in this way?"
A : First and foremost, I might be awfully wrong (not only here), yet having worked with ZeroMQ since the native API 2.1.1+, I dare claim that unless the newer "improvements" have lost the core principles (the ZMTP/RFC-documented properties required for any legal implementation of the still-valid ZMTP arsenal), the answer shall be YES -- as long as the newer releases of the pyzmq binding have kept all the mandatory properties of the inproc:-Transport-Class without compromise.
Q : "The 0MQ context is thread safe and I'm not sharing sockets between the thread and the asyncio task, so I would say in general that this is thread safe, right?"
A : Here my troubles start -- ZeroMQ implementations have always been developed on Martin SUSTRIK's & Pieter HINTJENS' Zen-of-Zero, which includes Zero-sharing, so never sharing was the principle (though "shared" zmq.Context instances were no problem to use from different threads, to the contrary of zmq.Socket instances).
Python (since ever, and still valid in 2022-Q1) used and still uses the GIL-lock as a total [CONCURRENT]-code-execution avoider: the GIL re-[SERIAL]-ises the flow of code execution, which principally prevents any and all problems arising from [CONCURRENT]-code-execution from ever happening inside the GIL-locked flow. So even though the asyncio part is built as a pythonic (non-destructive) part of the ecosystem, your code shall never "meet" any kind of concurrency-related issue: unless a thread holds the GIL-lock, it does nothing but "hang, cracking NOP-s" (nut-cracking in an idle loop).
Being inside the same process, there seems to be no advantage in spawning another Context instance at all (it has been rock-solid certainty since ever never to increase overheads of any kind -- Zen-of-Zero means (almost) Zero-overhead...). Where performance or latency needs required it, the Sig/Msg core engine could be powered with more zmq.Context(io_threads) upon instantiation, yet these are zmq.Context-owned threads, not Python-GIL-governed/(b)locked ones, so performance scaled pretty well without wasting any RAM/HWM/buffer resources and without growing overheads -- and since the I/O threads are there for actual I/O work only, they are not needed for the (protocol-less) inproc:-Transport-Class at all.
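For illustration, the io_threads scaling mentioned above is just a constructor parameter (a sketch; it is not needed for the inproc: use-case, where the I/O threads are bypassed entirely):

import zmq

# io_threads only matters for networked transports (tcp:, ipc:, ...);
# inproc: endpoints never touch the I/O threads, so the default
# zmq.Context() with io_threads=1 is enough for the example above
ctx = zmq.Context(io_threads=4)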
Q :" Or am I missing something that I should consider? "
A :Mixing asyncio, O/S-signals ( that are well documented how they interact with native ZeroMQ API ) and other layers of complexity is for sure possible, yet it comes at a cost - it makes the use-case less and less readable and more and more prone to conceptual-gaps and similar hard to decode "errors".
I remember using Tkinter-mainloop() as a cost-wise very cheap and a super-stable framework for rapid-prototyping an MVC-{ M-odel, V-isual, C-ontroller }-parts of many-actors' indeed distributed-system applications in Python. There were Zerop-problems to use ZeroMQ with a single Context-instance, passing the references of the respective AccessNodes' into whatever amount of event-handlers, supposing we kept the ZeroMQ Zen-of-Zero, i.e. no to "share" (meaning no two parts "use" (compete to use) one and the same AccessPoint "one-over-another")
This all was designed-in, at "Zero-cost", by the ZeroMQ by-definition, so unless spoilt in some later phase, re-wrapping a re-wrapped native API, all this ought still work in 2022-Q1, ought it not?

Correct use of streamz with websocket

I am trying to figure out a correct way of processing streaming data using streamz. My streaming data is loaded using websocket-client, after which I do this:
from streamz import Stream
from websocket import create_connection  # websocket-client

# open a stream and push updates into the stream
stream = Stream()
# establish a connection
ws = create_connection("ws://localhost:8765")

# get continuous updates
from tornado import gen
from tornado.ioloop import IOLoop

async def f():
    while True:
        await gen.sleep(0.001)
        data = ws.recv()
        stream.emit(data)

IOLoop.current().add_callback(f)
While this works, I find that my stream is not able to keep pace with the incoming data (the data I see in the stream is several seconds behind the live feed, which is both high-volume and high-frequency). I tried setting gen.sleep(0.001) to a smaller value (removing it completely freezes JupyterLab), but the problem remains.
Is this a correct way of connecting streamz with streaming data using websocket?
I don't think websocket-client provides an async API, so it's blocking the event loop.
You should use an async websocket client, such as the one Tornado provides:
from tornado import gen
from tornado.websocket import websocket_connect

async def f():
    # websocket_connect returns a Future, so await it inside the coroutine
    ws = await websocket_connect("ws://localhost:8765")
    while True:
        data = await ws.read_message()
        if data is None:
            break
        else:
            await stream.emit(data)
            # considering you're receiving data from a localhost
            # socket, it will be really fast, and the `await`
            # statements above won't pause the while-loop for
            # enough time for the event loop to have a chance to
            # run other things.
            # Therefore, sleep for a short time to suspend the
            # while-loop.
            await gen.sleep(0.0001)
You don't need to sleep if you're receiving/sending data from/to a remote connection, which will be slow enough to suspend the while-loop at the await statements.
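If you would rather keep websocket-client, another option is to hand its blocking recv() to a worker thread so the event loop stays responsive -- a sketch, assuming the same ws and stream objects as in the question:

import asyncio

async def f():
    loop = asyncio.get_running_loop()
    while True:
        # run the blocking recv() in the default thread pool; the event
        # loop is free to run the rest of the pipeline in the meantime
        data = await loop.run_in_executor(None, ws.recv)
        stream.emit(data)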

Broadcasting message from grpc server to all/some connected clients in python

I am learning how to use gRPC streams to exchange messages between clients and a server in Python. I found a basic example that enables simple message exchange between server and client. I am trying to modify it so that I can keep track of all the clients connected to the gRPC server (on the server side) and can do two things: 1) broadcast from the server to all clients, and 2) send a message to a particular connected client.
Here is the .proto file
syntax = 'proto3';

service Scenario {
  rpc Chat(stream DPong) returns (stream DPong) {}
}

message DPong {
  string name = 1;
}
And here is client.py, which creates a daemon thread to listen for incoming messages and reads stdin for outgoing messages:
import threading
import grpc
import time
import queue
import scenario_pb2_grpc, scenario_pb2

# new changes
msgQueue = queue.Queue()

def run():
    channel = grpc.insecure_channel('localhost:50052')
    stub = scenario_pb2_grpc.ScenarioStub(channel)
    print('client connected')

    def inputStream():
        while 1:
            msg = input('>>Enter message\n>>')
            yield scenario_pb2.DPong(name=msg)

    input_stream = stub.Chat(inputStream())

    def read_incoming():
        while 1:
            print('receivedFromServer: {}\n>>'.format(next(input_stream).name))

    thread = threading.Thread(target=read_incoming)
    thread.daemon = True
    thread.start()
    while 1:
        time.sleep(1)

if __name__ == '__main__':
    print('client starting ...')
    run()
Below is the server.py
import random
import string
import threading
import grpc
import scenario_pb2_grpc
import scenario_pb2
import time
from concurrent import futures

clientList = []

class Scenario(scenario_pb2_grpc.ScenarioServicer):
    def Chat(self, request_iterator, context):
        clients = []

        def stream():
            while 1:
                time.sleep(1)
                msg = input('>>Enter message\n>>')
                for i in clientList:
                    yield msg

        output_stream = stream()

        def read_incoming():
            while 1:
                received = next(request_iterator).name
                if (context, request_iterator) not in clientList:
                    clientList.append((context, request_iterator))
                print('receivedFromClient: {}'.format(received), len(clientList))

        thread = threading.Thread(target=read_incoming)
        thread.daemon = True
        thread.start()
        while 1:
            msg = output_stream
            yield scenario_pb2.DPong(name=next(msg))

if __name__ == '__main__':
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    scenario_pb2_grpc.add_ScenarioServicer_to_server(Scenario(), server)
    server.add_insecure_port('[::]:50052')
    server.start()
    print('listening ...')
    while 1:
        time.sleep(1)
So far, I have tried to maintain a list object clientList that contains the context & request_iterator objects of each client and is updated every time a new client joins the server. But how do I use these objects from the clientList to send out an outgoing message? I have tried iterating over the list, but the server sends the message to the same client (the last client heard from) a number of times instead of sending it to all the clients once.
Any help is highly appreciated!
This is certainly possible. The problem you're running into here is that each call to Scenario.Chat on the server side corresponds to a single client connection. That is, this function is called when the streaming RPC starts, and as soon as the function exits, the RPC ends.
So if you want n connected clients, you'll need n instances of Scenario.Chat running concurrently, each on its own thread. This does mean that the number of concurrently connected clients is limited by the size of the thread pool with which you instantiate your server.
So, let's say you have n threads in your server process dedicated to maintaining client connections. Then you need another, (n+1)th thread (perhaps the main thread) determining when the server will broadcast a message to all clients (maybe by looking for input from STDIN?). When this extra thread determines that a message should be broadcast, it needs to communicate this intent to all of the threads maintaining connections to a client. There are many ways to make this happen: a threading.Condition and a global collections.deque, or a collections.deque per client connection (somewhat like channels between goroutines), would be two ways. The tricky bit is ensuring that each client connection receives the message regardless of how long the client-connection thread takes to wake up and how many messages the (n+1)th thread decides to send in the interim.
If this is still unclear, I can follow up with some actual code demonstrating the idea.
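A minimal sketch of that idea, using a blocking queue.Queue per connection instead of a collections.deque (broadcast() and the stdin loop are illustrative, not part of the original code):

import queue
import threading
from concurrent import futures
import grpc
import scenario_pb2
import scenario_pb2_grpc

class Scenario(scenario_pb2_grpc.ScenarioServicer):
    def __init__(self):
        self._lock = threading.Lock()
        self._clients = []  # one outgoing queue per connected client

    def broadcast(self, text):
        # called from the extra (n+1)th thread
        with self._lock:
            for q in self._clients:
                q.put(text)

    def Chat(self, request_iterator, context):
        q = queue.Queue()
        with self._lock:
            self._clients.append(q)
        try:
            # a real server would also drain request_iterator on another
            # thread; omitted to keep the sketch short
            while True:
                yield scenario_pb2.DPong(name=q.get())
        finally:
            with self._lock:
                self._clients.remove(q)

if __name__ == '__main__':
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    servicer = Scenario()
    scenario_pb2_grpc.add_ScenarioServicer_to_server(servicer, server)
    server.add_insecure_port('[::]:50052')
    server.start()
    while True:
        # the main thread acts as the (n+1)th thread deciding when to broadcast
        servicer.broadcast(input('>>broadcast: '))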
You can spin up multiple ports in one application: gRPC can run on port 50011 while Flask with Socket.IO runs on port 8080.
In Python, you can use the Flask framework and the flask_socketio library in your server.py.
e.g. server.py:
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/')
def index():
    return "Hello, World!"

if __name__ == '__main__':
    # socketio.run() wraps app.run(); one call starts both
    socketio.run(app, port=8080, debug=True)
Instead of using the gRPC streaming API, use WebSocket to broadcast to all connected clients, and rooms to reach specific/selected clients.
e.g.

@socketio.on('message')
def handle_message(data):
    # logic to send large data in chunks: it should call Socket.IO's
    # emit function and emit an event that sends the large data in
    # chunks, e.g. emit('my response', chunkData)
    pass

gRPC is primarily built for a single client's request and response; WebSocket is built for broadcasting to multiple clients.
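For the room-based targeting mentioned above, a sketch (the 'join' and 'room_message' event names are illustrative; socketio is the object from server.py above):

from flask_socketio import join_room, emit

@socketio.on('join')
def on_join(data):
    join_room(data['room'])  # client asks to be put in a named room

@socketio.on('room_message')
def handle_room_message(data):
    # reaches only the clients that joined this room;
    # broadcast=True would reach every connected client instead
    emit('my response', data['payload'], room=data['room'])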

Polling if I can PUSH or send in zmq?

Using 0mq, I am trying to detect whether I have made a successful connection to a PULL port and whether I can PUSH. However, it doesn't work as I expected; see the example code below. The poller returns immediately even if the remote peer hasn't started accepting connections. Is there a way to fix this?
import sys
import zmq

context = zmq.Context()
pusher = context.socket(zmq.PUSH)
pusher.connect("tcp://localhost:5555")

poller = zmq.Poller()
poller.register(pusher, zmq.POLLOUT)

socks = dict(poller.poll(timeout=1000))
if pusher in socks and socks[pusher] == zmq.POLLOUT:
    print("Pusher can push")
else:
    print("Failed to connect, exit.")
    sys.exit(1)
You are allowed to send as long as you haven't reached the High Water Mark (HWM) of the sending socket - the number of messages allowed to pile up on the sender side.
By default it is set to 1000, as far as I remember.
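A quick way to see this (a sketch: no peer is listening, yet sends succeed until roughly SNDHWM messages have piled up locally, which is also why POLLOUT reports the socket as writable):

import zmq

ctx = zmq.Context()
pusher = ctx.socket(zmq.PUSH)
pusher.setsockopt(zmq.SNDHWM, 10)       # small HWM so the effect is visible
pusher.connect("tcp://localhost:5555")  # nobody is listening yet

sent = 0
try:
    while True:
        pusher.send(b"x", zmq.NOBLOCK)  # raises zmq.Again once the pipe is full
        sent += 1
except zmq.Again:
    print("local queue filled after %d messages" % sent)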
/Søren

Why is gevent.sleep(0.1) necessary in this example to prevent the app from blocking?

I'm pulling my hair out over this one. I'm trying to get the simplest of examples working with ZeroMQ and gevent. I changed this script to use PUB/SUB sockets, and when I run it the 'server' socket loops forever. If I uncomment the gevent.sleep(0.1) line, it works as expected and yields to the other green thread, which in this case is the client.
The problem is: why should I have to manually add a sleep call? I thought that when I import the zmq.green version of zmq, the send and receive calls are non-blocking and do the task switching underneath.
In other words, why should I have to add the gevent.sleep() call to get this example working? In Jeff Lindsey's original example, he's using REQ/REP sockets and he doesn't need to add sleep calls... but when I changed this to PUB/SUB, I need it there for this to yield to the client for processing.
# Notes: code taken from Jeff Lindsey's talk on gevent and zeromq:
# https://raw.github.com/strangeloop/2011-slides/master/Lindsay-DistributedGeventZmq.pdf
import gevent
from gevent import spawn
import zmq.green as zmq

context = zmq.Context()

def serve():
    print 'server online'
    socket = context.socket(zmq.PUB)
    socket.bind("ipc:///tmp/jeff")
    while True:
        print 'send'
        socket.send("World")
        #gevent.sleep(0.1)

def client():
    print 'client online'
    socket = context.socket(zmq.SUB)
    socket.connect("ipc:///tmp/jeff")
    socket.setsockopt(zmq.SUBSCRIBE, '')
    while True:
        print 'recv'
        message = socket.recv()

cl = spawn(client)
server = spawn(serve)
print 'joinall'
gevent.joinall([cl, server])
print 'end'
"I thought when I import the zmq.green version of zmq that the send and receive calls are non blocking and underneath do the task switching."
zmq.green will only yield if these calls would block; it does not yield if they are ready (there's nothing to wait for). In your case the sender is always ready, so it never has a reason to yield.
Some pointers:

- A minimal explicit yield is gevent.sleep(0); it doesn't need to be finite.
- zmq.green only yields on blocking calls. That is, if a socket is always ready to send/recv when you ask it to, it will never yield.
- socket.send only blocks when the socket is not ready to send (not (socket.events & zmq.POLLOUT)), which can never actually be true of a PUB socket (you will see it at HWM for PUSH, DEALER, etc.).
- In general, don't trust send to yield: because of the way zeromq works, this will rarely be the case unless you are exceeding the capacity of your configuration.
- Unlike send, recv regularly blocks in normal usage, so it yields on most calls. But if a peer is flooding your incoming buffer, repeated recv calls will not yield until there is nothing ready to receive, so you may again need to explicitly yield every so often to prevent starvation.
What zmq.green amounts to is turning send/recv into:

try:
    socket.send(msg, zmq.NOBLOCK)  # or recv
except zmq.ZMQError as e:
    if e.errno == zmq.EAGAIN:
        yield  # and wait for socket to be ready, then try again

so if send/recv with NOBLOCK always succeeds, the socket never yields.
To put it another way: If a socket has nothing to wait for, it won't wait.
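So the practical fix for the example above is an explicit zero-length yield in the hot send loop -- a sketch reusing the context from the question (Python 2 syntax, to match):

def serve():
    print 'server online'
    socket = context.socket(zmq.PUB)
    socket.bind("ipc:///tmp/jeff")
    while True:
        socket.send("World")
        gevent.sleep(0)  # zero-length yield: lets the client greenlet run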
