I'm trying to send data from the client to the server, but I get a bad namespace error.
I searched for the error but didn't find any solution.
Can anyone point out what mistake I've made?
server side (app.py)
from flask import Flask, render_template, request, jsonify
from flask_socketio import SocketIO

app = Flask(__name__)
socket_app = SocketIO(app)

@socket_app.on('userdata', namespace='/test')
def username(data):
    print(data)
    print(request.sid)

if __name__ == '__main__':
    socket_app.run(app, debug=True, host='127.0.0.1', port=5000)
client side (client.py)
import socketio
sio = socketio.Client(engineio_logger=True)
data = {}
data['name'] = 'Mark'
sio.connect('http://127.0.0.1:5000')
sio.emit('userdata', data, namespace='/test')
sio.wait()
error
Attempting polling connection to http://127.0.0.1:5000/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': '1ipsM7udDVMj0WdKAAAA', 'upgrades': ['websocket'], 'pingTimeout': 5000, 'pingInterval': 25000}
Sending packet MESSAGE data 0
Attempting WebSocket upgrade to ws://127.0.0.1:5000/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
Traceback (most recent call last):
File "client.py", line 41, in <module>
sio.emit('userdata', data , namespace='/test')
File "/home/amit/anaconda3/envs/ANPR/lib/python3.6/site-packages/socketio/client.py", line 329, in emit
namespace + ' is not a connected namespace.')
socketio.exceptions.BadNamespaceError: /test is not a connected namespace.
Received packet NOOP data
flask & socketio versions
python-socketio==5.0.4 python-engineio==4.0.0 Flask==1.0.2
Flask-SocketIO==5.0.1
It looks like the OP figured out their own issue, but when connecting to a non-default namespace you can specify it in the client connection:
sio.connect('http://127.0.0.1:5000', namespaces=['/test'])
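Note that the server also has to know the /test namespace for this connection to be accepted, which means registering at least one handler on it. A minimal sketch of both sides (the handler name here is illustrative, not from the original post):

# server: any handler registered on /test makes the namespace known
@socket_app.on('connect', namespace='/test')
def on_test_connect():
    print('client connected to /test')

# client: connect to the namespace explicitly, then emit on it
sio.connect('http://127.0.0.1:5000', namespaces=['/test'])
sio.emit('userdata', {'name': 'Mark'}, namespace='/test')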
I got this working by having the connect handler invoked on the server.
from flask_socketio import emit

@socket_app.on('connect', namespace='/test')
def test_connect():
    print("connected")
    emit('connect_custom', {'data': 'Connected'})
which then emits to the client's connect_custom handler:
@sio.on('connect_custom', namespace='/test')
def on_connect(data):
    print('client contacted by server')
    print(data)
    data = {}
    data['name'] = 'Mark'
    sio.emit('userdata', data, namespace='/test')

sio.connect('http://127.0.0.1:5000')
Related
I would like to create around 10,000 clients and use them to send and receive messages from a Flask-SocketIO server. I am using the default Flask Werkzeug development web server.
This is the app.py
from flask import Flask, render_template
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)

@socketio.on('message')
def handle_message(message):
    print(message)

if __name__ == '__main__':
    socketio.run(app, debug=True)
This is the test_client.py
import socketio
from multiprocessing.pool import ThreadPool

# standard Python
sio = socketio.Client()

def f(thread):
    server = 'http://127.0.0.1:5000/' + str(thread)
    s = f'/{thread}'
    sio.connect(server)
    sio.emit('message', {'Hello': thread}, namespace=s)

threads = 5
t = ThreadPool(threads)
t.map(f, range(0, threads))
Current test_client.py's Terminal Output:
raise exceptions.BadNamespaceError(
socketio.exceptions.BadNamespaceError: /2 is not a connected namespace.
Expected app.py's Terminal Output:
Hello: 0
Hello: 1
Hello: 2
Hello: 3
Hello: 4
If there is a different or better way of doing this, please send me the doc link to check.
There are a couple of problems with your design.
In the client, you are using five different namespaces to connect to the server, /0 to /4. But on the server you have not defined any of these; your server operates under the default namespace /. So all these client connections are going to fail, because the server does not accept connections on unknown namespaces. For a namespace to be known, you have to implement at least one handler for it.
It is impractical to have to define handlers for that many namespaces, though. Namespaces are not a good feature to use in this case, because they cannot be added dynamically.
So the first change is to have your clients connect using the default namespace.
The second problem is that you are using a single client instance in all your threads. This is not how the client works; a client instance can only hold one connection. The solution is to create a client instance inside each thread.
The changes for the above two problems are all in the client. Here is how I changed your client to work:
import socketio
from multiprocessing.pool import ThreadPool

def f(thread):
    sio = socketio.Client()
    server = 'http://127.0.0.1:5000'
    sio.connect(server)
    sio.emit('message', {'Hello': thread})

threads = 5
t = ThreadPool(threads)
t.map(f, range(0, threads))
Output in the server:
{'Hello': 3}
{'Hello': 0}
{'Hello': 1}
{'Hello': 2}
{'Hello': 4}
When using asyncio for receiving multiple files over a TCP socket, I struggle with the call order of data_received. When sending 3 data streams at once, my output is the following:
DEBUG connection_made ('127.0.0.1', 33972) new connection was made
DEBUG connection_made ('127.0.0.1', 33974) new connection was made
DEBUG connection_made ('127.0.0.1', 33976) new connection was made
DEBUG data_received ('127.0.0.1', 33976) data received from new connection
DEBUG data_received ('127.0.0.1', 33974) data received from new connection
DEBUG data_received ('127.0.0.1', 33972) data received from new connection
I assume that its behaviour is analogous to a stack, where data is received from the newest to the oldest connection made, but this is only a guess.
Is it possible to change that behaviour so that the data is received in the order that the connections were made? This is important because I need the data received from the first connection for further processing of the following connections.
My code is the following:
import asyncio
import logging
import time

logger = logging.getLogger(__name__)

class AIO(asyncio.Protocol):
    def __init__(self):
        self.extra = bytearray()

    def connection_made(self, transport):
        global request_time
        peer = transport.get_extra_info('peername')
        logger.debug('{} new connection was made'.format(peer))
        self.transport = transport
        request_time = str(time.time())

    def data_received(self, request, msgid=1):
        peer = self.transport.get_extra_info('peername')
        logger.debug('{} data received from new connection'.format(peer))
        self.extra.extend(request)
First, I would recommend using the higher-level streams API instead of the transport/protocols. Then if you need to maintain the order observed when connections were made, you can enforce it yourself using a series of asyncio.Events. For example:
import asyncio

class Comm:
    def __init__(self):
        self.extra = bytearray()
        self.preceding_evt = None

    async def serve(self, r, w):
        preceding_evt = self.preceding_evt
        self.preceding_evt = this_evt = asyncio.Event()
        while True:
            request = await r.read(4096)
            if not request:
                break
            # wait for the preceding connection to receive and store data
            # before we store ours
            if preceding_evt is not None:
                await preceding_evt.wait()
                preceding_evt.clear()
            # self.extra now contains data from previous connections
            self.extra.extend(request)
            # inform the subsequent connection that we got data
            this_evt.set()
        w.close()

async def main():
    comm = Comm()
    server = await asyncio.start_server(comm.serve, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())
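To see the ordering enforced, here is a small test client sketch (not from the original answer; host and port simply match the server above). It opens three connections and sends one payload on each; the Event chain in Comm.serve then appends the payloads to extra in the order the connections were accepted:

import asyncio

async def send(i):
    # open a connection and send a small payload on it
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
    writer.write('payload from connection {}\n'.format(i).encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def run_clients():
    # start the connections; the server stores their data in the order
    # the connections were made
    await asyncio.gather(*(send(i) for i in range(3)))

asyncio.run(run_clients())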
Good evening everyone. I'm not quite new to this place, but I finally decided to register and ask for help. I am developing a web application using the Quart framework (asynchronous Flask). Now that the application has become bigger and more complex, I decided to separate different procedures onto different server instances, mostly because I want to keep the web server clean, more abstract and free of computational load.
So I plan to use one web server with a few (if needed) identical procedure servers. All servers are based on the Quart framework, for now just for simplicity of development. I decided to use the Crossbar.io router and autobahn to connect all the servers together.
And here the problem occurred.
I followed these posts:
Running several ApplicationSessions non-blockingly using autbahn.asyncio.wamp
How can I implement an interactive websocket client with autobahn asyncio?
How I can integrate crossbar client (python3,asyncio) with tkinter
How to send Autobahn/Twisted WAMP message from outside of protocol?
It seems like I have tried every possible approach to implement an autobahn websocket client in my Quart application. I can't get both things working at the same time: either the Quart app works but the autobahn WS client does not, or vice versa.
Simplified, my Quart app looks like this:
from quart import Quart, request, current_app
from config import Config

# Autobahn
import asyncio
from autobahn import wamp
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner
import concurrent.futures

class Component(ApplicationSession):
    """
    An application component registering RPC endpoints using decorators.
    """
    async def onJoin(self, details):
        # register all methods on this object decorated with "@wamp.register"
        # as a RPC endpoint
        ##
        results = await self.register(self)
        for res in results:
            if isinstance(res, wamp.protocol.Registration):
                # res is a Registration instance
                print("Ok, registered procedure with registration ID {}".format(res.id))
            else:
                # res is a Failure instance
                print("Failed to register procedure: {}".format(res))

    @wamp.register(u'com.mathservice.add2')
    def add2(self, x, y):
        return x + y

def create_app(config_class=Config):
    app = Quart(__name__)
    app.config.from_object(config_class)

    # Blueprint registration
    from app.main import bp as main_bp
    app.register_blueprint(main_bp)

    print("before autobahn start")
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        runner = ApplicationRunner('ws://127.0.0.1:8080/ws', 'realm1')
        future = executor.submit(runner.run(Component))
    print("after autobahn started")

    return app

from app import models
In this case the application gets stuck in the runner loop and the whole application does not work (cannot serve requests); it only starts serving if I interrupt the runner's (autobahn) loop with Ctrl-C.
CMD after start:
(quart-app) user#car:~/quart-app$ hypercorn --debug --error-log - --access-log - -b 0.0.0.0:8001 tengine:app
Running on 0.0.0.0:8001 over http (CTRL + C to quit)
before autobahn start
Ok, registered procedure with registration ID 4605315769796303
after pressing ctrl-C:
...
^Cafter autobahn started
2019-03-29T01:06:52 <Server sockets=[<socket.socket fd=11, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 8001)>]> is serving
How can I make the Quart application and the autobahn client work together in a non-blocking fashion, so that autobahn opens and keeps a websocket connection to the Crossbar router and silently listens in the background?
Well, after many sleepless nights I finally found a good approach to solve this conundrum.
Thanks to this post C-Python asyncio: running discord.py in a thread
So I rewrote my code like this and was able to run my Quart app with the autobahn client inside, with both actively working in a non-blocking fashion.
The whole __init__.py looks like:
from quart import Quart, request, current_app
from config import Config

def create_app(config_class=Config):
    app = Quart(__name__)
    app.config.from_object(config_class)

    # Blueprint registration
    from app.main import bp as main_bp
    app.register_blueprint(main_bp)

    return app

# Autobahn
import asyncio
from autobahn import wamp
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner
import threading

class Component(ApplicationSession):
    """
    An application component registering RPC endpoints using decorators.
    """
    async def onJoin(self, details):
        # register all methods on this object decorated with "@wamp.register"
        # as a RPC endpoint
        ##
        results = await self.register(self)
        for res in results:
            if isinstance(res, wamp.protocol.Registration):
                # res is a Registration instance
                print("Ok, registered procedure with registration ID {}".format(res.id))
            else:
                # res is a Failure instance
                print("Failed to register procedure: {}".format(res))

    def onDisconnect(self):
        print('Autobahn disconnected')

    @wamp.register(u'com.mathservice.add2')
    def add2(self, x, y):
        return x + y

async def start():
    runner = ApplicationRunner('ws://127.0.0.1:8080/ws', 'realm1')
    await runner.run(Component)  # use client.start instead of client.run

def run_it_forever(loop):
    loop.run_forever()

asyncio.get_child_watcher()  # I still don't know if I need this method. It works without it.

loop = asyncio.get_event_loop()
loop.create_task(start())

print('Starting thread for Autobahn...')
thread = threading.Thread(target=run_it_forever, args=(loop,))
thread.start()
print("Thread for Autobahn has been started...")

from app import models
With this scenario we create a task with autobahn's runner.run, attach it to the current loop, and then run this loop forever in a new thread.
I was quite satisfied with this solution... but then I found out that it has some drawbacks that were crucial for me, for example reconnecting if the connection is dropped (i.e. the Crossbar router becomes unavailable). With this approach, if the connection fails to initialize or drops after a while, it will not try to reconnect. Additionally, it wasn't obvious to me how to use the ApplicationSession API, i.e. how to register/call RPCs from the code in my Quart app.
Luckily I spotted the newer Component API that autobahn uses in their documentation:
https://autobahn.readthedocs.io/en/latest/wamp/programming.html#registering-procedures
https://github.com/crossbario/autobahn-python/blob/master/examples/asyncio/wamp/component/backend.py
It has an auto-reconnect feature, and it's easy to register functions for RPC using decorators like @component.register('com.something.do'); you just need to import the component first.
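For example, a minimal sketch of registering a procedure this way (the module layout and reuse of the com.mathservice.add2 name are illustrative, not taken from my actual app):

# somewhere that can import the component defined in __init__.py
from app import component

@component.register('com.mathservice.add2')
def add2(x, y):
    return x + y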
So here is the final view of __init__.py solution:
from quart import Quart, request, current_app
from config import Config

def create_app(config_class=Config):
    ...
    return app

from autobahn.asyncio.component import Component, run
from autobahn.wamp.types import RegisterOptions
import asyncio
import ssl
import threading

component = Component(
    transports=[
        {
            "type": "websocket",
            "url": u"ws://localhost:8080/ws",
            "endpoint": {
                "type": "tcp",
                "host": "localhost",
                "port": 8080,
            },
            "options": {
                "open_handshake_timeout": 100,
            }
        },
    ],
    realm=u"realm1",
)

@component.on_join
def join(session, details):
    print("joined {}".format(details))

async def start():
    await component.start()  # used component.start() instead of run([component]) as it's an async function

def run_it_forever(loop):
    loop.run_forever()

loop = asyncio.get_event_loop()
# asyncio.get_child_watcher()  # I still don't know if I need this method. It works without it.
asyncio.get_child_watcher().attach_loop(loop)
loop.create_task(start())

print('Starting thread for Autobahn...')
thread = threading.Thread(target=run_it_forever, args=(loop,))
thread.start()
print("Thread for Autobahn has been started...")

from app import models
I hope it will help somebody. Cheers!
I have some periodic tasks which I execute with Celery (they parse pages).
I have also established a websocket with Tornado.
I want to pass data from the periodic tasks to Tornado, then write this data to the websocket and use it on my HTML page.
How can I do this?
I tried to import the module with the Tornado websocket from my module with the Celery tasks, but of course that didn't work.
I only know how to return some data if I get a message from the client side. Here is how I handle it:
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import socket

'''
This is a simple Websocket Echo server that uses the Tornado websocket handler.
Please run `pip install tornado` with python of version 2.7.9 or greater to install tornado.
This program will echo back the reverse of whatever it receives.
Messages are output to the terminal for debugging purposes.
'''

class handler():
    wss = []

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('new connection')
        if self not in handler.wss:
            handler.wss.append(self)

    def on_message(self, message):
        print('message received: ' + message)
        wssend('Ihaaaa')

    def on_close(self):
        print('connection closed')
        if self in handler.wss:
            handler.wss.remove(self)

    def check_origin(self, origin):
        return True

def wssend(message):
    print(handler.wss)
    for ws in handler.wss:
        if not ws.ws_connection.stream.socket:
            print("Web socket does not exist anymore!!!")
            handler.wss.remove(ws)
        else:
            print('I am trying!')
            ws.write_message(message)
            print('tried')

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    myIP = socket.gethostbyname(socket.gethostname())
    print('*** Websocket Server Started at %s***' % myIP)
    main_loop = tornado.ioloop.IOLoop.instance()
    main_loop.start()
One option is to add an HTTP handler in Tornado and then POST the results of the Celery task to that handler.
From there, the data can be passed on to the websocket.
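A minimal sketch of that idea, assuming the route name /celery-result and the use of requests in the task (neither is part of the original code):

import json
import requests
import tornado.web

# Tornado side: an HTTP handler the Celery task can POST results to,
# which then forwards them to all connected websocket clients.
class CeleryResultHandler(tornado.web.RequestHandler):
    def post(self):
        data = json.loads(self.request.body)
        wssend(json.dumps(data))  # reuse the wssend() helper from above
        self.write('ok')

application = tornado.web.Application([
    (r'/ws', WSHandler),
    (r'/celery-result', CeleryResultHandler),  # assumed route name
])

# Celery side: at the end of the periodic task, post the result over HTTP.
def report_result(result):
    requests.post('http://127.0.0.1:8888/celery-result', json=result)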
This is inspired by ipython-notebook-proxy and based on ipydra, extending the latter to support more complex user authentication as well as a proxy, because in my use case only port 80 can be exposed.
I am using flask-sockets for the gunicorn worker, but I am having trouble proxying WebSockets. IPython uses three different WebSocket connections, /shell, /stdin, and /iopub, but I am only able to get 101 Switching Protocols for the first two. And /stdin receives a Connection Close frame as soon as it is created.
This is the code excerpt in question:
# Flask imports...
from werkzeug import LocalProxy
from ws4py.client.geventclient import WebSocketClient

# I use my own LocalProxy because flask-sockets does not support Werkzeug Rules
websocket = LocalProxy(lambda: request.environ.get('wsgi.websocket', None))
websockets = {}

PROXY_DOMAIN = "127.0.0.1:8888"  # IPython host and port

methods = ["GET", "POST", "PUT", "DELETE", "HEAD", "OPTIONS", "PATCH",
           "CONNECT"]

@app.route('/', defaults={'url': ''}, methods=methods)
@app.route('/<path:url>', methods=methods)
def proxy(url):
    with app.test_request_context():
        if websocket:
            while True:
                data = websocket.receive()
                websocket_url = 'ws://{}/{}'.format(PROXY_DOMAIN, url)
                if websocket_url not in websockets:
                    client = WebSocketClient(websocket_url,
                                             protocols=['http-only', 'chat'])
                    websockets[websocket_url] = client
                else:
                    client = websockets[websocket_url]
                client.connect()
                if data:
                    client.send(data)
                client_data = client.receive()
                if client_data:
                    websocket.send(client_data)
        return Response()
I also tried to create my own WebSocket proxy class, but it doesn't work either.
class WebSocketProxy(WebSocketClient):
    def __init__(self, to, *args, **kwargs):
        self.to = to
        print(("Proxy to", self.to))
        super(WebSocketProxy, self).__init__(*args, **kwargs)

    def opened(self):
        m = self.to.receive()
        print("<= %d %s" % (len(m), str(m)))
        self.send(m)

    def closed(self, code, reason):
        print(("Closed down", code, reason))

    def received_message(self, m):
        print("=> %d %s" % (len(m), str(m)))
        self.to.send(m)
The regular request-response cycle works like a charm, so I removed that code. If you are interested, the complete code is hosted in hidra.
I run the server with
$ gunicorn -k flask_sockets.worker hidra:app
Here is my solution(ish). It is crude, but it should serve as a starting point for building a websocket proxy. The full code is available in an unreleased project, pyramid_notebook.
- This uses ws4py and uWSGI instead of gunicorn.
- We use uWSGI's internal mechanism to receive downstream websocket messages in a loop. There is nothing like WSGI for websockets in the Python world (yet?), but it looks like every web server implements its own mechanism.
- A custom ws4py ProxyClient is created which can combine the ws4py event loop with the uWSGI event loop.
- The thing is started and messages start to fly around.
- This uses a Pyramid request (based on WebOb), but that really shouldn't matter, and the code should be fine for any Python WSGI app with little modification.
- As you can see, this does not really take advantage of asynchronicity, but just sleep()s if there is nothing coming in from the socket.
Code goes here:
"""UWSGI websocket proxy."""
from urllib.parse import urlparse, urlunparse
import logging
import time
import uwsgi
from ws4py import WS_VERSION
from ws4py.client import WebSocketBaseClient
#: HTTP headers we need to proxy to upstream websocket server when the Connect: upgrade is performed
CAPTURE_CONNECT_HEADERS = ["sec-websocket-extensions", "sec-websocket-key", "origin"]
logger = logging.getLogger(__name__)
class ProxyClient(WebSocketBaseClient):
"""Proxy between upstream WebSocket server and downstream UWSGI."""
#property
def handshake_headers(self):
"""
List of headers appropriate for the upgrade
handshake.
"""
headers = [
('Host', self.host),
('Connection', 'Upgrade'),
('Upgrade', 'websocket'),
('Sec-WebSocket-Key', self.key.decode('utf-8')),
# Origin is proxyed from the downstream server, don't set it twice
# ('Origin', self.url),
('Sec-WebSocket-Version', str(max(WS_VERSION)))
]
if self.protocols:
headers.append(('Sec-WebSocket-Protocol', ','.join(self.protocols)))
if self.extra_headers:
headers.extend(self.extra_headers)
logger.info("Handshake headers: %s", headers)
return headers
def received_message(self, m):
"""Push upstream messages to downstream."""
# TODO: No support for binary messages
m = str(m)
logger.debug("Incoming upstream WS: %s", m)
uwsgi.websocket_send(m)
logger.debug("Send ok")
def handshake_ok(self):
"""
Called when the upgrade handshake has completed
successfully.
Starts the client's thread.
"""
self.run()
def terminate(self):
raise RuntimeError("NO!")
super(ProxyClient, self).terminate()
def run(self):
"""Combine async uwsgi message loop with ws4py message loop.
TODO: This could do some serious optimizations and behave asynchronously correct instead of just sleep().
"""
self.sock.setblocking(False)
try:
while not self.terminated:
logger.debug("Doing nothing")
time.sleep(0.050)
logger.debug("Asking for downstream msg")
msg = uwsgi.websocket_recv_nb()
if msg:
logger.debug("Incoming downstream WS: %s", msg)
self.send(msg)
s = self.stream
self.opened()
logger.debug("Asking for upstream msg")
try:
bytes = self.sock.recv(self.reading_buffer_size)
if bytes:
self.process(bytes)
except BlockingIOError:
pass
except Exception as e:
logger.exception(e)
finally:
logger.info("Terminating WS proxy loop")
self.terminate()
def serve_websocket(request, port):
"""Start UWSGI websocket loop and proxy."""
env = request.environ
# Send HTTP response 101 Switch Protocol downstream
uwsgi.websocket_handshake(env['HTTP_SEC_WEBSOCKET_KEY'], env.get('HTTP_ORIGIN', ''))
# Map the websocket URL to the upstream localhost:4000x Notebook instance
parts = urlparse(request.url)
parts = parts._replace(scheme="ws", netloc="localhost:{}".format(port))
url = urlunparse(parts)
# Proxy initial connection headers
headers = [(header, value) for header, value in request.headers.items() if header.lower() in CAPTURE_CONNECT_HEADERS]
logger.info("Connecting to upstream websockets: %s, headers: %s", url, headers)
ws = ProxyClient(url, headers=headers)
ws.connect()
# Happens only if exceptions fly around
return ""