Autobahn asyncio client arguments - websocket

Disclaimer: This is my first time working with WS and MQTT, so the structure may be wrong. Please point this out.
I am using Autobahn with asyncio to receive and send messages to a HA (Home Assistant) instance through websockets.
Once my Python code receives messages, I want to forward them over MQTT to the AWS IoT service. This communication needs to work both ways.
I have made this work as a script where everything is floating within a single file.
I am trying to make this work in a class structure, which is how my final work will be organized.
To do that, I need my WebSocketClientProtocol to have access to AWSIoTClient.publish and .subscribe. However, since WebSocketClientProtocol initialization is done through a factory, I am not sure how to pass any arguments to it. For instance:
if __name__ == "__main__":
    aws_iot_client = AWSIoTClient(...)
    factory = WebSocketClientFactory('ws://localhost:8123/api/websocket')
    factory.protocol = HomeAssistantProtocol
How can I pass aws_iot_client to HomeAssistantProtocol?
I have found examples of Autobahn - Twisted that do this using self.factory on the WebSocketClientProtocol subclass, but this is not available for asyncio.

I found that calling run_until_complete on the create_connection coroutine returns the transport and protocol instances, so I can then pass the AWS client to the protocol.
loop = asyncio.get_event_loop()
coro = loop.create_connection(factory, '127.0.0.1', 9000)
transport, protocol = loop.run_until_complete(coro)
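For illustration, here is a minimal sketch of that approach. The AWSIoTClient call and the 'ha/events' topic are placeholders from the question, not a real API:

import asyncio
from autobahn.asyncio.websocket import WebSocketClientFactory, WebSocketClientProtocol

class HomeAssistantProtocol(WebSocketClientProtocol):
    aws_iot_client = None  # injected after the connection is created

    def onMessage(self, payload, isBinary):
        # Forward every HA event to AWS IoT once the client has been attached.
        if self.aws_iot_client is not None:
            self.aws_iot_client.publish('ha/events', payload)

if __name__ == "__main__":
    aws_iot_client = AWSIoTClient(...)  # hypothetical wrapper from the question
    factory = WebSocketClientFactory('ws://localhost:8123/api/websocket')
    factory.protocol = HomeAssistantProtocol
    loop = asyncio.get_event_loop()
    coro = loop.create_connection(factory, 'localhost', 8123)
    transport, protocol = loop.run_until_complete(coro)
    protocol.aws_iot_client = aws_iot_client  # hand the AWS client to the protocol
    loop.run_forever()

The key point is that the protocol instance returned by run_until_complete is a plain Python object, so attaching the AWS client to it is enough for the protocol's callbacks to reach .publish and .subscribe.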

Related

How do I expose my own Elixir websocket using WebSockex

I see the basic example for Elixir's WebSockex library here but it doesn't really explain how I can expose my own websocket to the internet. This question has answers which explain how to chat to an existing websocket externally, but I want to expose my own websocket. I'm actually using websockex as part of a Phoenix application, so perhaps bits of Phoenix might help here?
I obviously know the ip:port combo of my Phoenix application, so given these, how do I expose a WebSockex websocket on that ip:port? In other words, what should I pass as the url in this basic example code:
defmodule WebSocketExample do
  use WebSockex

  def start_link(url, state) do
    WebSockex.start_link(url, __MODULE__, state)
  end

  def handle_frame({type, msg}, state) do
    IO.puts "Received Message - Type: #{inspect type} -- Message: #{inspect msg}"
    {:ok, state}
  end

  def handle_cast({:send, {type, msg} = frame}, state) do
    IO.puts "Sending #{type} frame with payload: #{msg}"
    {:reply, frame, state}
  end
end
Please note that I need to expose a raw websocket, not a Phoenix channel, as the consumer doesn't understand Phoenix channels. If Phoenix can expose a raw websocket then I'll consider that a solution too.
If neither Phoenix nor WebSockex can help, what are my options?
WebSockex is a client library; I don't think it has any code for exposing a websocket. Since you're already using Phoenix, you can probably do what you need with Phoenix channels.
If you're on cowboy (and you probably are, since it's the default), then you can also use it to expose a raw websocket. However, it requires some fiddling with routing. You will need to replace YourAppWeb.Endpoint with a manual configuration of cowboy:
{
  Plug.Cowboy,
  scheme: :http,
  plug: YourAppWeb.Endpoint,
  options: endpoint_options(),
  dispatch: [
    _: [
      # Dispatch paths beginning with /ws to a websocket handler
      {"/ws/[...]", YourApp.WebsocketHandler, []},
      # Dispatch other paths to the phoenix endpoint
      {:_, Plug.Cowboy.Handler, {YourAppWeb.Endpoint, endpoint_options()}}
    ]
  ]
}
I have honestly only done this with raw plug, so you might need to convert the endpoint to be a Plug instead of a Phoenix.Endpoint. Then, you need to implement YourApp.WebsocketHandler to conform to cowboy's API and perform a websocket upgrade (and handle sending/receiving messages), as described in cowboy docs. You can also see this gist for a more fleshed-out example.
WebSockex implements many callbacks, including, but not limited to, WebSockex.handle_connect/2. It holds a WebSockex.Conn in its state and passes it to all callbacks.
WebSockex.Conn is a plain old struct with a socket field.
So from any callback (I'd do it from WebSockex.handle_connect/2) you can share this socket with the process that needs it and use it from there.
Also, you can borrow some internals and check how the connection is created.
You'll see it uses WebSockex.Conn.new/2, which returns an initialized connection that, in turn, holds a socket. In that case, you'll be obliged to supervise the process that holds the socket manually.
The power of OSS is that all answers are one mouse click away from the questions.

Broadcasting message from grpc server to all/some connected clients in python

I am learning how to use gRPC streams to exchange messages between clients and a server in Python. I found a basic example that enables simple message sending between server and client. I am trying to modify it so that I can keep track of all the clients connected to the gRPC server (on the server side) and do two things: 1) broadcast from the server to all clients, 2) send a message to a particular connected client.
Here is the .proto file
syntax = 'proto3';

service Scenario {
    rpc Chat(stream DPong) returns (stream DPong) {}
}

message DPong {
    string name = 1;
}
And here is the client.py that creates a daemon thread to listen for incoming messages while waiting on stdin for outgoing messages:
import threading
import grpc
import time
import queue
import scenario_pb2_grpc, scenario_pb2

# new changes
msgQueue = queue.Queue()

def run():
    channel = grpc.insecure_channel('localhost:50052')
    stub = scenario_pb2_grpc.ScenarioStub(channel)
    print('client connected')

    def inputStream():
        while 1:
            msg = input('>>Enter message\n>>')
            yield scenario_pb2.DPong(name=msg)

    input_stream = stub.Chat(inputStream())

    def read_incoming():
        while 1:
            print('receivedFromServer: {}\n>>'.format(next(input_stream).name))

    thread = threading.Thread(target=read_incoming)
    thread.daemon = True
    thread.start()

    while 1:
        time.sleep(1)

if __name__ == '__main__':
    print('client starting ...')
    run()
Below is the server.py
import random
import string
import threading
import grpc
import scenario_pb2_grpc
import scenario_pb2
import time
from concurrent import futures

clientList = []

class Scenario(scenario_pb2_grpc.ScenarioServicer):
    def Chat(self, request_iterator, context):
        clients = []

        def stream():
            while 1:
                time.sleep(1)
                msg = input('>>Enter message\n>>')
                for i in clientList:
                    yield msg

        output_stream = stream()

        def read_incoming():
            while 1:
                received = next(request_iterator).name
                if (context, request_iterator) not in clientList:
                    clientList.append((context, request_iterator))
                print('receivedFromClient: {}'.format(received), len(clientList))

        thread = threading.Thread(target=read_incoming)
        thread.daemon = True
        thread.start()

        while 1:
            msg = output_stream
            yield scenario_pb2.DPong(name=next(msg))

if __name__ == '__main__':
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    scenario_pb2_grpc.add_ScenarioServicer_to_server(Scenario(), server)
    server.add_insecure_port('[::]:50052')
    server.start()
    print('listening ...')
    while 1:
        time.sleep(1)
So far, I have tried to maintain a list object clientList that contains the context & request_iterator objects of each client, and is updated every time a new client joins the server. But how do I use these objects from the clientList when sending out an outgoing message? I have tried iterating over the list, but the server sends the message to the same client (the last client heard from) a number of times instead of sending it to all the clients once.
Any help is highly appreciated!
This is certainly possible. The problem that you're running into here is that each call to Scenario.Chat on the server side corresponds to a single client connection. That is, this function is called when the streaming RPC starts and as soon as the function exits, the RPC ends.
So if you want n connected clients, you'll need n instances of Scenario.Chat running concurrently, each on its own thread. This does mean that the number of concurrently connected clients is limited by the size of the threadpool with which you instantiate your server.
So, let's say you have n threads in your server process dedicated to maintaining client connections. Then you need another n+1th thread (perhaps the main thread) determining when the server will broadcast a message to all clients (maybe by looking for input from STDIN?). When this extra thread determines that a message should be broadcast, it needs to communicate this intent to all of the threads maintaining connections to a client. There are many ways to make this happen. A threading.Condition and a global collections.deque, or a collections.deque per client connection (somewhat like channels between goroutines) would be two ways. The tricky bit here is ensuring that each client connection will receive the message regardless of how long the client connection thread takes to wake up and how many messages the n+1th thread decides to send in the interim.
If this is still unclear, I can follow up with some actual code demonstrating the idea.
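To make the idea concrete, here is a rough sketch of the per-client-queue variant, reusing the scenario_pb2 definitions from the question; the helper names (client_queues, broadcast) are made up for illustration:

import queue
import threading

import scenario_pb2
import scenario_pb2_grpc

clients_lock = threading.Lock()
client_queues = {}  # peer id -> queue.Queue of outgoing messages

class Scenario(scenario_pb2_grpc.ScenarioServicer):
    def Chat(self, request_iterator, context):
        # One Chat call per connected client: register a private queue for it.
        q = queue.Queue()
        with clients_lock:
            client_queues[context.peer()] = q

        # Drain incoming messages on a side thread so the generator below
        # can block on the outgoing queue.
        def read_incoming():
            for msg in request_iterator:
                print('receivedFromClient: {}'.format(msg.name))
        threading.Thread(target=read_incoming, daemon=True).start()

        try:
            while True:
                # Blocks until the broadcasting thread puts a message in.
                yield scenario_pb2.DPong(name=q.get())
        finally:
            with clients_lock:
                del client_queues[context.peer()]

def broadcast(text):
    # Called from the extra (n+1th) thread, e.g. after reading a line from STDIN.
    with clients_lock:
        for q in client_queues.values():
            q.put(text)

Because every client owns its own queue, a connection thread that wakes up late still finds all the broadcasts sent in the interim waiting for it.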
You can spin up multiple ports in one application.
gRPC can be running on port 50011 and Flask with Socket.IO can be running on port 8080.
In Python, you can use the Flask framework and the flask_socketio library in your server.py.
e.g. server.py:
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/')
def index():
    return "Hello, World!"

if __name__ == '__main__':
    socketio.run(app, port=8080, debug=True)
Instead of using the gRPC streaming API, use WebSocket to broadcast to all connected clients, and to specific/selected clients using rooms.
e.g.:
@socketio.on('message')
def handle_message(data):
    # Logic to send large data in chunks: call Socket.IO's emit once per
    # chunk, e.g. emit('my response', chunk_data).
    emit('my response', data, broadcast=True)
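For targeting selected clients, a hedged sketch using rooms (the event names and payload keys here are made up for illustration):

from flask_socketio import join_room

@socketio.on('join')
def on_join(data):
    join_room(data['room'])  # put this client into a named room

@socketio.on('notify')
def on_notify(data):
    # Emit only to the clients that joined the given room.
    emit('my response', data['payload'], room=data['room'])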
gRPC is primarily built for a single client's request and response, while WebSocket is well suited to pushing to multiple clients.

Non-blocking socket connect()'s within asyncio

Here is an example of how to do non-blocking socket connects (as a client) within asyncore. Since this module is deprecated ('Deprecated since version 3.6: Please use asyncio instead.'), how is the same possible within asyncio? Creating a socket and connecting it inside a coroutine works synchronously and creates the problem described in the linked question.
A connect inside a coroutine appears synchronous to that coroutine, but is in fact asynchronous with respect to the event loop. This means that you can create any number of coroutines working in parallel without blocking each other, and yet all running inside a single thread.
If you are doing http, look at examples of parallel downloads using aiohttp. If you need low-level TCP connections, look at the examples in the documentation and use asyncio.gather to run them in parallel:
import asyncio

async def talk(host):
    # wait until connection is established, but without blocking
    # other coroutines
    r, w = await asyncio.open_connection(host, 80)
    # use the streams r, w to talk to the server - for example, echo:
    while True:
        line = await r.readline()
        if not line:
            break
        w.write(line)
    w.close()

async def talk_many(hosts):
    coros = [talk(host) for host in hosts]
    await asyncio.gather(*coros)

asyncio.run(talk_many(["host1", "host2", ...]))

ZeroMQ: Many-to-one no-reply async messages

I have read through the zguide but haven't found the kind of pattern I'm looking for:
There is one central server (with a known endpoint) and many clients (which may come and go).
Clients keep sending heartbeats to the server, but they don't want the server to reply.
The server receives heartbeats, but it does not reply to clients.
Heartbeats sent while clients and server are disconnected should somehow be dropped to prevent a heartbeat flood when they go back online.
The closest I can think of is the DEALER-ROUTER pattern, but since this is meant to be used as an async REQ-REP pattern (no?), I'm not sure what would happen if the server just kept silent on incoming "requests." Also, the DEALER socket would block rather than start dropping heartbeats when the send High Water Mark is reached, which would still result in a heartbeat flood.
The PUSH/PULL pattern should give you what you need.
# Client example
import zmq

class Client(object):
    def __init__(self, client_id):
        self.client_id = client_id
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PUSH)
        self.socket.connect("tcp://localhost:12345")

    def send_heartbeat(self):
        self.socket.send_string(str(self.client_id))

# Server example
import zmq

class Server(object):
    def __init__(self):
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PULL)
        self.socket.bind("tcp://*:12345")

    def receive_heartbeat(self):
        return self.socket.recv_string()  # returns the client_id of the message's sender
This PUSH/PULL pattern works with multiple clients, as you wish. The server should keep a record of the received heartbeats (i.e. a dictionary like {client_id: last_received}) which is updated with datetime.utcnow() on each received message, and implement some housekeeping function to periodically check that record for clients with stale timestamps, as sketched below.
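A minimal sketch of that housekeeping, assuming an arbitrary 30-second timeout (the names on_heartbeat and prune_stale_clients are made up for illustration):

from datetime import datetime, timedelta

last_received = {}  # client_id -> datetime of the last heartbeat

def on_heartbeat(client_id):
    # Call this for every message pulled off the PULL socket.
    last_received[client_id] = datetime.utcnow()

def prune_stale_clients(max_age=timedelta(seconds=30)):
    cutoff = datetime.utcnow() - max_age
    for client_id, ts in list(last_received.items()):
        if ts < cutoff:
            del last_received[client_id]  # consider the client disconnected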

How to make a non-blocking connection from a server in Twisted?

I am implementing an API with the following basic structure:
Run serverA
Client connects to serverA and sends data
ServerA processes the data and sends it on to serverB
ServerB replies to serverA
ServerA responds to client's request based on the input received from serverB
So far I have tried two solutions:
1) Create a standard non-Twisted TCP connection using httplib to process the request from serverA to serverB. This, however, effectively blocks the server for the duration of the httplib call.
2) Create a second class inheriting from protocol.Protocol and use
factory = protocol.ClientFactory()
factory.protocol = Authenticate
reactor.connectSSL("localhost",31337,factory, ssl.ClientContextFactory())
to create the connection between serverA and serverB. However when doing this, I don't seem to be able to access the original client-to-serverA connection from within the callbacks of the request class.
What would be the correct way to handle such a setting in Twisted?
The "client-to-serverA" connection is represented by a protocol instance associated with a transport. These are both regular old Python objects, so you can do things like pass them as arguments to functions or class initializers, or set them as attributes on other objects.
For example, if you have ClientToServerAProtocol with a fetchServerBData method which is invoked in response to some bytes being received from the client, you might write it something like this:
class ClientToServerAProtocol(Protocol):
    ...
    def fetchServerBData(self, anArg):
        factory = protocol.ClientFactory()
        factory.protocol = Authenticate
        factory.clientToServerAProtocol = self
        reactor.connectSSL("localhost", 31337, factory, ssl.ClientContextFactory())
Since ClientFactory sets itself as the factory attribute on any protocol it creates, the Authenticate instance which results from this will be able to use self.factory.clientToServerAProtocol to get a reference to that "client-to-serverA" connection.
There are lots of variations on this approach. Here's another one, using the more recently introduced endpoint API:
from twisted.internet.endpoints import SSL4ClientEndpoint

class ClientToServerAProtocol(Protocol):
    ...
    def fetchServerBData(self, anArg):
        e = SSL4ClientEndpoint(reactor, "localhost", 31337, ssl.ClientContextFactory())
        factory = protocol.Factory()
        factory.protocol = Authenticate
        connectDeferred = e.connect(factory)
        def connected(authProto):
            authProto.doSomethingWith(self)
        connectDeferred.addCallback(connected)
Same basic idea here - use self to give a reference to the "client-to-serverA" connection you're interested in to the Authenticate protocol. Here, I used a nested function to "close over" self. That's just another of the many options you have for getting references into the right part of your program.
