Micropython: asyncio Server: get Client IP address

I am new to MicroPython and testing it out to see if it can fit the needs of my next project. I have set up a script to test it, and there I run three async jobs in an endless loop. One of them is a tiny webserver, which should act as an API. The construct is working fine; I just need to know how I can get the client's IP address when it calls my API webservice (it will only be a local IP, so no worries about reverse proxies etc.). I would like to have the client's IP in the method APIHandling, in this snippet just to print it out:
async def APIHandling(reader, writer):
    request_line = await reader.readline()
    # We are not interested in HTTP request headers, skip them
    while await reader.readline() != b"\r\n":
        pass
    request = str(request_line)
    try:
        request = request.split()[1]
    except IndexError:
        pass
    print("API request: " + request + " from IP: ")
    req = request.split('/')
    # do some things here
    response = html % stateis
    writer.write(response)
    await writer.drain()
    await writer.wait_closed()

async def BusReader():
    # doing something here
    await asyncio.sleep(0)

async def UiHandling():
    # doing something else here
    await asyncio.sleep(0.5)

async def Main():
    set_global_exception()
    loop = asyncio.get_event_loop()
    loop.create_task(asyncio.start_server(APIHandling, Networking.GetIPAddress(), 80))
    loop.create_task(UiHandling())
    loop.create_task(BusReader())
    loop.run_forever()

try:
    asyncio.run(Main())
finally:
    asyncio.new_event_loop()
The only thing I found was Stream.get_extra_info(v), but I do not have a Stream available anywhere?
Note: This is just a snippet with the essential parts of my actual script, so you will find references to other classes etc. which are not present in this code example.

Never mind, I was too stupid to see that "writer" is actually a Stream, where I can get the client's IP with writer.get_extra_info('peername')[0].
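For anyone landing here, a minimal sketch of that fix inside the handler above (only the peername lines are new):

async def APIHandling(reader, writer):
    request_line = await reader.readline()
    # the writer is a Stream; 'peername' is the (ip, port) tuple of the connected client
    client_ip = writer.get_extra_info('peername')[0]
    print("API request: " + str(request_line) + " from IP: " + client_ip)
    # ... handle the request as before ...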

Related

How to break out of an (asyncio) websocket fetch loop that doesn't have any incoming messages?

This code prints all messages from a websocket connection:
class OrderStreamer:
    def __init__(ᬑ):
        ᬑ.terminate_flag = False
        # worker thread to receive data stream
        ᬑ.worker_thread = threading.Thread(
            target=ᬑ.worker_thread_func,
            daemon=True
        )

    def start_streaming(ᬑ, from_scheduler=False):
        ᬑ.worker_thread.start()

    def terminate(ᬑ):
        ᬑ.terminate_flag = True

    def worker_thread_func(ᬑ):
        asyncio.run(ᬑ.aio_func())  # blocks

    async def aio_func(ᬑ):
        async with \
                aiohttp.ClientSession() as session, \
                session.ws_connect(streams_url) as wsock, \
                anyio.create_task_group() as tg:
            async for msg in wsock:
                print(msg.data)
                if ᬑ.terminate_flag:
                    await wsock.close()
The problem is that if no messages arrive, the loop never gets the chance to check terminate_flag and never exits.
I tried creating an external reference to the runloop and websocket:
async with \
        aiohttp.ClientSession() as session, \
        session.ws_connect(streams_url) as wsock, \
        anyio.create_task_group() as tg:
    ᬑ.wsock = wsock
    ᬑ.loop = asyncio.get_event_loop()
... and modifying my terminate function:
def terminate(ᬑ):
    # ᬑ.loop.stop()
    asyncio.set_event_loop(ᬑ.loop)

    async def kill():
        await ᬑ.wsock.close()

    asyncio.run(kill())
... but it does not work.
I can't afford to rearchitect my entire application to use asyncio at this point in time.
How to break out of the loop?
You should use asyncio.wait_for or asyncio.wait and call wsock.__anext__() directly, instead of using an async for loop.
The loop with asyncio.wait should look something like this:
next_message = asyncio.create_task(wsock.__anext__())
while not self.terminate_flag:
    await asyncio.wait([next_message], timeout=SOME_TIMEOUT)
    if next_message.done():
        try:
            msg = next_message.result()
        except StopAsyncIteration:
            break
        else:
            print(msg.data)
            next_message = asyncio.create_task(wsock.__anext__())
SOME_TIMEOUT should be replaced with the number of seconds you want to wait for the next incoming message before re-checking terminate_flag.
Here is the documentation for asyncio.wait
P.S. I replaced ᬑ with self, but I hope you get the idea
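For completeness, a sketch of the asyncio.wait_for variant of the same loop. Note the asyncio.shield: wait_for cancels the awaited future on timeout, and we want the pending read to survive into the next iteration (SOME_TIMEOUT as above):

next_message = asyncio.create_task(wsock.__anext__())
while not self.terminate_flag:
    try:
        # shield() keeps the timeout from cancelling the pending read task
        msg = await asyncio.wait_for(asyncio.shield(next_message), timeout=SOME_TIMEOUT)
    except asyncio.TimeoutError:
        continue  # no message yet; loop around and re-check terminate_flag
    except StopAsyncIteration:
        break
    print(msg.data)
    next_message = asyncio.create_task(wsock.__anext__())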
Note that to read data you should not create a new task as mentioned here:
Reading from the WebSocket (await ws.receive()) must only be done inside the request handler task;
You can simply use a timeout.
async def handler(request):
    ws = web.WebSocketResponse()  # or web.WebSocketResponse(receive_timeout=5)
    await ws.prepare(request)
    while True:
        try:
            msg = await ws.receive(timeout=5)
        except asyncio.TimeoutError:
            print('TimeoutError')
            if your_terminate_flag is True:
                break
aiohttp's web_protocol.py _handle_request() will dump errors if you don't write a try/except or don't catch the right exception. Try testing with except Exception as err:, or check its source code.

write unit test for async aiohttp and aio files

I am new to asyncio. I am using aiohttp and aiofiles for downloading images. How do I write unit tests for both of these?
class ImageDownloader:
    def __init__(self, folder_path: str):
        self.folder_path = folder_path

    async def async_download_image(self, image_name: str, image_url: str):
        logging.info("%s downloading is started", image_name)
        async with aiohttp.ClientSession() as session:
            async with session.get(image_url) as resp:
                if resp.status == 200:
                    logging.info(" %s downloading is finished", image_name)
                    image_saving_path = os.path.join(self.folder_path, image_name)
                    logging.info(" %s saving to directory is started", image_name)
                    file = await aiofiles.open(image_saving_path, mode='wb')
                    await file.write(await resp.read())
                    await file.close()
                    logging.info(" %s saving to directory is finished", image_name)
                else:
                    logging.exception(IMAGE_DOWNLOADER_EXCEPTION + image_name)
                    raise ImageDownloaderError(IMAGE_DOWNLOADER_EXCEPTION + image_name)
Since Python 3.8 there is unittest.IsolatedAsyncioTestCase, which lets you write unit tests for any asyncio code conveniently:
class MyFixture(unittest.IsolatedAsyncioTestCase):
    async def test_1(self):
        result = await production_code()
        self.assertEqual(result, 42)
Regarding aiohttp, it is officially recommended (see the warning in the docs under "Faking request object") to run a local server to test your client. To be honest, I have no clue why, as this disagrees with the standard rule to mock expensive dependencies. Anyway, to do so you have to redesign your function so that it accepts the session object as a parameter. This way, you can redirect the requests to your local test server with the help of a mocked resolver.
async def production_code(client_session):
    async with client_session.get(...) as response:
        ...
    ...

async def test_2(self):
    with create_mocked_session() as mock:
        await production_code(mock)
        ...
It may be easier to bypass the whole aiohttp lib completely by mocking the session object itself and yielding prepared, handcrafted test responses.
I have no idea about aiofiles, but the same pattern holds for file input/output as well: pass in a mocked file-like object, which preferably holds everything in memory.
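For instance, here is a minimal sketch of such a test for the async_download_image method above, using only unittest.mock (Python 3.8+). The import path is an assumption, and the patch targets are illustrative; it bypasses both aiohttp and aiofiles entirely:

import unittest
from unittest import mock

from my_module import ImageDownloader  # wherever the class lives


class ImageDownloaderTest(unittest.IsolatedAsyncioTestCase):
    async def test_download_writes_image_bytes(self):
        # Fake HTTP response: status 200 with an awaitable read()
        resp = mock.MagicMock()
        resp.status = 200
        resp.read = mock.AsyncMock(return_value=b"fake-image-bytes")

        # session.get(...) is entered via "async with"; MagicMock provides
        # async __aenter__/__aexit__ automatically on Python 3.8+
        session = mock.MagicMock()
        session.get.return_value.__aenter__.return_value = resp

        # aiohttp.ClientSession() is itself entered via "async with"
        session_cm = mock.MagicMock()
        session_cm.__aenter__.return_value = session

        fake_file = mock.AsyncMock()  # stands in for the aiofiles handle

        with mock.patch("aiohttp.ClientSession", return_value=session_cm), \
             mock.patch("aiofiles.open", mock.AsyncMock(return_value=fake_file)):
            await ImageDownloader("/tmp").async_download_image(
                "cat.png", "http://example.com/cat.png")

        fake_file.write.assert_awaited_once_with(b"fake-image-bytes")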

Code is not delivered by SendCodeRequest, without any error, and this happens only server-side on Heroku

Here is my situation: the same Telethon code is used on my local machine and on the server. Requesting an authorization code from the local machine works fine. Requesting the code from the server does not produce any error, yet the code is not sent. Sometimes it even works from the server without any changes in code.
I suppose there might be some IP blocks or something related to the IP, because that is the only thing that might differ on the server side: Heroku assigns IP addresses dynamically, so there might be some subnets which are blocked by the Telegram API for some reason. But there is no error, and that is really strange. There are too many IP addresses to disprove the hypothesis; I would need to catch at least one IP address that gives me opposite results: one time the code is received and another time it is not. So I am stuck with this situation and have no idea how it could be fixed or clarified.
global t
t = None

async def ssssendCode(phone):
    global t
    try:
        if os.path.isfile(phone + '.session'):
            logger.debug('client file exists')
        else:
            logger.debug('client file does not exist')
        if t is None:
            t = TelegramClient(phone, settings['telegramClientAPIId'], settings['telegramClientAPIHash'])
        t.phone = phone
        #t.phone_code_hash = None
        await t.connect()
        #response = await t.send_code_request(phone=phone,force_sms=True)
        s3_session.resource('s3').Bucket('telethon').upload_file(str(phone) + ".session", str(phone) + ".session")
        logger.debug(str(requests.get('https://httpbin.org/ip').text))
        response = await t.send_code_request(phone=phone)
        logger.debug(str(t.is_connected()))
    except Exception as e:
        response = str(e)
    return str(response)
example of response to the local machine request
SentCode(type=SentCodeTypeSms(length=5), phone_code_hash='b5b069a2a4122040f1', next_type=CodeTypeCall(), timeout=120)
example of response to the server-side request
SentCode(type=SentCodeTypeSms(length=5), phone_code_hash='0e89db0324c1af0149', next_type=CodeTypeCall(), timeout=120)
send_code_request is from Telethon, without modifications:
async def send_code_request(
        self: 'TelegramClient',
        phone: str,
        *,
        force_sms: bool = False) -> 'types.auth.SentCode':
    """
    Sends the Telegram code needed to login to the given phone number.

    Arguments
        phone (`str` | `int`):
            The phone to which the code will be sent.

        force_sms (`bool`, optional):
            Whether to force sending as SMS.

    Returns
        An instance of :tl:`SentCode`.

    Example
        .. code-block:: python

            phone = '+34 123 123 123'
            sent = await client.send_code_request(phone)
            print(sent)
    """
    result = None
    phone = utils.parse_phone(phone) or self._phone
    phone_hash = self._phone_code_hash.get(phone)

    if not phone_hash:
        try:
            result = await self(functions.auth.SendCodeRequest(
                phone, self.api_id, self.api_hash, types.CodeSettings()))
        except errors.AuthRestartError:
            return await self.send_code_request(phone, force_sms=force_sms)

        # If we already sent a SMS, do not resend the code (hash may be empty)
        if isinstance(result.type, types.auth.SentCodeTypeSms):
            force_sms = False

        # phone_code_hash may be empty, if it is, do not save it (#1283)
        if result.phone_code_hash:
            self._phone_code_hash[phone] = phone_hash = result.phone_code_hash
    else:
        force_sms = True

    self._phone = phone

    if force_sms:
        result = await self(
            functions.auth.ResendCodeRequest(phone, phone_hash))

        self._phone_code_hash[phone] = result.phone_code_hash

    return result
Just in case: I have much more than 2 minutes between attempts to get a code from the local machine and from the server, so it is absolutely not a timeout issue. Moreover, even when requesting the code from the local machine just half a minute after a failed server-side attempt, the code arrives almost immediately.

How to reuse aiohttp ClientSession pool?

The docs say to reuse the ClientSession:
Don’t create a session per request. Most likely you need a session per
application which performs all requests altogether.
A session contains a connection pool inside, connection reusage and
keep-alives (both are on by default) may speed up total performance.1
But there doesn't seem to be any explanation in the docs about how to do this? There is one example that's maybe relevant, but it does not show how to reuse the pool elsewhere: http://aiohttp.readthedocs.io/en/stable/client.html#keep-alive-connection-pooling-and-cookie-sharing
Would something like this be the correct way to do it?
@app.listener('before_server_start')
async def before_server_start(app, loop):
    app.pg_pool = await asyncpg.create_pool(**DB_CONFIG, loop=loop, max_size=100)
    app.http_session_pool = aiohttp.ClientSession()

@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    app.http_session_pool.close()
    app.pg_pool.close()

@app.post("/api/register")
async def register(request):
    # json validation
    async with app.pg_pool.acquire() as pg:
        await pg.execute()  # create unactivated user in db
    async with app.http_session_pool as session:
        # TODO send activation email using SES API
        async with session.post('http://httpbin.org/post', data=b'data') as resp:
            print(resp.status)
            print(await resp.text())
    return HTTPResponse(status=204)
There are a few things I think can be improved:
1)
An instance of ClientSession is a single session object. This one session contains a pool of connections, but it is not a "session_pool" itself. I would suggest renaming http_session_pool to http_session or maybe client_session.
2)
The session's close() method is a coroutine. You should await it:
await app.client_session.close()
Or even better (IMHO), instead of thinking about how to properly open/close the session, use the standard async context manager protocol and await __aenter__ / __aexit__ manually:
@app.listener('before_server_start')
async def before_server_start(app, loop):
    # ...
    app.client_session = await aiohttp.ClientSession().__aenter__()

@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    await app.client_session.__aexit__(None, None, None)
    # ...
3)
Pay attention to this info:
However, if the event loop is stopped before the underlying connection
is closed, an ResourceWarning: unclosed transport warning is emitted
(when warnings are enabled).
To avoid this situation, a small delay must be added before closing
the event loop to allow any open underlying connections to close.
I'm not sure it's mandatory in your case, but there's nothing bad in adding await asyncio.sleep(0) inside after_server_stop, as the documentation advises:
@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    # ...
    await asyncio.sleep(0)  # http://aiohttp.readthedocs.io/en/stable/client.html#graceful-shutdown
Upd:
A class that implements __aenter__ / __aexit__ can be used as an async context manager (i.e. in an async with statement). It allows you to run some actions before the inner block executes and after it. This is very similar to regular context managers, but asyncio-related. And just like a regular context manager, an async one can be used directly (without async with) by manually awaiting __aenter__ / __aexit__.
Why do I think it's better to create/free the session using __aenter__ / __aexit__ manually instead of using close(), for example? Because we shouldn't have to worry about what actually happens inside __aenter__ / __aexit__. Imagine that in a future version of aiohttp the creation of a session changes so that you need to await open(), for example. If you use __aenter__ / __aexit__, you won't need to change your code at all.
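To illustrate, a toy async context manager used both ways (the Session name is just for this example):

class Session:
    async def __aenter__(self):
        print("open")   # acquire resources here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        print("close")  # release resources here

async def with_statement():
    # the usual way: async with drives __aenter__/__aexit__ for you
    async with Session() as s:
        ...

async def manual():
    # the manual way, as in the listener example above
    s = await Session().__aenter__()
    try:
        ...
    finally:
        await s.__aexit__(None, None, None)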
It seems there is no session pool in aiohttp; I'll just post some official docs.
Persistent session
Here is the persistent-session usage demo from the official site:
https://docs.aiohttp.org/en/latest/client_advanced.html#persistent-session
async def persistent_session(app):
    app['PERSISTENT_SESSION'] = session = aiohttp.ClientSession()
    yield
    await session.close()

app.cleanup_ctx.append(persistent_session)

async def my_request_handler(request):
    session = request.app['PERSISTENT_SESSION']
    async with session.get("http://python.org") as resp:
        print(resp.status)

# TODO: a full runnable demo
Connection pool
And it has a connection pool:
https://docs.aiohttp.org/en/latest/client_advanced.html#connectors

conn = aiohttp.TCPConnector()
# conn = aiohttp.TCPConnector(limit=30)
# conn = aiohttp.TCPConnector(limit=0)  # no limit; default is 100
# conn = aiohttp.TCPConnector(limit_per_host=30)  # default is 0
session = aiohttp.ClientSession(connector=conn)
I found this question after searching on Google on how to reuse an aiohttp ClientSession instance after my code was triggering this warning message: UserWarning: Creating a client session outside of coroutine is a very dangerous idea
This code may not solve the above problem though it is related. I am new to asyncio and aiohttp, so this may not be best practice. It's the best I could come up with after reading a lot of seemingly conflicting information.
I created a class ResourceManager taken from the Python docs that opens a context.
The ResourceManager instance handles the opening and closing of the aiohttp ClientSession instance via the magic methods __aenter__ and __aexit__ with BaseScraper.set_session and BaseScraper.close_session wrapper methods.
I was able to reuse a ClientSession instance with the following code.
The BaseScraper class also has methods for authentication. It depends on the lxml third-party package.
import asyncio
from time import time
from contextlib import contextmanager, AbstractContextManager, ExitStack

import aiohttp
import lxml.html


class ResourceManager(AbstractContextManager):
    # Code taken from Python docs: 29.6.2.4. of https://docs.python.org/3.6/library/contextlib.html
    def __init__(self, scraper, check_resource_ok=None):
        self.acquire_resource = scraper.acquire_resource
        self.release_resource = scraper.release_resource
        if check_resource_ok is None:
            def check_resource_ok(resource):
                return True
        self.check_resource_ok = check_resource_ok

    @contextmanager
    def _cleanup_on_error(self):
        with ExitStack() as stack:
            stack.push(self)
            yield
            # The validation check passed and didn't raise an exception
            # Accordingly, we want to keep the resource, and pass it
            # back to our caller
            stack.pop_all()

    def __enter__(self):
        resource = self.acquire_resource()
        with self._cleanup_on_error():
            if not self.check_resource_ok(resource):
                msg = "Failed validation for {!r}"
                raise RuntimeError(msg.format(resource))
        return resource

    def __exit__(self, *exc_details):
        # We don't need to duplicate any of our resource release logic
        self.release_resource()


class BaseScraper:
    login_url = ""
    login_data = dict()  # dict of key, value pairs to fill the login form
    loop = asyncio.get_event_loop()

    def __init__(self, urls):
        self.urls = urls
        self.acquire_resource = self.set_session
        self.release_resource = self.close_session

    async def _set_session(self):
        self.session = await aiohttp.ClientSession().__aenter__()

    def set_session(self):
        set_session_attr = self.loop.create_task(self._set_session())
        self.loop.run_until_complete(set_session_attr)
        return self  # variable after "as" becomes instance of BaseScraper

    async def _close_session(self):
        await self.session.__aexit__(None, None, None)

    def close_session(self):
        close_session = self.loop.create_task(self._close_session())
        self.loop.run_until_complete(close_session)

    def __call__(self):
        fetch_urls = self.loop.create_task(self._fetch())
        return self.loop.run_until_complete(fetch_urls)

    async def _get(self, url):
        async with self.session.get(url) as response:
            result = await response.read()
        return url, result

    async def _fetch(self):
        tasks = (self.loop.create_task(self._get(url)) for url in self.urls)
        start = time()
        results = await asyncio.gather(*tasks)
        print(
            "time elapsed: {} seconds \nurls count: {}".format(
                time() - start, len(urls)
            )
        )
        return results

    @property
    def form(self):
        """Create and return form for authentication."""
        form = aiohttp.FormData(self.login_data)
        get_login_page = self.loop.create_task(self._get(self.login_url))
        url, login_page = self.loop.run_until_complete(get_login_page)
        login_html = lxml.html.fromstring(login_page)
        hidden_inputs = login_html.xpath(r'//form//input[@type="hidden"]')
        login_form = {x.attrib["name"]: x.attrib["value"] for x in hidden_inputs}
        for key, value in login_form.items():
            form.add_field(key, value)
        return form

    async def _login(self, form):
        async with self.session.post(self.login_url, data=form) as response:
            if response.status != 200:
                response.raise_for_status()
            print("logged into {}".format(url))
            await response.release()

    def login(self):
        post_login_form = self.loop.create_task(self._login(self.form))
        self.loop.run_until_complete(post_login_form)


if __name__ == "__main__":
    urls = ("http://example.com",) * 10
    base_scraper = BaseScraper(urls)
    with ResourceManager(base_scraper) as scraper:
        for url, html in scraper():
            print(url, len(html))

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSONs from an API using the asyncio module. The crux of my question concerns the following event loop, implemented like this:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future(klass.download_all())
loop.run_until_complete(main_task)
and download_all() implemented as the following instance method of a class which already has downloader objects created and available to it, and which thus calls each respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that in the downloaders' download methods, when I use asyncio.ensure_future instead of the await syntax, it runs much faster, that is, seemingly more "asynchronously", as I can see from the logs?
This works because of the way I have set up detecting all the tasks that are still pending, and not letting the download_all method complete, and keep calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently? How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
    # Not using await here, as I am "supposed" to
    asyncio.ensure_future(self.write(response_json, self.path))
    return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With full code implemented and a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up as it was taking so long).
When you await in each iteration of the for loop, each iteration waits for its download to finish before moving on.
When you use ensure_future, on the other hand, it doesn't wait: it creates tasks to download all the files, and then all of them are awaited together in the second loop.
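That said, a fire-and-forget ensure_future can lose exceptions and may leave writes unfinished at shutdown. A safer variant keeps references to the write tasks and gathers them explicitly, instead of sifting pending tasks out of asyncio.Task.all_tasks() (the _write_tasks list here is a hypothetical addition, not part of the original code):

async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
    # schedule the write without blocking this downloader...
    task = asyncio.ensure_future(self.write(response_json, self.path))
    # ...but remember it so it can be awaited before the loop exits
    self._write_tasks.append(task)
    return response_json

# then, at the end of download_all(), instead of the Task.all_tasks() sifting:
# await asyncio.gather(*self._write_tasks)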
