For a personal project I am using PonyORM with FastAPI; is there a classy way to keep a db_session through the whole async lifecycle of an endpoint call?
The PonyORM documentation talks about using the decorator and yield, but that didn't work for me, so after looking at other GitHub projects I found the workaround below, which works fine.
But I don't really know what's happening behind the scenes, or why Pony's documentation isn't accurate on the async topic.
def _enter_session():
    session = db_session(sql_debug=True)
    Request.pony_session = session
    session.__enter__()

def _exit_session():
    session = getattr(Request, 'pony_session', None)
    if session is not None:
        session.__exit__()

@app.middleware("http")
async def add_pony(request: Request, call_next):
    _enter_session()
    response = await call_next(request)
    _exit_session()
    return response
and then in a dependency, for example:
async def current_user(
        username: str = Depends(current_user_from_token)) -> User:
    with Request.pony_session:
        ...  # db actions
and in an endpoint:
@router.post("/token", response_model=Token)
async def login_for_access_token(
        request: Request,
        user_agent: Optional[str] = Header(None),
        form_data: OAuth2PasswordRequestForm = Depends()):
    status: bool = authenticate_user(
        form_data.username,
        form_data.password,
        request.client.host,
        user_agent)
@db_session
def authenticate_user(
        username: str,
        password: str,
        client_ip: str = 'Undefined',
        client_app: str = 'Undefined'):
    user: User = User.get(email=username)
If you guys have a better way or a good explanation, I would love to hear about it :)
I'm kind of a PonyORM developer and a FastAPI user.
The problem with async and Pony is that Pony uses transactions, which in our understanding should be atomic. Also, we use a thread-local cache that can end up being used by another session if the context switches to another coroutine.
I agree that we should add information about this to the documentation.
To be sure everything will be okay, you should use db_session as a context manager and make sure that you don't have any async calls inside that block of code.
If your endpoints are not asynchronous, you can also use the db_session decorator for them.
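To make that concrete, here is a minimal sketch of the advice, assuming hypothetical async helpers issue_token and send_welcome_email: do the awaiting outside the block and keep the db_session block purely synchronous:

from pony.orm import db_session

async def register(payload):
    token = await issue_token(payload)  # async work first (hypothetical helper)
    with db_session:  # short-lived, purely synchronous block
        User(email=payload["email"])  # db actions only, no awaits in here
    await send_welcome_email(payload["email"], token)  # async work after (hypothetical helper)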
In Pony we agree that using ContextVar instead of Local should help with some of these cases.
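To illustrate the difference with a toy example (this is not Pony internals): a threading.local attribute is shared by every coroutine running on the same thread, while contextvars.ContextVar values are tracked per asyncio task:

import contextvars

current_session = contextvars.ContextVar("current_session", default=None)

async def handle_request(name):
    # each asyncio task sees its own value here; a threading.local
    # attribute would be shared with other coroutines on the same thread
    current_session.set(name)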
The answer in one sentence is: use small, short-lived sessions and don't interrupt them with async calls.
Try using a standard FastAPI dependency:
from fastapi import Depends

async def get_pony():
    with db_session(sql_debug=True) as session:
        yield session

async def current_user(
        username: str = Depends(current_user_from_token),
        pony_session = Depends(get_pony)) -> User:
    with pony_session:
        ...  # db actions
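For completeness, a hypothetical endpoint could then pull in current_user as usual (router comes from the question's code; UserOut is an assumed response schema):

@router.get("/me", response_model=UserOut)  # UserOut is an assumed schema
async def read_me(user: User = Depends(current_user)):
    return user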
Related
I'm trying to create a simple ask, answer, remember Telegram bot with pyTelegramBot.
Everything was fine while TeleBot was used; TeleBot.register_next_step_handler helped me a lot.
Example:
...
@bot.message_handler(func=lambda msg: msg.text is not None and '/start' in msg.text)
def send_welcome(msg):
    global cur_user
    cur_user = msg.from_user.id
    keyboard = types.InlineKeyboardMarkup()
    keyboard.add(types.InlineKeyboardButton('Yes', callback_data='old'),
                 types.InlineKeyboardButton('No', callback_data='new'))
    keyboard.add(types.InlineKeyboardButton('Time to stop', callback_data='stop_bot'))
    greet = 'Blah... blah... blah\nHave we met?'
    bot.send_message(msg.chat.id, greet, reply_markup=keyboard)
...
@bot.callback_query_handler(func=lambda call: True)
def query_processing(call):
    global user
    global cur_user
    if call.data == 'new':
        user = dict.fromkeys(user, None)
        nxt = bot.send_message(call.message.chat.id, "What's your name?")
        bot.register_next_step_handler(nxt, get_name_ask_goal)
...
Still, I need my bot to be asynchronous, because there is a sleep delay before the bot sends a message, during which the user should still be able to send messages.
I tried to use AsyncTeleBot, but there is no register_next_step_handler function there. I didn't find out how to make the bot wait for the user to type a name, and it is almost impossible for me to add a register_next_step_handler function into the relevant files. I also found a GitHub issue about this, with no solution since 2017.
I tried:
...
bot = AsyncTeleBot(os.getenv('token2'))
...
async def beep(chat_id) -> None:
    """Send the beep message."""
    await bot.send_message(chat_id, text='Beep!')
    aioschedule.clear(chat_id)

async def scheduler():
    global chat_id
    aioschedule.every(5).seconds.do(beep, chat_id).tag(chat_id)
    while True:
        await aioschedule.run_pending()
        await asyncio.sleep(1)

async def main():
    await asyncio.gather(scheduler(), bot.polling(non_stop=True))

if __name__ == "__main__":
    asyncio.run(main(), debug=True)
...
Result:
TypeError: An asyncio.Future, a coroutine or an awaitable is required
I'm wondering:
are there other appropriate libraries for my task?
is there any simple solution to save the user's message?
is it possible to use asyncio with TeleBot, which is made in a thread-based style?
Hoping for any help.
The following code is part of some automated tests that I have written in Python 3.6:
connected = False

def aiohttp_server(loop):
    async def handler(msg, session):
        global connected
        if msg.type == sockjs.MSG_OPEN:
            connected = True
        if msg.type == sockjs.MSG_CLOSE:
            connected = False

    app = web.Application(loop=loop)
    sockjs.add_endpoint(app, handler)
    runner = web.AppRunner(app)
    return runner

def run_server(runner, loop):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    asyncio.set_event_loop(loop)
    loop.run_until_complete(runner.setup())
    site = web.TCPSite(runner, 'localhost', 8080)
    loop.run_until_complete(site.start())
    loop.run_forever()

def start_server():
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=run_server, args=(aiohttp_server(loop), loop), daemon=True)
    t.start()
    time.sleep(0.01)
Basically, calling start_server should initiate a simple web server with a sockjs endpoint named /sockjs.
I am not yet a master of Python's async keyword. There are two issues that I suspect are related:
Firstly, I am getting a deprecation warning on the app = web.Application(loop=loop) statement:
/home/peter/incubator/sockjs_client/tests/test_sockjs_client.py:25: DeprecationWarning: loop argument is deprecated
app = web.Application(loop=loop)
/home/peter/.local/lib/python3.6/site-packages/sockjs/route.py:54: DeprecationWarning: loop property is deprecated
manager = SessionManager(name, app, handler, app.loop)
And secondly, the tests fail occasionally. I believe that, depending on machine load, sometimes the server hasn't had enough time to start before the test code actually starts executing.
Basically, what I need is for the start_server function to initialise a web application with a websocket endpoint, and not return until the application is prepared to accept websocket connections.
Firstly, I am getting a deprecation warning on the app = web.Application(loop=loop) statement:
The recommended way to avoid passing around the loop everywhere is to switch to asyncio.run. Instead of managing the loop manually, let asyncio.run create (and close) the loop for you. If all your work is done in coroutines, you can access the loop with get_event_loop() or get_running_loop().
Basically, what I need is for the start_server function to initialise a web application with a websocket endpoint, and not return until the application is prepared to accept websocket connections.
You can pass a threading.Event to the thread that gets set when the site is set up, and wait for it in the main thread.
Here is an (untested) example that implements both suggestions:
import asyncio
import logging
import threading

import sockjs
from aiohttp import web

connected = False

def aiohttp_server():
    async def handler(msg, session):
        global connected
        if msg.type == sockjs.MSG_OPEN:
            connected = True
        if msg.type == sockjs.MSG_CLOSE:
            connected = False

    app = web.Application()
    sockjs.add_endpoint(app, handler)
    return web.AppRunner(app)

async def run_server(ready):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    runner = aiohttp_server()
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()
    ready.set()
    # emulates loop.run_forever()
    await asyncio.get_running_loop().create_future()

def start_server():
    ready = threading.Event()
    threading.Thread(target=asyncio.run, args=(run_server(ready),),
                     daemon=True).start()
    ready.wait()
Please upgrade sockjs to the newest version.
It doesn't require passing the loop anymore.
When working with Autobahn and WAMP before, I had been using the subclassing approach, but I stumbled over the decorator/function approach, which I really prefer over subclassing.
However, I have a function that is called by external hardware (via a callback), and this function needs to publish to the Crossbar.io router whenever it is called.
This is how I've done it, keeping a reference to the session right after on_join -> async def joined(session, details) was called:
from autobahn.asyncio.component import Component
from autobahn.asyncio.component import run

global_session = None

comp = Component(
    transports=u"ws://localhost:8080/ws",
    realm=u"realm1",
)

def callback_from_hardware(msg):
    if global_session is None:
        return
    global_session.publish(u'com.someapp.somechannel', msg)

@comp.on_join
async def joined(session, details):
    global global_session
    global_session = session
    print("session ready")

if __name__ == "__main__":
    run([comp])
This approach of keeping a reference after the component has joined the connection feels a bit "odd", however. Is there a different approach to this? Can it be done some other way?
If not, then subclassing feels a bit more "right", with all the application-dependent code inside that subclass (but keeping everything of my app within one subclass also feels odd).
I would recommend using an asynchronous queue instead of a shared session:
import asyncio

from autobahn.asyncio.component import Component
from autobahn.asyncio.component import run

queue = asyncio.queues.Queue()

comp = Component(
    transports=u"ws://localhost:8080/ws",
    realm=u"realm1",
)

def callback_from_hardware(msg):
    queue.put_nowait((u'com.someapp.somechannel', msg,))

@comp.on_join
async def joined(session, details):
    print("session ready")
    while True:
        topic, message, = await queue.get()
        print("Publishing: topic: `%s`, message: `%s`" % (topic, message))
        session.publish(topic, message)

if __name__ == "__main__":
    callback_from_hardware("dassdasdasd")
    run([comp])
There are multiple approaches you could take here, though the simplest IMO would be to use Crossbar's HTTP bridge: whenever an event callback is received from your hardware, you can just make an HTTP POST request to Crossbar and your message will get delivered.
More details about the HTTP bridge: https://crossbar.io/docs/HTTP-Bridge-Publisher/
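As a rough sketch of that idea (the endpoint path and port depend entirely on how the bridge is configured in your Crossbar node, so treat the URL below as an assumption):

import requests

def callback_from_hardware(msg):
    # POST the event to the assumed bridge endpoint; the JSON body format
    # follows the HTTP Bridge Publisher docs linked above
    requests.post(
        "http://localhost:8080/publish",
        json={"topic": u"com.someapp.somechannel", "args": [msg]},
    )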
The docs say to reuse the ClientSession:
Don’t create a session per request. Most likely you need a session per
application which performs all requests altogether.
A session contains a connection pool inside, connection reusage and
keep-alives (both are on by default) may speed up total performance.
But there doesn't seem to be any explanation in the docs about how to do this. There is one example that's maybe relevant, but it does not show how to reuse the pool elsewhere: http://aiohttp.readthedocs.io/en/stable/client.html#keep-alive-connection-pooling-and-cookie-sharing
Would something like this be the correct way to do it?
@app.listener('before_server_start')
async def before_server_start(app, loop):
    app.pg_pool = await asyncpg.create_pool(**DB_CONFIG, loop=loop, max_size=100)
    app.http_session_pool = aiohttp.ClientSession()

@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    app.http_session_pool.close()
    app.pg_pool.close()

@app.post("/api/register")
async def register(request):
    # json validation
    async with app.pg_pool.acquire() as pg:
        await pg.execute()  # create unactivated user in db
    async with app.http_session_pool as session:
        # TODO send activation email using SES API
        async with session.post('http://httpbin.org/post', data=b'data') as resp:
            print(resp.status)
            print(await resp.text())
    return HTTPResponse(status=204)
There are a few things I think can be improved:
1)
An instance of ClientSession is one session object. This one session contains a pool of connections, but it's not a "session pool" itself. I would suggest renaming http_session_pool to http_session, or maybe client_session.
2)
Session's close() method is a coroutine. You should await it:
await app.client_session.close()
Or even better (IMHO), instead of thinking about how to properly open/close the session, use a standard async context manager, awaiting __aenter__ / __aexit__:
@app.listener('before_server_start')
async def before_server_start(app, loop):
    # ...
    app.client_session = await aiohttp.ClientSession().__aenter__()

@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    await app.client_session.__aexit__(None, None, None)
    # ...
3)
Pay attention to this info:
However, if the event loop is stopped before the underlying connection
is closed, a ResourceWarning: unclosed transport warning is emitted
(when warnings are enabled).
To avoid this situation, a small delay must be added before closing
the event loop to allow any open underlying connections to close.
I'm not sure it's mandatory in your case, but there's nothing bad in adding await asyncio.sleep(0) inside after_server_stop, as the documentation advises:
@app.listener('after_server_stop')
async def after_server_stop(app, loop):
    # ...
    await asyncio.sleep(0)  # http://aiohttp.readthedocs.io/en/stable/client.html#graceful-shutdown
Upd:
A class that implements __aenter__ / __aexit__ can be used as an async context manager (that is, in an async with statement). It allows you to perform some actions before executing the inner block and after it. This is very similar to regular context managers, but asyncio-related. And just like a regular context manager, an async one can be used directly (without async with) by manually awaiting __aenter__ / __aexit__.
Why do I think it's better to create/free the session using __aenter__ / __aexit__ manually instead of using close(), for example? Because we shouldn't have to worry about what actually happens inside __aenter__ / __aexit__. Imagine that in a future version of aiohttp, creating a session changes and you need to await open(), for example. If you use __aenter__ / __aexit__, you won't need to change your code at all.
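A toy illustration of that point (this is not aiohttp's API, just the general shape of an async context manager):

class Resource:
    async def __aenter__(self):
        # whatever setup the library needs, now or in the future, happens here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # ...and teardown happens here
        pass

# with the statement:  async with Resource() as r: ...
# or manually, as the answer suggests:
#   r = await Resource().__aenter__()
#   await r.__aexit__(None, None, None)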
It seems there is no session pool in aiohttp.
// just posting some official docs
persistent session
Here is the persistent-session usage demo from the official site:
https://docs.aiohttp.org/en/latest/client_advanced.html#persistent-session
async def persistent_session(app):
    app['PERSISTENT_SESSION'] = session = aiohttp.ClientSession()
    yield
    await session.close()

app.cleanup_ctx.append(persistent_session)

async def my_request_handler(request):
    session = request.app['PERSISTENT_SESSION']
    async with session.get("http://python.org") as resp:
        print(resp.status)
//TODO: a full runnable demo code
connection pool
and it has a connection pool:
https://docs.aiohttp.org/en/latest/client_advanced.html#connectors
conn = aiohttp.TCPConnector()
# conn = aiohttp.TCPConnector(limit=30)
# conn = aiohttp.TCPConnector(limit=0)  # no limit; default is 100
# conn = aiohttp.TCPConnector(limit_per_host=30)  # default is 0 (no per-host limit)
session = aiohttp.ClientSession(connector=conn)
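Combining the two official snippets above is straightforward (assuming the same app object as in the persistent-session demo): create a bounded connector inside the cleanup context and hand it to the session:

async def persistent_session(app):
    conn = aiohttp.TCPConnector(limit=30)  # bound the connection pool, per the docs above
    app['PERSISTENT_SESSION'] = session = aiohttp.ClientSession(connector=conn)
    yield
    await session.close()

app.cleanup_ctx.append(persistent_session)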
I found this question after searching on Google for how to reuse an aiohttp ClientSession instance, because my code was triggering this warning message: UserWarning: Creating a client session outside of coroutine is a very dangerous idea
This code may not solve the above problem, though it is related. I am new to asyncio and aiohttp, so this may not be best practice. It's the best I could come up with after reading a lot of seemingly conflicting information.
I created a class ResourceManager, taken from the Python docs, that opens a context.
The ResourceManager instance handles the opening and closing of the aiohttp ClientSession instance via the magic methods __aenter__ and __aexit__ with BaseScraper.set_session and BaseScraper.close_session wrapper methods.
I was able to reuse a ClientSession instance with the following code.
The BaseScraper class also has methods for authentication. It depends on the lxml third-party package.
import asyncio
from time import time
from contextlib import contextmanager, AbstractContextManager, ExitStack

import aiohttp
import lxml.html


class ResourceManager(AbstractContextManager):
    # Code taken from Python docs: 29.6.2.4. of https://docs.python.org/3.6/library/contextlib.html
    def __init__(self, scraper, check_resource_ok=None):
        self.acquire_resource = scraper.acquire_resource
        self.release_resource = scraper.release_resource
        if check_resource_ok is None:
            def check_resource_ok(resource):
                return True
        self.check_resource_ok = check_resource_ok

    @contextmanager
    def _cleanup_on_error(self):
        with ExitStack() as stack:
            stack.push(self)
            yield
            # The validation check passed and didn't raise an exception
            # Accordingly, we want to keep the resource, and pass it
            # back to our caller
            stack.pop_all()

    def __enter__(self):
        resource = self.acquire_resource()
        with self._cleanup_on_error():
            if not self.check_resource_ok(resource):
                msg = "Failed validation for {!r}"
                raise RuntimeError(msg.format(resource))
        return resource

    def __exit__(self, *exc_details):
        # We don't need to duplicate any of our resource release logic
        self.release_resource()


class BaseScraper:
    login_url = ""
    login_data = dict()  # dict of key, value pairs to fill the login form
    loop = asyncio.get_event_loop()

    def __init__(self, urls):
        self.urls = urls
        self.acquire_resource = self.set_session
        self.release_resource = self.close_session

    async def _set_session(self):
        self.session = await aiohttp.ClientSession().__aenter__()

    def set_session(self):
        set_session_attr = self.loop.create_task(self._set_session())
        self.loop.run_until_complete(set_session_attr)
        return self  # variable after "as" becomes instance of BaseScraper

    async def _close_session(self):
        await self.session.__aexit__(None, None, None)

    def close_session(self):
        close_session = self.loop.create_task(self._close_session())
        self.loop.run_until_complete(close_session)

    def __call__(self):
        fetch_urls = self.loop.create_task(self._fetch())
        return self.loop.run_until_complete(fetch_urls)

    async def _get(self, url):
        async with self.session.get(url) as response:
            result = await response.read()
        return url, result

    async def _fetch(self):
        tasks = (self.loop.create_task(self._get(url)) for url in self.urls)
        start = time()
        results = await asyncio.gather(*tasks)
        print(
            "time elapsed: {} seconds \nurls count: {}".format(
                time() - start, len(self.urls)
            )
        )
        return results

    @property
    def form(self):
        """Create and return form for authentication."""
        form = aiohttp.FormData(self.login_data)
        get_login_page = self.loop.create_task(self._get(self.login_url))
        url, login_page = self.loop.run_until_complete(get_login_page)
        login_html = lxml.html.fromstring(login_page)
        hidden_inputs = login_html.xpath(r'//form//input[@type="hidden"]')
        login_form = {x.attrib["name"]: x.attrib["value"] for x in hidden_inputs}
        for key, value in login_form.items():
            form.add_field(key, value)
        return form

    async def _login(self, form):
        async with self.session.post(self.login_url, data=form) as response:
            if response.status != 200:
                response.raise_for_status()
            print("logged into {}".format(self.login_url))
            await response.release()

    def login(self):
        post_login_form = self.loop.create_task(self._login(self.form))
        self.loop.run_until_complete(post_login_form)


if __name__ == "__main__":
    urls = ("http://example.com",) * 10
    base_scraper = BaseScraper(urls)
    with ResourceManager(base_scraper) as scraper:
        for url, html in scraper():
            print(url, len(html))
I am downloading JSON documents from an API using the asyncio module. The crux of my question concerns the following event loop, implemented like this:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future(klass.download_all())
loop.run_until_complete(main_task)
and download_all() implemented as this instance method of a class, which already has downloader objects created and available to it, and thus calls each respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that, in the downloaders' download methods, when I use asyncio.ensure_future instead of the await syntax, it runs way faster, i.e. seemingly more "asynchronously", as I can see from the logs?
This works because of the way I have set up detection of all the tasks that are still pending, not letting the download_all method complete, and repeatedly calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently. So how come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
    # Not using await here, as I am "supposed" to
    asyncio.ensure_future(self.write(response_json, self.path))
    return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With the full code implemented against a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up, as it was taking so long).
When you await in each iteration of the for loop, you wait for each download to finish before starting the next one.
When you use ensure_future, on the other hand, you don't wait: it creates a task for downloading each file, and all of them are then awaited together in the second loop.
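A toy contrast of the two patterns (do_io is a stand-in coroutine for any awaitable work, such as the write call above):

import asyncio

async def do_io(item):
    await asyncio.sleep(1)  # stand-in for a download or a file write

async def sequential(items):
    for item in items:
        await do_io(item)  # each call finishes before the next starts: ~len(items) seconds

async def concurrent(items):
    tasks = [asyncio.ensure_future(do_io(item)) for item in items]
    await asyncio.gather(*tasks)  # all calls run concurrently: ~1 second total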