Using promises with Transcrypt

I'm having great fun with Transcrypt, a fantastic Python 3 to JavaScript compiler available as a Python module. Most of my code is synchronous, but I've had no problem doing things with setTimeout and XHR requests. Now I've started using PouchDB for local persistence and am trying to find a pretty way to handle promises. At the moment, I am doing this to write to a PouchDB instance:
def db_put():
    def put_success(doc):
        print("Put a record in the db. Id: ", doc.id, "rev: ", doc.rev)

    def put_failure(error):
        print('Failed to put a record in the db. Error: ', error)

    strHello = {'_id': "1", 'title': 'hello db'}
    db.put(strHello) \
        .then(put_success) \
        .catch(put_failure)

db = PouchDB('test_db')
document.getElementById("db_put").addEventListener("click", db_put)
This works fine, but I am curious to know a few things about promises being transcrypted from Python to JavaScript (this may save me from madness):
Are there preferable, more 'pythonic' ways to handle this?
Can one make use of ES7's async/await through Transcrypt? Since Transcrypt allows JavaScript functions to be accessed directly from within the Python code, I thought there might be some trick here that I'm not getting.
Thanks!

About the promises
The way you dealt with promises looks pythonic enough to me.
In case you get tired of the line continuations where 'fluent' notation (call chaining) is involved, there's an alternative to using \. This alternative is used e.g. in the d3js_demo that comes with Transcrypt, in the following fragment:
self.svg = d3.select('body'
    ).append('svg'
    ).attr('width', self.width
    ).attr('height', self.height
    ).on('mousemove', self.mousemove
    ).on('mousedown', self.mousedown)
Since many .then's can be chained as well, one could write:
db.put(strHello
    ).then(put_success
    ).then(put_success_2
    ).then(put_success_3
    # ... etc.
    ).catch(put_failure)
After some getting used to, this immediately makes clear that call chaining is involved. But it is only a matter of formatting.
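To see why this formatting parses, here is a minimal plain-Python sketch of a chainable object, formatted the same way. The `Chain` class is a hypothetical stand-in for illustration only, not the PouchDB/Transcrypt API:

```python
# Hypothetical minimal "thenable": each .then() applies a function
# and returns a new Chain, so calls can be strung together.
class Chain:
    def __init__(self, value):
        self.value = value

    def then(self, fn):
        return Chain(fn(self.value))

# The closing paren of each call sits at the start of the next line,
# so no backslash continuations are needed.
result = Chain(2
    ).then(lambda x: x + 3
    ).then(lambda x: x * 10
    ).value
# result is 50
```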
About async/await
They aren't supported yet, but the plan is to add them soon after JS officially has them (JS7, I hope). For now you can use __pragma__ ('js', '{}', '''<any javascript code>''') as a workaround.

Async/await has been supported for some time now. You can use it to deal with Promises. For example:
Enable JQuery usage:
__pragma__ ('alias', 'S', '$')
Define a function which returns a Promise, in this case an Ajax call:
def read(url: str) -> 'Promise':
    deferred = S.Deferred()
    S.ajax({'type': "POST", 'url': url, 'data': {},
            'success': lambda d: deferred.resolve(d),
            'error': lambda e: deferred.reject(e)})
    return deferred.promise()
Use the asynchronous code as if it were synchronous:
async def readALot():
    try:
        result1 = await read("url_1")
        result2 = await read("url_2")
    except Exception:
        console.warn("Reading a lot failed")
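The same pattern, wrapping a callback-based API in something awaitable, can be sketched in standard asyncio. Here `fake_ajax` is a hypothetical synchronous stand-in for the jQuery call, used only to make the sketch self-contained:

```python
import asyncio

def fake_ajax(url, success, error):
    # Hypothetical stand-in for a callback-based API like S.ajax:
    # it immediately invokes the success callback.
    success(f"data from {url}")

def read(url):
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Adapt the callbacks to a Future -- the asyncio analogue of
    # deferred.resolve / deferred.reject.
    fake_ajax(url, success=fut.set_result, error=fut.set_exception)
    return fut

async def read_a_lot():
    # Use the asynchronous code as if it were synchronous.
    result1 = await read("url_1")
    result2 = await read("url_2")
    return result1, result2

results = asyncio.run(read_a_lot())
```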
Happy usage of Python in the browser!

Related

What's the right way to use PonyORM with FastAPI?

For a personal project I am using PonyORM with FastAPI; is there a classy way to keep a db_session through the whole async lifecycle call of an endpoint?
The documentation of PonyORM talks about using the decorator and yield, but it didn't work for me, so after looking at other GitHub projects I found this workaround, which is working fine.
But I don't really know what's happening behind the scenes, and why the documentation of Pony isn't accurate about the async topic.
def _enter_session():
    session = db_session(sql_debug=True)
    Request.pony_session = session
    session.__enter__()

def _exit_session():
    session = getattr(Request, 'pony_session', None)
    if session is not None:
        session.__exit__()

@app.middleware("http")
async def add_pony(request: Request, call_next):
    _enter_session()
    response = await call_next(request)
    _exit_session()
    return response
and then in a dependency, for example:
async def current_user(
        username: str = Depends(current_user_from_token)) -> User:
    with Request.pony_session:
        # db actions
and in an endpoint call :
@router.post("/token", response_model=Token)
async def login_for_access_token(
        request: Request,
        user_agent: Optional[str] = Header(None),
        form_data: OAuth2PasswordRequestForm = Depends()):
    status: bool = authenticate_user(
        form_data.username,
        form_data.password,
        request.client.host,
        user_agent)
@db_session
def authenticate_user(
        username: str,
        password: str,
        client_ip: str = 'Undefined',
        client_app: str = 'Undefined'):
    user: User = User.get(email=username)
If you guys have a better way or a good explanation, I would love to hear about it :)
I'm kind of a PonyORM developer and a FastAPI user.
The problem with async and Pony is that Pony uses transactions, which in our understanding are atomic. We also use a thread-local cache that could end up being used by another session if the context switches to another coroutine.
I agree that we should add information about this to the documentation.
To be sure everything will be okay, you should use db_session as a context manager and make sure that you don't have async calls inside that block of code.
If your endpoints are not asynchronous, you can also use the db_session decorator for them.
In Pony we agree that using ContextVar instead of Local should help with some cases.
The answer in one sentence is: use little short-living sessions and don't interrupt them with async.
Try using a standard FastAPI dependency:
from fastapi import Depends

async def get_pony():
    with db_session(sql_debug=True) as session:
        yield session

async def current_user(
        username: str = Depends(current_user_from_token),
        pony_session = Depends(get_pony)) -> User:
    with pony_session:
        # db actions
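For readers wondering why the yield-based dependency works: a generator dependency follows the same protocol as contextlib's context managers, where everything before the yield runs on the way into the endpoint and everything after runs on the way out. A stdlib-only sketch, with `db_session_sketch` as a hypothetical stand-in for Pony's db_session:

```python
from contextlib import contextmanager

events = []

@contextmanager
def db_session_sketch():
    events.append('enter')        # e.g. open the transaction
    try:
        yield 'session'           # the endpoint's work happens here
    finally:
        events.append('exit')     # e.g. commit/rollback and close

with db_session_sketch() as s:
    events.append(f'work with {s}')
```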

cx_oracle with Asyncio in Python with SQLAlchemy

I am confused by the different threads posted at different times on this topic.
Is the asyncio feature available with the latest version (as of Dec 2019) of cx_Oracle?
I am using the code snippet below, which is working, but I'm not sure if this is the right way to do an async call to Oracle. Any pointer will be helpful.
import asyncio

async def sqlalchemyoracle_fetch():
    conn_start_time = time()
    oracle_tns_conn = 'oracle+cx_oracle://{username}:{password}@{tnsname}'
    engine = create_engine(
        oracle_tns_conn.format(
            username=USERNAME,
            password=PWD,
            tnsname=TNS,
        ),
        pool_recycle=50,
    )
    for x in test:
        # calling a custom query_randomizer function which executes Oracle
        # queries using the parameters passed through test, which is a list
        pd.read_sql(query_randomizer(x), engine)

async def main():
    tasks = [sqlalchemyoracle_fetch()]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    result = asyncio.run(main())
I use the cx_Oracle library but not SQLAlchemy. As of v8.2, asyncio is not supported.
This issue tracks and confirms it - https://github.com/oracle/python-cx_Oracle/issues/178.
And no, your code block does not run asynchronously: although it is defined using async def, there is no statement in the code block that is actually asynchronous. To be asynchronous, your async function either needs to await another async function (one that already supports async operations) or use yield to indicate a possible context switch. Neither of these happens in your code block.
You can try the following package which states to have implemented async support for cx_Oracle. https://pypi.org/project/cx-Oracle-async/
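Until a driver gains native asyncio support, a common interim pattern is to push its blocking calls onto a thread pool with run_in_executor. In the sketch below, `fetch_rows` is a hypothetical stand-in for a blocking cx_Oracle query, not the real driver API:

```python
import asyncio
import time

def fetch_rows(query):
    # Hypothetical stand-in for a blocking cursor.execute/fetchall call.
    time.sleep(0.1)
    return [f"row for {query}"]

async def fetch_async(query):
    loop = asyncio.get_running_loop()
    # Off-load the blocking call to the default thread pool, so the
    # event loop stays free while the query runs.
    return await loop.run_in_executor(None, fetch_rows, query)

async def main():
    # The three queries now overlap instead of running back to back.
    return await asyncio.gather(*(fetch_async(q) for q in ["q1", "q2", "q3"]))

results = asyncio.run(main())
```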

Is it possible to avoid .await with Elastic4s

I'm using Elastic4s (Scala Client for ElasticSearch).
I can retrieve multiGet results with await:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val resp = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}.await
val result = resp.items
val result = resp.items
But I've read that in practice it's better to avoid this .await.
How can we do that? Thanks.
You shouldn't use .await because you're blocking the thread waiting for the future to return.
Instead you should handle the future like you would any other API that returns futures - whether that be reactive-mongo, akka.ask or whatever.
I realise this is old, but in case anyone else comes across it, the simplest way to handle this would be:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val respFuture = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}
respFuture.map(resp => ...[do stuff with resp.items])
The key thing here is that your processing actually takes place in a subthread, which Scala takes care of calling for you when, and only when, the data is ready for you. The caller keeps running immediately after respFuture.map(). Whatever your function in map() returns is passed back as a new Future; if you don't need it, then use onComplete or andThen instead, as they make error handling a little easier.
See https://docs.scala-lang.org/overviews/core/futures.html for more details on handling Futures, and https://alvinalexander.com/scala/concurrency-with-scala-futures-tutorials-examples for some good examples.

Using the decorator approach with AutobahnWS, how to publish messages independently of subscription callbacks and their session reference?

When working with Autobahn and WAMP before, I used the subclassing approach, but then I stumbled over the decorator/functions approach, which I really prefer over subclassing.
However, I have a function that is called from external hardware (via callback), and this function needs to publish to the Crossbar.io router whenever it is called.
This is how I've done it, keeping a reference to the session right after on_join -> async def joined(session, details) was called:
from autobahn.asyncio.component import Component
from autobahn.asyncio.component import run

global_session = None

comp = Component(
    transports=u"ws://localhost:8080/ws",
    realm=u"realm1",
)

def callback_from_hardware(msg):
    if global_session is None:
        return
    global_session.publish(u'com.someapp.somechannel', msg)

@comp.on_join
async def joined(session, details):
    global global_session
    global_session = session
    print("session ready")

if __name__ == "__main__":
    run([comp])
This approach of keeping a reference around after the component has joined feels a bit "odd", however. Is there a different approach to this? Can it be done some other way?
If not, then subclassing feels a bit more "right", with all the application-dependent code inside that subclass (though keeping everything of my app within one subclass also feels odd).
I would recommend using an asynchronous queue instead of a shared session:
import asyncio
from autobahn.asyncio.component import Component
from autobahn.asyncio.component import run

queue = asyncio.queues.Queue()

comp = Component(
    transports=u"ws://localhost:8080/ws",
    realm=u"realm1",
)

def callback_from_hardware(msg):
    queue.put_nowait((u'com.someapp.somechannel', msg))

@comp.on_join
async def joined(session, details):
    print("session ready")
    while True:
        topic, message = await queue.get()
        print("Publishing: topic: `%s`, message: `%s`" % (topic, message))
        session.publish(topic, message)

if __name__ == "__main__":
    callback_from_hardware("dassdasdasd")
    run([comp])
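The decoupling in this answer can be demonstrated with the standard library alone. The sketch below uses hypothetical names and no Autobahn at all: the synchronous hardware callback only touches the queue, while a single consumer coroutine (standing in for the on_join loop) is the only code that would touch the session:

```python
import asyncio

def make_hardware_callback(queue):
    # The synchronous callback needs only the queue, never the session.
    def callback_from_hardware(msg):
        queue.put_nowait(('com.someapp.somechannel', msg))
    return callback_from_hardware

async def consumer(queue, published):
    # Stand-in for the on_join loop that owns the session and publishes.
    while True:
        topic, message = await queue.get()
        if message is None:          # sentinel to end the sketch
            break
        published.append((topic, message))

async def main():
    queue = asyncio.Queue()          # created inside the running loop
    published = []
    task = asyncio.ensure_future(consumer(queue, published))
    callback = make_hardware_callback(queue)
    callback("reading-1")
    callback("reading-2")
    queue.put_nowait(('stop', None)) # shut the consumer down
    await task
    return published

published = asyncio.run(main())
```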
There are multiple approaches you could take here, though the simplest IMO would be to use Crossbar's HTTP bridge. Whenever an event callback is received from your hardware, you can just make an HTTP POST request to Crossbar and your message will get delivered.
More details about the HTTP bridge: https://crossbar.io/docs/HTTP-Bridge-Publisher/

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSONs from an API using the asyncio module. The crux of my question is the following: with the event loop implemented as this:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future(klass.download_all())
loop.run_until_complete(main_task)
and download_all() implemented as this instance method of a class, which already has downloader objects created and available to it, and thus calls each respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that in the downloaders' download methods, when I use asyncio.ensure_future instead of the await syntax, it runs way faster, that is, seemingly more "asynchronously", as I can see from the logs?
This works because of the way I have set up detecting all the tasks that are still pending, not letting the download_all method complete, and repeatedly calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently. How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
    # Not using await here, as I am "supposed" to
    asyncio.ensure_future(self.write(response_json, self.path))
    return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With full code implemented and a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up as it was taking so long).
When you await in each iteration of the for loop, the loop waits for every download to finish before starting the next one.
When you use ensure_future instead, it doesn't wait: it creates tasks that download all the files concurrently, and then awaits all of them in the second loop.
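The difference this answer describes can be reproduced with a toy benchmark, where asyncio.sleep stands in for network latency and no real API is involved:

```python
import asyncio
import time

async def download(i):
    await asyncio.sleep(0.1)   # simulate one network round trip
    return i

async def sequential():
    # Awaiting inside the loop: each download finishes before the next starts,
    # so five downloads take roughly 5 x 0.1 s.
    return [await download(i) for i in range(5)]

async def concurrent():
    # ensure_future schedules all downloads at once; gather then awaits them,
    # so five downloads take roughly 0.1 s total.
    tasks = [asyncio.ensure_future(download(i)) for i in range(5)]
    return await asyncio.gather(*tasks)

t0 = time.perf_counter()
seq = asyncio.run(sequential())
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
con = asyncio.run(concurrent())
t_con = time.perf_counter() - t0
```

Both variants return the same results; only the elapsed time differs.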