Gathering coin volumes - Is my code running asynchronously? - python-asyncio

I'm fairly new to programming in Python; I've been programming for about half a year. I've decided to try to build a functional trading bot. While coding this bot, I stumbled upon the asyncio module. I would really like to understand the module better, but it's hard to find any simple tutorials or documentation about asyncio.
For my script I'm gathering the volume for each coin. This works perfectly, but it takes a really long time to gather all the volumes. I would like to ask whether my script is running synchronously, and if so, how do I fix this? I'm using an API wrapper to communicate with the Binance exchange.
import binance
import asyncio
import time

s = time.time()

names = [name for name in binance.ticker_prices()]  # Gathering all the coin names

loop = asyncio.get_event_loop()

async def get_volume(name):
    async def get_data():
        return binance.ticker_24hr(name)  # Returns per coin a dict of the data of the last 24hr
    data = await get_data()
    return (name, data['volume'])

tasks = [asyncio.ensure_future(get_volume(name)) for name in names]
results = loop.run_until_complete(asyncio.gather(*tasks))

print('Total time:', time.time() - s)

Since binance.ticker_24hr does not look like a coroutine, it is almost certainly blocking the event loop and therefore preventing asyncio.gather from doing its job. As a quick fix, you can use run_in_executor to run the blocking function in a separate thread:
async def get_volume(name):
    loop = asyncio.get_event_loop()
    data = await loop.run_in_executor(None, binance.ticker_24hr, name)
    return name, data['volume']
This will work just fine for a reasonable number of parallel tasks. The downside is that it uses threads, so it might not scale to a huge number of parallel requests (or it would require unnecessary waiting). The correct solution in the long run is to use a library that natively supports asyncio.
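If the number of symbols grows, one optional refinement (not from the answer above, just a sketch) is to pass an explicit, bounded ThreadPoolExecutor instead of None, so the blocking binance.ticker_24hr calls never occupy more than a fixed number of threads:
import asyncio
from concurrent.futures import ThreadPoolExecutor

import binance  # same API wrapper as in the question

executor = ThreadPoolExecutor(max_workers=20)  # hypothetical cap, tune as needed

async def get_volume(name):
    loop = asyncio.get_event_loop()
    # run_in_executor accepts any concurrent.futures.Executor, so the blocking
    # call is confined to the bounded pool above instead of the default one
    data = await loop.run_in_executor(executor, binance.ticker_24hr, name)
    return name, data['volume']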

Maarten, firstly: you are calling get_ticker for every symbol, which means you're making many unnecessary requests. If you call it without a symbol value, you get all tickers in one request; that removes the need for any loops or async, as long as you aren't performing other tasks. It looks like the binance library you're using doesn't support this, but you can use python-binance to do it:
return client.get_ticker()
That said, I've been testing an asyncio version of python-binance. It's currently in a feature branch if you want to try it.
pip install git+https://github.com/sammchardy/python-binance#feature/asyncio
Import the asyncio version of the client and initialise it:
from binance.client_async import AsyncClient as Client
client = Client("<api_key>", "<api_secret>")
Then you can await the calls to get the ticker for a particular symbol
return await client.get_ticker(symbol=name)
Or for all symbol tickers don't pass the symbol parameter
return await client.get_ticker()
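Putting those pieces together, a rough sketch (based only on the snippets above; the feature branch's actual API may differ, and the 'volume' key is assumed from the question) of fetching several tickers concurrently could look like:
import asyncio
from binance.client_async import AsyncClient as Client

client = Client("<api_key>", "<api_secret>")

async def get_volume(name):
    data = await client.get_ticker(symbol=name)  # per-symbol 24hr stats
    return name, data['volume']

async def main(names):
    # schedule all ticker requests concurrently and wait for every result
    return await asyncio.gather(*(get_volume(name) for name in names))

names = ['BTCUSDT', 'ETHUSDT']  # example symbols
print(asyncio.get_event_loop().run_until_complete(main(names)))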
Hope that helps

Related

Can I use multiple event loops in a program where I also use multiprocessing module

Thanks for any reply in advance.
I have the entry-point program main.py:
import asyncio
from loguru import logger
from multiprocessing import Process
from app.events import type_a_tasks, type_b_tasks, type_c_tasks

def run_task(task):
    loop = asyncio.get_event_loop()
    loop.run_until_complete(task())
    loop.run_forever()

def main():
    processes = list()
    processes.append(Process(target=run_task, args=(type_a_tasks,)))
    processes.append(Process(target=run_task, args=(type_b_tasks,)))
    processes.append(Process(target=run_task, args=(type_c_tasks,)))
    for process in processes:
        process.start()
        logger.info(f"Started process id={process.pid}, name={process.name}")
    for process in processes:
        process.join()

if __name__ == '__main__':
    main()
where the different types of tasks are similarly defined, for example type_a_tasks are:
import asyncio
from . import business_1, business_2, business_3, business_4, business_5, business_6

async def type_a_tasks():
    tasks = list()
    tasks.append(asyncio.create_task(business_1.main()))
    tasks.append(asyncio.create_task(business_2.main()))
    tasks.append(asyncio.create_task(business_3.main()))
    tasks.append(asyncio.create_task(business_4.main()))
    tasks.append(asyncio.create_task(business_5.main()))
    tasks.append(asyncio.create_task(business_6.main()))
    await asyncio.wait(tasks)
    return tasks
where the main() functions of business_1 through business_6 are Future objects provided by asyncio, in which I implemented my business code.
Is my usage of multiprocessing and asyncio event loops above the correct way of doing it?
I am doing this because I have a lot of asynchronous tasks to perform, but it doesn't seem appropriate to put them all in one event loop, so I divided them into three parts (a, b and c), and I hope they can run in three different processes to exploit multiple CPU cores while still taking advantage of asyncio features.
I tried running my code, and the log records show that there actually are different processes, but they all seem to use the same thread/event loop (I know this from adding process_id and thread_id to the loguru format).
This seems OK. Just use asyncio.run(task()) inside run_task: it is simpler and there is no need to call run_forever (also, with the run_forever call, your processes will never join the parent one).
IDs for other objects may repeat across processes; if you want to be sure, add the result of calling os.getpid() in the body of run_task to your logging.
(If those happen to be the same, it means that multiprocessing is somehow using a "dummy" backend due to some configuration in your project, which should not happen anyway.)
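A minimal sketch of run_task rewritten as suggested (the os.getpid() logging is just the extra check mentioned above; this assumes each type_x_tasks coroutine only returns when its work is done):
import asyncio
import os
from loguru import logger

def run_task(task):
    # each child process creates, runs and closes its own event loop here
    logger.info(f"Running {task.__name__} in OS process {os.getpid()}")
    asyncio.run(task())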

Inter-process communication between async and sync tasks using PyZMQ

In a single process I have a task running on a thread that produces values and broadcasts them, and several consumer async tasks that run concurrently in an asyncio loop.
I found an issue on PyZMQ's GitHub asking about async <-> sync communication with inproc sockets, which is what I also wanted, and the answer was to use .shadow(ctx.underlying) when creating the async ZMQ Context.
I prepared this example and it seems to be working fine:
import signal
import asyncio
import zmq
import threading
import zmq.asyncio
import sys
import time
import json

def producer(ctrl):
    # delay first push to give asyncio loop time
    # to start
    time.sleep(1)
    ctx = ctrl["ctx"]
    s = ctx.socket(zmq.PUB)
    s.bind(ctrl["endpoint"])
    v = 0
    while ctrl["run"]:
        payload = {"value": v, "timestamp": time.time()}
        msg = json.dumps(payload).encode("utf-8")
        s.send(msg)
        v += 1
        time.sleep(5)
    print("Bye")

def main():
    endpoint = "inproc://testendpoint"
    ctx = zmq.Context()
    actx = zmq.asyncio.Context.shadow(ctx.underlying)
    ctrl = {"run": True, "ctx": ctx, "endpoint": endpoint}
    th = threading.Thread(target=producer, args=(ctrl,))
    th.start()
    try:
        asyncio.run(amain(actx, endpoint))
    except KeyboardInterrupt:
        pass
    print("Stopping thread")
    ctrl["run"] = False
    th.join()

async def amain(ctx, endpoint):
    s = ctx.socket(zmq.SUB)
    s.subscribe("")
    s.connect(endpoint)
    loop = asyncio.get_running_loop()

    def stop():
        try:
            print("Closing zmq async socket")
            s.close()
        except:
            pass
        raise KeyboardInterrupt

    loop.add_signal_handler(signal.SIGINT, stop)
    while True:
        event = await s.poll(1000)
        if event & zmq.POLLIN:
            msg = await s.recv()
            payload = json.loads(msg.decode("utf-8"))
            print("%f: %d" % (payload["timestamp"], payload["value"]))

if __name__ == "__main__":
    sys.exit(main())
Is it safe to use inproc://* between a thread and an asyncio task in this way? The 0MQ context is thread safe and I'm not sharing sockets between the thread and the asyncio task, so I would say in general that this is thread safe, right? Or am I missing something that I should consider?
Q : "Is it safe to use inproc://* between a thread and asyncio task in this way?"
A : First and foremost, I might be awfully wrong (not only here), yet having worked with ZeroMQ since native API 2.1.1+, I dare claim that unless the newer "improvements" have lost the core principles (the ZeroMQ ZMTP/RFC-documented properties for building a legal implementation of the still-valid ZMTP arsenal), the answer here shall be YES, as long as the newer releases of the pyzmq binding kept all the mandatory properties of the inproc:-Transport-Class without compromise.
Q : "The 0MQ context is thread safe and I'm not sharing sockets between the thread and the asyncio task, so I would say in general that this is thread safe, right?"
A : Here my troubles start - ZeroMQ implementations have always been developed based on Martin SUSTRIK's & Pieter HINTJENS' Zen-of-Zero -- i.e. also Zero-sharing -- so never sharing was the principle (though "shared" zmq.Context instances were no problem to use from different threads, in contrast to zmq.Socket instances).
Python (since ever & still valid in 2022-Q1) used & still uses a total [CONCURRENT]-code-execution avoider -- the GIL lock -- which principally prevents any & all kinds of problems arising from [CONCURRENT]-code-execution from ever happening inside the Python GIL-lock re-[SERIAL]-ised flow of code-execution, so even if the asyncio part is built as a pythonic (non-destructive) part of the ecosystem, your code shall never "meet" any kind of concurrency-related issue: unless it gains the GIL lock, it does nothing but "hang in NOP-cracking" (nut-cracking in an idle loop).
Being inside the same process, there seems to be no advantage in spawning another Context instance at all (this used to be a rock-solid certainty since ever: never increase any kind of overhead - Zen-of-Zero means (almost) Zero overhead...). The Sig/Msg core engine was, if performance or latency needs required it, powered with more zmq.Context(IOthreads) upon instantiation, yet those were zmq.Context-owned threads, not Python-GIL-governed/(b)locked threads, so performance scaled pretty well, without wasting any RAM/HWM/buffer resources, without growing overheads, and very efficiently, as the IO-threads were co-located only for actual I/O work and thus not needed at all for the inproc:-(protocol-less)-Transport-Class.
Q : "Or am I missing something that I should consider?"
A : Mixing asyncio, O/S signals (which are well documented in how they interact with the native ZeroMQ API) and other layers of complexity is for sure possible, yet it comes at a cost - it makes the use-case less and less readable and more and more prone to conceptual gaps and similar hard-to-decode "errors".
I remember using the Tkinter mainloop() as a cost-wise very cheap and super-stable framework for rapid prototyping the MVC { M-odel, V-isual, C-ontroller } parts of many actors' indeed distributed-system applications in Python. There were Zero problems using ZeroMQ with a single Context instance, passing references to the respective AccessNodes into whatever amount of event handlers, as long as we kept the ZeroMQ Zen-of-Zero, i.e. no "sharing" (meaning no two parts "use" (compete to use) one and the same AccessPoint "one-over-another").
This all was designed-in, at "Zero-cost", by ZeroMQ by definition, so unless spoilt in some later phase of re-wrapping a re-wrapped native API, all this ought to still work in 2022-Q1, ought it not?

Using asyncio.run, is it safe to run multiple times?

The documentation for asyncio.run states:
This function always creates a new event loop and closes it at the end.
It should be used as a main entry point for asyncio programs, and should
ideally only be called once.
But it does not say why. I have a non-async program that needs to invoke something async. Can I just use asyncio.run every time I get to the async portion, or is this unsafe/wrong?
In my case, I have several async coroutines I want to gather and run in parallel to completion. When they are all completed, I want to move on with my synchronous code.
async def my_task(url):
    # request some urls or whatever
    ...

integration_tasks = [my_task(url1), my_task(url2)]

async def gather_tasks(*integration_tasks):
    return await asyncio.gather(*integration_tasks)

def complete_integrations(*integration_tasks):
    return asyncio.run(gather_tasks(*integration_tasks))

print(complete_integrations(*integration_tasks))
Can I use asyncio.run() to run coroutines multiple times?
This actually is an interesting and very important question.
As the asyncio documentation (Python 3.9) says:
This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once.
It does not prohibit calling it multiple times. Moreover, the old way of calling coroutines from synchronous code, which was:
loop = asyncio.get_event_loop()
loop.run_until_complete(coroutine)
is now deprecated because of the get_event_loop() function, whose documentation says:
Consider also using the asyncio.run() function instead of using lower level functions to manually create and close an event loop.
Deprecated since version 3.10: Deprecation warning is emitted if there is no running event loop. In future Python releases, this function will be an alias of get_running_loop().
So in future releases it will not spawn a new event loop if no running one is present! The docs propose using asyncio.run() if You want a new loop to be spawned automatically when there is none.
There is a good reason for such a decision. Even if You have an event loop and You successfully use it to execute coroutines, there are a few more things You must remember to do:
closing the event loop
consuming unconsumed generators (most important in case of failed coroutines)
...probably more, which I do not even attempt to list here
What exactly needs to be done to properly finalize an event loop You can read in this source code.
Managing an event loop manually (if there is no running one) is a subtle procedure, and it is better not to do it unless one knows what one is doing.
So yes, I think that the proper way of running an async function from synchronous code is calling asyncio.run(). But it is only suitable from a fully synchronous application. If there is an already running event loop, it will probably fail (not tested). In such a case, just await it or use get_running_loop().run_until_complete(coro).
And for such synchronous apps, using asyncio.run() is a safe way, and actually the only safe way, of doing this, and it can be invoked multiple times.
The reason the docs say that You should call it only once is that usually there is one single entry point to the whole asynchronous application. It simplifies things and actually improves performance, because setting things up for an event loop also takes some time. But if there is no single loop available in Your application, You should use multiple calls to asyncio.run() to run coroutines multiple times.
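For illustration, a minimal sketch (with a stand-in coroutine) of calling asyncio.run() several times from fully synchronous code; each call builds and tears down its own event loop:
import asyncio

async def fetch(i):
    await asyncio.sleep(0.1)  # stand-in for real async work
    return i * 2

def sync_step(i):
    # safe as long as no event loop is already running in this thread
    return asyncio.run(fetch(i))

print(sync_step(1))  # first loop is created and closed here
print(sync_step(2))  # a second, independent loop is created and closed here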
Is there any performance gain?
Beside discussing multiple calls to asyncio.run(), I want to address one more concern. In comments, #jwal says:
asyncio is not parallel processing. Says so in the docs. [...] If you want parallel, run in a separate processes on a computer with a separate CPU core, not a separate thread, not a separate event loop.
This suggests that asyncio is not suitable for parallel processing, which can be misunderstood and mislead to the conclusion that it will not result in a performance gain, which is not always true. Moreover, it is usually false!
So, any time You can delegate a job to an external process (not only a Python process; it can be a database worker process, an HTTP call, ideally any TCP socket call), You can get a performance gain using asyncio. In the huge majority of cases, when You are using a library which exposes an async interface, the author of that library made the effort to eventually await a result from a network/socket/process call. While the response from such a socket is not ready, the event loop is completely free to do any other tasks. If the loop has more than one such task, it will gain performance.
A canonical example of such a case is making calls to HTTP endpoints. At some point there will be a network call, so the Python thread is free to do other work while awaiting data to appear on the TCP socket buffer. I have an example!
The example uses the httpx library to compare the performance of making multiple calls to the OpenWeatherMap API. There are two functions:
get_weather_async()
get_weather_sync()
The first one makes 8 requests to an HTTP API, but schedules those requests to run cooperatively (not in parallel!) on an event loop using asyncio.gather().
The second one performs 8 synchronous requests in sequence.
To call the asynchronous function, I am actually using asyncio.run(). Moreover, I am using the timeit module to perform such a call to asyncio.run() 4 times. So in a single Python application, asyncio.run() was called 4 times, just to challenge my previous considerations.
from time import time
import httpx
import asyncio
import timeit
from random import uniform


class AsyncWeatherApi:
    def __init__(
        self, base_url: str = "https://api.openweathermap.org/data/2.5"
    ) -> None:
        self.client: httpx.AsyncClient = httpx.AsyncClient(base_url=base_url)

    async def weather(self, lat: float, lon: float, app_id: str) -> dict:
        response = await self.client.get(
            "/weather",
            params={
                "lat": lat,
                "lon": lon,
                "appid": app_id,
                "units": "metric",
            },
        )
        response.raise_for_status()
        return response.json()


class SyncWeatherApi:
    def __init__(
        self, base_url: str = "https://api.openweathermap.org/data/2.5"
    ) -> None:
        self.client: httpx.Client = httpx.Client(base_url=base_url)

    def weather(self, lat: float, lon: float, app_id: str) -> dict:
        response = self.client.get(
            "/weather",
            params={
                "lat": lat,
                "lon": lon,
                "appid": app_id,
                "units": "metric",
            },
        )
        response.raise_for_status()
        return response.json()


def get_random_locations() -> list[tuple[float, float]]:
    """generate 8 random locations in +/-europe"""
    return [(uniform(45.6, 52.3), uniform(-2.3, 29.4)) for _ in range(8)]


async def get_weather_async(locations: list[tuple[float, float]]):
    api = AsyncWeatherApi()
    return await asyncio.gather(
        *[api.weather(lat, lon, api_key) for lat, lon in locations]
    )


def get_weather_sync(locations: list[tuple[float, float]]):
    api = SyncWeatherApi()
    return [api.weather(lat, lon, api_key) for lat, lon in locations]


api_key = "secret"


def time_async_job(repeat: int = 1):
    locations = get_random_locations()

    def run():
        return asyncio.run(get_weather_async(locations))

    duration = timeit.Timer(run).timeit(repeat)
    print(
        f"[ASYNC] In {duration}s: done {len(locations)} API calls, all"
        f" repeated {repeat} times"
    )


def time_sync_job(repeat: int = 1):
    locations = get_random_locations()

    def run():
        return get_weather_sync(locations)

    duration = timeit.Timer(run).timeit(repeat)
    print(
        f"[SYNC] In {duration}s: done {len(locations)} API calls, all repeated"
        f" {repeat} times"
    )


if __name__ == "__main__":
    time_sync_job(4)
    time_async_job(4)
At the end, a comparison of performance was printed. It says:
[SYNC] In 5.5580058859995916s: done 8 API calls, all repeated 4 times
[ASYNC] In 2.865574334995472s: done 8 API calls, all repeated 4 times
Those 4 repetitions were just to show that You can safely run asyncio.run() multiple times. It actually had a destructive impact on measuring the performance of the asynchronous HTTP calls, because all 32 requests were run in four synchronous batches of 8 asynchronous tasks. Just to compare, the performance of one batch of 32 requests:
[SYNC] In 4.373898585996358s: done 32 API calls, all repeated 1 times
[ASYNC] In 1.5169846520002466s: done 32 API calls, all repeated 1 times
So yes, it can, and usually will, result in a performance gain, as long as a proper async library is used (if a library exposes an async API, it usually does so intentionally, knowing that there will be a network call somewhere).

How to perform asynchronous tasks in fixed interval of time

The goal is to perform an async task (file read, network operation) without blocking the code, and we have multiple such async tasks that need to be executed at fixed intervals. Here is some pseudocode to demonstrate this.
# the async tasks should be performed in parallel
# provide me with a return value after the task is complete, or they can have a callback or any other mechanism of communication
async_task_1 = perform_async(1)
# now I need to wait a fixed amount of time before async task 2
sleep(5)
# this also similar to the tasks one in nature
async_task_2 = perform_async(2)
# finally do something with the result
I'm reading that in Ruby I have 2 options: forking and threading. There is also something called a Fiber. I also read that due to the GIL in basic Ruby, I won't be able to make much use of threading. I still want to stick to base Ruby.
I've written some parallel code previously in OMP and CUDA, but I've never had a chance to do that in Ruby.
Can you suggest how to achieve this?
I would recommend the concurrent-ruby gem with its async feature. This will work great as long as your tasks are IO bound (as you said they are).
It gives you an async feature to perform your tasks. To wait the required amount of time between your two async calls, you can literally use the sleep function:
class AsyncCalls
  include Concurrent::Async

  def perform_task(params)
    # IO bound task
  end
end

AsyncCalls.new.async.perform_task("param")
sleep 5
AsyncCalls.new.async.perform_task("other param")

How to access a python object from a previous HTTP request?

I have some confusion about how to design an asynchronous part of a web app. My setup is simple; a visitor uploads a file, a bunch of computation is done on the file, and the results are returned. Right now I'm doing this all in one request. There is no user model and the file is not stored on disk.
I'd like to change it so that the results are delivered in two parts. The first part comes back with the request response because it's fast. The second part might be heavy computation and a lot of data, so I want it to load asynchronously, whenever it's done. What's a good way to do this?
Here are some things I do know about this. Usually, asynchronicity is done with ajax requests. The request will be to some route, let's say /results. In my controller, there'll be a method written to respond to /results. But this method will no longer have any information from the previous request, because HTTP is stateless. To get around this, people pass info through the request. I could either pass all the data through the request, or I could pass an id which the controller would use to look up the data somewhere else.
My data is a big python object (a pandas DataFrame). I don't want to pass it through the network. If I use an id, the controller will have to look it up somewhere. I'd rather not spin up a database just for these short durations, and I'd also rather not convert it out of python and write to disk. How else can I give the ajax request access to the python object across requests?
My only idea so far is to have the initial request trigger my framework to render a second route, /uuid/slow_results. This would be served until the ajax request hits it. I think this would work, but it feels pretty ad hoc and unnatural.
Is this a reasonable solution? Is there another method I don't know? Or should I bite the bullet and use one of the aforementioned solutions?
(I'm using the web framework Flask, though this question is probably framework agnostic.
PS: I'm trying to get better at writing SO questions, so let me know how to improve it.)
If your app is only being served by one Python process, you could just have a global object that's a map from ids to DataFrames, but you'd also need some way of expiring entries out of the map so you don't leak memory.
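As a rough illustration of that single-process idea (the names and the expiry policy here are made up, not something Flask provides), a timestamped module-level dict that evicts stale entries on each access could look like:
import time
import uuid

RESULTS = {}            # id -> (created_at, DataFrame); lives only in this process
TTL_SECONDS = 15 * 60   # arbitrary expiry window

def store_result(df):
    _evict_expired()
    key = uuid.uuid4().hex
    RESULTS[key] = (time.time(), df)
    return key          # hand this id back to the browser with the fast response

def fetch_result(key):
    _evict_expired()
    entry = RESULTS.get(key)
    return entry[1] if entry is not None else None

def _evict_expired():
    cutoff = time.time() - TTL_SECONDS
    for k in [k for k, (ts, _) in RESULTS.items() if ts < cutoff]:
        del RESULTS[k]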
If your app is running on multiple machines, you're screwed. If your app is just running on one machine, it might be sitting behind Apache or something, and then Apache might spawn multiple Python processes, and you'd still be screwed? I think you'd find out by running ps aux and counting instances of python.
Serializing to a temporary file or a database are fine choices in general, but if you don't like either in this case and don't want to set up e.g. Celery just for this one thing, then multiprocessing.connection is probably the tool for the job. Copying and lightly modifying from here, the box running your web server (or another one, if you want) would have another process that runs this:
from multiprocessing.connection import Listener
import traceback

RESULTS = dict()

def do_thing(data):
    return "your stuff"

def worker_client(conn):
    try:
        while True:
            msg = conn.recv()
            if msg['type'] == 'answer':  # request for calculated result
                answer = RESULTS.get(msg['id'])
                conn.send(answer)
                if answer:
                    del RESULTS[msg['id']]
            else:
                conn.send("doing thing on {}".format(msg['id']))
                RESULTS[msg['id']] = do_thing(msg)
    except EOFError:
        print('Connection closed')

def job_server(address, authkey):
    serv = Listener(address, authkey=authkey)
    while True:
        try:
            client = serv.accept()
            worker_client(client)
        except Exception:
            traceback.print_exc()

if __name__ == '__main__':
    job_server(('', 25000), authkey=b'Alex Altair')
and then your web app would include:
from multiprocessing.connection import Client

client = Client(('localhost', 25000), authkey=b'Alex Altair')

def respond(request):
    client.send(request)
    return client.recv()
Design could probably be improved but that's the basic idea.
