Here is a block of code from: https://github.com/ronf/asyncssh/blob/master/examples/math_server.py#L38
async def handle_client(process):
    process.stdout.write('Enter numbers one per line, or EOF when done:\n')
    total = 0
    try:
        async for line in process.stdin:
            line = line.rstrip('\n')
            if line:
                try:
                    total += int(line)
                except ValueError:
                    process.stderr.write('Invalid number: %s\n' % line)
    except asyncssh.BreakReceived:
        pass
There is an async keyword before the def; however, there is also one before the for loop. Looking at the asyncio documentation here: https://docs.python.org/3/library/asyncio-task.html, I do not see any similar uses of this async keyword.
So, what does the keyword do in this context?
The async for ... in ... construct allows you to loop through an "asynchronous iterable"; as stated in the comments, the detailed explanation is in PEP 492.
In your example, the async for loop waits for stdin input without blocking other tasks on the asyncio event loop.
If you used a plain for loop, it would be a blocking operation: no other tasks on the loop could be executed until you'd entered input.
For another example, imagine a MySQL client fetching x rows from a database.
aiomysql example
async for row in conn.execute("SELECT * FROM table;"):
    print(row)
This fetches a single row at a time, and it doesn't block the execution of other tasks on the asyncio loop while waiting for the IO operation (the MySQL query).
Then you do something with the row data you've obtained.
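To see the non-blocking behaviour without a database, here is a minimal self-contained sketch (the ticker and background coroutines are made-up names for illustration, not part of asyncssh or aiomysql):

import asyncio

async def ticker(n):
    # a hypothetical async generator: an asynchronous iterable that
    # yields a value, then suspends at the await point
    for i in range(n):
        await asyncio.sleep(1)  # suspends; other tasks can run here
        yield i

async def background():
    # keeps printing while the async for below is "waiting"
    while True:
        await asyncio.sleep(0.5)
        print('background task still running')

async def main():
    task = asyncio.create_task(background())
    # async for suspends between items instead of blocking the loop
    async for i in ticker(3):
        print('got', i)
    task.cancel()

asyncio.run(main())

While the async for is waiting for the next item, the background task keeps printing; with a blocking for loop, it couldn't.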
Related
Using this example
import time
import asyncio

async def main(x):
    print(f"Starting Task {x}")
    await asyncio.sleep(3)
    print(f"Finished Task {x}")

async def async_io():
    tasks = []
    for i in range(10):
        tasks += [main(i)]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    start_time = time.perf_counter()
    asyncio.run(async_io())
    print(f"Took {time.perf_counter() - start_time} secs")
I noticed that we need to create a list that keeps track of the tasks to do. Understandable, but then why do we wrap the main(i) call in []? And in asyncio.gather(*tasks), why do we need to add the asterisk as well?
why do we add the [] wrapper over the main(i) function?
There are a few ways to add items to a list. One such way, the way you've chosen, is by concatenating two lists together.
>>> [1] + [2]
[1, 2]
Trying to concatenate a list and something else will lead to a TypeError.
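For example, concatenating a list with a plain int fails:

>>> [1] + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "int") to list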
In your particular case you're using augmented assignment, an (often more performant) shorthand for
tasks = tasks + [main(i)]
Another way to accomplish this is with append.
tasks.append(main(i))
If your real code matches your example code, an even better way to spell all of this is
tasks = [main(i) for i in range(10)]
in the asyncio.gather(*tasks), why do we need to add the asterisk there as well?
Because gather will run each positional argument it receives as a separate awaitable. Calls to gather should look like
asyncio.gather(main(0))
asyncio.gather(main(0), main(1))
Since there are times when you need to use a variable number of positional arguments, Python offers the unpacking operator (* in the case of lists).
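A quick way to see what * does:

>>> print(*[1, 2, 3])  # the list is unpacked into three separate arguments
1 2 3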
If you felt so inclined, your example can be rewritten as
async def async_io():
    await asyncio.gather(*[main(i) for i in range(10)])
I'm using asyncio to await a set of coroutines in the following way:
# let's assume we have fn defined and that it can throw an exception
coros_objects = []
for x in range(10):
    coros_objects.append(fn(x))

for c in asyncio.as_completed(coros_objects):
    try:
        y = await c
    except Exception:
        # something
        # if possible print(x)
        pass
The question is: how can I know which coroutine failed, and for which argument?
I could append "x" to the outputs, but that would give me info about successful executions only.
I can't know that from the order, because the completion order differs from the order of coros_objects.
Can I somehow identify which coro just yielded a result?
How can I know which coroutine failed and for which argument?
You can't with the current as_completed. Once this PR is merged, it will be possible by attaching the information to the future (because as_completed will then yield the original futures). At the moment there are two workarounds:
wrap the coroutine execution in a wrapper that catches exceptions and stores them, along with the original arguments that you need (a sketch of this follows after the second option), or
not use as_completed at all, but write your own loop using tools like asyncio.wait.
The second option is easier than most people expect, so here it is (untested):
# create a list of tasks and attach the needed information to each
tasks = []
for x in range(10):
    t = asyncio.create_task(fn(x))
    t.my_task_arg = x
    tasks.append(t)

# emulate as_completed with asyncio.wait()
while tasks:
    done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in done:
        try:
            y = await t
        except Exception as e:
            print(f'{e} happened while processing {t.my_task_arg}')
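For completeness, the first workaround might look something like this (also untested; wrapped is just an illustrative name, and the (argument, result, exception) tuple is one possible convention):

async def wrapped(x):
    # run fn(x) and capture either the result or the exception,
    # along with the argument that produced it
    try:
        return x, await fn(x), None
    except Exception as e:
        return x, None, e

coros_objects = [wrapped(x) for x in range(10)]
for c in asyncio.as_completed(coros_objects):
    x, y, exc = await c
    if exc is not None:
        print(f'{exc} happened while processing {x}')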
I want this code to imitate a metronome. How do I get it to keep calling the timer instead of performing the final iteration and stopping?
-- main.lua
tempo = 60000/60

for i = 1, 100 do
    local accomp = audio.loadStream("sounds/beep.mp3")
    audio.play(accomp, {channel = 1})
    audio.stopWithDelay(tempo)
    timer.performWithDelay(tempo, listener)
end
performWithDelay accepts a 3rd parameter for the number of iterations; you don't need to loop manually.
local accomp = audio.loadStream("sounds/beep.mp3")
timer.performWithDelay(tempo, function() audio.play(accomp, {channel = 1}) end, 100)
Read the manual...
https://docs.coronalabs.com/api/library/timer/performWithDelay.html#iterations-optional
You are doing it completely wrong.
timer.performWithDelay calls the listener function after a given delay.
You don't have to load the file 100 times. Once is enough.
You call the timer function 100 times, which does nothing because you don't have a listener function defined.
Please read the documentation of functions before you use them, so you know what they do and how to properly use them. You can't cook a tasty meal if you don't know anything about your ingredients.
Remove that for loop and implement a listener function.
Use the optional third parameter iterations to specify how often you want to repeat that. Use -1 for infinite repetitions...
It's all there. You just have to RTFM.
I'm running some Ruby scripts concurrently using Grosser/Parallel.
During each concurrent test I want to add up the number of times a particular thing has happened, then display that number.
Let's say:
def main
  $this_happened = 0
  do_this_in_parallel
  puts $this_happened
end

def do_this_in_parallel
  Parallel.each(...) {
    $this_happened += 1
  }
end
The final value after do_this_in_parallel has finished will always be 0.
I'd like to know why this happens.
How can I get the desired result, which would be $this_happened > 0?
Thanks.
This doesn't work because separate processes have separate memory spaces: setting variables in one process has no effect on what happens in the other processes.
However, you can return a result from your block (because under the hood Parallel sets up pipes so that the processes can be fed input / return results). For example you could do this:
counts = Parallel.map(...) do
  # the return value of the block should
  # be the number of times the event occurred
end
Then just sum the counts to get your total count (e.g. counts.reduce(:+)). You might also want to read up on map-reduce for more information about this way of parallelising work.
I have never used Parallel, but the documentation seems to suggest that something like this might work:
Parallel.each(..., :finish => lambda {|*_| $this_happened += 1}) { do_work }
I am working on parallelizing a string matching algorithm using the MATLAB Parallel Computing Toolbox. I am using createJob and several tasks, where I am passing the text to be searched, the pattern, and other parameters. I get the following error. Any ideas? The boyer_horsepool function the tasks are targeted at looks fine.
Error using parallel.Job/fetchOutputs (line 677)
An error occurred during execution of Task with ID 1.
Error in stringmatch (line 42)
matches = fetchOutputs(job1);
Caused by:
Error using feval
Undefined function handle.
Code
% create the job
parallel.defaultClusterProfile('local');
cluster = parcluster();
job1 = createJob(cluster);

% create the tasks
for index = 1:num_tasks
    ret = createTask(job1, @boyer_horsepool, 1, {haystack, needle, nlength, startValues(index), endValues(index)});
    fprintf('For index %d the createTask value is ?\n', index);
    disp(class(ret));
    %disp(ret);
end
% Submit and wait for the results
submit(job1);
wait(job1);
% Report the number of matches
matches = fetchOutputs(job1);
delete(job1);
Hm, I could be wrong, but it looks like your syntax is fine...
I think the issue is that it's not recognizing boyer_horsepool as a function. It's hard to do anything further without a bit more context. Try moving that function into the same .m file, and double-check the spelling and the argument count.
Also, try getAllOutputArguments(job1). It's a long shot, but it might work.
Good luck!