"working" terminal prompt running in parallel Python 3.10 - async-await

I am trying to show an animated "working" prompt while some Python code runs. I've been searching for a way to do this, but the solutions I've found are not quite what I want (if I recall correctly, tqdm and alive-progress require a for loop with a defined number of iterations), and I'd like to find a way to code it myself.
The closest I've gotten is using asyncio as follows:
import asyncio

async def main():
    dummy_task = asyncio.create_task(dummy_search())
    bar_task = asyncio.create_task(progress())
    test = await dummy_task
    bar_task.cancel()

asyncio.run(main())
where dummy_task can be any async task and bar_task runs the following coroutine:
FLUSH_LINE = "\033[K"

async def progress(mode=""):
    def integers():
        n = 0
        while True:
            yield n
            n += 1

    progress_indicator = ["-", "\\", "|", "/"]
    message = "working"
    message_len = len(message)
    message += "-" * message_len
    try:
        if not mode:
            for i in integers():
                await asyncio.sleep(0.05)
                message = message[1:] + message[0]
                print(f"{FLUSH_LINE}{progress_indicator[i % 4]} [{message[:message_len]}]", end="\r")
    finally:
        print(f"{FLUSH_LINE}")
The only problem with this approach is that asyncio runs tasks concurrently on a single thread rather than in parallel, so if dummy_task never awaits anything, bar_task will not run until the dummy task is complete, and the working prompt never shows up in the terminal.
How should I go about running both tasks in parallel? Do I need to use multiprocessing? If so, would both tasks write to the same terminal by default? (One thread-based direction is sketched below for reference.)
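A minimal sketch of that thread-based direction, assuming asyncio.to_thread (Python 3.9+) is acceptable; blocking_search is a hypothetical stand-in for the real work:

import asyncio
import time

def blocking_search():
    # Hypothetical stand-in for code that never awaits.
    time.sleep(3)
    return "done"

async def main():
    # to_thread runs the blocking call in a worker thread, so the
    # event loop stays free to keep scheduling progress().
    dummy_task = asyncio.create_task(asyncio.to_thread(blocking_search))
    bar_task = asyncio.create_task(progress())
    result = await dummy_task
    bar_task.cancel()
    return result

asyncio.run(main())

For CPU-bound pure-Python work the GIL still prevents true parallelism in a thread, in which case loop.run_in_executor with a ProcessPoolExecutor would be the analogous process-based approach.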
Thanks in advance.

Related

How to create a recursive function with multiprocessing?

I'm trying to build a recursive function that calls itself in a new process. The new process should not block the parent process, nor should the parent wait for it to finish, which is why I don't use join(). Is there another way to create a recursive function with multiprocessing?
I use the following code:
import multiprocessing as mp
import concurrent.futures
import time

def do_something(c, seconds, r_list):
    c += 1  # c is a counter that all processes should use
            # such that no more than 20 processes are created.
    print(f"Sleeping {seconds} second(s)...")
    if c < 20:
        P_V = mp.Value('d', 0.0, lock=False)
        p = mp.Process(group=None, target=do_something, args=(c, 1, r_list,))
        p.start()
        if not p.is_alive():
            r_list.append(P_V.value)
    time.sleep(seconds)
    print(f"Done Sleeping...{seconds}")
    return f"Done Sleeping...{seconds}"

if __name__ == '__main__':
    C = 0  # C is a counter that all processes should use
           # such that no more than 20 processes are created.
    Result_list = []  # results that come from all processes are saved here
    Result_list.append(do_something(C, 1, Result_list))
Notice that results from all processes should be compared at the end.
In fact, this code runs without errors, but the child processes created in the recursive calls do not print anything, the list Result_list contains only the one item from the first call, and C is still 0 at the end. Any idea why?
Here's a simplified example of what I think you're trying to do. (Side note: launching processes recursively is a great way to accidentally create a "fork bomb". It is far more common to create multiple processes in some sort of loop instead; a loop-based sketch follows the example below.)
from multiprocessing import Process, Queue
from time import sleep
from os import getpid

def foo(n_procs, return_Q, arg):
    if __name__ == "__main__":  # don't actually run the body of foo in the "main" process, just start the recursion
        Process(target=foo, args=(n_procs, return_Q, arg)).start()
    else:
        n_procs -= 1
        if n_procs > 0:
            Process(target=foo, args=(n_procs, return_Q, arg)).start()
        sleep(arg)
        print(f"{getpid()} done sleeping {arg} seconds")
        return_Q.put(f"{getpid()} done sleeping {arg} seconds")  # put the result on a queue so we can get it in the main process

if __name__ == "__main__":
    q = Queue()
    foo(10, q, 2)
    sleep(10)  # do something else in the meantime
    results = []
    # while not q.empty():  # usually better to just know how many results you're expecting, as q.empty can be unreliable
    for _ in range(10):
        results.append(q.get())
    print("mp results:")
    print("\n".join(results))

Problems interrupting a python Input (Mac)

I am trying to allow a user to input multiple answers, but only within an allocated amount of time. The problem is that the program will not interrupt the input prompt itself; it only rejects an answer if the user submits it after the time has ended. Any ideas? Is what I am trying to do even possible in Python?
I have tried using threading and the signal module however they both result in the same issue.
Using Signal:
import signal

def handler(signum, frame):
    raise Exception

def answer_loop():
    score = 0
    while True:
        answer = input("Please input your answer")

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
try:
    answer_loop()
except Exception:
    print("end")
signal.alarm(0)
Using Threading:
from threading import Timer

def end():
    print("Time is up")

def answer_loop():
    score = 0
    while True:
        answer = input("Please input your answer")

time_limit = 5
t = Timer(time_limit, end)
t.start()
answer_loop()
t.cancel()
Your problem is that the built-in input does not have a timeout parameter and, AFAIK, threads cannot be terminated by other threads. I suggest instead that you use a GUI with events to finely control user interaction. Here is a bare-bones tkinter example.
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text='answer')
entry = tk.Entry(root)
label.pack()
entry.pack()

def timesup():
    ans = entry.get()
    entry.destroy()
    label['text'] = f"Time is up. You answered {ans}"

root.after(5000, timesup)
root.mainloop()
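If a plain terminal (no GUI) is preferred, a minimal sketch of a timed prompt using select is another option; this works on Unix-like systems including macOS, and input_with_timeout is a hypothetical helper of my own, not part of the original answer:

import select
import sys

def input_with_timeout(prompt, timeout):
    # Wait up to `timeout` seconds for a line on stdin; return None on timeout.
    print(prompt, end="", flush=True)
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        return sys.stdin.readline().rstrip("\n")
    return None

answer = input_with_timeout("Please input your answer: ", 5)
print("Time is up" if answer is None else f"You answered {answer}")

Note this times out a single prompt; a surrounding answer loop would need to track the total remaining time itself.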

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSON from an API and am using the asyncio module. The crux of my question concerns the following event loop, implemented like this:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future( klass.download_all() )
loop.run_until_complete( main_task )
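Side note (my own addition, not part of the question): since Python 3.7 the same entry point can be written with the higher-level runner, which creates and closes the event loop for you:

import asyncio
asyncio.run(klass.download_all())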
and download_all() implemented as an instance method of a class which already has downloader objects created and available to it, and which therefore calls each respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks; cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that, in the downloaders' download methods, when I use asyncio.ensure_future instead of the await syntax, it runs much faster, that is, seemingly more "asynchronously", as I can see from the logs?
This only works because of the way I detect all the tasks that are still pending, which keeps the download_all method from completing and keeps it calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently. How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
        # Not using await here, as I am "supposed" to
        asyncio.ensure_future(self.write(response_json, self.path))
        return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With full code implemented and a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up as it was taking so long).
When you await self.write(...) inside download, each download coroutine blocks until its file has been written, so the next piece of work only starts once that write finishes.
When you use ensure_future instead, download returns as soon as the response JSON is available; the write becomes a separate task that the second loop in download_all picks up and awaits later, so downloads and writes overlap.
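A minimal sketch of one way to keep that overlap while tracking the write tasks explicitly, rather than scanning asyncio.Task.all_tasks() (this is my own illustration, assuming the same downloader attributes as above and that every download schedules exactly one write):

async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
        # Schedule the write as its own task and remember it, so download()
        # can return immediately without losing track of the pending write.
        self.write_task = asyncio.ensure_future(self.write(response_json, self.path))
        return response_json

async def download_all(self):
    async with aiohttp.ClientSession() as session:
        download_tasks = [asyncio.ensure_future(d.download(session)) for d in self._downloaders]
        ret = await asyncio.gather(*download_tasks)
        # Wait only for the writes we scheduled ourselves.
        await asyncio.gather(*(d.write_task for d in self._downloaders))
    return ret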

Tkinter problems with GUI when entering while loop

I have a simple GUI which runs various scripts from another Python file. Everything works fine until the GUI runs a function that includes a while loop, at which point the GUI seems to crash and become inactive. Does anybody have any ideas as to how this can be overcome? I believe it has something to do with the GUI being updated. Thanks. Below is a simplified version of my GUI.
GUI
#!/usr/bin/env python
# Python 3
from tkinter import *
from tkinter import ttk
from Entry import ConstrainedEntry
import tkinter.messagebox
import functions

AlarmCode = "2222"

root = Tk()
root.title("Simple Interface")

mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)

ttk.Button(mainframe, width=12, text="ButtonTest",
           command=lambda: functions.test()).grid(
           column=5, row=5, sticky=SE)

for child in mainframe.winfo_children():
    child.grid_configure(padx=5, pady=5)

root.mainloop()
functions
import time

def test():
    period = 0
    while True:
        if period <= 100:
            time.sleep(1)
            period += 1
            print(period)
        else:
            print("100 seconds has passed")
            break
What happens in the above is that while the loop is running, the application freezes. If I insert a break in the else statement after the period has elapsed, everything works fine. I want users to be able to click things while these loops are running, as this GUI will run a number of different functions.
Don't use time.sleep in the same thread as your Tkinter code: it freezes the GUI until the execution of test is finished. To avoid this, you should use the after widget method:
# GUI
ttk.Button(mainframe, width=12, text="ButtonTest",
           command=lambda: functions.test(root)
           ).grid(column=5, row=5, sticky=SE)

# functions
def test(root, period=0):
    if period <= 100:
        period += 1
        print(period)
        root.after(1000, lambda: test(root, period))
    else:
        print("100 seconds has passed")
Update:
In your comment you also add that your code won't use time.sleep, so your original example may not be the most appropriate. In that case, you can create a new thread to run your intensive code.
Note that I posted the after alternative first because multithreading should be used only if it is completely necessary: it adds overhead to your application and makes your code more difficult to debug.
from threading import Thread

ttk.Button(mainframe, width=12, text="ButtonTest",
           command=lambda: Thread(target=functions.test).start()
           ).grid(column=5, row=5, sticky=SE)

# functions
def test():
    for x in range(100):
        time.sleep(1)  # Simulate intense task (not real code!)
        print(x)
    print("100 seconds has passed")

wxPython + subprocess. ProgressDialog doesn't work under windows

I have a GUI application which launches some commands using subprocess and then shows the progress of these commands by reading from subprocess.Popen.stdout and using wx.ProgressDialog. I wrote the app under Linux and it works flawlessly there, but I'm now doing some testing under Windows and it seems that trying to update the progress dialog causes the app to hang. There are no error messages or anything, so it's difficult for me to figure out what's happening. Below is simplified code.
The subprocess is launched in a separate thread by this method in the main thread:
def onOk(self, event):
    """ Starts processing """
    self.infotxt.Clear()
    args = self.getArgs()
    self.stringholder = args['outfile']
    if (args):
        cmd = self.buildCmd(args, True)
        if (cmd):
            # Make sure the output directory is writable.
            if not self.isWritable(args['outfile']):
                print "Cannot write to %s. Make sure you have write permission or select a different output directory." % os.path.dirname(args['outfile'])
            else:
                try:
                    self.thread = threading.Thread(target=self.runCmd, args=(cmd,))
                    self.thread.setDaemon(True)
                    self.thread.start()
                except Exception:
                    sys.stderr.write('Error starting thread')
And here's the runCmd method:
def runCmd(self, cmd):
    """ Runs a command line provided as a list of arguments """
    temp = []
    aborted = False
    dlg = None
    for i in cmd:
        temp.extend(i.split(' '))
    # Use wx.MutexGuiEnter()/MutexGuiLeave() for anything that accesses the GUI from another thread
    wx.MutexGuiEnter()
    max = 100
    stl = wx.PD_CAN_ABORT | wx.PD_APP_MODAL | wx.PD_ELAPSED_TIME | wx.PD_REMAINING_TIME
    dlg = wx.ProgressDialog("Please wait", "Processing...", maximum=max, parent=self.frame, style=stl)
    wx.MutexGuiLeave()
    # This is for Windows, to not display the black command line window when executing the command
    if os.name == 'nt':
        si = subprocess.STARTUPINFO()
        si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        si.wShowWindow = subprocess.SW_HIDE
    else:
        si = None
    try:
        proc = subprocess.Popen(temp, shell=False, bufsize=1, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    except Exception:
        sys.stderr.write('Error executing a command. ')
    # Progress dialog
    count = 0
    while True:
        line = proc.stdout.readline()
        count += 1
        wx.MutexGuiEnter()
        if dlg.Update(count) == (True, False):
            print line.rstrip()
            wx.MutexGuiLeave()
            if not line: break
        else:
            print "Processing cancelled."
            aborted = True
            wx.MutexGuiLeave()
            proc.kill()
            break
    wx.MutexGuiEnter()
    dlg.Destroy()
    wx.GetApp().GetTopWindow().Raise()
    wx.MutexGuiLeave()
    if aborted:
        if os.path.exists(self.stringholder):
            os.remove(self.stringholder)
        dlg.Destroy()
    proc.wait()
Again, this works fine under Linux but freezes on Windows. If I remove the dlg.Update() line it also works fine. The subprocess output is printed in the main window and the ProgressDialog is shown; the progress bar just doesn't move. What am I missing?
Try not using wx.MutexGuiEnter and wx.MutexGuiLeave. You can handle updating the GUI from another thread using wx.CallAfter. I have never seen anyone use those mutexes in a wx application before, not even in tutorials or examples of using threads.
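A minimal, self-contained sketch of the wx.CallAfter idea (my own illustration using the modern wxPython/Phoenix API; the worker loop stands in for reading subprocess output line by line):

import threading
import time
import wx

class Frame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="CallAfter demo")
        self.dlg = None
        btn = wx.Button(self, label="Run")
        btn.Bind(wx.EVT_BUTTON, self.on_run)

    def on_run(self, event):
        # The dialog is created on the GUI thread; only the worker uses CallAfter.
        self.dlg = wx.ProgressDialog("Please wait", "Processing...", maximum=100, parent=self)
        threading.Thread(target=self.worker, daemon=True).start()

    def worker(self):
        # Stand-in for reading subprocess output line by line.
        for count in range(1, 101):
            time.sleep(0.05)
            # wx.CallAfter queues the GUI update on the main thread's event loop,
            # so no mutexes are needed in the worker thread.
            wx.CallAfter(self.dlg.Update, count)
        wx.CallAfter(self.dlg.Destroy)

app = wx.App()
Frame().Show()
app.MainLoop()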
