Is there a blocking version of "Sleep"? (ahk)

I have a hotkey that I press repeatedly quite fast, and I included a Sleep in the routine to throttle it a bit. But at faster speeds the output starts to get messed up. The culprit is definitely the Sleep, because when I take it out and let it go as fast as it wants, the output is fine. I know Sleep allows new threads to start while it's waiting, so I'm thinking that having all these instances of the same hotkey running on top of each other is what's causing the errors. So I'm wondering: is there a variation of Sleep that blocks new threads while it's waiting? I couldn't find anything like it in the docs or on Google.

In C++ you would use mutexes in this case. In AHK you have to work around that and there are multiple ways to do it.
One way would be to disable the hotkeys while any hotkey is doing an action. For that you can use a simple variable.
Example:
#If !mutex_locked
F2::
    mutex_locked := True
    Send, letters incoming...
    Sleep, 500
    Send, abcdefghijklmnopqrstuvwxyz
    mutex_locked := False
Return

F3::
    mutex_locked := True
    Send, numbers incoming...
    Sleep, 500
    Send, 1234567890
    mutex_locked := False
Return
#If
While the variable mutex_locked is set to true, the hotkeys are disabled. As soon as a subroutine finishes, it sets the variable back to false, re-enabling them.
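The same guard idea in a general-purpose language is a non-blocking lock acquire. Here is a hypothetical Python sketch of the pattern (illustrative only, not AHK-specific; the function and its output list are made up for the example):

```python
import threading

busy = threading.Lock()

def hotkey_action(out):
    """Run the action only if no other invocation is in progress."""
    # Non-blocking acquire: if the action is already running,
    # drop this trigger instead of queueing it behind the current one.
    if not busy.acquire(blocking=False):
        return False
    try:
        out.append("abcdefghijklmnopqrstuvwxyz")  # the throttled work
        return True
    finally:
        busy.release()
```

Dropping overlapping triggers, rather than queueing them, is what keeps fast repeated presses from piling up.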

I like Forivin's code/answer above, but I think the below is also relevant.
From the AHK reference: ( https://autohotkey.com/docs/misc/Threads.htm )
"By default, a given hotkey or hotstring subroutine cannot be run a second time if it is already running. Use #MaxThreadsPerHotkey to change this behavior."
Info on #MaxThreadsPerHotkey is located at: ( https://autohotkey.com/docs/commands/_MaxThreadsPerHotkey.htm )
Perhaps allowing the same hotkey to run concurrently with itself in different threads (by increasing #MaxThreadsPerHotkey) would bypass the problem? Just a guess... feel free to confirm or correct this notion.

Triggering a software event from an interrupt (XMEGA, GCC)

I want to run a periodic "housekeeping" event, triggered regularly by a timer interrupt. The interrupt fires frequently (kHz+), while the function may take a long time to finish, so I can't simply execute it inline.
In the past, I've done this on an ATMEGA, where an ISR can simply permit other interrupts to fire (including itself again) with sei(). By wrapping the event in a "still executing" flag, it won't pile up on the stack and cause a... you know:
if (!inFunction) { inFunction = true; doFunction(); inFunction = false; }
I don't think this can be done -- at least as easily -- on the XMEGA, due to the PMIC interrupt controller. It appears the interrupt flags can only be reset by executing RETI.
So, I was thinking, it would be convenient if I could convince GCC to produce a tail call out of an interrupt. That would immediately execute the event, while clearing interrupts.
This would be easy enough to do in assembler: just push the address and RETI. (Well, some stack-mangling because it's an ISR, but, yeah.) But I'm guessing it'll be a hack in GCC, possibly a custom ASM wrapper around a "naked" function?
Alternately, I would love to simply set a low priority software interrupt, but I don't see an intentional way to do this.
I could use software to trigger an interrupt from an otherwise unused peripheral. That's fine as a special case, but then, if I ever need to use that device, I have to find another. It's bad for code reuse, too.
Really, this is an X-Y problem and I know it. I think I want to do X, but really I need method Y that I just don't know about.
A better method is to set a flag, then let main() deal with it when it gets around to it. Unfortunately, I have blocking functions in main() (handling user input via serial), so that would take work and be a mess.
The only "proper" method I know of offhand, is to do a full task switch -- but damned if I'm going to effectively implement an RTOS, or pull one in, just for this. There's got to be a better way.
Have I actually covered all the possibilities, and painted myself into a corner? Do I have to compromise and choose one of these? Am I missing anything better?
There are several ways to solve this.
1. Enable your timer interrupt as low priority. In this way the medium and high priority interrupts will be able to interrupt this low priority interrupt, and run unaffected.
This is similar to using sei(); in your interrupt handler in older processors (without PMIC).
2.a Set a flag (variable) in the interrupt. Poll the flag in the main loop. If the flag is set, clear it and do your stuff.
2.b Set up the timer but don't enable its interrupt. Poll the OVF interrupt flag of your timer in the main loop. If the flag is set, clear it and do your stuff.
These are timed less accurately, depending on what else the main loop does, so whether they are acceptable depends on your accuracy requirements. For handling more tasks in the main loop without an OS, see cooperative multitasking and state machines.
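Pattern 2.a can be sketched in host-side Python, simulating the timer interrupt with a timer thread (the names are illustrative, not XMEGA APIs; on real hardware the "ISR" would be actual interrupt code and the wait would be a poll in the main loop):

```python
import threading

housekeeping_due = threading.Event()  # stands in for the ISR-set flag

def timer_isr():
    # The "ISR" does the bare minimum: set the flag and return quickly.
    housekeeping_due.set()

def main_loop(n):
    handled = 0
    while handled < n:
        threading.Timer(0.005, timer_isr).start()  # simulate the periodic interrupt
        housekeeping_due.wait()    # on a microcontroller this would be a poll
        housekeeping_due.clear()   # clear the flag first, then do the work
        handled += 1               # the lengthy "housekeeping" job goes here
    return handled
```

Clearing the flag before doing the work means a new interrupt arriving mid-job is not lost; the next poll picks it up.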

How to stop a machine from sleeping/hibernating for the execution period

I have an app written partially in golang; as part of its operation it spawns an external process (written in C) and begins monitoring it. This external process can take many hours to complete, so I am looking for a way to prevent the machine from sleeping or hibernating while it is processing.
I would like to be able to relinquish this lock afterwards, so that when the process is finished the machine is again allowed to sleep/hibernate.
I am initially targeting windows, but a cross-platform solution would be ideal (does nix even hibernate?).
Thanks to Anders for pointing me in the right direction - I put together a minimal example in golang (see below).
Note: polling to reset the timer seems to be the only reliable method. I found that when I tried to combine it with the continuous flag, it would only take effect for approximately 30 seconds (no idea why). That said, the polling interval in this example is excessive; it could probably be increased to 10 minutes (since the minimum hibernation timeout is 15 minutes).
Also, FYI, this is a Windows-specific example:
package main

import (
	"log"
	"syscall"
	"time"
)

// Execution States
const (
	EsSystemRequired = 0x00000001
	EsContinuous     = 0x80000000
)

var pulseTime = 10 * time.Second

func main() {
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	setThreadExecStateProc := kernel32.NewProc("SetThreadExecutionState")

	pulse := time.NewTicker(pulseTime)

	log.Println("Starting keep alive poll... (silence)")
	for {
		select {
		case <-pulse.C:
			setThreadExecStateProc.Call(uintptr(EsSystemRequired))
		}
	}
}
The above has been tested on Windows 7 and 10 (not tested on Windows 8 yet, but presumed to work there too).
Any user request to sleep will override this method; this includes actions such as shutting the lid on a laptop (unless power management settings are altered from the defaults).
That was sensible behavior for my application.
On Windows, your first step is to try SetThreadExecutionState:
Enables an application to inform the system that it is in use, thereby preventing the system from entering sleep or turning off the display while the application is running
This is not a perfect solution but I assume this is not an issue for you:
The SetThreadExecutionState function cannot be used to prevent the user from putting the computer to sleep. Applications should respect that the user expects a certain behavior when they close the lid on their laptop or press the power button
The Windows 8 connected standby feature is also something you might need to consider. Looking at the power related APIs we find this description of PowerRequestSystemRequired:
The system continues to run instead of entering sleep after a period of user inactivity.
This request type is not honored on systems capable of connected standby. Applications should use PowerRequestExecutionRequired requests instead.
If you are dealing with tablets and other small devices then you can try to call PowerSetRequest with PowerRequestExecutionRequired to prevent this although the description of that is also not ideal:
The calling process continues to run instead of being suspended or terminated by process lifetime management mechanisms. When and how long the process is allowed to run depends on the operating system and power policy settings.
You might also want to use ShutdownBlockReasonCreate but I'm not sure if it blocks sleep/hibernate.

Ruby - fork, exec, detach .... do we have a race condition here?

Simple example, which doesn't work on my platform (Ruby 2.2, Cygwin):
#!/usr/bin/ruby
backtt = fork { exec('mintty','/usr/bin/zsh','-i') }
Process.detach(backtt)
exit
This tiny program (when started from the shell) is supposed to spawn a terminal window (mintty) and then get me back to the shell prompt.
However, while it DOES create the mintty window, I don't have a shell prompt afterwards, and I can't type anything in the calling shell.
But when I introduce a small delay before the detach, either using 'sleep', or by printing something on stdout, it works as expected:
#!/usr/bin/ruby
backtt = fork { exec('mintty','/usr/bin/zsh','-i') }
sleep 1
Process.detach(backtt)
exit
Why is this necessary?
BTW, I'm well aware that I could (from the shell) do a
mintty /usr/bin/zsh -i &
directly, or I could use system(...... &) from inside Ruby, but this is not the point here. I'm particularly interested in the fork/exec/detach behaviour in Ruby. Any insights?
Posting as an answer, because it is too long for a comment
Although I am no specialist in Ruby, and do not know Cygwin at all, this situation sounds very familiar to me, coming from C/C++.
This script is too short: the parent of the parent completes and exits while the grandchild is still trying to start.
What would happen if you put the sleep after detach and before exit?
If my theory is correct, it should work too. Your program exits before any (or enough) thread-switching happens.
I call such problems "interrupted handshaking". Although that is psychology terminology, it describes what happens:
Sleep "gives up the time slice", leading to thread-switching,
Console output (any file I/O) runs into semaphores, also leading to thread switching.
If my idea is correct, it should also work if you don't sleep but instead just count to 1e9 (adjusted to your computation speed), because then preemptive multitasking forces a thread switch even though you never voluntarily give up the CPU.
So it is a programming error (IMHO: whether it counts as a race condition is philosophical in this case), but it will be hard to find out "who" is responsible. There are many things involved.
According to the documentation:
Process::detach prevents this by setting up a separate Ruby thread whose sole job is to reap the status of the process pid when it terminates.
NB: I can’t reproduce this behaviour on any of the operating systems available to me; I’m posting this as an answer just for the sake of formatting.
Since Process.detach(backtt) transparently creates a thread, I would suggest you try:
#!/usr/bin/ruby
backtt = fork { exec('mintty','/usr/bin/zsh','-i') }
# ⇓⇓⇓⇓⇓
Process.detach(backtt).join
exit
This is not a hack by any means (as opposed to the silly sleep), since you are likely aware that the underlying command should return more or less immediately. I am no Cygwin guru, but it might have some specific issues with threads, so let this thread be handled properly.
I'm neither a Ruby nor a Cygwin guy, so what I propose here may not work at all. Anyway, I guess you're not even hitting a Ruby- or Cygwin-specific bug here. In a program called "start" that I wrote in C many years ago, I hit the same issue. Here is a comment from the start of the function void daemonize_now():
/*
* This is a little bit trickier than I expected: If we simply call
* setsid(), it may fail! We have to fork() and exit(), and let our
* child call setsid().
*
 * Now the problem: If we fork() and exit() immediately, our child
 * will be killed before it ever had a chance to run. So we need to sleep a
 * little bit. Now the question: How long? I don't know an answer. So
 * let us be killed by our child :-)
*/
So, the strategy is this: let the parent wait on its child (which can be done immediately, before the child actually has a chance to do anything), and then let the child do the detaching part. How? Let it create a new process group (it will be reparented to the init process). That's what the setsid() call I mention in the comment is for. It will work something like this (C syntax; you should be able to look up the correct usage for Ruby and apply the needed changes yourself):
parentspid = getpid();
Fork = fork();
if (Fork) {
    if (Fork == -1) {        /* fork() failed */
        /* handle error */
    } else {                 /* parent, Fork is the pid of the child */
        int tmp;
        waitpid(0, &tmp, 0);
    }
} else {                     /* child */
    if (setsid() == -1) {
        /* handle error - possibly by doing nothing
           and just letting the parent wait ... */
    } else {
        kill(parentspid, SIGUSR1);
    }
    exec(...);
}
You can use any signal, that terminates the process (i.e. SIGKILL). I used SIGUSR1 and installed a signal handler that exit(0)s the parent process, so the caller gets a success message. Only caveat: You get a success even if the exec fails. However, that is a problem that can't really be worked around, since after a successful exec you can't signal your parent anything anymore. And since you don't know when the exec will have failed (if it fails), you're back at the race condition part.
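The same fork/setsid dance can be sketched in Python (POSIX only; a minimal illustration of the detaching pattern, not a full daemonizer, and `spawn_detached` is a name invented for this example):

```python
import os

def spawn_detached(argv):
    """Run argv in its own session, detached from the caller (POSIX only)."""
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)    # parent: reap the intermediate child, then return
        return
    os.setsid()               # first child: become a new session leader
    if os.fork() > 0:
        os._exit(0)           # first child exits; grandchild is reparented to init
    os.execvp(argv[0], argv)  # grandchild: replace itself with the target program
```

Because the parent waits only on the short-lived intermediate child, it never blocks on the real program, and the grandchild cannot become a zombie of the caller.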

Self-restarting MathKernel - is it possible in Mathematica?

This question comes from the recent question "Correct way to cap Mathematica memory use?"
I wonder, is it possible to programmatically restart MathKernel keeping the current FrontEnd process connected to new MathKernel process and evaluating some code in new MathKernel session? I mean a "transparent" restart which allows a user to continue working with the FrontEnd while having new fresh MathKernel process with some code from the previous kernel evaluated/evaluating in it?
The motivation for the question is to have a way to automate restarting MathKernel when it takes too much memory, without breaking the computation. In other words, the computation should be automatically continued in a new MathKernel process without interaction from the user (while keeping the user's ability to interact with Mathematica as before). The details of what code should be evaluated in the new kernel are of course specific to each computational task. I am looking for a general solution for automatically continuing the computation.
From a comment by Arnoud Buzing yesterday, on Stack Exchange Mathematica chat, quoting entirely:
In a notebook, if you have multiple cells you can put Quit in a cell by itself and set this option:
SetOptions[$FrontEnd, "ClearEvaluationQueueOnKernelQuit" -> False]
Then if you have a cell above it and below it and select all three and evaluate, the kernel will Quit but the frontend evaluation queue will continue (and restart the kernel for the last cell).
-- Arnoud Buzing
The following approach runs one kernel to open a front-end with its own kernel, which is then closed and reopened, renewing the second kernel.
This file is the MathKernel input, C:\Temp\test4.m
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[8];
CloseFrontEnd[];
Pause[1];
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
Do[SelectionMove[nb, Next, Cell],{12}];
SelectionEvaluate[nb];
];
Pause[8];
CloseFrontEnd[];
Print["Completed"]
The demo notebook, C:\Temp\run.nb contains two cells:
x1 = 0;
Module[{},
While[x1 < 1000000,
If[Mod[x1, 100000] == 0, Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]]]
Print[x1]
x1 = 0;
Module[{},
While[x1 < 1000000,
If[Mod[x1, 100000] == 0, Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]]]
The initial kernel opens a front-end and runs the first cell, then it quits the front-end, reopens it and runs the second cell.
The whole thing can be run either by pasting (in one go) the MathKernel input into a kernel session, or it can be run from a batch file, e.g. C:\Temp\RunTest2.bat
@echo off
setlocal
set PATH=C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
echo Launching MathKernel %TIME%
start MathKernel -noprompt -initfile "C:\Temp\test4.m"
ping localhost -n 30 > nul
echo Terminating MathKernel %TIME%
taskkill /F /FI "IMAGENAME eq MathKernel.exe" > nul
endlocal
It's a little elaborate to set up, and in its current form it depends on knowing how long to wait before closing and restarting the second kernel.
Perhaps the parallel computation machinery could be used for this? Here is a crude set-up that illustrates the idea:
Needs["SubKernels`LocalKernels`"]
doSomeWork[input_] := {$KernelID, Length[input], RandomReal[]}
getTheJobDone[] :=
Module[{subkernel, initsub, resultSoFar = {}}
, initsub[] :=
( subkernel = LaunchKernels[LocalMachine[1]]
; DistributeDefinitions["Global`"]
)
; initsub[]
; While[Length[resultSoFar] < 1000
, DistributeDefinitions[resultSoFar]
; Quiet[ParallelEvaluate[doSomeWork[resultSoFar], subkernel]] /.
{ $Failed :> (Print@"Ouch!"; initsub[])
, r_ :> AppendTo[resultSoFar, r]
}
]
; CloseKernels[subkernel]
; resultSoFar
]
This is an over-elaborate setup to generate a list of 1,000 triples of numbers. getTheJobDone runs a loop that continues until the result list contains the desired number of elements. Each iteration of the loop is evaluated in a subkernel. If the subkernel evaluation fails, the subkernel is relaunched. Otherwise, its return value is added to the result list.
To try this out, evaluate:
getTheJobDone[]
To demonstrate the recovery mechanism, open the Parallel Kernel Status window and kill the subkernel from time to time. getTheJobDone will feel the pain and print Ouch! whenever the subkernel dies. However, the overall job continues and the final result is returned.
The error-handling here is very crude and would likely need to be bolstered in a real application. Also, I have not investigated whether really serious error conditions in the subkernels (like running out of memory) would have an adverse effect on the main kernel. If so, then perhaps subkernels could kill themselves if MemoryInUse[] exceeded a predetermined threshold.
Update - Isolating the Main Kernel From Subkernel Crashes
While playing around with this framework, I discovered that any use of shared variables between the main kernel and subkernel rendered Mathematica unstable should the subkernel crash. This includes the use of DistributeDefinitions[resultSoFar] as shown above, and also explicit shared variables using SetSharedVariable.
To work around this problem, I transmitted the resultSoFar through a file. This eliminated the synchronization between the two kernels with the net result that the main kernel remained blissfully unaware of a subkernel crash. It also had the nice side-effect of retaining the intermediate results in the event of a main kernel crash as well. Of course, it also makes the subkernel calls quite a bit slower. But that might not be a problem if each call to the subkernel performs a significant amount of work.
Here are the revised definitions:
Needs["SubKernels`LocalKernels`"]
doSomeWork[] := {$KernelID, Length[Get[$resultFile]], RandomReal[]}
$resultFile = "/some/place/results.dat";
getTheJobDone[] :=
Module[{subkernel, initsub, resultSoFar = {}}
, initsub[] :=
( subkernel = LaunchKernels[LocalMachine[1]]
; DistributeDefinitions["Global`"]
)
; initsub[]
; While[Length[resultSoFar] < 1000
, Put[resultSoFar, $resultFile]
; Quiet[ParallelEvaluate[doSomeWork[], subkernel]] /.
{ $Failed :> (Print@"Ouch!"; CloseKernels[subkernel]; initsub[])
, r_ :> AppendTo[resultSoFar, r]
}
]
; CloseKernels[subkernel]
; resultSoFar
]
I have a similar requirement when I run a CUDAFunction in a long loop and CUDALink runs out of memory (similar to https://mathematica.stackexchange.com/questions/31412/cudalink-ran-out-of-available-memory). There is no improvement in the memory leak even with the latest Mathematica 10.4. I figured out a workaround and hope you may find it useful. The idea is to use a bash script to call a Mathematica program (run in batch mode) multiple times, passing parameters from the bash script. Here are detailed instructions and a demo (this is for Windows):
To use bash scripts on Windows you need to install Cygwin (https://cygwin.com/install.html).
Convert your Mathematica notebook to a package (.m) to be able to use it in script mode. If you save your notebook using "Save as..", all the commands will be converted to comments (this was noted by Wolfram Research), so it's better to create a package (File -> New -> Package) and then copy and paste your commands into it.
Write the bash script using the vi editor (instead of Notepad or gedit on Windows) to avoid problems with "\r" line endings (http://www.linuxquestions.org/questions/programming-9/shell-scripts-in-windows-cygwin-607659/).
Here is a demo of the test.m file
str = $CommandLine;
len = Length[str];
Do[
  If[str[[i]] == "-start",
    start = ToExpression[str[[i + 1]]];
    Pause[start];
    Print["Done in ", start, " seconds"];
  ];
, {i, 2, len - 1}];
This Mathematica code reads the parameters from the command line and uses them in the calculation.
Here is the bash script (script.sh) to run test.m many times with different parameters.
#!/bin/bash
for ((i=2;i<10;i+=2))
do
math -script test.m -start $i
done
In the Cygwin terminal, type "chmod a+x script.sh" to make the script executable; then you can run it by typing "./script.sh".
You can programmatically terminate the kernel using Exit[]. The front end (notebook) will automatically start a new kernel when you next try to evaluate an expression.
Preserving "some code from the previous kernel" is going to be more difficult. You have to decide what you want to preserve. If you think you want to preserve everything, then there's no point in restarting the kernel. If you know what definitions you want to save, you can use DumpSave to write them to a file before terminating the kernel, and then use << to load that file into the new kernel.
On the other hand, if you know what definitions are taking up too much memory, you can use Unset, Clear, ClearAll, or Remove to remove those definitions. You can also set $HistoryLength to something smaller than Infinity (the default) if that's where your memory is going.
Sounds like a job for CleanSlate.
<< Utilities`CleanSlate`;
CleanSlate[]
From: http://library.wolfram.com/infocenter/TechNotes/4718/
"CleanSlate, tries to do everything possible to return the kernel to the state it was in when the CleanSlate.m package was initially loaded."

How to wait/block until a semaphore value reaches 0 in windows

Using the semop() function on Unix, it's possible to provide a sembuf struct with sem_op = 0. Essentially this means that the calling process will wait/block until the semaphore's value becomes zero. Is there an equivalent way to achieve this in Windows?
The specific use case I'm trying to implement is to wait until the number of readers reaches zero before letting a writer write. (yes, this is a somewhat unorthodox way to use semaphores; it's because there is no limit to the number of readers and so there's no set of constrained resources which is what semaphores are typically used to manage)
Documentation on unix semop system call can be found here:
http://codeidol.com/unix/advanced-programming-in-unix/Interprocess-Communication/-15.8.-Semaphores/
Assuming you have one writer thread, just have the writer thread gobble up the semaphore, i.e., grab the semaphore via WaitForSingleObject as many times as the count you initialized the semaphore with.
A Windows semaphore counts down from the maximum value (the maximum number of readers allowed) to zero. WaitXxx functions wait for a non-zero semaphore value and decrement it, ReleaseSemaphore increments the semaphore (allowing other threads waiting on the semaphore to unblock). It is not possible to wait on a Windows semaphore in a different way, so a Windows semaphore is probably the wrong choice of synchronization primitive in your case. On Vista/2008 you could use slim read-write locks; if you need to support earlier versions of Windows you'll have to roll your own.
I've never seen any function similar to that in the Win32 API.
I think the way to do this is to call WaitForSingleObject or similar and get a WAIT_OBJECT_0 the same number of times as the maximum count specified when the semaphore was created. You will then hold all the available "slots" and anyone else waiting on the semaphore will block.
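The gobble-the-semaphore idea can be sketched with Python's threading module (a sketch of the technique, not the Win32 API; MAX_READERS and the function names are assumptions for the example):

```python
import threading

MAX_READERS = 4
sem = threading.BoundedSemaphore(MAX_READERS)  # each active reader holds one slot

def writer_lock():
    # Acquire every slot; this blocks until all readers have released theirs,
    # which is exactly the moment the semaphore's value would reach zero.
    for _ in range(MAX_READERS):
        sem.acquire()

def writer_unlock():
    # Give all slots back so readers can resume.
    for _ in range(MAX_READERS):
        sem.release()
```

The trade-off is that the writer's acquisition is not atomic: while it is collecting slots, no new reader can start, but the writer may wait on each slot in turn.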
The specific use case I'm trying to implement is to wait until the number of readers reaches zero before letting a writer write.
Can you guarantee that the reader count will remain at zero until the writer is all done?
If so, you can implement the equivalent of SysV "wait-for-zero" behavior with a manual-reset event object, signaling the completion of the last reader. Maintain your own (synchronized) count of "active readers", decrementing as readers finish, and then signal the patiently waiting writer via SetEvent() when that count is zero.
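The manual-reset-event approach can be sketched with Python's threading primitives, where threading.Event behaves like a Windows manual-reset event (the class and method names here are invented for illustration):

```python
import threading

class ReaderTracker:
    """A writer blocks until the active-reader count drops to zero."""
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()   # protects _count
        self._zero = threading.Event()  # signaled while count == 0
        self._zero.set()                # no readers initially

    def reader_enter(self):
        with self._lock:
            self._count += 1
            self._zero.clear()          # readers active: un-signal the event

    def reader_exit(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                self._zero.set()        # last reader out: wake the writer (SetEvent)

    def wait_for_zero(self, timeout=None):
        return self._zero.wait(timeout)
```

A writer simply calls wait_for_zero() before writing; as the answer notes, this only works if no new reader can sneak in between the event firing and the write starting.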
If you can't guarantee that the readers will be well behaved, well, then you've got an unhappy race to deal with even with SysV sems.