I launch an instance via the API and poll for when it enters the "passed" state like this:
while ((Get-EC2InstanceStatus -InstanceId $InstanceId -Region $Region).status.Details.status.value -ne 'passed') {
    Start-Sleep -Seconds 20
    'Instance ' + $InstanceId + ": waiting for passed state ($(((Get-Date) - $StartTime).TotalSeconds) elapsed)"
}
I have a script that runs locally on the instance when it is launched, and I was wondering whether there is a way that script could control when the instance enters the "passed" state.
Is this available from the PowerShell API?
I was thinking I could have the script postpone the transition to "passed" until it is finished.
You can influence the status even though the feature is not meant for that purpose. The instance has to be in the running state before you can report a status for it. I have not tried this myself; check the ReportInstanceStatus API.
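In the AWS Tools for PowerShell, ReportInstanceStatus is exposed as the Send-EC2InstanceStatus cmdlet. A rough sketch of what the on-instance script might call is below; the parameter names and reason code are from memory, so verify them with Get-Help Send-EC2InstanceStatus, and note that this only records user-reported status (visible via DescribeInstanceStatus) rather than driving EC2's own reachability checks, so it may not actually delay the "passed" result:
# Sketch only: report this instance as impaired while the startup script is still working,
# then report it as ok once the work has finished. $InstanceId is the instance's own ID.
Send-EC2InstanceStatus -Instance $InstanceId -Status impaired -ReasonCode instance-stuck-in-state

# ... long-running startup work ...

Send-EC2InstanceStatus -Instance $InstanceId -Status ok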
I have a data processing app which takes many hours to run. One of our customers is running it on an AWS EC2 Windows 2012 instance which has a shutdown schedule defined. As a result, it keeps getting cut off part way through each run.
This doesn't pose a problem for the data processing itself: each data row is processed atomically, and it can be safely restarted afterwards without re-doing any of the data rows already processed. However, it does write lots of summary data to disk at the end of the process, which I could really do with seeing, and this data is not getting written.
The app is written in VB.NET WinForms. There is a form that displays progress on screen, and the processing logic is done via a BackgroundWorker component.
The form handles cancellation events as follows:
Private Sub MyBase_FormClosing(sender As Object, e As FormClosingEventArgs) Handles MyBase.FormClosing
    ' If the user tried to close the app, ask if they are sure, and if not, then don't close down.
    If e.CloseReason = CloseReason.UserClosing AndAlso ContainsFocus AndAlso Not PromptToClose() Then
        e.Cancel = True
        Return
    End If
    ' If the background worker is running, then politely request it to stop. Otherwise, let the form close down.
    If worker.IsBusy Then
        If Not worker.CancellationPending Then
            worker.CancelAsync()
        End If
        e.Cancel = True
    Else
        Environment.ExitCode = AppExitCode.Cancelled
    End If
End Sub
The background thread polls for cancellation once per data row processed, as follows:
Private Sub ReportProgress(state As UIState)
    worker.ReportProgress(0, state)
    If worker.CancellationPending Then
        Throw New AbortException ' custom exception type
    End If
End Sub
The top-level Catch block in the background thread handles all success and failure paths appropriately, writing all summary data to disk, before exiting the thread.
I have tested this extensively outside AWS. In particular, I have verified that, if you end the process via Task Manager, then it gracefully shuts down straight away without prompting the user, and writes all summary data to disk as expected. The graceful shutdown takes a few seconds at most.
My problem is, the summary data is not written to disk when AWS shuts down the EC2 instance on schedule. I don't know exactly what happens during the EC2 shutdown process, and my searching of the EC2 documentation has not yet shed light on this. I guess it could be one of:
AWS or Windows is terminating running processes without sending WM_CLOSE messages
AWS or Windows is giving insufficient time to each process to shut down
Can anyone clarify how the EC2 shutdown process works, or suggest how I can improve my code to handle it?
I want to stop the user from running another instance of an already running program/service in Windows using PowerShell.
E.g.: I have Notepad open, and for a minute I want to disable the option to open Notepad again, since it is already running.
As of now, I can detect whether the program is running or not, and if not, I can open it for the user (code attached).
$processName = Get-Process notepad -ErrorAction SilentlyContinue
if ($processName) {
    Write-Host 'Process is already running!'
    # stop another instance of notepad from being opened, since it is already running
}
else {
    $userChoice = Read-Host 'Process is not running, should I start it? (Y/N) '
    if ($userChoice -eq 'Y') {
        Start-Process notepad
    }
    else {
        Write-Host 'No Problem!'
    }
}
But how can I disable the option for the user to open another instance of the same program?
Any lead would be helpful.
Since Windows doesn't have a built-in feature that prevents launching multiple copies of an executable, you have two options:
poll the process list every now and then and terminate extra instances of the application
create a wrapper for the application and use a mutex to prevent multiple copies
The first option has its caveats. If additional copies are launched, it takes on average half the polling interval to detect them. What's more, which of the processes should be terminated? The oldest? The youngest? Some other criterion?
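A rough sketch of the polling approach might look like this; it assumes Notepad as in the question and keeps the oldest instance, which is only one possible policy:
# Sketch: every 5 seconds, keep the oldest notepad process and stop any newer copies.
while ($true) {
    $copies = @(Get-Process notepad -ErrorAction SilentlyContinue | Sort-Object StartTime)
    if ($copies.Count -gt 1) {
        # Everything after the first (oldest) process is an extra instance
        $copies | Select-Object -Skip 1 | Stop-Process -Force
    }
    Start-Sleep -Seconds 5
}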
The second one can be circumvented easily by just launching the application itself.
The only real solution is to implement a single-instance feature in the application itself. Games often do this. For business software, be wary: users will hate you if there is even a single use case where running multiple instances would be useful. Yes, especially if that use case seems absurd.
As an example of a mutex-based launcher, consider the following function:
function Test-Mutex {
    # A named mutex is visible to every process on the machine, so whichever wrapper acquires it first wins.
    $mtx = New-Object System.Threading.Mutex($false, "SingleInstanceAppLauncher")
    if (-not $mtx.WaitOne(0, $false)) {
        Write-Host "Mutex already acquired, will not launch second instance!"
        Read-Host "Any key"
        $mtx.Close()
        return
    }
    Write-Host "Running instance #1"
    Read-Host "Any key"
    # Do stuff
    # Release the mutex when done so a later launch is allowed again
    $mtx.ReleaseMutex()
    $mtx.Close()
}
As with the caveat for option 2, any user can work around the limit by just executing the "do stuff" part directly. Remember, the wrapper prevents launching multiple instances of the wrapper, not of the "do stuff" itself.
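If the "do stuff" part just launches another executable, one way to keep the mutex held for the application's whole lifetime is to start the program and wait for it to exit; a minimal sketch, assuming Notepad as in the question:
# In place of "# Do stuff": launch the real application and block until it exits,
# so the mutex stays held for as long as the application is running.
Start-Process notepad -Wait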
I am trying to fix a Python 3 application where multiple processes and threads are created and controlled by various queues and pipes. I am trying to implement a controlled exit when someone tries to break the program with Ctrl-C. However, no matter what I do, it always hangs just at the end.
I've tried using the KeyboardInterrupt exception and catching signals.
The code below is part of the multiprocess code.
import signal
import sys
from multiprocessing import Process, Pipe, JoinableQueue as Queue, Event

class TaskExecutor(Process):
    def __init__(....)
        {inits}

    def signal_handler(self, sig, frame):
        print('TaskExecutor closing')
        self._in_p.close()
        sys.exit(1)

    def run(self):
        signal.signal(signal.SIGINT, self.signal_handler)
        signal.signal(signal.SIGTERM, self.signal_handler)
        while True:
            # Get the task group name from the task queue.
            try:
                ExecCmd = self._in_p.recv()  # type: TaskExecCmd
            except Exception as e:
                self._in_p.close()
                return
            if ExecCmd.Kill:
                self._log.info('{:30} : Kill Command received'.format(self.name))
                self._in_p.close()
                return
            else:
                {other code executing here}
I'm getting the print above saying that it's closing,
but I'm still getting a lot of different exceptions, which I try to catch but cannot.
I am looking for some documentation on how, and in which order, to shut down the child processes and the main process.
I know it's a very general question, but it's a very large application, so if there is anything I could test, I could narrow it down.
Regards
So after investigating this issue further, I found that in a situation where I had a pipe thread, a queue thread and 4 processes running, some of these processes could end up hanging when the application was terminated with Ctrl-C, even though the pipe and queue processes had already shut down.
In the multiprocessing documentation there is a warning about terminate():
Warning If this method is used when the associated process is using a
pipe or queue then the pipe or queue is liable to become corrupted and
may become unusable by other process. Similarly, if the process has
acquired a lock or semaphore etc. then terminating it is liable to
cause other processes to deadlock.
And I think this is what's happening.
I also found that even though I have a shutdown mechanism in my multiprocess class, the processes still running would of course be reported as alive (via is_alive()) even though I know the run() method had returned, i.e. something internal was hanging.
Now for the solution. My processes were, by design, not daemons, because I wanted to control their shutdown. However, I changed them to daemons so they would always be killed regardless. I first made any kill signal raise a ProgramKilled exception throughout my entire program:
def signal_handler(signum, frame):
    raise ProgramKilled('Task Executor killed')
I then changed the shutdown mechanism in my multiprocess class to:
while True:
    # Get the task group name from the task queue.
    try:
        # Reading from the pipe
        ExecCmd = self._in_p.recv()  # type: TaskExecCmd
    # If there is a fatal error, just close it all
    except BrokenPipeError:
        break
    # This can occur; close the pipe and break the loop
    except EOFError:
        self._in_p.close()
        break
    # Exception raised when a kill signal is detected.
    # Mark the process as killed (just waiting for the kill command from main).
    except ProgramKilled:
        self._log.info('{:30} : Died'.format(self.name))
        self._KilledStatus = True
        continue

    # Kill command from main received:
    # shut down all we can, ignoring exceptions.
    if ExecCmd.Kill:
        self._log.info('{:30} : Kill Command received'.format(self.name))
        try:
            self._in_p.close()
            self._out_p.join()
        except Exception:
            pass
        self._log.info('{:30} : Kill Command executed'.format(self.name))
        break
    elif not self._KilledStatus:
        {Execute code}

# When out of the loop, set the killed event
KilledEvent.set()
And in my main thread I have added the following clean up process.
import psutil  # used to terminate child processes that are still hanging

# Loop through all my resources
for ThreadInterfaces in ResourceThreadDict.values():
    # Test each process in each resource
    for ThreadIf in ThreadInterfaces:
        # Wait for its event to be set
        ThreadIf['KillEvent'].wait()
        # When the event has been received, see if the process is hanging.
        # We know at this point that everything has been closed and all data has been
        # purged correctly, so if it is still alive, terminate it.
        if ThreadIf['Thread'].is_alive():
            try:
                psutil.Process(ThreadIf['Thread'].pid).terminate()
            except (psutil.NoSuchProcess, AttributeError):
                pass
After a lot of testing I know it's really hard to control the termination of an app with multiple processes, because you simply do not know in which order all of your processes receive the signal.
I've tried, as far as possible, to save most of my data when the app is killed. Some would ask what I need that data for when manually terminating the app, but in this case the app runs a lot of external scripts and other applications, and any of those can lock it up, so you need to kill it manually but still retain the information about what has already been executed.
So this is my solution to my current problem with my current knowledge.
Any input or more in-depth knowledge about what is happening is welcome.
Please note that this app runs on both Linux and Windows.
Regards
Environment
JDK 8 on Windows
Steps
Create a Process instance with ProcessBuilder, and do some tasks using the process's output.
Call waitFor() to wait for this process to complete.
Use JNA + cmd to forcibly kill the process. I do this in a finally block to make sure that the process is always terminated.
Field f = process.getClass().getDeclaredField("handle");
f.setAccessible(true);
long handleValue = f.getLong(process);
WinNT.HANDLE handle = new WinNT.HANDLE();
handle.setPointer(Pointer.createConstant(handleValue));
Kernel32 kernel = Kernel32.INSTANCE;
int pid = kernel.GetProcessId(handle);
Process killPr = Runtime.getRuntime().exec("cmd /c taskkill /pid " + pid + " /f /t");
killPr.waitFor();
killPr.destroy();
Question
Is it safe to do the above steps? Will I kill another unrelated process in step 3? I debugged and noticed that the handle value of ProcessImpl is still valid after the process has exited. I'm worried that Windows will reuse the same handle once the real process has exited but the Process object has not yet been reclaimed by the JVM.
A process handle becomes available for reuse only after the last handle to the process is closed. As long as you can ensure that the Process object doesn't get cleaned up, killing the process by PID is fine. Also, there are cleaner ways to kill the process since you are already using JNA; see Kernel32.TerminateProcess.
The handle for the process will be closed in finalize; the Process class also has a destroy() method that triggers a TerminateProcess call. Mostly you don't have to do what you are doing here: calling destroy followed by waitFor should do the job.
I'm trying to script PowerPoint with PowerShell 2.0.
This site says there's a "PresentationOpen" event. However, Get-Member does not show this event. Also, when I try to do this:
register-objectevent $application PresentationOpen notification_event
it says: "Cannot register for event. An event with name 'PresentationOpen' does not exist."
Why is this event not accessible from PowerShell? Am I doing it wrong, and is there another way?
What I'm really trying to do is to wait until the presentation is fully loaded before I save it in another format. Not waiting causes PPT to freeze sometimes.
I'm grateful for any help!
PowerShell is pretty weak in COM support (it's a lot more like C# than it is like VB). In this case, you'll have to delegate the event. See the dispatches on this page: http://support.microsoft.com/kb/308825/EN-US/
There may be other (and better) ways to do this, but this should get you started:
$ppa = New-Object -ComObject PowerPoint.Application
$eventId = Register-ObjectEvent $ppa PresentationOpen -Action { "Hi" }
$ppa.Visible = 1
$ppa.Presentations.Open("Path\To\Presentation.ppt")
You would want to replace the script block after -Action on the second line with whatever code would do the processing/saving.
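For example, something along these lines could do the save from inside the handler. This is only a sketch: it assumes the opened Presentation arrives as the first event argument ($Event.SourceArgs[0]) and that 32 is the ppSaveAsPDF value in the PpSaveAsFileType enumeration, so verify both against the PowerPoint object model for your Office version:
$eventId = Register-ObjectEvent $ppa PresentationOpen -Action {
    # The event should pass the Presentation that was just opened as its first argument
    $pres = $Event.SourceArgs[0]
    # 32 = ppSaveAsPDF (check the PpSaveAsFileType enumeration for your version)
    $pres.SaveAs("Path\To\Presentation.pdf", 32)
}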
If there is any output from the event you have registered, you can deal with it through the Receive-Job cmdlet. Otherwise, you can simply add a loop similar to this right after the Open() method call to block further script execution until the slide deck has finished opening:
While ($eventId.State -ne "Completed") { Start-Sleep -Milliseconds 250 }
Receive-Job $eventId
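Once you are finished, it is probably also worth cleaning up the event subscription; assuming the job's Name holds the auto-generated source identifier, something like this should do it:
# Remove the event subscription when it is no longer needed
Unregister-Event -SourceIdentifier $eventId.Name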