I'm building an ASP.NET Core (.NET Core 1.1) application deployed on CentOS 7.2.
I have an action that launches an external process (a console application, also built with .NET Core) through System.Diagnostics.Process, without waiting for it to exit before returning.
The problem is that said process becomes and stays <defunct> even after it has finished executing. I don't want to wait for it to exit, because the process can take several minutes to do its job.
Here is a code sample:
//The process is writing its progress to a sqlite database using a
//previously generated guid which is used later in order to check
//the task's progress
ProcessStartInfo psi = new ProcessStartInfo();
//FileName should hold only the executable; arguments go in Arguments
psi.FileName = "/bin/sh";
psi.Arguments = "-c \"/path/to/process/executable -args\"";
psi.UseShellExecute = true;
psi.WorkingDirectory = "/path/to/process/";
psi.RedirectStandardOutput = false;
psi.RedirectStandardError = false;
psi.RedirectStandardInput = false;

using(Process proc = new Process { StartInfo = psi })
{
    proc.Start();
}
The process starts and does its job, writing the progress of its specific task to a sqlite database, which I can then probe to check the progress.
Everything runs fine, but after the process has finished executing I can see with ps -ef | grep executable that it is listed as <defunct>, and the only way to get rid of it is to kill its parent process, which is my ASP.NET Core MVC application.
Is there a way to start a process from a .NET Core application without waiting for it to exit, and force the parent application to reap the resulting <defunct> child process?
I somehow fixed it by allowing the process to raise events :
using(Process proc = new Process
{
    StartInfo = psi,
    //Allow the process to raise events, which I guess triggers
    //the reaping of the child process by the parent application
    EnableRaisingEvents = true
})
{
    proc.Start();
}
I created a method to prevent the system from sleeping as follows:
// requires: using System.Runtime.InteropServices;
[Flags]
private enum EXECUTION_STATE : uint
{
    ES_CONTINUOUS = 0x80000000,
    ES_DISPLAY_REQUIRED = 0x00000002
}

[DllImport("kernel32.dll", SetLastError = true)]
private static extern EXECUTION_STATE SetThreadExecutionState(EXECUTION_STATE esFlags);

public static void KeepSystemAwake(bool bEnable)
{
    if (bEnable)
    {
        // Keep the display on until this requirement is cleared
        SetThreadExecutionState(EXECUTION_STATE.ES_DISPLAY_REQUIRED | EXECUTION_STATE.ES_CONTINUOUS);
    }
    else
    {
        // Clear the continuous requirement set above
        SetThreadExecutionState(EXECUTION_STATE.ES_CONTINUOUS);
    }
}
The method prevents the system from sleeping, but when I call the ES_CONTINUOUS part of the method, the system does not sleep at all when I want it to behave normally. What am I missing? I'm running this code in a different thread (Timer).
I'm running this code in a different thread (Timer)
If you're using something like a System.Threading.Timer callback, it will be called on different (read: arbitrary) threads.
From MSDN:
The callback method executed by the timer should be reentrant, because it is called on ThreadPool threads.
Make sure you're calling SetThreadExecutionState for the same thread. Ideally, you'll serialise calls onto one thread (like the main thread).
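The general pattern of funnelling calls onto one dedicated thread can be sketched as follows (in Python for brevity; the actual SetThreadExecutionState P/Invoke is Windows-specific, so a stand-in function records which thread each call ran on):

```python
import queue
import threading

class SingleThreadExecutor:
    """Runs every submitted call on one dedicated thread, in submission order."""
    def __init__(self):
        self._q = queue.Queue()
        self._t = threading.Thread(target=self._run, daemon=True)
        self._t.start()

    def _run(self):
        while True:
            fn, args = self._q.get()
            if fn is None:          # sentinel: stop the worker
                break
            fn(*args)

    def submit(self, fn, *args):
        self._q.put((fn, args))

    def shutdown(self):
        self._q.put((None, ()))
        self._t.join()

# Hypothetical stand-in for the P/Invoked SetThreadExecutionState call:
calls = []
def set_execution_state(flags):
    calls.append((threading.get_ident(), flags))

executor = SingleThreadExecutor()
executor.submit(set_execution_state, "ES_DISPLAY_REQUIRED|ES_CONTINUOUS")
executor.submit(set_execution_state, "ES_CONTINUOUS")
executor.shutdown()
```

Because both calls run on the same worker thread, the second one actually clears the state the first one set, which is exactly what a Timer callback on arbitrary pool threads cannot guarantee.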
I am working on a web application, using the tool TestComplete with VBScript.
pageTab = Sys.Process("iexplore").IEFrame(0).CommandBar.TabBand.TabButton("Tieto Client Manager").Enabled
do while(pageTab <> True)
  aqUtils.Delay 500 'avoid a tight busy-wait loop
  Sys.Process("Explorer").Refresh
  pageTab = Sys.Process("iexplore").IEFrame(0).CommandBar.TabBand.TabButton("Tieto Client Manager").Enabled
  Sys.Process("iexplore").IEFrame(0).CommandBar.TabBand.TabButton("Tieto Client Manager").Refresh
loop

pageBusyState = Sys.Process("iexplore", 2).Page("*").Busy
do while(pageBusyState <> False)
  aqUtils.Delay 500
  pageBusyState = Sys.Process("iexplore", 2).Page("*").Busy
loop
With this code I can wait for the new page, but I am not able to wait for the controls loading on the page.
The best approach to wait until a dynamic page is ready is to wait for a specific object on that page. For example, this can be the first object you need to work with on the page. This approach is described, along with a couple of other approaches, in the Waiting For Web Pages help topic.
Timeout = False
intTimerStart = Now 'Start the timeout timer
'Check whether the IEXPLORE process is running
If Sys.Process("IEXPLORE").Exists Then
  Set obj = Sys.Process("IEXPLORE").Page("*")
  Set PageObj = Eval(obj.FullName)
  'Set default timeout (in seconds, to match DateDiff("s", ...) below)
  intDefaultTimeout = 1000
  'Loop until the page object's ReadyState = 4 or the timeout elapses
  Do
    aqUtils.Delay 500 'avoid a tight busy-wait loop
    Set PageObj = Sys.Process("IEXPLORE").Page("*")
    'Check for timeout
    If aqConvert.StrToInt(DateDiff("s", intTimerStart, Now)) >= aqConvert.StrToInt(intDefaultTimeout) Then
      Timeout = True
    End If
  Loop Until PageObj.ReadyState = 4 Or Timeout = True
Else
  'Check whether a second iexplore process is running
  If Sys.Process("iexplore", 2).Exists Then
    Set obj = Sys.Process("iexplore", 2).Page("*")
    Set PageObj = Eval(obj.FullName)
    'Set default timeout
    intDefaultTimeout = Project.Variables.prjDefaultTimeout
    'Loop until ReadyState = 4 (page fully loaded, request finished and response ready) or the timeout elapses
    Do
      aqUtils.Delay 500
      Set PageObj = Sys.Process("iexplore", 2).Page("*")
      If aqConvert.StrToInt(DateDiff("s", intTimerStart, Now)) >= aqConvert.StrToInt(intDefaultTimeout) Then
        Timeout = True
      End If
      'Check whether the page is still busy or fully loaded
    Loop Until PageObj.ReadyState = 4 Or Timeout = True
  End If
End If
'Call the Activate method on the page object
PageObj.Activate
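The structure of the loop above (poll a readiness condition until it holds or a timeout elapses) is a general pattern, independent of TestComplete. A language-neutral sketch in Python:

```python
import time

def wait_until(condition, timeout_s, poll_s=0.25):
    """Poll condition() until it returns True or timeout_s seconds elapse.

    Returns True if the condition became true, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

# Example (hypothetical page object, analogous to PageObj.ReadyState = 4):
# ready = wait_until(lambda: page.ReadyState == 4, timeout_s=10)
```

Sleeping between polls keeps the loop from pegging a CPU core, which is why the VBScript version benefits from a Delay call in the loop body.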
Bash always drives me crazy; I don't understand it.
I basically want to do this (I'm not using any specific syntax; it's just pseudocode to explain my problem):
processes_count = 20;
for (i = 0; i < processes_count; i++)
{
php -f file.php "{$i}-{$processes_count}" &
proc_id[i] = $!
}
The loop above starts the processes. The next one should keep the processes alive forever!
while(true)
{
foreach(proc_id as id)
{
if(!exist(proc_id[id]))
{
php -f file.php "{$id}-{$processes_count}" &
proc_id[id] = $!
}
}
sleep 5
}
If someone can help translate this into bash, Python, or something similar, thank you :)
I don't think you can do that, because bash doesn't provide a way to 'wait for any one child process to die and tell me which one it was that died'. The nearest approach is wait:
wait
wait [jobspec or pid ...]
Wait until the child process specified by each process id pid or job specification
jobspec exits and return the exit status of the last command waited for. If a
job spec is given, all processes in the job are waited for. If no arguments are
given, all currently active child processes are waited for, and the return status
is zero. If neither jobspec nor pid specifies an active child process of the shell,
the return status is 127.
This means you can wait for a specific child to die, or you can wait for all children to die, but you can't do what you want.
If you drop into Perl or Python, you can do it, using the wait system call.
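Following that suggestion, a minimal Python sketch of the respawn loop might look like this. A short-lived interpreter command stands in for `php -f file.php`, and the demo restarts each slot once rather than looping forever:

```python
import os
import subprocess
import sys

PROCESSES = 3

def launch(slot):
    # Stand-in for: php -f file.php "<slot>-<PROCESSES>" &
    return subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(0.1)"])

procs = {}                          # pid -> slot number
for i in range(PROCESSES):
    procs[launch(i).pid] = i

restarts = 0
while restarts < PROCESSES:         # demo: stop after a few restarts;
                                    # a real supervisor would loop forever
    pid, status = os.wait()         # block until ANY child dies; returns its pid
    slot = procs.pop(pid)
    procs[launch(slot).pid] = slot  # relaunch the slot that died
    restarts += 1

for pid in list(procs):             # clean up the demo's replacement children
    os.waitpid(pid, 0)
```

The key primitive is os.wait(), which blocks until any child exits and reports which one, exactly the operation the answer notes plain bash lacks.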
I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files.
The important hypothesis is the following: I have an executable file; it can be an application written in any language (.NET or not: Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by measuring execution times.
I did this:
Function Time-It {
Param ([string]$ProgramPath, [string]$Arguments)
$Watch = New-Object System.Diagnostics.Stopwatch
$NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
$Watch.Start() # Starts the timer
[System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
$Watch.Stop() # Stops the timer
# Collecting timings
$Ticks = $Watch.ElapsedTicks
$NSecs = $Watch.ElapsedTicks * $NsecPerTick
Write-Output "Program executed: time is: $Nsecs ns ($Ticks ticks)"
}
This function uses stopwatch.
Well, the function accepts a program path, the stopwatch is started, the program is run, and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, so the next instruction (stopping the watch) does not wait for the application to finish. A new process is created...
I need to stop the timer once the program ends.
I thought the Process class might hold some info regarding the execution times... no luck...
How to solve this?
You can use Process.WaitForExit()
$proc = new-object "System.Diagnostics.Process"
$proc.StartInfo.FileName = "notepad.exe"
$proc.StartInfo.UseShellExecute = $false
$proc.Start()
$proc.WaitForExit()
Here's kprobst's answer, combined with the Measure-Command cmdlet, for a complete solution:
$proc = new-object "System.Diagnostics.Process"
$proc.StartInfo.FileName = "notepad.exe"
$proc.StartInfo.UseShellExecute = $false
$timeSpan = (Measure-Command {
$proc.Start()
$proc.WaitForExit()
}
);
"Program executed: Time is {0} seconds" -f $timeSpan.TotalSeconds;
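For comparison, the same pattern (start the child, block until it exits, then read the elapsed time) looks like this in Python, where subprocess.run waits for the process by default, unlike Process.Start():

```python
import subprocess
import sys
import time

start = time.perf_counter()
# run() starts the child AND waits for it to exit before returning
subprocess.run([sys.executable, "-c", "import time; time.sleep(0.2)"])
elapsed = time.perf_counter() - start
print("Program executed: time is %.3f s" % elapsed)
```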
In a C++ Windows app, I launch several long-running child processes (currently I use CreateProcess(...) to do this).
I want the child processes to be automatically closed if my main processes crashes or is closed.
Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up.
How do I do this?
The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all processes when the main application ends (when its handles are cleaned up). This code should only be run once:
HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBAL
if( ghJob == NULL)
{
::MessageBox( 0, "Could not create job object", "TEST", MB_OK);
}
else
{
JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 };
// Configure all child processes associated with the job to terminate when
// the last handle to the job is closed
jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli)))
{
::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK);
}
}
Then when each child process is created, execute the following code to launch each child process and add it to the job object:
STARTUPINFO info={sizeof(info)};
PROCESS_INFORMATION processInfo;
// Launch child process - example is notepad.exe
if (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo))
{
::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK);
if(ghJob)
{
if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess))
{
::MessageBox( 0, "Could not AssignProcessToJobObject", "TEST", MB_OK);
}
}
// Can we free handles now? Not sure about this.
//CloseHandle(processInfo.hProcess);
CloseHandle(processInfo.hThread);
}
VISTA NOTE: See "AssignProcessToJobObject always return 'access denied' on Vista" if you encounter access-denied issues with AssignProcessToJobObject() on Vista.
One somewhat hackish solution would be for the parent process to attach to each child as a debugger (use DebugActiveProcess). When a debugger terminates all its debuggee processes are terminated as well.
A better solution (assuming you wrote the child processes as well) would be to have the child processes monitor the parent and exit if it goes away.
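The monitor-the-parent approach can be sketched as below. This is a POSIX-style sketch in Python for illustration: os.kill(pid, 0) is an existence check only on POSIX (on Windows you would instead wait on a handle to the parent process, as the next answer notes), and a real child should receive the parent's PID explicitly, e.g. on its command line:

```python
import os
import subprocess
import sys
import time

def watch_parent(parent_pid, poll=0.05):
    """Block until no process with parent_pid exists, then return."""
    while True:
        try:
            os.kill(parent_pid, 0)   # POSIX: signal 0 checks existence only
        except ProcessLookupError:
            return                   # the watched process is gone
        time.sleep(poll)

# Demo: watch a short-lived stand-in "parent" and confirm we unblock.
stand_in = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
start = time.monotonic()
stand_in.wait()                  # reap it so its pid actually disappears
watch_parent(stand_in.pid)       # returns promptly: the pid is gone
blocked = time.monotonic() - start
```

In a real child process this loop would run on a background thread and trigger a clean shutdown when it returns.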
Windows Job Objects sounds like a good place to start. The name of the Job Object would have to be well-known, or passed to the children (or the handle inherited). The children would need to notice when the parent dies, either through a failed IPC "heartbeat" or just WFMO/WFSO on the parent's process handle. At that point any child process could call TerminateJobObject to bring down the whole group.
You can keep a separate watchdog process running. Its only task is watching the current process space to spot situations like you describe. It could even re-launch the original application after a crash or provide different options to the user, collect debug information, etc. Just try to keep it simple enough so that you don't need a second watchdog to watch the first one.
You can assign a job to the parent process before creating processes:
static HANDLE hjob_kill_on_job_close=INVALID_HANDLE_VALUE;
void init(){
hjob_kill_on_job_close = CreateJobObject(NULL, NULL);
if (hjob_kill_on_job_close){
JOBOBJECT_EXTENDED_LIMIT_INFORMATION jobli = { 0 };
jobli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
SetInformationJobObject(hjob_kill_on_job_close,
JobObjectExtendedLimitInformation,
&jobli, sizeof(jobli));
AssignProcessToJobObject(hjob_kill_on_job_close, GetCurrentProcess());
}
}
void deinit(){
if (hjob_kill_on_job_close) {
CloseHandle(hjob_kill_on_job_close);
}
}
JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE causes all processes associated with the job to terminate when the last handle to the job is closed. By default, all child processes will be assigned to the job automatically, unless you passed CREATE_BREAKAWAY_FROM_JOB when calling CreateProcess. See https://learn.microsoft.com/en-us/windows/win32/procthread/process-creation-flags for more information about CREATE_BREAKAWAY_FROM_JOB.
You can use Process Explorer from Sysinternals to verify that all processes are assigned to the job.
You'd probably have to keep a list of the processes you start, and kill them off one by one when you exit your program. I'm not sure of the specifics of doing this in C++, but it shouldn't be hard. The difficult part would probably be ensuring that child processes are shut down in the case of an application crash. .NET has the ability to register a function that gets called when an unhandled exception occurs. I'm not sure if C++ offers the same capability.
You could encapsulate each process in a C++ object and keep a list of them in global scope. The destructors can shut down each process. That will work fine if the program exits normally, but if it crashes, all bets are off.
Here is a rough example:
class myprocess
{
public:
    explicit myprocess(HANDLE hProcess)
        : _hProcess(hProcess)
    { }

    ~myprocess()
    {
        TerminateProcess(_hProcess, 0);
        CloseHandle(_hProcess); // don't leak the process handle
    }

    // Copying would terminate the process twice; forbid it.
    myprocess(const myprocess&) = delete;
    myprocess& operator=(const myprocess&) = delete;

private:
    HANDLE _hProcess;
};
std::list<myprocess> allprocesses;
Then whenever you launch one, call allprocesses.emplace_back(hProcess);