I have written a small Windows service in C++ that periodically runs a .cmd file via ShellExecuteEx. Sometimes an issue occurs: ShellExecuteEx returns TRUE (so everything looks fine), but no child cmd.exe process is started, and SHELLEXECUTEINFO.hProcess is NULL although I specified SHELLEXECUTEINFO.fMask = SEE_MASK_NOCLOSEPROCESS. It didn't even start a simple .cmd containing only:
date /T >> file.txt
Normally the .cmd files contain a PHP command that runs a PHP script.
This issue occurred when the whole system was running approximately 100 child cmd.exe processes through this Windows service, which runs under the NETWORK_SERVICE account. Manually, from Explorer, I was able to start such a cmd process.
Is there a Windows system limit on the maximum number of processes started via ShellExecuteEx?
Related
We have a Powershell script in C:\test\test.ps1. The script has the following content (nothing removed):
Start-Process -NoNewWindow powershell { sleep 30; }
sleep 10
When we open a command line window (cmd.exe) and execute that script by the following command
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
we can see in the Windows task manager (as well as Sysinternals Process Explorer) that the script behaves as expected:
Immediately after having executed the command above, two new entries appear in the process list, one being Powershell executing the "main" script (test.ps1), and one being Powershell executing the "background script" ({ sleep 30; }).
When 10 seconds have passed, the first entry (related to test.ps1) disappears from the process list, while the second entry remains in the process list.
When an additional 20 seconds have passed (that is, 30 seconds in total), the second entry (related to { sleep 30; }) also disappears from the process list.
This is the expected behavior, because Start-Process always starts new processes in the background unless -Wait is given. So far, so good.
But now we have a hairy problem which has already cost us two days of debugging until we finally figured out the reason for the misbehavior of one of our scripts:
Actually, test.ps1 is executed via SSH.
That is, we have installed Microsoft's implementation of the OpenSSH server on a Windows Server 2019 and have configured it correctly. Using SSH clients on other machines (Linux and Windows), we can log into the Windows Server, and we can execute test.ps1 on the server via SSH by executing the following command on the clients:
ssh -i <appropriate_key> administrator@ip.of.windows.server c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
When observing the task manager on the Windows Server, we can again see the two new entries in the process list as described above as soon as this command is executed on a client.
However, both entries then disappear from the process list on the server after 10 seconds.
This means that the background process ({ sleep 30; }) gets killed as soon as the main process ends. This is the opposite of what is documented and of what should happen, and we really need to prevent it.
So the question is:
How can we change test.ps1 so that the background process ({ sleep 30; }) does not get killed under any circumstances when the script ends, even when the script is started via SSH?
Some side notes:
This is not an academic example. We actually have a fairly complex script system in place on that server which consists of about a dozen PowerShell scripts, one of them being the "main" script which executes the other scripts in the background as shown above; the background scripts themselves in turn might start further background scripts.
It is important not to start the main script in the background via SSH in the first place. The clients must process the output and the exit code of the main script, and must wait until the main script has done some work and returns.
That means that we must use Powershell's capabilities to kick off the background processes in the main script (or to kick off a third-party program which is able to launch background processes which don't get killed when the main script ends).
Note: The original advice to use jobs here was incorrect, as when the job is stopped along with the parent session, any child processes still get killed. As it was incorrect advice for this scenario, I've removed that content from this answer.
Unfortunately, when the PowerShell session ends, so do the child processes created from that session. However, when using PSRemoting we can tell Invoke-Command to run the command in a disconnected session with the -InDisconnectedSession parameter (aliased to -Disconnected):
$icArgs = @{
ComputerName = 'RemoteComputerName'
Credential = ( Get-Credential )
Disconnected = $true
}
Invoke-Command @icArgs { ping -t 127.0.0.1 }
From my testing with an infinite ping, if the parent PowerShell session closes, the remote session should continue executing. In my case, the ping command continued running until I stopped the process myself on the remote server.
The downside here is that you don't seem to be using PowerShell Remoting, but are instead invoking shell commands over SSH that happen to run a PowerShell script. Note also that you will have to use PowerShell Core if you require the SSH transport for PowerShell remoting.
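The question's side note also allows for a third-party launcher that starts background processes which outlive the parent. A minimal cross-platform sketch of that idea in Python (my own illustration, not part of the original answer; whether a detached child actually survives the sshd session teardown on Windows Server would still need testing):

```python
# Launch a child process that is not tied to the parent's lifetime.
# On Windows, DETACHED_PROCESS starts the child without a console and
# outside the parent's console context; on POSIX, start_new_session=True
# puts it in its own session (setsid), so it survives the parent's exit.
import os
import subprocess
import sys

if os.name == "nt":
    kwargs = {"creationflags": subprocess.DETACHED_PROCESS}
else:
    kwargs = {"start_new_session": True}

# Stand-in for the real background script ({ sleep 30; }).
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    **kwargs,
)
print("detached child pid:", child.pid)
```

A launcher like this could be invoked from the main script in place of Start-Process, at the cost of requiring Python on the server.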
We have a group of machines running scripts. Since last weekend, most of our machines fail to finish the task.
After comparing the working and non-working machines, we found out something weird: the behavior of the taskkill command has changed from non-blocking to blocking on the broken machines.
For example, to kill a process:
taskkill /im notexist.exe /f — if the process 'notexist.exe' does not exist, the command returns immediately on working machines, but waits about 5 seconds before returning on broken machines.
They share the same hardware, OS version, and 'taskkill.exe' version.
Does anything in the registry matter here? How can I make the behavior consistent?
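To pin down the difference, it may help to measure the delay programmatically on both groups of machines. A small Python timing harness (my own sketch, not from the original post; a portable stand-in command replaces taskkill so the example runs anywhere):

```python
# Time how long a command takes to return, to compare machines.
import subprocess
import sys
import time

def time_command(cmd):
    """Return (exit_code, elapsed_seconds) for a command given as a list."""
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True)
    return proc.returncode, time.monotonic() - start

# On the Windows machines you would time the real command:
#   time_command(["taskkill", "/im", "notexist.exe", "/f"])
# Portable stand-in so the sketch runs anywhere:
code, secs = time_command([sys.executable, "-c", "pass"])
print(f"exit={code} took={secs:.2f}s")
```

Running this on a working and a broken machine should show the ~5 second gap directly and make it easy to test registry or policy changes.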
I am running a script on the Windows command line that takes multiple hours to finish executing. During this time, I am required to keep my computer on, or the script stops. I was wondering if there are any tools I can use that would keep the script running even if I put my computer to sleep (or shut it down). Thanks!
If the computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds might include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep
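For the second workaround, a Windows program can ask the OS to stay awake while it runs via the SetThreadExecutionState API. A minimal Python sketch (my addition, assuming Python is available; it is a no-op on non-Windows systems):

```python
# Ask Windows not to sleep while this process is alive.
import ctypes
import os

def prevent_sleep():
    """On Windows, request that the system stay awake; returns True if requested."""
    ES_CONTINUOUS = 0x80000000       # effect persists until cleared
    ES_SYSTEM_REQUIRED = 0x00000001  # forces the system to stay awake
    if os.name != "nt":
        return False  # SetThreadExecutionState is Windows-only
    ctypes.windll.kernel32.SetThreadExecutionState(
        ES_CONTINUOUS | ES_SYSTEM_REQUIRED
    )
    return True

print("keep-awake requested:", prevent_sleep())
```

You would call this at the start of the long-running script; the request is automatically cleared when the process exits.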
I have a Jenkins script execution step which processes output data with MATLAB to perform evaluation of test results.
When running the script from a command prompt it starts up and exits quite fast, but when executing the same script with the same arguments from Jenkins it performs extremely poorly. I get the MATLAB welcome message in the "prompt only" window that appears, but nothing else within the two-hour timeout that I have set for the job.
I have disabled the Jenkins Windows service on the node and am running the node process from the desktop, but there is no difference:
C:\Windows\System32\java.exe -jar c:\j-mpc\slave.jar -jnlpUrl http://<server>/slave-agent.jnlp -secret <xxxxx>
I also tried to increase the memory for the node process, but no change:
C:\Windows\System32\java.exe -Xmx2048m
When killing the process tree starting with bash, Process Explorer indicates that it is inherited from the java.exe/sh.exe tree, but there is a missing PID in between:
java.exe (<0.01%, 1 420 000K)
sh.exe (<0.01%, 2 140K)
bash.exe (<0.01%, 2 580K)
bash.exe ( , 2 580K)
python.exe ( , 6 044K)
python.exe ( , 4 800K)
matlab.exe ( , 1 844K)
MATLAB.exe (<0.01%, 167 324K)
Is there a hidden limitation on child processes that restricts memory or process usage when called from Jenkins? In other jobs I don't see the same limitations. Memory allocation for MATLAB is very slow (growing from start to a reasonable size of >100 MB takes about a minute).
(I have a screen dump from Process Explorer but I am not allowed to upload it.)
EDIT
I have also tried to reduce the call to a single Windows command line from Jenkins (I suspected that the deep call stack was to blame), but got the same result.
matlab.exe -nodisplay -nosplash -nodesktop -wait -logfile "log_file.txt" -r "try script_file ;catch err; disp(err.message); end ; exit"
Solved by setting the LM_LICENSE_FILE environment variable in the Jenkins node setup.
(I found a thread about slow startup.)
Apparently the shell environment started by Jenkins does not completely match the one started from Explorer.
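One way to spot such discrepancies is to dump the environment from both contexts and diff the results. A small Python helper (my own sketch, not from the original post):

```python
# Dump the current process environment to a file so the shell
# spawned by Jenkins can be diffed against an interactive one.
import json
import os

def dump_env(path):
    """Write the environment of this process to `path` as sorted JSON."""
    with open(path, "w") as f:
        json.dump(dict(os.environ), f, indent=2, sort_keys=True)

# Run this once from a Jenkins build step and once from a normal
# command prompt, then diff the two files; missing variables such
# as license-server settings show up immediately.
dump_env("env.json")
```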
I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (running SUSE Linux) with ARCServe R15 (installed on Windows 2003 R2 SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post execution scripts to my backup-jobs.
If the script fails, ARCServe would be configured NOT to run the backup job; if it succeeds, the job would run. I have no problem with this part.
The problem is, I want to make a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If this Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run.
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job runs normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I launch this .bat manually, by double-clicking it or running it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script seems never to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch file never closes.
My guess is that plink just opens the connection to the Linux machine, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I trying something impossible?
So, do I have to proceed differently ?
Do I have to use PuTTY or Cygwin instead of plink?
Please, it's giving me headaches ...
If you install Cygwin, you can do it exactly as you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code that the command exited with on the remote end. If you use SSH shared keys for authentication instead of passwords, it can also be scripted without user interaction.
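The exit-code propagation this answer relies on can be sketched in Python (my illustration, not part of the original answer; a local command stands in for the actual ssh invocation so the example runs anywhere):

```python
# The caller sees the child's exit status unchanged; a wrapper script
# only has to pass it through. The local command below stands in for:
#   ssh someuser@remoteserver.com somecommand
import subprocess
import sys

proc = subprocess.run([sys.executable, "-c", "raise SystemExit(7)"])
print("stand-in remote command exited with:", proc.returncode)
# A wrapper batch file would finish with:  exit %errorlevel%
# so that ARCServe sees the Linux script's exit code directly.
```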