cmd/batch - How to make the "taskkill" command blocking or non-blocking in the Windows command line? - windows

We have a group of machines running scripts. Since last weekend, most of our machines have failed to finish their task.
After comparing the working and non-working machines, we found something weird: the behavior of the taskkill command has changed from non-blocking to blocking on the broken machines.
For example, to kill a process:
taskkill /im notexist.exe /f: if the process 'notexist.exe' does not exist, the command returns immediately on working machines but waits about 5 seconds before returning on broken machines.
They share the same hardware, OS version, and 'taskkill.exe' version.
Is there anything in the registry that matters here? How can I make the behavior consistent?
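One way to keep a script from blocking on taskkill regardless of which behavior a machine exhibits (a workaround sketch, not an explanation of the underlying change) is to launch it asynchronously, or to check first whether the process exists:

```bat
rem Fire-and-forget: run taskkill in the background so the script
rem continues immediately, however long taskkill itself takes.
start "" /b taskkill /im notexist.exe /f

rem Alternative: only call taskkill when the process actually exists.
tasklist /fi "imagename eq notexist.exe" | find /i "notexist.exe" >nul
if not errorlevel 1 taskkill /im notexist.exe /f
```

Note the empty "" after start: the first quoted argument to start is treated as the window title, so it must be supplied explicitly when other arguments follow.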

Related

How to prevent Powershell from killing its own background processes when spawned via SSH?

We have a Powershell script in C:\test\test.ps1. The script has the following content (nothing removed):
Start-Process -NoNewWindow powershell { sleep 30; }
sleep 10
When we open a command line window (cmd.exe) and execute that script by the following command
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
we can see in the Windows task manager (as well as Sysinternals Process Explorer) that the script behaves as expected:
Immediately after having executed the command above, two new entries appear in the process list, one being Powershell executing the "main" script (test.ps1), and one being Powershell executing the "background script" ({ sleep 30; }).
When 10 seconds have passed, the first entry (related to test.ps1) disappears from the process list, while the second entry remains in the process list.
When additional 20 seconds have passed (that is, 30 seconds in sum), the second entry (related to { sleep 30; }) also disappears from the process list.
This is the expected behavior, because Start-Process starts new processes in the background no matter what, unless -Wait is given. So far, so good.
But now we have a hairy problem which already has cost us two days of debugging until we finally figured out the reason for the misbehavior of one of our scripts:
Actually, test.ps1 is executed via SSH.
That is, we have installed Microsoft's implementation of the OpenSSH server on a Windows Server 2019 and have configured it correctly. Using SSH clients on other machines (Linux and Windows), we can log into the Windows Server, and we can execute test.ps1 on the server via SSH by executing the following command on the clients:
ssh -i <appropriate_key> administrator@ip.of.windows.server c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
When observing the task manager on the Windows Server, we can again see the two new entries in the process list as described above as soon as this command is executed on a client.
However, both entries then disappear from the process list on the server after 10 seconds.
This means that the background process ({ sleep 30; }) gets killed as soon as the main process ends. This is the opposite of what is documented and of what should happen, and we really need to prevent it.
So the question is:
How can we change test.ps1 so that the background process ({ sleep 30; }) does not get killed under any circumstances when the script ends, even when the script is started via SSH?
Some side notes:
This is not an academic example. We actually have a fairly complex script system in place on that server, consisting of about a dozen PowerShell scripts, one of them being the "main" script which executes the other scripts in the background as shown above; the background scripts themselves in turn might start further background scripts.
It is important to not start the main script in the background via SSH in the first place. The clients must process the output and the exit code of the main script, and must wait until the main script has done some work and returns.
That means that we must use Powershell's capabilities to kick off the background processes in the main script (or to kick off a third-party program which is able to launch background processes which don't get killed when the main script ends).
Note: The original advice to use jobs here was incorrect, as when the job is stopped along with the parent session, any child processes still get killed. As it was incorrect advice for this scenario, I've removed that content from this answer.
Unfortunately, when the PowerShell session ends, child processes created from that session are terminated as well. However, when using PSRemoting we can tell Invoke-Command to run the command in a disconnected session with the -InDisconnectedSession parameter (aliased to -Disconnected):
$icArgs = @{
    ComputerName = 'RemoteComputerName'
    Credential = ( Get-Credential )
    Disconnected = $true
}
Invoke-Command @icArgs { ping -t 127.0.0.1 }
From my testing with an infinite ping, if the parent PowerShell session closes, the remote session should continue executing. In my case, the ping command continued running until I stopped the process myself on the remote server.
The downside is that you don't seem to be using PowerShell Remoting, but are instead invoking shell commands over SSH that happen to run a PowerShell script. Note also that you will have to use PowerShell Core if you require the SSH transport for PowerShell remoting.
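Another technique sometimes suggested for detaching a child process from the session that creates it is to create the process through WMI, so its parent is the WMI provider host rather than the SSH-spawned PowerShell. This is a sketch under that assumption; it has not been verified against the exact Server 2019 / OpenSSH setup described in the question:

```powershell
# Sketch (assumption): a process created via Win32_Process.Create is
# parented to the WMI provider host, not the calling PowerShell session,
# so it should survive when the SSH-spawned session ends.
$cmd = 'c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command "sleep 30"'
Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList $cmd
```

Whether the created process really outlives the SSH session teardown here would need testing, but unlike Start-Process this decouples the child from the caller's process tree.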

Long running scripts in Windows command line

I am running a script on the Windows command line that takes multiple hours to finish executing. During this time, I am required to keep my computer on or the script stops. I was wondering if there are any tools I can use that would keep the script running even if I put my computer to sleep (or shut it down). Thanks!
If the computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep

Windows Server limit of maximum processes started by ShellExecuteEx

I have written a small Windows service in C++ that periodically runs a .cmd file via ShellExecuteEx. Sometimes an issue occurs: ShellExecuteEx returns TRUE (everything looks OK), but no child cmd.exe process is started, and SHELLEXECUTEINFO.hProcess is NULL although I specified SHELLEXECUTEINFO.fMask = SEE_MASK_NOCLOSEPROCESS. It doesn't even start a simple .cmd:
date /T >> file.txt
Normally the .cmd files contain a php command that runs a PHP script.
The issue occurred when the whole system was running approximately 100 child cmd.exe processes through this Windows service, which runs under the NETWORK_SERVICE account. Manually, from Explorer, I was able to start such a cmd process.
Is there a Windows system limit on the maximum number of processes started by ShellExecuteEx?

In batch programming, can one command run before the previous command finishes executing?

In batch programming, does one command wait until it has completed before the next one is run? For example:
net stop wuauserv
net start wuauserv
Since net stop wuauserv takes a while to complete, is it given time to finish, or do I need another command to wait until it completes?
The NET STOP command does wait (or time out while waiting) for a service to stop or start.
You can check %ERRORLEVEL% after the command to get more information about whether there was a problem or it worked as expected.
In general most system command line tools return control once they are done executing. A few specialized programs will call into other services or systems and may return control before execution is complete. You will need to check the docs for whatever you are trying to run, but generally processes exit once the 'task' they perform is complete.
In a batch file, all commands are run sequentially, and execution waits for the command to complete.
In your example, net stop wuauserv would complete before net start wuauserv gets run.
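The restart from the question, with the error check mentioned above, could look like this sketch:

```bat
rem Stop the Windows Update service; net stop waits (or times out)
rem before returning, so the next line runs only afterwards.
net stop wuauserv
if errorlevel 1 echo Stopping the service failed or it was not running.

net start wuauserv
if errorlevel 1 echo Starting the service failed.
```

The `if errorlevel 1` test is true when the preceding command exited with code 1 or higher, i.e. with any failure.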
You could confirm that by running something you know will take a long time, such as
ping www.google.com
ping www.stackoverflow.com
and you'll see that the second ping does not start until the first completes.
In your case, yes: the second command will not execute until the first finishes.
However, GUI apps will start up and return control to the batch file.
For example,
PING localhost
NOTEPAD
DIR
The DIR command will execute even if NOTEPAD is still running.
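If you do want the batch file to wait for a GUI app, start /wait forces blocking behavior:

```bat
rem Plain NOTEPAD would return control immediately; with start /wait
rem the DIR below runs only after Notepad has been closed.
start /wait notepad
dir
```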

Running remotely Linux script from Windows and get execution result code

I have the current scenario to deal with:
I have to schedule the backup of my company's Linux-based server (running SUSE Linux) with ARCserve R15 (installed on Windows 2003 R2 SP2).
I know my backup software (ARCserve) lets me add pre/post-execution scripts to my backup jobs.
ARCserve can be configured NOT to run the backup job if the script fails, and to run it if the script succeeds. I have no problem with this.
The problem is, I want to make a Windows script (to be launched by ARCserve) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCserve won't run
- If the Linux script succeeds, I want my Windows script to end normally with error code 0, so my ARCserve job runs normally
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I launch this .bat manually, by double-clicking it or running it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCserve, the script never seems to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch file doesn't close.
In my mind, what's happening is that plink just opens the connection to the Linux machine, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I trying something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please, it's giving me headaches ...
If you install Cygwin, you could do it exactly as you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code that the command exited with on the remote end. If you use SSH key-based authentication instead of passwords, it can also be scripted without user interaction.
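The same idea can also be sketched with plink directly, dropping the start wrapper from the question (whose first quoted argument cmd treats as a window title rather than the program to run). The -batch switch disables interactive prompts so the call cannot hang waiting for input; paths and the [cluster_name] placeholder are taken from the question:

```bat
"C:\Program Files\PUTTY\plink.exe" -batch -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit /b %ERRORLEVEL%
```

plink propagates the remote command's exit status, so ARCserve should see a nonzero code when the Linux script fails.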
