Matlab bad performance under Jenkins - Windows

I have a Jenkins script execution step which processes out-data with Matlab to perform evaluation of test results.
When running the script from the command prompt it starts up and exits quite fast, but when executing the same script with the same arguments from Jenkins it performs extremely poorly. I get the Matlab welcome message in the "prompt only" window that appears, but nothing else within the two-hour timeout I have set for the job.
I have disabled the Jenkins Windows service on the node and am running the node process from the desktop, but there is no difference:
C:\Windows\System32\java.exe -jar c:\j-mpc\slave.jar -jnlpUrl http://<server>/slave-agent.jnlp -secret <xxxxx>
I also tried to increase the memory for the node process, but no change:
C:\Windows\System32\java.exe -Xmx2048m
When killing the process tree starting with bash, Process Explorer indicates that it is inherited from the java.exe / sh.exe tree, but there is a missing PID in between:
java.exe (<0.01%, 1 420 000K)
sh.exe (<0.01%, 2 140K)
bash.exe (<0.01%, 2 580K)
bash.exe ( , 2 580K)
python.exe ( , 6 044K)
python.exe ( , 4 800K)
matlab.exe ( , 1 844K)
MATLAB.exe (<0.01%, 167 324K)
Is there a hidden limitation on child processes that restricts memory or process usage when called from Jenkins? In other jobs I don't see the same limitations. Memory allocation for Matlab is very slow (growing from startup to a reasonable size of >100M takes about a minute).
(Have a screen dump from Process Explorer but I am not allowed to upload)
EDIT
I have also tried reducing the call to a single Windows command line from Jenkins (I suspected that the deep call stack was to blame), but with the same result.
matlab.exe -nodisplay -nosplash -nodesktop -wait -logfile "log_file.txt" -r "try script_file ;catch err; disp(err.message); end ; exit"

Solved by setting the LM_LICENSE_FILE environment variable in the Jenkins node setup.
(I found a thread about slow startup that pointed to this.)
Apparently the shell environment started by Jenkins does not completely match the one started from Explorer.
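For reference, a minimal sketch of the workaround as a Windows build step, assuming a hypothetical license server (the port@host value is a placeholder); the variable can equally be set in the node's "Environment variables" configuration instead:
rem Point Matlab at the license server directly so startup does not stall on license discovery.
set LM_LICENSE_FILE=27000@license-server.example.com
matlab.exe -nodisplay -nosplash -nodesktop -wait -logfile "log_file.txt" -r "try script_file ;catch err; disp(err.message); end ; exit"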

Related

How to prevent Powershell from killing its own background processes when spawned via SSH?

We have a Powershell script in C:\test\test.ps1. The script has the following content (nothing removed):
Start-Process -NoNewWindow powershell { sleep 30; }
sleep 10
When we open a command line window (cmd.exe) and execute that script by the following command
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
we can see in the Windows task manager (as well as Sysinternals Process Explorer) that the script behaves as expected:
Immediately after having executed the command above, two new entries appear in the process list, one being Powershell executing the "main" script (test.ps1), and one being Powershell executing the "background script" ({ sleep 30; }).
When 10 seconds have passed, the first entry (related to test.ps1) disappears from the process list, while the second entry remains in the process list.
When additional 20 seconds have passed (that is, 30 seconds in sum), the second entry (related to { sleep 30; }) also disappears from the process list.
This is the expected behavior, because Start-Process starts new processes in the background no matter what, unless -Wait is given. So far, so good.
But now we have a hairy problem which has already cost us two days of debugging until we finally figured out the reason for the misbehavior of one of our scripts:
Actually, test.ps1 is executed via SSH.
That is, we have installed Microsoft's implementation of the OpenSSH server on a Windows Server 2019 and have configured it correctly. Using SSH clients on other machines (Linux and Windows), we can log into the Windows Server, and we can execute test.ps1 on the server via SSH by executing the following command on the clients:
ssh -i <appropriate_key> administrator@ip.of.windows.server c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
When observing the task manager on the Windows Server, we can again see the two new entries in the process list as described above as soon as this command is executed on a client.
However, both entries then disappear from the process list on the server after 10 seconds.
This means that the background process ({ sleep 30; }) gets killed as soon as the main process ends. This is the opposite of what is documented and of what should happen, and we really need to prevent it.
So the question is:
How can we change test.ps1 so that the background process ({ sleep 30; }) does not get killed under any circumstances when the script ends, even when the script is started via SSH?
Some side notes:
This is not an academic example. Actually, we have a fairly complex script system in place on that server which consists of about a dozen PowerShell scripts, one of them being the "main" script which executes the other scripts in the background as shown above; the background scripts themselves in turn might start further background scripts.
It is important to not start the main script in the background via SSH in the first place. The clients must process the output and the exit code of the main script, and must wait until the main script has done some work and returns.
That means that we must use Powershell's capabilities to kick off the background processes in the main script (or to kick off a third-party program which is able to launch background processes which don't get killed when the main script ends).
Note: The original advice to use jobs here was incorrect, as when the job is stopped along with the parent session, any child processes still get killed. As it was incorrect advice for this scenario, I've removed that content from this answer.
Unfortunately, when the PowerShell session ends, so do the child processes created from that session. However, when using PSRemoting we can tell Invoke-Command to run the command in a disconnected session with the -InDisconnectedSession parameter (aliased to -Disconnected):
$icArgs = @{
    ComputerName = 'RemoteComputerName'
    Credential   = ( Get-Credential )
    Disconnected = $true
}
Invoke-Command @icArgs { ping -t 127.0.0.1 }
From my testing with an infinite ping, if the parent PowerShell session closes, the remote session should continue executing. In my case, the ping command continued running until I stopped the process myself on the remote server.
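As a hedged follow-up sketch (computer name and credentials are placeholders, and it assumes the same WSMan remoting setup as above): Invoke-Command returns the disconnected session object, and you can reconnect to it later with Receive-PSSession to collect whatever output the remote command has produced in the meantime.
# Start the command in a disconnected session; the returned object represents that session.
$disconnected = Invoke-Command @icArgs { ping -t 127.0.0.1 }
# Later, possibly from a brand new PowerShell session, locate the session on the remote
# machine and reconnect to it, retrieving the output accumulated so far.
$session = Get-PSSession -ComputerName 'RemoteComputerName' -Credential ( Get-Credential )
Receive-PSSession -Session $session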
The downside here is that it doesn't seem you are using PowerShell Remoting; instead, you are invoking shell commands over SSH that happen to run a PowerShell script. You will also have to use PowerShell Core if you require the SSH transport for PowerShell remoting.

Windows "Start" command doesn't return from within "Execute shell script" step

Within a Kettle job we need to call a program that doesn't return until it is stopped. From a command line this can be done with the Start command of Windows:
Start "some title" /b "C:\windows-style\path with spaces\program.exe" unquoted_param -i -s "quoted param"
This works well by starting the program in another shell while the shell calling it returns and can continue. From within a Kettle job this should be possible too, I think, by simply running the above command in an Execute a shell script step with the Insert script option.
However instead of returning from running the program in a new shell, the execution waits for the program to finish. This is not what we want because while the program is running (it's a VPN connection) we need to perform some other steps before the program is stopped again.
I suspect this might have something to do with how Kettle runs the script, namely by putting the commands in a temporary batch file and then running that. At least that's how it is presented in the job log:
2019/09/17 09:40:24 - Step Name - Executing command : cmd.exe /C "C:\Users\username\AppData\Local\Temp\kettle_69458258-d91e-11e9-b9fb-5f418528413ashell.bat"
2019/09/17 09:40:25 - Step Name - (stdout)
2019/09/17 09:40:25 - Step Name - (stdout) C:\pentaho_path>Start "some title" /b "C:\windows-style\path with spaces\program.exe" unquoted_param -i -s "quoted param"
For a quick solution, you can use parallel execution in the job.
From the Start step (or whichever step precedes the need for the VPN), activate the option to run the subsequent steps in parallel. Then you can put the shell script step in its own branch while the rest of the job can continue (with a wait step on the other branch to allow the VPN to start).
From the question, you are probably running jobs from the Pentaho server. If you happen to run them from a scheduler with kitchen.bat, you could start the VPN before calling kitchen of course.
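As a rough sketch of that scheduler variant (all paths and the job file name are placeholders), the VPN program can be launched in the background with Start and the job run afterwards:
rem Launch the VPN client in the background, then give it time to establish the tunnel.
start "VPN" /b "C:\windows-style\path with spaces\program.exe" unquoted_param -i -s "quoted param"
timeout /t 30 /nobreak
rem Run the Kettle job; kitchen.bat returns once the job itself has finished.
call "C:\pentaho\data-integration\kitchen.bat" /file:"C:\jobs\my_job.kjb"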

Windows Server limit of maximum processes started by ShellExecuteEx

I have written a small Windows service in C++ that periodically runs a .cmd file via ShellExecuteEx. Sometimes an issue occurs: ShellExecuteEx returns true - everything is ok - but no child cmd.exe process is started and SHELLEXECUTEINFO.hProcess is NULL, although I specified SHELLEXECUTEINFO.fMask = SEE_MASK_NOCLOSEPROCESS. It didn't even start a simple .cmd:
date /T >> file.txt
Normally the .cmd files contain a php command to run a php script.
This issue occurred when the whole system was running approx. 100 child cmd.exe processes through this Windows service, which runs under the NETWORK_SERVICE account. Manually, from Explorer, I was able to run such a cmd process.
Is there a Windows system limit on the maximum number of processes started by ShellExecuteEx?

Auto Restart SH script on crash?

Hi there guys, I have a server running a game I've created, and it has three SH scripts that are required to run in separate terminals, so what I want to know is two things:
1: Is there a way I can get a single script that I double-click to launch all three scripts where I can see the shell (for debugging)?
2: Is there any way to have said scripts auto-restart when they exit or crash? (For fully automated operation when the server is unattended by a dev.)
Server Specs:
6 GB RAM, 60 GB SSD, 6-core CPU
Ubuntu 14.04
with VNC for desktop control
Here's an SH script for you.
#!/bin/bash

running=1

finish()
{
    running=0
}

# Stop the restart loop on Ctrl+C.
trap finish SIGINT

while (( running )); do
    # Execute the command here that starts your server.
    echo "Restarting server on crash.."
    sleep 5
done
You can run this script for each server in its own screen. That way you can see the console output of each one. For example:
screen -S YOURUNIQUENAME -m THESCRIPTABOVE.sh
In order to detach from the screen, hit CTRL + A then CTRL + D. You can get back to the screen by using screen -x YOURUNIQUENAME
For a nice guide on using the screen command, see this article: http://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/ . It even has a video to show how it's used.
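To cover the first question as well, here is a rough launcher sketch (the three script names are placeholders for your restart scripts built from the loop above): running it, or double-clicking it if your file manager executes scripts, starts each one in its own named screen session that you can attach to for debugging.
#!/bin/bash
# Start each server's restart script in its own detached, named screen session.
# The script names are placeholders; point them at your three scripts.
screen -dmS server1 ./start_server1.sh
screen -dmS server2 ./start_server2.sh
screen -dmS server3 ./start_server3.sh
# List the sessions; attach to any of them with: screen -x NAME
screen -ls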

Remotely running a Linux script from Windows and getting the execution result code

I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under Suse Linux) with ARCServe R15 (installed on Windows 2003R2SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post execution scripts to my backup jobs.
On failure of the script, ARCServe would be configured NOT to run the backup job, and on success, to run it. I have no problem with this.
The problem is, I want to make a Windows script (to be launched by ARCServe) for executing a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe wouldn't run.
- If the Linux script succeeds, I want my Windows script to end normally with error code 0, so my ARCServe job would run normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking on it, or launching it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe instead, the script seems never to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch file doesn't close.
In my mind, what's happening is that plink just opens the connection to the Linux machine, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I trying something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please, it's giving me headaches ...
If you install Cygwin, you could do it exactly like you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command will return on the calling client with the same return code that the command exited with on the remote end. If you use SSH keys for authentication instead of passwords, it can also be scripted without user interaction.
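A rough sketch of what HPC.bat could look like with Cygwin's ssh (the key path is a placeholder, and it assumes the private key has been converted from .ppk to OpenSSH format and that Cygwin's bin directory is on the PATH):
@echo off
rem Run the backup script on the cluster; ssh exits with the remote command's exit code.
ssh -i "C:\IST\admin\scripts\HPC\id_rsa" root@[cluster_name] /appli/admin/backup_admin
rem Hand that exit code back to ARCServe so it can decide whether to run the backup job.
exit /b %errorlevel%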
