Spawning processes on Windows with Rust

I'm on Windows 10, and trying to spawn processes with std::process::Command. There are some apps that I want to open with Command::new("cmd"), and then pass in an argument. For example, I want to pass in start ms-settings:windowsupdate as well as start ms-settings:appsfeatures, which would open "Windows Update" and "Apps & Features" in "Windows Settings". However, as it stands, you cannot have more than one instance of "Windows Settings" open at a time. So, I want to open those specific processes one at a time, and when I close one process, I want the other one to spawn. The only way I've managed to do it is by doing:
use std::os::windows::process::CommandExt;
use std::process::Command;

fn main() {
    let processes = [
        "start ms-settings:windowsupdate",
        "start ms-settings:appsfeatures",
    ];

    for process in &processes {
        // /K keeps cmd open; CREATE_NEW_CONSOLE gives each child its own console,
        // so .status() blocks until that console window is closed.
        Command::new("cmd")
            .arg("/K")
            .arg(process)
            .creation_flags(0x00000010) // CREATE_NEW_CONSOLE
            .status()
            .expect("Process could not be spawned.");
    }
}
which works, but it will open a command prompt when the first process spawns, and the only way to spawn the next process is to close the command prompt that the first one opens (as opposed to just closing the Settings window itself). I've tried other flags in .creation_flags(), but the other flags open all processes at once, so only the last start ms-settings: process is opened, since there can't be more than one instance. Is there a way to spawn these processes one at a time, without having a command prompt also spawn?
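One untested direction (a sketch, not a confirmed answer): it assumes that cmd's start /WAIT actually blocks until the launched Settings window is closed, and it uses the CREATE_NO_WINDOW flag (0x08000000) to keep the intermediate cmd console hidden:

use std::os::windows::process::CommandExt;
use std::process::Command;

const CREATE_NO_WINDOW: u32 = 0x08000000;

fn main() {
    let pages = ["ms-settings:windowsupdate", "ms-settings:appsfeatures"];

    for page in pages {
        // /C runs the command and exits; "start /WAIT" is assumed (untested)
        // to block until the launched Settings window is closed.
        // CREATE_NO_WINDOW should keep the helper cmd console invisible.
        Command::new("cmd")
            .args(["/C", "start", "/WAIT", page])
            .creation_flags(CREATE_NO_WINDOW)
            .status()
            .expect("Process could not be spawned.");
    }
}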

Related

How to prevent Powershell from killing its own background processes when spawned via SSH?

We have a Powershell script in C:\test\test.ps1. The script has the following content (nothing removed):
Start-Process -NoNewWindow powershell { sleep 30; }
sleep 10
When we open a command line window (cmd.exe) and execute that script by the following command
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
we can see in the Windows task manager (as well as Sysinternals Process Explorer) that the script behaves as expected:
Immediately after having executed the command above, two new entries appear in the process list, one being Powershell executing the "main" script (test.ps1), and one being Powershell executing the "background script" ({ sleep 30; }).
When 10 seconds have passed, the first entry (related to test.ps1) disappears from the process list, while the second entry remains in the process list.
When an additional 20 seconds have passed (that is, 30 seconds in total), the second entry (related to { sleep 30; }) also disappears from the process list.
This is the expected behavior, because Start-Process starts new processes in the background no matter what, unless -Wait is given. So far, so good.
But now we have a hairy problem which has already cost us two days of debugging until we finally figured out the reason for the misbehavior of one of our scripts:
Actually, test.ps1 is executed via SSH.
That is, we have installed Microsoft's implementation of the OpenSSH server on a Windows Server 2019 and have configured it correctly. Using SSH clients on other machines (Linux and Windows), we can log into the Windows Server, and we can execute test.ps1 on the server via SSH by executing the following command on the clients:
ssh -i <appropriate_key> administrator@ip.of.windows.server c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\test\test.ps1
When observing the task manager on the Windows Server, we can again see the two new entries in the process list as described above as soon as this command is executed on a client.
However, both entries then disappear from the process list on the server after 10 seconds.
This means that the background process ({ sleep 30; }) gets killed as soon as the main process ends. This is the opposite of what is documented and of what should happen, and we really need to prevent it.
So the question is:
How can we change test.ps1 so that the background process ({ sleep 30; }) does not get killed under any circumstances when the script ends, even when the script is started via SSH?
Some side notes:
This is not an academic example. We actually have a fairly complex script system in place on that server which consists of about a dozen Powershell scripts, one of them being the "main" script which executes the other scripts in the background as shown above; the background scripts themselves in turn might start further background scripts.
It is not an option to start the main script itself in the background via SSH in the first place. The clients must process the output and the exit code of the main script, and must wait until the main script has done some work and returns.
That means that we must use Powershell's capabilities to kick off the background processes in the main script (or kick off a third-party program which is able to launch background processes that don't get killed when the main script ends).
Note: The original advice to use jobs here was incorrect, as when the job is stopped along with the parent session, any child processes still get killed. As it was incorrect advice for this scenario, I've removed that content from this answer.
Unfortunately, when the PowerShell session ends, so do any child processes created from that session. However, when using PSRemoting we can tell Invoke-Command to run the command in a disconnected session with the -InDisconnectedSession parameter (aliased to -Disconnected):
$icArgs = @{
    ComputerName = 'RemoteComputerName'
    Credential   = ( Get-Credential )
    Disconnected = $true
}
Invoke-Command @icArgs { ping -t 127.0.0.1 }
From my testing with an infinite ping, if the parent PowerShell session closes, the remote session should continue executing. In my case, the ping command continued running until I stopped the process myself on the remote server.
The downside here is that it doesn't seem that you are using PowerShell Remoting, but instead are invoking shell commands over SSH that happen to run a PowerShell script. You will also have to use PowerShell Core if you require the SSH transport for PowerShell remoting.

Getting the pid from application process launched from open command on macOS

From within a Java application on macOS, I use Runtime.getRuntime().exec("open -Wn filename") to launch a file with its default application, let's call it the viewing application (for example Adobe Reader for PDF). That works fine.
My issue arises, when I want to close the viewing application (for example AdobeReader).
The problem is that the open command itself is launched as a child process of the Java application, but open launches the viewing application not as a child process, but as a child of launchd(1). As a result, when I destroy the process from the Java application, only the open process is killed, but not the viewing application.
So far I have not managed to get the PID of the viewing application process so that I can kill it. With ps I can only find it when I have the application name, but that is exactly what I do not have, since I want to let the OS decide which viewing application to use.
Does anybody have an idea how I could
get the PID of the application that is launched from the open command, without knowing the application's name or UTI (remember, open is not the parent process of the viewing application)?
or
make the launched application a child of the open process, so I can kill it by killing the open process?
or
any other possible solution?
Your ideas are very much appreciated.
I found a solution by getting the pid from the lsof command, since I know the filename:
lsof -t filename
Having the PID, I can kill the process, that is, the viewing application:
kill $(lsof -t filename)
The full solution looks like this:
String killCommand = "kill $(lsof -t " + filename+ ")";
ProcessBuilder builder = new ProcessBuilder("bash", "-c", killCommand);
builder.start();
Not very pretty, but it does the job.
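Since the question at the top of this page is about Rust's std::process::Command, here is roughly the same shell-out expressed in Rust, purely as an illustrative sketch: it assumes bash and lsof are available, exactly like the Java version, and example.pdf is just a placeholder filename.

use std::process::Command;

fn kill_viewer(filename: &str) -> std::io::Result<()> {
    // Same idea as the Java snippet: let bash expand $(lsof -t <file>)
    // and pass the resulting PID(s) to kill.
    let kill_command = format!("kill $(lsof -t {})", filename);
    Command::new("bash").arg("-c").arg(kill_command).status()?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    kill_viewer("example.pdf") // placeholder filename
}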

Automate a Ruby command without it exiting

This hopefully should be an easy question to answer. I am attempting to have mumble-ruby run automatically. I have everything up and running, except that after running this simple script, it runs but then ends. In short:
Running this from terminal I get "Press enter to terminate script" and it works.
Running this via a cronjob runs the script but ends it and runs cli.disconnect (I assume).
I want the below script to run automatically via a cronjob at a specified time and not end until the server shuts down.
#!/usr/bin/env ruby
require 'mumble-ruby'
cli = Mumble::Client.new('IP Address', Port, 'MusicBot', 'Password')
cli.connect
sleep(1)
cli.join_channel(5)
stream = cli.stream_raw_audio('/tmp/mumble.fifo')
stream.volume = 2.7
print 'Press enter to terminate script';
gets
cli.disconnect
Assuming you are on a Unix/Linux system, you can run it in a screen session. (This is a Unix command, not a scripting function.)
If you don't know what screen is, it's basically a "detachable" terminal session. You can open a screen session, run this script, and then detach from that screen session. That detached session will stay alive even after you log off, leaving your script running. (You can re-attach to that screen session later if you want to shut it down manually.)
screen is pretty neat, and every developer on Unix/Linux should be aware of it.
How to do this without reading any docs:
open a terminal session on the server that will run the script
run screen - you will now be in a new shell prompt in a new screen session
run your script
type ctrl-a then d (without ctrl; the "d" is for "detach") to detach from the screen (but still leave it running)
Now you're back in your first shell. Your script is still alive in your screen session. You can disconnect and the screen session will keep on trucking.
Do you want to get back into that screen and shut the app down manually? Easy! Run screen -r (for "reattach"). To kill the screen session, just reattach and exit the shell.
You can have multiple screen sessions running concurrently, too. (If there is more than one screen running, you'll need to provide an argument to screen -r.)
Check out some screen docs! Search "gnu screen howto" for a tutorial and many more.
Lots of ways to skin this cat... :)
My thought was to take your script (call it foo) and remove the last 3 lines. In your /etc/rc.d/rc.local file (NOTE: this applies to Ubuntu and Fedora, not sure what you're running, but it has something similar) you'd add nohup /path_to_foo/foo > /dev/null 2>&1 & to the end of the file so that it runs in the background. You can also run that command right at a terminal if you just want to run it and have it running. You have to make sure that foo is made executable with chmod +x /path_to_foo/foo.
Use an infinite loop. Try:
loop do
  sleep(3600)
end
You can use exit to terminate when you need to. The loop only wakes up once an hour, so it doesn't eat up processing time. An infinite loop before your disconnect method will prevent it from being called until the server shuts down.

Can I send some text to the STDIN of an active process under Windows?

I searched the web for that question and landed on Server Fault:
Can I send some text to the STDIN of an active process running in a screen session?
Seems like it is ridiculously easy to achieve this under Linux. But I need it for a Win32 Command Prompt.
Background: I have an application that polls STDIN and if I press the x key, the application terminates. Now, I want to do some automated testing, test the application and then shut it down.
Note: Just killing the process is not an option since I'm currently investigating problems that arise during the shutdown of my application.
.NET framework's Process and ProcessStartInfo classes can be used to create and control a process. Since Windows PowerShell can be used to instantiate .NET objects, the ability to control almost every aspect of a process is available from within PowerShell.
Here's how you can send the dir command to a cmd.exe process (make sure to wrap this in a .ps1 file and then execute the script):
$psi = New-Object System.Diagnostics.ProcessStartInfo;
$psi.FileName = "cmd.exe"; # process file
$psi.UseShellExecute = $false; # start the process directly from its executable (required for redirection)
$psi.RedirectStandardInput = $true; # enable the process to read from standard input
$p = [System.Diagnostics.Process]::Start($psi);
Start-Sleep -s 2 # wait 2 seconds so that the process can be up and running
$p.StandardInput.WriteLine("dir"); # StandardInput property of the Process is a .NET StreamWriter object

How to close a Terminal window while executing a file with node.js without stopping the process?

When I execute a file with node.js (by typing "node example.js", for example an HTTP server) and then close the Terminal window (Mac OS X Lion), the process is stopped and doesn't answer requests anymore. The same happens if I type "ctrl c" or "ctrl z". How can I close the Terminal without stopping the process, so my server continues answering requests?
Use a combination of the nohup prefix command (to keep the process from being killed when the terminal closes) and the & suffix (to run the process in the background so it doesn't tie up the terminal):
nohup node example.js &
You should also look into forever or similar tools that will also automatically restart the server if it crashes, and nodemon which will automatically restart it when you change the code.
