Prevent subprocess from inheriting file descriptors - fork

Is there a safe and easy way of executing a subprocess while not allowing it access to the parent's file descriptors? In my particular case, this happens:
The parent process listens to socket 127.0.0.1:8000.
The parent process uses exec to run a subprocess. This subprocess forks and starts a daemon.
The parent process closes.
The daemon now keeps the file descriptor open (keeps listening to port 8000).
Perhaps there is some command that can close all file descriptors before executing the subprocess?
This problem occurs, for example, if you call the 'service someservice start' command from a web server script.
Perhaps there is some command that could run the service script in a "clean" context, something like:
run-detached service someservice start
Which would cause all file descriptors to be closed, environment variables to be unset and so on - so that the context the service runs within is as basic as possible.

You can't close file descriptors from outside the process (for obvious good reasons).
One thing you could do is create a small shim for the program that is to be run in the subprocess, and convince the parent to exec() that instead (e.g. by putting it in place of the original). The shim can then close the file descriptors before exec'ing the original program.
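A minimal sketch of such a shim in POSIX shell. The upper bound of 255 and the real program's path are assumptions, and the final exec is commented out so the sketch can run standalone:

```shell
#!/bin/sh
# Hypothetical shim installed in place of the original program: close every
# inherited file descriptor above stderr, then hand over to the real binary.
close_high_fds() {
  fd=3
  while [ "$fd" -le 255 ]; do
    eval "exec $fd>&-" 2>/dev/null   # silently skip descriptors that are not open
    fd=$((fd + 1))
  done
}

close_high_fds
# exec /usr/local/bin/real-program "$@"   # hypothetical path to the real program
```

On Linux you could instead iterate over /proc/self/fd to close only the descriptors that are actually open, rather than sweeping a fixed range.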

Related

How to close a file using Shell?

I have an open file named "C:/Users/Desktop/textfile.txt". How do I close it using a shell script? I have tried taskkill, shutdown, "exec 1>&-" and others. Since I am new to shell, I am unable to figure out how to make it work.
textfile.txt is a non-executable file. That means you can't 'run' it or 'kill' it. However, you can delete it, provided some other process is not holding a lock on it.
If you want to release the lock, you first have to find the process that is holding a handle to the file, kill that process, and then the OS will let you delete the file.
From the shell (command line), you can use the Handle tool (https://learn.microsoft.com/en-us/sysinternals/downloads/handle) to identify which process is holding the handle. Once you get the PID (process ID) from that tool, you can use taskkill to terminate that process and release the lock on the file.
c:\Program Files\SysinternalsSuite>handle.exe | findstr /i "c:\users\desktop\textfile.txt"
(Note that Handle prints paths with backslashes, so the findstr pattern must use backslashes too.)
All that said, you want to be careful when you terminate processes abruptly like this; the process won't get a chance to clean up after itself.

how to get a stdout on a batch timeout indicating that a forced exit occurred

Using the Windows Task Scheduler, I am calling a batch file:
C:\ETL\Scheduler\Schedule.bat >> C:\ETL\Logs\batchlog.log 2>&1
My problem is that Schedule.bat takes a really long time to run, and I have had cases where the batch file times out.
I realize that the Task Scheduler will show the timeout and close the connection to the batch file, but I'm wondering if there's a way to write a line like "forced timeout" into batchlog.log before the batch exits?
Thanks.
I'd suggest you control the timeout yourself with a second watchdog script that runs every minute, for example.
It can work like this:
1) At the start of the main script, a lock file is created (and deleted when the script finishes).
2) The watchdog script runs every minute and checks the lock file, comparing the lock time with the current time.
3) If the watchdog sees that the lock file has not been deleted after a predefined time, it terminates the main script and writes a corresponding message into the log file.
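The steps above can be sketched as follows. This is a POSIX-shell version of the idea (on Windows, the same logic would be written in batch or PowerShell), and all paths, file conventions, and the time limit are assumptions:

```shell
#!/bin/sh
# Hypothetical watchdog check, run once a minute (e.g. from the scheduler).
# Assumed convention: the main script writes its start time (epoch seconds)
# into the lock file and its PID into the pid file, and removes both on exit.
check_timeout() {
  lock=$1 pidfile=$2 log=$3 limit=$4
  [ -f "$lock" ] || return 0                  # main script is not running
  started=$(cat "$lock")
  now=$(date +%s)
  if [ $((now - started)) -gt "$limit" ]; then
    echo "forced timeout" >> "$log"           # the marker the question asks for
    [ -f "$pidfile" ] && kill "$(cat "$pidfile")" 2>/dev/null
    rm -f "$lock" "$pidfile"
  fi
}

# usage (once a minute): check_timeout /tmp/schedule.lock /tmp/schedule.pid /tmp/batchlog.log 7200
```

The message lands in the same log file the scheduled task already appends to, so the "forced timeout" marker shows up in batchlog.log right where the run was cut off.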

Command line options for retrieving all of the files being accessed by a particular program

I'm trying to access the files being used by a given program in Windows, via a command line prompt in a .bat script. I've found the program Process Monitor, but can't find a CLI way to do this. How might I do this?
What you're looking for is Handle (also from Sysinternals), not Process Monitor.
The Handle application must be run elevated (as administrator).
handle -p myproc
will return all handles for processes whose names begin with 'myproc'.
handle -p 1234
will return all handles for the process with PID 1234 (-p accepts either a partial process name or a PID).
You could also use Process Explorer if you want a GUI.

How to terminate only the inner batch job

Let's say there is a .bat file that is required to run inside a .cmd batch script. This inner .bat file involves a series of user interactions on a local host and can only be ended using Ctrl+C.
The question is: is there some way to make the outer batch script resume after the inner script is terminated? Or is Ctrl+C the end-all, be-all?
I've tried giving the inner script a different way out, only to be told I'm not allowed to change that file. I've also done a fair amount of research and haven't found a solution. Forgive me if I've overlooked something! I'd like to avoid having two windows or extraneous termination messages pop up.
The only way I can think of to handle this is to use the following line in outer.cmd to call inner.bat, with the disadvantage of receiving a new command prompt window for the execution of inner.bat:
start "" /WAIT cmd /C "inner.bat"
(Exchanging start and cmd does not work as the new window might unintentionally remain open.)
Note that for inner.bat, all console input and output are handled via the new window, hence any redirections applied to outer.cmd (e.g., outer.cmd > "return.txt") will not include data from inner.bat.

Automatic encryption/decryption: detect file is closed in Mate/Gnome application

I'm writing a bash script to automatically decrypt a file for editing and encrypt it again after the file is closed. The file type could be anything: plain text, an office document, etc. I am on Linux Mint with MATE.
I'm stuck: I can't reliably detect whether the file has been closed in the application, so that the script can proceed to re-encrypt it and remove the decrypted version.
The first version of the script simply used vim with text files. The script called vim directly and did not proceed until vim was closed. Now that I want to do the same with other file types, I have tried the following:
xdg-open: exits immediately after calling the application associated with the file type, so the script continues and does no good.
a modified copy of xdg-open's function for calling an associated app: runs the application inside the current script, so I can see the program exit. This works only if the application isn't already running; if it is, the new process finishes immediately and the script continues.
So what I am trying to do now is to somehow watch for the file being closed in an already running application. I am currently experimenting with pluma/gedit and inotifywait. That doesn't work either: immediately after the file is opened, it detects a CLOSE_NOWRITE,CLOSE event.
Is it at all possible to detect this without specific hooks for different applications? Possibly some X hooks?
Thank you.
You could use lsof to determine if a file is opened by a process:
myFile="/home/myUser/myFile"
/usr/sbin/lsof "$myFile" | grep "$myFile"
You can poll in a 1-second loop and wait until the lsof output is empty. I have used this to prevent a script from opening a newly discovered file that is still being written or downloaded.
Not all processes hold a file open while they are using it. For example, vim holds open a temporary file (/home/myUser/.myFile.swp) and may only open the real file when loading or saving.
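A minimal sketch of that polling loop (the file path is just an example):

```shell
#!/bin/sh
# Wait until no process holds the file open, then continue (e.g. re-encrypt it).
myFile="/home/myUser/myFile"

while lsof "$myFile" >/dev/null 2>&1; do
  sleep 1        # still open somewhere; check again in a second
done

# at this point no process has the file open
```

This relies on lsof's exit status: it exits nonzero when it finds no open instances of the named file, which is what ends the loop.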
You might do something like this.
decrypt "TheFile"&
pluma "TheFile"
encrypt "TheFile"
The & at the end of the line runs the decrypt command in the background and immediately falls through to pluma; the script then pauses until pluma closes, and encrypts the file afterwards. Note that because decrypt runs in the background, pluma could open the file before decryption has finished.
I could offer more help if you post your script.
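If you want decryption to finish before the editor opens, the same pattern can be written with a foreground decrypt. Here decrypt and encrypt are hypothetical placeholders for whatever commands you actually use, and pluma only blocks until the window closes if no other pluma instance is already running (the limitation noted in the question):

```shell
#!/bin/sh
# Sketch of the decrypt -> edit -> re-encrypt flow. "decrypt" and "encrypt"
# are hypothetical placeholders for the real tools (e.g. gpg invocations).
edit_encrypted() {
  decrypt "$1" || return 1   # wait for decryption to complete first
  pluma "$1"                 # blocks until the editor window is closed
  encrypt "$1"
  rm -f "$1"                 # remove the plaintext copy
}

# usage: edit_encrypted "TheFile"
```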
