share perl curses UI object variable across multiple child processes - curses

I am writing a tool which spawns multiple child processes (in fact, 3 levels of child processes) to speed up the overall logic.
To display the output in the terminal I have chosen Curses::UI. The curses UI objects/widgets are created at each level of the parent/child hierarchy and manipulated in the last level of child processes. These multiple levels of child processes seem to be causing issues with the curses display.
I thought it would be more stable if I shared just one Curses::UI object across all parent and child processes.
To achieve this sharing I am trying to use the Storable/Shareable modules, but I cannot get it to run due to errors like these:
code sub {
    exit;
} caused an error: 'exit' trapped by operation mask at (eval 99) line 2, at my_curser.pl line 147
Is it possible to share a Curses::UI object across multiple processes?

curses relies on C-level state and terminal or terminal-emulator state which is not reliably shareable between processes even from C, and which is not visible to Perl wrappers such as Curses::UI. (A terminal has a single "current position"/cursor; consider what happens if different subprocesses try to update widgets in different parts of the display at the same time.) As such, there is no way you can share these widgets between subprocesses.
In general, a better design is to dedicate a thread or process to the UI and distribute other aspects of processing to other threads/processes.
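As a hedged illustration of that design (sketched in C only for brevity; the same process layout applies in Perl), one parent process owns the terminal and is the only process that draws, while worker processes send plain-text status messages over a pipe. In a Curses::UI program the parent would read the pipe from its event loop and update widgets itself; the worker messages below are hypothetical.

/* Sketch: one UI-owning parent, several workers reporting over a pipe.
 * Workers never touch the terminal; only the parent updates the screen. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define NWORKERS 3

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    for (int i = 0; i < NWORKERS; i++) {
        if (fork() == 0) {                    /* worker process */
            close(fds[0]);
            char msg[64];
            snprintf(msg, sizeof msg, "worker %d: task finished\n", i);
            write(fds[1], msg, strlen(msg));  /* report, do not draw */
            _exit(0);
        }
    }
    close(fds[1]);                            /* parent keeps only the read end */

    /* The parent is the single owner of the display; a real program would
       update a Curses::UI widget here instead of writing to stdout. */
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    while (wait(NULL) > 0) { }
    return 0;
}

Because each message is shorter than PIPE_BUF, writes from different workers are atomic and do not interleave, and curses state never leaves the parent.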

Related

Limitations on file append when using in multi-processed environment

My process creates a log file and appends new lines to the end of the file by opening it in append mode, e.g.:
fopen("log.txt", "a");
The order of the writes is not critical, but I need to ensure that fopen always succeeds. My question is: can the call above be executed from multiple processes at the same time on Windows, Linux and macOS without any race condition?
If not, what is the most common and easy way to ensure I can write to the log file? There is file locking, but a separate lock file (e.g. log.txt.lock) would also be possible. Could anyone share some insights or resources that go into more detail?
If you do not use any synchronization between the processes, you will very likely hit moments when several processes try to write to the file at once, and the best you can get is a mess of interleaved strings.
To synchronize work across several processes (with the multiprocessing module), use a Lock. It prevents several processes from doing the protected work at the same time.
It will look something like this:
import multiprocessing

def do_some_work(lock, i):
    with lock:                      # only one child enters this block at a time
        print(f"worker {i} writing")

if __name__ == "__main__":
    # create the lock in the main process and pass it to the child processes
    lock = multiprocessing.Lock()
    workers = [multiprocessing.Process(target=do_some_work, args=(lock, i)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
If you need a more detailed example, feel free to ask.
You can also check the example in the official docs.
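For the file-locking approach the question mentions (useful when the writers are unrelated processes and therefore cannot share a multiprocessing.Lock), here is a hedged sketch in C: each writer opens the log in append mode and takes an advisory flock() around every write. flock() exists on Linux and macOS; on Windows you would use LockFileEx instead, and the file name is just the one from the question.

/* Append one line to a shared log under an advisory lock. */
#include <stdio.h>
#include <sys/file.h>

static int append_line(const char *path, const char *line) {
    FILE *fp = fopen(path, "a");
    if (!fp) return -1;

    int fd = fileno(fp);
    if (flock(fd, LOCK_EX) == -1) {   /* block until we own the lock */
        fclose(fp);
        return -1;
    }

    fputs(line, fp);
    fflush(fp);                       /* push the data out before unlocking */

    flock(fd, LOCK_UN);
    fclose(fp);
    return 0;
}

int main(void) {
    return append_line("log.txt", "one log line\n") == 0 ? 0 : 1;
}

Single write() calls with O_APPEND are appended atomically on local POSIX filesystems, but stdio buffering can split a line across several writes; the lock makes the behaviour explicit and portable across the three platforms the question lists (given a Windows-specific locking call).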

How does XLockDisplay() work across multiple processes?

I'm dealing with multiple processes that read each other's drawables and thus need synchronization. XLockDisplay is supposed to "lock out all other threads" from using the display, but does that apply across multiple processes?
Also, do all processes need to call XInitThreads or just the one(s) calling XLockDisplay?
The XLockDisplay() function (and the LockDisplay macro) is meant to be used within a single X client application, i.e. a single process. It makes no sense between X clients (that is, between two processes). It is a way to protect against multiple threads (inside the same process) attempting to access the same X connection at once (see, e.g., GLX 1.4, ch. 2.7).
To read the whole content (buffer) of another window, you could take a look at any application that takes a screenshot of your desktop or of a single window (see the 'scrot' source code, for example).
If you want to exchange data between X clients, use window properties/atoms (see the Xlib ICCCM).
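As a hedged sketch of the screenshot approach (roughly what tools like scrot do, not their actual code), the C program below grabs the contents of the root window with XGetImage; grabbing another client's window works the same way once you have its Window ID, e.g. from XQueryTree.

/* Grab a window's pixels; no XLockDisplay needed across processes. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);        /* this process's own connection */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    Window win = DefaultRootWindow(dpy);      /* any readable Window ID works */
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, win, &attr);

    XImage *img = XGetImage(dpy, win, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (img) {
        printf("captured %dx%d image, %d bits per pixel\n",
               img->width, img->height, img->bits_per_pixel);
        XDestroyImage(img);
    }

    XCloseDisplay(dpy);
    return 0;
}

Build with something like cc grab.c -lX11. Each process talks to the X server over its own connection, which is why cross-process locking of the Display is neither needed nor possible.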

How to execute multiple proc in tcl scripting

I have 4 procs in my Tcl script. Each proc contains a while loop that waits for a task to finish and then processes the result files. My goal now is to run these 4 procs in parallel instead of one by one. Does anyone have any ideas?
Background:
The normal way before was to open 4 terminals in KDE/GNOME and execute the different tasks there, so the 4 tasks actually ran together.
Tcl threads can do the job just fine: http://www.tcl.tk/man/tcl8.6/ThreadCmd/thread.htm
Of course you may just leave everything as it is and run your scripts in the background within one terminal, if that's what you are looking for, e.g.
script1.tcl &
script2.tcl &
Threading is the better option for this scenario, and it gives you better control over your subprocesses. You can refer to the following link for a simple example: https://www.activestate.com/blog/2016/09/threads-done-right-tcl

How can a terminal emulator know what processes are attached?

The OS X Terminal.app has an option to show the "active process name". It displays the name of (one of) the foreground processes within the terminal, with a reasonable degree of accuracy. For example, when running make, it shows the names of the various subprocesses involved in the build process (cc, collect, ld, etc.). How exactly does this work?
My leading hypothesis so far is that it tracks the foreground process group in the attached session, and picks the most-recently-started process within that process group. But, I'm not clear on what system calls or services it uses to implement this.
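As a hedged sketch of the first half of that hypothesis: on POSIX systems the foreground process group of a terminal can be read with tcgetpgrp(). The program below asks for it on its own controlling terminal; a terminal emulator holding the pty master can make an equivalent query on its side (e.g. the TIOCGPGRP ioctl), and mapping the process-group ID to a process name (for instance via libproc/sysctl on macOS) is platform-specific and not shown here.

/* Print the foreground process group of the terminal on stdin. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = STDIN_FILENO;
    if (!isatty(fd)) {
        fprintf(stderr, "stdin is not a terminal\n");
        return 1;
    }
    pid_t fg = tcgetpgrp(fd);       /* foreground process group of that tty */
    if (fg == (pid_t)-1) {
        perror("tcgetpgrp");
        return 1;
    }
    printf("foreground process group: %ld\n", (long)fg);
    return 0;
}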

Are Process::detach and Process::wait mutually exclusive (Ruby)?

I'm refactoring a bit of concurrent processing in my Ruby on Rails server (running on Linux) to use Spawn. Spawn::fork_it documentation claims that forked processes can still be waited on after being detached: https://github.com/tra/spawn/blob/master/lib/spawn.rb (line 186):
# detach from child process (parent may still wait for detached process if they wish)
Process.detach(child)
However, the Ruby Process::detach documentation says you should not do this: http://www.ruby-doc.org/core/classes/Process.html
Some operating systems retain the status of terminated child processes until the parent collects that status (normally using some variant of wait()). If the parent never collects this status, the child stays around as a zombie process. Process::detach prevents this by setting up a separate Ruby thread whose sole job is to reap the status of the process pid when it terminates. Use detach only when you do not intend to explicitly wait for the child to terminate.
Yet Spawn::wait effectively allows you to do just that by wrapping Process::wait. On a side note, I specifically want to use the Process::waitpid2 method to wait on the child processes, instead of using the Spawn::wait method.
Will detach-and-wait not work correctly on Linux? I'm concerned that this may cause a race condition between the detached reaper thread and the waiting parent process, as to who collects the child status first.
The answer to this question is there in the documentation. Are you writing code for your own use in a controlled environment? Or to be used widely by third parties? Ruby is written to be widely used by third parties, so their recommendation is to not do something that could fail on "some operating systems". Perhaps the Spawn library is designed primarily for use on Linux machines and tested only on a small subset thereof where this tactic works.
If you're distributing the code you're writing to be used by anyone and everyone, I would take Ruby's approach.
If you control the environment where this code will be run, I would write two tests:
A test that spawns a process, detaches it and then waits for it.
A test that spawns a process and then just waits for it.
Count the failure rate for both and if they are equal (within a margin that you feel is acceptable), go for it!
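For what it's worth, the underlying OS-level race the question describes can be seen outside Ruby too. The following hedged C illustration (not Spawn's actual code) has a reaper thread and the main thread both call waitpid() on the same child; whichever loses the race gets ECHILD because the other has already collected the status. Process.detach sets up essentially such a reaper thread.

/* Two waiters racing for one child's exit status. Build with -pthread. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t child;

static void *reaper(void *arg) {
    (void)arg;
    int status;
    pid_t r = waitpid(child, &status, 0);
    printf("reaper thread: waitpid -> %ld (%s)\n",
           (long)r, r == -1 ? strerror(errno) : "collected status");
    return NULL;
}

int main(void) {
    child = fork();
    if (child == 0) { sleep(1); _exit(0); }   /* child just exits */

    pthread_t t;
    pthread_create(&t, NULL, reaper, NULL);

    int status;
    pid_t r = waitpid(child, &status, 0);     /* races with the reaper thread */
    printf("main thread:   waitpid -> %ld (%s)\n",
           (long)r, r == -1 ? strerror(errno) : "collected status");

    pthread_join(t, NULL);
    return 0;
}

Run it a few times: which caller collects the status is not deterministic, which is exactly the uncertainty the question raises about mixing Process.detach with an explicit waitpid2.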
