pgrep -P, but for grandchildren not just children - bash

I am using:
pgrep -P $$
to get the child pids of $$. But I actually want a list of grandchildren and great-grandchildren too.
How do I do this? In a regular programming language we would do it with recursion, for example, but what about bash? Perhaps with a bash function?

I've already posted an attempted solution. It's short and effective, and seems in line with the OP's question, so I'll leave it as it is. However, it has some performance and portability problems that mean it's not a good general solution. This code attempts to fix the problems:
top_pid=$1
# Make a list of all process pids and their parent pids
ps_output=$(ps -e -o pid= -o ppid=)
# Populate a sparse array mapping pids to (string) lists of child pids
children_of=()
while read -r pid ppid ; do
[[ -n $pid && pid -ne ppid ]] && children_of[ppid]+=" $pid"
done <<< "$ps_output"
# Add children to the list of pids until all descendants are found
pids=( "$top_pid" )
unproc_idx=0 # Index of first process whose children have not been added
while (( ${#pids[@]} > unproc_idx )) ; do
pid=${pids[unproc_idx++]} # Get first unprocessed, and advance
pids+=( ${children_of[pid]-} ) # Add child pids (ignore ShellCheck)
done
# Do something with the list of pids (here, just print them)
printf '%s\n' "${pids[#]}"
The basic approach of using a breadth-first search to build up the tree has been retained, but the essential information about processes is obtained with a single (POSIX-compliant) run of ps. pgrep is no longer used because it is not in POSIX and it could be run many times. Also, a very inefficient way of removing items from the queue (copy all but one element of it) has been replaced with manipulation of an index variable.
Average (real) run time is 0.050s when run on pid 0 on my oldish Linux system with around 400 processes.
I've only tested it on Linux, but it only uses Bash 3 features and POSIX-compliant features of ps so it should work on other systems too.
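For reference, a hypothetical invocation, assuming the code above is saved as descendants.sh (the filename is mine, not the author's):
# print the given pid ($$ here) followed by all of its descendants
bash descendants.sh $$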

Using nothing but bash builtins (not even ps or pgrep!):
#!/usr/bin/env bash
collect_children() {
# format of /proc/[pid]/stat file; group 1 is PID, group 2 is its parent
stat_re='^([[:digit:]]+) [(].*[)] [[:alpha:]] ([[:digit:]]+) '
# read process tree into a bash array
declare -g children=( ) # map each PID to a string listing its children
for f in /proc/[[:digit:]]*/stat; do # forcing initial digit skips /proc/net/stat
read -r line <"$f" && [[ $line =~ $stat_re ]] || continue
children[${BASH_REMATCH[2]}]+="${BASH_REMATCH[1]} "
done
}
# run a fresh collection, then walk the tree
all_children_of() { collect_children; _all_children_of "$@"; }
_all_children_of() {
local -a immediate_children
local child
read -r -a immediate_children <<<"${children[$1]}"
for child in "${immediate_children[@]}"; do
echo "$child"
_all_children_of "$child"
done
}
all_children_of "$#"
On my local system, time all_children_of 1 >/dev/null (invoking the function in an already-running shell) clocks in the neighborhood of 0.018s -- typically, 0.013s for the collect_children stage (the one-time action of reading the process tree), and 0.005s for the recursive walk of that tree triggered by the initial call of _all_children_of.
Prior timings were testing only the time needed for the walk, discarding the time needed for the scan.

The code below will print the PIDs of the current process and all its descendants. It uses a Bash array as a queue to implement a breadth-first search of the process tree.
unprocessed_pids=( $$ )
while (( ${#unprocessed_pids[@]} > 0 )) ; do
pid=${unprocessed_pids[0]} # Get first elem.
echo "$pid"
unprocessed_pids=( "${unprocessed_pids[@]:1}" ) # Remove first elem.
unprocessed_pids+=( $(pgrep -P $pid) ) # Add child pids
done

Probably a simple loop would do it:
# set a value for pid here
printf 'Children of %s:\n' $pid
for child in $(pgrep -P $pid); do
printf 'Children of %s:\n' $child
pgrep -P $child
done
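Since the question mentions recursion, a bash function extends this to arbitrary depth; a minimal sketch, assuming pgrep is available:
# print every descendant of the given pid, depth-first
descendants() {
    local child
    for child in $(pgrep -P "$1"); do
        echo "$child"
        descendants "$child"
    done
}
descendants $$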

If pgrep doesn't do what you want, you can always use ps directly. Options will be somewhat platform-dependent.
ps -o ppid,pid |
awk -v pid=$$ 'BEGIN { parent[pid] = 1 } # collect interesting parents
{ child[$2] = $1 } # collect parents of all processes
$1 == pid { parent[$2] = 1 }
END { for (p in child)
if (parent[child[p]])
print p }'
The variable names are not orthogonal -- parent collects the processes which are pid or one of its children as keys, i.e. the "interesting" parents, and child contains the parent of each process, with the process as the key and the parent as the value.

I ended up doing this with node.js and bash:
const async = require('async');
const cp = require('child_process');
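// note: EVCb and log below are types/helpers from the poster's own codebase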
export const getChildPids = (pid: number, cb: EVCb<Array<string>>) => {
const pidList: Array<string> = [];
const getMoreData = (pid: string, cb: EVCb<null>) => {
const k = cp.spawn('bash');
const cmd = `pgrep -P ${pid}`;
k.stderr.pipe(process.stderr);
k.stdin.end(cmd);
let stdout = '';
k.stdout.on('data', d => {
stdout += String(d || ''); // don't trim each chunk, or pids from adjacent chunks run together
});
k.once('exit', code => {
if (code > 0) {
log.warning('The following command exited with non-zero code:', code, cmd);
}
const list = String(stdout).split(/\s+/).map(v => String(v || '').trim()).filter(Boolean);
if (list.length < 1) {
return cb(null);
}
for (let v of list) {
pidList.push(v);
}
async.eachLimit(list, 3, getMoreData, cb);
});
};
getMoreData(String(pid), err => {
cb(err, pidList);
});
};

Related

Parallel subshells doing work and reporting status

I am trying to do work in all subfolders in parallel and describe a status per folder once it is done in bash.
suppose I have a work function which can return a couple of statuses
# param $1 is the folder
# can return 1 on fail, 2 on success, 3 on nothing happened
work(){
cd $1
# some update thing
return 1, 2, 3
}
now I call this in my wrapper function
do_work(){
while read -r folder; do
tput cup "${row}" 20
echo -n "${folder}"
(
ret=$(work "${folder}")
tput cup "${row}" 0
[[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed \uf00d\e[0m"
[[ $ret -eq 2 ]] && echo " \e[0;32mupdated \uf00c\e[0m"
[[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
) &>/dev/null
pids+=("${!}")
((++row))
done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
echo "waiting for pids ${pids[*]}"
wait "${pids[#]}"
}
and what I want is, that it prints out all the folders per line, and updates them independently from each other in parallel and when they are done, I want that status to be written in that line.
However, I am unsure which subshell is writing, which ones I need to capture, and so on.
My attempt above is currently not writing correctly, and not in parallel.
If I get it to work in parallel, I get those [1] <PID> and [1] + 3156389 done ... messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the pids I don't get the response code to print out the text to show the status.
I did have a look at GNU Parallel, but I think I cannot get that behaviour. (I think I could hack it so that the finished jobs are printed, but I want all 'running' jobs printed, and the finished ones amended.)
Assumptions/understandings:
a separate child process is spawned for each folder to be processed
the child process generates messages as work progresses
messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line
The general idea is to set up a means of interprocess communication (IPC) ... named pipe, normal file, queuing/messaging system, sockets (plenty of ideas are available via a web search on bash interprocess communication); the children write to this system while the parent reads from it and issues the appropriate tput commands.
One very simple example using a normal file:
> status.msgs # initialize our IC file
child_func () {
# Usage: child_func <unique_id> <other> ... <args>
local i
for ((i=1;i<=10;i++))
do
sleep $1
# each message should include the child's <unique_id> ($1 in this case);
# parent/monitoring process uses this <unique_id> to control tput output
echo "$1:message - $1.$i" >> status.msgs
done
}
clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )
while IFS=: read -r child msg
do
tput cup $child 10
echo "$msg"
done < <(tail -f status.msgs)
NOTES:
the (child_func 3 &) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways but I'm drawing a blank at the moment)
when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other
OP can get creative with the format of the messages printed to status.msgs in conjunction with parsing logic in the parent's while loop
assuming variable width messages OP may want to look at appending a tput el on the end of each printed message in order to 'erase' any characters leftover from a previous/longer message
exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done (a sketch of this follows the sample output below), or keeping track of the number of children still running in the background, or ...
Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):
# no output to line #1
message - 2.10 # messages change from 2.1 to 2.2 to ... to 2.10
message - 3.10 # messages change from 3.1 to 3.2 to ... to 3.10
# no output to line #4
message - 5.10 # messages change from 5.1 to 5.2 to ... to 5.10
NOTE: comments not actually displayed in console
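Picking up the last note above, a minimal sketch of one way to leave the monitoring loop, assuming each child appends <id>:done as its final message (the message format and the hard-coded child count are my assumptions, not part of the original):
n_children=3   # number of children started above
done_count=0
while IFS=: read -r child msg
do
    tput cup $child 10
    echo "$msg"
    [[ $msg == done ]] && (( ++done_count == n_children )) && break
done < <(tail -f status.msgs)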
Based on @markp-fuso's answer:
printer() {
while IFS=$'\t' read -r child msg
do
tput cup $child 10
echo "$child $msg"
done
}
clear
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
You can't control exit statuses like that. Try this instead: rework your work function to echo its status:
work(){
cd $1
# some update thing &> /dev/null without output
echo "${1}_$status" #status=1, 2, 3
}
And than set data collection from all folders like so:
data=$(
while read -r folder; do
work "$folder" &
done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
wait
)
echo "$data"

Running several bash commands and killing them after some time [duplicate]

I'd like to automatically kill a command after a certain amount of time. I have in mind an interface like this:
% constrain 300 ./foo args
Which would run "./foo" with "args" but automatically kill it if it's still running after 5 minutes.
It might be useful to generalize the idea to other constraints, such as autokilling a process if it uses too much memory.
Are there any existing tools that do that, or has anyone written such a thing?
ADDED: Jonathan's solution is precisely what I had in mind and it works like a charm on Linux, but I can't get it to work on Mac OS X. I got rid of the SIGRTMIN, which lets it compile fine, but the signal just doesn't get sent to the child process. Anyone know how to make this work on Mac?
[Added: Note that an update is available from Jonathan that works on Mac and elsewhere.]
GNU Coreutils includes the timeout command, installed by default on many systems.
https://www.gnu.org/software/coreutils/manual/html_node/timeout-invocation.html
To watch free -m for one minute, then kill it by sending a TERM signal:
timeout 1m watch free -m
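timeout can also escalate if the command ignores the first signal; for example (option per GNU coreutils):
# send TERM after 30 seconds; if ./foo is still alive 5 seconds later, send KILL
timeout --kill-after=5s 30s ./foo args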
Maybe I'm not understanding the question, but this sounds doable directly, at least in bash:
( /path/to/slow command with options ) & sleep 5 ; kill $!
This runs the first command, inside the parentheses, for five seconds, and then kills it. The entire operation runs synchronously, i.e. you won't be able to use your shell while it is busy waiting for the slow command. If that is not what you wanted, it should be possible to add another & (see the sketch below).
The $! variable is a Bash builtin that contains the process ID of the most recently started background job. It is important not to put the & inside the parentheses; doing it that way loses the process ID.
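The fully asynchronous variant hinted at above would look something like this (a sketch; the command is a placeholder as before):
# run the watchdog itself in the background so the shell stays usable
( ( /path/to/slow command with options ) & sleep 5 ; kill $! ) &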
I've arrived rather late to this party, but I don't see my favorite trick listed in the answers.
Under *NIX, an alarm(2) is inherited across an execve(2) and SIGALRM is fatal by default. So, you can often simply:
$ doalarm () { perl -e 'alarm shift; exec @ARGV' "$@"; } # define a helper function
$ doalarm 300 ./foo.sh args
or install a trivial C wrapper to do that for you.
Advantages Only one PID is involved, and the mechanism is simple. You won't kill the wrong process if, for example, ./foo.sh exited "too quickly" and its PID was re-used. You don't need several shell subprocesses working in concert, which can be done correctly but is rather race-prone.
Disadvantages The time-constrained process cannot manipulate its alarm clock (e.g., alarm(2), ualarm(2), setitimer(2)), since this would likely clear the inherited alarm. Obviously, neither can it block or ignore SIGALRM, though the same can be said of SIGINT, SIGTERM, etc. for some other approaches.
Some (very old, I think) systems implement sleep(2) in terms of alarm(2), and, even today, some programmers use alarm(2) as a crude internal timeout mechanism for I/O and other operations. In my experience, however, this technique is applicable to the vast majority of processes you want to time limit.
There is also ulimit, which can be used to limit the execution time available to sub-processes.
ulimit -t 10
Limits the process to 10 seconds of CPU time.
To actually use it to limit a new process, rather than the current process, you may wish to use a wrapper script:
#! /usr/bin/env python
import os
os.system("ulimit -t 10; other-command-here")
other-command can be any tool. I was running Java, Python, C and Scheme versions of different sorting algorithms, and logging how long they took, whilst limiting execution time to 30 seconds. A Cocoa-Python application generated the various command lines - including the arguments - and collated the times into a CSV file, but it was really just fluff on top of the command provided above.
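The same effect is available without the Python wrapper; a minimal sketch in plain sh (other-command-here remains a placeholder):
# apply the CPU limit in a subshell so the calling shell keeps its own limits
( ulimit -t 10; exec other-command-here )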
I have a program called timeout that does that - written in C, originally in 1989 but updated periodically since then.
Update: this code fails to compile on MacOS X because SIGRTMIN is not defined, and fails to timeout when run on MacOS X because the `signal()` function there resumes the `wait()` after the alarm times out - which is not the required behaviour. I have a new version of `timeout.c` which deals with both these problems (using `sigaction()` instead of `signal()`). As before, contact me for a 10K gzipped tar file with the source code and a manual page (see my profile).
/*
@(#)File: $RCSfile: timeout.c,v $
@(#)Version: $Revision: 4.6 $
@(#)Last changed: $Date: 2007/03/01 22:23:02 $
@(#)Purpose: Run command with timeout monitor
@(#)Author: J Leffler
@(#)Copyright: (C) JLSS 1989,1997,2003,2005-07
*/
#define _POSIX_SOURCE /* Enable kill() in <unistd.h> on Solaris 7 */
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include "stderr.h"
#define CHILD 0
#define FORKFAIL -1
static const char usestr[] = "[-vV] -t time [-s signal] cmd [arg ...]";
#ifndef lint
/* Prevent over-aggressive optimizers from eliminating ID string */
const char jlss_id_timeout_c[] = "@(#)$Id: timeout.c,v 4.6 2007/03/01 22:23:02 jleffler Exp $";
#endif /* lint */
static void catcher(int signum)
{
return;
}
int main(int argc, char **argv)
{
pid_t pid;
int tm_out;
int kill_signal;
pid_t corpse;
int status;
int opt;
int vflag = 0;
err_setarg0(argv[0]);
opterr = 0;
tm_out = 0;
kill_signal = SIGTERM;
while ((opt = getopt(argc, argv, "vVt:s:")) != -1)
{
switch(opt)
{
case 'V':
err_version("TIMEOUT", &"#(#)$Revision: 4.6 $ ($Date: 2007/03/01 22:23:02 $)"[4]);
break;
case 's':
kill_signal = atoi(optarg);
if (kill_signal <= 0 || kill_signal >= SIGRTMIN)
err_error("signal number must be between 1 and %d\n", SIGRTMIN - 1);
break;
case 't':
tm_out = atoi(optarg);
if (tm_out <= 0)
err_error("time must be greater than zero (%s)\n", optarg);
break;
case 'v':
vflag = 1;
break;
default:
err_usage(usestr);
break;
}
}
if (optind >= argc || tm_out == 0)
err_usage(usestr);
if ((pid = fork()) == FORKFAIL)
err_syserr("failed to fork\n");
else if (pid == CHILD)
{
execvp(argv[optind], &argv[optind]);
err_syserr("failed to exec command %s\n", argv[optind]);
}
/* Must be parent -- wait for child to die */
if (vflag)
err_remark("time %d, signal %d, child PID %u\n", tm_out, kill_signal, (unsigned)pid);
signal(SIGALRM, catcher);
alarm((unsigned int)tm_out);
while ((corpse = wait(&status)) != pid && errno != ECHILD)
{
if (errno == EINTR)
{
/* Timed out -- kill child */
if (vflag)
err_remark("timed out - send signal %d to process %d\n", (int)kill_signal, (int)pid);
if (kill(pid, kill_signal) != 0)
err_syserr("sending signal %d to PID %d - ", kill_signal, pid);
corpse = wait(&status);
break;
}
}
alarm(0);
if (vflag)
{
if (corpse == (pid_t) -1)
err_syserr("no valid PID from waiting - ");
else
err_remark("child PID %u status 0x%04X\n", (unsigned)corpse, (unsigned)status);
}
if (corpse != pid)
status = 2; /* I don't know what happened! */
else if (WIFEXITED(status))
status = WEXITSTATUS(status);
else if (WIFSIGNALED(status))
status = WTERMSIG(status);
else
status = 2; /* I don't know what happened! */
return(status);
}
If you want the 'official' code for 'stderr.h' and 'stderr.c', contact me (see my profile).
Perl one liner, just for kicks:
perl -e '$s = shift; $SIG{ALRM} = sub { print STDERR "Timeout!\n"; kill INT => $p }; exec(@ARGV) unless $p = fork; alarm $s; waitpid $p, 0' 10 yes foo
This prints 'foo' for ten seconds, then times out. Replace '10' with any number of seconds, and 'yes foo' with any command.
The timeout command from Ubuntu/Debian, compiled from source, also works on the Mac (Darwin 10.4.*):
http://packages.ubuntu.com/lucid/timeout
My variation on the perl one-liner gives you the exit status without mucking with fork() and wait() and without the risk of killing the wrong process:
#!/bin/sh
# Usage: timelimit.sh secs cmd [ arg ... ]
exec perl -MPOSIX -e '$SIG{ALRM} = sub { print "timeout: @ARGV\n"; kill(SIGTERM, -$$); }; alarm shift; $exit = system @ARGV; exit(WIFEXITED($exit) ? WEXITSTATUS($exit) : WTERMSIG($exit));' "$@"
Basically the fork() and wait() are hidden inside system(). The SIGALRM is delivered to the parent process which then kills itself and its child by sending SIGTERM to the whole process group (-$$). In the unlikely event that the child exits and the child's pid gets reused before the kill() occurs, this will NOT kill the wrong process because the new process with the old child's pid will not be in the same process group of the parent perl process.
As an added benefit, the script also exits with what is probably the correct exit status.
#!/bin/sh
( some_slow_task ) & pid=$!
( sleep $TIMEOUT && kill -HUP $pid ) 2>/dev/null & watcher=$!
wait $pid 2>/dev/null && pkill -HUP -P $watcher
The watcher kills the slow task after given timeout; the script waits for the slow task and terminates the watcher.
Examples:
The slow task runs for more than 2 sec and is terminated; the output is:
Slow task interrupted
( sleep 20 ) & pid=$!
( sleep 2 && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
echo "Slow task finished"
pkill -HUP -P $watcher
wait $watcher
else
echo "Slow task interrupted"
fi
This slow task finishes before the given timeout; the output is:
Slow task finished
( sleep 2 ) & pid=$!
( sleep 20 && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
echo "Slow task finished"
pkill -HUP -P $watcher
wait $watcher
else
echo "Slow task interrupted"
fi
Try something like:
# This function is called with a timeout (in seconds) and a pid.
# After the timeout expires, if the process still exists, it attempts
# to kill it.
function timeout() {
sleep $1
# kill -0 tests whether the process exists
if kill -0 $2 > /dev/null 2>&1 ; then
echo "killing process $2"
kill $2 > /dev/null 2>&1
else
echo "process $2 already completed"
fi
}
<your command> &
cpid=$!
timeout 3 $cpid
wait $cpid > /dev/null 2>&
exit $?
It has the downside that if your process' pid is reused within the timeout, it may kill the wrong process. This is highly unlikely unless you are starting 20000+ processes per second. It could be fixed.
How about using the expect tool?
## run a command, aborting if timeout exceeded, e.g. timed-run 20 CMD ARGS ...
timed-run() {
# timeout in seconds
local tmout="$1"
shift
env CMD_TIMEOUT="$tmout" expect -f - "$@" <<"EOF"
# expect script follows
eval spawn -noecho $argv
set timeout $env(CMD_TIMEOUT)
expect {
timeout {
send_error "error: operation timed out\n"
exit 1
}
eof
}
EOF
}
pure bash:
#!/bin/bash
if [[ $# < 2 ]]; then
echo "Usage: $0 timeout cmd [options]"
exit 1
fi
TIMEOUT="$1"
shift
BOSSPID=$$
(
sleep $TIMEOUT
kill -9 -$BOSSPID
)&
TIMERPID=$!
trap "kill -9 $TIMERPID" EXIT
eval "$#"
I use "timelimit", which is a package available in the debian repository.
http://devel.ringlet.net/sysutils/timelimit/
A slight modification of the perl one-liner will get the exit status right.
perl -e '$s = shift; $SIG{ALRM} = sub { print STDERR "Timeout!\n"; kill INT => $p; exit 77 }; exec(@ARGV) unless $p = fork; alarm $s; waitpid $p, 0; exit ($? >> 8)' 10 yes foo
Basically, exit ($? >> 8) will forward the exit status of the subprocess. I just chose 77 as the exit status for a timeout.
Isn't there a way to set a specific time with "at" to do this?
$ at 05:00 PM kill -9 $pid
Seems a lot simpler.
If you don't know what the pid number is going to be, I assume there's a way to script reading it with ps aux and grep, but I'm not sure how to implement that.
$ ps aux | grep someprogram
tony 11585 0.0 0.0 3116 720 pts/1 S+ 11:39 0:00 grep someprogram
tony 22532 0.0 0.9 27344 14136 ? S Aug25 1:23 someprogram
Your script would have to read the pid and assign it a variable.
I'm not overly skilled, but assume this is doable.
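For what it's worth, a hedged sketch of that idea: pgrep can look up the PID by name, and at takes the command to run on stdin (someprogram is the placeholder name from the listing above):
pid=$(pgrep -x someprogram)
echo "kill -9 $pid" | at 17:00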

Binary tree of directories UNIX

I have a task to create a binary tree of directories in the bash shell; the depth is given as the first argument of the script. Every directory has to be named with the second argument + the depth of the tree the directory is in.
Example: ./tree.sh 3 name should create the following structure:
            name11
           /      \
      name21      name22
      /    \      /    \
name31  name32  name33  name34
I don't really have an idea how to do this; I can't even start. It is harder than anything I have done in bash up until now. Any help will be very much appreciated.
Thanks in advance.
With recursion:
#!/bin/bash
level=$1
current_level=$2; current_level=${current_level:=1}
last_number=$3; last_number=${last_number:=1}
prefix="name"
# test to stop recursion
[[ $level -eq $(($current_level-1)) ]] && exit
# first node
new_number=$(($current_level*10+$last_number*2-1))
mkdir "$prefix$new_number"
(
cd "$prefix$new_number"
$0 $level $(($current_level+1)) $(($last_number*2-1)) &
)
# second node, not in level 1
if [[ $current_level -ne 1 ]]; then
new_number=$(($current_level*10+$last_number*2))
mkdir "$prefix$new_number"
cd "$prefix$new_number"
$0 $level $(($current_level+1)) $(($last_number*2)) &
fi
Test with ./tree.sh 3
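For reference, this is the layout the script produces for depth 3 (the recursion runs in the background, so the listing may lag a moment behind the script exiting):
$ ./tree.sh 3; sleep 1; find . -type d | sort
.
./name11
./name11/name21
./name11/name21/name31
./name11/name21/name32
./name11/name22
./name11/name22/name33
./name11/name22/name34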
Even though other languages are more suitable for implementing a linked data structure, I don't know why this post got a negative vote.
Here's something good for searching that an expert shared; take a look:
https://gist.github.com/iestynpryce/4153007
NOTE: This is an implementation of a binary search tree in Bash. Object-like behaviour has been faked using eval; remember that eval in shell scripting can be evil. A binary tree (BT) and a binary search tree (BST) are not the same thing; you can google the difference.
#!/bin/bash
#
# Binary search tree is of the form:
#          10
#         /  \
#        /    \
#       4      16
#      / \    /
#     1   7  12
#
# Print the binary search tree by doing a recursive call on each node.
# Call the left node, print the value of the current node, call the right node.
# Cost is O(N), where N is the number of elements in the tree, as we have to
# visit each node once.
print_binary_search_tree() {
local node="$*";
# Test if the node id is blank; if so, return
if [ "${node}xxx" == "xxx" ]; then
return;
fi
print_binary_search_tree $(eval ${node}.getLeftChild)
echo $(${node}.getValue)
print_binary_search_tree $(eval ${node}.getRightChild)
}
### Utility functions to generate a BST ###
# Define set 'methods'
set_node_left() {
eval "${1}.getLeftChild() { echo "$2"; }"
}
set_node_right() {
eval "${1}.getRightChild() { echo "$2"; }"
}
set_node_value() {
eval "${1}.getValue() { echo "$2"; }"
}
# Generate unique id:
gen_uid() {
# prefix 'id' to the uid generated to guarantee
# it starts with chars, and hence will work as a
# bash variable
echo "id$(uuidgen|tr -d '-')";
}
# Generates a new node 'object'
new_node() {
local node_id="$1";
local value="$2";
local left="$3";
local right="$4";
eval "${node_id}set='set'";
eval "set_node_value $node_id $value";
eval "set_node_left $node_id $right";
eval "set_node_right $node_id $right";
}
# Inserts a value into a tree with a root node with identifier '$id'.
# If the node, hence the tree does not exist it creates it.
# If the root node is at either end of the list you'll reach the
# worst case complexity of O(N), where N is the number of elements in
# the tree. (Average case will be 0(logN).)
tree_insert() {
local id="$1"
local value="$2";
# If id does not exist, create it
if [ -z "$(eval "echo \$${id}set")" ]; then
eval "new_node $id $value";
# If id exists and the value inserted is less than or equal to
# the id's node's value.
# - Go down the left branch
elif [[ $value -le $(${id}.getValue) ]]; then
# Go down to an existing left node if it exists, otherwise
# create it.
if [ "$(eval ${id}.getLeftChild)xxx" != "xxx" ]; then
tree_insert $(eval ${id}.getLeftChild) $value
else
local uid=$(gen_uid);
tree_insert $uid $value;
set_node_left $id $uid;
fi
# Else go down the right branch as the value inserted is larger
# than the id node's value.
else
# Go down the right node if it exists, else create it
if [ "$(eval ${id}.getRightChild)xxx" != "xxx" ]; then
tree_insert $(eval ${id}.getRightChild) $value
else
local uid=$(gen_uid);
tree_insert $uid $value;
set_node_right $id $uid;
fi
fi
}
# Insert an unsorted list of numbers into a binary search tree
for i in 10 4 16 1 7 12; do
tree_insert bst $i;
done
# Print the binary search tree out in order
print_binary_search_tree bst
Actually, I think it's quite easy to implement a BST in bash:
just maintain it in a plain .txt file.
I'm not going to show here how to implement the CRUD operations for inserting/populating or deleting/updating BST nodes with a simple .txt file, but it works as far as printing values goes.
Here is my solution, using the .txt file approach and printing the tree from any root node: https://stackoverflow.com/a/67341334/1499296

Remove temporary files at end of bourne shell script

I've tried to use trap to remove a temporary file at the end of a Bourne shell script, but this doesn't work:
trap "trap \"rm \\\"$out\\\"\" EXIT INT TERM" 0
This is inside a function, by the way, hence the attempt at a nested trap.
How do I do it?
You can only have one trap set for each signal. If different parts of your script need to perform different cleanup actions, you’ll have to create lists of cleanup actions. Then set a single trap handler that performs all the required cleanup actions.
Here’s an example:
set -xv
PROG="$(basename -- "${0}")"
# set up your trap handler
TEMP_FILES=()
trap_handler() {
for F in "${TEMP_FILES[#]}"; do
rm -f "${F}"
done
}
trap trap_handler 0 1 2 3 15
something_that_uses_temp_files() {
mytemp="$(mktemp -t "${PROG}")"
TEMP_FILES+=("${mytemp}")
date > "${mytemp}"
# ...
}
# ...
something_that_uses_temp_files
# ...
There’s a single trap handler, but you can register cleanup actions from anywhere in the script by appending to the TEMP_FILES array. The cleanup actions can be registered from inside functions too.
If you’re not using a shell with arrays, the basic idea is the same, but the implementation details will be a little bit different. For example, you could store the list as a colon-separated string variable and use the ${parameter%%word} expansions available in every POSIX shell to iterate through its elements in the trap handler:
#!/bin/sh
set -xv
PROG="$(basename -- "${0}")"
# set up your trap handler
TEMP_FILES=""
trap_handler() {
while [ -n "${TEMP_FILES}" ]; do
CUR_FILE="${TEMP_FILES%%:*}"
TEMP_FILES="${TEMP_FILES#*:}"
if [ "${CUR_FILE}" = "${TEMP_FILES}" ]; then
# there were no colons -- CUR_FILE is the last file to process
TEMP_FILES=""
fi
if [ -n "${CUR_FILE}" ]; then
rm -f "${CUR_FILE}"
fi
done
}
trap trap_handler 0 1 2 3 15
something_that_uses_temp_files() {
mytemp="$(mktemp -t "${PROG}")"
TEMP_FILES="${TEMP_FILES}:${mytemp}"
date > "${mytemp}"
# ...
}
# ...
something_that_uses_temp_files
something_that_uses_temp_files
# ...

why doesn't this timer work?

I'm trying to make a script to start a second counter. [but later I want to add minutes too] but so far, it just keeps echoing 0, 0, 0, 0, over and over. :\
#!/bin/bash
seconds=0;
count()
{
export seconds=$[seconds + 1]
sleep 1;
count
}
count&
N=$!
trap "kill $N; exit 0;" 2
while true; do
echo $seconds
sleep 1;
done
The & makes it run in a subshell, which means that it has its own set of environment variables independent of the current script. Find another way (or another language) to do this.
Ignacio's answer explains that your subshell's environment is not visible to your parent process.
One way to create slaves like this is co-processes (with coproc in zsh and newer bash or with special syntax in ksh). Your bash probably doesn't support this yet.
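For illustration, a minimal coproc sketch (requires bash >= 4, so it may not apply here):
# the counter runs as a co-process; the parent reads each tick from its stdout
coproc COUNTER { s=0; while sleep 1; do echo $((++s)); done; }
while read -r seconds <&"${COUNTER[0]}"; do
    echo "$seconds"
done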
Here's a variation on your idea that uses signals to send the updates to the parent. I've retained your basic structure where it doesn't conflict:
count() {
parent=$1
kill -ALRM $parent
sleep 1
count $parent
}
trap 'seconds=$[$seconds + 1]' ALRM
count $$ &
trap "kill $!; exit 0" INT
while true
do
echo $seconds
done
