I'm using /usr/bin/osascript JS (JXA) to automate a task, and I'm struggling to check whether a process is already running:
const app = Application.currentApplication()
app.includeStandardAdditions = true

function run(args) {
    const query = args[0]
    let response = 'Wrong command passed'
    if (query === 'on') { // need to check if process named "asdf" is already running
        response = 'Process turned ON'
    } else if (query === 'off') { // need to check if process named "asdf" is already running
        response = 'Process turned OFF'
    }
    return response
}
The JXA documentation could be better. I want to implement the check in an if statement. I've tried:
const se = Application('System Events')
const process = se.processes.byName('processname')
But it has no effect.
Solved myself:
const PNAME = `ps aux | grep processname | grep -v grep | wc -l | xargs echo`
Getting "processname", if it's running, it returns 1, otherwise 0.
Were I to call out to a shell to do this, I would aim to make it as efficient a combination of commands as possible. xargs, wc, and the second pipe into grep are all unnecessary: if grep processname matches, the exit status of the command will be 0, and non-zero in all other cases. It looks like the only reason you pipe through those other programs is that you didn't use the most effective set of options when calling ps:
const PNAME = 'ps -Acxo comm | grep processname > /dev/null; echo $(( 1 - $? ))'
Even this use of grep is unnecessary, as bash can pattern match for you:
const PNAME = '[[ "$( ps -Acxo comm )" =~ processname ]]; echo $(( 1 - $? ))'
But, putting that to one side, I wouldn't get a shell script to do this unless I were writing a shell script. JXA is very capable of enumerating processes:
sys = Application('com.apple.SystemEvents');
sys.processes.name();
Then, to determine whether a specific named process, e.g. TextEdit, is running:
sys.processes['TextEdit'].exists();
which will return true or false accordingly.
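For example, a minimal sketch of your run() handler rewritten to use exists() (assuming the process is named "asdf", as in your comments):

const se = Application('com.apple.SystemEvents')

function run(args) {
    const query = args[0]
    const isRunning = se.processes['asdf'].exists() // true if "asdf" is currently running
    let response = 'Wrong command passed'
    if (query === 'on' && !isRunning) {
        // start the process here
        response = 'Process turned ON'
    } else if (query === 'off' && isRunning) {
        // stop the process here
        response = 'Process turned OFF'
    }
    return response
}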
Solved myself, using the PNAME shell command shown above (it prints 1 if the process is running, 0 otherwise). All that's left to do is:
if (app.doShellScript(PNAME) < 1) {
    // shell printed 0: the process is not running
} else {
    // shell printed 1 (or more): the process is running
}
In a Jenkins pipeline, I need to execute a shell command and assign its result to a def variable. What should I do? Thank you.
def projectFlag = sh("`kubectl get deployment -n ${namespace} | grep ${project} | wc -l`")

if ( "${projectFlag}" == 1 ) {
    def projectCI = sh("`kubectl get deployment ${project} -n ${namespace} -o jsonpath={..image}`")
    echo "$projectCI"
} else if ( "$projectCI" == "${imageTag}" ) {
    sh("kubectl delete deploy ${project} -n ${namespaces}")
    def redeployFlag = '1'
    echo "$redeployFlag"
    if ( "$projectCI" != "${imageTag}" ) {
        sh("kubectl set image deployment/${project} ${appName}=${imageTag} -n ${namespaces}")
    } else {
        def redeployFlag = '2'
    }
}
I believe you're asking how to save the result of a shell command to a variable for later use?
The way to do this is to use some optional parameters available on the shell step interface. See https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#sh-shell-script for the documentation.
def projectFlag = sh(returnStdout: true,
    script: "kubectl get deployment -n ${namespace} | grep ${project} | wc -l"
).trim()
Essentially, set returnStdout to true. The .trim() is critical for ensuring you don't pick up a \n newline character, which would ruin your evaluation logic. (Note that the backticks in your original script are also a problem: they would make the shell execute the pipeline and then try to run its output as a command, so they are removed above.)
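For instance, a minimal sketch of how the trimmed value might be evaluated afterwards (the .toInteger() conversion is my suggestion, since sh returns a String; variable names are taken from your script):

def projectFlag = sh(returnStdout: true,
    script: "kubectl get deployment -n ${namespace} | grep ${project} | wc -l"
).trim()

if (projectFlag.toInteger() >= 1) {
    // the deployment exists, so fetch its current image
    def projectCI = sh(returnStdout: true,
        script: "kubectl get deployment ${project} -n ${namespace} -o jsonpath={..image}"
    ).trim()
    echo "$projectCI"
}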
I am using:
pgrep -P $$
to get the child pids of $$. But I actually want a list of grandchildren and great-grandchildren too.
How do I do this, though? In a regular programming language we would do it with recursion, for example, but what about in bash? Perhaps with a bash function?
I've already posted an attempted solution. It's short and effective, and seems in line with the OP's question, so I'll leave it as it is. However, it has some performance and portability problems that mean it's not a good general solution. This code attempts to fix the problems:
top_pid=$1

# Make a list of all process pids and their parent pids
ps_output=$(ps -e -o pid= -o ppid=)

# Populate a sparse array mapping pids to (string) lists of child pids
children_of=()
while read -r pid ppid ; do
    [[ -n $pid && pid -ne ppid ]] && children_of[ppid]+=" $pid"
done <<< "$ps_output"

# Add children to the list of pids until all descendants are found
pids=( "$top_pid" )
unproc_idx=0    # Index of first process whose children have not been added
while (( ${#pids[@]} > unproc_idx )) ; do
    pid=${pids[unproc_idx++]}       # Get first unprocessed, and advance
    pids+=( ${children_of[pid]-} )  # Add child pids (ignore ShellCheck)
done

# Do something with the list of pids (here, just print them)
printf '%s\n' "${pids[@]}"
The basic approach of using a breadth-first search to build up the tree has been retained, but the essential information about processes is now obtained with a single (POSIX-compliant) run of ps. pgrep is no longer used because it is not in POSIX and it could be run many times. Also, a very inefficient way of removing items from the queue (copying all but one element of it) has been replaced with manipulation of an index variable.
Average (real) run time is 0.050s when run on pid 0 on my oldish Linux system with around 400 processes.
I've only tested it on Linux, but it only uses Bash 3 features and POSIX-compliant features of ps so it should work on other systems too.
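Saved as, say, descendants.sh (the file name is just for illustration), it can be invoked with the pid of interest:

bash descendants.sh "$$"   # prints the calling shell's pid plus all of its descendants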
Using nothing but bash builtins (not even ps or pgrep!):
#!/usr/bin/env bash

collect_children() {
    # format of /proc/[pid]/stat file; group 1 is PID, group 2 is its parent
    stat_re='^([[:digit:]]+) [(].*[)] [[:alpha:]] ([[:digit:]]+) '

    # read process tree into a bash array
    declare -g children=( )  # map each PID to a string listing its children
    for f in /proc/[[:digit:]]*/stat; do  # forcing initial digit skips /proc/net/stat
        read -r line <"$f" && [[ $line =~ $stat_re ]] || continue
        children[${BASH_REMATCH[2]}]+="${BASH_REMATCH[1]} "
    done
}

# run a fresh collection, then walk the tree
all_children_of() { collect_children; _all_children_of "$@"; }

_all_children_of() {
    local -a immediate_children
    local child
    read -r -a immediate_children <<<"${children[$1]}"
    for child in "${immediate_children[@]}"; do
        echo "$child"
        _all_children_of "$child"
    done
}

all_children_of "$@"
On my local system, time all_children_of 1 >/dev/null (invoking the function in an already-running shell) clocks in the neighborhood of 0.018s -- typically, 0.013s for the collect_children stage (the one-time action of reading the process tree), and 0.005s for the recursive walk of that tree triggered by the initial call of _all_children_of.
Prior timings were testing only the time needed for the walk, discarding the time needed for the scan.
The code below will print the PIDs of the current process and all its descendants. It uses a Bash array as a queue to implement a breadth-first search of the process tree.
unprocessed_pids=( $$ )
while (( ${#unprocessed_pids[@]} > 0 )) ; do
    pid=${unprocessed_pids[0]}                       # Get first elem.
    echo "$pid"
    unprocessed_pids=( "${unprocessed_pids[@]:1}" )  # Remove first elem.
    unprocessed_pids+=( $(pgrep -P $pid) )           # Add child pids
done
Probably a simple loop would do it (note this only descends two levels; for arbitrary depth see the recursive approaches above):

# set a value for pid here
printf 'Children of %s:\n' $pid
for child in $(pgrep -P $pid); do
    printf 'Children of %s:\n' $child
    pgrep -P $child
done
If pgrep doesn't do what you want, you can always use ps directly. Options will be somewhat platform-dependent.
ps -o ppid,pid |
awk -v pid=$$ 'BEGIN { parent[pid] = 1 }  # collect interesting parents
    { child[$2] = $1 }                    # collect parents of all processes
    $1 == pid { parent[$2] = 1 }
    END { for (p in child)
              if (parent[child[p]])
                  print p }'
The variable names are not orthogonal -- parent collects the processes which are pid or one of its children as keys, i.e. the "interesting" parents, and child contains the parent of each process, with the process as the key and the parent as the value.
I ended up doing this with node.js and bash:
const async = require('async');
const cp = require('child_process');

// callback convention used below (assumed shape: error first, then value)
type EVCb<T> = (err: any, val?: T) => void;

export const getChildPids = (pid: number, cb: EVCb<Array<string>>) => {

    const pidList: Array<string> = [];

    const getMoreData = (pid: string, cb: EVCb<null>) => {

        const k = cp.spawn('bash');
        const cmd = `pgrep -P ${pid}`;
        k.stderr.pipe(process.stderr);
        k.stdin.end(cmd);

        let stdout = '';
        k.stdout.on('data', d => {
            stdout += String(d || '');  // accumulate raw output; split/trim later
        });

        k.once('exit', code => {

            if (code > 0) {
                console.warn('The following command exited with non-zero code:', code, cmd);
            }

            const list = String(stdout).split(/\s+/).map(v => String(v || '').trim()).filter(Boolean);

            if (list.length < 1) {
                return cb(null);
            }

            for (let v of list) {
                pidList.push(v);
            }

            // recurse into up to 3 child pids at a time
            async.eachLimit(list, 3, getMoreData, cb);
        });
    };

    getMoreData(String(pid), err => {
        cb(err, pidList);
    });
};
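A hypothetical usage example (printing whatever descendant pids were collected):

getChildPids(process.pid, (err, pids) => {
    if (err) throw err;
    console.log('descendant pids:', pids);
});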
Look at the 2nd-to-last line of my script. For some reason Perl is not able to access the variable $perlPort. How can I fix this? Note: $perlPort is a bash variable defined before my perl script.
perl -e '
{
    package MyWebServer;

    use HTTP::Server::Simple::CGI;
    use base qw(HTTP::Server::Simple::CGI);

    my %dispatch = (
        "/" => \&resp_hello,
    );

    sub handle_request {
        my $self = shift;
        my $cgi  = shift;
        my $path = $cgi->path_info();
        my $handler = $dispatch{$path};
        if (ref($handler) eq "CODE") {
            print "HTTP/1.0 200 OK\r\n";
            $handler->($cgi);
        } else {
            print "HTTP/1.0 404 Not found\r\n";
            print $cgi->header,
                  $cgi->start_html("Not found"),
                  $cgi->h1("Not found"),
                  $cgi->end_html;
        }
    }

    sub resp_hello {
        my $cgi = shift;   # CGI.pm object
        return if !ref $cgi;
        my $who = $cgi->param("name");
        print $cgi->header,
              $cgi->start_html("Hello"),
              $cgi->h1("Hello Perl"),
              $cgi->end_html;
    }
}

my $pid = MyWebServer->new($perlPort)->background();
print "Use 'kill $pid' to stop server.\n";'
One option is to export the bash variable so Perl can read it from its environment:

export perlPort
perl -e '
...
my $pid = MyWebServer->new($ENV{perlPort})->background();
'
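To see the mechanism in isolation (8080 is just an example value):

export perlPort=8080
perl -e 'print "port is $ENV{perlPort}\n"'   # prints: port is 8080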
You can use the -s switch to pass variables. See http://perldoc.perl.org/perlrun.html for details:
perl -se '
...
my $pid = MyWebServer->new($perlPort)->background();
...' -- -perlPort="$perlPort"
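Again, a quick way to verify the mechanism in isolation:

perl -se 'print "port is $perlPort\n"' -- -perlPort=8080   # prints: port is 8080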
You can still pass command-line arguments to your script. Replace $perlPort with $ARGV[0], then call your script as
perl -e $' ...
my $pid = MyWebServer->new($ARGV[0])->background();
print "Use \'kill $pid\' to stop server.\n";' "$perlPort"
Note the other problem: You can't include single quotes inside a single-quoted string in bash. You can work around this by using a $'...'-quoted string as the argument to Perl, which can contain escaped single quotes. If your script doesn't need to read from standard input, it would be a better idea to have perl read from a here-document instead.
perl <<'EOF' "$perlPort"
{
package MyWebServer;
use HTTP::Server::Simple::CGI;
...
my $pid = MyWebServer->new($ARGV[0])->background();
print "Use 'kill $pid' to stop server.\n";
EOF
The best idea is to simply use a script file instead of trying to construct the script on the command line.
Finally, you can end the single-quoted string just long enough for bash to expand $perlPort, then reopen it:

perl -e '
...
my $pid = MyWebServer->new('$perlPort')->background();
...
I have the following dtrace one-liner:
sudo dtrace -n 'syscall:::entry { @num[probefunc] = count(); }'
which prints a count of each syscall (after hitting Ctrl-C).
How do I add a filter to the above probe so that it only applies to processes with a given name (e.g. php)? Similar to dtruss -n <name>.
OK, this is fairly straightforward, since you can check in dtruss how the filtering is done:
$ grep -C5 NAME $(which dtruss)
syscall:::entry
/(OPT_command && pid == $target) ||
(OPT_pid && pid == PID) ||
(OPT_name && NAME == strstr(NAME, execname)) ||
(OPT_name && execname == strstr(execname, NAME)) ||
(self->child)/
{
/* set start details */
where NAME is the process name.
So the one-liner command is (replace php with your process name):
sudo dtrace -n '
inline string NAME = "php";

syscall:::entry
/(NAME == strstr(NAME, execname)) || (execname == strstr(execname, NAME))/
{
    @num[probefunc] = count();
}
'
My question is similar to this one: How to detect if my shell script is running through a pipe?. The difference is that the shell script I’m working on is written in Node.js.
Let’s say I enter:
echo "foo bar" | ./test.js
Then how can I get the value "foo bar" in test.js?
I’ve read Unix and Node: Pipes and Streams but that only seems to offer an asynchronous solution (unless I’m mistaken). I’m looking for a synchronous solution. Also, with this technique, it doesn’t seem very straightforward to detect if the script is being piped or not.
TL;DR My question is two-fold:
How to detect if a Node.js script is running through a shell pipe, e.g. echo "foo bar" | ./test.js?
If so, how to read out the piped value in Node.js?
I just found a simpler answer to part of my question.
To quickly and synchronously detect if piped content is being passed to the current script in Node.js, use the process.stdin.isTTY boolean:
$ node -p -e 'process.stdin.isTTY'
true
$ echo 'foo' | node -p -e 'process.stdin.isTTY'
undefined
So, in a script, you could do something like this:
if (process.stdin.isTTY) {
    // handle shell arguments
} else {
    // handle piped content (see Jerome’s answer)
}
The reason I didn’t find this before is because I was looking at the documentation for process, where isTTY is not mentioned at all. Instead, it’s mentioned in the TTY documentation.
Pipes are made to handle small inputs like "foo bar" but also huge files.
The stream API makes sure that you can start handling data without waiting for the huge file to be totally piped through (this is better for speed & memory). The way it does this is by giving you chunks of data.
There is no synchronous API for pipes. If you really want to have the whole piped input in your hands before doing something, you can use the following:
note: use only node >= 0.10.0 because the example uses the stream2 API
var data = '';
var gotChunk = false;

function withPipe(data) {
    console.log('content was piped');
    console.log(data.trim());
}

function withoutPipe() {
    console.log('no content was piped');
}

var self = process.stdin;

self.on('readable', function() {
    var chunk = this.read();
    if (chunk === null) {
        // 'readable' also fires with null at end-of-stream,
        // so only report "no pipe" if no data ever arrived
        if (!gotChunk) {
            withoutPipe();
        }
    } else {
        gotChunk = true;
        data += chunk;
    }
});

self.on('end', function() {
    if (gotChunk) {
        withPipe(data);
    }
});
test with
echo "foo bar" | node test.js
and
node test.js
It turns out that process.stdin.isTTY is not reliable, because you can spawn a child process whose stdin is not a TTY. I found a better solution using file descriptors.
You can test whether your program was piped in or out with these functions:
const fs = require('fs')

function pipedIn(cb) {
    fs.fstat(0, function(err, stats) {
        if (err) {
            cb(err)
        } else {
            cb(null, stats.isFIFO())
        }
    })
}

function pipedOut(cb) {
    fs.fstat(1, function(err, stats) {
        if (err) {
            cb(err)
        } else {
            cb(null, stats.isFIFO())
        }
    })
}

pipedIn((err, x) => console.log("in", x))
pipedOut((err, x) => console.log("out", x))
Here are some tests demonstrating that it works.
❯❯❯ node pipes.js
in false
out false
❯❯❯ node pipes.js | cat -
in false
out true
❯❯❯ echo 'hello' | node pipes.js | cat -
in true
out true
❯❯❯ echo 'hello' | node pipes.js
in true
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"node pipes.js\", (err, res) => console.log(res))"
undefined
in false
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"echo 'hello' | node pipes.js\", (err, res) => console.log(res))"
undefined
in true
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"echo 'hello' | node pipes.js | cat -\", (err, res) => console.log(res))"
undefined
in true
out true
❯❯❯ node -p -e "let x = require('child_process').exec(\"node pipes.js | cat -\", (err, res) => console.log(res))"
undefined
in false
out true
If you need to pipe into nodejs using an inline --eval string in bash, cat works too:
$ echo "Hello" | node -e "console.log(process.argv[1]+' pipe');" "$(cat)"
# "Hello pipe"
You need to check stdout (not stdin, as suggested elsewhere) like this:
if (process.stdout.isTTY) {
    // not piped
} else {
    // piped
}