Confirmation about pgrep returning itself - ruby

I have read several posts here about cases where pgrep 'seems' to return itself even though it never should. The key seems to be the difference between how bash and sh function. Except that in my case, I have confirmed that sh really is a link to bash.
I'm running on SuSE 12 x86_64
/bin/sh is a link to bash
/bin/bash is the real binary
I have a Ruby script which calls pgrep like this:
cmd="/usr/bin/pgrep -lf \"#{target}\""
pidList=`#{cmd}`
I need to use the full command line because I'm actually using an argument to uniquely identify a specific 'java' process.
Now, due to some unrelated foolishness, I almost immediately do a ps -p on each of the pids returned. For a while, this was causing me great grief because the ps would sometimes return nothing. Eventually I was able to catch a case where the ps on the pid returned the pgrep command. But it was the pgrep command itself, not something like sh -c "pgrep -f blah"
To recap:
pgrep never returns itself, but differences in sh vs bash can cause it to show a subshell. However, I verified that sh is a link to bash, so there should be no difference in behavior.
What I suspect (and am looking for confirmation for) is that an extra subcommand is being created because of the Ruby backticks and that is what is (only sometimes.. timing issues?) being picked up by the pgrep command.
This has been a real pain and I want to make sure the fix I implement will truly make the problem go away. Given the code I'm working with, I'm either going to:
1. append a | grep -v grep to the end of my command, or
2. throw out any results containing 'grep' while looping through the returned results within the Ruby script.
I figure #2 is faster, but it still irks me that I have to filter out pgrep itself.
Am I on the right track or do you think something else is at play?
Thanks for your time!

The problem is not the shell flavor: the shell process which calls pgrep also shows up among the matched processes (its full command line contains the searched string), so it needs to be filtered out, for example like this:
pgrep -f target | grep -v $$

The answer is already in the comments to my question, but I figure I'll close this out with an official answer.
The piece of information I was missing is that
When bash is invoked as sh it behaves as a POSIX sh, not bash. – Jörg W Mittag Jan 23 at 23:31
So yes, pgrep was behaving normally. But when you call it from a Ruby script via backticks, you still need to filter out 'pgrep'.
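For the record, a minimal shell-level sketch of that filtering (option 1 from the question; "$target" stands in for whatever string uniquely identifies the java process):
# Drop the pgrep process itself (and the sh -c wrapper created by the
# Ruby backticks) before the PIDs are used.
pidList=$(/usr/bin/pgrep -lf "$target" | grep -v pgrep)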

Related

Determining all the processes started with a given executable in Linux

I have this need to collect/log all the command lines that were used to start a process on my machine during the execution of a Perl script, which happens to be a test automation script. This Perl script starts the executable in question (MySQL) multiple times with various command lines, and I would like to inspect all of the command lines of those invocations. What would be the right way to do this? One possibility I see is to run something like "ps -aux | grep mysqld | grep -v grep" in a loop in a shell script and capture the results in a file, but then I would have to do some post-processing to remove duplicates etc., and I could possibly miss some process command lines because of timing issues. Is there a better way to achieve this?
Processing the ps output can always miss some processes. It will only capture the ones currently existing. The best way would be to modify the Perl script to log each command before or after it executes it.
If that's not an option, you can get the child pids of the perl script by running:
pgrep -P $pid -a
-a gives the full process command. $pid is the pid of the perl script. Then process just those.
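For example, a rough polling sketch (assuming $pid already holds the Perl script's PID; short-lived children can still be missed, as noted above):
# Log the full command line of every direct child of the Perl script,
# then de-duplicate afterwards.
while kill -0 "$pid" 2>/dev/null; do
    pgrep -P "$pid" -a >> child_cmdlines.log
    sleep 1
done
sort -u child_cmdlines.log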
You could use strace to log calls to execve.
$ strace -f -o strace.out -e execve perl -e 'system("echo hello")'
hello
$ egrep ' = 0$' strace.out
11232 execve("/usr/bin/perl", ["perl", "-e", "system(\"echo hello\")"], 0x7ffc6d8e3478 /* 55 vars */) = 0
11233 execve("/bin/echo", ["echo", "hello"], 0x55f388200cf0 /* 55 vars */) = 0
Note that strace.out will also show the failed execs (where execve returned -1), hence the egrep command to find the successful ones. A successful execve call does not return, but strace records it as if it returned 0.
Be aware that this is a relatively expensive solution because it is necessary to include the -f option (follow forks), as perl will be doing the exec call from forked subprocesses. This is applied recursively, so it means that your MySQL executable will itself run through strace. But for a one-off diagnostic test it might be acceptable.
Because of the need to use recursion, any exec calls done from your MySQL executable will also appear in the strace.out, and you will have to filter those out. But the PID is shown for all calls, and if you were to log also any fork or clone calls (i.e. strace -e execve,fork,clone), you would see both the parent and child PIDs, in the form <parent_pid> clone(......) = <child_pid>, so you should then have enough information to reconstruct the process tree and decide which processes you are interested in.
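Putting that together, the extended trace would look something like this (the Perl script name is a placeholder):
# Trace successful execs plus fork/clone so parent and child PIDs can be
# paired up when reconstructing the process tree afterwards.
strace -f -o strace.out -e trace=execve,fork,clone perl your_test_script.pl
grep ' = 0$' strace.out    # successful execve calls only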

Proper syntax for bash script line

Writing a script to retrieve various environment parameters back from a list of servers. My script returns no value when run, but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted)
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my scriipt, I see the "readlink" code is blank.
Ideally, I would like to return the PID and path of the process running as the "pgrep" command does. I can work with the path alone as returned by the "readlink" version returns. The end goal is to gather the information from several servers for audit purposes. (version, etc.)
Am I using the wrong syntax for the "readlink" command? I'm fairly new to coding bash scripts so I appreciate any guidance to help understand when to to what if I'm using a command in a script vs command line.
If pwdx is the restricted program, you need to run that with dzdo, not pgrep.
pgrep -f /some/path | dzdo xargs pwdx
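If you want both the PID and the path, a sketch along these lines might work (pwdx prints "PID: /path"; /some/path is the placeholder from the question):
# Print each matching PID together with its current working directory,
# running only the restricted pwdx through dzdo.
for pid in $(pgrep -f /some/path); do
    printf '%s %s\n' "$pid" "$(dzdo pwdx "$pid" | awk '{print $2}')"
done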

Is there a way to redirect all stdout and stderr to systemd journal from within script?

I like the idea of using systemd's journal to view and manage the logs of my own scripts. I have become aware that you can log to the journal from user scripts on a per-message basis:
echo 'hello' | systemd-cat -t myscript -p emerg
Is there a way to redirect all messages to journald, even those generated by other commands? Like..
exec &> systemd-cat
Update:
Some partial success.
Tried Inian's suggestion from terminal.
~/scripts/myscript.sh 2>&1 | systemd-cat -t myscript.sh
and it worked, stdout and stderr were directed to systemd's journal.
Curiously,
~/scripts/myscript.sh &> | systemd-cat -t myscript.sh
didn't work in my Bash terminal.
I still need to find a way to do this inside my script for when other programs call my script.
I tried..
exec 2>&1 | systemd-cat -t myscript.sh
but it doesn't work.
Update 2:
From terminal
systemd-cat ~/scripts/myscript.sh
works. But I'm still looking for a way to do this from within the script.
A pipe to systemd-cat is a process which needs to run concurrently with your script. Bash offers a facility for this, though it's not portable to POSIX sh.
exec > >(systemd-cat -t myscript -p emerg) 2>&1
The >(command) process substitution starts another process and returns a pseudo-filename (something like /dev/fd/63) which you can redirect into. This is basically a wrapper for the mkfifo hacks you could do if you wanted to port this to POSIX sh.
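In context, a minimal self-contained script using that line might look like this (the tag and priority are just examples):
#!/bin/bash
# Send everything this script writes, stdout and stderr alike, to the journal.
exec > >(systemd-cat -t myscript -p info) 2>&1

echo "this goes to the journal"
ls /nonexistent    # the resulting error message ends up in the journal too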
If your script happens to not be a shell script, but some other programming language that allows loading extension modules linked to -lsystemd, there is another way. There is a library function sd_journal_stream_fd that quite precisely matches the task at hand. Calling it from bash itself (as opposed to some child) seems difficult at best. In Python for instance, it is available as systemd.journal.stream. What this function does in essence is connecting a unix domain stream socket and communicating what kind of data is being transmitted (e.g. priority). The difficult part with a shell here is making it connect a unix domain socket (as opposed to connecting in a child).
The key idea to this answer was given by Freenode/libera.chat user grawity.
Apparently, and for reasons that are beyond me, you can't really redirect all stdout and stderr to journald from within a script because it has to be piped in. To work around that I found a trick people were using with syslog's logger which works similarly.
You can wrap all your code into a function and then pipe the function into systemd-cat.
#!/bin/bash
mycode() {
    echo "hello world"
    echor "echo typo producing error"   # deliberate typo, writes an error to stderr
}
# Pipe both stdout and stderr of the wrapped code into the journal.
mycode 2>&1 | systemd-cat -t myscript.sh
exit 0
And then to search journal logs..
journalctl -t myscript.sh --since yesterday
I'm disappointed there isn't a more direct way of doing this.

Storing execution time of a command in a variable

I am trying to write a task-runner for the command line. No rationale. Just wanted to do it. Basically it just runs a command, stores the output in a file (instead of stdout), meanwhile prints a progress indicator of sorts on stdout, and when it's all done, prints Completed ($TIME_HERE).
Here's the code:
#!/bin/bash
task() {
    TIMEFORMAT="%E"
    COMMAND=$1
    printf "\033[0;33m${2:-$COMMAND}\033[0m\n"
    while true
    do
        for i in 1 2 3 4 5
        do
            printf '.'
            sleep 0.5
        done
        printf "\b\b\b\b\b \b\b\b\b\b"
        sleep 0.5
    done &
    WHILE=$!
    EXECTIME=$({ TIMEFORMAT='%E';time $COMMAND >log; } 2>&1)
    kill -9 $WHILE
    echo $EXECTIME
    #printf "\rCompleted (${EXECTIME}s)\n"
}
There are some unnecessarily fancy bits in there I admit. But I went through tons of StackOverflow questions to do different kinds of fancy stuff just to try it out. If it were to be applied anywhere, a lot of fat could be cut off. But it's not.
It is to be called like:
task "ping google.com -c 4" "Pinging google.com 4 times"
What it'll do is print Pinging google.com 4 times in yellow color, then on the next line, print a period. Then print another period every .5 seconds. After five periods, start from the beginning of the same line and repeat this until the command is complete. Then it's supposed to print Complete ($TIME_HERE) with (obviously) the time it took to execute the command in place of $TIME_HERE. (I've commented that part out, the current version would just print the time).
The Issue
The issue is that instead of the execution time, something very weird gets printed. It's probably something stupid I'm doing, but I don't know where the problem originates. Here's the output.
$ sh taskrunner.sh
Pinging google.com 4 times
..0.00user 0.00system 0:03.51elapsed 0%CPU (0avgtext+0avgdata 996maxresident)k 0inputs+16outputs (0major+338minor)pagefaults 0swaps
Running COMMAND='ping google.com -c 4';EXECTIME=$({ TIMEFORMAT='%E';time $COMMAND >log; } 2>&1);echo $EXECTIME in a terminal works as expected, i.e. prints out the time (3.559s in my case.)
I have checked and /bin/sh is a symlink to dash. (However that shouldn't be a problem because my script runs in /bin/bash as per the shebang on the top.)
I'm looking to learn while solving this issue so a solution with explanation will be cool. T. Hanks. :)
When you invoke a script with:
sh scriptname
the script is passed to sh (dash in your case), which will ignore the shebang line. (In a shell script, a shebang is a comment, since it starts with a #. That's not a coincidence.)
Shebang lines are only interpreted for commands started as commands, since they are interpreted by the system's command launcher, not by the shell.
By the way, your invocation of time does not correctly separate the output of the time builtin from any output the timed command might send to stderr. I think you'd be better off with:
EXECTIME=$({ TIMEFORMAT=%E; time $COMMAND >log.out 2>log.err; } 2>&1)
but that isn't sufficient. You will continue to run into the standard problem with putting commands into string variables: it only works for very simple commands. See the Bash FAQ, or look at some of these answers (a sketch of the alternative follows the list):
How to escape a variable in bash when passing to command line argument
bash quotes in variable treated different when expanded to command
Preserve argument splitting when storing command with whitespaces in variable
find command fusses on -exec arg
Using an environment variable to pass arguments to a command
(Or probably hundreds of other similar answers.)
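As a hedged sketch of that advice: pass the command as separate arguments rather than one string, so "$@" preserves the quoting. This changes the calling convention from the original script, and the progress spinner is omitted for brevity:
#!/bin/bash
task() {
    local label=$1
    shift
    printf '\033[0;33m%s\033[0m\n' "$label"
    local exectime
    # time writes its %E output to the brace group's stderr, which is the
    # only stream captured here; the command's own output goes to log files.
    exectime=$({ TIMEFORMAT=%E; time "$@" >log.out 2>log.err; } 2>&1)
    printf 'Completed (%ss)\n' "$exectime"
}

task "Pinging google.com 4 times" ping google.com -c 4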

Spawning background process under different user in bash

I know I can run this command to spawn a background process and get the PID:
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
and to run a command under different user:
su - $USER -c "$COMMAND"
I don't want the script to run as root and I can't quite figure out how to combine the two and get the PID of the spawned process.
Thanks!
I think you want the runuser command. General syntax:
runuser -l userNameHere -c 'command'
I suspect that if you set your $SCRIPT variable to the above (with appropriate changes), your first command will do what you want.
To elaborate, see the following: stackoverflow.com/questions/9119885/…
See particularly the following quote from Chris Dodd:
Unfortunately there's no easy way to do this prior to bash version 4, when $BASHPID was introduced. One thing you can do is to write a tiny program that prints its parent PID:...
If you have bash 4 and BASHPID, see $$ in a script vs $$ in a subshell
I don't have version 4, so I can't provide an example of its usage.
Or write a tiny C program which execvs its arguments and make it setuid to USER.
Or even make a setuid shell script (not generally recommended). Hopefully the USER is fixed; if not, get the source for runuser; this is essentially what runuser (not a POSIX command) does.
PID=`su - $USER -c "$SCRIPT > /dev/null 2>&1 & echo $!"`
The problems with your use of su (above) include:
the $! is being executed in the context of the -c sub-shell of su, not the current shell where PID is,
you're requesting that your SCRIPT be run as a login shell, so you don't even know if USER's shell supports $!,
you have no control over the parent-child process chain that su (and the user's shell) create.
IOW, when you use
PID=`$SCRIPT > /dev/null 2>&1 & echo $!`
there's only one program involved, bash, and two (maybe three?) processes that you pretty much have complete control over. When you throw su into the mix, that changes things much more than is apparent on the surface -- bash and su support similar arguments, right?!?
For obvious reasons, su does mucho magic to protect itself and its children's environment from attacks; it doesn't even like being put in the background....
It's kind of late, but here is a two-liner that will work; it seems to need to be two lines so that it doesn't wait for the $SCRIPT to complete:
su $USER -c "$SCRIPT 2>&1 & >> $LogOrNull echo $! > /some/writeable/path"
PID="$(cat /some/writeable/path)"
/some/writeable/path will need to be writeable by $USER, and the user running these commands will need to have read access to it.
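A cleaned-up sketch of that two-step idea (assuming the target user's shell is Bourne-compatible, the script's output is simply discarded here, and /tmp/myscript.pid is acceptable as the user-writable path):
# Run $SCRIPT as $USER in the background and let the inner shell record the
# child's PID; \$! is escaped so it expands inside the su shell, not here.
PIDFILE=/tmp/myscript.pid
su "$USER" -c "$SCRIPT > /dev/null 2>&1 & echo \$! > $PIDFILE"
PID=$(cat "$PIDFILE")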
