Checking if user has process running - shell

I have been using this to check if the process I want to edit is already running.
Currently this returns true if any user has the process running, but since multiple users now run it, I need this line to return true only if the current user has it running. I already have something to execute commands as_user, and the username is saved in ME.
if ps ax | grep -v grep | grep -v -i SCREEN | grep "$SERVICE" > /dev/null

$LOGNAME provides the current user name. So if you are running the command as user X and want to check for that specific user's process, you can add an additional grep for $LOGNAME. I am using SUSE Linux; in case you are using another OS, please specify.
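A minimal sketch of that suggestion; the ME and SERVICE values below are hypothetical stand-ins, and ps aux is used instead of ps ax so that the owning user appears in the first column:

```shell
#!/bin/sh
# Hypothetical values; in the real script ME is set by the as_user
# machinery and SERVICE by the surrounding code.
ME="alice"
SERVICE="myservice"

# ps aux prints the owning user in column 1, so after filtering out the
# grep itself and SCREEN sessions, match both the service and the user.
if ps aux | grep -v grep | grep -v -i SCREEN | grep "$SERVICE" | grep -q "^$ME "; then
    echo "$SERVICE is already running for $ME"
fi
```

On most systems `ps axo user,args` would work as well, if the broader ax-style listing is preferred.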

Stopping a task in shell script if logs contain specific string

I am trying to run a command in a shell script and would like to exit it if the processing logs (I'm not sure what you call the logs output to the terminal while the task is running) contain the string "INFO | Next session will start at".
I tried using grep, but because the string "INFO | Next session will start at" is not on stdout, it is not detected while the command is running.
The specific command I'm running is below
pipenv run python3 run.py --config accounts/user/config.yml
By 'processing logs' I mean the log output before the stdout is displayed in the terminal.
...
[D 211127 10:07:12 init:400] atx-agent version 0.10.0
[D 211127 10:07:12 init:403] device wlan ip: route ip+net: no such network interface
[11/27 10:07:12] INFO | Time delta has set to 00:11:51.
[11/27 10:07:13] INFO | Kill atx agent.
[11/27 09:59:32] INFO | Next session will start at: 10:28:30 (2021/11/27).
[11/27 09:59:32] INFO | Time left: 00:28:57.
I am trying to do this because the yml file I'm trying to run has a limit on what time you can execute it, and I would like to exit the task if the time is not met.
I tried to give as much context but if there's something missing please let me know.
This may work:
pipenv run python3 run.py --config accounts/user/config.yml |
sed "/INFO | Next session will start at/q"
sed prints the piped input until it matches the expression, then quits (q). The program will receive SIGPIPE (broken pipe) the next time it tries to write, and will (likely) exit. It's the same as what happens when you run something like find | head.
You could also use kill in a shell wrapper:
sh -c 'pipenv run python3 run.py --config accounts/user/config.yml |
{ sed "/INFO | Next session will start at/q"; kill -- -$$; }'
Notes:
The program may print a different log if stdout is not a terminal.
If you want to match a literal string, you could use grep -Fm 1 PATTERN, but then the other log output is hidden, since grep prints only matching lines. grep also exits non-zero if there is no match, which can be useful.
This will work in any shell, including zsh; zsh or bash can also be used for the kill wrapper.
There are other approaches. This thread focuses on tail, but is a useful reference: https://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found
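A self-contained sketch of the grep variant from the notes, using a printf stand-in for the real command (the log lines are copied from the question):

```shell
#!/bin/sh
# Stand-in for the real command; in actual use, replace emit_logs with:
#   pipenv run python3 run.py --config accounts/user/config.yml
emit_logs() {
    printf '%s\n' \
        '[11/27 10:07:12] INFO | Time delta has set to 00:11:51.' \
        '[11/27 09:59:32] INFO | Next session will start at: 10:28:30 (2021/11/27).' \
        '[11/27 09:59:32] INFO | Time left: 00:28:57.'
}

# -F matches the string literally (no regex), -m 1 exits after the first
# hit; earlier non-matching lines are suppressed, unlike the sed version.
emit_logs | grep -Fm 1 'INFO | Next session will start at'
```

When grep exits, the producer gets SIGPIPE on its next write, just as described for sed above.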

How to listen for, and act upon, a Mac notification using Automator?

Is it possible to listen for a specific notification on Mac and act upon it using Automator?
I regularly use an app that runs a background job then sends a notification when it's finished. The app stays open after the job is finished so I'd like to use Automator to quit the app when the notification is received.
If it's not possible in Automator is there another way I could do this?
More context: the app is actually launched by a Folder Action created using Automator. It detects when a specific SD card is inserted and runs a backup app on that SD card. So maybe there's something I can add to that Folder Action workflow that can detect the notification?
While it's certainly possible to query the sqlite database containing notifications in macOS, it seems to me like an unnecessarily complicated route, and I would first try the following...
In your workflow, add a 'Run Shell Script' action at the end, containing something like this:
while [ $(ps -e | grep "[N]ameOfBackgroundProcess" | wc -l) -gt 0 ]; do
sleep 3
done
killall "NameOfAppToQuit" > /dev/null 2>&1
The while loop checks whether the background job is still running.
ps -e lists all running processes.
grep "[N]ameOf..." gets all lines containing the name. Brackets around the first letter excludes the grep process itself from the output.
wc -l counts the lines.
-gt 0 checks if the number is greater than zero.
When the loop is done, that means the process has exited so we quit the app with killall.
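If pgrep is available (it ships with macOS and most Linux distributions), the same poll-and-quit logic can be sketched more compactly; the process and app names are the same placeholders:

```shell
#!/bin/sh
# -x matches the exact process name; pgrep exits non-zero once no
# process matches, which ends the polling loop.
while pgrep -x "NameOfBackgroundProcess" > /dev/null; do
    sleep 3
done
# Ignore killall's error if the app already quit on its own.
killall "NameOfAppToQuit" > /dev/null 2>&1 || true
```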
As for the notification route...
I haven't figured everything out, but this might give you a head start:
#!/usr/bin/env bash
# Get the directory of the Notification Center database (works for me in Big Sur):
db_dir=$(lsof -p $(ps aux | grep -m1 usernoted | awk '{ print $2 }') | awk '{ print $NF }' | grep 'db2/db$' | xargs dirname)
# Get the app_id:
app_id=$(sqlite3 "$db_dir"/db 'SELECT app_id FROM app WHERE identifier="com.example.identifier";')
# Get relevant records:
sqlite3 "$db_dir"/db "SELECT * FROM record WHERE app_id='$app_id';"
# And this is where I leave you.
To explore the database in a GUI, try https://sqlitebrowser.org/

Print lines of wbinfo -u matching pattern

I have a Red Hat 6.8 cluster with several nodes in it, and I'm trying to sync the Active Directory UID and GID to them using winbind. I am attempting to sync the output of wbinfo -u to all the nodes, but I only want the relevant AD accounts with the uid and gid fields populated. I tried this with:
for i in `wbinfo -u`; do id ${i} | awk '/uid/{ print $0}' ; done
I end up getting all of the wbinfo -u results, as if I had run it by itself.
Is there a way to just grep/awk/sed the results that have uid at the beginning? I apologize for not showing the output of what I ran; this system isn't connected.
Ok, I think I figured it out. When I run:
for i in `wbinfo -u`; do id ${i} ; done
it outputs the errors for accounts without uid/gid as well as the successful results for accounts with uid/gid, but when I redirect the output of the command to /tmp/test:
for i in `wbinfo -u`; do id ${i} ; done > /tmp/test
it only outputs the successful results, which is what I needed. So I guess id, when redirected, only shows successful attempts. Go figure. Thanks guys.
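The likely mechanism behind that: id writes successful lookups to stdout but its "no such user" complaints to stderr, and > /tmp/test redirects only stdout. A small sketch (the bad username is made up):

```shell
#!/bin/sh
out=$(mktemp); err=$(mktemp)

# id reports unknown users on stderr, not stdout, so a plain stdout
# redirection silently drops the failures from the file.
id root >> "$out" 2>> "$err" || true
id no-such-user-xyz >> "$out" 2>> "$err" || true

grep uid= "$out"    # only the successful lookup landed here
cat "$err"          # the 'no such user' message landed here
```

Redirecting both streams with > /tmp/test 2>&1 would capture the errors too.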

Mail command executing commands inside of string passed in as message body

I've got a script that checks whether a process is running, using ps -ef and some grep. If the process is running, it does nothing. If the process isn't running, it restarts the process and then sends an email to me stating that the process died.
This script currently runs every 5 minutes from an admin account's crontab.
Pseudocode + problem code:
#!/bin/sh
# declare a ton of environment variables.
ATONOFVARS=/lots/of/qualified/paths
# declare relevant logging functions
loggingFunction()
# declare failure message to pass into mail (real code)
PAGEMESSAGE="MyServer: process not found in ps -ef. Attempting to Restart... Please check log file in /tmp"
PAGESUBJECT="ProcessHealthCheck.sh - MyServer: process not found in ps -ef"
EMAILLIST="my.email@mycorp.com"
# do the actual check (back to pseudocode)
if [ $(ps -ef | grep process | grep adminAccount | grep -v grep | wc -l) -eq 0 ]
then
    restartProcess()
    FAILURE=1
fi
# log the failure and send an email to me (real code)
if [ $FAILURE -eq 1 ]
then
logMsg ""
logMsg "The process is not detected in ps -ef!"
logMsg "Restarting..."
logMsg ""
#send emails out
echo $PAGEMESSAGE | mail -s $PAGESUBJECT $EMAILLIST
fi
To test this, I deliberately killed the PID of my process I want monitored, so this cronjob would have to restart it, I also wanted to test the email capability. The script restarts my process exactly like I want it, and I get the email.
However, the email says in the message body: "MyServer: process not found in the entire listing of ps -ef. Attempting to Restart... Please check log file in /tmp". Also, the addresses the email was sent to were lines from ps -ef, so my email had 2000 recipients. Sample addresses: grep@myServer.mydomain.com, process@myServer.mydomain.com
Does anyone know what's going on? Why is mail executing a command that is in a string? Since finding this out, I have changed the string to remove any possibility of a Unix command.
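A hedged guess at the mechanism, assuming the real string contained backticks around ps -ef (double quotes do not suppress backtick command substitution) and that the unquoted $PAGESUBJECT was split into words, all but the first of which mail treats as extra recipients:

```shell
#!/bin/sh
PAGESUBJECT='ProcessHealthCheck.sh - MyServer: process not found in ps -ef'

# Unquoted, the shell splits the subject on whitespace; mail takes only
# the first word after -s as the subject, and the local mailer expands
# the leftover words into addresses like grep@myServer.mydomain.com.
set -- $PAGESUBJECT
echo "unquoted: $# arguments"

# Quoted, the subject stays a single argument.
set -- "$PAGESUBJECT"
echo "quoted: $# arguments"
```

Defining the strings in single quotes and quoting every expansion, as in echo "$PAGEMESSAGE" | mail -s "$PAGESUBJECT" "$EMAILLIST", would prevent both effects.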

Shell script runs from command line, not cron

I have a script that updates a server with some stats once per day. The script works as intended when running from command line, but when running from cron some of the variables are not passed to curl.
Here is an example of the code:
#!/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
/bin/sh /etc/profile
MACADDR=$(ifconfig en0 | grep ether | awk '{print $2}')
DISKUSED=$(df / | awk '{print $3}' | tail -n1)
DISKSIZE=$(df / | awk '{print $2}' | tail -n1)
# HTTP GET PARAMS
GET_DELIM="&"
GET_MAC="macaddr"
GET_DS="disk_size"
GET_DU="disk_used"
# Put together the query
QUERY1=$GET_MAC=$MACADDR$GET_DELIM$GET_DS=$DISKSIZE$GET_DELIM$GET_DU=$DISKUSED
curl http://192.168.100.150/status.php?$QUERY1
The result in the cron job is http://192.168.100.150/status.php?macaddr=&disk_size=&disk_used=
I am not sure if it is some problem with the variables, or possibly with awk trying to parse data with no terminal size specified, etc.
Any help is appreciated.
When you're running into problems like this, it's almost always an environment issue.
Dump the output of env to a file and inspect it. You can also run your script with a top line of
#!/bin/sh -x
to see what's happening to all the variables. You might want to use a wrapper script so you can redirect the output this provides for analysis.
The very first command in your script, ifconfig, is found at /sbin/ifconfig on a Mac, and the default PATH for cron jobs is just /usr/bin:/bin. That is probably why the rest of your commands are failing as well.
It is better to set the PATH manually at the top of your script. Something like:
export PATH=$PATH:/sbin
One problem I've run into with cron is that variables you take for granted do not exist; the main one is the PATH variable.
Echo what you have set as your path when running from the command line, and put that at the top of your script (or at the top of the crontab).
Alternatively, specify the full path to each command - ifconfig, awk, grep, etc.
I would guess that will fix the problem.
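A hedged sketch combining these suggestions; the schedule and script path are placeholders:

```shell
#!/bin/sh
# Cron's default PATH is typically just /usr/bin:/bin, so add /sbin and
# /usr/sbin explicitly at the top of the script so ifconfig resolves.
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH

# Alternatively, set PATH in the crontab itself, above the job line:
#   PATH=/bin:/sbin:/usr/bin:/usr/sbin
#   0 2 * * * /path/to/update-stats.sh >> /tmp/update-stats.log 2>&1
```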
