How to get the time of a process created remotely via ssh? - bash

I am currently writing a script whose purpose is to kill a process whose running time exceeds a threshold. The "killer" runs on a bunch of hosts, and the jobs are sent by a server to each host. My idea was to use 'ps' to get the running time of the job, but all it prints is
17433 17433 ? 00:00:00 python
no matter how long I wait.
I tried to find a simplified example to avoid posting all the code I wrote. Let's call S the server and H the host.
If I do the following steps:
1) ssh login@H from the server
2) python myscript.py (now logged in on the host)
3) ps -U login (still on the host)
I get the same result as the one above, 00:00:00, as far as the time is concerned.
How can I get the real execution time? When I do everything locally on my machine, it works fine.
I thank you very much for your help.
V.

Alternatively, you can look at the creation time of the pid file in /var/run, assuming your process creates one, and use the find command to see if it exceeds a certain threshold:
find /var/run/ -name "myProcess.pid" -mtime +1
This will print the filename if it meets the criteria (last modified more than 1 day ago). You probably also want to check that the process is actually running, as it may have crashed and left the pid file behind.

If you want to know how long the process has been alive, you could try
stat -c %Y /proc/`pgrep python`
which will give you the start time back in epoch seconds (the mtime of the /proc entry approximates when the process started). Alternatively, if you want to do the kill in one go, I suggest using the find mentioned above (but perhaps pointed at /proc).
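Putting the pieces above together, a minimal sketch of the "killer" could look like this. The 3600-second threshold and the target process name "python" are assumptions for illustration; the /proc mtime is only an approximation of the start time.

```shell
#!/bin/sh
# Sketch: kill the newest matching "python" process if the mtime of its
# /proc entry (an approximation of its start time) is older than a
# hypothetical 3600-second threshold.
pid=$(pgrep -n python) || exit 0       # newest matching process, if any
start=$(stat -c %Y "/proc/$pid")       # /proc/<pid> mtime in epoch seconds
now=$(date +%s)
if [ $((now - start)) -gt 3600 ]; then
    kill "$pid"
fi
```

Run per host from cron; it exits quietly when no matching process is found.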

Try this out:
ps kstart_time -ef | grep myProc.py | awk '{print $5}'
This will show the start date/time of the process myProc.py:
[ 19:27 root@host ~ ]# ps kstart_time -ef | grep "httpd\|STIME" | awk '{print $5}'
STIME
19:25
Another option is etime.
etime is the elapsed time since the process was started, in the form dd-hh:mm:ss. dd is the number of days; hh, the number of hours; mm, the number of minutes; ss, the number of seconds.
[ 19:47 root@host ~ ]# ps -eo cmd,etime
CMD ELAPSED
/bin/sh 2-16:04:45
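On Linux with modern procps, the related format specifier etimes prints the elapsed time as plain seconds, which is easier to compare against a threshold than the dd-hh:mm:ss form. A sketch, assuming a target command name of "python" and a hypothetical 3600-second limit:

```shell
#!/bin/sh
# Sketch: kill every process named "python" whose elapsed time (etimes,
# in seconds) exceeds a hypothetical 3600-second threshold.
THRESHOLD=3600
ps -eo pid=,etimes=,comm= | while read -r pid secs comm; do
    if [ "$comm" = "python" ] && [ "$secs" -gt "$THRESHOLD" ]; then
        kill "$pid"
    fi
done
```

The trailing "=" after each column name suppresses the header line, so the loop sees only data rows.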
And yet another way to do this:
Get the process pid and read off the timestamp in the corresponding subdirectory in /proc.
First, get the process pid using the ps command (ps -ef or ps aux)
Then, use the ls command to display the creation timestamp of the directory.
[ 19:57 root@host ~ ]# ls -ld /proc/1218
dr-xr-xr-x 5 jon jon 0 Sep 20 16:14 /proc/1218
You can tell from the timestamp that process 1218 began executing on Sep 20 at 16:14.

Related

PS command does not show running process

I have a very long-running shell script (it runs for more than 24 hours).
It is a very simple script: it just reads xml files from a dir and performs a sed operation on the content of each file. There are 1 million xml files in the dir.
My script name is like runDataManipulation.sh
When I run following command
ps -ef | grep "runDa*"
then sometime I see my process as
username 34535 1 48 11:42:01 - 224:22 /usr/bin/ksh ./runDataManipulation.sh
But if I run exactly same command after couple of seconds then I don't see above process at all.
Since my process is running all the time, I expect the ps command to show it all the time.
If I do a grep on the process id of my script, like
ps -ef | grep 34535
then sometime I see result like
username 34535 1 51 11:42:01 - 229:22 [ksh]
sometime I see result like
username 45678 34535 0 14:12:11 - 0:0 [sed]
My main question is: why do I not see my process when I grep for it using the script name? I am using AIX 6.1.
It looks to me like your script is spawning off another process.
If you look at the results of your ps command below, the first line shows the process id 34535; this is the main (parent) process.
username 34535 1 51 11:42:01 - 229:22 [ksh]
This process in turn fires off another process, which can be seen below; notice that the id of the parent process (34535) appears in the line: the first number is the new process's id and the second is the id of the parent that spawned it.
username 45678 34535
If you change your ps command to also match the sed command, you should always see some results while your script is still running!
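One way to sidestep the name changing between [ksh] and [sed] is to match on the PID and PPID columns of ps -ef rather than on the command name. A sketch, using the PID 34535 from the question:

```shell
# Show the process with a given pid, plus any children it spawned.
# In `ps -ef` output, column 2 is the PID and column 3 is the PPID.
ps -ef | awk -v pid=34535 '$2 == pid || $3 == pid'
```

This prints the parent script and its current sed child in one shot, whatever ps happens to display as the command name at that moment.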

Solaris logadm log rotation

I would like to understand how logadm works.
So, by looking at the online materials, I wrote a small script which redirects the date into a log file and sleeps for 1 second; this runs in an infinite loop.
#!/usr/bin/bash
while true
do
echo `date` >>/var/tmp/temp.log
sleep 1
done
After this I had executed below commands:
logadm -w /var/tmp/temp.log -s 100b
logadm -V
My intention with the above commands is that the log (/var/tmp/temp.log) should be rotated every 100 bytes.
But after setting this up, when I run the script in the background, I see the log file is not rotated.
# ls -lrth /var/tmp/temp.log*
-rw-r--r-- 1 root root 7.2K Jun 15 08:56 /var/tmp/temp.log
#
As I understand it, you have to call logadm for it to do the work, e.g. from crontab or manually, like logadm -c /var/tmp/temp.log (usually placed in crontab).
Side note: you could simply write date >> /var/tmp/temp.log without the echo.
This is not how I would normally do this; also, I think you may have misunderstood the -w option.
The -w option updates /etc/logadm.conf with the parameters on the command line, and logadm is then run at 10 minutes past 3am (on the machine I checked).
I took your script and ran it, then ran:
"logadm -s 100b /var/tmp/temp.log"
and it worked fine. Give it a try! :-)

Unix script to show the last created file, which can appear at an unknown time

I prepared a script that displays the last created file.
file_to_search=$(find /var/lib/occas/domains/domain1/servers/traffic-1/logs/ -name "traffic-1.log*" 2>/dev/null | sort -n | tail -1 | cut -f2 -d" ")
grep "Event: Invoke :" $file_to_search | awk 'BEGIN { FS = ">" } ; { print $1 }' | sort | uniq -ic >> /home/appuser/scripts/Traffic/EventInvoke_pl-1_Istanbul.txt.backup.$(date +"%Y-%m-%d")
I have the following log files in this path: /var/lib/occas/domains/domain1/servers/traffic-1/logs/, but these files are created at varying intervals. So if I put this script in crontab, for example every 5 minutes, it can sometimes show the same file, and this is not what I want. I need a script that shows the last created file as soon as that file appears. Help me, please?
10:54 traffic-1.log00023
11:01 traffic-1.log00024
11:05 traffic-1.log00025
11:06 traffic-1.log00026
11:09 traffic-1.log00027
11:18 traffic-1.log00028
11:23 traffic-1.log00029
11:34 traffic-1.log00030
11:39 traffic-1.log00031
11:40 traffic-1.log00032
How much delay between the generation of the log entry and its display would you be willing to accept? In theory, you could start the cron job every minute, but I wouldn't do this.
Much easier would be a script which runs unattended and, in a loop, repeatedly checks the last line of the log file, and if it changes, does whatever needs to be done.
There are however two issues to observe:
The first is easy: you should sleep for at least 1 or 2 seconds after each polling pass, otherwise your script will eat up a lot of system resources.
The second is a bit tricky: it could be that your script terminates for whatever reason, and if this happens, you need to have it restarted automatically. One way would be to set up a "watchdog": your script, in addition to checking the log file, touches a certain file every time it performs the check (no matter whether a new event needs to be reported or not). The watchdog, which could be a cron job running every, say, 10 minutes, would verify that your script is still alive (i.e. that the file it touches has been updated recently), and if not, would start a new copy of your script. This means you could lose a 10-minute time window, but since a crash of your script is likely a very rare event, this will hopefully not be an issue.
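The polling loop described above could be sketched as follows. The log directory and file pattern are taken from the question; the heartbeat file name is an assumption, and the real per-file processing would go where the echo is:

```shell
#!/usr/bin/bash
# Polling sketch: report each new "last created" log file as soon as it
# appears, and touch a heartbeat file for the watchdog cron job.
LOGDIR=/var/lib/occas/domains/domain1/servers/traffic-1/logs
last=""
while true; do
    # ls -t sorts by modification time, newest first
    newest=$(ls -t "$LOGDIR"/traffic-1.log* 2>/dev/null | head -1)
    if [ -n "$newest" ] && [ "$newest" != "$last" ]; then
        last=$newest
        echo "new file: $newest"          # do the real work here
    fi
    touch /var/tmp/logwatcher.heartbeat   # proves the loop is still alive
    sleep 2
done
```

The watchdog cron job would then only need to compare the heartbeat file's mtime against the current time and restart this script if it is stale.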

how can I list terminals currently in use

I want to set the terminal my script is running in as a variable in a bash shell script,
as in tty7, or pts/0 or ttyACM0 etc...
I tried printenv, sudo printenv and declare -xp,
but in the list I only saw ssh_term. But I know I have a script running in /dev/tty6,
so it isn't listing all the terminals in use, just the current terminal.
Is there a simple way to list all the shells in use?
UPDATE:
who -a seems to show all the terminals used during the current uptime.
The ones that say old are the ones where I know other scripts are running.
But what is this +/- business?
j0h - tty6 2014-05-16 07:50 old 9593
LOGIN tty1 2014-05-15 19:10 1675 id=1
j0h + tty7 2014-05-15 19:13 old 1936
If I have understood the problem correctly, you are looking for the terminals used by a particular script; if so, you can use something like:
x=($(ps aux | grep script_name | awk '{print $7}')) #you may have to check which column to filter
All the terminals used by the script would then be in the array x, and you can use
for i in ${x[*]}
do
echo $i
done
to print the individual values.
To obtain the current terminal that you are using, there is the command
tty
which prints the file name of the terminal connected to standard input, e.g. /dev/pts/51.
To see all the shells in use you can use w or who.
who -a and who -p should give you some more information.
Read the man page for a quick view of the possibilities. (You can select the user...)
Update:
Let's say your script is called MyScript.sh. If you add as the first line
#!/bin/bash
make it executable with
chmod u+x MyScript.sh
and execute it with ./MyScript.sh, then later you can search for it directly with
pgrep -fl MyScript.sh
(It will return the pids of the matching processes.)
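Combining pgrep with ps gives the controlling terminal of each matching process directly. A sketch, assuming procps ps and the hypothetical script name MyScript.sh from above:

```shell
# Print the controlling terminal of every process whose full command
# line matches MyScript.sh; ps shows "?" for processes with no terminal.
pgrep -f MyScript.sh | while read -r pid; do
    ps -o tty= -p "$pid"
done
```

For the asker's original goal, the current script's own terminal is simply MYTTY=$(tty), with no process listing needed.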

Is there a simple and robust way to create a "singleton" process in Bash?

Environment: Recent Ubuntu, non-standard packages are OK as long as they are not too exotic.
I have a data processor bash script that processes data from stdin:
$ cat data | process_stdin.sh
I can change the script.
I have a legacy data producer system (that I can not change) that logs in to a machine via SSH and calls the script, piping it data. Pseudocode:
foo@producer $ cat data | ssh foo@processor ./process_stdin.sh
The legacy system launches ./process_stdin.sh a zillion times per day.
I would like to keep ./process_stdin.sh running indefinitely at processor machine, to get rid of process launch overhead. Legacy producer will call some kind of wrapper that will somehow pipe the data to the actual processor process.
Is there a robust unix-way way to do what I want with minimum code? I do not want to change ./process_stdin.sh (much) — the full rewrite is already scheduled, but, alas, not soon enough — and I can not change data producer.
A (not so) dirty hack could be the following:
As foo on processor, create a fifo and run a tail -f redirected to stdin of process_stdin.sh, possibly in an infinite loop:
foo@processor:~$ mkfifo process_fifo
foo@processor:~$ while true; do tail -f process_fifo | process_stdin.sh; done
Don't worry: at this point process_stdin.sh is just waiting for some stuff to arrive on the fifo process_fifo. The infinite loop is just there in case something goes wrong, so that the pipeline is relaunched.
Then you can send your data thus:
foo@producer:~$ cat data | ssh foo@processor "cat > process_fifo"
Hope this will give you some ideas!
flock does the job.
The same command is run three times in quick succession; each invocation waits until the lock is free.
# flock /var/run/mylock -c 'sleep 5 && date' &
[1] 21623
# flock /var/run/mylock -c 'sleep 5 && date' &
[2] 21626
# flock /var/run/mylock -c 'sleep 5 && date' &
[3] 21627
# Fri Jan 6 12:09:14 UTC 2017
Fri Jan 6 12:09:19 UTC 2017
Fri Jan 6 12:09:24 UTC 2017
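A wrapper built on the same idea could look like this sketch; the lock file path is an assumption. Each SSH invocation from the producer would call the wrapper, and concurrent calls simply queue on the lock, so only one process_stdin.sh runs at a time:

```shell
#!/bin/bash
# Singleton wrapper sketch: serialize invocations of process_stdin.sh
# via an flock(2) lock held on file descriptor 9.
exec 9>/var/tmp/process_stdin.lock   # open (or create) the lock file on fd 9
flock 9                              # block until we hold the exclusive lock
./process_stdin.sh                   # stdin passes straight through; the lock
                                     # is released when fd 9 closes on exit
```

Note this serializes the launches rather than eliminating them; to truly avoid the per-call startup cost, the fifo approach above keeps one long-lived process_stdin.sh instead.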
