When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours, instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is just the way Slurm displays times. Elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert it into a different format. Or, if you know the time limit is no more than a day, just set the time limit to 23:59:00 by using --time=1439.
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
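For the general case, a minimal wrapper sketch (an illustration, not a built-in Slurm option) could post-process the text output with awk, converting a [D-]HH:MM:SS limit in field 7 to whole hours. It assumes job names contain no spaces, does not preserve squeue's column alignment, and passes values such as UNLIMITED through unchanged:
#!/bin/bash
# Hypothetical wrapper: rewrite the time-limit column (field 7 with this format
# string) as a whole number of hours.
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me |
awk 'NR == 1 { print; next }                  # header line unchanged
{
    limit = $7; days = 0
    if (limit ~ /-/) {                        # split off a leading "D-" if present
        split(limit, d, "-"); days = d[1]; limit = d[2]
    }
    n = split(limit, t, ":")                  # t[1] is the hours part when n == 3
    if (n == 3) $7 = days * 24 + t[1]         # other formats (e.g. UNLIMITED) left as-is
    print
}'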
I want to know the time elapsed since a systemd service was started. Using --property=ActiveEnterTimestamp I can see the time it started, and I would like to compare it with the current time and get a value in seconds or minutes. How can I achieve this with bash?
If I use this solution I get a string, but I cannot actually get a time object to base the decision on. Any help on this would be appreciated.
You could use GNU date to convert the ActiveEnterTimestamp value to seconds-since-the-epoch, then subtract that from the current seconds-since-the-epoch to get the running time in seconds.
servicestartsec=$(date -d "$(systemctl show --property=ActiveEnterTimestamp your-service-here | cut -d= -f2)" +%s)
serviceelapsedsec=$(( $(date +%s) - servicestartsec))
Substitute "your-service-here" for your actual service name.
The first line assigns the start time in seconds by extracting the date portion of systemctl show --property=ActiveEnterTimestamp... (using cut to extract the second =-delimited field) and then passing it to GNU date and asking for output in seconds-since-the-epoch.
The second line simply subtracts that start time from the current time to get an elapsed time in seconds. Divide that as needed to get elapsed minutes, hours, etc.
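For example, to express the result in minutes, or as a rough hours:minutes:seconds breakdown, shell arithmetic on the variable above is enough:
# Elapsed time in whole minutes (integer division).
serviceelapsedmin=$(( serviceelapsedsec / 60 ))

# Or a rough HH:MM:SS breakdown of the elapsed time.
printf '%02d:%02d:%02d\n' \
    $(( serviceelapsedsec / 3600 )) \
    $(( (serviceelapsedsec % 3600) / 60 )) \
    $(( serviceelapsedsec % 60 ))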
I am using NetScaler FreeBSD, which recognizes many of the UNIX-like commands: grep, awk, crontab, etc.
I run the following command to get the number of connected users that we have on the system
#> nsconmsg -g aaa_cur_ica_conn -d stats
OUTPUT (numbered lines):
Line1: Displaying current counter value information
Line2: NetScaler V20 Performance Data
Line3: NetScaler NS11.1: Build 63.9.nc, Date: Oct 11 2019, 06:17:35
Line4:
Line5: reltime:mili second between two records Sun Jun 28 23:12:15 2020
Line6: Index reltime counter-value symbol-name&device-no
Line7: 1 2675410 605 aaa_cur_ica_conn
…
…
From the above output I only need the number of connected users, shown in Line 7, 3rd column (605 to be precise), along with the hostname and the time (of the running script).
Now, to extract this important 3rd-column number, i.e. 605, along with the hostname and the time of data collection, I wrote the following script:
printf '%s - %s - %s\n' "$(hostname)" "$(date '+%H:%M')" "$(nsconmsg -g aaa_cur_ica_conn -d stats | grep aaa_cur_ica_conn | awk '{print $3}')"
The result is perfect, showing hostname, time, and the number of connected users as follows:
Hostname - 09:00 - 605
Now can anyone please shed light on how I can:
Run this script every day, 5am to 5pm (12 hours)?
Each time scripts runs - append a file on a remote Unix share with the output?
I appreciate this might be a bit of a challenge... however I would be grateful for any bash script wizards out there who can create magic!
Thanks in advance!
I would suggest a quick look into the FreeBSD Handbook or For People New to Both FreeBSD and UNIX® so that you can get familiar with the operating system and with tools that could help you better achieve what you want.
For example, there is a utility/command named cron
The software utility cron is a time-based job scheduler in Unix-like computer operating systems.
For example, to run something every minute, every day between 5am and 5pm, you could use something like:
* 05-17 * * * command
Try more options here: https://crontab.guru/#*_05-17_*_*_*.
There are more tools for scheduling commands, for example at (https://en.wikipedia.org/wiki/At_(command)), but this is something you need to evaluate and read more about.
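Tying the schedule to your script, a crontab entry could look roughly like the sketch below. The script path and mount point are assumptions: the remote share would have to be mounted locally (e.g. via NFS) for a plain >> append to work; if it is not, you would need to transfer the file another way (e.g. scp).
# Hypothetical crontab entry: every minute from 05:00 to 17:59, run the
# collection script and append its one-line output to a file on a share
# assumed to be mounted at /mnt/remote-share.
* 05-17 * * * /path/to/ica_conn_report.sh >> /mnt/remote-share/ica_conn.log 2>&1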
Now, regarding the command you are using to get the "number of connected users": you could avoid the grep and just use awk, for example:
awk '/aaa_cur_ica_conn/ {print $3}'
This will print only column 3 if the line contains aaa_cur_ica_conn, but as before I invite you to read more about the topic so that you can get a better overview and better understand the commands.
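Applied to your printf one-liner, the grep | awk pair collapses into that single awk call, so the body of the script called from cron could simply be:
printf '%s - %s - %s\n' "$(hostname)" "$(date '+%H:%M')" "$(nsconmsg -g aaa_cur_ica_conn -d stats | awk '/aaa_cur_ica_conn/ {print $3}')"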
Last but not least, check this link: How do I ask a good question? The better you format and elaborate your question, the easier it is for others to give an answer.
I am getting a GC overhead issue when I run a count for a date range, as it has a huge amount of data to pull, so I need logic to run the query for a specific date range (for example, to run the query for every 30 days without missing any data) and sum it all up at the end.
I have tried running the query for every 30 days, but with this approach there is a chance of missing the data count for a few days.
Currently, I wrote the code below and am able to run the query successfully, but it is a very time-consuming process, so I need help changing this code to run per month or per some specific date range instead of per day as below.
while [ ${PART_START_DATE} -le ${RUN_START_DAY} ]
do
    fb_TEST=$(($fb_TEST+$(hive -S -e "use ${DATABASE};set hive.cli.print.header=false;select count(*) from fb_wrk_tab where date = '${PART_START_DATE}';")))
    PART_START_DATE=`date -d "${PART_START_DATE} 1 days" +%Y%m%d`
    echo "fbwrk_TEST count is"$fb_TEST >> ${LOG_FILE}
done
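One way to cut down the number of hive invocations, without missing or double-counting any day, is to count a whole window per query with a BETWEEN on the date column and advance the window start to the day after the previous window's end. A rough sketch along those lines (the 30-day window size is arbitrary, GNU date is assumed as in your script, and the date column is assumed to hold YYYYMMDD strings so that BETWEEN compares correctly):
WINDOW_DAYS=30
while [ ${PART_START_DATE} -le ${RUN_START_DAY} ]
do
    # End of this window: start + (WINDOW_DAYS - 1) days, capped at RUN_START_DAY
    PART_END_DATE=$(date -d "${PART_START_DATE} $((WINDOW_DAYS - 1)) days" +%Y%m%d)
    if [ ${PART_END_DATE} -gt ${RUN_START_DAY} ]; then
        PART_END_DATE=${RUN_START_DAY}
    fi

    # One count for the whole window instead of one query per day
    window_count=$(hive -S -e "use ${DATABASE}; set hive.cli.print.header=false;
        select count(*) from fb_wrk_tab
        where date between '${PART_START_DATE}' and '${PART_END_DATE}';")
    fb_TEST=$((fb_TEST + window_count))
    echo "fbwrk_TEST count is ${fb_TEST}" >> ${LOG_FILE}

    # Next window starts the day after this one ends, so no day is skipped
    PART_START_DATE=$(date -d "${PART_END_DATE} 1 days" +%Y%m%d)
done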
I'm trying to find the best and most efficient way to resume reading a file from a given point.
The given file is being written frequently (this is a log file).
This file is rotated on a daily basis.
In the log file I'm looking for the pattern 'slow transaction'. Such lines end with a number in parentheses. I want the sum of those numbers.
Example of log line:
Jun 24 2015 10:00:00 slow transaction (5)
Jun 24 2015 10:00:06 slow transaction (1)
This is the easy part, which I could do with an awk command to get a total of 6 with the above example.
Now my challenge is that I want to get the values from this file on a regular basis. I've an external system that polls a custom OID using SNMP. When hitting this OID the Linux host runs a couple of basic commands.
I want this SNMP polling event to get the number of events since the last polling only. I don't want to have the total every time, just the total of the newly added lines.
Just to mention that only bash can be used, or basic commands such as awk, sed, tail, etc. No Perl or other advanced programming languages.
I hope my description is clear enough. Apologies if this is a duplicate; I did some research before posting but did not find anything that precisely corresponds to my need.
Thank you for any assistance.
In addition to the methods in the comment link, you can also simply use dd and stat: read the logfile size, save it, sleep 300, then check the logfile size again. If the file size has changed, skip over the old information with dd and read only the new information.
Note: you can add a test to handle the case where the logfile is deleted and then restarted with size 0 (e.g. if ((newsize < size)), then read the whole file).
Here is a short example with 5 minute intervals:
#!/bin/bash
lfn=${1:-/path/to/logfile}
size=$(stat -c "%s" "$lfn") ## save original log size
while :; do
    newsize=$(stat -c "%s" "$lfn")   ## get new log size
    if ((size != newsize)); then     ## if change, use new info
        ## use dd to skip over existing text to new text
        newtext=$(dd if="$lfn" bs="$size" skip=1 2>/dev/null)
        ## process newtext however you need
        printf "\nnewtext:\n\n%s\n" "$newtext"
        size=$((newsize))            ## update size to newsize
    fi
    sleep 300
done
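For the 'slow transaction' case from the question, the "process newtext however you need" step could, for instance, sum the parenthesised numbers from just the newly added lines (a sketch; it assumes the count is always the last field of a matching line):
# Inside the if-block, in place of the sample printf:
count=$(printf '%s\n' "$newtext" |
    awk '/slow transaction/ { gsub(/[()]/, "", $NF); sum += $NF } END { print sum + 0 }')
printf 'slow transactions since last poll: %s\n' "$count"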
I need a Bash script to accept 1 argument representing a time in hhmmss format, and from that derive a second time 3 minutes before that.
I've been trying to use date -d:
#! /bin/bash
DATE=`date +%Y%m%d`
TIME=$1
NEWTIME=`date -d "$DATE $TIME - 3 minutes" +%H%M%S`
echo $NEWTIME
In action:
$ ./myscript.sh 123456
invalid date `20141022 123456 - 3 minutes'
It seems the problem is with the 6-character time format, because 4 characters (e.g. 1234) works. The subtraction of the 3 minutes is not the problem, because I get the same error when I remove it.
It has occurred to me I could parse the time into a more palatable format before sending it to date. I tried inserting delimiters by adding this line:
TIME=${TIME:0:2}:${TIME:2:2}:${TIME:4:2}
It accepted that format but the answer to the - 3 minutes part was inexplicably very wrong (it subtracted 2 hours and 1 minute):
$ ./myscript.sh 123456
103356
Vexing.
It has also occurred to me that I might be able to provide date with an input format, like strptime which I'm familiar with from Python. I've found references to strptime in the context of Bash but I've been unable to get it to do anything.
Does anyone have any suggestions on getting the hhmmss time-string to work? Any help is much appreciated.
FYI: I'm trying to avoid changing the 6 character input format because that would involve changing other scripts as well as getting certain human users to alter long-entrenched habits. I'm also trying to avoid outsourcing this task to another language. (I could easily do this in Python). I want a Bash solution to this problem, if there is one.
TIME=093000
TIME=${TIME:0:2}:${TIME:2:2}:${TIME:4:2} # your line
date -d "2014-10-20 $TIME 3 mins ago" +%H%M%S
Output:
092700
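Folded back into the original script, a sketch would keep the 6-character input and only insert the colons and rephrase the offset as "3 mins ago", which GNU date treats unambiguously as a relative item (the bare "- 3" appears to be mis-parsed, which explains the strange result above):
#!/bin/bash
# Usage: ./myscript.sh hhmmss     e.g. ./myscript.sh 123456 -> 123156
TIME=$1
TIME=${TIME:0:2}:${TIME:2:2}:${TIME:4:2}   # 123456 -> 12:34:56
DATE=$(date +%Y-%m-%d)
NEWTIME=$(date -d "$DATE $TIME 3 mins ago" +%H%M%S)
echo "$NEWTIME"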