I am trying to retrieve the start time of each process in seconds, but I am only getting the day of the week instead of the complete date and time. Below is what I have:
ps --user <user Name> -o uid,pid,lstart,cmd:50 --no-heading |
tail -n +2 |
while read PROC_UID PROC_PID PROC_LSTART PROC_CMD; do
echo $PROC_LSTART
done
Thu
Tue
Fri
Thu
Thu
While lstart should give me something like:
Thu Jan 26 09:00:21 2017
The "read" command reads a space character as a field delimiter, so it is reading the lstart output as five separate fields, not a single field. Try this:
ps --user <user Name> -o uid,pid,lstart,cmd:50 --no-heading | tail -n +2 |
while read PROC_UID PROC_PID PROC_L1 PROC_L2 PROC_L3 PROC_L4 PROC_L5 PROC_CMD; do
echo $PROC_L1 $PROC_L2 $PROC_L3 $PROC_L4 $PROC_L5
done
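As an aside: since the goal was the time in seconds since each process started, Linux procps ps also offers an etimes output field that reports elapsed time in whole seconds, which avoids parsing lstart entirely. A minimal sketch, assuming a procps-ng ps that supports etimes:
ps --user <user Name> -o uid,pid,etimes,cmd:50 --no-heading |
while read PROC_UID PROC_PID PROC_ETIMES PROC_CMD; do
    echo "PID $PROC_PID started $PROC_ETIMES seconds ago"
done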
I have the output below and I need to get the time difference in seconds.
------------------------------
Wed Nov 23 15:09:20 2016
------------------------------
Wed Nov 23 15:27:47 2016
------------------------------
Generally the month will be the same in both cases, so we can ignore it; the same goes for the year. I may get different values for the day of the week, and certainly for the day of the month. The difference will definitely show up in the seconds and minutes, and might also reach the hours...
I tried some awk commands and cutting on :, but I am still having issues.
Thanks in advance! Any help is appreciated!
My first Perl script ever:
# extract two dates and calculate difference in s
# http://stackoverflow.com/questions/40781429/get-the-time-difference-in-seconds/
#
# cat time_diff.txt | grep -e "20[0-2][0-9]" | perl time_difference.pl
use Date::Parse;
$date_str1 = <STDIN>;
$date_str2 = <STDIN>;
$date1 = str2time($date_str1);
$date2 = str2time($date_str2);
print $date2-$date1;
print "\n";
Too bad you cannot use date -d; I was proud of this one-liner:
grep -e "20[0-2][0-9]" time_diff.txt | xargs -I{} date -d{} +%s | (read -d "\n" t1 t2; echo $t2-$t1 | bc)
Tested with bash and zsh on Linux Mint 17.3
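For reference, here is the same computation written out step by step with GNU date, using the two timestamps from the question:
t1=$(date -d "Wed Nov 23 15:09:20 2016" +%s)
t2=$(date -d "Wed Nov 23 15:27:47 2016" +%s)
echo $(( t2 - t1 ))    # prints 1107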
I have a Unix shell script like the one below. I want to prepend a timestamp to every line of out.log. The general solution is to create another script, preappend.sh, and run it like this:
(./a.sh 2>&1 ) | ./b.sh > out.log
However, the original shell script has the line exec 2>out.log (I have commented it out below for my testing). In real life this line is not commented out. Could someone show me how to prepend the timestamp in out.log when an exec 2> redirection is in place?
benny
------ my script a.sh ---------
#!/bin/sh
#exec 2>out.log
set -x
echo 'hello world'
sleep 2
echo 'you rocks'
------end---------
---- preappend.sh ---
#!/bin/bash
while read line ; do
echo "$(date '+%Y%m%d %H:%M:%S'): ${line}"
done
-------end------------
Does this address the problem?
origScript.sh 2>&1 | awk '{ printf strftime() " " $0 "\n" }'
We can run a small test to check that this works:
while [ 1 ]
do
date
(>&2 echo "error")
sleep 1
done 2>&1 | awk '{ printf strftime() " " $0 "\n" }'
It returns something like this:
Tue Sep 20 19:11:43 UTC 2016 Tue Sep 20 19:11:43 UTC 2016
Tue Sep 20 19:11:43 UTC 2016 error
Tue Sep 20 19:11:44 UTC 2016 Tue Sep 20 19:11:44 UTC 2016
Tue Sep 20 19:11:44 UTC 2016 error
...
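Coming back to the original exec 2>out.log question: if the redirection has to stay inside a.sh, one option is to point stderr at a process substitution that runs the timestamping loop. This is only a sketch, and it assumes the script can be switched from /bin/sh to bash, since process substitution is not POSIX:
#!/bin/bash
# stderr now flows through the timestamper into out.log
exec 2> >(while IFS= read -r line; do
    echo "$(date '+%Y%m%d %H:%M:%S'): ${line}"
done > out.log)
set -x
echo 'hello world'
sleep 2
echo 'you rocks'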
I have a number of files of the form foo_[SECONDS.MILLISECONDS]_bar.tar.gz, and for each file I would like to get a datetime value (YYYYMMDDHHMMSS).
So far I have
ls -1 /filestore/*.tar.gz | cut -d _ -f 2 | date -f -
But this errors along the lines of
date: invalid date '1467535262.712041352'
How should a bash pipeline of epoch values be converted into a datetime string?
MWE
mkdir tmpBLAH
touch tmpBLAH/foo_1467483118.640314986_bar.tar.gz
touch tmpBLAH/foo_1467535262.712041352_bar.tar.gz
ls -1 tmpBLAH/*.tar.gz | cut -d _ -f 2 | date -f -
To convert an epoch time to a datetime, please try the following command:
date -d @1346338800 +'%Y%m%d%H%M%S'
1346338800 is an epoch time.
For your case, the command line is as follows:
echo 1467535262.712041352 | cut -d '.' -f 1 | xargs -I{} date -d @{} +'%Y%m%d%H%M%S'
you will get:
20160703174102
Something like this?
for f in /filestore/*.tar.gz; do
    epoch=${f#*_}                          # strip the path and "foo_" prefix
    date -d "@${epoch%%.*}" +%Y%m%d%H%M%S  # drop the fractional seconds and convert
done
The syntax of the date command differs between platforms; I have assumed GNU date, as commonly found on Linux. (You could probably use date -f if you add the @ before each timestamp, but I am not in a place where I can test this right now.) Running a loop makes some things easier, such as printing both the input file name and the converted date, whereas otherwise a pipeline would be the most efficient and idiomatic solution.
As an aside, basically never use ls in scripts.
First, the -1 option to ls is useless: ls prints its output one file per line by default; it only pretty-prints in columns when the output is a terminal (not a pipe). You can check that by running ls | cat.
Then, date safely converts epoch timestamps only if they are prefixed with an @.
% date -d 0
Sun Jul 3 00:00:00 CEST 2016
% LANG=C date -d @0
Thu Jan 1 01:00:00 CET 1970
% date -d 12345
date: invalid date '12345'
% date -d @12345
Thu Jan 1 04:25:45 CET 1970
Which gives:
printf "%s\n" tmpBLAH/foo_*_bar.tar.gz | sed 's/.*foo_/#/; s/_bar.*//' | date -f -
You can do:
for i in foo_*_bar.tar.gz; do date -d "@$(cut -d_ -f2 <<<"$i")" '+%Y%m%d%H%M%S'; done
The epoch time is provided with -d @<time>, and the desired format is '+%Y%m%d%H%M%S'.
Example:
% for i in foo_*_bar.tar.gz; do date -d "@$(cut -d_ -f2 <<<"$i")" '+%Y%m%d%H%M%S'; done
20160703001158
20160703144102
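If your bash is 4.2 or newer, printf's %(...)T format can do the conversion without calling date at all; a small sketch under that assumption:
for f in tmpBLAH/foo_*_bar.tar.gz; do
    epoch=${f#*_}                               # strip everything up to the first "_"
    printf '%(%Y%m%d%H%M%S)T\n' "${epoch%%.*}"  # drop the fractional part and format
done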
So basically I want to merge a couple of CSV files. I'm using the following command to do that:
paste -d , *.csv > final.txt
This has worked for me in the past; however, this time it doesn't. It appends the data side by side instead of one below the other. For instance, two files that contain records in the following format
CreatedAt ID
Mon Jul 07 20:43:47 +0000 2014 4.86249E+17
Mon Jul 07 19:58:29 +0000 2014 4.86238E+17
Mon Jul 07 19:42:33 +0000 2014 4.86234E+17
When merged, they give
CreatedAt ID CreatedAt ID
Mon Jul 07 20:43:47 +0000 2014 4.86249E+17 Mon Jul 07 18:25:53 +0000 2014 4.86215E+17
Mon Jul 07 19:58:29 +0000 2014 4.86238E+17 Mon Jul 07 17:19:18 +0000 2014 4.86198E+17
Mon Jul 07 19:42:33 +0000 2014 4.86234E+17 Mon Jul 07 15:45:13 +0000 2014 4.86174E+17
Mon Jul 07 15:34:13 +0000 2014 4.86176E+17
Would anyone know the reason behind this, or what I can do to force the records to merge one below the other?
Assuming that all the CSV files have the same format and all start with the same header, you can write a little script like the following to append all the files into one, copying the header only once.
#!/bin/bash
OutFileName="X.csv" # Fix the output name
i=0 # Reset a counter
for filename in ./*.csv; do
if [ "$filename" != "$OutFileName" ] ; # Avoid recursion
then
if [[ $i -eq 0 ]] ; then
head -1 "$filename" > "$OutFileName" # Copy header if it is the first file
fi
tail -n +2 "$filename" >> "$OutFileName" # Append from the 2nd line each file
i=$(( $i + 1 )) # Increase the counter
fi
done
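To use it, save the script in the directory that holds the CSV files (the name merge_csv.sh below is just an assumed example), make it executable, and run it there:
chmod +x merge_csv.sh
./merge_csv.sh   # produces X.csv with one header and all data rows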
Notes:
The head -1 (or head -n 1) command prints the first line of a file (the head).
The tail -n +2 command prints the tail of a file starting from line number 2 (+2).
The test [ ... ] is used to exclude the output file from the input list.
The output file is rewritten each time.
The command cat a.csv b.csv > X.csv can simply be used to concatenate a.csv and b.csv into a single file (but it copies the header twice).
The paste command pastes the files side by side, one next to the other. If a file contains blank lines, you can get output like the one you reported above.
The -d , option asks paste to separate fields with a comma, but that does not match the format of the files you reported above.
The cat command instead concatenates files and prints them to standard output, which means it writes one file after the other.
Refer to man head or man tail for the syntax of the individual options (some versions allow head -1, while others require head -n 1).
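To see the difference between cat and paste concretely, here is a tiny demo with two made-up files:
printf 'CreatedAt,ID\na1,1\n' > a.csv
printf 'CreatedAt,ID\nb1,2\n' > b.csv
cat a.csv b.csv          # four lines, one file after the other
paste -d , a.csv b.csv   # two lines: "CreatedAt,ID,CreatedAt,ID" and "a1,1,b1,2"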
As a simpler alternative, save this as combine_csv.sh:
#!/bin/bash
# print the header of the first file, then the data rows of every file
{ head -n 1 "$1" && tail -q -n +2 "$@"; }
can be used like this:
pattern="my*filenames*.csv"
combine_csv.sh ${pattern} > result.csv
Thank you so much @wahwahwah.
I used your script to make a Nautilus action, but it works correctly only with these changes:
#!/bin/bash
for last; do true; done # cycle through the arguments so $last ends up holding the last one (the target directory)
OutFileName=$last/RESULT_`date +"%d-%m-%Y"`.csv # Fix the output name
i=0 # Reset a counter
for filename in "$last/"*".csv"; do
if [ "$filename" != "$OutFileName" ] ; # Avoid recursion
then
if [[ $i -eq 0 ]] ; then
head -1 "$filename" > "$OutFileName" # Copy header if it is the first file
fi
tail -n +2 "$filename" >> "$OutFileName" # Append from the 2nd line each file
i=$(( $i + 1 )) # Increase the counter
fi
done
In bash I can't think of a good way to do this, but I only want to see the past 30 days of entries in /var/log/messages*. The issue to me is how to do that with just the month and day. For example:
Sep 2 14:26:13 <SOME ENTRY>
Sep 4 14:26:13 <SOME ENTRY>
Sep 9 14:26:13 <SOME ENTRY>
Sep 14 14:26:13 <SOME ENTRY>
etc..
Any ideas? HELP! ha ha
I think this is close. This will give you a sorted list of entries (most recent first) through the start of August. Depending on when you run it, it will give you as much as ~60 days instead of 30. On average, I suppose it would give you about 45. The other downside is that you need to adjust the grep statement at the end of the pipe as the date advances.
sort -k1Mr -k2nr <file> | grep -E "Aug|Sep"
a little late but...
egrep "^$(date '+%b %e' -2d)" /var/log/messages
-- This works, but it's ugly --
-- Print only the matches for the date in each loop iteration (i.e. the last X days) --
for (( i=0; i<=${MAXSEARCHDAYS}; i++ )); do
    egrep "$(date --date "now -${i} days" +%b)" "${USBFOUND}" | grep "$(date --date "now -${i} days" +%e)" >> "${TEMPFILE}"
done
sort -k1,1M -k2,2n "${TEMPFILE}" | uniq >> "${LOGFILE}"
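A slower but more precise alternative is to parse each line's timestamp and compare it against a 30-day cutoff. This is a sketch assuming GNU date and lines that start with "Mon DD HH:MM:SS"; syslog entries carry no year, so the current year is assumed, which misbehaves across a year boundary:
cutoff=$(date -d '30 days ago' +%s)
year=$(date +%Y)
while read -r mon day time rest; do
    ts=$(date -d "$mon $day $time $year" +%s 2>/dev/null) || continue
    [ "$ts" -ge "$cutoff" ] && echo "$mon $day $time $rest"
done < /var/log/messages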