I've been using splitmuxsink to split the recordings of a live video stream by length or with a "split now" command. The resulting video files are all timestamped with the "global time" from when the pipeline started:
File1 00:00 - 05:00
File2 05:00 - 10:00
File3 10:00 - 15:00
This causes issues when playing back the files in certain video players that expect timestamps starting at 0.
What I'd like to do is reset the timestamps every time the recording is split and a new file is started:
File1 00:00 - 05:00
File2 00:00 - 05:00
File3 00:00 - 05:00
1633036680022 is an epoch value I got from Elasticsearch.
To convert this epoch to a human-readable date, I used epochconverter, and I also tried a bash command in my terminal:
$ date -d @1633036680022
Tuesday 15 November 53718 05:30:22 PM IST
The terminal output says the year 53718 because the epoch '1633036680022' is in milliseconds.
All I want is the epoch in seconds.
You can divide by 1000 and convert the result to a timestamp:
date -d @"$(echo "1633036680022/1000" | bc)"
Strip milliseconds with bash (output only first 10 digits):
x="1633036680022"
date -d "@${x:0:10}"
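Both answers give the same result; with bash's built-in arithmetic you can also skip bc entirely (a sketch using GNU date, shown in UTC for reproducibility):

```shell
ms=1633036680022

# Integer division truncates the milliseconds part.
date -u -d "@$(( ms / 1000 ))"    # Thu Sep 30 21:18:00 UTC 2021

# Equivalent: keep only the first 10 digits.
date -u -d "@${ms:0:10}"
```
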
I am using SunOS 5.10. I would like the contents of an "ls -l" to be directed into a file that can be read into a database. However the time format varies. Below is a sample of the output of an ls -l. Why do the files ls_txt.sh and nohup.out have a timestamp and not a year value?
-rw-rw-r-- 1 gilmog other 57 Jul 25 2017 fnd2.txt
-rw-rw-r-- 1 gilmog other 702 Jan 24 2018 handySh
-rw-rw-r-- 1 gilmog other 189 Nov 7 23:20 ls_txt.sh
-rw------- 1 gilmog other 3915 Sep 12 03:58 nohup.out
-rw-rw-r-- 1 gilmog other 1655 Jan 24 2018 npiFn.sas
Caution: do not parse the output of ls. Its output is meant for human consumption, to understand the contents of the filesystem. If you want a program to know time information about a file, use stat [1].
Now, with that out of the way, I'll answer your question. The time varies because that's how it's defined to work. From the POSIX documentation on ls:
The field shall contain the appropriate date and timestamp of when the file was last modified. In the POSIX locale, the field shall be the equivalent of the output of the following date command:
date "+%b %e %H:%M"
if the file has been modified in the last six months, or:
date "+%b %e %Y"
(where two characters are used between %e and %Y) if the file has not been modified in the last six months or if the modification date is in the future, except that, in both cases, the final <newline> produced by date shall not be included and the output shall be as if the date command were executed at the time of the last modification date of the file rather than the current time. When the LC_TIME locale category is not set to the POSIX locale, a different format and order of presentation of this field may be used.
This definition makes a horrible mess for a program to parse. So, to reiterate: do not parse ls output.
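For illustration, the two POSIX-mandated formats produce output like this (a sketch using GNU date; note the two spaces between %e and %Y in the second format):

```shell
# Files modified within the last six months: month, day, time.
LC_ALL=C date -d "2024-09-12 03:58" "+%b %e %H:%M"    # Sep 12 03:58

# Older files: month, day, two spaces, year.
LC_ALL=C date -d "2018-01-24" "+%b %e  %Y"            # Jan 24  2018
```
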
[1] If you don't have stat on your Solaris box, then you might just have to rely on ls. I'm sorry. The command for that is approximately ls -siv -# -/ c -%all z.
Let's say that we have multiple .log files on a prod Unix machine (SunOS) in a directory:
For example:
ls -tlr
total 0
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2017-01.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2016-02.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 todo2015-01.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 fix20150223.log
The purpose here is that via nawk I extract specific info from the logs (parse the logs) and "transform" them into .csv files in order to load them into Oracle tables afterwards.
Although the nawk script has been tested and works like a charm, how could I write a bash script that automates the following:
1) for a list of given files in this path,
2) runs nawk (my extraction of specific data/info from each log file),
3) outputs each file separately to a unique .csv in another directory,
4) removes the .log files from this path?
What concerns me is the loadstamp/timestamp at the end of each filename, which differs from file to file. I have implemented a script that works only for the latest date (e.g. the last month), but I want to load all the historical data and I am a bit stuck.
To visualize, my desired/target output is this:
bash-4.4$ ls -tlr
total 0
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2017-01.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2016-02.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 todo2015-01.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 fix20150223.csv
How could this bash script be achieved? The loading only takes place one time; it's historical, as mentioned.
Any help would be extremely useful.
An implementation written for readability rather than terseness might look like:
#!/usr/bin/env bash
for infile in *.log; do
  outfile=${infile%.log}.csv
  if awk -f yourscript <"$infile" >"$outfile"; then
    rm -f -- "$infile"
  else
    echo "Processing of $infile failed" >&2
    rm -f -- "$outfile"
  fi
done
To understand how this works, see:
Globbing -- the mechanism by which *.log is replaced with a list of files with that extension.
The Classic for Loop -- The for infile in syntax, used to iterate over the results of the glob above.
Parameter expansion -- The ${infile%.log} syntax, used to expand the contents of the infile variable with any .log suffix pruned.
Redirection -- the syntax used in <"$infile" and >"$outfile", opening stdin and stdout attached to the named files; or >&2, redirecting logs to stderr. (Thus, when we run awk, its stdin is connected to a .log file, and its stdout is connected to a .csv file).
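For instance, the suffix-pruning expansion used above behaves like this (the filename is just an example):

```shell
infile="file2017-01.log"
outfile=${infile%.log}.csv   # remove the trailing ".log", then append ".csv"
echo "$outfile"              # → file2017-01.csv
```
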
I have a cron job which processes data every 15 minutes (12:00, 12:15, etc.). I need a bash function/script which determines how many seconds remain until the next processing cycle, relative to the current time. If the current time is "15:09:00",
the next processing cycle would be 360 sec away. Any ideas? Thanks.
Get the current time in seconds since the UNIX epoch
$ now=$(date +%s)
then compute that value mod 900 (900 seconds is 15 minutes) and subtract that from 900.
$ echo $((900 - now % 900))
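The two steps above can be wrapped in a small bash function (a sketch; the 900 could be made a parameter for other intervals):

```shell
# Print the number of seconds until the next 15-minute boundary.
secs_to_next_cycle() {
  local now
  now=$(date +%s)               # seconds since the UNIX epoch
  echo $(( 900 - now % 900 ))   # 900 s = 15 min
}

secs_to_next_cycle   # prints a value between 1 and 900
```

A script could then wait for the next cycle with `sleep "$(secs_to_next_cycle)"`.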
The date command allows a date to be provided following the -d (--date) option. date also understands relative dates (e.g. + 6 min, +3 days, etc.). So if you need to know the time 6 minutes in the future, you can simply use date -d "+ 6 min" to find it. e.g.
$ date
Fri Jun 10 15:22:45 CDT 2016
$ date -d "+ 6 min"
Fri Jun 10 15:28:47 CDT 2016
Currently I have a csv file like this:
11:00 p.m.
11:00 p.m.
03:00 p.m.
03:00 p.m.
05:00 a.m.
05:00 a.m.
07:00 a.m.
12:00 p.m.
07:00 a.m.
05:00 a.m.
I want to delete the duplicates that are in sequential rows so the output will be this:
11:00 p.m.
03:00 p.m.
05:00 a.m.
07:00 a.m.
12:00 p.m.
07:00 a.m.
05:00 a.m.
I do not want to delete all duplicates, just duplicates that are in sequential rows, for example if the 4th and 5th row match, delete one of the duplicate rows. Is there an easy way to do this without having to run a for-loop?
Try uniq.
It does exactly what you want.
With awk:
awk '$0 != prev; {prev=$0}' file.txt
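A quick demonstration of both answers on a shortened version of the sample data (times.csv is a made-up filename):

```shell
printf '%s\n' '11:00 p.m.' '11:00 p.m.' '03:00 p.m.' '03:00 p.m.' \
              '05:00 a.m.' '07:00 a.m.' '05:00 a.m.' > times.csv

uniq times.csv                            # collapses consecutive duplicate lines
awk '$0 != prev; {prev = $0}' times.csv   # same output
```

Both print one line per run of adjacent duplicates, leaving the later, non-adjacent "05:00 a.m." intact.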