What's a good way to grab data from an MPEG Transport Stream file to use within a script? - mpeg

I've recently started to use tsduck, which I know can give me information about each and every packet within a TS file. But when I pipe the output to a text file, it's far too much data, and viewers often crash when I try to open it. Similarly, if I read that file into a script, e.g. a Python script, it's slow and sluggish, especially for long-duration HD content where the number of TS packets runs into the millions.
Yes, utilities like tsduck have their uses, but there are some very specific things I want to find out about the stream, e.g. how consistent the PAT and PMT are, or whether continuity indicators are present... the list goes on.
Yes, there are tools out there like Manzanita that give this information, but what if you wanted to create your own script to do these things?
So the question is, what's a good way to grab data from an MPEG transport stream file such that you can use a script to analyse the data?
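For the checks you mention, you don't need a full analyser dump: every TS packet is a fixed 188 bytes, and the PID and continuity counter sit at fixed offsets in the 4-byte header, so a script can pull just those fields straight out of the file. A minimal sketch in Python (the file name is a placeholder, and it assumes plain 188-byte packets rather than 192-byte M2TS):

```python
PACKET_SIZE = 188   # standard TS packet size (M2TS/Blu-ray files use 192)
SYNC_BYTE = 0x47
PAT_PID = 0x0000    # the PAT is always carried on PID 0

def iter_packets(path):
    """Yield raw 188-byte packets from a transport stream file."""
    with open(path, "rb") as f:
        while True:
            pkt = f.read(PACKET_SIZE)
            if len(pkt) < PACKET_SIZE:
                break
            yield pkt

def parse_header(pkt):
    """Pull PID, continuity counter, and payload flag from the 4-byte header."""
    if pkt[0] != SYNC_BYTE:
        raise ValueError("lost sync")
    pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
    cc = pkt[3] & 0x0F
    has_payload = bool(pkt[3] & 0x10)
    return pid, cc, has_payload

def scan(path):
    """Count PAT packets and continuity-counter errors over the whole file."""
    last_cc = {}
    pat_packets = cc_errors = 0
    for pkt in iter_packets(path):
        pid, cc, has_payload = parse_header(pkt)
        if pid == PAT_PID:
            pat_packets += 1
        if has_payload:
            # the counter increments mod 16 on every payload-carrying packet
            if pid in last_cc and (last_cc[pid] + 1) % 16 != cc:
                cc_errors += 1
            last_cc[pid] = cc
    return pat_packets, cc_errors
```

Because this touches only 4 of every 188 bytes, it stays fast even on multi-gigabyte files; for PAT/PMT consistency you would additionally record the byte (or PCR) position of each PAT/PMT packet and look at the spacing.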

Related

Ruby detect when an mp4 file is "complete"

I am using OBS to sequentially record many mp4 files. They are stored in a directory under a naming convention of clip_{n}.mp4. A Ruby program detects the files in the directory and lists them for the user. However, the second you hit record in OBS, the file is created and is still being written, so the Ruby program detects an incomplete file and lists it to the user. I am looking for a way to exclude this incomplete file until OBS has finished recording and saving the clip. I have tried using system %Q[lsof #{file_path}], but it is quite slow. Is there a better way to detect an "incomplete" file?
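A cheaper alternative to shelling out to lsof is a size-stability check: a file that is still being recorded keeps growing, so sample its size twice and only list it once the size has settled. Sketched here in Python for brevity (settle_seconds is a tunable assumption; the same two File.size calls plus a sleep translate directly to Ruby):

```python
import os
import time

def appears_complete(path, settle_seconds=1.0):
    """Heuristic: a file still being written usually keeps growing,
    so treat it as complete once its size stops changing."""
    first = os.path.getsize(path)
    time.sleep(settle_seconds)
    return os.path.getsize(path) == first
```

A recorder can pause between writes, so a very short settle window can misfire; combining this with a minimum file age is a common refinement.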

Loading a remote file into ffmpeg efficiently

My use case requires transcoding a remote MOV file that can't be stored locally. I was hoping to use the HTTP protocol to stream the file into ffmpeg. This works, but I'm observing it to be a very expensive operation with (seemingly) redundant network traffic, so I am looking for suggestions.
What I see is that ffmpeg starts out with a Range request "0-" (which brings in the entire file), followed by a number of open-ended requests (no ending offset) at different positions, each of which makes the HTTP server return a large chunk of the file again, from the given starting position to the very end.
For example, http range requests for a short 10MB file look like this:
bytes=0-
bytes=10947419-
bytes=36-
bytes=3153008-
bytes=5876422-
Is there another input method that would be more network-efficient for my use case? I control the server where the video file resides, so I’m flexible in what code runs there.
Any help is greatly appreciated
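That access pattern (one request from 0, then several open-ended requests from scattered offsets) is typical of ffmpeg seeking around to find the moov atom, which MOV/MP4 muxers often write at the end of the file. Since you control the server, remuxing once with `ffmpeg -i in.mov -c copy -movflags +faststart out.mov` moves the moov to the front, after which the read is essentially sequential. A sketch (assuming a standard top-level atom layout) to check whether a file already has the moov before the mdat:

```python
import struct

def top_level_atoms(path):
    """List top-level MOV/MP4 atom types in file order. If 'moov' comes
    after 'mdat', a streaming reader must seek toward the end of the
    file before it can start decoding."""
    atoms = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, kind = struct.unpack(">I4s", header)
            atoms.append(kind.decode("ascii", "replace"))
            if size == 0:          # atom runs to end of file
                break
            if size == 1:          # 64-bit size stored in the next 8 bytes
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            else:
                f.seek(size - 8, 1)
    return atoms
```

If moov appears after mdat in the returned list, the file will trigger exactly the end-of-file range requests you observed.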

two programs accessing one file

New to this forum - looks great!
I have some Processing code that periodically reads data wirelessly from remote devices and writes that data as bytes to a file, e.g. data.dat. I want to write an Objective C program on my Mac Mini using Xcode to read this file, parse the data, and act on the data if data values indicate a problem. My question is: can my two different programs access the same file asynchronously without a problem? If this is a problem can you suggest a technique that will allow these operations?
Thanks,
Kevin H.
Multiple processes can read from the same file at the same time without any problem. A process can also read from a file while another writes to it, although you'll have to take care to ensure that you pick up any new data that was written. Multiple processes should not write to the same file at the same time, though. The OS will let you do it, but the ordering of the data will be undefined and you'll likely overwrite data; in general, you're gonna have a bad time if you do that. So you should take care to ensure that only one process writes to a file at a time.
The simplest way to protect a file so that only one process can write to it at a time is with the C function flock(), although that function is admittedly a bit rudimentary and may or may not suit your use case.
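flock(2) is also exposed in Python as fcntl.flock, which makes it easy to prototype the locking scheme before wiring it into the Processing and Objective-C sides (both of which can call flock directly). A sketch, with the file name up to you; note that flock is advisory, so every cooperating process must take the lock:

```python
import fcntl

def append_locked(path, payload):
    """Append bytes while holding an exclusive advisory lock, so that
    only one cooperating process writes to the file at a time."""
    with open(path, "ab") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until any other holder releases
        try:
            f.write(payload)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

A reader that only ever reads does not strictly need the lock, but taking a shared lock (LOCK_SH) while reading guarantees it never sees a half-written record.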

In Haskell, in Windows 7, can I read a file that is already write-locked by another program?

I have a 3rd party program that is running continuously, and is logging events in a text file. I want to write a small Haskell program that reads the text file while the other program is running and warns me when certain events are logged.
I looked around and it seems as if, on Windows, readFile allows either a single writer OR multiple readers; it does not allow a single writer and multiple readers at the same time. As I understand it, this is to avoid side effects like the write changing the file after/during reads.
Is there some way for me to work around this constraint on locks? The log file is only appended, and I am only looking for specific rows in the file, so I really don't mind if I don't get the most recent write, as I am interested in eventual consistency and will keep checking the file.
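Whatever the workaround for the Windows lock turns out to be, the eventual-consistency polling you describe is usually built by remembering the last byte offset and reopening the file on each poll, so the reader never holds the file open while the logger writes. A sketch of that pattern (shown in Python for brevity; the same open/seek/read loop ports to Haskell's Handle API):

```python
import time

def follow(path, poll_seconds=1.0):
    """Yield new complete lines appended to a log file, reopening the
    file on every poll so it is never held open while the logger writes."""
    offset = 0
    while True:
        with open(path, "r") as f:
            f.seek(offset)
            while True:
                line = f.readline()
                if not line:
                    break
                if not line.endswith("\n"):
                    break            # partially written last line; retry later
                offset = f.tell()
                yield line.rstrip("\n")
        time.sleep(poll_seconds)
```

Skipping the trailing partial line means a record is only acted on once the logger has finished writing it, which matches the "eventual consistency is fine" requirement.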

need a shell script to log datastream from bluetooth (+visualising?)

Let's suppose I have some device /dev/btooth or something like that which outputs a stream of four-byte records separated by a certain marker (let's suppose it is the sequence 0,255,13) at a rate of 10 quadruples a second.
I need to log this stream to a file, one line per record. What I don't want is to write to the file that often (I'm afraid my SSD won't survive).
So, the stream is like:
25,64,0,255,13,10,3,0,15,0,255,13,53,65,254,252,0,255,13,...
the file should look like
10,3,0,15
53,65,254,252
Drawing live plots would be even better, though I'm not sure it's possible in shell.
There might be some other solutions (logging+visualising on Mac OS X?)
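A small Python filter (usable from a shell pipeline) can do both the splitting and, importantly, the batching of disk writes: accumulate records in memory and flush once per FLUSH_EVERY records, so at 10 quadruples per second the SSD sees roughly one write every ten seconds instead of ten per second. A sketch, assuming the 3-byte marker never occurs inside a record's payload:

```python
MARKER = bytes([0, 255, 13])   # separator between 4-byte records
FLUSH_EVERY = 100              # ~10 records/s -> about one disk write per 10 s

def log_stream(stream, out):
    """Split the raw byte stream on MARKER and write one CSV line per
    record, batching writes to spare the SSD."""
    buf = b""
    pending = []
    while True:
        chunk = stream.read(4096)
        if not chunk:
            break
        buf += chunk
        *records, buf = buf.split(MARKER)   # keep the trailing partial record
        for rec in records:
            if rec:                          # ignore empty splits
                pending.append(",".join(str(b) for b in rec))
        if len(pending) >= FLUSH_EVERY:
            out.write("\n".join(pending) + "\n")
            out.flush()
            pending.clear()
    if pending:
        out.write("\n".join(pending) + "\n")
        out.flush()
```

Run it as e.g. `python logger.py < /dev/btooth >> data.log` with `log_stream(sys.stdin.buffer, sys.stdout)` at the bottom. Note that a capture starting mid-record (like the leading 25,64 fragment in your example) produces a short first line, so you may want to drop records shorter than 4 bytes. For live plots, piping the same CSV lines into a small gnuplot or matplotlib loop is a common follow-on.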
