ffmpeg takes too long to start - bash

I have this command in a Python script, in a loop:
ffmpeg -i somefile.mp4 -ss 00:03:12 -t 00:00:35 piece.mp4 -loglevel error -stats
It cuts pieces out of the input file (-i). The input filename, as well as the start time (-ss) and the length of the piece I cut out (-t), vary, so the script reads a number of mp4 files and cuts a number of pieces out of each one. During execution of the script the command might be called around 100 times. My problem is that each time, before it starts, there is a delay of 6-15 seconds, and it adds up to significant time. How can I get it to start immediately?
Initially I thought it was a process priority problem, but I noticed that even during the "pause" all processors work at 100%, so apparently some work is being done.
The script (process_videos.py):
import subprocess
import sys
import math
import time


class TF:
    """TimeFormatter class (TF).

    This class' reason for being is to convert time in short
    form, e.g. 1:33, 0:32, or 23 into long form accepted by
    mp4cut function in bash, e.g. 00:01:22, 00:00:32, etc"""

    def toLong(self, shrt):
        """Converts time to its long form"""
        sx = '00:00:00'
        ladd = 8 - len(shrt)
        n = sx[:ladd] + shrt
        return n

    def toShort(self, lng):
        """Converts time to short form"""
        if lng[0] == '0' or lng[0] == ':':
            return self.toShort(lng[1:])
        else:
            return lng

    def toSeconds(self, any_time):
        """Converts time to seconds"""
        if len(any_time) < 3:
            return int(any_time)
        tt = any_time.split(':')
        if len(any_time) < 6:
            return int(tt[0])*60 + int(tt[1])
        return int(tt[0])*3600 + int(tt[1])*60 + int(tt[2])

    def toTime(self, secsInt):
        """Converts seconds to long (HH:MM:SS) form"""
        hrs, mins, secs = 0, 0, 0
        if secsInt >= 3600:
            hrs = math.floor(secsInt / 3600)
            secsInt = secsInt % 3600
        if secsInt >= 60:
            mins = math.floor(secsInt / 60)
            secsInt = secsInt % 60
        secs = secsInt
        return str(hrs).zfill(2) + ':' + str(mins).zfill(2) + ':' + str(secs).zfill(2)

    def minus(self, t_start, t_end):
        """Returns t_end - t_start in long (HH:MM:SS) form"""
        t_e = self.toSeconds(t_end)
        t_s = self.toSeconds(t_start)
        t_r = t_e - t_s
        hrs, mins, secs = 0, 0, 0
        if t_r >= 3600:
            hrs = math.floor(t_r / 3600)
            t_r = t_r - (hrs * 3600)
        if t_r >= 60:
            mins = math.floor(t_r / 60)
            t_r = t_r - (mins * 60)
        secs = t_r
        hrsf = str(hrs).zfill(2)
        minsf = str(mins).zfill(2)
        secsf = str(secs).zfill(2)
        t_fnl = hrsf + ':' + minsf + ':' + secsf
        return t_fnl


def go_main():
    tf = TF()
    vid_n = 0
    arglen = len(sys.argv)
    if arglen == 2:
        with open(sys.argv[1], 'r') as f_in:
            lines = f_in.readlines()
        start = None
        end = None
        cnt = 0
        for line in lines:
            if line[:5] == 'BEGIN':
                start = cnt
            if line[:3] == 'END':
                end = cnt
            cnt += 1
        if start == None or end == None:
            print('Invalid file format. start = {}, end = {}'.format(start, end))
            return
        else:
            lines_r = lines[start+1:end]
            del lines
            print('videos to process: {}'.format(len(lines_r)))
            f_out_prefix = ""
            for vid in lines_r:
                vid_n += 1
                print('\nProcessing video {}/{}'.format(vid_n, len(lines_r)))
                f_out_prefix = 'v' + str(vid_n) + '-'
                dat = vid.split('!')[1:3]
                title = dat[0]
                dat_t = dat[1].split(',')
                v_pieces = len(dat_t)
                piece_n = 0
                video_pieces = []
                cmd1 = "echo -n \"\" > tmpfile"
                subprocess.run(cmd1, shell=True)
                print(' new tmpfile created')
                for v_times in dat_t:
                    piece_n += 1
                    f_out = f_out_prefix + str(piece_n) + '.mp4'
                    video_pieces.append(f_out)
                    print(' piece filename {} added to video_pieces list'.format(f_out))
                    v_times_spl = v_times.split('-')
                    v_times_start = v_times_spl[0]
                    v_times_end = v_times_spl[1]
                    t_st = tf.toLong(v_times_start)
                    t_dur = tf.toTime(tf.toSeconds(v_times_end) - tf.toSeconds(v_times_start))
                    cmd3 = ["ffmpeg", "-i", title, "-ss", t_st, "-t", t_dur, f_out, "-loglevel", "error", "-stats"]
                    print(' cutting out piece {}/{} - {}'.format(piece_n, len(dat_t), t_dur))
                    subprocess.run(cmd3)
                for video_piece_name in video_pieces:
                    cmd4 = "echo \"file " + video_piece_name + "\" >> tmpfile"
                    subprocess.run(cmd4, shell=True)
                    print(' filename {} added to tmpfile'.format(video_piece_name))
                vname = f_out_prefix[:-1] + ".mp4"
                print(' name of joined file: {}'.format(vname))
                cmd5 = "ffmpeg -f concat -safe 0 -i tmpfile -c copy joined.mp4 -loglevel error -stats"
                to_be_joined = " ".join(video_pieces)
                print(' joining...')
                join_cmd = subprocess.Popen(cmd5, shell=True)
                join_cmd.wait()
                print(' joined!')
                cmd6 = "mv joined.mp4 " + vname
                rename_cmd = subprocess.Popen(cmd6, shell=True)
                rename_cmd.wait()
                print(' File joined.mp4 renamed to {}'.format(vname))
                cmd7 = "rm " + to_be_joined
                rm_cmd = subprocess.Popen(cmd7, shell=True)
                rm_cmd.wait()
                print('rm command completed - pieces removed')
            cmd8 = "rm tmpfile"
            subprocess.run(cmd8, shell=True)
            print('tmpfile removed')
            print('All done')
    else:
        print('Incorrect number of arguments')


############################
if __name__ == '__main__':
    go_main()
process_videos.py is called from a bash terminal like this:
$ python process_videos.py video_data
The video_data file has the following format:
BEGIN
!first_video.mp4!3-23,55-1:34,2:01-3:15,3:34-3:44!
!second_video.mp4!2-7,12-44,1:03-1:33!
END
My system details:
System: Host: snowflake Kernel: 5.4.0-52-generic x86_64 bits: 64 Desktop: Gnome 3.28.4
Distro: Ubuntu 18.04.5 LTS
Machine: Device: desktop System: Gigabyte product: N/A serial: N/A
Mobo: Gigabyte model: Z77-D3H v: x.x serial: N/A BIOS: American Megatrends v: F14 date: 05/31/2012
CPU: Quad core Intel Core i5-3570 (-MCP-) cache: 6144 KB
clock speeds: max: 3800 MHz 1: 1601 MHz 2: 1601 MHz 3: 1601 MHz 4: 1602 MHz
Drives: HDD Total Size: 1060.2GB (55.2% used)
ID-1: /dev/sda model: ST31000524AS size: 1000.2GB
ID-2: /dev/sdb model: Corsair_Force_GT size: 60.0GB
Partition: ID-1: / size: 366G used: 282G (82%) fs: ext4 dev: /dev/sda1
ID-2: swap-1 size: 0.70GB used: 0.00GB (0%) fs: swap dev: /dev/sda5
Info: Processes: 313 Uptime: 16:37 Memory: 3421.4/15906.9MB Client: Shell (bash) inxi: 2.3.56
UPDATE:
Following Charles' advice, I used performance sampling:
# perf record -a -g sleep 180
...and here's the report:
Samples: 74K of event 'cycles', Event count (approx.): 1043554519767
Children Self Command Shared Object
- 50.56% 45.86% ffmpeg libavcodec.so.57.107.100
- 3.10% 0x4489480000002825
0.64% 0x7ffaf24b92f0
- 2.12% 0x5f7369007265646f
av_default_item_name
1.39% 0
- 44.48% 40.59% ffmpeg libx264.so.152
5.78% x264_add8x8_idct_avx2.skip_prologue
3.13% x264_add8x8_idct_avx2.skip_prologue
2.91% x264_add8x8_idct_avx2.skip_prologue
2.31% x264_add8x8_idct_avx.skip_prologue
2.03% 0
1.78% 0x1
1.26% x264_add8x8_idct_avx2.skip_prologue
1.09% x264_add8x8_idct_avx.skip_prologue
1.06% x264_me_search_ref
0.97% x264_add8x8_idct_avx.skip_prologue
0.60% x264_me_search_ref
- 38.01% 0.00% ffmpeg [unknown]
4.10% 0
- 3.49% 0x4489480000002825
0.70% 0x7ffaf24b92f0
0.56% 0x7f273ae822f0
0.50% 0x7f0c4768b2f0
- 2.29% 0x5f7369007265646f
av_default_item_name
1.99% 0x1
10.13% 10.12% ffmpeg [kernel.kallsyms]
- 3.14% 0.73% ffmpeg libavutil.so.55.78.100
2.34% av_default_item_name
- 1.73% 0.21% ffmpeg libpthread-2.27.so
- 0.70% pthread_cond_wait##GLIBC_2.3.2
- 0.62% entry_SYSCALL_64_after_hwframe
- 0.62% do_syscall_64
- 0.57% __x64_sys_futex
0.52% do_futex
0.93% 0.89% ffmpeg libc-2.27.so
- 0.64% 0.64% swapper [kernel.kallsyms]
0.63% secondary_startup_64
0.21% 0.18% ffmpeg libavfilter.so.6.107.100
0.20% 0.11% ffmpeg libavformat.so.57.83.100
0.12% 0.11% ffmpeg ffmpeg
0.11% 0.00% gnome-terminal- [unknown]
0.09% 0.07% ffmpeg libm-2.27.so
0.08% 0.07% ffmpeg ld-2.27.so
0.04% 0.04% gnome-terminal- libglib-2.0.so.0.5600.4

When you put -ss after -i, ffmpeg does not use keyframes to jump to the requested position; it decodes the video from the beginning up to that point. That's where the 6-15 second delay with 100% CPU usage comes from.
You can put -ss before the -i, e.g.:
ffmpeg -ss 00:03:12 -i somefile.mp4 -t 00:00:35 piece.mp4 -loglevel error -stats
This makes ffmpeg use keyframes and jump directly to the start time.
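In the question's script this is just a reordering of cmd3. A minimal sketch (the helper function is mine; the values are the ones from the question):

import subprocess

def cut_piece(src, start, duration, out):
    """Cut one piece using input seeking: -ss placed before -i makes
    ffmpeg seek by keyframe instead of decoding up to the start time."""
    cmd = ["ffmpeg", "-ss", start, "-i", src, "-t", duration,
           out, "-loglevel", "error", "-stats"]
    subprocess.run(cmd)

cut_piece("somefile.mp4", "00:03:12", "00:00:35", "piece.mp4")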

Related

Rename images same as video

I saved one frame from each video. I want to name the saved frame after the video, but I'm not sure how.
import cv2
import os


def video2frames(video_path, output_path, f_num):
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    cap = cv2.VideoCapture(video_path)
    index = 0
    while cap.isOpened():
        Ret, Mat = cap.read()
        if Ret:
            index += 1
            if index != f_num:
                continue
            cv2.imwrite(output_path + '/' + str(index) + '.png', Mat)
        else:
            break
    cap.release()
    return


def multiple_video2frames(video_path, output_path, f_num):
    list_videos = os.listdir(video_path)
    for video in list_videos:
        video_base = os.path.basename(video)
        input_file = video_path + '/' + video
        out_path = output_path + '/' + video_base
        video2frames(input_file, out_path, f_num)
    return


# run all
video_path = 'videos'   # all videos
output_path = 'final'   # location on your PC
multiple_video2frames(video_path, output_path, 300)
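For the naming part, a sketch of one way to do it (this helper is hypothetical, not from the post): derive the frame's filename from the video's basename without its extension.

import os
import cv2

def save_named_frame(video_path, output_path, f_num):
    # Save frame number f_num under the video's own name,
    # e.g. 'clip1.mp4' -> 'clip1.png' (illustrative helper).
    name = os.path.splitext(os.path.basename(video_path))[0]
    cap = cv2.VideoCapture(video_path)
    index = 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        index += 1
        if index == f_num:
            cv2.imwrite(os.path.join(output_path, name + '.png'), frame)
            break
    cap.release()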

Libavformat/FFMPEG: Muxing into mp4 with AVFormatContext drops the final frame, depending on the number of frames

I am trying to use libavformat to create a .mp4 video
with a single h.264 video stream, but the final frame in the resulting file
often has a duration of zero and is effectively dropped from the video.
Strangely enough, whether the final frame is dropped or not depends on how many
frames I try to add to the file. Some simple testing that I outline below makes
me think that I am somehow misconfiguring either the AVFormatContext or the
h.264 encoder, resulting in two edit lists that sometimes chop off the final
frame. I will also post a simplified version of the code I am using, in case I'm
making some obvious mistake. Any help would be greatly appreciated: I've been
struggling with this issue for the past few days and have made little progress.
I can recover the dropped frame by creating a new mp4 container using ffmpeg
binary with the copy codec if I use the -ignore_editlist option. Inspecting
the file with a missing frame using ffprobe, mp4trackdump, or mp4file --dump, shows that the final frame is dropped if its sample time is exactly the
same as the end of the edit list. When I make a file that has no dropped frames, it
still has two edit lists: the only difference is that the end time of the edit
list is beyond all samples in files that do not have dropped frames. Though this
is hardly a fair comparison, if I make a .png for each frame and then generate
a .mp4 with ffmpeg using the image2 codec and similar h.264 settings, I
produce a movie with all frames present, only one edit list, and similar PTS
times as my mangled movies with two edit lists. In this case, the edit list
always ends after the last frame/sample time.
I am using this command to determine the number of frames in the resulting stream,
though I also get the same number with other utilities:
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 video_file_name.mp4
Simple inspection of the file with ffprobe shows no obviously alarming signs to
me, besides the framerate being affected by the missing frame (the target was
24):
$ ffprobe -hide_banner testing.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testing.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.45.100
Duration: 00:00:04.13, start: 0.041016, bitrate: 724 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 100x100, 722 kb/s, 24.24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
Metadata:
handler_name : VideoHandler
The files that I generate programmatically always have two edit lists, one of
which is very short. In files both with and without a missing frame, the
duration of one of the frames is 0, while all the others have the same duration
(512). You can see this in the ffmpeg output for this file that I tried to put
100 frames into, though only 99 are visible despite the file containing all 100
samples.
$ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing.mp4
...
<edited to remove the class printing>
type:'edts' parent:'trak' sz: 48 100 948
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4125 time=0 rate=1.000000
type:'mdia' parent:'trak' sz: 808 148 948
type:'mdhd' parent:'mdia' sz: 32 8 800
type:'hdlr' parent:'mdia' sz: 45 40 800
ctype=[0][0][0][0]
stype=vide
type:'minf' parent:'mdia' sz: 723 85 800
type:'vmhd' parent:'minf' sz: 20 8 715
type:'dinf' parent:'minf' sz: 36 28 715
type:'dref' parent:'dinf' sz: 28 8 28
Unknown dref type 0x206c7275 size 12
type:'stbl' parent:'minf' sz: 659 64 715
type:'stsd' parent:'stbl' sz: 151 8 651
size=135 4CC=avc1 codec_type=0
type:'avcC' parent:'stsd' sz: 49 8 49
type:'stts' parent:'stbl' sz: 32 159 651
track[0].stts.entries = 2
sample_count=99, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 99, offset 5a0ed, dts 50688, size 3707, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50688
type:'udta' parent:'moov' sz: 98 1072 1162
...
The last frame has zero duration:
$ mp4trackdump -v testing.mp4
...
mp4file testing.mp4, track 1, samples 100, timescale 12288
sampleId 1, size 6943 duration 512 time 0 00:00:00.000 S
sampleId 2, size 3671 duration 512 time 512 00:00:00.041 S
...
sampleId 99, size 3687 duration 512 time 50176 00:00:04.083 S
sampleId 100, size 3707 duration 0 time 50688 00:00:04.125 S
Non-mangled videos that I generate have similar structure, as you can see in
this video that had 99 input frames, all of which are visible in the output.
Even though the sample_duration is set to zero for one of the samples in the
stts box, it is not dropped from the frame count or when reading the frames back
in with ffmpeg.
$ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing_99.mp4
...
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4084 time=0 rate=1.000000
...
track[0].stts.entries = 2
sample_count=98, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 98, offset 5d599, dts 50176, size 3833, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50184
...
$ mp4trackdump -v testing_99.mp4
...
sampleId 98, size 3814 duration 512 time 49664 00:00:04.041 S
sampleId 99, size 3833 duration 0 time 50176 00:00:04.083 S
One difference that jumps out to me is that the mangled file's second edit list
ends at time 50688, which coincides with the last sample, while the non-mangled
file's edit list ends at 50184, which is after the time of the last sample
at 50176. As I mentioned before, whether the last frame is clipped depends on
the number of frames I encode and mux into the container: 100 input frames
results in 1 dropped frame, 99 results in 0, 98 in 0, 97 in 1, etc...
Here is the code that I used to generate these files, which is a MWE script
version of library functions that I am modifying. It is written in Julia,
which I do not think is important here, and calls the FFMPEG library version
4.3.1. It's more or less a direct translation of the FFMPEG muxing
demo, although the codec
context here is created before the format context. I am presenting the code that
interacts with ffmpeg first, although it relies on some helper code that I will
put below.
The helper code just makes it easier to work with nested C structs in Julia, and
allows . syntax in Julia to be used in place of C's arrow (->) operator for
field access of struct pointers. Libav structs such as AVFrame appear as a
thin wrapper type AVFramePtr, and similarly AVStream appears as
AVStreamPtr etc... These act like single or double pointers for the purposes
of function calls, depending on the function's type signature. Hopefully it will
be clear enough to understand if you are familiar with working with libav in C,
and I don't think looking at the helper code should be necessary if you don't
want to run the code.
# Function to transfer array to AVPicture/AVFrame
function transfer_img_buf_to_frame!(frame, img)
    img_pointer = pointer(img)
    data_pointer = frame.data[1] # Base-1 indexing, get pointer to first data buffer in frame
    for h = 1:frame.height
        data_line_pointer = data_pointer + (h-1) * frame.linesize[1] # base-1 indexing
        img_line_pointer = img_pointer + (h-1) * frame.width
        unsafe_copyto!(data_line_pointer, img_line_pointer, frame.width) # base-1 indexing
    end
end

# Function to transfer AVFrame to AVCodecContext, and AVPacket to AVFormatContext
function encode_mux!(packet, format_context, frame, codec_context; flush = false)
    if flush
        fret = avcodec_send_frame(codec_context, C_NULL)
    else
        fret = avcodec_send_frame(codec_context, frame)
    end
    if fret < 0 && !in(fret, [-Libc.EAGAIN, VIO_AVERROR_EOF])
        error("Error $fret sending a frame for encoding")
    end
    pret = Cint(0)
    while pret >= 0
        pret = avcodec_receive_packet(codec_context, packet)
        if pret == -Libc.EAGAIN || pret == VIO_AVERROR_EOF
            break
        elseif pret < 0
            error("Error $pret during encoding")
        end
        stream = format_context.streams[1] # Base-1 indexing
        av_packet_rescale_ts(packet, codec_context.time_base, stream.time_base)
        packet.stream_index = 0
        ret = av_interleaved_write_frame(format_context, packet)
        ret < 0 && error("Error muxing packet: $ret")
    end
    if !flush && fret == -Libc.EAGAIN && pret != VIO_AVERROR_EOF
        fret = avcodec_send_frame(codec_context, frame)
        if fret < 0 && fret != VIO_AVERROR_EOF
            error("Error $fret sending a frame for encoding")
        end
    end
    return pret
end
# Set parameters of test movie
nframe = 100
width, height = 100, 100
framerate = 24
gop = 0
codec_name = "libx264"
filename = "testing.mp4"
((width % 2 !=0) || (height % 2 !=0)) && error("Encoding error: Image dims must be a multiple of two")
# Make test images
imgstack = map(x->rand(UInt8,width,height),1:nframe);
pix_fmt = AV_PIX_FMT_GRAY8
framerate_rat = Rational(framerate)
codec = avcodec_find_encoder_by_name(codec_name)
codec == C_NULL && error("Codec '$codec_name' not found")
# Allocate AVCodecContext
codec_context_p = avcodec_alloc_context3(codec) # raw pointer
codec_context_p == C_NULL && error("Could not allocate AVCodecContext")
# Easier to work with pointer that acts like a c struct pointer, type defined below
codec_context = AVCodecContextPtr(codec_context_p)
codec_context.width = width
codec_context.height = height
codec_context.time_base = AVRational(1/framerate_rat)
codec_context.framerate = AVRational(framerate_rat)
codec_context.pix_fmt = pix_fmt
codec_context.gop_size = gop
ret = avcodec_open2(codec_context, codec, C_NULL)
ret < 0 && error("Could not open codec: Return code $(ret)")
# Allocate AVFrame and wrap it in a Julia convenience type
frame_p = av_frame_alloc()
frame_p == C_NULL && error("Could not allocate AVFrame")
frame = AVFramePtr(frame_p)
frame.format = pix_fmt
frame.width = width
frame.height = height
# Allocate picture buffers for frame
ret = av_frame_get_buffer(frame, 0)
ret < 0 && error("Could not allocate the video frame data")
# Allocate AVPacket and wrap it in a Julia convenience type
packet_p = av_packet_alloc()
packet_p == C_NULL && error("Could not allocate AVPacket")
packet = AVPacketPtr(packet_p)
# Allocate AVFormatContext and wrap it in a Julia convenience type
format_context_dp = Ref(Ptr{AVFormatContext}()) # double pointer
ret = avformat_alloc_output_context2(format_context_dp, C_NULL, C_NULL, filename)
if ret != 0 || format_context_dp[] == C_NULL
    error("Could not allocate AVFormatContext")
end
format_context = AVFormatContextPtr(format_context_dp)
# Add video stream to AVFormatContext and configure it to use the encoder made above
stream_p = avformat_new_stream(format_context, C_NULL)
stream_p == C_NULL && error("Could not allocate output stream")
stream = AVStreamPtr(stream_p) # Wrap this pointer in a convenience type
stream.time_base = codec_context.time_base
stream.avg_frame_rate = 1 / convert(Rational, stream.time_base)
ret = avcodec_parameters_from_context(stream.codecpar, codec_context)
ret < 0 && error("Could not set parameters of stream")
# Open the AVIOContext
pb_ptr = field_ptr(format_context, :pb)
# The following is just a call to avio_open, with a bit of extra protection
# so the Julia garbage collector does not destroy format_context during the call
ret = GC.@preserve format_context avio_open(pb_ptr, filename, AVIO_FLAG_WRITE)
ret < 0 && error("Could not open file $filename for writing")
# Write the header
ret = avformat_write_header(format_context, C_NULL)
ret < 0 && error("Could not write header")
# Encode and mux each frame
for i in 1:nframe # iterate from 1 to nframe
    img = imgstack[i] # base-1 indexing
    ret = av_frame_make_writable(frame)
    ret < 0 && error("Could not make frame writable")
    transfer_img_buf_to_frame!(frame, img)
    frame.pts = i
    encode_mux!(packet, format_context, frame, codec_context)
end
# Flush the encoder
encode_mux!(packet, format_context, frame, codec_context; flush = true)
# Write the trailer
av_write_trailer(format_context)
# Close the AVIOContext
pb_ptr = field_ptr(format_context, :pb) # get pointer to format_context.pb
ret = GC.@preserve format_context avio_closep(pb_ptr) # simply a call to avio_closep
ret < 0 && error("Could not free AVIOContext")
# Deallocation
avcodec_free_context(codec_context)
av_frame_free(frame)
av_packet_free(packet)
avformat_free_context(format_context)
Below is the helper code that makes accessing pointers to nested c structs not a
total pain in Julia. If you try to run the code yourself, please enter this
before the logic of the code shown above. It requires
VideoIO.jl, a Julia wrapper to libav.
# Convenience type and methods to make the above code look more like C
using Base: RefValue, fieldindex
import Base: unsafe_convert, getproperty, setproperty!, getindex, setindex!,
unsafe_wrap, propertynames
# VideoIO is a Julia wrapper to libav
#
# Bring bindings to libav library functions into namespace
using VideoIO: AVCodecContext, AVFrame, AVPacket, AVFormatContext, AVRational,
AVStream, AV_PIX_FMT_GRAY8, AVIO_FLAG_WRITE, AVFMT_NOFILE,
avformat_alloc_output_context2, avformat_free_context, avformat_new_stream,
av_dump_format, avio_open, avformat_write_header,
avcodec_parameters_from_context, av_frame_make_writable, avcodec_send_frame,
avcodec_receive_packet, av_packet_rescale_ts, av_interleaved_write_frame,
avformat_query_codec, avcodec_find_encoder_by_name, avcodec_alloc_context3,
avcodec_open2, av_frame_alloc, av_frame_get_buffer, av_packet_alloc,
avio_closep, av_write_trailer, avcodec_free_context, av_frame_free,
av_packet_free
# Submodule of VideoIO
using VideoIO: AVCodecs
# Need to import this function from Julia's Base to add more methods
import Base: convert
const VIO_AVERROR_EOF = -541478725 # AVERROR_EOF
# Methods to convert between AVRational and Julia's Rational type, because it's
# hard to access the AV rational macros with Julia's C interface
convert(::Type{Rational{T}}, r::AVRational) where T = Rational{T}(r.num, r.den)
convert(::Type{Rational}, r::AVRational) = Rational(r.num, r.den)
convert(::Type{AVRational}, r::Rational) = AVRational(numerator(r), denominator(r))
"""
mutable struct NestedCStruct{T}
Wraps a pointer to a C struct, and acts like a double pointer to that memory.
The methods below will automatically convert it to a single pointer if needed
for a function call, and make interacting with it in Julia look (more) similar
to interacting with it in C, except '->' in C is replaced by '.' in Julia.
"""
mutable struct NestedCStruct{T}
data::RefValue{Ptr{T}}
end
NestedCStruct{T}(a::Ptr) where T = NestedCStruct{T}(Ref(a))
NestedCStruct(a::Ptr{T}) where T = NestedCStruct{T}(a)
const AVCodecContextPtr = NestedCStruct{AVCodecContext}
const AVFramePtr = NestedCStruct{AVFrame}
const AVPacketPtr = NestedCStruct{AVPacket}
const AVFormatContextPtr = NestedCStruct{AVFormatContext}
const AVStreamPtr = NestedCStruct{AVStream}
function field_ptr(::Type{S}, struct_pointer::Ptr{T}, field::Symbol,
                   index::Integer = 1) where {S,T}
    fieldpos = fieldindex(T, field)
    field_pointer = convert(Ptr{S}, struct_pointer) +
        fieldoffset(T, fieldpos) + (index - 1) * sizeof(S)
    return field_pointer
end

field_ptr(a::Ptr{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

function check_ptr_valid(p::Ptr, err::Bool = true)
    valid = p != C_NULL
    err && !valid && error("Invalid pointer")
    valid
end

unsafe_convert(::Type{Ptr{T}}, ap::NestedCStruct{T}) where T =
    getfield(ap, :data)[]
unsafe_convert(::Type{Ptr{Ptr{T}}}, ap::NestedCStruct{T}) where T =
    unsafe_convert(Ptr{Ptr{T}}, getfield(ap, :data))

function check_ptr_valid(a::NestedCStruct{T}, args...) where T
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a check_ptr_valid(p, args...)
end
nested_wrap(x::Ptr{T}) where T = NestedCStruct(x)
nested_wrap(x) = x
function getproperty(ap::NestedCStruct{T}, s::Symbol) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(field_ptr(p, s))
    nested_wrap(res)
end

function setproperty!(ap::NestedCStruct{T}, s::Symbol, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    fp = field_ptr(p, s)
    GC.@preserve ap unsafe_store!(fp, x)
end

function getindex(ap::NestedCStruct{T}, i::Integer) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(p, i)
    nested_wrap(res)
end

function setindex!(ap::NestedCStruct{T}, i::Integer, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    GC.@preserve ap unsafe_store!(p, x, i)
end

function unsafe_wrap(::Type{T}, ap::NestedCStruct{S}, i) where {S, T}
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{S}, ap)
    GC.@preserve ap unsafe_wrap(T, p, i)
end

function field_ptr(::Type{S}, a::NestedCStruct{T}, field::Symbol,
                   args...) where {S, T}
    check_ptr_valid(a)
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a field_ptr(S, p, field, args...)
end

field_ptr(a::NestedCStruct{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

propertynames(ap::T) where {S, T<:NestedCStruct{S}} = (fieldnames(S)...,
    fieldnames(T)...)
Edit: Some things that I have already tried
Explicitly setting the stream duration to be the same number as the number of frames that I add, or a few more beyond that
Explicitly setting the stream start time to zero, while the first frame has a PTS of 1
Playing around with encoder parameters, as well as gop_size, using B frames, etc.
Setting the private data for the mov/mp4 muxer to set the movflag negative_cts_offsets
Changing the framerate
Tried different pixel formats, such as AV_PIX_FMT_YUV420P
Also, to be clear: while I can work around this problem by remuxing the file into another container while ignoring the edit lists, I would rather not produce damaged mp4 files in the first place.
I had a similar issue, where the final frame was missing and this caused the resulting calculated FPS to differ from what I expected.
It doesn't seem like you are setting the AVPacket's duration field. I found that relying on the automatic duration (leaving the field at 0) exhibited the issue you describe.
If you have a constant framerate, you can calculate what the duration should be, e.g. set it to 512 for a 12800 time base (512/12800 = 1/25 of a second) at 25 FPS. Hopefully that helps.
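As a quick sanity check of that arithmetic, a sketch in Python (the numbers are the example ones above, not from the question's file):

from fractions import Fraction

time_base = Fraction(1, 12800)  # stream time base: 12800 ticks per second
fps = 25                        # constant frame rate

# One frame lasts 1/fps seconds; expressed in time-base ticks:
duration = int(Fraction(1, fps) / time_base)
print(duration)  # 512 -> the value to assign to each packet's duration field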

ffmpeg-python trim why not concat

I want to split the video, do some logical processing, and finally merge it
import ffmpeg

info = ffmpeg.probe("test.mp4")
vs = next(c for c in info['streams'] if c['codec_type'] == 'video')
num_frames = vs['nb_frames']
arr = []
in_file = ffmpeg.input('test.mp4')
for i in range(int(int(num_frames) / 30) + 1):
    startTime = i * 30 + 1
    endTime = (1 + i) * 30
    if endTime >= int(num_frames):
        endTime = int(num_frames)
    # more more
    arr.append(in_file.trim(start_frame=startTime, end_frame=endTime))
(
    ffmpeg
    .concat(arr)
    .output('out.mp4')
    .run()
)
I don't understand why this is happening
TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got <class 'list'>
Perhaps this is a little too late, but you could try
.concat(*arr)
This worked for me with a list of defined start and end frames.
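So the tail of the question's snippet becomes (only the list unpacking changes; ffmpeg.concat expects each stream as a separate argument, not a single list):

(
    ffmpeg
    .concat(*arr)   # unpack the list: concat(stream1, stream2, ...)
    .output('out.mp4')
    .run()
)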

PyMQI bad consuming performance with persistence

I'm testing the performance of IBM MQ (running the latest version in a local Docker container).
I use a persistent queue.
On the producer side, I can get higher throughput by running multiple producing applications in parallel.
However, on the consumer side, I cannot increase the throughput by parallelizing consumer processes. On the contrary, the throughput is even worse for multiple consumers than for one single consumer.
What could be the reason for the poor consuming performance?
It shouldn't be due to a hardware limit, as I'm comparing consumption with production, and I did only message consumption without any other processing.
Does the GET perform the commit for each message? I don't find any explicit commit method in PyMQI though.
put_demo.py
#!/usr/bin/env python3
import pymqi
import time

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
message = b'Hello from Python!'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
for i in range(nb_messages):
    try:
        queue.put(message)
    except pymqi.MQMIError as e:
        print(f"Fatal error: {str(e)}")
queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages/(t1-t0):.0f} nb_message_produced: {nb_messages}")
get_demo.py
#!/usr/bin/env python3
import pymqi
import time
import os

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000
nb_messages_consumed = 0

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
gmo = pymqi.GMO(Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING)
gmo.WaitInterval = 1000
while nb_messages_consumed < nb_messages:
    try:
        msg = queue.get(None, None, gmo)
        nb_messages_consumed += 1
    except pymqi.MQMIError as e:
        if e.reason == 2033:
            # No messages, that's OK, we can ignore it.
            pass
queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages_consumed/(t1-t0):.0f} nb_messages_consumed: {nb_messages_consumed}")
run results
> for i in {1..10}; do ./put_demo.py & done
tps: 385 nb_message_produced: 1000
tps: 385 nb_message_produced: 1000
tps: 383 nb_message_produced: 1000
tps: 379 nb_message_produced: 1000
tps: 378 nb_message_produced: 1000
tps: 377 nb_message_produced: 1000
tps: 377 nb_message_produced: 1000
tps: 378 nb_message_produced: 1000
tps: 374 nb_message_produced: 1000
tps: 374 nb_message_produced: 1000
> for i in {1..10}; do ./get_demo.py & done
tps: 341 nb_messages_consumed: 1000
tps: 339 nb_messages_consumed: 1000
tps: 95 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
get_demo.py updated version using syncpoint and batch commit
#!/usr/bin/env python3
import pymqi
import time
import os

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000
commit_batch = 10
nb_messages_consumed = 0

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
gmo = pymqi.GMO(Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING | pymqi.CMQC.MQGMO_SYNCPOINT)
gmo.WaitInterval = 1000
while nb_messages_consumed < nb_messages:
    try:
        msg = queue.get(None, None, gmo)
        nb_messages_consumed += 1
        if nb_messages_consumed % commit_batch == 0:
            qmgr.commit()
    except pymqi.MQMIError as e:
        if e.reason == 2033:
            # No messages, that's OK, we can ignore it.
            pass
queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages_consumed/(t1-t0):.0f} nb_messages_consumed: {nb_messages_consumed}")
Thanks.

PyQt Copy folder with progress and transfer rate/speed

I'd like to expand my current code with the ability to show the transfer rate/speed of the files being copied. I'm working on Windows 10 with Python 3.6 and Qt 5.8. Here is my code:
import os
import shutil
from PyQt5.QtCore import QTimer, pyqtSlot
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QLabel, QProgressBar, QFileDialog


class FileCopyProgress(QWidget):
    def __init__(self, parent=None, src=None, dest=None):
        super(FileCopyProgress, self).__init__()
        self.src = src
        self.dest = dest
        self.build_ui()

    def build_ui(self):
        hbox = QVBoxLayout()
        lbl_src = QLabel('Source: ' + self.src)
        lbl_dest = QLabel('Destination: ' + self.dest)
        self.pb = QProgressBar()
        self.pb.setMinimum(0)
        self.pb.setMaximum(100)
        self.pb.setValue(0)
        hbox.addWidget(lbl_src)
        hbox.addWidget(lbl_dest)
        hbox.addWidget(self.pb)
        self.setLayout(hbox)
        self.setWindowTitle('File copy')
        self.auto_start_timer = QTimer()
        self.auto_start_timer.singleShot(2000, lambda: self.copyFilesWithProgress(self.src, self.dest, self.progress, self.copydone))
        self.show()

    def progress(self, done, total):
        progress = int(round((done/float(total))*100))
        try:
            self.pb.setValue(progress)
        except:
            pass
        app.processEvents()

    def copydone(self):
        self.pb.setValue(100)
        self.close()

    def countfiles(self, _dir):
        files = []
        if os.path.isdir(_dir):
            for path, dirs, filenames in os.walk(_dir):
                files.extend(filenames)
        return len(files)

    def makedirs(self, dest):
        if not os.path.exists(dest):
            os.makedirs(dest)

    @pyqtSlot()
    def copyFilesWithProgress(self, src, dest, callback_progress, callback_copydone):
        numFiles = self.countfiles(src)
        if numFiles > 0:
            dest = os.path.join(dest, src.replace(BASE_DIR, '').replace('\\', ''))
            print(''.join(['Destination: ', dest]))
            self.makedirs(dest)
            numCopied = 0
            for path, dirs, filenames in os.walk(src):
                for directory in dirs:
                    destDir = path.replace(src, dest)
                    self.makedirs(os.path.join(destDir, directory))
                for sfile in filenames:
                    srcFile = os.path.join(path, sfile)
                    destFile = os.path.join(path.replace(src, dest), sfile)
                    shutil.copy(srcFile, destFile)
                    numCopied += 1
                    callback_progress(numCopied, numFiles)
            callback_copydone()


BASE_DIR = 'C:\\dev'
app = QApplication([])
FileCopyProgress(src='C:\\dev\\pywin32-221', dest='C:\\dev\\copied')
# Run the app
app.exec_()
This code opens a GUI with a progress bar showing the progress while copying files. A simple label with the current (approximate) transfer rate/speed would be really nice :)
Unfortunately I can't find any examples. Can someone give me a hint or maybe a working example, please?
EDIT:
I did a remake and now I have transfer rate, time elapsed and time remaining. The data seems realistic. I have only one problem: let's assume a folder/file whose time remaining is 7 sec. Currently it starts with 7 sec and gets an update every second. We would expect the next step to display 6 sec, but instead it goes:
5 sec
3 sec
1.5 sec
1.4 sec
1.3 sec
1.2 sec
1.1 sec and so on
Where is the mistake?
class FileCopyProgress(QWidget):
    def __init__(self, parent=None, src=None, dest=None):
        super(FileCopyProgress, self).__init__()
        self.src = src
        self.dest = dest
        self.rate = "0"
        self.total_time = "0 s"
        self.time_elapsed = "0 s"
        self.time_remaining = "0 s"
        self.build_ui()

    def build_ui(self):
        hbox = QVBoxLayout()
        lbl_src = QLabel('Source: ' + self.src)
        lbl_dest = QLabel('Destination: ' + self.dest)
        self.pb = QProgressBar()
        self.lbl_rate = QLabel('Transfer rate: ' + self.rate)
        self.lbl_time_elapsed = QLabel('Time Elapsed: ' + self.time_elapsed)
        self.lbl_time_remaining = QLabel('Time Remaining: ' + self.time_remaining)
        self.pb.setMinimum(0)
        self.pb.setMaximum(100)
        self.pb.setValue(0)
        hbox.addWidget(lbl_src)
        hbox.addWidget(lbl_dest)
        hbox.addWidget(self.pb)
        hbox.addWidget(self.lbl_rate)
        hbox.addWidget(self.lbl_time_elapsed)
        hbox.addWidget(self.lbl_time_remaining)
        self.setLayout(hbox)
        self.setWindowTitle('File copy')
        self.auto_start_timer = QTimer()
        self.auto_start_timer.singleShot(100, lambda: self.copy_files_with_progress(self.src, self.dest, self.progress, self.copy_done))
        self.copy_timer = QTimer()
        self.copy_timer.timeout.connect(lambda: self.process_informations())
        self.copy_timer.start(1000)
        self.show()

    @pyqtSlot()
    def process_informations(self):
        time_elapsed_raw = time.clock() - self.start_time
        self.time_elapsed = '{:.2f} s'.format(time_elapsed_raw)
        self.lbl_time_elapsed.setText('Time Elapsed: ' + self.time_elapsed)
        # example - total: 100 bytes, copied so far: 12 bytes/s
        time_remaining_raw = self._totalSize/self._copied
        self.time_remaining = '{:.2f} s'.format(time_remaining_raw) if time_remaining_raw < 60. else '{:.2f} min'.format(time_remaining_raw)
        self.lbl_time_remaining.setText('Time Remaining: ' + self.time_remaining)
        rate_raw = (self._copied - self._copied_tmp)/1024/1024
        self.rate = '{:.2f} MB/s'.format(rate_raw)
        self.lbl_rate.setText('Transfer rate: ' + self.rate)
        self._copied_tmp = self._copied

    def progress(self):
        self._progress = (self._copied/self._totalSize)*100
        try:
            self.pb.setValue(self._progress)
        except:
            pass
        app.processEvents()

    def get_total_size(self, src):
        # total size of files in bytes
        return sum(os.path.getsize(os.path.join(dirpath, filename))
                   for dirpath, dirnames, filenames in os.walk(src)
                   for filename in filenames)

    def copy_done(self):
        self.pb.setValue(100)
        print("done")
        self.close()

    def make_dirs(self, dest):
        if not os.path.exists(dest):
            os.makedirs(dest)

    @pyqtSlot()
    def copy_files_with_progress(self, src, dst, callback_progress, callback_copydone, length=16*1024*1024):
        self._copied = 0
        self._copied_tmp = 0
        self._totalSize = self.get_total_size(src)
        print(''.join(['Pre Dst: ', dst]))
        dst = os.path.join(dst, src.replace(BASE_DIR, '').replace('\\', ''))
        print(''.join(['Src: ', src]))
        print(''.join(['Dst: ', dst]))
        self.make_dirs(dst)
        self.start_time = time.clock()
        for path, dirs, filenames in os.walk(src):
            for directory in dirs:
                destDir = path.replace(src, dst)
                self.make_dirs(os.path.join(destDir, directory))
            for sfile in filenames:
                srcFile = os.path.join(path, sfile)
                destFile = os.path.join(dst, sfile)
                # destFile = os.path.join(path.replace(src, dst), sfile)
                with open(srcFile, 'rb') as fsrc:
                    with open(destFile, 'wb') as fdst:
                        while 1:
                            buf = fsrc.read(length)
                            if not buf:
                                break
                            fdst.write(buf)
                            self._copied += len(buf)
                            callback_progress()
        try:
            self.copy_timer.stop()
        except:
            print('Error: could not stop QTimer')
        callback_copydone()
That's all about the logic; think of it in the following way:
You have a TOTAL size of all files; let's say you have 100 dashes: [----- ... -----]
Now you record the starting time when you began transferring your files.
Choose some interval, for example 2 secs.
After 2 secs, see how much of the total you have already transferred, in other words how much of the files you already have in the new dir. Let's say you transferred 26 dashes.
The calc would be: 26 dashes / 2 secs = 13 dashes per second.
Going deeper, we have 26% of the content transferred in 2 seconds (since the total is 100), or 13% per second.
Even further, we can also calc the time prediction:
Total Time = 100%/13% = 7.6 secs
Time Elapsed = 2 secs
Time Remaining = 7.6 - 2 = 5.6 secs
I think you got the idea...
Note: you just have to keep redoing all these calcs every 2 seconds (if you choose 2 secs, of course) and updating the information for the user. If you want to be more precise, or to show more frequently updated information, reduce that interval to, say, 0.5 secs and do the calc on top of milliseconds. It's up to you how often you update the information; this is just an overview of how to do the math.
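A minimal sketch of that math in Python (the names are illustrative, not from the post), driven by a running byte counter such as self._copied in the question's code:

import time

def transfer_stats(bytes_copied, bytes_total, start_time):
    # Average rate so far, then extrapolate total and remaining time.
    elapsed = time.monotonic() - start_time                # seconds since the copy began
    rate = bytes_copied / elapsed if elapsed > 0 else 0.0  # bytes per second
    total = bytes_total / rate if rate > 0 else float('inf')
    remaining = max(total - elapsed, 0.0)
    return rate, elapsed, remaining

Recomputed on every timer tick, the remaining time then counts down smoothly, because it is total time minus elapsed time rather than a ratio of sizes.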
