I need to extract the middle frame of a gif animation.
ImageMagick:
convert C:\temp\orig.gif -coalesce C:\temp\frame.jpg
generates the frames properly. However, when I extract a single frame:
convert C:\temp\orig.gif[4] -coalesce C:\temp\frame.jpg
the frame is malformed, as if the -coalesce option were ignored.
Extraction of individual frames with Pillow and ffmpeg also results in malformed frames, tested on a couple of gifs.
Download gif: https://i.imgur.com/Aus8JpT.gif
I need to be able to extract the middle frame of any GIF with either PIL, ImageMagick, or ffmpeg (ideally PIL).
You are attempting to coalesce a single input image into a single output image. What you got is what you asked for.
Instead you should "flatten" frames 0-4 into a single output image:
convert C:\temp\orig.gif[0-4] -flatten C:\temp\frame.jpg
If you use "-coalesce" you'll get 5 frames of output in frame-0.jpg through frame-4.jpg, the last of them being the image you wanted.
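That is, a command along these lines should write the numbered frames:
convert C:\temp\orig.gif[0-4] -coalesce C:\temp\frame.jpg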
OK, this script will find and save the middle frame of an animated GIF using Pillow.
It will also print the total duration of the GIF by summing the duration, in milliseconds, of each frame.
from PIL import Image

def iter_frames(im):
    try:
        i = 0
        while True:
            im.seek(i)
            frame = im.copy()
            if i == 0:
                # Save palette of the first frame
                palette = frame.getpalette()
            else:
                # Copy the palette to the subsequent frames
                frame.putpalette(palette)
            yield frame
            i += 1
    except EOFError:  # End of gif
        pass

im = Image.open('animated.gif')
middle_frame_pos = im.n_frames // 2
durations = []
for i, frame in enumerate(iter_frames(im)):
    if i == middle_frame_pos:
        middle_frame = frame.copy()
    try:
        durations.append(frame.info['duration'])
    except KeyError:
        pass
middle_frame.save('middle_frame.png', **middle_frame.info)
print('Total duration: %d ms' % sum(durations))
Helpful code:
Python: Converting GIF frames to PNG
https://github.com/alimony/gifduration
You can do it like this:
convert pour.gif -coalesce -delete 0-3,5-8 frame4.png
Basically, it generates all the frames in full and then deletes every frame other than frame 4.
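Since the question says "ideally PIL", here is a minimal Pillow sketch of the same idea. It assumes a recent Pillow, which composites partial GIF frames on seek, so no explicit coalesce step is needed:
from PIL import Image

with Image.open('pour.gif') as im:
    im.seek(im.n_frames // 2)  # jump straight to the middle frame
    im.convert('RGB').save('frame4.png')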
I have a use case where I want to insert one of two watermarks into a video: one designed for a dark-ish background, the other for a light background. Let's say I want to place it in the top right corner of the video.
How do I determine the average color of the top right section of the video? And once I have it, how do I decide which watermark to use based on that average color?
I have a solution right now where I am taking equally spaced screenshots and then measuring the average color, but it's excruciatingly slow, especially for longer videos.
# Calculate average color
# (assuming the streamio-ffmpeg and mini_magick gems)
require 'streamio-ffmpeg'
require 'mini_magick'
require 'securerandom'

black = [0, 0, 0]        # reference colors (assumed RGB triples)
white = [255, 255, 255]

black_distances = []
white_distances = []

movie = FFMPEG::Movie.new(video_file)
(0..movie.duration).step(10).each do |second|
  # extract a frame every 10 seconds
  filename = "tmp/watermark/#{SecureRandom.uuid}.jpg"
  movie.screenshot filename.to_s, seek_time: second

  # analyse the frame's top-right region for color distance
  frame = MiniMagick::Image.open(filename)
  frame.crop('20%x20%+80%+0')
  frame.resize('1x1')
  pixel = frame.get_pixels.flatten
  distance_from_black = Math.sqrt((black[0] - pixel[0])**2 + (black[1] - pixel[1])**2 + (black[2] - pixel[2])**2)
  distance_from_white = Math.sqrt((white[0] - pixel[0])**2 + (white[1] - pixel[1])**2 + (white[2] - pixel[2])**2)

  black_distances.push distance_from_black
  white_distances.push distance_from_white
  File.delete(filename) if File.exist?(filename)
end

average_black_distance = black_distances.reduce(:+).to_f / black_distances.size
average_white_distance = white_distances.reduce(:+).to_f / white_distances.size
I am also confused about how to use the resulting average_black_distance and average_white_distance to determine which watermark to use.
I was able to make this faster by doing the following:
Taking all screenshots with a single ffmpeg command, instead of iterating over the movie's duration and taking an individual shot every x seconds.
Cropping and scaling the screenshot in the same ffmpeg command, instead of doing that with MiniMagick.
Deleting the screenshot folder with rm_rf at the end, instead of deleting individual files inside the loop.
# Create folder for screenshots
foldername = "tmp/watermark/screenshots/#{medium.id}/"
FileUtils.mkdir_p foldername

# Take screenshots: one frame every 5 seconds (fps=0.2),
# cropped to the top of the frame and scaled down to a single pixel
movie = FFMPEG::Movie.new(video_file)
`ffmpeg -i #{video_file} -f image2 -vf fps=fps=0.2,crop=in_w/6:in_h/14:0:0,scale=1:1 #{foldername}/out%d.png`

# Calculate distances
white = Color.new('#ffffff')
black = Color.new('#000000')
distances = []
Dir.foreach foldername do |f|
  next if %w[. .. .DS_Store].include? f
  f = MiniMagick::Image.open(foldername + f)
  color = f.get_pixels.flatten
  distance_from_black = Color.new(color).color_distance(black)
  distance_from_white = Color.new(color).color_distance(white)
  distances.push distance_from_black - distance_from_white
end
If the value of distances.inject(0, :+) is positive, the sampled video area is on the brighter side. Thanks to @Aetherus!
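Putting it together, the sign of that sum can pick the watermark directly (the file names here are hypothetical):
watermark = distances.inject(0, :+) > 0 ? 'watermark_for_light_bg.png' : 'watermark_for_dark_bg.png'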
Trying to convert some grayscale images to RGB (1,1,1). I have a folder of about 1500 images that I need batch-converted using the code below (which works well on individual images).
Interestingly enough,
imwrite(repmat(imread(files(1).name), [1 1 3]),files(1).name)
imwrite(repmat(imread(files(2).name), [1 1 3]),files(2).name)
imwrite(repmat(imread(files(3).name), [1 1 3]),files(3).name)
...(and so forth)
works just fine, whereas this loop fails:
files = dir('*.jpeg')
for I=1:length(files)
    imwrite(repmat(imread(files(i).name), [1 1 3]),files(i).name)
    display(i)
end
Error using writejpg (line 46)
Data with 9 components not supported for JPEG files.
Error in imwrite (line 485)
feval(fmt_s.write, data, map, filename, paramPairs{:});
You need to do two things:
Use a consistent variable name for looping, i.e. i or I but not a mix! Note that i has a built-in definition as the imaginary unit, so you're better off using I, or something different entirely.
You get an error about JPEG data with 9 components when trying to write the file. This suggests you've blindly used repmat to triplicate an image which is already RGB (3 channels triplicated gives 9).
We can address both of these like so:
files = dir('*.jpeg');
for k = 1:length(files)
    img = imread( files(k).name ); % Load the image first
    % Convert greyscale to RGB if not already RGB.
    % If it's already RGB, we don't even need to overwrite the image.
    if size(img,3) == 1
        imwrite(repmat(img, [1 1 3]), files(k).name);
    end
    % Display progress
    display(k)
end
I need to produce a lot of videos with the following specifications:
A background video (bg.mp4)
Overlay a sequence of png images img1.png to img300.png (img%d.png) at a rate of 30 fps
Overlay a video with dust effects using a blend-lighten filter (dust.mp4)
Scale all the inputs to 1600x900; if an input doesn't have that aspect ratio, crop it.
Set the duration of the output video to 10 seconds (the duration of the image sequence at 30 fps).
I've been doing a lot of tests with different commands, but they always produce errors.
Well, I think I got it with the following command:
ffmpeg -ss 00:00:18.300 -i music.mp3 -loop 1 -i bg.mp4 -i ac%d.png -i dust.mp4 -filter_complex "[1:0]scale=1600:ih*1600/iw, crop=1600:900, setsar=1[a]; [a][2:0] overlay=0:0[b]; [3:0]scale=1600:ih*1600/iw, crop=1600:900, setsar=1[c]; [b][c] blend=all_mode='overlay':all_opacity=0.1" -shortest -y output.mp4
I'm going to explain it in order, to share what I've found:
Declaring the inputs:
ffmpeg -ss 00:00:18.300 -i music.mp3 -loop 1 -i bg.mp4 -i ac%d.png -i dust.mp4
Adding the filter complex. First part: [1:0] is the second input (bg.mp4); it is scaled so the width fills the target, then cropped to the size I need. The result of this operation is stored in the [a] element.
[1:0]scale=1600:ih*1600/iw, crop=1600:900, setsar=1[a];
Second part: put the PNG sequence over the resized video (bg.mp4, now [a]) and store the result in the [b] element.
[a][2:0] overlay=0:0[b];
Scaling and cropping the fourth input (dust.mp4) and storing it in the [c] element.
[3:0]scale=1600:ih*1600/iw, crop=1600:900,setsar=1[c];
Mixing the first result with the dust video using the "overlay" blend mode, with an opacity of 0.1, because the video has gray tones and would otherwise make the result too dark.
[b][c] blend=all_mode='overlay':all_opacity=0.1
That's all.
If anyone can explain how this scaling filter works, I would be very thankful!
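For what it's worth: scale=1600:ih*1600/iw sets the output width to 1600 and multiplies the height by the same factor (1600/iw), so the aspect ratio is preserved before cropping. If I read the ffmpeg docs right, the shorthand scale=1600:-2 should be equivalent (with -2 also forcing an even height), e.g.:
[3:0]scale=1600:-2, crop=1600:900, setsar=1[c];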
I needed to process a stack of images and was unable to get ffmpeg to work for me reliably, so I built a Python tool to help mediate the process:
#!/usr/bin/env python3

import functools
import multiprocessing
import os
import re
import sys
import time

import numpy as np
from PIL import Image, ImageChops, ImageFont, ImageDraw

def get_trim_box(image_name):
    im = Image.open(image_name)
    bg = Image.new(im.mode, im.size, im.getpixel((0,0)))
    diff = ImageChops.difference(im, bg)
    diff = ImageChops.add(diff, diff, 2.0, -100)
    # The bounding box is returned as a 4-tuple defining the left, upper,
    # right, and lower pixel coordinates. If the image is completely empty,
    # this method returns None.
    return diff.getbbox()

def rect_union(rect1, rect2):
    left1, upper1, right1, lower1 = rect1
    left2, upper2, right2, lower2 = rect2
    return (
        min(left1, left2),
        min(upper1, upper2),
        max(right1, right2),
        max(lower1, lower2)
    )

def blend_images(img1, img2, steps):
    return [Image.blend(img1, img2, alpha) for alpha in np.linspace(0, 1, steps)]

def make_blend_group(options):
    print("Working on {0}+{1}".format(options["img1"], options["img2"]))
    font = ImageFont.truetype(options["font"], size=options["fontsize"])
    img1 = Image.open(options["img1"], mode='r').convert('RGB')
    img2 = Image.open(options["img2"], mode='r').convert('RGB')
    # crop() returns a new image, so the results must be assigned
    img1 = img1.crop(options["trimbox"])
    img2 = img2.crop(options["trimbox"])
    blends = blend_images(img1, img2, options["blend_steps"])
    for i, img in enumerate(blends):
        draw = ImageDraw.Draw(img)
        draw.text(options["textloc"], options["text"], fill=options["fill"], font=font)
        img.save(os.path.join(options["out_dir"], "out_{0:04}_{1:04}.png".format(options["blendnum"], i)))

if len(sys.argv) < 3:
    print("Syntax: {0} <Output Directory> <Images...>".format(sys.argv[0]))
    sys.exit(-1)

out_dir = sys.argv[1]
image_names = sys.argv[2:]

# Note: this relies on fork-style multiprocessing; on Windows/macOS the
# module-level code should be moved under `if __name__ == '__main__':`.
pool = multiprocessing.Pool()

image_names = sorted(image_names)
image_names.append(image_names[0]) # So we can loop the animation

# Assumes image names are alphabetic with a UNIX timestamp mixed in.
image_times = [re.sub('[^0-9]', '', x) for x in image_names]
image_times = [time.strftime('%Y-%m-%d (%a) %H:%M', time.localtime(int(x))) for x in image_times]

# Crop off the edges, assuming the upper-left pixel is representative of the background color
print("Finding trim boxes...")
trimboxes = pool.map(get_trim_box, image_names)
trimboxes = [x for x in trimboxes if x is not None]
trimbox = functools.reduce(rect_union, trimboxes, trimboxes[0])

# Put dates on images
testimage = Image.open(image_names[0])
font = ImageFont.truetype('DejaVuSans.ttf', size=90)
draw = ImageDraw.Draw(testimage)
# Note: textsize was removed in Pillow 10; use textbbox on newer versions
tw, th = draw.textsize("2019-04-04 (Thu) 00:30", font)
tx, ty = (50, trimbox[3] - 1.1 * th) # starting position of the message

options = {
    "blend_steps": 10,
    "trimbox": trimbox,
    "fill": (255, 255, 255),
    "textloc": (tx, ty),
    "out_dir": out_dir,
    "font": 'DejaVuSans.ttf',
    "fontsize": 90
}

# Generate pairs of images to blend
pairs = zip(image_names, image_names[1:])

# Each job is a dict of (img1, img2, blend number, text) plus the shared options
pairs = [{**options, "img1": x[0], "img2": x[1], "blendnum": i, "text": image_times[i]} for i, x in enumerate(pairs)]

# Run in parallel
pool.map(make_blend_group, pairs)
This produces a series of images which can be made into a video like this:
ffmpeg -pattern_type glob -i "/z/out_*.png" -pix_fmt yuv420p -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -r 30 /z/out.mp4
I am writing a function that generates a movie mimicking a particle in a fluid. The movie is in colour, and I would like to generate a grayscale movie to start with. Right now I am using avifile instead of VideoWriter. Any help on changing this code to get a grayscale movie? Thanks in advance.
close all;
clear variables;
colormap('gray');
vidObj = avifile('movie.avi');
for i = 1:N
    [nx,ny] = coordinates(Lx,Ly,Nx,Ny,[x(i),-y(i)]);
    [xf,yf] = ndgrid(nx,ny);
    zf = zeros(size(xf)) + z(i);
    % generate a frame here
    [E,H] = nfmie(an,bn,xf,yf,zf,rad,ns,nm,lambda,tf_flag,cc_flag);
    Ecc = sqrt(real(E(:,:,1)).^2 + real(E(:,:,2)).^2 + real(E(:,:,3)).^2 + imag(E(:,:,1)).^2 + imag(E(:,:,2)).^2 + imag(E(:,:,3)).^2);
    clf
    imagesc(nx/rad,ny/rad,Ecc);
    writetif(Ecc,i);
    if i == 1
        cl = caxis;
    else
        caxis(cl)
    end
    axis image;
    axis off;
    frame = getframe(gca);
    cdata_size = size(frame.cdata);
    data = uint8(zeros(ceil(cdata_size(1)/4)*4, ceil(cdata_size(2)/4)*4, 3));
    data(1:cdata_size(1), 1:cdata_size(2), 1:cdata_size(3)) = [frame.cdata];
    frame.cdata = data;
    vidObj = addframe(vidObj,frame);
end
vidObj = close(vidObj);
For your frame data, use rgb2gray to convert a colour frame into its grayscale counterpart. As such, change this line:
data(1:cdata_size(1),1:cdata_size(2),1:cdata_size(3)) = [frame.cdata];
To these two lines:
frameGray = rgb2gray(frame.cdata);
data(1:cdata_size(1), 1:cdata_size(2), 1:cdata_size(3)) = ...
    cat(3, frameGray, frameGray, frameGray);
The first line of the new code converts your colour frame into a single-channel grayscale image. In colour, a grayscale image has the same value in all three channels, which is why the second line calls cat(3,frameGray,frameGray,frameGray);. This stacks three copies of the grayscale image on top of each other as a 3D matrix, and you can then write this frame to your file.
You need to do this stacking because when writing a frame to file using VideoWriter, the frame must be colour (a.k.a. a 3D matrix). As such, the only workaround you have if you want to write a grayscale frame to the file is to replicate the grayscale image into each of the red, green and blue channels to create its colour equivalent.
BTW, cdata_size(3) will always be 3, as getframe's cdata field is always a 3D matrix.
Good luck!
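Since the question mentions moving to VideoWriter: a minimal sketch of the loop under that assumption. With the 'Grayscale AVI' profile, a single-channel frame can be written directly, so the stacking above is not needed there:
% Minimal sketch, assuming a switch from avifile to VideoWriter
vidObj = VideoWriter('movie.avi', 'Grayscale AVI');
open(vidObj);
for i = 1:N
    % ... build the figure and grab it, as in the original loop ...
    frame = getframe(gca);
    writeVideo(vidObj, rgb2gray(frame.cdata)); % single-channel uint8 frame
end
close(vidObj);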
I am working on a project about lip recognition, and I have to read a video recorded at a frame rate of 30 fps. If a video has 70 frames, I need to acquire or select a representative frame every 8 frames, since the shortest video in the data set has 16 frames. My problem is adjusting the for loop to step every 8 frames, and it can't read any frame. Is the problem with the video reader? If you have any idea I would be grateful.
Thanks.
v = VideoReader('1 - 1.avi');
s = floor((size(v,4))/8);
for t = 1:s:size(v,4)
    img = read(s,i);
    y = imresize(img,[100,120]);
I would take the documentation example for VideoReader and modify the code to explain:
%%// Parameters
sampling_factor = 8;
resizing_params = [100 120];

%%// Input video
xyloObj = VideoReader('xylophone.mpg');

%%// Setup other parameters
nFrames = floor(xyloObj.NumberOfFrames/sampling_factor); %%// originally: xyloObj.NumberOfFrames;
vidHeight = resizing_params(1); %// xyloObj.Height;
vidWidth = resizing_params(2); %// xyloObj.Width;

%// Preallocate movie structure.
mov(1:nFrames) = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), 'colormap', []);

%// Read one frame at a time.
for k = 1:nFrames
    IMG = read(xyloObj, (k-1)*sampling_factor+1);
    %// IMG = some_operation(IMG);
    mov(k).cdata = imresize(IMG, [vidHeight vidWidth]);
end

%// Size a figure based on the video's width and height.
hf = figure;
set(hf, 'position', [150 150 vidWidth vidHeight])

%// Play back the movie once at the video's frame rate.
movie(hf, mov, 1, xyloObj.FrameRate);
Basically, the only changes I have made are to nFrames and the factors revolving around it. Try changing sampling_factor and see if it makes sense. Also, I have added the image resizing that you were performing at the end of your code.
You can achieve this task by reading the frames from the video and storing them in a cell array. From the cell array, you can easily read whichever frames you want by customizing the for loop as follows:
for i = 1:8:n
    frame = frames{i};
    process(frame)
end
frames: a cell array containing all the frames of the video
process: a function that performs your task
n: the number of frames in the video
If you want more information on reading frames from a video and storing them in a cell array, visit the following link:
https://naveenideas.blogspot.in/2016/07/reading-frames-from-video.html
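For reference, a minimal sketch of building that cell array with VideoReader (assuming a MATLAB release with hasFrame/readFrame):
% Read every frame of the video into a cell array
v = VideoReader('1 - 1.avi');
frames = {};
while hasFrame(v)
    frames{end+1} = readFrame(v); %#ok<AGROW> grow the cell array per frame
end
n = numel(frames); % total number of frames, used by the loop above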