What I am trying to do is add a short text at a specific time frame of an mp4 video.
I have this code:
# Import everything needed to edit video clips
from moviepy.editor import *
# load the source video
clip = VideoFileClip("video.mp4")
# keep only the first 5 seconds
clip = clip.subclip(0, 5)
# reduce the audio volume (volume x 0.8)
clip = clip.volumex(0.8)
text = "Hello world!"
# generate a text clip
txt_clip = TextClip(text, fontsize=75, color='black')
# center the text; show it for 2 seconds, starting at second 3
txt_clip = txt_clip.set_pos('center').set_duration(2).set_start(3)
# overlay the text clip on the video clip
video = CompositeVideoClip([clip, txt_clip])
# show the video
video.ipython_display(width=280)
Everything works well except that the text suddenly changes to something like '#/tmp/tmpq5b_zzvf.txt', which is obviously not the intended effect.
Besides googling, I tried dir(txt_clip) to see if I could change it manually, and it does have a txt attribute associated with it. But inserting txt_clip.txt = "Hello world" right afterward does not do anything.
What could be the culprit?
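For context on why the txt_clip.txt assignment has no effect: TextClip renders its text via ImageMagick at construction time, so mutating the txt attribute afterwards cannot change the already-rendered frames; the clip has to be rebuilt instead. A minimal sketch, reusing the parameters from the code above:
# rebuild the clip with the desired text instead of mutating .txt
txt_clip = TextClip("Hello world", fontsize=75, color='black')
txt_clip = txt_clip.set_pos('center').set_duration(2).set_start(3)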
Making a slideshow in moviepy works without issue, but I just can't figure out how to add a crossfade transition between the slides. I'm very new to Python and moviepy and could really use some help. Thanks!
import os
from moviepy.editor import ImageClip, concatenate_videoclips

img_clips = []
path_list = []

# collect the paths of all PNG images
for image in os.listdir('E:/moviepy'):
    if image.endswith(".png"):
        path_list.append(os.path.join('E:/moviepy/', image))

# create a slide for each image
for img_path in path_list:
    slide = ImageClip(img_path, duration=0.2)
    img_clips.append(slide)

# concatenate the slides
video_slides = concatenate_videoclips(img_clips, method='compose')

# export the final video
video_slides.write_videofile("E:/moviepy/output_video.mp4", fps=30)
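A common way to get a crossfade with moviepy 1.x, sketched below under the assumption that each slide lasts longer than the fade (with 0.2 s slides the fade must be shorter, e.g. 0.05 s): give every slide after the first a crossfadein, then concatenate with negative padding so consecutive slides overlap.
# crossfade sketch: the fade duration is an assumed value, tune to taste
fade = 0.05
faded = [img_clips[0]] + [c.crossfadein(fade) for c in img_clips[1:]]
video_slides = concatenate_videoclips(faded, method='compose', padding=-fade)
video_slides.write_videofile("E:/moviepy/output_video.mp4", fps=30)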
I have a use case where I'd want to insert one of two watermarks into a video: one designed for a dark-ish background, the other for a light background. Let's say I want to place it in the top right corner of the video.
How do I determine the average color of the top right section of the video? And once I have it, how do I decide which watermark to use based on that average color?
I have a solution right now where I take equally spaced screenshots and then measure the average color, but it's excruciatingly slow, especially for longer videos.
# Calculate average color
black = [0, 0, 0]       # reference RGB values (added here; the original
white = [255, 255, 255] # snippet used black/white without defining them)
black_distances = []
white_distances = []

movie = FFMPEG::Movie.new(video_file)
(0..movie.duration / 10).each do |second|
  # extract a frame
  filename = "tmp/watermark/#{SecureRandom.uuid}.jpg"
  movie.screenshot filename.to_s, seek_time: second

  # analyse the frame: average it down to one pixel, measure color distance
  frame = MiniMagick::Image.open(filename)
  frame.crop('20%x20%+80%+0')
  frame.resize('1x1')
  pixel = frame.get_pixels.flatten
  distance_from_black = Math.sqrt((black[0] - pixel[0])**2 + (black[1] - pixel[1])**2 + (black[2] - pixel[2])**2)
  distance_from_white = Math.sqrt((white[0] - pixel[0])**2 + (white[1] - pixel[1])**2 + (white[2] - pixel[2])**2)
  black_distances.push distance_from_black
  white_distances.push distance_from_white

  File.delete(filename) if File.exist?(filename)
end

average_black_distance = black_distances.reduce(:+).to_f / black_distances.size
average_white_distance = white_distances.reduce(:+).to_f / white_distances.size
I am also confused about how to use the resulting average_black_distance and average_white_distance to determine which watermark to use.
I was able to make this faster by doing the following (updated code below):
- Taking all screenshots with a single ffmpeg command, instead of iterating over the movie's duration and taking an individual shot every x seconds.
- Cropping and scaling the screenshots in the same ffmpeg command, instead of doing that with MiniMagick.
- Deleting the whole screenshot folder afterwards with rm_rf, instead of deleting each file inside the loop.
# Create a folder for the screenshots
foldername = "tmp/watermark/screenshots/#{medium.id}/"
FileUtils.mkdir_p foldername

# Take, crop and scale all screenshots with a single ffmpeg call
# (one 1x1 screenshot of the corner region every 5 seconds)
`ffmpeg -i #{video_file} -f image2 -vf fps=0.2,crop=in_w/6:in_h/14:0:0,scale=1:1 #{foldername}/out%d.png`

# Calculate distances
white = Color.new('#ffffff')
black = Color.new('#000000')
distances = []

Dir.foreach foldername do |f|
  next if %w[. .. .DS_Store].include? f
  f = MiniMagick::Image.open(foldername + f)
  color = f.get_pixels.flatten
  distance_from_white = Color.new(color).color_distance(white)
  distance_from_black = Color.new(color).color_distance(black)
  distances.push distance_from_black - distance_from_white
end
If the value of distances.inject(0, :+) is positive, the video area captured is on the brighter side (closer to white than to black). Thanks to @Aetherus!
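For readers more comfortable in Python, here is a minimal sketch of the same approach; it assumes ffmpeg is on the PATH, and video.mp4 plus the watermark file names are placeholders:
# take one tiny screenshot every 5 seconds, then sum the distance differences
import glob
import os
import subprocess
from PIL import Image

os.makedirs("tmp/watermark/screenshots", exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "video.mp4", "-vf",
     "fps=0.2,crop=in_w/6:in_h/14:0:0,scale=1:1",
     "tmp/watermark/screenshots/out%d.png"],
    check=True)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

total = 0.0
for path in glob.glob("tmp/watermark/screenshots/out*.png"):
    pixel = Image.open(path).convert("RGB").getpixel((0, 0))
    # positive contribution when the pixel is closer to white than to black
    total += dist(pixel, (0, 0, 0)) - dist(pixel, (255, 255, 255))

# brighter corner -> use the watermark designed for light backgrounds
watermark = "watermark_for_light_bg.png" if total > 0 else "watermark_for_dark_bg.png"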
I have these two functions in my program:
import random
import PIL.Image
import PIL.ImageDraw
from PIL import ImageTk
from Tkinter import Label
# program_print, get_image_size, get_pixel_color, convert_RGB_HEX and
# the Tk widget `main` are defined elsewhere in the program

def depict_ph_increase(x, y, color, imobject):
    program_print(color)
    draw = PIL.ImageDraw.Draw(imobject)
    draw.text((x, y), color, (255, 255, 255))
    imobject.save('tmp-out.gif')
    im_temp = PIL.Image.open("tmp-out.gif")  # .convert2byte()
    im_temp = im_temp.resize((930, 340), PIL.Image.ANTIALIAS)
    MAP_temp = ImageTk.PhotoImage(im_temp)
    map_display_temp = Label(main, image=MAP_temp)
    map_display_temp.image = MAP_temp  # keep a reference!
    map_display_temp.grid(row=4, column=2, columnspan=3)

def read_temp_pixels(temperature_file, rngup, rngdown):
    temp_image_object = PIL.Image.open(temperature_file)
    (length, width) = get_image_size(temp_image_object)
    (rngxleft, rngxright) = rngup
    (rngyup, rngydown) = rngdown
    print 'the length and width is'
    print length, width
    hotspots = 5
    for hotspot in range(0, hotspots):
        color = "#ffffff"
        # resample until we land on a pixel that is not one of these colors
        while color in ("#ffffff", "#000000", "#505050", "#969696"):
            yc = random.randint(rngxleft, rngxright)
            xc = random.randint(rngyup, rngydown)
            color = convert_RGB_HEX(get_pixel_color(temp_image_object, xc, yc))
        depict_ph_increase(xc, yc, color, temp_image_object)
The bottom one calls the top one. Their job is to read in this image:
It then randomly selects a few pixels, grabs their colors, and writes the hex values of those colors on top. But when it redisplays the image, it gives me this garbage:
Those white numbers near the upper right corner are the hex values it's drawing. It's somehow reading the values from the corrupted image, despite the fact that I don't collect the values until AFTER I actually call the ImageDraw() method. Can someone explain to me why it is corrupting the image?
Some background: the get_pixel_color() function is used several other times in the program and is highly accurate; it's just somehow reading the pixel data from the newly corrupted image. Furthermore, I do similar image reading (but not writing) at other points in my code.
If there is anything I can clarify, or any other part of my code you want to see, please let me know. You can also view the program in its entirety on my GitHub here: https://github.com/jrfarah/coral/blob/master/src/realtime.py It should be commit #29.
Other SO questions I have examined, to no avail: Corrupted image is being saved with PIL
Any help would be greatly appreciated!
I fixed the problem by editing this line:
temp_image_object = PIL.Image.open(temperature_file)
to be
temp_image_object = PIL.Image.open(temperature_file).convert('RGB')
The source GIF is most likely a palette-mode ("P") image, and both drawing on and reading pixels from a palette image go through its color lookup table, which is what garbled the output; converting to RGB up front sidesteps that.
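A quick way to check this, as a sketch ("map.gif" stands in for the actual temperature file):
import PIL.Image

im = PIL.Image.open("map.gif")
print(im.mode)          # a GIF typically reports "P" (palette mode)
im = im.convert("RGB")  # now safe to draw on and read pixels from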
Canvas image produces awful color shifts in Chrome and Firefox (Mac) when saved to disk or uploaded to a server. Safari has faithful color. Examples below, plus a JSFiddle to reproduce with the original image. Notice how the subject's face becomes very orange.
http://jsfiddle.net/E4yRv/141/
(includes a step-by-step guide with sample images on how to reproduce)
Code Excerpt:
// `canvas`, `context` and `state` are assumed to exist already, e.g.:
// var canvas = document.querySelector('canvas'),
//     context = canvas.getContext('2d'),
//     state = document.querySelector('#state');
canvas.ondrop = function (e) {
  e.preventDefault();
  var file = e.dataTransfer.files[0],
      reader = new FileReader();
  reader.onload = function (event) {
    var img = new Image(),
        imgStr = event.target.result;
    state.innerHTML += ' Img source dropped in: <a href="' +
      imgStr + '" target="_blank">save image</a><br />';
    img.src = event.target.result;
    img.onload = function (event) {
      canvas.height = this.height;
      canvas.width = this.width;
      context.drawImage(this, 0, 0);
      state.innerHTML += ' Canvas img drawn: save canvas <br />*add .jpg extension when saving';
    };
  };
  reader.readAsDataURL(file);
  return false;
};
When the canvas draws the image in the respective browser, the color renders to match the original. However, if the image is saved to disk locally and viewed in Photoshop (the only program that truly handles color), the colors have shifted! The same occurs when viewing the saved file in a different browser.
Inspecting the images in Photoshop, none have an embedded color profile, yet some translation of color has clearly occurred. It does not appear to be a case of a misaligned or missing profile.
I have produced a detailed JSFiddle showing how to reproduce the issue:
http://jsfiddle.net/E4yRv/141/
I encountered a similar color shift phenomenon in the browser recently, and the color translation in the human face can be reproduced with the JSFiddle code. After some research, and thanks to the suggestion provided by Alexander O'Mara (the color profile embedded in the image), here is my conclusion:
The phenomenon is caused by the color profile embedded in the sample image. If you open the original image in Photoshop, it shows a warning dialog indicating that the embedded color profile doesn't match the currently used working space, and offers three options:
1. use the embedded color profile (in short, let's call it ref_ICC)
2. convert the colors to the current working space (after the conversion a new ICC profile, con_ICC, is embedded in the resulting image)
3. ignore the color profile
After saving the three images with the respective options, let's open them in some image viewers. I use XnView to check whether a color profile (ICC) is embedded in an image, and Photoshop to check the RGB histogram. The result is:
(downloaded* stands for the image obtained via canvas.toDataURL in the JSFiddle code)

          | option 1 | option 2 | option 3 | downloaded*
histogram | ref_hist | con_hist | ref_hist | con_hist
ICC       | ref_ICC  | con_ICC  | X        | X
In a viewer that doesn't care about the color profile (FastStone, for example), the displayed RGB values depend only on the histogram. So the original image (option 1) looks different from the downloaded one, whose histogram seems to have been converted by the browser much as Photoshop does for option 2. On the other hand, the image pairs "option 1 & 3" and "option 2 & downloaded" display identically.
If the images are viewed in Photoshop instead, the three images (options 1, 2 and the downloaded one) display identically. As a final comparison, the downloaded image displays the same in FastStone and Photoshop, while the original image displays differently in those two viewers.
I am writing a function that generates a movie mimicking a particle in a fluid. The movie is coloured, and I would like to generate a grayscale movie to start with. Right now I am using avifile instead of VideoWriter. Any help on changing this code to get a grayscale movie? Thanks in advance.
close all;
clear variables;
colormap('gray');

% N, Lx, Ly, Nx, Ny, x, y, z, rad, ns, nm, lambda, an, bn, tf_flag and
% cc_flag are set up earlier; coordinates(), nfmie() and writetif() are
% helper functions (nfmie comes from a Mie scattering toolbox)
vidObj = avifile('movie.avi');
for i = 1:N
    [nx, ny] = coordinates(Lx, Ly, Nx, Ny, [x(i), -y(i)]);
    [xf, yf] = ndgrid(nx, ny);
    zf = zeros(size(xf)) + z(i);

    % generate a frame here
    [E, H] = nfmie(an, bn, xf, yf, zf, rad, ns, nm, lambda, tf_flag, cc_flag);
    Ecc = sqrt(real(E(:,:,1)).^2 + real(E(:,:,2)).^2 + real(E(:,:,3)).^2 + ...
               imag(E(:,:,1)).^2 + imag(E(:,:,2)).^2 + imag(E(:,:,3)).^2);
    clf
    imagesc(nx/rad, ny/rad, Ecc);
    writetif(Ecc, i);
    if i == 1
        cl = caxis;
    else
        caxis(cl)
    end
    axis image;
    axis off;

    % pad the frame so its dimensions are multiples of 4
    % (required by some AVI codecs)
    frame = getframe(gca);
    cdata_size = size(frame.cdata);
    data = uint8(zeros(ceil(cdata_size(1)/4)*4, ceil(cdata_size(2)/4)*4, 3));
    data(1:cdata_size(1), 1:cdata_size(2), 1:cdata_size(3)) = frame.cdata;
    frame.cdata = data;
    vidObj = addframe(vidObj, frame);
end
vidObj = close(vidObj);
For your frame data, use rgb2gray to convert a colour frame into its grayscale counterpart. As such, change this line:
data(1:cdata_size(1),1:cdata_size(2),1:cdata_size(3)) = [frame.cdata];
To these two lines:
frameGray = rgb2gray(frame.cdata);
data(1:cdata_size(1), 1:cdata_size(2), 1:cdata_size(3)) = ...
    cat(3, frameGray, frameGray, frameGray);
The first line of the new code converts your colour frame into a single-channel grayscale image. Viewed as colour, a grayscale image has identical values in all three channels, which is why the second line calls cat(3, frameGray, frameGray, frameGray): it stacks three copies of the grayscale image into a 3D matrix, which you can then write to your file.
You need this stacking because when writing a frame to file (with addframe here, and likewise with VideoWriter), the frame must be colour, i.e. a 3D matrix. As such, the only workaround for writing a grayscale frame is to replicate the grayscale image into each of the red, green and blue channels to create its colour equivalent.
BTW, cdata_size(3) will always be 3, as the cdata field returned by getframe is always a 3D matrix.
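For readers outside MATLAB, the same channel-replication idea sketched in Python with numpy (the frame here is a made-up stand-in array; the luma weights are the ones rgb2gray uses):
import numpy as np

# a stand-in colour frame (H x W x 3, uint8)
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)

# luma-weighted grayscale conversion, matching MATLAB's rgb2gray
gray = (0.2989 * frame[..., 0] + 0.5870 * frame[..., 1]
        + 0.1140 * frame[..., 2]).astype(np.uint8)

# replicate the single channel into R, G and B so writers that
# expect colour frames will accept it
gray_rgb = np.stack([gray, gray, gray], axis=-1)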
Good luck!