MoviePy does not write audio to video - moviepy

My code is as follows:
from moviepy.editor import *
clip = ImageClip(r"image path here")
audio = AudioFileClip(r"audio path here")
final_clip = CompositeVideoClip([clip])
final_clip2 = final_clip.set_audio(audio)
final_clip2 = final_clip2.set_duration(5)
final_clip2.write_videofile("output_video.mp4", fps=1)
I see no reason why this shouldn't work, but it simply doesn't write any audio to the file. I wondered whether it was something to do with FFmpeg, but reinstalling it made no difference. At this point I'm convinced it's a problem with my machine, because nobody else seems to be hitting this.
If it helps, the console says it is writing audio, but the red progress bar never actually fills the way it does when the video is written. Any help would be great, as I'm close to giving up.
(for the record any other methods of adding audio to a video with python would be appreciated)
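One thing that may be worth trying (untested against this setup): pass an explicit audio codec to write_videofile. MoviePy accepts an audio_codec argument, and MP3 audio inside an .mp4 container is not handled by every player, so forcing AAC sometimes makes the audio show up. A minimal sketch, assuming the same image and audio paths as above:
from moviepy.editor import ImageClip, AudioFileClip

clip = ImageClip(r"image path here").set_duration(5)
audio = AudioFileClip(r"audio path here")
final_clip = clip.set_audio(audio)

# audio_codec="aac" is widely supported inside .mp4 containers
final_clip.write_videofile("output_video.mp4", fps=1, codec="libx264",
                           audio_codec="aac")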

Related

Why does this code to play a sound using Python and Pygame on mac not load the file and crash?

I have looked at several questions on this site and multiple other resources and found this code that can be used to play a sound with pygame, but I have been having some issues with it.
import pygame
pygame.init()
song = pygame.mixer.Sound('sound.wav')
clock = pygame.time.Clock()
song.play()
while True:
    clock.tick(60)
pygame.quit()
I have tried two files ('sound.wav' and 'sound.mp3'), as well as single and double quotes, and it still doesn't work. When I pass the full path of the files, Python crashes. Is there something I can change in or add to this code to make it work, and if not, is there another solution?
The error I get says that it can't load the file.
EDIT: Now it crashes no matter what I pass to pygame.mixer.Sound(), saying that Python quit unexpectedly.
In pygame, a relative file path is resolved against the current working directory, which is not necessarily the folder your script lives in, so loading can fail unless you build an absolute path with os.
import pygame, os

# Build an absolute path relative to the directory this script lives in
dir = os.path.dirname(__file__)
pygame.init()
song = pygame.mixer.Sound(os.path.join(dir, 'sound.wav'))
clock = pygame.time.Clock()
song.play()
while True:
    clock.tick(60)
pygame.quit()
When using pygame to play a sound, make sure your code does something after starting the music (for example, stopping it later). mixer.music.play() does not block, so if your script only starts the music and then immediately reaches the end, Python exits before the sound can be heard.
For example:
from pygame import mixer

mixer.init()
mixer.music.load("song.mp3")
mixer.music.set_volume(0.5)  # set_volume expects a float between 0.0 and 1.0, not a string
mixer.music.play()

# If the script ended right here, the sound would never be heard, because
# play() returns immediately. Waiting for input keeps the script alive and
# gives you a point at which to stop playback.
st = input("type s to stop: ")
if st == "s":
    mixer.music.stop()
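A possible alternative, if you don't want to wait on user input: poll pygame.mixer.music.get_busy() until playback finishes. A minimal sketch, assuming a local song.mp3:
import time
from pygame import mixer

mixer.init()
mixer.music.load("song.mp3")
mixer.music.play()

# Keep the script alive until the track has finished, then shut the mixer down
while mixer.music.get_busy():
    time.sleep(0.1)
mixer.quit()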

Qt5 Separating a video widget

I am trying to find a way to create and separate a video widget into two parts, in order to process stereo videos:
The first one would play a part of the video;
The second one would play the other part of the video.
I currently do not know where to start. I am searching around qt multimedia module, but I do not know how to achieve this behavior.
Does anyone have an idea?
I was also thinking of building two video widgets and running them in two threads, but they would have to be perfectly synchronized. The idea was to cut the video into two with ffmpeg and assign each part to its own video widget. However, I don't think that would be easy to achieve (every frame would have to stay in sync).
Thanks for your answers.
If your stereo video data is encoded in some special format that needs decoding on the codec/container format, I think that the QMultiMedia stuff in Qt is too basic for this kind of use case as it does not allow tuning into "one stream" of a multi-stream transport container.
However, if you have alternating scan-lines, alternating frames or even "side-by-side" or "over-and-under" image per frame encoded in a "normal" video stream, then all you will have to do is intercept the frames as they are being decoded and separate the frame into two QImages and display them.
That is definitely doable!
However, depending on your video source and even the platform, you might want to select different methods. For example, if you are using a QCamera as the source of your video, you could use the QVideoProbe or QViewFinder approaches. Interestingly, the availability of those methods varies between platforms, so definitely figure that out first.
If you are decoding video using QMediaPlayer, QVideoProbe will probably be the way to go.
For an introduction to grabbing frames with the different methods, please look at some of the examples in the official documentation on the subject.
Here is a short example of using the QVideoProbe approach:
videoProbe = new QVideoProbe(this);

// Here, myVideoSource is a camera or other media object compatible with QVideoProbe
if (videoProbe->setSource(myVideoSource)) {
    // Probing succeeded, videoProbe->isValid() should be true.
    connect(videoProbe, SIGNAL(videoFrameProbed(QVideoFrame)),
            this, SLOT(processIndividualFrame(QVideoFrame)));
}

// Cameras need to be started. Do whatever your video source requires to start here
myVideoSource->start();

// [...]

// This is the slot where the magic happens (separating each single frame from the video
// into two QImages and posting the result to two QLabels, for example):
void processIndividualFrame(const QVideoFrame &frame)
{
    QVideoFrame cloneFrame(frame);
    cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
    const QImage image(cloneFrame.bits(),
                       cloneFrame.width(),
                       cloneFrame.height(),
                       QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));

    QSize sz = image.size();
    const int w = sz.width();
    const int h2 = sz.height() / 2;

    // Assumes "over-and-under" placement of stereo data for simplicity.
    // If you instead need access to individual scanlines, please have a look at [this][2].
    QImage leftImage = image.copy(0, 0, w, h2);
    QImage rightImage = image.copy(0, h2, w, h2);

    // Only unmap after copying: the QImage above still points at the mapped buffer.
    cloneFrame.unmap();

    // Assumes you have a UI set up with labels named as below, and with sizing / layout set up correctly
    ui->myLeftEyeLabel->setPixmap(QPixmap::fromImage(leftImage));
    ui->myRightEyeLabel->setPixmap(QPixmap::fromImage(rightImage));
    // Should play back rather smoothly, since both labels are effectively updated simultaneously
}
I hope this was useful.
BIG FAT WARNING: Only parts of this code have been tested or even compiled!

Matplotlib animation: setting fps=90 creates a movie file properly, but setting fps=120 does not?

Could you help me understand why this might be the case? This is what my command looks like:
ani.save("animation.avi", codec="libx264", fps=120)
The movie file produced by the above command is very small, on the order of hundreds of kilobytes. Playing the movie just shows a static picture (the very first frame).
ani.save("animation.avi", codec="libx264", fps=90)
This, on the other hand, creates a movie file of a reasonable size, on the order of megabytes, and it plays as an animation as well.
At first I thought this was an issue with a specific writer (avconv), but the problem occurs even when no writer is specified, so it may be a general issue with my setup or with matplotlib.
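One diagnostic that may help (untested against the setup in the question; ani is assumed to be the existing animation object): pass an explicitly configured FFMpegWriter rather than relying on the default writer selection, so the frame rate, codec, and pixel format are pinned down in one place:
import matplotlib.animation as animation

# Explicit writer: fps, codec and pixel format are stated up front, which
# makes it easier to tell whether the encoder or matplotlib is at fault.
writer = animation.FFMpegWriter(fps=120, codec="libx264",
                                extra_args=["-pix_fmt", "yuv420p"])
ani.save("animation.mp4", writer=writer)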

How to save DynamicSoundEffectInstance or SoundEffectInstance to a File or an Array?

I have a Windows Phone Silverlight application. I do this to slow down the voice and change the pitch of the microphone stream:
sound = new SoundEffect(bStream, microphone.SampleRate, AudioChannels.Mono);
SoundEffectInstance soundInstance = sound.CreateInstance();
soundInstance.Pitch -= 1;
soundInstance.Play();
here "bStream" is a byte array. The problem is I cannot save the data with the changed pitch (although I can play it). Is there a way to save my byte array after the pitch has been changed ? I tried DynamicSoundEffectInstance as well with same result. When I save bStream as a wav file, all the effects are gone.
Thanks for your help and insight.
Do you really need to save it with the pitch adjusted? Why not just save the pitch adjustment amount alongside the file and re-apply it (just as you are doing already) after you load the file up.
If you really do need to adjust the pitch of the data itself, you'll essentially be resampling it. This involves interpolating and/or averaging the values to stretch or compress them in time. I believe NAudio contains C# code for doing this: http://naudio.codeplex.com/
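To make the resampling idea concrete, here is a language-agnostic sketch of the interpolation step, written in Python/NumPy purely as an illustration (samples and pitch_factor are hypothetical names; a real implementation such as NAudio's handles the PCM format details for you):
import numpy as np

def resample_for_pitch(samples, pitch_factor):
    """Naive pitch shift by resampling: reading the samples faster raises
    the pitch (and shortens the sound); reading them slower lowers it."""
    old_indices = np.arange(len(samples))
    new_length = int(len(samples) / pitch_factor)
    new_indices = np.linspace(0, len(samples) - 1, new_length)
    # Linear interpolation between neighbouring samples
    resampled = np.interp(new_indices, old_indices, samples.astype(np.float64))
    return resampled.astype(np.int16)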

Sample code for using mac camera in a program?

I'd like to use the camera in my MacBook in a program. I'm fairly language-agnostic: C, Java, Python, etc. are all fine. Could anyone suggest the best place to look for documentation or "Hello World" type code?
The ImageKit framework in Leopard has an IKPictureTaker class that will let you run the standard picture-taking sheet or panel that you see in iChat and other applications.
If you don't want to use the standard picture-taker panel/sheet interface, you can use the QTKit Capture functionality to get an image from the iSight.
Both of these will require writing some Cocoa code in Objective-C, but that shouldn't really be an obstacle these days.
If you want to manipulate the camera directly from your code, you must use the QuickTime Capture APIs or the Cocoa QTKit Capture wrapper (much better).
The only caveat is: if you use a QTCaptureDecompressedVideoOutput, remember that the callbacks aren't made on the main thread but on the QuickTime-managed capture thread. Use [someObject performSelectorOnMainThread:... withObject:... waitUntilDone:NO] to send messages to an object on the main thread.
There is a utility called isightcapture that runs from the unix command line that takes a picture from the isight camera and saves it.
You can check it out at this web site: http://www.macupdate.com/info.php/id/18598
An example of using this with AppleScript is:
tell application "Terminal"
    do script "/Applications/isightcapture myimage.jpg"
end tell
From a related question which specifically asked for a Pythonic solution: give motmot's camiface library by Andrew Straw a try. It works with FireWire cameras, but it also works with the iSight, which is what you are looking for.
From the tutorial:
import motmot.cam_iface.cam_iface_ctypes as cam_iface
import numpy as np
mode_num = 0
device_num = 0
num_buffers = 32
cam = cam_iface.Camera(device_num,num_buffers,mode_num)
cam.start_camera()
frame = np.asarray(cam.grab_next_frame_blocking())
print 'grabbed frame with shape %s'%(frame.shape,)
It is used in this sample neuroscience demo.
Quartz Composer is also a pleasant way to capture and work with video, when it's applicable. There's a video input patch.
Quartz Composer is a visual programming environment that integrates into a larger Cocoa program if need be.
http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html
Another solution is OpenCV+python with a script like:
import cv
capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
It doesn't get much simpler than that!
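Note that this snippet uses OpenCV's legacy cv module. In more recent OpenCV releases the Python API lives in cv2; a roughly equivalent sketch (assuming the built-in camera is device 0) would be:
import cv2

capture = cv2.VideoCapture(0)   # 0 = default (built-in) camera
ok, frame = capture.read()      # frame is a BGR NumPy array when ok is True
capture.release()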
