Moviepy Rendered Video File Doesn't Have any Sound

I have just taken a simple video clip and rendered it with MoviePy. This is my code:
import moviepy.editor as mpe
video_clip = mpe.VideoFileClip("video.mp4")
audio_clip = mpe.AudioFileClip("closer.mp3")
video_clip.to_videofile("testingaudio.mp4",audio_codec = 'aac',audio = True)
The video is created. I even played it in VLC but there is no audio in it.

You need to attach the audio clip to the video clip before writing the file; you create audio_clip but never use it, and audio=True only keeps whatever audio the clip already carries. (Also, to_videofile is an old alias; use write_videofile.)
Example:
import moviepy.editor as mpe
video_clip = mpe.VideoFileClip("video.mp4")
audio_clip = mpe.AudioFileClip("closer.mp3")
video_clip = video_clip.set_audio(audio_clip)
video_clip.write_videofile("testingaudio.mp4", audio_codec='aac')

Related

ffmpeg pipe to generate h264 chunks from numpy array or bgr24 bytes

Can someone help me with an FFmpeg pipe to generate h264 chunks from a NumPy array?
This is what I am doing now:
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='bgr24', s='{}x{}'.format(self._deeptwin.image_width, self._deeptwin.image_height))
    .output(path, pix_fmt='yuv420p', vcodec='libx264', r=25)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)
for res in response:
    st = datetime.datetime.now()
    print(f'{res.status} image {res.id} rendered at {datetime.datetime.now()}')
    img = np.frombuffer(res.image, dtype=np.uint8)
    img = img.reshape((int(self._deeptwin.image_height * 1.5), self._deeptwin.image_width))
    img = cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420)
    for frame in img:
        start = datetime.datetime.now()
        process.stdin.write(
            frame
            .astype(np.uint8)
            .tobytes()
        )
        end = datetime.datetime.now()
    stop = datetime.datetime.now()
    print(f'{res.id} conversion done at {stop-st}')
process.stdin.close()
process.wait()
What this is doing currently is generating a single out.h264 file, and it plays correctly.
However, what I need to achieve: I want to generate chunks of h264 frames as 1.h264, 2.h264, ..., n.h264.
Any guidance would be really appreciated and helpful.
Thanks in advance.
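One stdlib-only sketch of the splitting step (the helper name and the fixed units-per-chunk policy are my own assumptions, not from an answer): a raw Annex B stream can be cut at NAL start codes and the units grouped into numbered files.

```python
def split_annexb(data, nals_per_chunk):
    """Split a raw Annex B H.264 byte stream at NAL start codes
    (00 00 01 or 00 00 00 01) and group the units into chunks."""
    offsets = []
    i = 0
    while i < len(data) - 2:
        if data[i] == 0 and data[i + 1] == 0:
            if data[i + 2] == 1:                      # 3-byte start code
                offsets.append(i)
                i += 3
                continue
            if i < len(data) - 3 and data[i + 2] == 0 and data[i + 3] == 1:
                offsets.append(i)                     # 4-byte start code
                i += 4
                continue
        i += 1
    # Slice out each NAL unit, then join fixed-size groups of them.
    nals = [data[s:e] for s, e in zip(offsets, offsets[1:] + [len(data)])]
    return [b''.join(nals[j:j + nals_per_chunk])
            for j in range(0, len(nals), nals_per_chunk)]
```

Writing the numbered files is then a loop over `enumerate(split_annexb(data, n), start=1)`, saving each chunk as `'%d.h264' % n`. Keep in mind a chunk only decodes on its own if it starts with SPS/PPS and a keyframe, which a production splitter would check by inspecting NAL types; alternatively, ffmpeg's segment muxer (`-f segment`) can produce numbered outputs directly.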

poster image not displaying with add_movie() in python-pptx

I am trying to insert some sound files into a presentation. The sound file seems to save fine, but the display image is always the default play-button logo. Is there something wrong with my code, or is it another issue? I am currently working in a Linux environment, if that makes any difference. I have tried with both mp4 and mp3 and the image issue is the same. The small play bar also seems not to appear, although the sound file is in the presentation.
from pptx import Presentation
from pptx.util import Inches
prs = Presentation()
title_slide_layout = prs.slide_layouts[0]
slide = prs.slides.add_slide(title_slide_layout)
prs.slides[0].shapes.add_movie("sample2.mp3",
    left=Inches(1), top=Inches(1), width=Inches(1), height=Inches(1),
    poster_frame_image="cat.jpeg"
)
prs.save('sound_image.pptx')
You can easily add any sound files to presentations and set poster images for them by using Aspose.Slides for Python as shown below:
import aspose.slides as slides

presentation_path = "example.pptx"
audio_path = "sample2.mp3"
image_path = "cat.jpeg"

with slides.Presentation() as presentation:
    slide = presentation.slides[0]
    # Add an audio frame to the slide with a specified position and size.
    with open(audio_path, 'rb') as audio_stream:
        audio_frame = slide.shapes.add_audio_frame_embedded(150, 100, 50, 50, audio_stream)
    # Add the image to presentation resources.
    with open(image_path, 'rb') as image_stream:
        audio_image = presentation.images.add_image(image_stream)
    # Set the image as the audio poster.
    audio_frame.picture_format.picture.image = audio_image
    presentation.save(presentation_path, slides.export.SaveFormat.PPTX)
I work as a Support Developer at Aspose.

Read many videos and convert them into images

I have a program that can read a single video and convert it into images. However, I have many videos, and I would like to write a function that reads the videos one by one and converts them into images. I aim to convert the videos into images to facilitate processing. The code for converting a single video is as follows.
import sys
import argparse
import cv2
vidcap = cv2.VideoCapture('E:/DATA/1/12.mp4')
path_out = "E:/DATA_Synth/12"
success,image = vidcap.read()
#image=cv2.resize(image, (640, 480))
count = 0
while success:
    cv2.imwrite(path_out + "/frame%03d.jpg" % count, image)
    #cv2.imwrite(path_out + "/frame%d.jpg" % count, image)
    success,image = vidcap.read()
    #image = cv2.resize(image, (640, 480))
    print ('Read a new frame: ', success)
    count += 1
Any suggestions and comments would be highly appreciated
You should loop over the folder that contains all your videos, pick each video, and run your code above on it.
import os
import sys
import argparse
import cv2

directory = "enter your folder directory for video"
for vidfile in os.listdir(directory):
    if vidfile.endswith('.mp4'):
        vidcap = cv2.VideoCapture(os.path.join(directory, vidfile))
        ## write your code here for converting the video to individual frames
    else:
        continue
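As a side note, `str.endswith` accepts a tuple, so one listing can cover several formats; a small sketch (the helper and its extension tuple are my own, not from the answer):

```python
import os

def list_videos(directory, exts=('.mp4', '.avi')):
    """Return the video filenames in `directory`, matched case-insensitively."""
    return sorted(f for f in os.listdir(directory) if f.lower().endswith(exts))
```

Sorting the result also makes the processing order deterministic across platforms, since os.listdir returns entries in arbitrary order.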
I have updated the code as follows.
import os
import sys
import argparse
import cv2

directory = "E:/Training/video"
path_out = "E:/DATA_Synth/12"
for vidfile in os.listdir(directory):
    if vidfile.endswith('.avi'):
        vidcap = cv2.VideoCapture(os.path.join(directory, vidfile))
        ## write your code here for converting the video to individual frames
        success,image = vidcap.read()
        #image=cv2.resize(image, (640, 480))
        count = 0
        while success:
            #vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*1000))
            cv2.imwrite(path_out + "/frame%03d.jpg" % count, image) # save frame as JPEG file
            #cv2.imwrite(path_out + "/frame%d.jpg" % count, image)
            success,image = vidcap.read()
            #image = cv2.resize(image, (640, 480))
            #print ('Read a new frame: ', success)
            count += 1
    else:
        continue
However, only the frames from the last video in my directory are saved: count restarts at 0 for each video and path_out is the same, so every video overwrites the previous one's frames. Please, how can I modify the code so that the frame name written here
cv2.imwrite(path_out + "/frame%d.jpg" % count, image)
also contains the name of the corresponding video?
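A minimal sketch of that naming fix (the helper name is hypothetical, not from the thread): derive a stem from each video's filename and build the frame path from both, so frames from different videos can no longer collide.

```python
import os

def frame_path(path_out, vidfile, count):
    """Build an output path like 'E:/DATA_Synth/12/12_frame005.jpg'
    from the video filename and the frame counter."""
    stem = os.path.splitext(os.path.basename(vidfile))[0]
    return os.path.join(path_out, "%s_frame%03d.jpg" % (stem, count))
```

Inside the loop, `cv2.imwrite(frame_path(path_out, vidfile, count), image)` would replace the original call, with `count` reset to 0 for each video.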

AVPlayer pathForResource not getting called

I have a video background for the welcome screen of the app, but when I run it, only a blank screen appears.
The string in the bundle pathForResource call is correct, so a typo isn't the problem. When I put a breakpoint in the "if path" block, though, it isn't getting hit. This is in viewDidLoad, and AVKit and AVFoundation are both imported; there are no errors.
Any insights? Thanks!
let moviePath = NSBundle.mainBundle().pathForResource("Ilmatic_Wearables", ofType: "mp4")
if let path = moviePath {
    let url = NSURL.fileURLWithPath(path)
    let player = AVPlayer(URL: url)
    let playerViewController = AVPlayerViewController()
    playerViewController.player = player
    playerViewController.view.frame = self.view.bounds
    self.view.addSubview(playerViewController.view)
    self.addChildViewController(playerViewController)
    player.play()
}
Weird, your code should work if moviePath isn't nil as you are saying. Check this way:
if moviePath != nil {
    ...
}
Update
Check if your video file's Target Membership is set
EDIT (11/13/2015) - Try this:
Right Click, delete the video file from your project navigator, select move to trash.
Then drag the video from finder back into your project navigator.
When choosing options for adding these files:
check Copy items if needed
check add to targets
Rebuild your project, see if it works now!
I have included sample code that works for me:
import AVFoundation
import AVKit

class ExampleViewController: UIViewController {
    func loadAndPlayFile() {
        guard let path = NSBundle.mainBundle().pathForResource("filename", ofType: "mp4") else {
            print("Error: Could not locate video resource")
            return
        }
        let url = NSURL(fileURLWithPath: path)
        let moviePlayer = AVPlayer(URL: url)
        let moviePlayerController = AVPlayerViewController()
        moviePlayerController.player = moviePlayer
        presentViewController(moviePlayerController, animated: true) {
            moviePlayerController.player?.play()
        }
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        loadAndPlayFile()
    }
}
Check whether the video file plays in iTunes. If it plays in iTunes, it should also play in the iOS Simulator and on a device.
If it doesn't play in iTunes but does work in VLC Media Player, QuickTime, etc., it needs to be re-encoded.
iOS supports many industry-standard video formats and compression standards, including the following:
H.264 video, up to 1.5 Mbps, 640 by 480 pixels, 30 frames per second, Low-Complexity version of the H.264 Baseline Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
H.264 video, up to 768 Kbps, 320 by 240 pixels, 30 frames per second, Baseline Profile up to Level 1.3 with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
MPEG-4 video, up to 2.5 Mbps, 640 by 480 pixels, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
Numerous audio formats, including the ones listed in Audio Technologies

How do I convert a video or a sequence of images to a bag file?

I am new to ROS. I need to convert a preexisting video file, or a large set of images that can be concatenated into a video stream, into a .bag file in ROS. I found this code online: http://answers.ros.org/question/11537/creating-a-bag-file-out-of-a-image-sequence/, but it says it is for camera calibration, so I'm not sure it fits my purpose.
Could someone with a good knowledge of ROS confirm that I can use the code in the link provided for my purposes, or if anyone actually has the code I'm looking for, could you please post it here?
The following code converts a video file to a bag file, inspired by the code in the link provided.
A little reminder:
this code depends on cv2 (OpenCV's Python bindings)
the time stamp of each ROS message is calculated from the frame index and the fps; the fps falls back to 24 if OpenCV is unable to read it from the video
import time, sys, os
from ros import rosbag
import roslib, rospy
roslib.load_manifest('sensor_msgs')
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

TOPIC = 'camera/image_raw/compressed'

def CreateVideoBag(videopath, bagname):
    '''Creates a bag file from a video file'''
    bag = rosbag.Bag(bagname, 'w')
    cap = cv2.VideoCapture(videopath)
    cb = CvBridge()
    prop_fps = cap.get(cv2.CAP_PROP_FPS)
    if prop_fps != prop_fps or prop_fps <= 1e-2:
        # NaN (prop_fps != prop_fps) or near-zero fps: fall back to 24
        print("Warning: can't get FPS. Assuming 24.")
        prop_fps = 24
    ret = True
    frame_id = 0
    while ret:
        ret, frame = cap.read()
        if not ret:
            break
        stamp = rospy.rostime.Time.from_sec(float(frame_id) / prop_fps)
        frame_id += 1
        image = cb.cv2_to_compressed_imgmsg(frame)
        image.header.stamp = stamp
        image.header.frame_id = "camera"
        bag.write(TOPIC, image, stamp)
    cap.release()
    bag.close()

if __name__ == "__main__":
    if len(sys.argv) == 3:
        CreateVideoBag(*sys.argv[1:])
    else:
        print("Usage: video2bag videofilename bagfilename")
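The stamp logic in the script boils down to the frame index divided by the fps, with a fallback when OpenCV reports NaN or zero; as a tiny stand-alone sketch of just that arithmetic (helper name mine):

```python
def frame_stamp(frame_id, prop_fps):
    """Seconds value for a frame's timestamp; falls back to 24 fps
    when the reported fps is NaN or effectively zero."""
    if prop_fps != prop_fps or prop_fps <= 1e-2:  # NaN != NaN
        prop_fps = 24.0
    return float(frame_id) / prop_fps
```

This is the number the script feeds to rospy's Time.from_sec for each frame, so playback timing in the bag matches the video's frame rate.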
