I'm using AVPlayer in a Mac app to play random videos in fullscreen from a folder, but when I try to play .vob or .mpg files I just get a black screen that lasts as long as the video.
Does AVFoundation not support playback from these containers? I figured that since they were playable with stock QuickTime Player they would also work with AVPlayer.
The AVURLAsset class has a static method that you can query for the supported audiovisual UTIs:
+ (NSArray *)audiovisualTypes
On 10.9.1 it returns these system-defined UTIs:
public.mpeg
public.mpeg-2-video
public.avi
public.aifc-audio
public.aac-audio
public.mpeg-4
public.au-audio
public.aiff-audio
public.mp2
public.3gpp2
public.ac3-audio
public.mp3
public.mpeg-2-transport-stream
public.3gpp
public.mpeg-4-audio
Here is an explanation of system UTIs. So it seems that at least the .mpg container should be supported.
According to Wikipedia, .mpg files can contain MPEG-1 or MPEG-2 video, but only MPEG-2 video is supported. So maybe that's why the file loads but nothing is displayed.
QuickTime internally uses QTMovieModernizer in order to play videos in legacy formats (as mentioned in this WWDC session), so maybe you can look into that. It even has a method for determining if a file needs to be modernized:
+ requiresModernization:error:
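Separately from modernizing, you can ask AVFoundation directly what it makes of one of the problem files. Below is a minimal diagnostic sketch (modern Swift; the file path is a placeholder): if the asset loads but reports zero video tracks, that matches the symptom of a black screen lasting exactly as long as the video.

import AVFoundation

// Placeholder path; point this at one of the problematic files.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/movie.mpg"))

asset.loadValuesAsynchronously(forKeys: ["playable", "tracks"]) {
    var error: NSError?
    guard asset.statusOfValue(forKey: "tracks", error: &error) == .loaded else {
        print("Could not load tracks: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    // "playable" only means the container can be opened; a supported
    // container holding an unsupported codec can still yield no usable video.
    print("playable: \(asset.isPlayable)")
    print("video tracks: \(asset.tracks(withMediaType: .video).count)")
}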
To get a list of supported extensions, you can use the following function:
import AVKit
import MobileCoreServices

func getAllowedAVPlayerFileExtensions() -> [String] {
    // Ask AVFoundation for the UTIs it can play, then map each UTI to its
    // preferred filename extension (types with no extension map to "").
    let avTypes = AVURLAsset.audiovisualTypes()
    var avExtensions = avTypes.map {
        UTTypeCopyPreferredTagWithClass($0 as CFString, kUTTagClassFilenameExtension)?
            .takeRetainedValue() as String? ?? ""
    }
    // Drop the types that have no filename extension.
    avExtensions = avExtensions.filter { !$0.isEmpty }
    return avExtensions
}
This will return a list like so:
["caf", "ttml", "au", "ts", "mqv", "pls", "flac", "dv", "amr", "mp1", "mp3", "ac3", "loas", "3gp", "aifc", "m2v", "m2t", "m4b", "m2a", "m4r", "aa", "webvtt", "aiff", "m4a", "scc", "mp4", "m4p", "mp2", "eac3", "mpa", "vob", "scc", "aax", "mpg", "wav", "mov", "itt", "xhe", "m3u", "mts", "mod", "vtt", "m4v", "3g2", "sc2", "aac", "mp4", "vtt", "m1a", "mp2", "avi"]
I'm trying to change the max buffer length for my video stream in the Clappr video player.
I know that in HLS format the way to do it is like this:
player = new Clappr.Player({
  playback: {
    hlsjsConfig: {
      maxMaxBufferLength: 30
    }
  }
});
And it's really working for HLS videos. I'm looking for an equivalent way to do it with the MPEG-DASH format.
How are you playing DASH in Clappr?
If you are using Shaka (https://github.com/clappr/dash-shaka-playback), set it up as shown in that repository's README, using the buffer settings you require as described at https://github.com/google/shaka-player/blob/master/docs/tutorials/network-and-buffering-config.md#buffering-configuration
Eg:
player = new Clappr.Player({
  source: '//storage.googleapis.com/shaka-demo-assets/angel-one/dash.mpd',
  plugins: [DashShakaPlayback],
  shakaConfiguration: {
    preferredAudioLanguage: 'pt-BR',
    streaming: {
      bufferingGoal: 30,
      rebufferingGoal: 15,
      bufferBehind: 60
    }
  }
});
I'm creating a video (QuickTime .mov format, H.264 encoded) from a bunch of still images, and I want to add a chapter track in the process. The video is being created fine, and I am not detecting any errors, but QuickTime Player does not show any chapters. I am aware of this question but it does not solve my problem.
The old QuickTime Player 7, unlike recent versions, can show information about the tracks of a movie. When I open a movie with working chapters (created using old QuickTime code), I see a video track and a text track, and the video track knows that the text track is providing chapters for the video. Whereas, if I examine a movie created by my new code, there is a metadata track along with the video track, but QuickTime does not know that the metadata track is supposed to be providing chapters. Things I've read have led me to believe that one is supposed to use metadata for chapters, but has anyone actually gotten that to work? Would a text track work?
Here's how I am creating the AVAssetWriterInput for the metadata.
// Make a dummy AVMetadataItem to get its format
AVMutableMetadataItem* dummyMetaItem = [AVMutableMetadataItem metadataItem];
dummyMetaItem.identifier = AVMetadataIdentifierQuickTimeUserDataChapter;
dummyMetaItem.dataType = (NSString*) kCMMetadataBaseDataType_UTF8;
dummyMetaItem.value = @"foo";
AVTimedMetadataGroup* dummyGroup = [[[AVTimedMetadataGroup alloc]
    initWithItems: @[dummyMetaItem]
    timeRange: CMTimeRangeMake( kCMTimeZero, kCMTimeInvalid )] autorelease];
CMMetadataFormatDescriptionRef metaFmt = [dummyGroup copyFormatDescription];

// Make the input
AVAssetWriterInput* metaWriterInput = [AVAssetWriterInput
    assetWriterInputWithMediaType: AVMediaTypeMetadata
    outputSettings: nil
    sourceFormatHint: metaFmt];
CFRelease( metaFmt );

// Associate the metadata input with the video input
[videoInput addTrackAssociationWithTrackOfInput: metaWriterInput
    type: AVTrackAssociationTypeChapterList];

// Associate the metadata input with the AVAssetWriter
[writer addInput: metaWriterInput];

// Create a metadata adaptor
AVAssetWriterInputMetadataAdaptor* metaAdaptor = [AVAssetWriterInputMetadataAdaptor
    assetWriterInputMetadataAdaptorWithAssetWriterInput: metaWriterInput];
P.S. I tried using a text track instead (an AVAssetWriterInput of type AVMediaTypeText) and QuickTime Player says the result is "not a movie". Not sure what I'm doing wrong.
I managed to use a text track to provide chapters. I spent an Apple developer tech support incident and was told that this is the right way to do it.
Setup:
I assume that the AVAssetWriter has been created, and an AVAssetWriterInput for the video track has been assigned to it.
The trickiest part here is creating the text format description. The docs say that CMTextFormatDescriptionCreateFromBigEndianTextDescriptionData takes as input a TextDescription structure, but neglect to say where that structure is defined. It is in Movies.h, which is in QuickTime.framework, which is no longer part of the Mac OS SDK. Thanks, Apple. (Fortunately, the code below only needs the first two fields of the classic sample-description layout: a big-endian 4-byte size and a 4-byte format code; everything else can be zero-filled.)
// Create the AVAssetWriterInput
AVAssetWriterInput* textWriterInput = [AVAssetWriterInput
    assetWriterInputWithMediaType: AVMediaTypeText
    outputSettings: nil ];
textWriterInput.marksOutputTrackAsEnabled = NO;

// Connect the input to the writer
[writer addInput: textWriterInput];

// Mark the text track as providing chapters for the video
[videoWriterInput addTrackAssociationWithTrackOfInput: textWriterInput
    type: AVTrackAssociationTypeChapterList];

// Create the text format description, which we will need
// when creating each sample.
CMFormatDescriptionRef textFmt = NULL;
TextDescription textDesc;
memset( &textDesc, 0, sizeof(textDesc) );
textDesc.descSize = OSSwapHostToBigInt32( sizeof(textDesc) );
textDesc.dataFormat = OSSwapHostToBigInt32( 'text' );
CMTextFormatDescriptionCreateFromBigEndianTextDescriptionData( NULL,
    (const uint8_t*)&textDesc, sizeof(textDesc), NULL, kCMMediaType_Text,
    &textFmt );
Writing a Sample:
CMSampleTimingInfo timing =
{
    CMTimeMakeWithSeconds( endTime - startTime, timeScale ), // duration
    CMTimeMakeWithSeconds( startTime, timeScale ),           // presentationTimeStamp
    kCMTimeInvalid                                           // decodeTimeStamp
};
CMSampleBufferRef textSample = NULL;
CMPSampleBufferCreateWithText( NULL, (CFStringRef)theTitle, true, NULL, NULL,
    textFmt, &timing, &textSample );
[textWriterInput appendSampleBuffer: textSample];
The function CMPSampleBufferCreateWithText is taken from the open source CoreMediaPlus.
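To check that the chapter association actually took, you can read the chapters back from the finished file with AVFoundation; if the track association was written correctly, the groups come back non-empty. A minimal sketch (modern Swift; the output path is a placeholder):

import AVFoundation

// Placeholder path to the freshly written movie.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/output.mov"))
let groups = asset.chapterMetadataGroups(withTitleLocale: Locale(identifier: "en"),
                                         containingItemsWithCommonKeys: nil)
for group in groups {
    let title = AVMetadataItem.metadataItems(from: group.items,
                                             withKey: AVMetadataKey.commonKeyTitle,
                                             keySpace: .common)
        .first?.stringValue ?? "(untitled)"
    print("\(group.timeRange.start.seconds)s: \(title)")
}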
It seems that MediaSource and progressive playback use different demuxers: ChunkDemuxer is used for MediaSource, and ShellDemuxer is used for progressive playback.
In the ShellParser.cpp implementation:
PipelineStatus ShellParser::Construct(
    scoped_refptr<ShellDataSourceReader> reader,
    scoped_refptr<ShellParser>* parser,
    const scoped_refptr<MediaLog>& media_log) {
  DCHECK(parser);
  DCHECK(media_log);
  *parser = NULL;

  // download first 16 bytes of stream to determine file type and extract basic
  // container-specific stream configuration information
  uint8 header[kInitialHeaderSize];
  int bytes_read = reader->BlockingRead(0, kInitialHeaderSize, header);
  if (bytes_read != kInitialHeaderSize) {
    return DEMUXER_ERROR_COULD_NOT_PARSE;
  }

  // attempt to construct mp4 parser from this header
  return ShellMP4Parser::Construct(reader, header, parser, media_log);
}
It seems that Cobalt can only demux the MP4 container (there is only ShellMP4Parser) for progressive playback.
Is this a known limitation of Cobalt? How can we support WebM progressive playback on the device?
Cobalt will not support WebM/VP9 progressive playback. We changed the Progressive Conformance test to use H.264 instead of VP9. This will be pushed soon.
https://github.com/youtube/js_mse_eme/commit/d7767e13be7ed8b8bdb2efda39337a4a2fb121ba
I have a video background for the welcome screen of the app, but when I run it, only a blank screen appears.
The string in the pathForResource part is perfectly correct, so a typo isn't the problem. When I put a breakpoint in the "if path" block, though, it isn't getting hit. This is in viewDidLoad, and AVKit and AVFoundation are both imported; there are no errors.
Any insights? Thanks!
let moviePath = NSBundle.mainBundle().pathForResource("Ilmatic_Wearables", ofType: "mp4")
if let path = moviePath {
    let url = NSURL.fileURLWithPath(path)
    let player = AVPlayer(URL: url)
    let playerViewController = AVPlayerViewController()
    playerViewController.player = player
    playerViewController.view.frame = self.view.bounds
    self.view.addSubview(playerViewController.view)
    self.addChildViewController(playerViewController)
    player.play()
}
Weird, your code should work if moviePath isn't nil as you are saying. Check this way:
if moviePath != nil {
    ...
}
Update
Check if your video file's Target Membership is set
EDIT (11/13/2015) - Try this:
Right Click, delete the video file from your project navigator, select move to trash.
Then drag the video from finder back into your project navigator.
When choosing options for adding these files:
check Copy items if needed
check add to targets
Rebuild your project, see if it works now!
I have included sample code that works for me:
import UIKit
import AVFoundation
import AVKit

class ExampleViewController: UIViewController {

    func loadAndPlayFile() {
        guard let path = NSBundle.mainBundle().pathForResource("filename", ofType: "mp4") else {
            print("Error: Could not locate video resource")
            return
        }
        let url = NSURL(fileURLWithPath: path)
        let moviePlayer = AVPlayer(URL: url)
        let moviePlayerController = AVPlayerViewController()
        moviePlayerController.player = moviePlayer
        presentViewController(moviePlayerController, animated: true) {
            moviePlayerController.player?.play()
        }
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        loadAndPlayFile()
    }
}
Edit:
Check to see if the video file plays in iTunes. If it plays in iTunes, it should also play on the iOS simulator and on a device.
If it doesn't play in iTunes, but does work with VLC Media Player, QuickTime, etc., it needs to be re-encoded.
iOS supports many industry-standard video formats and compression standards, including the following:
H.264 video, up to 1.5 Mbps, 640 by 480 pixels, 30 frames per second, Low-Complexity version of the H.264 Baseline Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
H.264 video, up to 768 Kbps, 320 by 240 pixels, 30 frames per second, Baseline Profile up to Level 1.3 with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
MPEG-4 video, up to 2.5 Mbps, 640 by 480 pixels, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
Numerous audio formats, including the ones listed in Audio Technologies
I am using the following code snippet to record the screen, and in most situations the recorded WMV file is clear enough, but some parts of the video are not very clear (grey color in some parts). What I record is a PPT in full-screen mode. I am using Windows Media Encoder 9.
Here is my code snippet,
IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");

// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
Thanks in advance,
George
I would guess that it's a problem with your encoder profile or settings, and not a problem with the code. If you're using the default "Screen Video/Audio High (CBR)" profile in WME9, it's using a video bitrate of 250Kbps, which is pretty low. I'd suggest creating a custom profile in the Windows Media Encoder Profile Editor Utility. Something like this:
awesomesc.prx
Name: Awesome Screen Profile
Audio: WMA 9.2 CBR (32kbps, 44kHz, mono CBR)
Video: WMV 9 Screen Quality VBR (Video size Same as video input, Frame rate 10fps, Key frame interval 3sec, Video quality 90)
Then just change the code to match the custom profile's name.
if (Pro.Name == "Awesome Screen Profile")
The encoder settings would take a much longer post to go through, but if you have not changed them from the defaults, you should be OK.
The Quality-based VBR algorithm can be pretty amazing, and will likely produce a surprisingly low average bitrate, but if VBR won't work for your needs, you can use the Windows Media Encoder Profile Editor utility to import the schia.prx profile that you're using and tweak the settings to find a higher CBR bitrate that produces acceptable quality.
"Screen Video/Audio Medium (CBR)"
it solved my problem