I have a video background for the welcome screen of the app, but when I run the app only a blank screen appears.
The string passed to the bundle's pathForResource call is definitely correct, so a typo isn't the problem. When I put a breakpoint inside the "if path" block, though, it never gets hit. This code is in viewDidLoad, both "AVKit" and "AVFoundation" are imported, and there are no errors.
Any insights? Thanks!
let moviePath = NSBundle.mainBundle().pathForResource("Ilmatic_Wearables", ofType: "mp4")

if let path = moviePath {
    let url = NSURL.fileURLWithPath(path)
    let player = AVPlayer(URL: url)
    let playerViewController = AVPlayerViewController()
    playerViewController.player = player
    playerViewController.view.frame = self.view.bounds
    self.view.addSubview(playerViewController.view)
    self.addChildViewController(playerViewController)
    player.play()
}
Weird, your code should work if moviePath isn't nil, as you say. Double-check it this way:
if moviePath != nil {
    ...
}
Update
Check if your video file's Target Membership is set
EDIT (11/13/2015) - Try this:
Right-click the video file in your project navigator, delete it, and select Move to Trash.
Then drag the video from Finder back into your project navigator.
When choosing options for adding the file:
check Copy items if needed
check your app under Add to targets
Rebuild your project and see if it works now!
I have included sample code that works for me:
import AVFoundation
import AVKit

class ExampleViewController: UIViewController {

    func loadAndPlayFile() {
        guard let path = NSBundle.mainBundle().pathForResource("filename", ofType: "mp4") else {
            print("Error: Could not locate video resource")
            return
        }

        let url = NSURL(fileURLWithPath: path)
        let moviePlayer = AVPlayer(URL: url)

        let moviePlayerController = AVPlayerViewController()
        moviePlayerController.player = moviePlayer

        presentViewController(moviePlayerController, animated: true) {
            moviePlayerController.player?.play()
        }
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        loadAndPlayFile()
    }
}
/edit
Check whether the video file plays in iTunes. If it plays in iTunes, it should also play in the iOS Simulator and on a device.
If it doesn't play in iTunes but does play in VLC Media Player, QuickTime, etc., it needs to be re-encoded. (A quick programmatic playability check is sketched after the format list below.)
iOS supports many industry-standard video formats and compression standards, including the following:
H.264 video, up to 1.5 Mbps, 640 by 480 pixels, 30 frames per second, Low-Complexity version of the H.264 Baseline Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
H.264 video, up to 768 Kbps, 320 by 240 pixels, 30 frames per second, Baseline Profile up to Level 1.3 with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
MPEG-4 video, up to 2.5 Mbps, 640 by 480 pixels, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
Numerous audio formats, including the ones listed in Audio Technologies
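You can also ask AVFoundation directly whether it considers the bundled file playable at all. A minimal sketch, written in the same Swift 2 style as the rest of this answer and reusing the resource name from the question:

import AVFoundation

// Quick sanity check: can AVFoundation locate and play the bundled video?
if let path = NSBundle.mainBundle().pathForResource("Ilmatic_Wearables", ofType: "mp4") {
    let asset = AVURLAsset(URL: NSURL.fileURLWithPath(path))
    print("playable: \(asset.playable)")
} else {
    print("Resource not found in bundle - check Target Membership")
}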
Related
Q1) How can I get video file details with macOS APIs?
Q2) How do I assess video quality of an mp4 file?
I need a program to separate a large archive of mp4 files based on the video quality - i.e., clarity, sharpness - roughly, where they'd appear along the TV spectrum of analog -> 720 -> 1080 -> 2/4k. In this case, audio, color levels, file size, CPU/GPU load, etc., are not considerations per se.
Q1) It is easy to find the "natural" dimensions with AVPlayer. With a bit more poking around (https://developer.apple.com/documentation/avfoundation/avpartialasyncproperty/3816116-formatdescriptions), I found that my files have "avc1" as the media subtype; I gather that means H.264. I can't locate ways to get more details, like bit rate, with Apple APIs - details that even QuickTime Player provides.
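For reference, a minimal sketch of that lookup, assuming a local file URL (the helper name is hypothetical):

import AVFoundation

// Hypothetical helper: prints the natural size and codec four-char code of a local video file.
func printBasicVideoInfo(url: URL) {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    print("natural size:", track.naturalSize)
    if let desc = track.formatDescriptions.first {
        let subtype = CMFormatDescriptionGetMediaSubType(desc as! CMFormatDescription)
        // Decode the FourCharCode into readable characters, e.g. "avc1" for H.264.
        let code = String([24, 16, 8, 0].compactMap {
            UnicodeScalar((subtype >> $0) & 0xFF).map { Character($0) }
        })
        print("media subtype:", code)
    }
}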
Lots of info is available with ffprobe, so I added it to my program. You too can embed a CLI program and run it inside a macOS application in the background - see the code at the bottom.
Q2) To a video noob, dimensions are the obvious first approximation of video quality ... and codec, but mine have all previously been converted to H.264. Next I consider the bit rates reported by ffprobe.
For testing, I located two H.264 files with the same dimensions (1280 x 720) and bit depth (8), and similar file size, frame rate, duration, amount of motion, and color content. To my eye, one of the two looks better, distinctly sharper; yet that file is smaller and has a 20-40% lower video bit rate, even when normalized for its slightly lower frame rate and duration.
From an information-theory perspective, that doesn't seem possible. I've learned that codecs can apply "quality" optimizations during compression - well past my understanding - but looking at the video stream data, I can't find indicators of any that would impact quality/sharpness. Nothing in the per-frame and per-packet data from ffprobe stands out.
Are there any tell-tale signs I should look for? Is this a fool's errand?
Here's my Swift hack to run ffprobe inside a macOS application (written with Xcode 13 on macOS 11.6). If you know how to run a Process() that lives in /usr/bin/..., please post - I don't get the entitlements thing. (Aliases/links to the home directory don't work.)
// takes a local fileURL and determines video properties using ffprobe
func runFFProbe(targetURL: URL) {

    func buildArguments(url: URL) -> [String] {
        // for an ffprobe introduction, see: https://ottverse.com/ffprobe-comprehensive-tutorial-with-examples/
        // and for complete info: https://ffmpeg.org/ffprobe.html
        var arguments: [String] = []
        // note: don't interpolate URL paths - they may have spaces in them
        let argString = "-v error -hide_banner -of default=noprint_wrappers=0 -print_format flat -select_streams v:0 -show_entries stream=width,height,bit_rate,codec_name,codec_long_name,profile,codec_tag_string,time_base,avg_frame_rate,r_frame_rate,duration_ts,bits_per_raw_sample,nb_frames "
        let _ = argString.split(separator: " ").map { arguments.append(String($0)) }
        // let _ suppresses the compiler warning about the unused result of the map call
        arguments.append(url.path) // spaces in the URL path seem to be okay here
        return arguments
    }

    let task = Process()
    // task.executableURL = URL(fileURLWithPath: "/usr/local/bin/ffprobe")
    // reports "doesn't exist", but really access is blocked by macOS :(
    // a statically-linked ffprobe is added to the app bundle instead,
    // downloadable here - https://evermeet.cx/ffmpeg/#sExtLib-ffprobe
    task.executableURL = Bundle.main.url(forResource: "ffprobe", withExtension: nil)
    task.arguments = buildArguments(url: targetURL)

    let pipe = Pipe()
    task.standardOutput = pipe // ffprobe writes its console output through standardOutput
                               // (ffmpeg uses standardError)
    let fh = pipe.fileHandleForReading
    var cumulativeResults = "" // accumulates the result from each buffer dump
    fh.waitForDataInBackgroundAndNotify() // set up the handle for listening

    // the object must be specified when running multiple simultaneous calls,
    // otherwise every instance receives messages from all other file handles too
    NotificationCenter.default.addObserver(forName: .NSFileHandleDataAvailable, object: fh, queue: nil) { notif in
        let closureFileHandle: FileHandle = notif.object as! FileHandle
        // get the data from the FileHandle
        let data: Data = closureFileHandle.availableData
        // print("received bytes: \(data.count)\n") // debugging
        if data.count > 0 {
            // re-arm fh for any additional data
            fh.waitForDataInBackgroundAndNotify()
            // append the new data to the accumulator
            let str = String(decoding: data, as: UTF8.self)
            cumulativeResults += str
            // optionally insert code here for intermediate reporting/parsing
            // self.printToTextView(string: str)
        }
    }

    task.terminationHandler = { task -> Void in
        DispatchQueue.main.async {
            // run the whole termination on the main queue
            if task.terminationReason == Process.TerminationReason.exit {
                // roll your own reporting method
                self.printToTextView(string: targetURL.lastPathComponent)
                self.printToTextView(string: targetURL.fileSizeString) // custom URL extension
                self.printToTextView(string: cumulativeResults)
                let str = "\nSuccess!\n"
                self.printToTextView(string: str)
            } else {
                print("Task did not terminate properly")
                // post an error in the UI too
                return
            }
            // successful run if this point is reached
        } // end DispatchQueue block
    } // end termination handler

    do {
        try task.run()
    } catch let error as NSError {
        print(error.localizedDescription)
        // post in the UI too
        return
    }
} // end runFFProbe()
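A minimal call site, assuming runFFProbe lives in the same view controller that provides printToTextView and the fileSizeString extension mentioned in the comments (the path below is just an example):

let sample = URL(fileURLWithPath: "/Users/me/Movies/example.mp4") // hypothetical path
runFFProbe(targetURL: sample)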
I'm using AVPlayer in a Mac app to play random videos in fullscreen from a folder, but when I try to play .vob or .mpg files I just get a black screen that lasts as long as the video.
Does AVFoundation not support playback from these containers? I figured that since they are playable in the stock QuickTime Player, they would also work with AVPlayer.
The AVURLAsset class has a static method that you can query for the supported UTIs:
+ (NSArray *)audiovisualTypes
On 10.9.1 it returns these system defined UTIs:
public.mpeg
public.mpeg-2-video
public.avi
public.aifc-audio
public.aac-audio
public.mpeg-4
public.au-audio
public.aiff-audio
public.mp2
public.3gpp2
public.ac3-audio
public.mp3
public.mpeg-2-transport-stream
public.3gpp
public.mpeg-4-audio
Here is an explanation of system UTIs. So it seems that at least the .mpg container should be supported.
According to Wikipedia, .mpg files can contain MPEG-1 or MPEG-2 video, but only MPEG-2 video is supported. So maybe that's why the file loads but nothing is displayed.
QuickTime internally uses QTMovieModernizer to play videos in legacy formats (as mentioned in this WWDC session), so maybe you can look into that. It even has a method for determining whether a file needs to be modernized:
+ requiresModernization:error:
To get a list of supported extensions, you can use the following function:
import AVKit
import MobileCoreServices

func getAllowedAVPlayerFileExtensions() -> [String] {
    let avTypes = AVURLAsset.audiovisualTypes()
    var avExtensions = avTypes.map { UTTypeCopyPreferredTagWithClass($0 as CFString, kUTTagClassFilenameExtension)?.takeRetainedValue() as String? ?? "" }
    avExtensions = avExtensions.filter { !$0.isEmpty }
    return avExtensions
}
This will return a list like so:
["caf", "ttml", "au", "ts", "mqv", "pls", "flac", "dv", "amr", "mp1", "mp3", "ac3", "loas", "3gp", "aifc", "m2v", "m2t", "m4b", "m2a", "m4r", "aa", "webvtt", "aiff", "m4a", "scc", "mp4", "m4p", "mp2", "eac3", "mpa", "vob", "scc", "aax", "mpg", "wav", "mov", "itt", "xhe", "m3u", "mts", "mod", "vtt", "m4v", "3g2", "sc2", "aac", "mp4", "vtt", "m1a", "mp2", "avi"]
I'm currently writing a small application which takes a folder containing many short video files (~1 min each) and plays them as if they were ONE long video file.
I've been using AVQueuePlayer to play them one after another, but I was wondering whether there is an alternative, because I'm running into some problems:
there is a small but noticeable gap when the player switches to the next file
I can't go back to the previous video file without removing all the items from the queue and putting them back
I'd like to be able to go to any point in the video, just as if it were a single video file. Is AVPlayer the best approach for this?
I realize that it's been about 6 years since this was asked, but I found a solution to this shortly after seeing this question and maybe it will be helpful to someone else.
Instead of using an AVQueuePlayer, I combined the clips into an AVMutableComposition (a subclass of AVAsset), which I could then play in a normal AVPlayer.
let assets: [AVAsset] = urlsOfVideos.map(AVAsset.init)
let composition = AVMutableComposition()
let compositionVideoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

var insertTime = CMTime.zero
for asset in assets {
    let range = CMTimeRange(start: .zero, duration: asset.duration)
    guard let videoTrack = asset.tracks(withMediaType: .video).first,
          let audioTrack = asset.tracks(withMediaType: .audio).first else {
        continue
    }
    // carry over the source clip's orientation/transform
    compositionVideoTrack?.preferredTransform = videoTrack.preferredTransform
    try? compositionVideoTrack?.insertTimeRange(range, of: videoTrack, at: insertTime)
    try? compositionAudioTrack?.insertTimeRange(range, of: audioTrack, at: insertTime)
    insertTime = CMTimeAdd(insertTime, asset.duration)
}
Then you create the player like this
let player = AVPlayer(playerItem: AVPlayerItem(asset: composition))
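Because the composition behaves like a single asset, seeking works across clip boundaries just as it would in one long file; a minimal sketch (the target time is arbitrary):

// Jump to 90 seconds into the combined timeline, wherever that falls among the clips.
let target = CMTime(seconds: 90, preferredTimescale: 600)
player.seek(to: target)
player.play()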
I'm finding it difficult to determine how to extract the following information from a QuickTime movie, either using QTKit or the older QuickTime APIs in OS X, targeting 10.5+:
Video and audio codecs used (e.g. "H.264")
Video and audio bitrates (e.g. 64 kbps)
Dimensions
The specific problems I've encountered are:
1) The only means to the video and audio codec names that I've found involve the use of ImageDescriptionHandle and SoundDescriptionHandle, both of which appear to require the Carbon-only methods NewHandleClear and DisposeHandle, as well as requiring the 32-bit only Media object. Is there a more modern method that doesn't require the Carbon framework and is 64-bit compatible?
2) For the bitrate, I'm calling GetMediaDataSizeTime64 and dividing the result by the track duration in seconds. However, in the case of one audio track, that calculation yields 128 kbps, but calling QTSoundDescriptionGetProperty with the audio track media and the kQTAudioPropertyID_FormatString param returns a string of "64 kbps". Why would those two values be different? Is there a better way to calculate a track's bitrate?
3) Dimensions returned by [[movie movieAttributes] objectForKey:QTMovieNaturalSizeAttribute] or by [track attributeForKey:QTTrackDimensionsAttribute] are incorrect for one particular movie. The size returned is 720 x 480, but the actual view size in QuickTime Player is 640 x 480. Player's info window shows a size string of "720 x 480 (640 x 480)". Is there a better way to determine the actual movie dimensions?
Thanks in advance!
This metadata can be obtained from the [movie tracks] QTTrack* objects.
1) Enumerating through the tracks, you can find the video and audio tracks.
QTMedia* media = [track media];
if ([media hasCharacteristic:QTMediaCharacteristicVisual])
{
// video track
}
if ([media hasCharacteristic:QTMediaCharacteristicAudio])
{
// audio track
}
The information about codecs:
NSString* summary = [track attributeForKey:QTTrackFormatSummaryAttribute];
2) To calculate the movie's bitrate, you need to compute the total data size of all tracks and divide it by the movie duration.
Enumerating through the tracks, get the data size of each track:
QTMedia* media = [track media];
Track quicktimeTrack = [track quickTimeTrack];
TimeValue startTime = 0;
TimeValue duration = GetTrackDuration(quicktimeTrack);
long trackDataSize = GetTrackDataSize(quicktimeTrack, startTime, duration);
// Note: GetTrackDuration returns units of the movie's time scale, so
// bitrate (bits/sec) = trackDataSize * 8 / (duration / GetMovieTimeScale(movie))
3) To get the movie's dimensions
NSSize movieSize = [(NSValue*)[[movie movieAttributes] objectForKey:QTMovieNaturalSizeAttribute] sizeValue];
However, the actual dimensions of the video track may be different:
Fixed width = 0;
Fixed height = 0;
GetTrackDimensions(videoTrack, &width, &height);
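For completeness, if a 10.7+ deployment target is acceptable, AVFoundation (rather than the QTKit route above) exposes per-track bitrate and dimensions directly, without any Carbon calls; a minimal sketch with a placeholder path:

import AVFoundation

let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/movie.mov")) // placeholder path
for track in asset.tracks {
    // estimatedDataRate is in bits per second
    print(track.mediaType.rawValue, track.estimatedDataRate, "bps")
    if track.mediaType == .video {
        print("natural size:", track.naturalSize)
    }
}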
I am using the following code snippet to record the screen, and in most situations the recorded WMV file is clear enough, but some parts of the video are not very clear (grey color in some parts). What I am recording is a PowerPoint presentation in full-screen mode. I am using Windows Media Encoder 9.
Here is my code snippet,
IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");

// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
thanks in advance,
George
I would guess that it's a problem with your encoder profile or settings, and not a problem with the code. If you're using the default "Screen Video/Audio High (CBR)" profile in WME9, it's using a video bitrate of 250Kbps, which is pretty low. I'd suggest creating a custom profile in the Windows Media Encoder Profile Editor Utility. Something like this:
awesomesc.prx
Name: Awesome Screen Profile
Audio: WMA 9.2 CBR (32kbps, 44kHz, mono CBR)
Video: WMV 9 Screen Quality VBR (Video size Same as video input, Frame rate 10fps, Key frame interval 3sec, Video quality 90)
Then just change the code to match the custom profile's name.
if (Pro.Name == "Awesome Screen Profile")
The encoder settings would take a much longer post to go through, but if you have not changed them from the defaults, you should be OK.
The Quality-based VBR algorithm can be pretty amazing, and will likely produce a surprisingly low average bitrate, but if VBR won't work for your needs, you can use the Windows Media Encoder Profile Editor utility to import the schia.prx profile that you're using and tweak the settings to find a higher CBR bitrate that produces acceptable quality.
"Screen Video/Audio Medium (CBR)"
it solved my problem