This is pretty frustrating. I'm trying to get the size of an AVURLAsset, but I want to avoid naturalSize since Xcode tells me it's deprecated in iOS 5.
But: what's the replacement?
I can't find any clue on how to get the video dimensions without using naturalSize...
Resolution in Swift 3:
func resolutionSizeForLocalVideo(url: NSURL) -> CGSize? {
    guard let track = AVAsset(URL: url).tracksWithMediaType(AVMediaTypeVideo).first else { return nil }
    let size = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform)
    return CGSize(width: fabs(size.width), height: fabs(size.height))
}
For Swift 4:
func resolutionSizeForLocalVideo(url: NSURL) -> CGSize? {
    guard let track = AVAsset(url: url as URL).tracks(withMediaType: AVMediaType.video).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: fabs(size.width), height: fabs(size.height))
}
Solutions without preferredTransform do not return correct values for some videos on the latest devices!
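For example, a call site could look like this (just a sketch; the path is a hypothetical local file):
let videoURL = NSURL(fileURLWithPath: "/path/to/video.mp4") // hypothetical local path
if let size = resolutionSizeForLocalVideo(url: videoURL) {
    print("Video dimensions: \(size.width) x \(size.height)")
}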
I just checked the documentation online, and the naturalSize method is deprecated for the AVAsset object. However, there should always be an AVAssetTrack which refers to the AVAsset, and the AVAssetTrack has a naturalSize method that you can call.
naturalSize
The natural dimensions of the media data referenced by the track. (read-only)
@property(nonatomic, readonly) CGSize naturalSize
Availability
Available in iOS 4.0 and later. Declared In AVAssetTrack.h
Via: AVAssetTrack Reference for iOS
The deprecation warning on the official documentation suggests, "Use the naturalSize and preferredTransform, as appropriate, of the asset’s video tracks instead (see also tracksWithMediaType:)."
I changed my code from:
CGSize size = [movieAsset naturalSize];
to
CGSize size = [[[movieAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize];
It's less pretty and less safe now but won't break when they drop that method.
The deprecation warning says:
Use the naturalSize and preferredTransform, as appropriate,
of the asset’s video tracks instead (see also tracksWithMediaType:).
So we need an AVAssetTrack, and we want its naturalSize and preferredTransform. This can be accessed with the following:
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGSize dimensions = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
asset is obviously your AVAsset.
This is a fairly simple extension for AVAsset in Swift 4 to get the size of the video, if available:
extension AVAsset {
    var screenSize: CGSize? {
        if let track = tracks(withMediaType: .video).first {
            let size = __CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform)
            return CGSize(width: fabs(size.width), height: fabs(size.height))
        }
        return nil
    }
}
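Usage could look like this (a sketch, assuming a local file URL):
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/video.mp4")) // hypothetical path
if let size = asset.screenSize {
    print("Video is \(Int(size.width)) x \(Int(size.height))")
}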
To derive the dimension of an AVAsset, you should calculate the union of all the visual track rects (after applying their corresponding preferred transformation):
CGRect unionRect = CGRectZero;
for (AVAssetTrack *track in [asset tracksWithMediaCharacteristic:AVMediaCharacteristicVisual]) {
    CGRect trackRect = CGRectApplyAffineTransform(CGRectMake(0.f, 0.f, track.naturalSize.width, track.naturalSize.height),
                                                  track.preferredTransform);
    unionRect = CGRectUnion(unionRect, trackRect);
}
CGSize naturalSize = unionRect.size;
Methods that rely on CGSizeApplyAffineTransform fail when your asset contains tracks with non-trivial affine transformation (e.g., 45 degree rotations) or if your asset contains tracks with different origins (e.g., two tracks playing side-by-side with the second track's origin augmented by the width of the first track).
See: MediaPlayerPrivateAVFoundationCF::sizeChanged() at https://opensource.apple.com/source/WebCore/WebCore-7536.30.2/platform/graphics/avfoundation/cf/MediaPlayerPrivateAVFoundationCF.cpp
For Swift 5
let assetSize = asset.tracks(withMediaType: .video)[0].naturalSize
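Note that this ignores the track's preferredTransform and crashes if the asset has no video track; here is a hedged Swift 5 variant in the spirit of the answers above:
// Safer sketch: avoids a crash on assets without video tracks and accounts for rotation.
if let track = asset.tracks(withMediaType: .video).first {
    let size = track.naturalSize.applying(track.preferredTransform)
    let assetSize = CGSize(width: abs(size.width), height: abs(size.height))
    print(assetSize)
}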
Swift version of @David_H's answer.
extension AVAsset {
    func resolutionSizeForLocalVideo() -> CGSize? {
        var unionRect = CGRect.zero
        for track in self.tracks(withMediaCharacteristic: .visual) {
            let trackRect = CGRect(x: 0, y: 0, width: track.naturalSize.width, height: track.naturalSize.height)
                .applying(track.preferredTransform)
            unionRect = unionRect.union(trackRect)
        }
        return unionRect.size
    }
}
For iOS versions 15.0 and above,
extension AVAsset {
    func naturalSize() async -> CGSize? {
        guard let tracks = try? await loadTracks(withMediaType: .video) else { return nil }
        guard let track = tracks.first else { return nil }
        guard let size = try? await track.load(.naturalSize) else { return nil }
        return size
    }
}
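A possible call site (a sketch, assuming you're already in an async context and have a local file URL):
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/video.mp4")) // hypothetical path
if let size = await asset.naturalSize() {
    print("Video size: \(size)")
}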
Widgets now include the concept of display mode (represented by NCWidgetDisplayMode), which lets you describe how much content is available and allows users to choose a compact or expanded view.
How to expand a widget in iOS 10.0? It doesn't work the way it did in iOS 9.
OK, I found the right solution here.
1) Set the display mode to NCWidgetDisplayMode.expanded first in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    self.extensionContext?.widgetLargestAvailableDisplayMode = NCWidgetDisplayMode.expanded
}
2) Implement new protocol method:
func widgetActiveDisplayModeDidChange(_ activeDisplayMode: NCWidgetDisplayMode, withMaximumSize maxSize: CGSize) {
    if activeDisplayMode == NCWidgetDisplayMode.compact {
        self.preferredContentSize = maxSize
    }
    else {
        // expanded
        self.preferredContentSize = CGSize(width: maxSize.width, height: 200)
    }
}
And it will work like the official apps do.
Here is an Objective-C version.
- (void)widgetActiveDisplayModeDidChange:(NCWidgetDisplayMode)activeDisplayMode
                         withMaximumSize:(CGSize)maxSize
{
    if (activeDisplayMode == NCWidgetDisplayModeCompact) {
        self.preferredContentSize = maxSize;
    }
    else {
        self.preferredContentSize = CGSizeMake(0, 200);
    }
}
I am trying to read the icons that are shown in the Finder's left source list (sidebar). I already tried NSFileManager with the following options:
NSURLEffectiveIconKey - the icon read is not the same as in the Finder
NSURLCustomIconKey - returns nil
NSURLThumbnailKey - returns nil
NSThumbnail1024x1024SizeKey - returns nil
I managed to read all mounted devices using NSFileManager, but I have no clue how to read the icons associated with those devices. Maybe someone has an idea or a hint.
I also tried to use
var image: NSImage = NSWorkspace.sharedWorkspace().iconForFile((url as! NSURL).path!)
but it returns the same image as NSURLEffectiveIconKey
Thanks!
First, the proper way to query which volumes are shown in the Finder's sidebar is using the LSSharedFileList API. That API also provides a way to query the icon:
LSSharedFileListRef list = LSSharedFileListCreate(NULL, kLSSharedFileListFavoriteVolumes, NULL);
UInt32 seed;
NSArray* items = CFBridgingRelease(LSSharedFileListCopySnapshot(list, &seed));
CFRelease(list);
for (id item in items)
{
    IconRef icon = LSSharedFileListItemCopyIconRef((__bridge LSSharedFileListItemRef)item);
    NSImage* image = [[NSImage alloc] initWithIconRef:icon];
    // Do something with this item and icon
    ReleaseIconRef(icon);
}
You can query other properties of the items using LSSharedFileListItemCopyDisplayName(), LSSharedFileListItemCopyResolvedURL, and LSSharedFileListItemCopyProperty().
This answer is a translation to Swift 1.2 of Ken Thomases's Objective-C answer.
All credits go to Ken Thomases, this is just a translation of his awesome answer.
let listBase = LSSharedFileListCreate(kCFAllocatorDefault, kLSSharedFileListFavoriteVolumes.takeUnretainedValue(), NSMutableDictionary())
let list = listBase.takeRetainedValue() as LSSharedFileList
var seed: UInt32 = 0
let itemsCF = LSSharedFileListCopySnapshot(list, &seed)
if let items = itemsCF.takeRetainedValue() as? [LSSharedFileListItemRef] {
    for item in items {
        let icon = LSSharedFileListItemCopyIconRef(item)
        let image = NSImage(iconRef: icon)
        // use image ...
    }
}
Explanations:
When translating Ken's answer from Objective-C I encountered some difficulties, which is why I'm posting this answer.
The first problem was with LSSharedFileListCreate: the method signature in Swift didn't accept nil as its first parameter, so I had to find a constant representing a CFAllocator: kCFAllocatorDefault. The third parameter didn't accept nil either, so I passed a dummy unused NSMutableDictionary to keep the compiler happy.
Also, the "seed" parameter for LSSharedFileListCopySnapshot didn't accept the usual var seed: UInt32? for the inout, so I had to give seed a default value.
For deciding when to use takeRetainedValue or takeUnretainedValue with these APIs, I referred to this answer.
Lastly, I had to cast the returned array to a Swift array of LSSharedFileListItemRef elements (it was initially inferred as a CFArray by the compiler).
Update
This has been deprecated in OS X El Capitan 10.11 (thanks @patmar)
Update 2
Note that while it's been deprecated, it still works. The cast as [LSSharedFileListItemRef] in the previous solution is now ignored, so we have to cast to NSArray instead and then cast each item later:
if let items = itemsCF.takeRetainedValue() as? NSArray {
    for item in items {
        let icon = LSSharedFileListItemCopyIconRef(item as! LSSharedFileListItem)
        let image = NSImage(iconRef: icon)
        // use image ...
    }
}
NSURLCustomIconKey will return nil because support for this key is not implemented. It's mentioned in the header but not in the NSURL documentation. You could get the info via deprecated File Manager methods.
https://developer.apple.com/library/mac/documentation/Carbon/Reference/File_Manager/
Alternatively maybe something like this.
func getResourceValue(_ value: AutoreleasingUnsafeMutablePointer<AnyObject?>,
                      forKey key: String,
                      error error: NSErrorPointer) -> Bool
Parameters:
value - The location where the value for the resource property identified by key should be stored.
key - The name of one of the URL’s resource properties.
error - The error that occurred if the resource value could not be retrieved. This parameter is optional. If you are not interested in receiving error information, you can pass nil.
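A minimal sketch of that alternative in the Swift of that era, assuming url is an NSURL for the volume and using NSURLEffectiveIconKey as the illustrative key:
var iconValue: AnyObject?
var error: NSError?
if url.getResourceValue(&iconValue, forKey: NSURLEffectiveIconKey, error: &error) {
    if let icon = iconValue as? NSImage {
        // use icon ...
    }
}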
2022, Swift 5
Low-res icon (works really fast):
let icon: NSImage = NSWorkspace.shared.icon(forFile: url.path )
Hi-res icon:
extension NSWorkspace {
    func highResIcon(forPath path: String, resolution: Int = 512) -> NSImage {
        if let rep = self.icon(forFile: path)
            .bestRepresentation(for: NSRect(x: 0, y: 0, width: resolution, height: resolution), context: nil, hints: nil) {
            let image = NSImage(size: rep.size)
            image.addRepresentation(rep)
            return image
        }
        return self.icon(forFile: path)
    }
}
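Usage might be (a sketch; url is assumed to be a file URL for a mounted volume or any file):
let icon = NSWorkspace.shared.highResIcon(forPath: url.path, resolution: 512)
// icon is an NSImage with the best available representation up to 512 x 512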
Also hi-res thumbnail:
fileprivate extension URL {
    func getImgThumbnail(_ size: CGFloat) -> NSImage? {
        let ref = QLThumbnailCreate(kCFAllocatorDefault,
                                    self as NSURL,
                                    CGSize(width: size, height: size),
                                    [kQLThumbnailOptionIconModeKey: false] as CFDictionary)
        guard let thumbnail = ref?.takeRetainedValue() else { return nil }
        if let cgImageRef = QLThumbnailCopyImage(thumbnail) {
            let cgImage = cgImageRef.takeRetainedValue()
            return NSImage(cgImage: cgImage, size: CGSize(width: cgImage.width, height: cgImage.height))
        }
        return nil
    }
}
I am trying to use the new AVAudioEngine in iOS 8.
It looks like the completionHandler of player.scheduleFile() is called before the sound file has finished playing.
I am using a sound file with a length of 5 seconds, and the println() message appears roughly 1 second before the end of the sound.
Am I doing something wrong or do I misunderstand the idea of a completionHandler?
Thanks!
Here is some code:
class SoundHandler {
    let engine: AVAudioEngine
    let player: AVAudioPlayerNode
    let mainMixer: AVAudioMixerNode

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        engine.attachNode(player)
        mainMixer = engine.mainMixerNode

        var error: NSError?
        if !engine.startAndReturnError(&error) {
            if let e = error {
                println("error \(e.localizedDescription)")
            }
        }

        engine.connect(player, to: mainMixer, format: mainMixer.outputFormatForBus(0))
    }

    func playSound() {
        var soundUrl = NSBundle.mainBundle().URLForResource("Test", withExtension: "m4a")
        var soundFile = AVAudioFile(forReading: soundUrl, error: nil)

        player.scheduleFile(soundFile, atTime: nil, completionHandler: { println("Finished!") })

        player.play()
    }
}
I see the same behavior.
From my experimentation, I believe the callback is called once the buffer/segment/file has been "scheduled", not when it is finished playing.
Although the docs explicitly state:
"Called after the buffer has completely played or the player is stopped. May be nil."
So I think it's either a bug or incorrect documentation. No idea which.
You can always compute the future time at which audio playback will complete, using AVAudioTime. The current behavior is useful because it supports scheduling additional buffers/segments/files to play from the callback before the current buffer/segment/file finishes, avoiding a gap in audio playback. This lets you create a simple loop player without a lot of work. Here's an example:
class Latch {
    var value: Bool = true
}

func loopWholeFile(file: AVAudioFile, player: AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length
    let sampleRate = file.processingFormat.sampleRate

    var segmentTime: AVAudioFramePosition = 0
    var segmentCompletion: AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            segmentTime += frames
            player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
        }
    }

    player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
    segmentCompletion()
    player.play()

    return looping
}
The code above schedules the entire file twice before calling player.play(). As each segment gets close to finishing, it schedules another whole file in the future, to avoid gaps in playback. To stop looping, you use the return value, a Latch, like this:
let looping = loopWholeFile(file, player)
sleep(1000)
looping.value = false
player.stop()
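For completeness, the completion-time computation mentioned at the start of this answer could look like the following sketch (file is the AVAudioFile, and startSampleTime is a hypothetical name for the player sample time at which it was scheduled):
let sampleRate = file.processingFormat.sampleRate
let endSampleTime = startSampleTime + file.length
let completionTime = AVAudioTime(sampleTime: endSampleTime, atRate: sampleRate)
// completionTime is the player time at which the scheduled file will finish playing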
The AVAudioEngine docs from back in the iOS 8 days must have just been wrong. In the meantime, as a workaround, I noticed if you instead use scheduleBuffer:atTime:options:completionHandler: the callback is fired as expected (after playback finishes).
Example code:
AVAudioFile *file = [[AVAudioFile alloc] initForReading:_fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:nil];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(AVAudioFrameCount)file.length];
[file readIntoBuffer:buffer error:&error];

[_player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferInterrupts completionHandler:^{
    // reminder: we're not on the main thread in here
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"done playing, as expected!");
    });
}];
My bug report for this was closed as "works as intended," but Apple pointed me to new variations of the scheduleFile, scheduleSegment and scheduleBuffer methods in iOS 11. These add a completionCallbackType argument that you can use to specify that you want the completion callback when the playback is completed:
[self.audioUnitPlayer
    scheduleSegment:self.audioUnitFile
    startingFrame:sampleTime
    frameCount:(int)sampleLength
    atTime:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
    completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
        // do something here
    }];
The documentation doesn't say anything about how this works, but I tested it and it works for me.
I've been using this workaround for iOS 8-10:
- (void)playRecording {
    [self.audioUnitPlayer scheduleSegment:self.audioUnitFile startingFrame:sampleTime frameCount:(int)sampleLength atTime:0 completionHandler:^() {
        float totalTime = [self recordingDuration];
        float elapsedTime = [self recordingCurrentTime];
        float remainingTime = totalTime - elapsedTime;
        [self performSelector:@selector(doSomethingHere) withObject:nil afterDelay:remainingTime];
    }];
}

- (float)recordingDuration {
    float duration = self.audioUnitFile.length / self.audioUnitFile.processingFormat.sampleRate;
    if (isnan(duration)) {
        duration = 0;
    }
    return duration;
}

- (float)recordingCurrentTime {
    AVAudioTime *nodeTime = self.audioUnitPlayer.lastRenderTime;
    AVAudioTime *playerTime = [self.audioUnitPlayer playerTimeForNodeTime:nodeTime];
    AVAudioFramePosition sampleTime = playerTime.sampleTime;
    if (sampleTime == 0) { return self.audioUnitLastKnownTime; } // this happens when the player isn't playing

    sampleTime += self.audioUnitStartingFrame; // if we trimmed from the start, or changed the location with the location slider, the time before that point won't be included in the player time, so we have to track it ourselves and add it here

    float time = sampleTime / self.audioUnitFile.processingFormat.sampleRate;
    self.audioUnitLastKnownTime = time;
    return time;
}
Yes, it does get called slightly before the file (or buffer) has completed. If you call [myNode stop] from within the completion handler, the file (or buffer) will not fully complete. However, if you call [myEngine stop], the file (or buffer) will complete to the end.
// audioFile here is our original audio
audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: {
    print("scheduleFile Complete")

    var delayInSeconds: Double = 0

    if let lastRenderTime = self.audioPlayerNode.lastRenderTime, let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
        if let rate = rate {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate) / Double(rate)
        } else {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate)
        }
    }

    // schedule a stop timer for when audio finishes playing
    DispatchTime.executeAfter(seconds: delayInSeconds) {
        audioEngine.mainMixerNode.removeTap(onBus: 0)
        // Playback has completed
    }
})
As of today, in a project with deployment target 12.4, on a device running 12.4.1, here's the way we found to successfully stop the nodes upon playback completion:
// audioFile and playerNode created here ...

playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
    os_log(.debug, log: self.log, "%@", "Completing playing sound effect: \(filePath) ...")

    DispatchQueue.main.async {
        os_log(.debug, log: self.log, "%@", "... now actually completed: \(filePath)")
        self.engine.disconnectNodeOutput(playerNode)
        self.engine.detach(playerNode)
    }
}
The main difference w.r.t. previous answers is to postpone node detaching to the main thread (which I guess is also the audio render thread?), instead of performing it on the callback thread.
I am trying to implement window toggling (something I've done many times in Objective-C), but now in Swift. It seems that I am using NSWindowOcclusionState.Visible incorrectly, but I really cannot see my problem. Only the line w.makeKeyAndOrderFront(self) is called after the initial window creation.
Any suggestions?
var fileArchiveListWindow: NSWindow? = nil

@IBAction func toggleFileArchiveList(sender: NSMenuItem) {
    if let w = fileArchiveListWindow {
        if w.occlusionState == NSWindowOcclusionState.Visible {
            w.orderOut(self)
        }
        else {
            w.makeKeyAndOrderFront(self)
        }
    }
    else {
        let sb = NSStoryboard(name: "FileArchiveOverview", bundle: nil)
        let controller: FileArchiveOverviewWindowController = sb?.instantiateControllerWithIdentifier("FileArchiveOverviewController") as FileArchiveOverviewWindowController
        fileArchiveListWindow = controller.window
        fileArchiveListWindow?.makeKeyAndOrderFront(self)
    }
}
Old question, but I just ran into the same problem. Checking the occlusionState is done a bit differently in Swift, using the AND binary operator:
if (window.occlusionState & NSWindowOcclusionState.Visible != nil) {
    // visible
}
else {
    // not visible
}
In recent SDKs, the NSWindowOcclusionState bitmask is imported into Swift as an OptionSet. You can use window.occlusionState.contains(.visible) to check if a window is visible or not (fully occluded).
Example:
observerToken = NotificationCenter.default.addObserver(forName: NSWindow.didChangeOcclusionStateNotification, object: window, queue: nil) { note in
    let window = note.object as! NSWindow
    if window.occlusionState.contains(.visible) {
        // window at least partially visible, resume power-hungry calculations
    } else {
        // window completely occluded, throttle down timers, CPU, etc.
    }
}
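Applied back to the toggle in the question, the check could read like this (a sketch in current Swift, assuming fileArchiveListWindow is the optional NSWindow from the question):
if let w = fileArchiveListWindow {
    if w.occlusionState.contains(.visible) {
        w.orderOut(self)
    } else {
        w.makeKeyAndOrderFront(self)
    }
}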
I am playing an AVMutableVideoComposition with AVPlayer, and since iOS 8 everything was perfectly fine.
But now the video starts playing and after 4 or 5 seconds it stops, as if it were buffering or something like that; the sound keeps playing, and when the video ends the AVPlayer loops and plays it fine without stopping.
I have no clue how to fix this issue.
Any help would be appreciated.
Thank you
I had the same issue, but I found the solution.
You should start playback after the playerItem's status has changed to .ReadyToPlay.
I also answered here; that issue is similar to yours.
Please see below.
func startVideoPlayer() {
    let playerItem = AVPlayerItem(asset: self.composition!)
    playerItem.videoComposition = self.videoComposition!

    let player = AVPlayer(playerItem: playerItem)
    player.actionAtItemEnd = .None

    videoPlayerLayer = AVPlayerLayer(player: player)
    videoPlayerLayer!.frame = self.bounds

    /* add playerItem's observer */
    player.addObserver(self, forKeyPath: "player.currentItem.status", options: .New, context: nil)
    NSNotificationCenter.defaultCenter().addObserver(self, selector: "playerItemDidReachEnd:", name: AVPlayerItemDidPlayToEndTimeNotification, object: playerItem);

    self.layer.addSublayer(videoPlayerLayer!)
}

override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    if keyPath != nil && keyPath! == "player.currentItem.status" {
        if let newValue = change?[NSKeyValueChangeNewKey] {
            if AVPlayerStatus(rawValue: newValue as! Int) == .ReadyToPlay {
                playVideo() /* play after status is changed to .ReadyToPlay */
            }
        }
    } else {
        super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
    }
}

func playerItemDidReachEnd(notification: NSNotification) {
    let playerItem = notification.object as! AVPlayerItem
    playerItem.seekToTime(kCMTimeZero)
    playVideo()
}

func playVideo() {
    videoPlayerLayer?.player!.play()
}
Same here; I don't know if it can count as an answer, but anyway: just use
[AVPlayer seekToTime:AVPlayer.currentItem.duration];
to do the first loop yourself and avoid the AVPlayer stopping. That's the only way I found.
I was having the same problem and I solved it this way:
Instead of applying the videoComposition to the AVPlayerItem directly, I exported my video using AVAssetExportSession to apply the videoComposition. That gives me a URL to a video with all my videoComposition already applied, and now I can use this content URL to play the video in AVPlayer.
This works like a charm.
Exporting the video takes time, but since rendering the video composition no longer happens on the fly, playback is smooth.
The following code can be used to export the video:
AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPreset1280x720];
export.videoComposition = videoComposition;
export.outputURL = [NSURL fileURLWithPath:[[NSTemporaryDirectory() stringByAppendingPathComponent:[NSUUID new].UUIDString] stringByAppendingPathExtension:@"MOV"]];
export.outputFileType = AVFileTypeQuickTimeMovie;
export.shouldOptimizeForNetworkUse = YES;

[export exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (export.status == AVAssetExportSessionStatusCompleted) {
            completionHander(export.outputURL, nil);
        } else {
            completionHander(nil, export.error);
        }
    });
}];
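Once the export completes, the handler's output URL can be handed straight to an AVPlayer; here is a hedged Swift sketch (exportedURL and containerView are placeholder names):
let player = AVPlayer(url: exportedURL)          // exportedURL = export.outputURL from the handler
let playerLayer = AVPlayerLayer(player: player)
playerLayer.frame = containerView.bounds
containerView.layer.addSublayer(playerLayer)
player.play()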