iOS CVPixelBufferCreate leaking memory in Swift 2

I'm trying to convert an image into a video, and the right way seems to be using an AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor. It works well, but it leaks memory.
When I convert the CGImage to a CVPixelBuffer, I call CVPixelBufferCreate, which never frees its memory.
func CGImageToPixelBuffer(image: CGImageRef, frameSize: CGSize) -> CVPixelBuffer {
// stupid CFDictionary stuff
let keys: [CFStringRef] = [kCVPixelBufferCGImageCompatibilityKey, kCVPixelBufferCGBitmapContextCompatibilityKey]
let values: [CFTypeRef] = [kCFBooleanTrue, kCFBooleanTrue]
let keysPointer = UnsafeMutablePointer<UnsafePointer<Void>>.alloc(1)
let valuesPointer = UnsafeMutablePointer<UnsafePointer<Void>>.alloc(1)
keysPointer.initialize(keys)
valuesPointer.initialize(values)
let options = CFDictionaryCreate(kCFAllocatorDefault, keysPointer, valuesPointer, keys.count,
UnsafePointer<CFDictionaryKeyCallBacks>(), UnsafePointer<CFDictionaryValueCallBacks>())
let buffer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
// here's the leak >:[
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
kCVPixelFormatType_32ARGB, options, buffer)
CVPixelBufferLockBaseAddress(buffer.memory!, 0);
let bufferData = CVPixelBufferGetBaseAddress(buffer.memory!);
let rgbColorSpace = CGColorSpaceCreateDeviceRGB();
let context = CGBitmapContextCreate(bufferData, Int(frameSize.width),
Int(frameSize.height), 8, 4*Int(frameSize.width), rgbColorSpace,
CGImageAlphaInfo.NoneSkipFirst.rawValue);
CGContextDrawImage(context, CGRectMake(0, 0,
CGFloat(CGImageGetWidth(image)),
CGFloat(CGImageGetHeight(image))), image);
CVPixelBufferUnlockBaseAddress(buffer.memory!, 0);
return buffer.memory!
}
And here's the code which calls CGImageToPixelBuffer
func saveImageAsVideoFile(path: NSURL, image: UIImage, duration: Double) {
let writer = try! AVAssetWriter(URL: path, fileType: AVFileTypeQuickTimeMovie)
let videoSettings: NSDictionary = [
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : NSNumber(integer: Int(image.size.width)),
AVVideoHeightKey : NSNumber(integer: Int(image.size.height))
]
let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings as? [String : AnyObject])
let inputAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
input.expectsMediaDataInRealTime = false
writer.addInput(input)
writer.startWriting()
writer.startSessionAtSourceTime(kCMTimeZero)
// leak starts here
let buffer = CGImageToPixelBuffer(image.CGImage!, frameSize: image.size)
// append 30 frames to AVAssetWriter thing
for i in 0..<30 {
if input.readyForMoreMediaData {
inputAdaptor.appendPixelBuffer(buffer, withPresentationTime: CMTimeMake(Int64(i), 30))
}
}
//CVPixelBufferRelease(buffer) ??
input.markAsFinished()
writer.finishWritingWithCompletionHandler({ () -> Void in
print("yay")
})
}
CVPixelBufferRelease is not available in Swift 2.
CVPixelBufferCreate doesn't return an unmanaged pointer in Swift 2, so I can't use this guy's code.
I've tried manually calling destroy and dealloc on the UnsafePointer, to no avail.
Every time the function is called, memory usage increases, and it will crash the device if called enough times.
Any help or advice would be appreciated.

Your let buffer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1) isn't balanced by a dealloc; however, adding that doesn't seem to fix the leak.
Simply using a CVPixelBuffer? does though, and seems simpler:
var buffer: CVPixelBuffer?
// TODO: handle status != noErr
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
kCVPixelFormatType_32ARGB, options, &buffer)
CVPixelBufferLockBaseAddress(buffer!, 0);
let bufferData = CVPixelBufferGetBaseAddress(buffer!);
let rgbColorSpace = CGColorSpaceCreateDeviceRGB();
let context = CGBitmapContextCreate(bufferData, Int(frameSize.width),
Int(frameSize.height), 8, 4*Int(frameSize.width), rgbColorSpace,
CGImageAlphaInfo.NoneSkipFirst.rawValue);
CGContextDrawImage(context, CGRectMake(0, 0,
CGFloat(CGImageGetWidth(image)),
CGFloat(CGImageGetHeight(image))), image);
CVPixelBufferUnlockBaseAddress(buffer!, 0);
You should also handle status != noErr and unwrap the optional buffer sooner, to avoid all those buffer!s.
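For example, a minimal sketch in the same Swift 2 style (the fatalError is just illustrative error handling):
var maybeBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
    kCVPixelFormatType_32ARGB, options, &maybeBuffer)
// Bail out early if allocation failed, then keep working with a non-optional buffer.
guard let buffer = maybeBuffer where status == kCVReturnSuccess else {
    fatalError("CVPixelBufferCreate failed with status \(status)")
}
CVPixelBufferLockBaseAddress(buffer, 0)
// ... draw into the buffer exactly as above ...
CVPixelBufferUnlockBaseAddress(buffer, 0)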

I also have a similar leak. I am not sure, but I think the following can be used:
let bufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
...
let pixelBuffer = bufferPointer.memory!
bufferPointer.destroy()
return pixelBuffer

Related

Why is the if statement in this function never true?

I have a function in a Swift UIView that makes two mp4s loop. The first mp4 plays fine, but the second one does not; it only seems to play once. How can I fix this?
let videoURL: NSURL = NSBundle.mainBundle().URLForResource(instrumentaimp4[skaicius], withExtension: "mp4")!
let sakeleURL: NSURL = NSBundle.mainBundle().URLForResource("sakele_blikas", withExtension: "mp4")!
player = AVPlayer(URL: videoURL)
player?.actionAtItemEnd = .None
player?.muted = true
sakele = AVPlayer(URL: sakeleURL)
sakele?.actionAtItemEnd = .None
sakele?.muted = true
let playerLayer = AVPlayerLayer(player: player)
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
playerLayer.zPosition = 1
let playerLayer2 = AVPlayerLayer(player: sakele)
playerLayer2.videoGravity = AVLayerVideoGravityResizeAspectFill
playerLayer2.zPosition = 1
view.layer.addSublayer(playerLayer2)
view.layer.addSublayer(playerLayer)
player?.play()
sakele?.play()
//loop video
NSNotificationCenter.defaultCenter().addObserver(self,
selector: "loopVideo:",
name: AVPlayerItemDidPlayToEndTimeNotification,
object:nil)
func loopVideo(notification: NSNotification) {
if let finishedPlayer = notification.object as! AVPlayerItem!{
if finishedPlayer == self.sakele {
self.sakele?.seekToTime(kCMTimeZero)
self.sakele?.play()
NSLog("1")
}else{
self.player?.seekToTime(kCMTimeZero)
self.player?.play()
NSLog("2")}
}}
NSLog("2") never happens. Where is my mistake? Any help appreciated.
Your sakele is an AVPlayer. But your notification.object is claimed to be (and is cast to) an AVPlayerItem, which is then assigned to finishedPlayer. This makes no sense. When you come to compare if finishedPlayer == self.sakele, it can never succeed, because they are not the same kind of object, let alone the same object.
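A sketch of a fix along those lines (same Swift 2 syntax; compare the finished AVPlayerItem against each player's currentItem):
func loopVideo(notification: NSNotification) {
    guard let finishedItem = notification.object as? AVPlayerItem else { return }
    if finishedItem == self.sakele?.currentItem {
        self.sakele?.seekToTime(kCMTimeZero)
        self.sakele?.play()
    } else if finishedItem == self.player?.currentItem {
        self.player?.seekToTime(kCMTimeZero)
        self.player?.play()
    }
}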

Swift2 retrieving images from Firebase

I am trying to read/display an image from Firebase. I am first encoding the image and then posting this encoded String to Firebase. This runs fine. When I try and decode the encoded string from Firebase and convert it to an image, I am getting a nil value exception.
This is how I am saving the image to Firebase
var base64String: NSString!
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
self.dismissViewControllerAnimated(true, completion: nil)
imageToPost.image = image
var uploadImage = image as! UIImage
var imageData = UIImagePNGRepresentation(uploadImage)!
self.base64String = imageData.base64EncodedStringWithOptions(NSDataBase64EncodingOptions.Encoding64CharacterLineLength)
let ref = Firebase(url: "https://XXX.firebaseio.com")
var quoteString = ["string": self.base64String]
var usersRef = ref.childByAppendingPath("goalImages")
var users = ["image": quoteString]
usersRef.setValue(users)
displayAlert("Image Posted", message: "Your image has been successfully posted!")
}
This is how I am trying to read the image from Firebase
// ViewController.swift
import UIKit
import Firebase
class ViewController: UIViewController {
@IBOutlet weak var image: UIImageView!
var base64String: NSString!
@IBAction func buttonClicked(sender: AnyObject) {
sender.setTitle("\(sender.tag)", forState: UIControlState.Normal)
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
let ref = Firebase(url: "https://XXX.firebaseio.com/goalImages/image/string")
ref.observeEventType(.Value, withBlock: { snapshot in
self.base64String = snapshot.value as! NSString
let decodedData = NSData(base64EncodedString: self.base64String as String, options: NSDataBase64DecodingOptions())
//Next line is giving the error
var decodedImage = UIImage(data: decodedData!)
self.image.image = decodedImage
}, withCancelBlock: { error in
print(error.description)
})
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
}
The error says: "fatal error: unexpectedly found nil while unwrapping an Optional value"; decodedData is nil. Could someone explain what is going wrong?
Instead of
let decodedData = NSData(base64EncodedString: self.base64String as String,
options: NSDataBase64DecodingOptions())
try adding IgnoreUnknownCharacters
NSDataBase64DecodingOptions.IgnoreUnknownCharacters
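Applied to the code in the question, the decode line becomes:
let decodedData = NSData(base64EncodedString: self.base64String as String,
    options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)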
Usage example: encode a jpg, store it in Firebase, and read it back.
Encode and write our favorite starship:
if let image = NSImage(named:"Enterprise.jpeg") {
let imageData = image.TIFFRepresentation
let base64String = imageData!.base64EncodedStringWithOptions(.Encoding64CharacterLineLength)
let imageRef = myRootRef.childByAppendingPath("image_path")
imageRef.setValue(base64String)
}
Read and decode:
imageRef.observeEventType(.Value, withBlock: { snapshot in
let base64EncodedString = snapshot.value
let imageData = NSData(base64EncodedString: base64EncodedString as! String,
options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
let decodedImage = NSImage(data:imageData!)
self.myImageView.image = decodedImage
}, withCancelBlock: { error in
print(error.description)
})
EDIT 2019_05_17
Update to Swift 5 and Firebase 6
func writeImage() {
if let image = NSImage(named:"Enterprise.jpg") {
let imageData = image.tiffRepresentation
if let base64String = imageData?.base64EncodedString() {
let imageRef = self.ref.child("image_path")
imageRef.setValue(base64String)
}
}
}
func readImage() {
let imageRef = self.ref.child("image_path")
imageRef.observeSingleEvent(of: .value, with: { snapshot in
let base64EncodedString = snapshot.value as! String
let imageData = Data(base64Encoded: base64EncodedString, options: Data.Base64DecodingOptions.ignoreUnknownCharacters)!
let decodedImage = NSImage(data: imageData)
self.myImageView.image = decodedImage
})
}
Firebase Engineer here:
I highly recommend using the new Firebase Storage API for uploading images to Firebase. It's simple to use, low cost, and backed by Google Cloud Storage for huge scale.
You can upload from NSData or an NSURL pointing to a local file (I'll show NSData, but the principle is the same):
// Data in memory
let data: NSData = ...
// Create a reference to the file you want to upload
let riversRef = storageRef.child("images/rivers.jpg")
// Upload the file to the path "images/rivers.jpg"
let uploadTask = riversRef.putData(data, metadata: nil) { metadata, error in
if (error != nil) {
// Uh-oh, an error occurred!
} else {
// Metadata contains file metadata such as size, content-type, and download URL.
let downloadURL = metadata!.downloadURL
// This can be stored in the Firebase Realtime Database
// It can also be used by image loading libraries like SDWebImage
}
}
You can even pause and resume uploads, and you can easily monitor uploads for progress:
// Upload data
let uploadTask = storageRef.putData(...)
// Add a progress observer to an upload task
uploadTask.observeStatus(.Progress) { snapshot in
// Upload reported progress
if let progress = snapshot.progress {
let percentComplete = 100.0 * Double(progress.completedUnitCount) / Double(progress.totalUnitCount)
}
}
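Downloading is symmetric; a minimal sketch using the Storage API from the same period (the reference path and the 1 MB limit are illustrative; adjust for your SDK version):
// Create a reference to the file you want to download (illustrative path)
let riversRef = storageRef.child("images/rivers.jpg")
// Download to memory, capping the size at 1 MB (illustrative limit)
riversRef.dataWithMaxSize(1 * 1024 * 1024) { data, error in
    if (error != nil) {
        // Uh-oh, an error occurred!
    } else {
        // data now contains the bytes of images/rivers.jpg
        let riverImage = UIImage(data: data!)
    }
}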

How would I put together a video using the AVAssetWriter in swift?

I'm currently making a small app that time-lapses the webcam on my Mac, saves the captured frames to PNG, and I am looking into exporting the captured frames as a single video.
I use CGImage to handle the original images and have them set in an array, but I'm unsure where to go from there. I gather from my own research that I have to use AVAssetWriter and AVAssetWriterInput somehow.
I've had a look about on here, read the Apple docs and searched Google. But all the guides, etc., are in Obj-C rather than Swift, which is making it really difficult to understand (as I have no experience in Obj-C).
Any help would be very much appreciated.
Many Thanks,
Luke.
I solved the same problem in Swift. Starting from an array of UIImage, try this (it's a little long :-) but works):
var choosenPhotos: [UIImage] = [] // your array of UIImages
var outputSize = CGSizeMake(1280, 720)
func build(outputSize outputSize: CGSize) {
let fileManager = NSFileManager.defaultManager()
let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
guard let documentDirectory: NSURL = urls.first else {
fatalError("documentDir Error")
}
let videoOutputURL = documentDirectory.URLByAppendingPathComponent("OutputVideo.mp4")
if NSFileManager.defaultManager().fileExistsAtPath(videoOutputURL.path!) {
do {
try NSFileManager.defaultManager().removeItemAtPath(videoOutputURL.path!)
} catch {
fatalError("Unable to delete file: \(error) : \(__FUNCTION__).")
}
}
guard let videoWriter = try? AVAssetWriter(URL: videoOutputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter error")
}
let outputSettings = [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : NSNumber(float: Float(outputSize.width)), AVVideoHeightKey : NSNumber(float: Float(outputSize.height))]
guard videoWriter.canApplyOutputSettings(outputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("Negative : Can't apply the Output settings...")
}
let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
let sourcePixelBufferAttributesDictionary = [kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32ARGB), kCVPixelBufferWidthKey as String: NSNumber(float: Float(outputSize.width)), kCVPixelBufferHeightKey as String: NSNumber(float: Float(outputSize.height))]
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
if videoWriter.canAddInput(videoWriterInput) {
videoWriter.addInput(videoWriterInput)
}
if videoWriter.startWriting() {
videoWriter.startSessionAtSourceTime(kCMTimeZero)
assert(pixelBufferAdaptor.pixelBufferPool != nil)
let media_queue = dispatch_queue_create("mediaInputQueue", nil)
videoWriterInput.requestMediaDataWhenReadyOnQueue(media_queue, usingBlock: { () -> Void in
let fps: Int32 = 1
let frameDuration = CMTimeMake(1, fps)
var frameCount: Int64 = 0
var appendSucceeded = true
while (!self.choosenPhotos.isEmpty) {
if (videoWriterInput.readyForMoreMediaData) {
let nextPhoto = self.choosenPhotos.removeAtIndex(0)
let lastFrameTime = CMTimeMake(frameCount, fps)
let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
var pixelBuffer: CVPixelBuffer? = nil
let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
if let pixelBuffer = pixelBuffer where status == 0 {
let managedPixelBuffer = pixelBuffer
CVPixelBufferLockBaseAddress(managedPixelBuffer, 0)
let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGBitmapContextCreate(data, Int(self.outputSize.width), Int(self.outputSize.height), 8, CVPixelBufferGetBytesPerRow(managedPixelBuffer), rgbColorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)
CGContextClearRect(context, CGRectMake(0, 0, CGFloat(self.outputSize.width), CGFloat(self.outputSize.height)))
let horizontalRatio = CGFloat(self.outputSize.width) / nextPhoto.size.width
let verticalRatio = CGFloat(self.outputSize.height) / nextPhoto.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize:CGSize = CGSizeMake(nextPhoto.size.width * aspectRatio, nextPhoto.size.height * aspectRatio)
let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0
CGContextDrawImage(context, CGRectMake(x, y, newSize.width, newSize.height), nextPhoto.CGImage)
CVPixelBufferUnlockBaseAddress(managedPixelBuffer, 0)
appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
} else {
print("Failed to allocate pixel buffer")
appendSucceeded = false
}
}
if !appendSucceeded {
break
}
frameCount++
}
videoWriterInput.markAsFinished()
videoWriter.finishWritingWithCompletionHandler { () -> Void in
print("FINISHED!!!!!")
}
})
}
}
The following is code to generate a video from images, working in Xcode 11.3.1 and Swift 5.1. This code is adapted from @aleciufs' Sep 25 '15 answer. The function assumes the images are loaded and available in a var images: [UIImage] array.
func build(outputSize: CGSize) {
let fileManager = FileManager.default
let urls = fileManager.urls(for: .cachesDirectory, in: .userDomainMask)
guard let documentDirectory = urls.first else {
fatalError("documentDir Error")
}
let videoOutputURL = documentDirectory.appendingPathComponent("OutputVideo.mp4")
if FileManager.default.fileExists(atPath: videoOutputURL.path) {
do {
try FileManager.default.removeItem(atPath: videoOutputURL.path)
} catch {
fatalError("Unable to delete file: \(error) : \(#function).")
}
}
guard let videoWriter = try? AVAssetWriter(outputURL: videoOutputURL, fileType: AVFileType.mp4) else {
fatalError("AVAssetWriter error")
}
let outputSettings = [AVVideoCodecKey : AVVideoCodecType.h264, AVVideoWidthKey : NSNumber(value: Float(outputSize.width)), AVVideoHeightKey : NSNumber(value: Float(outputSize.height))] as [String : Any]
guard videoWriter.canApply(outputSettings: outputSettings, forMediaType: AVMediaType.video) else {
fatalError("Negative : Can't apply the Output settings...")
}
let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: outputSettings)
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String : NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(outputSize.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(outputSize.height))
]
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
if videoWriter.startWriting() {
videoWriter.startSession(atSourceTime: CMTime.zero)
assert(pixelBufferAdaptor.pixelBufferPool != nil)
let media_queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: media_queue, using: { () -> Void in
let fps: Int32 = 2
let frameDuration = CMTimeMake(value: 1, timescale: fps)
var frameCount: Int64 = 0
var appendSucceeded = true
while (!self.images.isEmpty) {
if (videoWriterInput.isReadyForMoreMediaData) {
let nextPhoto = self.images.remove(at: 0)
let lastFrameTime = CMTimeMake(value: frameCount, timescale: fps)
let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
var pixelBuffer: CVPixelBuffer? = nil
let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
if let pixelBuffer = pixelBuffer, status == 0 {
let managedPixelBuffer = pixelBuffer
CVPixelBufferLockBaseAddress(managedPixelBuffer, [])
let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(outputSize.width), height: Int(outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
context?.clear(CGRect(x: 0, y: 0, width: outputSize.width, height: outputSize.height))
let horizontalRatio = CGFloat(outputSize.width) / nextPhoto.size.width
let verticalRatio = CGFloat(outputSize.height) / nextPhoto.size.height
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: nextPhoto.size.width * aspectRatio, height: nextPhoto.size.height * aspectRatio)
let x = newSize.width < outputSize.width ? (outputSize.width - newSize.width) / 2 : 0
let y = newSize.height < outputSize.height ? (outputSize.height - newSize.height) / 2 : 0
context?.draw(nextPhoto.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(managedPixelBuffer, [])
appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
} else {
print("Failed to allocate pixel buffer")
appendSucceeded = false
}
}
if !appendSucceeded {
break
}
frameCount += 1
}
videoWriterInput.markAsFinished()
videoWriter.finishWriting { () -> Void in
print("FINISHED!!!!!")
self.saveVideoToLibrary(videoURL: videoOutputURL)
}
})
}
}
The extra function I provide is:
func saveVideoToLibrary(videoURL: URL) {
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
}) { saved, error in
if let error = error {
print("Error saving video to librayr: \(error.localizedDescription)")
}
if saved {
print("Video save to library")
}
}
}
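Note that saveVideoToLibrary needs import Photos and photo library permission; a minimal sketch of requesting access before saving (the Info.plist usage-description key is assumed to be set):
import Photos

// Ask for photo library access before saving; requires the
// NSPhotoLibraryAddUsageDescription (or NSPhotoLibraryUsageDescription) key in Info.plist.
PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        // videoOutputURL is the URL produced by build(outputSize:) above
        saveVideoToLibrary(videoURL: videoOutputURL)
    }
}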

set and create paragraph style in swift

I am trying to port an Objective-C method that draws text in a PDF context to Swift.
I could convert most of the code, but the following lines are giving me a problem.
Some help in the conversion would be welcome.
Here the Objective C code:
-(void)drawText:(NSString *)textToDraw context:(CGContextRef)myPDFContext textcolor:(NSColor *)textcolor textfont:(NSFont*)textfont textbox:(CGRect)boxsize pagesize:(CGSize)pageSize {
// .........
// create paragraph style and assign text alignment to it
CTTextAlignment alignment = kCTJustifiedTextAlignment;
CTParagraphStyleSetting _settings[] = { {kCTParagraphStyleSpecifierAlignment, sizeof(alignment), &alignment} };
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(_settings, sizeof(_settings) / sizeof(_settings[0]));
// set paragraph style attribute
CFAttributedStringSetAttribute(attrStr, CFRangeMake(0, CFAttributedStringGetLength(attrStr)), kCTParagraphStyleAttributeName, paragraphStyle);
// .........
}
// The following lines are my attempt in Swift, but they give errors:
func DrawText(textToDraw:String, myPDFContext:CGContextRef, textcolor:NSColor, textfont:NSFont, boxsize:CGRect, pagesize:CGSize) {
var alignment = CTTextAlignment.TextAlignmentLeft
let alignmentSetting = CTParagraphStyleSetting(spec: CTParagraphStyleSpecifier.Alignment, valueSize: sizeof(alignment), value: &alignment)
let paragraphStyle = CTParagraphStyleCreate(alignmentSetting, 1)
//.......
If this is solved I could post the complete method in Swift.
I think this does it:
var alignment = CTTextAlignment.TextAlignmentLeft
let alignmentSetting = [CTParagraphStyleSetting(spec: .Alignment, valueSize: UInt(sizeofValue(alignment)), value: &alignment)]
let paragraphStyle = CTParagraphStyleCreate(alignmentSetting, 1)
CFAttributedStringSetAttribute(attrStr, CFRangeMake(0, CFAttributedStringGetLength(attrStr)), kCTParagraphStyleAttributeName, paragraphStyle)
I made the following modifications:
I changed sizeof(alignment) to UInt(sizeofValue(alignment)) because sizeof in Swift only takes a type (such as sizeof(Int)). The UInt() is a constructor that turns the Int returned by sizeofValue into the UInt needed by this call.
I make alignmentSetting into an array so that it could be passed to an UnsafePointer of that type. This also better matches the original version.
I changed CTParagraphStyleSpecifier.Alignment to just .Alignment since the first part isn't needed since spec is of type CTParagraphStyleSpecifier.
The hardcoded 1 is OK in this case because you are passing just one value, but the more general solution would be to do:
let paragraphStyle = CTParagraphStyleCreate(alignmentSetting, UInt(alignmentSetting.count))
Here is the full working method, based on the corrections and help:
func drawText(textToDraw:String, myPDFContext:CGContextRef, textcolor:NSColor, textfont:NSFont, boxsize:CGRect, pagesize:CGSize) {
let _font = NSFont(name:textfont.fontName, size:textfont.pointSize )
if (_font == nil) {
println("Font [\(textfont)] does not exist")
return
}
let myBoxWidth = boxsize.size.width
let myBoxHeight = boxsize.size.height
let myBoxxpos = boxsize.origin.x
let myBoxypos = boxsize.origin.y
let frameRect = CGRectMake(myBoxxpos, myBoxypos,myBoxWidth,myBoxHeight);
let framePath = CGPathCreateMutable()
CGPathAddRect(framePath, nil, frameRect);
// create attributed string
let attrStr = CFAttributedStringCreateMutable(kCFAllocatorDefault, 0);
CFAttributedStringReplaceString (attrStr, CFRangeMake(0, 0), textToDraw);
// create font
let font = CTFontCreateWithName(textfont.fontName, textfont.pointSize, nil);
var alignment = CTTextAlignment.TextAlignmentLeft
let alignmentSetting = [CTParagraphStyleSetting(spec: .Alignment, valueSize: UInt(sizeofValue(alignment)), value: &alignment)]
//let paragraphStyle = CTParagraphStyleCreate(alignmentSetting, 1)
let paragraphStyle = CTParagraphStyleCreate(alignmentSetting, UInt(alignmentSetting.count))
CFAttributedStringSetAttribute(attrStr, CFRangeMake(0, CFAttributedStringGetLength(attrStr)), kCTParagraphStyleAttributeName, paragraphStyle)
// set font attribute
CFAttributedStringSetAttribute(attrStr, CFRangeMake(0, CFAttributedStringGetLength(attrStr)), kCTFontAttributeName, font);
// set color attribute
CFAttributedStringSetAttribute(attrStr,CFRangeMake(0, CFAttributedStringGetLength(attrStr)), kCTForegroundColorAttributeName,textcolor.CGColor);
// Prepare the text using a Core Text Framesetter.
let framesetter = CTFramesetterCreateWithAttributedString(attrStr);
// Get the frame that will do the rendering.
let currentRange = CFRangeMake(0, 0);
let frameRef = CTFramesetterCreateFrame(framesetter, currentRange, framePath, nil);
// Put the text matrix into a known state. This ensures
// that no old scaling factors are left in place.
CGContextSetTextMatrix(myPDFContext, CGAffineTransformIdentity);
// Draw the frame.
CTFrameDraw(frameRef, myPDFContext);
}
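For completeness, a hedged sketch of how this could be driven when writing a PDF (pre-Swift 3 syntax to match the method above; the output path, page size and margins are illustrative):
var mediaBox = CGRectMake(0, 0, 612, 792) // US Letter in points (illustrative)
let pdfURL = NSURL(fileURLWithPath: "/tmp/output.pdf") // illustrative output path
let pdfContext = CGPDFContextCreateWithURL(pdfURL as CFURL, &mediaBox, nil)!
CGPDFContextBeginPage(pdfContext, nil)
drawText("Hello, PDF", myPDFContext: pdfContext,
    textcolor: NSColor.blackColor(), textfont: NSFont.systemFontOfSize(14),
    boxsize: CGRectInset(mediaBox, 72, 72), pagesize: mediaBox.size)
CGPDFContextEndPage(pdfContext)
CGPDFContextClose(pdfContext)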

AVURLAsset getting video size

This is pretty frustrating. I'm trying to get the size of an AVURLAsset, but want to avoid naturalSize since Xcode tells me it is deprecated in iOS 5.
But: what's the replacement?
I can't find any clue on how to get the video dimensions without using naturalSize...
Resolution in Swift 3:
func resolutionSizeForLocalVideo(url:NSURL) -> CGSize? {
guard let track = AVAsset(URL: url).tracksWithMediaType(AVMediaTypeVideo).first else { return nil }
let size = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform)
return CGSize(width: fabs(size.width), height: fabs(size.height))
}
For Swift 4:
func resolutionSizeForLocalVideo(url:NSURL) -> CGSize? {
guard let track = AVAsset(url: url as URL).tracks(withMediaType: AVMediaType.video).first else { return nil }
let size = track.naturalSize.applying(track.preferredTransform)
return CGSize(width: fabs(size.width), height: fabs(size.height))
}
Solutions without preferredTransform do not return correct values for some videos on the latest devices!
I just checked the documentation online, and the naturalSize method is deprecated for the AVAsset object. However, there should always be an AVAssetTrack which refers to the AVAsset, and the AVAssetTrack has a naturalSize method that you can call.
naturalSize
The natural dimensions of the media data referenced by the track. (read-only)
@property(nonatomic, readonly) CGSize naturalSize
Availability
Available in iOS 4.0 and later. Declared In AVAssetTrack.h
Via: AVAssetTrack Reference for iOS
The deprecation warning on the official documentation suggests, "Use the naturalSize and preferredTransform, as appropriate, of the asset’s video tracks instead (see also tracksWithMediaType:)."
I changed my code from:
CGSize size = [movieAsset naturalSize];
to
CGSize size = [[[movieAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize];
It's less pretty and less safe now but won't break when they drop that method.
The deprecation warning says:
Use the naturalSize and preferredTransform, as appropriate,
of the asset’s video tracks instead (see also tracksWithMediaType:).
So we need an AVAssetTrack, and we want its naturalSize and preferredTransform. This can be accessed with the following:
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGSize dimensions = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
asset is obviously your AVAsset.
This is a fairly simple extension for AVAsset in Swift 4 to get the size of the video, if available:
extension AVAsset {
var screenSize: CGSize? {
if let track = tracks(withMediaType: .video).first {
let size = __CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform)
return CGSize(width: fabs(size.width), height: fabs(size.height))
}
return nil
}
}
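Usage is then a one-liner (videoURL here is assumed to be the URL of your asset):
if let size = AVAsset(url: videoURL).screenSize {
    print("Video dimensions: \(size.width) x \(size.height)")
}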
To derive the dimensions of an AVAsset, you should calculate the union of all the visual track rects (after applying their corresponding preferred transforms):
CGRect unionRect = CGRectZero;
for (AVAssetTrack *track in [asset tracksWithMediaCharacteristic:AVMediaCharacteristicVisual]) {
CGRect trackRect = CGRectApplyAffineTransform(CGRectMake(0.f,
0.f,
track.naturalSize.width,
track.naturalSize.height),
track.preferredTransform);
unionRect = CGRectUnion(unionRect, trackRect);
}
CGSize naturalSize = unionRect.size;
Methods that rely on CGSizeApplyAffineTransform fail when your asset contains tracks with non-trivial affine transformation (e.g., 45 degree rotations) or if your asset contains tracks with different origins (e.g., two tracks playing side-by-side with the second track's origin augmented by the width of the first track).
See: MediaPlayerPrivateAVFoundationCF::sizeChanged() at https://opensource.apple.com/source/WebCore/WebCore-7536.30.2/platform/graphics/avfoundation/cf/MediaPlayerPrivateAVFoundationCF.cpp
For Swift 5
let assetSize = asset.tracks(withMediaType: .video)[0].naturalSize
Swift version of @David_H's answer:
extension AVAsset {
func resolutionSizeForLocalVideo() -> CGSize? {
var unionRect = CGRect.zero;
for track in self.tracks(withMediaCharacteristic: .visual) {
let trackRect = CGRect(x: 0, y: 0, width:
track.naturalSize.width, height:
track.naturalSize.height).applying(track.preferredTransform)
unionRect = unionRect.union(trackRect)
}
return unionRect.size
}
}
For iOS versions 15.0 and above,
extension AVAsset {
func naturalSize() async -> CGSize? {
guard let tracks = try? await loadTracks(withMediaType: .video) else { return nil }
guard let track = tracks.first else { return nil }
guard let size = try? await track.load(.naturalSize) else { return nil }
return size
}
}
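Since the loading is asynchronous, call it from an async context, for example (videoURL is assumed to be the URL of your asset):
Task {
    if let size = await AVAsset(url: videoURL).naturalSize() {
        print("Video dimensions: \(size.width) x \(size.height)")
    }
}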
