How can you load a font (TTF) from a file using Core Text? - cocoa

Prior to OSX 10.6, ATSFontActivateFromFileSpecification/ATSFontActivateFromFileReference were available and could be used to load a font from a file. I can't find anything similar in Core Text.

You can get a CTFontRef from a font file by going via a CGFontRef:
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/path/to/font"), kCFURLPOSIXPathStyle, false);
CGDataProviderRef dataProvider = CGDataProviderCreateWithURL(url);
CGFontRef theCGFont = CGFontCreateWithDataProvider(dataProvider);
CTFontRef theCTFont = CTFontCreateWithGraphicsFont(theCGFont, 12.0, NULL, NULL); // 12.0 is an arbitrary point size
CFRelease(theCGFont);
CFRelease(dataProvider);
CFRelease(url);
// do something with the CTFontRef here
CFRelease(theCTFont);

It looks like CTFontManagerCreateFontDescriptorsFromURL is the Core Text replacement.
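For instance, a minimal Objective-C sketch (the path and the 12-point size are placeholders):
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("/path/to/font.ttf"), kCFURLPOSIXPathStyle, false);
CFArrayRef descriptors = CTFontManagerCreateFontDescriptorsFromURL(url);
if (descriptors != NULL && CFArrayGetCount(descriptors) > 0)
{
    CTFontDescriptorRef descriptor = (CTFontDescriptorRef)CFArrayGetValueAtIndex(descriptors, 0);
    CTFontRef theCTFont = CTFontCreateWithFontDescriptor(descriptor, 12.0, NULL);
    // do something with the CTFontRef here
    CFRelease(theCTFont);
}
if (descriptors) CFRelease(descriptors);
CFRelease(url);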

Here's an updated version of how to do this as of 2020. It's much simpler now; 12 is used as an arbitrary point size.
let fontURL = URL(fileURLWithPath: "path/to/font.otf")
let fd = CTFontManagerCreateFontDescriptorsFromURL(fontURL as CFURL) as! [CTFontDescriptor]
let theCTFont = CTFontCreateWithFontDescriptor(fd[0], 12.0, nil)

Alternatively, you can register the font file with the font manager so it can then be looked up by name:
NSURL *fontURL = [[NSBundle mainBundle] URLForResource:@"Crystal" withExtension:@"ttf"];
assert(fontURL);
CFErrorRef error = NULL;
if (!CTFontManagerRegisterFontsForURL((__bridge CFURLRef)fontURL, kCTFontManagerScopeProcess, &error))
{
    CFShow(error);
    abort();
}
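Once registration succeeds, you can create the font by name like any installed font. A quick sketch, assuming the font's PostScript name is also "Crystal" (it may differ from the file name, so check the font descriptor if you get a fallback font):
// Assumes the registered font's PostScript name is "Crystal" (may differ from the file name).
CTFontRef crystalFont = CTFontCreateWithName(CFSTR("Crystal"), 12.0, NULL);
// ... use crystalFont ...
CFRelease(crystalFont);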

Related

Browse directory from UI and select a file, get the location of the file and read its content into a string variable for Mac OS app in Objective C

What I want is very similar to FolderBrowserDialog and FolderBrowserDialog.SelectedPath in C#.
I want to build a Mac OS app (Objective-C) where the user can browse a directory from the UI, select a file, get the location of the file and read its content into a string variable. What UI component should I use for browsing and getting the location of the file?
For file manipulation, I understand I need to work with NSFileManager. But how do I carry out the first part? Is there any good documentation on Cocoa application UI programming for Mac OS for carrying out this task?
There is a question here that sounds similar to this one, but it does not discuss the UI part.
Here is a simple example of how this can be done:
- (IBAction)buttonAction:(id)sender {
    NSOpenPanel *openPanel = [NSOpenPanel new];
    openPanel.canChooseFiles = YES;
    openPanel.canChooseDirectories = NO;
    openPanel.allowsMultipleSelection = YES;
    [openPanel beginWithCompletionHandler:^(NSInteger result) {
        if (result == NSFileHandlingPanelOKButton) {
            for (NSURL *fileURL in openPanel.URLs) {
                NSData *fileContent = [NSData dataWithContentsOfURL:fileURL];
                NSString *stringFileContent = [[NSString alloc] initWithData:fileContent encoding:NSUTF8StringEncoding];
                NSLog(@"File content: %@", stringFileContent);
            }
        }
    }];
}
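If you prefer a synchronous flow instead of the completion handler, runModal also works; a small sketch reusing the openPanel configured above:
// Synchronous variant: blocks until the user dismisses the panel.
if ([openPanel runModal] == NSModalResponseOK) {
    NSString *contents = [NSString stringWithContentsOfURL:openPanel.URL
                                                  encoding:NSUTF8StringEncoding
                                                     error:NULL];
    NSLog(@"File content: %@", contents);
}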

Creating a Cocoapod library and [UIImage imageNamed:] returns NULL

I have https://github.com/fulldecent/FDChessBoardView working great and am now starting the project again from scratch with pod lib create in order to make a Podspec.
There is one {'.h','.m'} file and some images. The images are in the file system in the provided Pod/Assets folder. The resources are noted in the podspec file with:
s.resource_bundles = {
'FDChessboardView' => ['Pod/Assets/*']
}
(I have also tried adding these files directly to the Development Pods/FDChessboardView/Resources group inside Xcode.)
Inside the library implementation file I need to refer to these images. I have tried:
NSString *bundlePath = [[NSBundle mainBundle] pathForResource:@"FDChessboardView" ofType:@"bundle"];
NSBundle *bundle = [NSBundle bundleWithPath:bundlePath];
NSString *imagePath = [bundle pathForResource:@"aa" ofType:@"png"];
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
Here the imagePath is set correctly. This file exists and file confirms it is a PNG:
[...]aa.png: PNG image data, 182 x 164, 8-bit/color RGBA, non-interlaced
However the UIImage is NULL.
I have also tried these:
image = [UIImage imageNamed:@"aa"];
UIImage *image = [UIImage imageNamed:@"FDChessboardView.bundle/aa.png"];
image = [UIImage imageNamed:@"aa" inBundle:bundle compatibleWithTraitCollection:nil];
All of them produce NULL.
Could anyone help point me in the correct direction for how to load these image assets?
Thank you!
Will
Ok - I found a solution that works for me:
In the lib I am creating, I have the following class: AEBubbleField.h / AEBubbleView.m.
I also have an image called background.png in the Assets folder of the generated file structure. I want to use this in my lib files above, so I add the following to the podspec (to ensure the PNG is copied with my lib classes):
s.ios.resource_bundle = { 'AEBubbleField' => 'Pod/Assets/*.png' }
I can then access the image in the following manner:
NSBundle *bundle = [NSBundle bundleForClass:[self class]]; // Any class in the lib can be used here, I just went for the current class
UIImage *backgroundImage = [UIImage imageNamed:@"background.png" inBundle:bundle compatibleWithTraitCollection:nil];
This is the only way I can get the image and use it. All other methods I have tried simply return nil like the original question's.
This answer works for me:
https://stackoverflow.com/a/34324540/1897767
My Swift code is:
class func loadImage(name: String) -> UIImage? {
    let podBundle = NSBundle(forClass: MyClass.self)
    if let url = podBundle.URLForResource("MyBundleName", withExtension: "bundle") {
        let bundle = NSBundle(URL: url)
        return UIImage(named: name, inBundle: bundle, compatibleWithTraitCollection: nil)
    }
    return nil
}
Rob's answer didn't work for me, I think because I didn't change the resource bundle setup from the default in pod lib create:
s.resource_bundles = {
  'PodCore' => ['Pod/Assets/*.png',
                'Pod/Classes/**/*.xib',
                'Pod/Classes/**/*.storyboard']
}
I was able to load images from that resource bundle, but had to define it explicitly.
NSString *bundlePath = [[NSBundle mainBundle] pathForResource:@"PodCore" ofType:@"bundle"];
NSBundle *podBundle = [NSBundle bundleWithPath:bundlePath];
UIImage *image = [UIImage imageNamed:@"home_black.png"
                            inBundle:podBundle
       compatibleWithTraitCollection:nil];
If you're using Swift 2 or later, you can access it this way:
// should access the bundle for this class
let bundle = NSBundle(forClass: self.classForCoder)
// successfully loaded image
let myImageToUse = UIImage(named: "image", inBundle: bundle, compatibleWithTraitCollection: nil)

Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?

Setting the scene
I am working on a video processing app that runs from the command line to read in, process and then export video. I'm working with 4 tracks.
Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
Clips with alpha, which are positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
A music audio track.
An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.
I have this all working, and can composite it and export it correctly using AVAssetExportSession.
The problem
What I now want to do is apply filters and gradients to the ugcVideoComposition.
My research so far suggests that this is done by using AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters and then writing that out.
I haven't yet got all the functionality I had above working, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.
BOOL done = NO;
while (!done)
{
    while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
    {
        CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
        if (sampleBuffer)
        {
            // Let's try create an image....
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];
            // < Apply filters and transformations to the CIImage here >
            // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO THE SAMPLE BUFFER??? >
            // Write things back out.
            [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
        }
        else
        {
            // Find out why we couldn't get another sample buffer....
            if (assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = assetReader.error;
                // Do something with this error.
            }
            else
            {
                // Some kind of success....
                done = YES;
                [assetWriter finishWriting];
            }
        }
    }
}
As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects I need. What I don't know how to do is put the resulting manipulated image BACK into the sample buffer so I can write it out again.
The question
Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?
Any help appreciated - the AVFoundation documentation is terrible and either misses crucial points (like how to put an image back after you've extracted it) or is focused on rendering images to the iPhone screen, which is not what I want to do.
Much appreciated and thanks!
I eventually found a solution by digging through a lot of half-complete samples and poor AVFoundation documentation from Apple.
The biggest confusion is that while at a high level, AVFoundation is "reasonably" consistent between iOS and OSX, the lower level items behave differently, have different methods and different techniques. This solution is for OSX.
Setting up your AssetWriter
The first thing is to make sure that when you set up the asset writer, you add an adaptor to read in from a CVPixelBuffer. This buffer will contain the modified frames.
// Create the asset writer input and add it to the asset writer.
AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
// Now create an adaptor that writes pixels too!
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
sourcePixelBufferAttributes:nil];
assetWriterVideoInput.expectsMediaDataInRealTime = NO;
[assetWriter addInput:assetWriterVideoInput];
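(Side note, and an assumption on my part rather than something from the original answer: if you pass a pixel-buffer attributes dictionary instead of nil, the adaptor can also vend a pixelBufferPool matching that format once writing has started, e.g.:)
// Hypothetical attributes; adjust the format and dimensions to your video.
// Pass this as sourcePixelBufferAttributes: above instead of nil.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey           : @640,
    (id)kCVPixelBufferHeightKey          : @360
};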
Reading and Writing
The challenge here is that I couldn't find directly comparable methods between iOS and OSX - iOS has the ability to render a context directly to a pixel buffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.
Note that you should also add QuartzCore.framework to your Xcode project.
Creating the context on OSX.
CIContext *context = [CIContext contextWithCGContext:[[NSGraphicsContext currentContext] graphicsPort]
                                             options:nil]; // We don't want to always create a context, so we put it outside the loop
Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the sample buffer.
while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
{
    CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        // GRAB AN IMAGE FROM THE SAMPLE BUFFER
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                 [NSNumber numberWithInt:640], kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:360], kCVPixelBufferHeightKey,
                                 nil];
        CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

        //-----------------
        // FILTER IMAGE - APPLY ANY FILTERS IN HERE
        CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
        [filter setDefaults];
        [filter setValue:inputImage forKey:kCIInputImageKey];
        [filter setValue:@1.0f forKey:kCIInputIntensityKey];
        CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

        //-----------------
        // RENDER THE OUTPUT IMAGE BACK TO A PIXEL BUFFER
        // 1. Firstly render the image
        CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
        // 2. Grab the size
        CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
        // 3. Convert the CGImage to a pixel buffer
        CVPixelBufferRef pxBuffer = NULL;
        // pixelBufferFromCGImage is documented below.
        pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];
        // 4. Write things back out.
        // Calculate the frame time
        CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
        CMTime presentTime = CMTimeAdd(currentTime, frameTime); // Note that if you actually had a sequence of images (an animation or transition perhaps), your frameTime would represent the number of images / frames, not just 1 as I've done here.
        // Finally write out using the adaptor.
        [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

        // Release the intermediate objects we created (both follow the Create rule).
        CGImageRelease(finalImage);
        CVPixelBufferRelease(pxBuffer);

        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
    }
    else
    {
        // Find out why we couldn't get another sample buffer....
        if (assetReader.status == AVAssetReaderStatusFailed)
        {
            NSError *failureError = assetReader.error;
            // Do something with this error.
        }
        else
        {
            // Some kind of success....
            done = YES;
            [assetWriter finishWriting];
        }
    }
}
Creating the PixelBuffer
There MUST be an easier way; however, for now this works and is the only way I found to get directly from a CIImage to a pixel buffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's actual bytes-per-row; rows may be padded beyond 4 * width.
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
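For what it's worth, there may now be an easier path than the CGImage round-trip: on macOS 10.11 and later (and iOS 5 and later), CIContext can render a CIImage straight into a CVPixelBuffer. A rough sketch, assuming the adaptor was created with non-nil sourcePixelBufferAttributes so its pixelBufferPool is populated (not something the original setup above does):
// Requires macOS 10.11+ for -[CIContext render:toCVPixelBuffer:].
CVPixelBufferRef outBuffer = NULL;
CVReturn poolStatus = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         adaptor.pixelBufferPool,
                                                         &outBuffer);
if (poolStatus == kCVReturnSuccess && outBuffer != NULL)
{
    [context render:outputImage toCVPixelBuffer:outBuffer];
    [adaptor appendPixelBuffer:outBuffer withPresentationTime:presentTime];
    CVPixelBufferRelease(outBuffer);
}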
Try using SDAVAssetExportSession (SDAVAssetExportSession on GitHub)
and then implementing a delegate to process the pixels:
- (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
{
    // Do your CIImage and CIFilter work in here, rendering the result into renderBuffer.
}

Taking a screenshot of MKMapView

I'm trying to get a screenshot of an MKMapView, and I'm using the following code:
UIGraphicsBeginImageContext(myMapView.frame.size);
[myMapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenShot;
And I'm getting an almost blank image with the map's current-location icon and a Google logo in it.
What could be causing that?
I should tell you that myMapView is actually on another view controller's view, but since I'm getting the blue dot showing the location and the Google logo, I assume the reference I have is the correct one.
Thank you.
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..];
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Note that overlays and annotations are not rendered into the snapshot; you have to render them onto the resulting snapshot image yourself. The provided MKMapSnapshot object has a handy helper method to do the mapping between coordinates and points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
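For example, a rough sketch of compositing annotation markers on top of the snapshot (pinImage and the annotations array are placeholders you would supply yourself):
// Rough sketch: draw the snapshot, then stamp a placeholder marker image at
// each annotation's point. pinImage and annotations are assumed to exist.
UIImage *finalImage = nil;
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];
for (id<MKAnnotation> annotation in annotations) {
    CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
    [pinImage drawAtPoint:CGPointMake(point.x - pinImage.size.width / 2.0,
                                      point.y - pinImage.size.height)];
}
finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();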
As mentioned here, you can try this
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Obtain ReferenceURL after saving an image using UIImageWriteToSavedPhotosAlbum()

I want to obtain the reference URL to the image that I saved into the camera roll using UIImageWriteToSavedPhotosAlbum().
On iOS 4.1 or above this can be done easily with the Assets Library:
ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *url, NSError *error) {
    if (error == nil) {
        savedURL = url;
    }
};
UIImage *originalImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSMutableDictionary *metadata = (NSMutableDictionary *)[info objectForKey:UIImagePickerControllerMediaMetadata];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:originalImage.CGImage
                             metadata:metadata
                      completionBlock:completionBlock];
But I cannot figure out a smart way for earlier iOS versions, where the only way of saving an image to the camera roll is UIImageWriteToSavedPhotosAlbum(). One approach I can think of is searching for the saved image using ALAssetsGroup etc., but that is not elegant, and it only helps on iOS 4.0.
Thank you in advance,
Kiyo
Use writeImageToSavedPhotosAlbum:orientation:completionBlock: instead:
[library writeImageToSavedPhotosAlbum:[originalImage CGImage]
                          orientation:(ALAssetOrientation)[originalImage imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        NSLog(@"error"); // oops, error!
    } else {
        NSLog(@"url %@", assetURL); // assetURL is the url you're looking for
    }
}];
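If you then need the actual asset (for example to read its metadata or a thumbnail), you can resolve the returned URL. A quick sketch, assuming you've held on to the library instance and the assetURL from the completion block:
// Resolve the asset URL returned by the completion block.
[library assetForURL:assetURL
         resultBlock:^(ALAsset *asset) {
             // e.g. inspect [asset valueForProperty:ALAssetPropertyDate] or grab a thumbnail
             NSLog(@"Loaded asset: %@", asset);
         }
        failureBlock:^(NSError *error) {
             NSLog(@"Could not load asset: %@", error);
         }];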
