Adding an image inside a TextView using native NSTextAttachment and NSAttributedString in NativeScript

I am trying to add an image inside a TextView using the native NSAttributedString and NSTextAttachment classes, with some help from this article here.
However, I am unable to get it working. I am using the nativescript-mediafilepicker plugin to pick the image from the photo library, and then converting the PHImage to a UIImage using one of its built-in methods. The TextView is not getting updated with the image; I am able to append more text through the NSAttributedString, but not the image.
Here's my code:
// creating and initializing a new NSMutableAttributedString
var attributedString = NSMutableAttributedString.alloc().initWithString(textview.ios.text);
textview.ios.attributedText = attributedString;
// value is a UIImage object returned by the convertPHImageToUIImage method
var image = value;
console.log(image);
// the above log prints: <UIImage: 0x2817ac4d0> size {4032, 3024} orientation 0 scale 1.000000
let oldWidth = image.size.width;
// console.log(oldWidth);
let scaleFactor = oldWidth / (textview.ios.frame.size.width - 10);
//console.log(scaleFactor);
let orientation="up";
//create NStextAttachment
let textAttachment = NSTextAttachment.alloc().init();
//adding UIImage object to NSTextAttachment
textAttachment.image = UIImage.imageWithCGImageScaleOrientation(image.CGImage, scaleFactor, orientation);
// console.dir(textAttachment);
//creating a new NSAttributedString
let attrStringWithImage = NSAttributedString.alloc().init();
//console.dir(attrStringWithImage);
attrStringWithImage.attachment = textAttachment;
console.dir(attrStringWithImage)
// appending the NSAttributedString to the mutable string
attributedString.appendAttributedString(attrStringWithImage);
//console.log(attributedString.containsAttachmentsInRange(textview.ios.selectedRange));
textview.ios.attributedText = attributedString;
//textview.ios.textStorage.insertAttributedStringAtIndex(attrStringWithImage,textview.ios.selectedRange.location)
//this doesn't work either

Install tns-platform-declarations if you are using TypeScript; that will make your life easier when you want to access native APIs.
UIImage.imageWithCGImageScaleOrientation(cgImage, scale, orientation);
Note that the orientation argument here must be a UIImageOrientation enum value (for example UIImageOrientation.Up), not the string "up". These docs will help you understand how Objective-C types are cast to JavaScript / TypeScript.
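For reference, here is a rough Swift/UIKit sketch of the native calls the snippet above is marshalling (an illustration only, not NativeScript code; it assumes a UITextView called textView and a picked UIImage called image):
import UIKit

func appendImage(_ image: UIImage, to textView: UITextView) {
    // Start from the text view's current attributed text.
    let attributedString = NSMutableAttributedString(attributedString: textView.attributedText)

    // Scale the image down to the text view's width.
    let scaleFactor = image.size.width / (textView.frame.size.width - 10)
    guard let cgImage = image.cgImage else { return }

    let attachment = NSTextAttachment()
    attachment.image = UIImage(cgImage: cgImage, scale: scaleFactor, orientation: .up)

    // An attributed string wrapping an attachment has to be created with this
    // initializer; a plain NSAttributedString has no settable attachment property.
    let imageString = NSAttributedString(attachment: attachment)
    attributedString.append(imageString)

    textView.attributedText = attributedString
}
In the NativeScript snippet above, the equivalents would be NSAttributedString.attributedStringWithAttachment(textAttachment) and UIImageOrientation.Up.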

Related

How to efficiently draw a Core Image with a Filter into an NSView?

I am applying a perspective Core Image filter to transform and draw a CIImage into a custom NSView and it seems slower than I expected (e.g., I drag a slider that alters the perspective transformation and the drawing lags behind the slider value). Here is my custom drawRect method, where self.mySourceImage is a CIImage:
- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];
    if (self.perspectiveFilter == nil)
        self.perspectiveFilter = [CIFilter filterWithName:@"CIPerspectiveTransform"];
    [self.perspectiveFilter setValue:self.mySourceImage
                              forKey:@"inputImage"];
    [self.perspectiveFilter setValue:[CIVector vectorWithX:0 Y:0]
                              forKey:@"inputBottomLeft"];
    // ... set other vector parameters based on slider value
    CIImage *outputImage = [self.perspectiveFilter outputImage];
    // dstrect and srcRect are computed elsewhere in the view
    [outputImage drawInRect:dstrect
                   fromRect:srcRect
                  operation:NSCompositingOperationSourceOver
                   fraction:0.8];
}
Here is an example output (screenshot not included here).
My experience with image filters tells me that this should be much faster. Is there some "best practice" that I am missing to speed this up?
Note that I only create the filter once (stored as a property).
I did make sure the view has a CALayer for a backing store. Should I be adding the filter to a CALayer somehow?
Note that I never create a CIContext -- I assume there is an implicit context used by NSView? Should I create a CIContext and render to an image and draw the image?
Here's how I use a GLKView in UIKit:
I prefer subclassing GLKView to allow for a few things:
initializing from code
overriding draw(rect:) to get the equivalent of UIImageView's contentMode (aspect fit in particular)
when using scaleAspectFit, creating a "clear color" for the background color to match the surrounding superviews
That said, here's what I have:
import GLKit

class ImageView: GLKView {

    var renderContext: CIContext
    var rgb: (Int?, Int?, Int?)!
    var myClearColor: UIColor!

    var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    var uiImage: UIImage? {
        get {
            let final = renderContext.createCGImage(self.image, from: self.image.extent)
            return UIImage(cgImage: final!)
        }
    }

    init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    override init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    override func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0);
            glClear(0x00004000)
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2);
            glBlendFunc(1, 0x0303);
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes:
The vast majority of this was taken from something written a few years back (in Swift 2, I think) on objc.io, along with its associated GitHub project. In particular, check out their GLKView subclass, which has code for scaleAspectFill and other content modes.
Note the usage of a single CIContext called renderContext. I use it to create a UIImage when needed (in iOS you "share" a UIImage).
I use a didSet with the image property to automatically call setNeedsDisplay when the image changes. (I also call this explicitly when an iOS device changes orientation.) I do not know the macOS equivalent of this call.
I hope this gives you a good start for using OpenGL in macOS. If it's anything like UIKit, trying to put a CIImage in an NSView doesn't involve the GPU, which is a bad thing.
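For completeness, here is a minimal, hypothetical usage sketch for the ImageView subclass above (iOS, Swift; it assumes the UIColor rgb() helper that draw(_:) relies on, plus an image named "photo" in the asset catalog):
import UIKit
import CoreImage

class ViewController: UIViewController {
    let imageView = ImageView() // the GLKView subclass above

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.clearColor = .black // read back via the rgb() helper in draw(_:)
        view.addSubview(imageView)
        NSLayoutConstraint.activate([
            imageView.topAnchor.constraint(equalTo: view.topAnchor),
            imageView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            imageView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            imageView.trailingAnchor.constraint(equalTo: view.trailingAnchor)
        ])

        // Assigning the image triggers setNeedsDisplay() via didSet.
        if let uiImage = UIImage(named: "photo"), let ciImage = CIImage(image: uiImage) {
            imageView.image = ciImage
        }
    }
}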

How do you embed an image in an NSAttributedString?

I am trying to create an NSAttributedString that includes an NSImage for an OS X application.
I have tried a few different ways, but with this basic code:
let image = NSImage(named: "super-graphic")!
let attachment = NSTextAttachment()
attachment.image = image
let imageString = NSAttributedString(attachment: attachment)
When I set this on an NSLabel, or NSTextField attributed string, the image doesn't render.
Is it possible to combine NSImage and NSAttributedString to embed an image in an attributed string on OS X?
Well, I feel silly, this code works:
let image = NSImage(named: "super-graphic")!
let attachment = NSTextAttachment()
let cell = NSTextAttachmentCell(imageCell: image)
attachment.attachmentCell = cell
let imageString = NSAttributedString(attachment: attachment)
The key difference was to avoid using the "image" property of the NSTextAttachment and to use an NSTextAttachmentCell instead.
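As a follow-up, a small sketch (macOS, assuming an existing NSTextField named label) of mixing the attachment-based imageString above with plain text:
import AppKit

// imageString is the NSAttributedString built with NSTextAttachmentCell above.
let combined = NSMutableAttributedString(string: "Before the image ")
combined.append(imageString)
combined.append(NSAttributedString(string: " and after it."))

// label is assumed to be a non-editable NSTextField used as a label.
label.attributedStringValue = combined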

Initializing NSImage in Swift

Trying to initialize an NSImage object:
var image = NSImage("Images/pause_work_normal.png")
But I get an error: ambiguous reference to member 'NSImage.init'.
Try this:
var image:NSImage!
image = NSImage(named:"Images/pause_work_normal.png")
Or the short way
var image = NSImage(named:"Images/pause_work_normal.png")
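One caveat worth adding here (my note, not part of the original answer): NSImage(named:) looks the resource up by name in the app bundle or asset catalog, so a path-like name such as "Images/pause_work_normal.png" only works if the resource is really registered under that exact name. For a file on disk, NSImage(contentsOfFile:) is the initializer to use; a quick sketch:
import AppKit

// Bundled resource (asset catalog or bundle image):
let bundled = NSImage(named: "pause_work_normal")

// Arbitrary file on disk (hypothetical path):
let fromDisk = NSImage(contentsOfFile: "/Users/me/Images/pause_work_normal.png")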

Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?

Setting the scene
I am working on a video processing app that runs from the command line to read in, process, and then export video. I'm working with 4 tracks:
Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
Clips with alpha, which are positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
A music audio track.
An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.
I have this all working, and can composite it and export it correctly using AVAssetExportSession.
The problem
What I now want to do is apply filters and gradients to the ugcVideoComposition.
My research so far suggests that this is done by using AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters, and then writing that out.
I haven't yet got all the functionality I had above working, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.
BOOL done = NO;
while (!done)
{
    while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
    {
        CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
        if (sampleBuffer)
        {
            // Let's try create an image....
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];
            // < Apply filters and transformations to the CIImage here
            // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >
            // Write things back out.
            [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
        }
        else
        {
            // Find out why we couldn't get another sample buffer....
            if (assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = assetReader.error;
                // Do something with this error.
            }
            else
            {
                // Some kind of success....
                done = YES;
                [assetWriter finishWriting];
            }
        }
    }
}
As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects, etc., that I need. What I don't know how to do is put the resulting manipulated image BACK into the sample buffer so I can write it out again.
The question
Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?
Any help appreciated - the AVFoundation documentation is terrible and either misses crucial points (like how to put an image back after you've extracted it) or is focused on rendering images to the iPhone screen, which is not what I want to do.
Much appreciated and thanks!
I eventually found a solution by digging through a lot of half complete samples and poor AVFoundation documentation from Apple.
The biggest confusion is that, while AVFoundation is "reasonably" consistent between iOS and OSX at a high level, the lower-level items behave differently, have different methods, and require different techniques. This solution is for OSX.
Setting up your AssetWriter
The first thing is to make sure that when you set up the asset writer, you add an adaptor to read in from a CVPixelBuffer. This buffer will contain the modified frames.
// Create the asset writer input and add it to the asset writer.
AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
// Now create an adaptor that writes pixels too!
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
    sourcePixelBufferAttributes:nil];
assetWriterVideoInput.expectsMediaDataInRealTime = NO;
[assetWriter addInput:assetWriterVideoInput];
Reading and Writing
The challenge here is that I couldn't find directly comparable methods between iOS and OSX - iOS has the ability to render a context directly to a PixelBuffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.
Note that you should add QuartzCore.framework to your Xcode project as well.
Creating the context on OSX.
CIContext *context = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil]; // We don't want to always create a context so we put it outside the loop
Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.
while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
{
    CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        // GRAB AN IMAGE FROM THE SAMPLE BUFFER
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                 [NSNumber numberWithInt:640.0], kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:360.0], kCVPixelBufferHeightKey,
                                 nil];
        CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

        //-----------------
        // FILTER IMAGE - APPLY ANY FILTERS IN HERE
        CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
        [filter setDefaults];
        [filter setValue:inputImage forKey:kCIInputImageKey];
        [filter setValue:@1.0f forKey:kCIInputIntensityKey];
        CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

        //-----------------
        // RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
        // 1. First render the image
        CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
        // 2. Grab the size
        CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
        // 3. Convert the CGImage to a PixelBuffer
        CVPixelBufferRef pxBuffer = NULL;
        // pixelBufferFromCGImage is documented below.
        pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];
        // 4. Write things back out.
        // Calculate the frame time
        CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
        CMTime presentTime = CMTimeAdd(currentTime, frameTime); // Note that if you actually had a sequence of images (an animation or transition perhaps), your frameTime would represent the number of images / frames, not just 1 as I've done here.
        // Finally write out using the adaptor.
        [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
    }
    else
    {
        // Find out why we couldn't get another sample buffer....
        if (assetReader.status == AVAssetReaderStatusFailed)
        {
            NSError *failureError = assetReader.error;
            // Do something with this error.
        }
        else
        {
            // Some kind of success....
            done = YES;
            [assetWriter finishWriting];
        }
    }
}
} // closes the enclosing while (!done) loop, set up as in the earlier read/write snippet
Creating the PixelBuffer
There MUST be an easier way; however, for now, this works and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                          size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
Try using: SDAVAssetExportSession
SDAVAssetExportSession on GitHub
and then implement a delegate to process the pixels:
- (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
{
    // Wrap pixelBuffer in a CIImage, apply your CIFilter chain, then render the result into renderBuffer here.
}

Auto-Resizing of NSTextView and/or NSScrollView

I have an NSTextView inside an NSView (which is being used by an NSPopover; I don't know if that is relevant) that I'm trying to resize automatically and programmatically.
I have been struggling with a lot of stuff, namely:
Looking at NSLayoutManager and usedRectForTextContainer, which gives me aberrant size values
(usedRectForTextContainer: {{0, 0}, {0.001, 28}})
Modifying NSScrollView frame, [NSScrollView contentView], [NSScrollView documentView]
Getting rid of AutoLayout
I reached the point where I can resize my scrollview and my popover, but I can't get the actual height of the text inside the NSTextView.
Any kind of help would be appreciated.
-(void)resize
{
    // Get clipView and scrollView
    NSClipView *clip = [popoverTextView superview];
    NSScrollView *scroll = [clip superview];
    // First, have an extra long contentSize and ScrollView to test everything else
    [popover setContentSize:NSMakeSize([popover contentSize].width, 200)];
    [scroll setFrame:(NSRect){
        [scroll frame].origin.x, [scroll frame].origin.y,
        [scroll frame].size.width, 155
    }];
    NSLayoutManager *layout = [popoverTextView layoutManager];
    NSTextContainer *container = [popoverTextView textContainer];
    NSRect myRect = [layout usedRectForTextContainer:container]; // Broken
    // Now what?
}
I still have no idea why I can't use [layout usedRectForTextContainer:container], but I managed to get the NSTextView's height by using:
-(void)resize
{
    // Get glyph range for boundingRectForGlyphRange:
    NSRange range = [[myTextView layoutManager] glyphRangeForTextContainer:container];
    // Finally get the height
    float textViewHeight = [[myTextView layoutManager] boundingRectForGlyphRange:range
                                                                 inTextContainer:container].size.height;
}
Here is some functional code that I have tested myself, in Swift. usedRectForTextContainer is not broken; it is just set lazily. To force it to be set, I call glyphRangeForTextContainer first.
I found the answer here: http://stpeterandpaul.ca/tiger/documentation/Cocoa/Conceptual/TextLayout/Tasks/StringHeight.html
let textStorage = NSTextStorage(string: newValue as! String)
let textContainer = NSTextContainer(size: NSSize(width: view!.Text.frame.width, height:CGFloat.max))
let layoutManager = NSLayoutManager()
layoutManager.addTextContainer(textContainer)
textStorage.addLayoutManager(layoutManager)
textContainer.lineFragmentPadding = 0.0
textContainer.lineBreakMode = NSLineBreakMode.ByWordWrapping
textStorage.font = NSFont(name: "Times-Roman", size: 12)
layoutManager.glyphRangeForTextContainer(textContainer)
let rect = layoutManager.usedRectForTextContainer(textContainer)
Swift.print(rect)
The documentation also suggests that this is the case.

Resources