I am trying to save an instance of NSColor to a file like this:
writeF(node.lineColour.hueComponent)
writeF(node.lineColour.saturationComponent)
writeF(node.lineColour.brightnessComponent)
writeF(node.lineColour.alphaComponent)
where the write function is:
func writeF(var val: CGFloat) -> Bool {
    let nsd = NSData(bytes: &val, length: sizeof(CGFloat))
    let rv = oStream!.write(UnsafePointer(nsd.bytes), maxLength: sizeof(CGFloat))
    return rv > 0
}
And "node.lineColour" is just NSColor.blueColor(). It all compiles OK, but gives a run-time message at the first "writeF" line:
2015-10-01 07:57:43.871 canl[77917:8371660] An uncaught exception was raised
2015-10-01 07:57:43.871 canl[77917:8371660] *** -hueComponent not valid for the NSColor NSCalibratedWhiteColorSpace 0 1; need to first convert colorspace.
Apple's documentation on color spaces is very esoteric (if you already understand it, it's a fine reference; if not... good luck). Why is the above code wrong? Surely I should at least be able to retrieve the color components (CGFloats).
After swimming through the available documentation and trying different things, I found this to work:
let aColor = node.lineColour.colorUsingColorSpaceName(NSCalibratedRGBColorSpace)
if let culoare = aColor {
    writeF(culoare.redComponent)
    writeF(culoare.greenComponent)
    writeF(culoare.blueComponent)
    writeF(culoare.alphaComponent)
}
It also works for getting the hue, saturation, and brightness components, but I think I will go with RGB.
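For completeness, getting the colour back out is just the mirror image. A minimal sketch, assuming a hypothetical readF() helper that reads one CGFloat back from the input stream:

// Sketch only: readF() is a hypothetical counterpart to writeF().
let restored = NSColor(calibratedRed: readF(), green: readF(),
                       blue: readF(), alpha: readF())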
Edit: Fixed. A working sample is at https://github.com/halmueller/vImage-mac-sample.
I'm trying to read the feed of a MacBook Pro's Facetime camera to process it with the vImage framework. I'm following the example in Apple's VideoCaptureSample, which is written for iOS.
I'm getting hung up on creating the vImageConverter, which creates an image buffer that vImage can use. My call to vImageConverter_CreateForCVToCGImageFormat() fails, with the console error "insufficient information in srcCVFormat to decode image. vImageCVImageFormatError = -21601".
The same call works on iOS. But the image formats are different on iOS and macOS. On iOS, the vImageConverter constructor is able to infer the format information, but on macOS, it can't.
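One way to see which format each platform is handing you (a quick diagnostic of mine, not part of the original sample) is to read the FourCC straight off the pixel buffer:

// Diagnostic sketch: log the incoming pixel format code.
// '420f' = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (what I get on iOS),
// '2vuy' = kCVPixelFormatType_422YpCbCr8 (what I get on macOS).
let fourCC = CVPixelBufferGetPixelFormatType(pixelBuffer)
print(String(format: "pixel format: 0x%08x", fourCC))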
Here's my setup code:
func displayEqualizedPixelBuffer(pixelBuffer: CVPixelBuffer) {
    var error = kvImageNoError
    if converter == nil {
        let cvImageFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
        let deviceRGBSpace = CGColorSpaceCreateDeviceRGB()
        // Also tried in place of deviceRGBSpace below; same result (see note after the logs).
        let dciP3ColorSpace = CGColorSpace(name: CGColorSpace.dcip3)
        vImageCVImageFormat_SetColorSpace(cvImageFormat,
                                          deviceRGBSpace)
        print(cvImageFormat)
        // cgImageFormat is a vImage_CGImageFormat property declared elsewhere in the class.
        if let unmanagedConverter = vImageConverter_CreateForCVToCGImageFormat(
            cvImageFormat,
            &cgImageFormat,
            nil,
            vImage_Flags(kvImagePrintDiagnosticsToConsole),
            &error) {
            guard error == kvImageNoError else {
                return
            }
            converter = unmanagedConverter.takeRetainedValue()
        } else {
            return
        }
    }
    // ...the rest of the method (rendering via the converter) is omitted here...
}
When I run on iOS, I see in the console:
vImageCVFormatRef 0x101e12210:
type: '420f'
matrix:
0.29899999499321 0.58700001239777 0.11400000005960
-0.16873589158058 -0.33126410841942 0.50000000000000
0.50000000000000 -0.41868758201599 -0.08131241053343
chroma location: <RGB Base colorspace missing>
RGB base colorspace: =Bo
On macOS, though, the call to vImageConverter_CreateForCVToCGImageFormat returns nil, and I see:
vImageCVFormatRef 0x10133a270:
type: '2vuy'
matrix:
0.29899999499321 0.58700001239777 0.11400000005960
-0.16873589158058 -0.33126410841942 0.50000000000000
0.50000000000000 -0.41868758201599 -0.08131241053343
chroma location: <RGB Base colorspace missing>
RGB base colorspace: Ð ü
2018-03-13... kvImagePrintDiagnosticsToConsole: vImageConverter_CreateForCVToCGImageFormat error:
insufficient information in srcCVFormat to decode image. vImageCVImageFormatError = -21601
Note that the image type (four-letter code) is different, as is the RGB base colorspace. I've tried on the Mac using dciP3ColorSpace instead of deviceRGBSpace, and the results are the same.
What am I missing to get this vImageConverter created?
The -21601 error code means that the source CV format is missing chroma siting information (see http://dougkerr.net/Pumpkin/articles/Subsampling.pdf for a nice background on chroma siting). You can fix this by setting it explicitly with vImageCVImageFormat_SetChromaSiting. So, immediately after setting the format's color space, and before creating the converter (i.e. where you have print(cvImageFormat)), add the following:
vImageCVImageFormat_SetChromaSiting(cvImageFormat,
                                    kCVImageBufferChromaLocation_Center)
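In context, the setup from the question then reads (same variable names as above):

vImageCVImageFormat_SetColorSpace(cvImageFormat, deviceRGBSpace)
vImageCVImageFormat_SetChromaSiting(cvImageFormat,
                                    kCVImageBufferChromaLocation_Center)
print(cvImageFormat)
// ...then create the converter exactly as before.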
Cheers!
simon
So, while the answer about setting the chroma property on the vImage format does work, there is a better way to do it. Just set the property on the Core Video pixel buffer, and then when you call vImageCVImageFormat_CreateWithCVPixelBuffer() it will just work, like so:
NSDictionary *pbAttachments = @{
    (__bridge NSString *)kCVImageBufferChromaLocationTopFieldKey: (__bridge NSString *)kCVImageBufferChromaLocation_Center,
    (__bridge NSString *)kCVImageBufferAlphaChannelIsOpaque: (id)kCFBooleanTrue,
};
CVBufferRef pixelBuffer = cvPixelBuffer;
CVBufferSetAttachments(pixelBuffer, (__bridge CFDictionaryRef)pbAttachments, kCVAttachmentMode_ShouldPropagate);
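Since the question is in Swift, here is a rough (untested) Swift translation of the same idea:

// Sketch: set the attachments on the pixel buffer before calling
// vImageCVImageFormat_CreateWithCVPixelBuffer().
let pbAttachments: [String: Any] = [
    kCVImageBufferChromaLocationTopFieldKey as String: kCVImageBufferChromaLocation_Center,
    kCVImageBufferAlphaChannelIsOpaque as String: true,
]
CVBufferSetAttachments(pixelBuffer, pbAttachments as CFDictionary, .shouldPropagate)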
For extra points, you can also set the colorspace ref with the kCVImageBufferICCProfileKey and the CGColorSpaceCopyICCData() API.
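Something along these lines (a sketch; assumes sRGB is the space you want to advertise):

// Sketch: attach an ICC profile so downstream consumers know the colorspace.
if let srgb = CGColorSpace(name: CGColorSpace.sRGB),
   let iccData = srgb.copyICCData() {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey,
                          iccData, .shouldPropagate)
}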
I am using Swift to do some video processing. After upgrading to El Capitan (and Swift 2) my code broke. I traced the error down to the CIColorClamp filter. This filter is supposed to clamp the pixel values, but in fact it messes up the image extent.
let _c:CGFloat = 0.05
let minComp = CIVector(x:_c, y:_c, z:_c, w: 1)
let maxComp = CIVector(x:1, y:1, z:1, w: 1)
let clamp: CIFilter = CIFilter(name: "CIColorClamp")!
print("clamp-in \(image.extent)")
clamp.setDefaults()
clamp.setValue(image, forKey: kCIInputImageKey)
clamp.setValue(minComp, forKey: "inputMinComponents")
clamp.setValue(maxComp, forKey: "inputMaxComponents")
print("clamp-out \(clamp.outputImage!.extent)")
The code above produces the output:
> clamp-in (6.0, 6.0, 1268.0, 708.0)
CoreAnimation: Warning! CAImageQueueSetOwner() is deprecated and does nothing. Please stop calling this method.
> clamp-out (-8.98846567431158e+307, -8.98846567431158e+307, 1.79769313486232e+308, 1.79769313486232e+308)
The fact that this call produces an internal warning does not instill confidence either!
Can anyone confirm this behavior? What am I doing wrong?
I also ran into this problem. The extent was always set to
-8.98846567431158e+307, -8.98846567431158e+307, 1.79769313486232e+308, 1.79769313486232e+308
but then I looked at filter.debugDescription and noticed that the extent of the source image is still reported correctly there.
Here's my workaround. Because I use different filters, I check whether the filter's name is "CIColorClamp", and if so I set the extent used for the CGImageRef to the values from the original image:
var extent = filteredImage.extent
if filter.name == "CIColorClamp" {
    extent = sourceImage.extent
}
let cgImage: CGImageRef = context.createCGImage(filteredImage, fromRect: extent)
UIImageJPEGRepresentation(UIImage(CGImage: cgImage), 1.0)?.writeToFile(...)
Before that fix I always had a crash, because the UIImageJPEGRepresentation could not be created due to the wrong extent values. So it looks like the extent is not carried over to the filtered image.
I had exactly the same problem. I fixed it simply by cropping the returned image to the original image's rect (Objective-C code):
if ([filter.name isEqualToString:@"CIColorClamp"]) {
    image = [image imageByCroppingToRect:sourceImage.extent];
}
How do you invoke HIDictionaryWindowShow in Swift? I try this:
import Carbon
if let text = _dictionaryText, let range = _dictionaryRange {
    let font = CTFontCreateWithName("Baskerville", 16, nil)
    let point = CGPoint(x: 0.0, y: 0.0)
    var trns = CGAffineTransform()
    HIDictionaryWindowShow(nil, text, range, font, point, false, &trns)
}
But I get this error:
Cannot invoke 'HIDictionaryWindowShow' with an argument list of type
'(nil, String, CFRange, CTFont!, CGPoint, Bool, CGAffineTransform)'
I don't see the wrong argument here. The first and last arguments should be allowed to be nil; the docs say NULL is OK as the first argument, which would be nil in Swift, or is it? Since there is no NULL in Swift, what do I need to pass instead?
Sorry, I don't have the Swift code, but you can probably work from the following untested Objective-C code. Note that showDefinitionForAttributedString: was added to replace HIDictionaryWindowShow, which is, as you know, a Carbon function and won't be supported forever.
[self.view showDefinitionForAttributedString:[[NSAttributedString alloc] initWithString:text] atPoint:NSMakePoint(0.0, 0.0)];
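For what it's worth, a rough Swift translation of that line (untested, Swift 2-era syntax, assuming self.view and text as in your question):

self.view.showDefinitionForAttributedString(NSAttributedString(string: text),
                                            atPoint: NSPoint(x: 0.0, y: 0.0))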
EDIT:
Looking further, the second argument is not correct in your example. The Carbon library does not understand String; it wants a CFTypeRef.
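So one thing to try (an untested guess on my part) is bridging the string explicitly so it satisfies the CFTypeRef parameter:

// Hypothetical fix: pass an NSString instead of a Swift String.
HIDictionaryWindowShow(nil, text as NSString, range, font, point, false, &trns)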
I've been struggling all week with this one. My first few attempts landed me with RGBA images from a monochrome image. I've gotten down to 8-bit grayscale, but I need more compression. Indexed color is the way to go, but I have found no sample code or documentation (and I've looked) on how to do it. I've already asked on the Apple Developer Forums and came up blank so far.
I'm using Swift 1.2 in Xcode 6.3.1 targeting OS X 10.10. Here is a relevant section of code.
func createBitmapContext(size: CGRect) -> CGContext! {
    let context = CGBitmapContextCreate(nil,
                                        Int(size.width),
                                        Int(size.height),
                                        8,
                                        0,
                                        CGColorSpaceCreateDeviceGray(),
                                        CGBitmapInfo(CGImageAlphaInfo.None.rawValue))
    CGContextSetInterpolationQuality(context, kCGInterpolationNone)
    CGContextSetShouldAntialias(context, false)
    return context
}
func createCGImage(ci: CIImage) -> CGImage? {
    let contextOptions = [kCIContextOutputColorSpace: NSNull(),
                          kCIContextWorkingColorSpace: NSNull(),
                          kCIContextUseSoftwareRenderer: true]
    let cgContext = createBitmapContext(ci.extent())
    let context = CIContext(CGContext: cgContext, options: contextOptions as [NSObject: AnyObject])
    CGContextDrawImage(cgContext, ci.extent(), context.createCGImage(ci, fromRect: ci.extent()))
    return CGBitmapContextCreateImage(cgContext)
}
func saveImage(path: String, image: NSData) {
    image.writeToFile(path, atomically: true)
}
func getImageFileData(image: CGImage) -> NSData? {
    let pngDataRef = CFDataCreateMutable(nil, 0)
    let pngDest = CGImageDestinationCreateWithData(pngDataRef, kUTTypePNG, 1, nil)
    CGImageDestinationAddImage(pngDest, image, nil)
    CGImageDestinationFinalize(pngDest)
    return pngDataRef
}
The code takes a CIImage that is already B&W. It gets rendered into a supported context to create a grayscale CGImage. I have not found a way to create an indexed context. The image destination is an NSData object because I want to do some work with it before I spit it out to a file. The NSData object contains the file data as it will appear on disk.
What I want is an indexed image that is one bit per pixel. I know that PNG supports that format and that OS X will read it. I just haven't figured out how to create it.
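The only direction I can think of is to skip CGBitmapContext entirely (it does not support indexed color spaces) and build the CGImage by hand. A sketch of that idea, untested; bitmapData, width, height, and bytesPerRow are placeholders for already-packed 1-bit-per-pixel rows:

// Untested sketch: wrap packed 1-bpp pixel data in an indexed CGImage.
// colorTable holds RGB triples for palette indices 0 (black) and 1 (white).
let colorTable: [UInt8] = [0, 0, 0, 255, 255, 255]
let indexedSpace = CGColorSpaceCreateIndexed(CGColorSpaceCreateDeviceRGB(), 1, colorTable)
let provider = CGDataProviderCreateWithCFData(bitmapData)
let indexedImage = CGImageCreate(width, height,
                                 1,           // bits per component
                                 1,           // bits per pixel
                                 bytesPerRow,
                                 indexedSpace,
                                 CGBitmapInfo(CGImageAlphaInfo.None.rawValue),
                                 provider, nil, false, kCGRenderingIntentDefault)

Whether ImageIO then preserves the indexed format when writing the PNG is something I haven't been able to verify.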
Help much appreciated. Thanks.
There is an issue with GLKView that has me badly stuck. First, I create an EAGLContext and make it current:
EAGLContext* pOpenGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
if(!pOpenGLContext)
return nil;
if(![EAGLContext setCurrentContext:pOpenGLContext])
return nil;
This runs OK (I need version 3, so it suits me)! Then I create a GLKView attached to the previously created context:
GLKView* pOpenGLView = [[GLKView alloc] initWithFrame:Frame context:pOpenGLContext];
That works too. But this code doesn't change anything at all :(
[pOpenGLView setDrawableColorFormat:GLKViewDrawableColorFormatRGBA8888];
[pOpenGLView setDrawableDepthFormat:GLKViewDrawableDepthFormat24];
[pOpenGLView setDrawableStencilFormat:GLKViewDrawableStencilFormatNone];
[pOpenGLView setDrawableMultisample:GLKViewDrawableMultisampleNone];
Then I do some final stuff:
pOpenGLView.delegate = self;
[pMainWindow addSubview:pOpenGLView];
...
However, after setting GLKViewDrawableStencilFormatNone, when I ask OpenGL for the depth and stencil formats... I get:
glGetIntegerv(GL_DEPTH_BITS, &OpenGLDepthBits); // = 32 (I need 24)
glGetIntegerv(GL_STENCIL_BITS, &OpenGLStencilBits); // = 8 (I need 0)
I need to turn the stencil buffer off, and I need a 24-bit depth buffer! I have also tried doing it like this:
pOpenGLView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
pOpenGLView.drawableDepthFormat = GLKViewDrawableDepthFormat24;
pOpenGLView.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
How can I achieve this? What is wrong here? Thank you.
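One thing I suspect (but have not verified) is that GLKView configures its drawable lazily, so values queried before the first draw may describe a different framebuffer entirely. Querying from inside the draw callback, after the view has bound its own framebuffer, should reflect the requested formats. A sketch in Swift:

// Sketch: query the actual formats once the view's framebuffer is bound.
func glkView(view: GLKView, drawInRect rect: CGRect) {
    var depthBits: GLint = 0
    var stencilBits: GLint = 0
    glGetIntegerv(GLenum(GL_DEPTH_BITS), &depthBits)
    glGetIntegerv(GLenum(GL_STENCIL_BITS), &stencilBits)
    print("depth: \(depthBits), stencil: \(stencilBits)")
}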