Strange behavior of NSBitmapImageRep - macOS

I'm trying to build an image pixel by pixel. So first I wrote a small class that loops over every pixel and draws a different color at each one. It works nicely, but only when alpha is set to 255. Lowering the alpha makes the colors darker and distorts the picture, although the size and placement stay correct.
var rep: NSBitmapImageRep = NSBitmapImageRep(
    bitmapDataPlanes: nil,
    pixelsWide: width,
    pixelsHigh: height,
    bitsPerSample: 8,
    samplesPerPixel: 4,
    hasAlpha: true,
    isPlanar: false,
    colorSpaceName: NSDeviceRGBColorSpace,
    bytesPerRow: 0,
    bitsPerPixel: 0)!

for posX in 0-offset...width+offset*2-1 {
    for posY in 0-offset...height+offset*2-1 {
        var R = Int(Float(posX)/Float(width)*255)
        var G = Int(Float(posY)/Float(height)*255)
        var B = Int(rand() % 256)
        var pixel = [R, G, B, alpha]
        rep.setPixel(&pixel, atX: posX, y: posY)
    }
}

rep.drawInRect(bounds)
(Screenshots: the output with alpha set to 255, 196, 127, and 64.)
Where am I going wrong?

The most likely problem is that you're not premultiplying the alpha with the color components.
From the NSBitmapImageRep class reference:
Alpha Premultiplication and Bitmap Formats
When creating a bitmap using a premultiplied format, if a coverage
(alpha) plane exists, the bitmap’s color components are premultiplied
with it. In this case, if you modify the contents of the bitmap, you
are therefore responsible for premultiplying the data. Note that
premultiplying generally has negligible effect on output quality. For
floating-point image data, premultiplying color components is a
lossless operation, but for fixed-point image data, premultiplication
can introduce small rounding errors. In either case, more rounding
errors may appear when compositing many premultiplied images; however,
such errors are generally not readily visible.
For this reason, you should not use an NSBitmapImageRep object if you
want to manipulate image data. To work with data that is not
premultiplied, use the Core Graphics framework instead. (Specifically,
create images using the CGImageCreate function and kCGImageAlphaLast
parameter.) Alternatively, include the
NSAlphaNonpremultipliedBitmapFormat flag when creating the bitmap.
Note
Use the bitmapFormat parameter to the
initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel:
method to specify the format for creating a bitmap. When creating or
retrieving a bitmap with other methods, the bitmap format depends on
the original source of the image data. Check the bitmapFormat property
before working with image data.
You have used the -init... method without the bitmapFormat parameter. In that case, you need to query the bitmapFormat of the resulting object and make sure you build your pixel values to match that format. Note that the format dictates where the alpha appears in the component order, whether the color components are premultiplied by the alpha, and whether the components are floating point or integer.
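For instance, if the format turns out to be the default premultiplied, integer, alpha-last layout, the pixel values from the question would need to be built roughly like this (a sketch using the question's variables; the rounding choice shown is one common convention, not necessarily the exact one AppKit uses):

func premultiply(_ c: Int, by a: Int) -> Int {
    // Premultiply an 8-bit color component by alpha;
    // (c * a + 127) / 255 rounds to the nearest integer.
    return (c * a + 127) / 255
}

var pixel = [premultiply(R, by: alpha),
             premultiply(G, by: alpha),
             premultiply(B, by: alpha),
             alpha]
rep.setPixel(&pixel, atX: posX, y: posY)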
You can switch to using the -init... method that does have the bitmapFormat parameter and specify NSAlphaNonpremultipliedBitmapFormat mixed in with your choice of other flags (first or last, integer or floating point, endianness). Note, though, that not all possible formats are supported for drawing.
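In current Swift spelling, that might look like the following (a sketch; NSAlphaNonpremultipliedBitmapFormat is now NSBitmapImageRep.Format.alphaNonpremultiplied, and not every OS version supports drawing every format):

// Ask for non-premultiplied, integer samples up front so that
// pixel values can be written without premultiplying them.
let rep = NSBitmapImageRep(
    bitmapDataPlanes: nil,
    pixelsWide: width,
    pixelsHigh: height,
    bitsPerSample: 8,
    samplesPerPixel: 4,
    hasAlpha: true,
    isPlanar: false,
    colorSpaceName: .deviceRGB,
    bitmapFormat: .alphaNonpremultiplied,
    bytesPerRow: 0,
    bitsPerPixel: 0)!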
By the way, I strongly recommend reading the sections about NSImage and NSBitmapImageRep in the 10.6 AppKit release notes. Search for "NSImage, CGImage, and CoreGraphics impedance matching" and start reading there through the section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes", which is most relevant here. That last section, in particular, has important information about working directly with pixels.

Related

DirectWrite + Direct2D custom text rendering is hairy

I'm evaluating Direct2D for research purposes, and while I was at it, I decided to render my usual help text with a DirectWrite custom text renderer, which converts the text to a path geometry in order to add outline (as demonstrated in the DWriteHelloWorld sample on MSDN).
However, some letters have weird "hairs" or "horns" on them (picture: stroke width of 3 and 5).
I also tried other fonts (e.g. Consolas); the effect is the same.
Source code (VS 2015):
https://www.dropbox.com/s/v3204h0ww2cp0yk/FOR_STACKOVERFLOW_12.zip?dl=0
The solution turned out to be as easy as I'd hoped. The "hairs" are actually caused by the line joins that D2D generates, so the fix is to create an ID2D1StrokeStyle object like the following:
ID2D1StrokeStyle* strokestyle = nullptr;
D2D1_STROKE_STYLE_PROPERTIES strokeprops = D2D1::StrokeStyleProperties();
strokeprops.lineJoin = D2D1_LINE_JOIN_ROUND;
d2dfactory->CreateStrokeStyle(strokeprops, NULL, 0, &strokestyle);
// draw
rendertarget->DrawGeometry(transformedgeometry, blackbrush, 3.0f, strokestyle);
With this solution, the text renders as expected (perhaps a little rounder at the joins).
I would suspect the reason is that the default flattening tolerance in D2D does not work well for rendering glyph outlines at small sizes. Normally you'd use bitmap rendering for small sizes and outlines for larger ones, according to GetRecommendedRenderingMode(). Do you get the same artifacts if you increase the font size, say, 10 times?

Delphi use of CopyRect from bmp to image

I have this code:
source.Picture.LoadFromFile(fName);
buffer.Assign(source.Picture.Bitmap);
buffer.Canvas.CopyRect(rect(0,0,buffer.Width,buffer.Height), target.Canvas, rect(0,0,buffer.Width,buffer.Height));
And it doesn't work.
There are better ways to load an image, but I want to experiment with this one.
The main reason is to load smaller images.
Copying a canvas rect should be correct, but it doesn't show a single pixel.
All objects are initialized and scaled, except for target, which I want to hold more than one image.
I suppose there's no need to spell out the objects' types; the procedures being called show what is what.
What's wrong? I've tried many approaches, but simply nothing works.
Please help.
Probably source is a TImage, buffer a TBitmap, and target another TImage (you should mention this in your question so we don't have to guess).
In that case, the second line works only when you load from a .BMP, because only BMP files populate the Bitmap property. If you have a .png or .jpeg instead, the second line erases the actual picture and replaces it with an empty bitmap... not very intuitive behaviour, but at least it's documented.
To work with an arbitrary graphic, you should use the TCanvas.Draw method, which in turn calls TGraphic.Draw. As its description says, it draws the graphic you loaded into the canvas at the given rectangle. Something like this:
source.Picture.LoadFromFile(fName);
target.Canvas.Draw(0, 0, source.Picture.Graphic);
UPD.
If you want to scale an arbitrary picture, it can be done this way:
source.picture.loadFromFile(fName);
buffer.Width := source.picture.Width;
buffer.Height := source.picture.Height;
buffer.PixelFormat := pf24bit;
buffer.Canvas.Draw(0, 0, source.picture.Graphic);
//so we at last have bitmap containing our image in original size
target.Canvas.CopyRect(Rect(0, 0, NewWidth, NewHeight), buffer.canvas, Rect(0, 0, buffer.Width, buffer.Height));
Here NewWidth and NewHeight are the image size we want.
By the way, you don't need source: TImage if it's just a temporary object for loading from a file. A TPicture would be enough:
var pic: TPicture;

pic := TPicture.Create;
try
  pic.LoadFromFile(fName);
  ...
  buffer.Canvas.Draw(0, 0, pic.Graphic);
finally
  pic.Free;
end;

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for writing this application to be cross-platform. The intention is to write in platform-agnostic C++ code with SDL and a rendering engine which can be swapped at run-time. Behind this interface is where all Apple-specific code will live. However, I strongly suspect that part of what's going wrong is something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);
// Create a metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;
CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;
_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
                                      layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;
[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal, though in OpenGL the framework creates a depth buffer for me quite automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                       width:drawableSize.width
                                                      height:drawableSize.height
                                                   mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = @"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = _depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is a difference in the coordinate systems used for Normalized Device Coordinates in Metal versus OpenGL. In Metal, NDC are in the space [-1,+1]x[-1,+1]x[0,1]. In OpenGL, NDC are [-1,+1]x[-1,+1]x[-1,+1]. If I simply take the projection matrix produced by glm::perspective and shove it through Metal, the results will not be as expected. To compensate for the NDC difference when rendering with Metal, that projection matrix must be left-multiplied by a matrix that remaps z from [-1,+1] to [0,1]: the identity, except that the third row is (0, 0, 0.5, 0.5), i.e. z' = 0.5·z + 0.5·w. (A pure diagonal scale of 0.5 is not enough, since that maps z to [-0.5,+0.5].)
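With simd in Swift, the fixup might look like this (a sketch; glProjection stands in for whatever column-major matrix glm::perspective produced on your side):

import simd

// Remap GL clip-space z in [-w, +w] to Metal clip-space z in [0, w]:
// z' = 0.5 * z + 0.5 * w. The matrix is given column by column.
let glToMetal = float4x4(columns: (
    SIMD4<Float>(1, 0, 0,   0),
    SIMD4<Float>(0, 1, 0,   0),
    SIMD4<Float>(0, 0, 0.5, 0),
    SIMD4<Float>(0, 0, 0.5, 1)
))

let metalProjection = glToMetal * glProjection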
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one, and the solution with a better one.

How to create 8-, 4-, and 1-bit representations of NSImage

I've created a 32-bit NSImage with the following code.
NSBitmapImageRep *sourceRep = [[NSBitmapImageRep alloc] initWithData:imageData];

// create a new bitmap representation scaled down
NSBitmapImageRep *newRep =
    [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:imageSize
                      pixelsHigh:imageSize
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

// save the graphics context, create a bitmap context and set it as current
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:newRep];
[NSGraphicsContext setCurrentContext:context];

// draw the bitmap image representation in it and restore the context
[sourceRep drawInRect:NSMakeRect(0.0f, 0.0f, imageSize, imageSize)];
[NSGraphicsContext restoreGraphicsState];

// set the size of the new bitmap representation
[newRep setSize:NSMakeSize(imageSize, imageSize)];

NSDictionary *imageProps2 = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithFloat:1.0], kCGImageDestinationLossyCompressionQuality, nil];
imageData = [newRep representationUsingType:NSPNGFileType properties:imageProps2];
NSImage *bitImage = [[NSImage alloc] initWithData:imageData];
Now I need to create 8-bit (256 colors), 4-bit (16 colors), and 1-bit (black & white) NSBitmapImageRep representations. What should I do?
Unfortunately, it seems that Cocoa doesn't support operating on paletted images.
I've tried this before, and my conclusion is that it's not possible for PNG. NSGIFFileType is a hardcoded exception, and graphics contexts are even more limited than bitmap representations (e.g. RGBA is supported only with premultiplied alpha).
To work around it, I convert the NSBitmapImageRep to a raw RGBA bitmap, use libimagequant to remap it to a palette, and then use libpng or lodepng to write the PNG file.
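The first step of that pipeline might look roughly like this in Swift (a sketch assuming a meshed, 8-bit-per-sample RGBA rep; the libimagequant and lodepng calls are omitted, since they go through their C APIs):

import AppKit

// Pull raw RGBA8 bytes out of a non-planar NSBitmapImageRep.
// Rows may be padded, so copy row by row using bytesPerRow.
func rgbaBytes(from rep: NSBitmapImageRep) -> [UInt8]? {
    guard !rep.isPlanar,
          rep.samplesPerPixel == 4,
          rep.bitsPerSample == 8,
          let base = rep.bitmapData else { return nil }
    let width = rep.pixelsWide, height = rep.pixelsHigh
    var out = [UInt8](repeating: 0, count: width * height * 4)
    for row in 0..<height {
        let src = base + row * rep.bytesPerRow
        out.replaceSubrange(row * width * 4 ..< (row + 1) * width * 4,
                            with: UnsafeBufferPointer(start: src, count: width * 4))
    }
    return out   // hand this to libimagequant, then write with lodepng
}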
Sadly, I believe you can't do this with Core Graphics. Graphics contexts don't support anything with that few bits.
The documentation has a table of supported pixel formats.
Apparently Carbon had (has?) support for it, as seen referenced here where they also lament Cocoa's lack of support for it:
Turns out that basically Cocoa/Quartz does not support downsampling images to 8-bit colour. It supports drawing them, and it supports upsampling, but not going the other way. I guess this is a deliberate design on Apple's part to move away from indexed images as a standard graphics data type - after all, 32-bit colour is much simpler, right? Well, it is, but there are still useful uses for 8-bit. So..... what to do? One possibility is using Carbon, since QuickDraw's GWorld does support downsampling, etc.
From this thread
Well, this is probably going to be too long for a comment...
It sure seems like this just isn't possible... all of Cocoa's drawing machinery really wants to work in 24/32-bit colorspaces. I was able to make an 8-bit NSBitmapImageRep, but it was grayscale.
So I guess we have to figure out the "why" here. If you want to use NSImages that are backed by certain types of representations, I don't think that is possible.
If you want to naively downsample (snap each pixel to the nearest representable colour), that is very possible; this would give the appearance of 8-bit images.
If you want to write these files out with good dithering / indexed colors, then I think the best option is to write to an image format that supports what you want (like a 256-color GIF, as sketched below).
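If GIF output is acceptable, AppKit can do the palette reduction for you when encoding. A minimal sketch (dithering behavior and palette quality are up to the encoder; the output path is made up for the example):

import AppKit

// Let the GIF encoder quantize down to at most 256 colors.
// Assumes `rep` is the 32-bit NSBitmapImageRep built above.
let gifData = rep.representation(
    using: .gif,
    properties: [.ditherTransparency: true])
try? gifData?.write(to: URL(fileURLWithPath: "/tmp/out.gif"))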
If you wanted to do this downsampling yourself for some reason, there are two issues at hand:
1. Palette or CLUT selection.
2. Dithering.
If you didn't want to use indexed colors and just wanted to break the 8 bits into 3-3-2 RGB, that is a little easier, but the result is much worse than indexed color.
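The 3-3-2 pack is just bit masking; for illustration, a sketch with no dithering at all:

// Pack 8-bit RGB into one byte: RRRGGGBB (3-3-2).
func pack332(r: UInt8, g: UInt8, b: UInt8) -> UInt8 {
    return (r & 0b1110_0000) | ((g & 0b1110_0000) >> 3) | (b >> 6)
}

// And back (approximate; the low bits are left zero).
func unpack332(_ p: UInt8) -> (r: UInt8, g: UInt8, b: UInt8) {
    return (p & 0b1110_0000,
            (p << 3) & 0b1110_0000,
            (p << 6) & 0b1100_0000)
}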
The 4-bit case is a bit trickier, because I don't really know of a good historical use of 4-bit color.
I used indexed color to display escape times from a Mandelbrot set in a little project I did once...
I just verified that it doesn't work anymore (it was old fixed-function-pipeline OpenGL), but basically, for the view, you would use glPixelMapuiv to map the index colors to byte values, then display the byte buffer with glDrawPixels.
So... I guess if you comment and say why you are trying to do what you are doing, we may be able to help.

Getting pixel colour not accurate

I'm currently using colour picking in my application.
This works on the PC, but I'm having trouble getting it to work on a variety of devices.
This is probably due to the context being set up differently depending on the device; for example, as far as I'm aware the PC uses an 8-8-8 colour format, whereas a device might default to 5-6-5.
I was wondering if there's a way in OpenGL to query the current pixel/colour format, so that I can retrieve the colour data properly?
This is the function I'm using which works fine on the PC:
inline void ProcessColourPick(GLubyte *out, KDfloat32 x, KDfloat32 y)
{
    GLint viewport[4];
    GLubyte pixel[3];
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Read colour of pixel at a specific point in the framebuffer
    glReadPixels(x, viewport[3] - y, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, (void *)pixel);

    out[0] = pixel[0];
    out[1] = pixel[1];
    out[2] = pixel[2];
}
Any ideas?
Yes, but it's a bit complicated.
Querying the bit depth of the current framebuffer is fairly easy in ES 2.0 (note: this is also legal in desktop GL, but the functionality was removed in GL 3.1 core; it's still accessible from a compatibility profile). You have to get the bit depth of each color component:
GLint bitdepth;
glGetIntegerv(GL_x_BITS, &bitdepth);
Where x is one of RED, GREEN, BLUE, or ALPHA (so GL_RED_BITS, GL_GREEN_BITS, and so on).
Once you have the bitdepth, you can test to see if it's 565 and use appropriate pixel transfer parameters and color values.
The format parameter for glReadPixels must be either GL_RGBA (always supported) or GL_IMPLEMENTATION_COLOR_READ_FORMAT_OES (which differs between devices). That's an OpenGL ES restriction.
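The simplest portable fix is therefore to always read GL_RGBA/GL_UNSIGNED_BYTE, which ES 2.0 guarantees. A sketch in Swift against iOS's OpenGLES module (the original code is C, but the calls map one-to-one):

import OpenGLES   // any ES 2.0 binding exposes the same calls

// Read one pixel as RGBA8, a combination glReadPixels must
// support on every ES 2.0 implementation.
func readPixel(x: GLint, y: GLint) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8) {
    var viewport = [GLint](repeating: 0, count: 4)
    glGetIntegerv(GLenum(GL_VIEWPORT), &viewport)
    var pixel = [GLubyte](repeating: 0, count: 4)
    // Flip y, as in the question's code: GL's origin is bottom-left.
    glReadPixels(x, viewport[3] - y, 1, 1,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixel)
    return (pixel[0], pixel[1], pixel[2], pixel[3])
}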
