I load an image (NSImage) from disk and draw it into an NSImageView on macOS. No problem, the image looks fine and clear.
After drawing it to the NSImageView, I call the function below with the same image, then draw the returned value to the same NSImageView. The resulting image is extremely blurry, even though all I do is lockFocus and unlockFocus without doing anything else.
-(NSImage*)addTarget:(NSImage*)image
{
[image lockFocus]; // this image is sharp and clear
[image unlockFocus];
return image; // this image is extremely blurry
}
Does anybody know why this happens, or how to fix it?
thanks
rough
While doing some research I realized that this is related to Retina displays. Locking focus apparently always draws at the best representation: if a Retina display is attached anywhere to the computer, it will render based on that scale factor. So, in order to maintain the proper dimensions and DPI, I created a method that iterates through all screens and returns the largest backingScaleFactor relative to the current screen's.
func maximumScaleFactor(screen: NSScreen) -> CGFloat {
    // Find the largest backing scale factor of any attached screen...
    var max: CGFloat = 0
    for s in NSScreen.screens()! {
        if s.backingScaleFactor > max { max = s.backingScaleFactor }
    }
    // ...and return it relative to the screen we are currently drawing on.
    return max / screen.backingScaleFactor
}
Then, for the NSImage, I did the following:
func addTarget(image: NSImage) -> NSImage {
    let scale = self.maximumScaleFactor(currentScreen)
    let originalSize = image.size
    // shrink the image's point size by that factor so lockFocus
    // does not inflate the pixel count (e.g. halve it for a 2x screen)
    var size = image.size
    size.width /= scale
    size.height /= scale
    image.size = size
    image.lockFocus()
    // do whatever drawing you need here
    image.unlockFocus()
    // set the image back to its original size
    image.size = originalSize
    return image
}
So far this has worked well for me and the image quality subjectively appears the same to me.
Fixing your problem is difficult because you don't say what you want to achieve. Why did you write that method, why do you call it, and what do you expect it to do? You said: all I do is lockFocus and unlockFocus without doing anything else. Indeed, it looks as if calling lockFocus and unlockFocus (with nothing in between) does nothing. But that is wrong: [image lockFocus] alone changes the image dramatically.
An NSImage object contains zero, one (in most cases) or more (icons, some TIFFs) objects of class NSImageRep. A call to lockFocus on the image selects the NSImageRep best suited for depicting it on the screen. It then computes how many pixels are needed to render the image at its given size, but at a screen resolution of only 72 dpi (or 144 dpi for Retina screens). Then it removes that NSImageRep from the list of representations and creates a new NSImageRep instead. In former OS versions (before 10.6) an NSCachedImageRep was created; now an NSCGImageSnapshotRep is created, which under the hood is a CGImage. Add a
NSLog(@"image is:\n%@", image);
before lockFocus and another one after the call to unlockFocus, and you will see what happens: for a high-resolution image the number of pixels goes down, which is nothing else than a reduction in quality. And that is what makes your image blurry.
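If all you need is to composite extra drawing on top of the image, one way to avoid this resolution loss is to draw into a new NSImage backed by a drawing handler, so no fixed-resolution snapshot representation is ever created. A minimal Swift sketch, assuming you only want an overlay (the function name and the red marker are purely illustrative):
import AppKit
// Returns a new image that redraws the original plus an overlay at whatever
// resolution the destination requires, leaving the original's reps untouched.
func addTargetPreservingResolution(to image: NSImage) -> NSImage {
    return NSImage(size: image.size, flipped: false) { rect in
        image.draw(in: rect)    // original image, full quality
        // ... do whatever extra drawing you need here, e.g. a marker:
        NSColor.red.setStroke()
        NSBezierPath(ovalIn: rect.insetBy(dx: rect.width * 0.4,
                                          dy: rect.height * 0.4)).stroke()
        return true
    }
}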
My question is related to this previous question. What I want to achieve is to stack images (they have transparency), write a string on top, and save the photomontage / photocollage with full resolution.
@Override
protected void beforeMain(Form f) {
Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
Image watermark = fetchResourceFile().getImage("Watermark.png");
f.setLayout(new LayeredLayout());
final Label drawing = new Label();
f.addComponent(drawing);
// Mutable image we will draw into (white background)
Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
drawing.getUnselectedStyle().setBgImage(mutableImage);
drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);
// Paint all the stuff
paints(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
// Save the collage
Image screenshot = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
f.revalidate();
f.setVisible(true);
drawing.paintComponent(screenshot.getGraphics(), true);
String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "screenshot.png";
try(OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
ImageIO.getImageIO().save(screenshot, os, ImageIO.FORMAT_PNG, 1);
} catch(IOException err) {
err.printStackTrace();
}
}
public void paints(Graphics g, Image background, Image watermark, int width, int height) {
g.drawImage(background, 0, 0);
g.drawImage(watermark, 0, 0);
g.setColor(0xFF0000);
// Upper left corner
g.fillRect(0, 0, 10, 10);
// Lower right corner
g.setColor(0x00FF00);
g.fillRect(width - 10, height - 10, 10, 10);
g.setColor(0xFF0000);
Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
g.setFont(f);
// Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
g.drawString("HelloWorld",
(int) (848 ),
(int) (610)
);
}
This is the saved screenshot I get if I use the iPhone 6 skin (the payload image is smaller than the original one and is centered). If I use the Xoom skin this is what I get (the payload image is still smaller than the original image, but it has moved to the left).
So to sum it all up: why is the saved screenshot with the Xoom skin different from the one I get with the iPhone skin? Is there any way to directly save the graphics I paint on in the paints method, so that the saved image would have the original dimensions?
Thanks a lot to anyone that could help me :-)!
Cheers,
You can save an image in Codename one using the ImageIO class. Notice that you can draw a container hierarchy into a mutable image using the paintComponent(Graphics) method.
You can do it with both approaches: drawing onto a mutable image, or via layouts. Personally I always prefer layouts as I like the abstraction, but I wouldn't say the mutable image approach is right/wrong.
Notice that if you change/repaint a lot then mutable images are slower (this will not be noticeable for regular code or on the simulator) as they are forced to use the software renderer and can't use the GPU fully.
In the previous question it seems you placed the image with a "FIT" style which naturally drew it smaller than the containing container and then drew the image on top of it manually... This is problematic.
One solution is to draw everything manually but then you will need to do the "fit" aspect of drawing yourself. If you use layouts you should position everything based on the layouts including your drawing/text.
I have searched everywhere and I cannot find any solution after 2 days of trying.
The Problem:
I'm doing an image viewer with a "Fit Image to View" feature. I load a picture of, say, 3000+ pixels into my GraphicsView (which is a lot smaller, of course), and scrollbars appear; that's good. When I click my btnFitView, this is executed:
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
This is downscaling, right? After fitInView() all lines are pixelated. It looks like a saw went over the lines in the image.
For example: an image of a car has jagged lines, and in an image of a textbook the letters come out in very bad quality.
My code sample:
// select file, load image in view
QString strFilePath = QFileDialog::getOpenFileName(
this,
tr("Open File"),
"/home",
tr("Images (*.png *.jpg)"));
imageObject = new QImage();
imageObject->load(strFilePath);
image = QPixmap::fromImage(*imageObject);
scene = new QGraphicsScene(this);
scene->addPixmap(image);
scene->setSceneRect(image.rect());
ui->graphicsView->setScene(scene);
// on_btnFitView_Clicked() :
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
Just before fitInView(), sizes are:
qDebug()<<"sceneRect = "<< scene->sceneRect();
qDebug()<<"viewRect = " << ui->graphicsView->rect();
sceneRect = QRectF(0,0 1000x750)
viewRect = QRect(0,0 733x415)
If necessary I can upload screenshots of the original loaded image and the fitted-in-view one.
Am I doing this right? It seems all examples on the Web use fitInView for auto-fitting. Should I perhaps use some other operations on the pixmap?
SOLUTION
// LOAD IMAGE
bool ImgViewer::loadImage(const QString &strImagePath)
{
m_image = new QImage(strImagePath);
if(m_image->isNull()){
return false;
}
clearView();
m_pixmap = QPixmap::fromImage(*m_image);
m_pixmapItem = m_scene->addPixmap(m_pixmap);
m_scene->setSceneRect(m_pixmap.rect());
this->centerOn(m_pixmapItem);
// preserve fitView if active
if(m_IsFitInView)
fitView();
return true;
}
// TOGGLED FUNCTIONS
void ImgViewer::fitView()
{
if(m_image->isNull())
return;
this->resetTransform();
QPixmap px = m_pixmap; // use local pixmap (not original) otherwise image is blurred after scaling the same image multiple times
px = px.scaled(QSize(this->width(),this->height()),Qt::KeepAspectRatio,Qt::SmoothTransformation);
m_pixmapItem->setPixmap(px);
m_scene->setSceneRect(px.rect());
}
void ImgViewer::originalSize()
{
if(m_image->isNull())
return;
this->resetTransform();
m_pixmap = m_pixmap.scaled(QSize(m_image->width(), m_image->height()), Qt::KeepAspectRatio, Qt::SmoothTransformation);
m_pixmapItem->setPixmap(m_pixmap);
m_scene->setSceneRect(m_pixmap.rect());
this->centerOn(m_pixmapItem); //ensure item is centered in the view.
}
On downscaling this produces good quality. Here are some stats after calling these two functions:
// "originalSize()" : IMAGE SIZE = (1152, 2048)
// "originalSize()" : PIXMAP SIZE = (1152, 2048)
// "originalSize()" : VIEW SIZE = (698, 499)
// "originalSize()" : SCENE SIZE = (1152, 2048)
// "fitView()" : IMAGE SIZE = (1152, 2048)
// "fitView()" : PIXMAP SIZE = (1152, 2048)
// "fitView()" : VIEW SIZE = (698, 499)
// "fitView()" : SCENE SIZE = (280, 499)
There is a problem now: after the call to fitView(), look at the size of the scene? It is much smaller.
And if fitView() is active and I then scale the image in wheelEvent (zoom in/out) with the view's scale function, scale(factor, factor), it produces a terrible result.
This doesn't happen with originalSize() where scene size is equal to image size.
Think of the view as a window into the scene.
Moving the view large amounts, either zooming in or out, will likely create images that don't look great. Rather than the image being scaled as you would expect, the view is just moving away from the scene and doing its best to render the image, but the image has not been scaled, just transformed in the scene.
Rather than using QGraphicsView::fitInView, keep the main image in memory and create a scaled version of the image with QPixmap::scaled each time fit-in-view is selected or the user zooms in / out. Then set this QPixmap on the QGraphicsPixmapItem with setPixmap.
You may also want to think about dropping the scroll bars and allowing the user to drag the image around the screen, which provides a better user interface in my opinion; though of course it depends on your requirements.
I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
One problem is that I need to do exactly the opposite, meaning I want the inverse of that. I want everything outside of the specified rectangle to be displayed, and everything inside to be taken out.
In my test I'm using the position of a sprite, which updates every frame, so the clipping rect will need to be redefined each frame as well.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocs objects internally, but I'm not using ARC (the project is too big and complex to update to ARC), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask fits over the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what comes out of the box.
You have to define a CCClippingNode with a stencil, then set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites to the clipping node so that they are taken into account.
@implementation ClippingTestScene
{
CCClippingNode *_clip;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2 , self.boundingBox.size.height/2);
[self addChild:_clip];
_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work though, but Cocos will spit out what you need to do in the console.
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So in your case, if you know the area (the rectangular area that you don't want to show, i.e. the one to invert) and you know the screen area, then you can subtract the rectangle area from the screen area. This would give you the inverse area. I have not done this; maybe tomorrow I can post some code.
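To illustrate that idea, here is a small sketch in Swift using plain CGRect geometry (the function name is illustrative and this is not cocos2d API): the inverse of a rectangular hole within the screen is simply the four strips that surround it, each of which could then drive its own scissor/clipping node.
import CoreGraphics
// Returns the region of `screen` that lies outside `hole`, as up to four rects.
func inverseRects(screen: CGRect, hole: CGRect) -> [CGRect] {
    let clipped = hole.intersection(screen)
    guard !clipped.isNull else { return [screen] }   // hole entirely off-screen
    return [
        // strip below the hole
        CGRect(x: screen.minX, y: screen.minY,
               width: screen.width, height: clipped.minY - screen.minY),
        // strip above the hole
        CGRect(x: screen.minX, y: clipped.maxY,
               width: screen.width, height: screen.maxY - clipped.maxY),
        // strip left of the hole
        CGRect(x: screen.minX, y: clipped.minY,
               width: clipped.minX - screen.minX, height: clipped.height),
        // strip right of the hole
        CGRect(x: clipped.maxX, y: clipped.minY,
               width: screen.maxX - clipped.maxX, height: clipped.height)
    ].filter { !$0.isEmpty }
}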
I am developing an OS X app that uses custom Core Image filters to attain a particular effect: Set one image's luminance as another image's alpha channel. There are filters to use an image as a mask for another but they require a third -background- image; I need to output an image with transparent parts, no background set.
As explained in Apple's documentation, I wrote the kernel code and tested it in QuartzComposer; it works as expected.
The kernel code is:
kernel vec4 setMask(sampler src, sampler mask)
{
vec4 color = sample(src, samplerCoord(src));
vec4 alpha = sample(mask, samplerCoord(mask));
color.a = alpha.r;
// (mask image is grayscale; any channel colour will do)
return color;
}
But when I try to use the filter from my code (either packaging it as an image unit or directly from the app source), the output image turns out to have the following 'undefined'(?) extent:
extent CGRect origin=(x=-8.988465674311579E+307, y=-8.988465674311579E+307) size=(width=1.797693134862316E+308, height=1.797693134862316E+308)
and further processing (convert to NSImage bitmap representation, write to file, etc.) fails. The filter itself loads perfectly (not nil) and the output image it produces isn't nil either, just has an invalid rect.
EDIT: Also, I copied the exported image unit (plugin), to both /Library/Graphics/Image Units and ~/Library/Graphics/Image Units, so that it appears in QuartzComposer's Patch Library, but when I connect it to the source images and Billboard renderer, nothing is drawn (transparent background).
Am I missing something?
EDIT: Looks like I assumed too much about the default behaviour of -[CIFilter apply:].
My filter subclass code's -outputImage implementation was this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
return [self apply:setMaskKernel, src, mask, nil];
}
So I tried and changed it to this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
CGRect extent = [inputImage extent];
NSDictionary* options = @{ kCIApplyOptionExtent: @[@(extent.origin.x),
                                                   @(extent.origin.y),
                                                   @(extent.size.width),
                                                   @(extent.size.height)],
                           kCIApplyOptionDefinition: @[@(extent.origin.x),
                                                       @(extent.origin.y),
                                                       @(extent.size.width),
                                                       @(extent.size.height)]
                         };
return [self apply:setMaskKernel arguments:@[src, mask] options:options];
}
...and now it works!
How are you drawing it? And what does your CIFilter code look like? You'll need to provide a kCIApplyOptionDefinition most likely when you call apply: in outputImage.
Alternatively, you can also change how you are drawing the image, using CIContext's drawImage:inRect:fromRect:.
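For example, something along these lines (a Swift sketch; filter and inputImage stand in for your own filter instance and its input CIImage) renders only a finite region of the otherwise infinite output, which is the same idea as passing an explicit fromRect when drawing:
import AppKit
import CoreImage
let context = CIContext()
let fromRect = inputImage.extent   // a finite rect, not the infinite extent
if let output = filter.outputImage,
   let cgImage = context.createCGImage(output, from: fromRect) {
    let result = NSImage(cgImage: cgImage, size: fromRect.size)
    // `result` can now be given bitmap representations, written to a file, etc.
}
// With a context that draws into a CGContext you could instead call
// context.draw(output, in: destinationRect, from: fromRect).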
I am trying to render a 3D bar chart in an SCNView using the SceneKit framework.
My rendering code is,
int height=10,y=0,x=0;
for (int i=0; i<10; i++) {
SCNBox *box1 = [SCNBox boxWithWidth:4 height:height length:2 chamferRadius:0];
boxNode1 = [SCNNode nodeWithGeometry:box1];
boxNode1.position = SCNVector3Make(x, y, 0);
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = (NSColor *)[self.colorArray objectAtIndex:i%6];
material.specular.contents = [NSColor whiteColor];
material.shininess = 1.0;
box1.materials = @[material];
//boxNode1.transform = rot;
[scene.rootNode addChildNode:boxNode1];
x+=6;
height+=10;
y += 5 ;
}
I can render it, but while resizing the view the chart bars move to the center of the view.
I need the chart to reach the margins of the view, and when the view is resized it has to change accordingly. The images below show my problem.
Original Image:
Image with less stretching of both windows:
Can anyone please help me fix this issue?
The windows in the image that you had linked to in your original question were very stretched, and that made it very hard to see what was going on. When I took that image and made the windows less stretched, it was easier to get some idea of what is going on.
I think that you are seeing a general resizing issue. Either you are using springs and struts and have configured flexible margins on the left and right, or you are using Auto Layout with a centered view of fixed width.
I assume that the red boxes that I have drawn in the image below are the bounds of your scene view in both these cases. You can easily check whether this is the case by giving the scene view a different background color and resizing it again.
My solution to your problem would be to change how your view resizes as the window resizes, to better meet your expectations.
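For instance, if the scene view is set up in code and should track the window instead of staying centered at a fixed width, constraints along these lines would do it (a Swift sketch; the helper name is illustrative and it assumes the view has already been added to a superview):
import AppKit
import SceneKit
// Pin the scene view to all four edges of its superview so it grows and
// shrinks together with the window.
func pinToSuperview(_ sceneView: SCNView) {
    guard let container = sceneView.superview else { return }
    sceneView.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
        sceneView.leadingAnchor.constraint(equalTo: container.leadingAnchor),
        sceneView.trailingAnchor.constraint(equalTo: container.trailingAnchor),
        sceneView.topAnchor.constraint(equalTo: container.topAnchor),
        sceneView.bottomAnchor.constraint(equalTo: container.bottomAnchor)
    ])
}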