I have a UIImageView that the user can rotate and resize by touching the screen.
I want to apply the same changes the user has made to the UIImageView to the UIImage inside it.
Then I will use the UIImage for masking another image.
Can you explain the correct procedure for doing this?
The main problem is that I can't apply the UIImageView's affine transformation matrix directly to the [UIImage CGImage], because they use different coordinate systems.
The steps you have to take are:
Create a new graphics context
Apply the transforms from the UIImageView to the context
Draw your original image into the context
Extract a new image from the context
Things you have to watch out for are the inverted coordinate system and the fact that the rotated image's bounding rectangle is now larger than it originally was; you have to take this into account when creating your context.
See my earlier post: Creating a UIImage from a rotated UIImageView
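For example, here is a minimal Swift sketch of those four steps, assuming `imageView` is your transformed UIImageView (UIGraphicsImageRenderer takes care of the flipped coordinate system, and the enlarged bounding rectangle comes from applying the view's transform to its bounds):
func renderTransformedImage(from imageView: UIImageView) -> UIImage? {
    guard let image = imageView.image else { return nil }
    // The rotated image's bounding box is larger than the original bounds.
    let transformedBounds = imageView.bounds.applying(imageView.transform)
    let renderer = UIGraphicsImageRenderer(size: transformedBounds.size)
    return renderer.image { rendererContext in
        let context = rendererContext.cgContext
        // Move the origin to the centre of the larger canvas...
        context.translateBy(x: transformedBounds.width / 2, y: transformedBounds.height / 2)
        // ...apply the view's transform...
        context.concatenate(imageView.transform)
        // ...and draw the original image centred on that origin.
        image.draw(in: CGRect(x: -imageView.bounds.width / 2,
                              y: -imageView.bounds.height / 2,
                              width: imageView.bounds.width,
                              height: imageView.bounds.height))
    }
}
The resulting UIImage can then be used for masking the other image.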
I have a UILabel that is placed on top of a UIImageView. The UIImageView can change, displaying a variety of images. Depending on the image displayed in the UIImageView, the UILabel's text color must be able to change dynamically. For instance, when a light-colored image is displayed, a dark UILabel text color will be used, and when a dark image is used, a light text color will be used.
In Swift, what is the simplest, most efficient method to extract the RGB value of a single pixel, or the average RGB value of a group of pixels, directly behind the position of the UILabel that is sitting above the UIImage?
Or even better, in Swift, is there a UILabel method that changes the text colour dynamically based on the background it is positioned above?
Thank you.
Honestly, I would not even grab the RGB value; you should know what image you are putting into the UIImageView, so plan the label based on that.
If you must grab RGB, then do something like this:
UIGraphicsBeginImageContext(CGSize(width: 1, height: 1))
let context = UIGraphicsGetCurrentContext()!
// We translate in the opposite direction so that when we draw to the canvas,
// it will draw our point at the first spot in readable memory.
context.translateBy(x: -x, y: -y)
// Make the CALayer draw into our "canvas".
self.layer.render(in: context)
// The context's bitmap data now holds that single pixel.
let colorPoint = context.data?.assumingMemoryBound(to: UInt32.self)
UIGraphicsEndImageContext()
Where x and y are the point you want to grab, and self is your UIImageView.
Edit: @Mr.T posted a link to how this is done as well. If you need the average, just grab the required block of pixels by changing UIGraphicsBeginImageContext(CGSize(width: 1, height: 1)) to UIGraphicsBeginImageContext(CGSize(width: width, height: height)) and compute the average from that data.
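If you go the averaging route, the last step might look roughly like this (a sketch, assuming an RGBA bitmap context rendered as above; the exact channel order depends on the context's pixel format, so treat the byte offsets as an assumption):
func averageColor(of context: CGContext, width: Int, height: Int) -> UIColor? {
    // Each pixel is 4 bytes; the order here is assumed to be R, G, B, A.
    guard let data = context.data?.assumingMemoryBound(to: UInt8.self) else { return nil }
    var totalR = 0, totalG = 0, totalB = 0
    let pixelCount = width * height
    for i in 0..<pixelCount {
        let offset = i * 4
        totalR += Int(data[offset])
        totalG += Int(data[offset + 1])
        totalB += Int(data[offset + 2])
    }
    let divisor = CGFloat(pixelCount) * 255.0
    return UIColor(red: CGFloat(totalR) / divisor,
                   green: CGFloat(totalG) / divisor,
                   blue: CGFloat(totalB) / divisor,
                   alpha: 1.0)
}
You can then compare the average brightness against a threshold to pick a light or dark text colour.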
I want to draw an image with the HardLight composite operation. I've created an NSImageView with the following draw code:
- (void)drawRect:(NSRect)dirtyRect {
    //[super drawRect:dirtyRect];
    if (self.image != nil) {
        [self.image drawInRect:self.bounds fromRect:NSZeroRect operation:NSCompositeHardLight fraction:1.0];
    }
}
In the usual case it works well, but it does not work over an NSVisualEffectView.
How can I blend HardLight over an NSVisualEffectView?
In the image linked below you can see a rounded rectangle which blends HardLight over the window background and a colour image. But over the NSVisualEffectView (the red rectangle at the bottom) it draws just grey.
https://www.dropbox.com/s/bcpe6vdha6xfc5t/Screenshot%202015-03-27%2000.32.53.png?dl=0
Roughly speaking, image compositing takes none, one, or both pixels from the source and destination, applies some composite operation, and writes the result to the destination. To get any effect that takes into account the destination pixel, that pixel's color information must be known when the compositing operation takes place, which is in your implementation of -drawRect:.
I’m assuming you’re talking about behind window blending (NSVisualEffectBlendingModeBehindWindow) here. The problem with NSVisualEffectView is that it does not draw anything. Instead, it defines a region that tells the WindowServer process to do its vibrancy stuff in that region. This happens after your app draws its views.
Therefore a compositing operation in your app cannot take into account the pixels that the window server draws later. In short, this cannot be done.
I want to use NSBitmapImageRep to construct a 64x64 pixel sprite in code, and then draw it to the screen, blown up very large. The result would be very large "pixels" on the screen. Think old school Mario Bros. or Minecraft. How can I do this?
Edit: I want to draw to this off-screen bitmap and then render it later on a CALayer.
Open a new image context with CGBitmapContextCreate and use
void CGContextSetInterpolationQuality (
CGContextRef c,
CGInterpolationQuality quality
);
to set the interpolation quality to kCGInterpolationNone.
Then draw the image into the context.
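Putting that together in Swift might look roughly like this (a sketch; `spriteImage`, `layer`, and `scale` are placeholder names, and the 64x64 sprite would come from your NSBitmapImageRep, e.g. via its cgImage property):
import Cocoa

func drawPixelated(_ spriteImage: CGImage, into layer: CALayer, scale: CGFloat) {
    let width = Int(CGFloat(spriteImage.width) * scale)
    let height = Int(CGFloat(spriteImage.height) * scale)
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return }
    // Nearest-neighbour scaling keeps the big, blocky "pixels".
    context.interpolationQuality = .none
    context.draw(spriteImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // Hand the blown-up image to the layer.
    layer.contents = context.makeImage()
}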
I have created a CGBitmapContext of 256 x 256 and I have a CGImageRef that I want to draw into this context. My goal is to crop the image (and ultimately create tiles of an image), but if I use CGContextDrawImage, Core Graphics scales the image to fit the context.
So the question is, how do I make sure that only a portion of the CGImageRef is drawn into the CGBitmapContext (no scaling)?
CGContextDrawImage takes a CGRect parameter. It sounds like you are passing the bounds of your context. Instead, try passing the bounds of the image, offset appropriately to draw the desired part of it.
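For example, a small Swift sketch (assuming `tileContext` is the 256 x 256 bitmap context and `tileOrigin` is the lower-left corner of the desired tile in the image's own, bottom-left-origin coordinate space; both names are placeholders):
func drawTile(of image: CGImage, into tileContext: CGContext, tileOrigin: CGPoint) {
    // Draw the whole image at its full size, shifted so the desired 256 x 256
    // region lands on the context's origin. Because the rect is the image's
    // full size, nothing is scaled; Core Graphics simply clips everything
    // outside the 256 x 256 context.
    let fullImageRect = CGRect(x: -tileOrigin.x,
                               y: -tileOrigin.y,
                               width: CGFloat(image.width),
                               height: CGFloat(image.height))
    tileContext.draw(image, in: fullImageRect)
}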
I am taking photos from a webcam in my Cocoa application and I would like to zoom in on the centre of the image I receive. I start by receiving a CIImage and eventually save an NSImage.
How would I go about zooming in on either of these objects?
“Zoom” means a couple of things. You'll need at least to crop the image, and you may want to scale up. Or you may want to reserve scaling for display only.
CIImage
To crop it, use a CICrop filter.
To scale it, use either a CILanczosScaleTransform filter or a CIAffineTransform filter.
To crop and scale it, use both filters. Simply pass the output of the crop as the input of the scale.
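A rough Swift sketch of chaining the two filters (the `cropRect` and `zoomFactor` values are placeholders for whatever centre region and magnification you want):
import CoreImage

func zoom(_ input: CIImage, cropRect: CGRect, zoomFactor: CGFloat) -> CIImage? {
    // Crop to the region of interest.
    guard let crop = CIFilter(name: "CICrop") else { return nil }
    crop.setValue(input, forKey: kCIInputImageKey)
    crop.setValue(CIVector(cgRect: cropRect), forKey: "inputRectangle")
    // Scale the cropped output back up.
    guard let scale = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    scale.setValue(crop.outputImage, forKey: kCIInputImageKey)
    scale.setValue(zoomFactor, forKey: kCIInputScaleKey)
    scale.setValue(1.0, forKey: kCIInputAspectRatioKey)
    return scale.outputImage
}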
NSImage
Crop and scale are the same operation here. You'll need to create a new, empty NSImage of the desired size (whether that's the size of the source crop if you won't zoom, or an increased size if you will), lock focus on it, draw the crop rectangle from the source image into the bounding rectangle of the destination image, and unlock focus.
If the destination rectangle is not the same size as the source (crop) rectangle, it will scale; if they are the same size, it will simply copy or composite pixel-to-pixel.
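In Swift, that lock-focus sequence might look roughly like this (`cropRect` and `destinationSize` are placeholder names):
import AppKit

func zoom(_ source: NSImage, cropRect: NSRect, destinationSize: NSSize) -> NSImage {
    let result = NSImage(size: destinationSize)
    result.lockFocus()
    // Drawing the crop rect into a larger destination rect scales it up;
    // equal sizes give a straight pixel-for-pixel copy.
    source.draw(in: NSRect(origin: .zero, size: destinationSize),
                from: cropRect,
                operation: .copy,
                fraction: 1.0)
    result.unlockFocus()
    return result
}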