Cropping NSImage with onscreen coordinates incorrect on different screen sizes - macos

I'm trying to replicate macOS's screenshot functionality, dragging a selection onscreen to provide coordinates for cropping an image. It works fine on my desktop Mac (2560x1600), but when testing on my laptop (2016 rMBP 15", 2880x1800), the cropped image is completely wrong. I don't understand why I'd get the right results on my desktop but not on my laptop. I think it has something to do with Quartz coordinates being different from Cocoa coordinates, since on the laptop the resulting image looks as if the coordinates were flipped on the Y-axis.
Here is the code I am using to generate the cropping CGRect:
// Segment used to draw the CAShapeLayer:
private func handleDragging(_ event: NSEvent) {
    let mouseLoc = event.locationInWindow
    if let point = self.startPoint,
       let layer = self.shapeLayer {
        let path = CGMutablePath()
        path.move(to: point)
        path.addLine(to: NSPoint(x: point.x, y: mouseLoc.y))
        path.addLine(to: mouseLoc)
        path.addLine(to: NSPoint(x: mouseLoc.x, y: point.y))
        path.closeSubpath()
        layer.path = path
        self.selectionRect = path.boundingBox
    }
}
private func startDragging(_ event: NSEvent) {
    if let window = self.window,
       let contentView = window.contentView,
       let layer = contentView.layer,
       !self.isDragging {
        self.isDragging = true
        self.startPoint = window.mouseLocationOutsideOfEventStream
        shapeLayer = CAShapeLayer()
        shapeLayer.lineWidth = 1.0
        shapeLayer.fillColor = NSColor.white.withAlphaComponent(0.5).cgColor
        shapeLayer.strokeColor = NSColor.systemGray.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Then this is the code where I actually generate the screenshot and crop using the CGRect:
public func processResults(_ rect: CGRect) {
    if let windowID = self.globalWindow?.windowNumber,
       let screen = self.getScreenWithMouse(), rect.width > 5 && rect.height > 5 {
        self.delegate?.processingResults()
        let cgScreenshot = CGWindowListCreateImage(screen.frame, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)
        var rect2 = rect
        rect2.origin.y = NSMaxY(self.getScreenWithMouse()!.frame) - NSMaxY(rect)
        if let croppedCGScreenshot = cgScreenshot?.cropping(to: rect2) {
            let rep = NSBitmapImageRep(cgImage: croppedCGScreenshot)
            let image = NSImage()
            image.addRepresentation(rep)
            self.showPreviewWindow(image: image)

            let requests = [self.getTextRecognitionRequest()]
            let imageRequestHandler = VNImageRequestHandler(cgImage: croppedCGScreenshot, orientation: .up, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try imageRequestHandler.perform(requests)
                } catch let error {
                    print("Error: \(error)")
                }
            }

            DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
                self.hidePreviewWindow()
            }
        }
    }
    self.globalWindow = nil
}
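The two helpers referenced above (getScreenWithMouse() and getTextRecognitionRequest()) aren't shown here; for context, hypothetical sketches of them could look like this, assuming Vision's VNRecognizeTextRequest (macOS 10.15+) and a simple hit test over NSScreen.screens:

import Cocoa
import Vision

// Hypothetical: returns the screen currently containing the mouse pointer.
func getScreenWithMouse() -> NSScreen? {
    let mouseLocation = NSEvent.mouseLocation
    return NSScreen.screens.first { NSMouseInRect(mouseLocation, $0.frame, false) }
}

// Hypothetical: builds a Vision text-recognition request that prints the results.
func getTextRecognitionRequest() -> VNRecognizeTextRequest {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate
    return request
}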

Not 15 minutes after I asked this question, I tried one more thing and it works!
Relevant snippet:
var correctedRect = rect

// Set the Y origin properly (counteracting the flipped Y-axis)
correctedRect.origin.y = screen.frame.height - rect.origin.y - rect.height

// Check whether we're on another screen
if screen.frame.origin.y < 0 {
    correctedRect.origin.y = correctedRect.origin.y - screen.frame.origin.y
}

// Finally, correct the X origin (if we're on another screen, the origin will be larger than zero)
correctedRect.origin.x = correctedRect.origin.x + screen.frame.origin.x

// Generate the screenshot inside the requested rect
let cgScreenshot = CGWindowListCreateImage(correctedRect, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)
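For reuse, the same correction can be factored into a small helper. A minimal sketch (the function name is mine, not from the original code):

// Converts a rect from Cocoa coordinates (bottom-left origin, per-screen)
// to the global top-left-origin space that CGWindowListCreateImage expects.
func quartzRect(for rect: CGRect, on screen: NSScreen) -> CGRect {
    var corrected = rect
    // Flip the Y-axis within the screen
    corrected.origin.y = screen.frame.height - rect.origin.y - rect.height
    // Secondary screens can have a negative Y origin
    if screen.frame.origin.y < 0 {
        corrected.origin.y -= screen.frame.origin.y
    }
    // Offset X by the screen's position in the global space
    corrected.origin.x += screen.frame.origin.x
    return corrected
}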

Related

How to merge 3 images(100x100px) to one new large (300x100px) in Swift

I am very new to SwiftUI and haven't found a way.
I want to create one image (300x100) dynamically by merging three single 100x100 images horizontally:
ImageA (100x100) + ImageB (100x100) + ImageC (100x100) = ImageD (300x100)
I found a way to show them in an HStack, but how can I get one new image to send the data to a new function?
Regards
Alex
Thanks a lot, I tried to use your function but got an error here: "Cannot convert value of type 'UIImage?' to expected element type 'UIImage'".
The code should draw the zero three times; number-0.png is just one large zero.
//
//  ContentView.swift
//  imagecombine
//
//  Created by Alex on 02.08.20.
//  Copyright © 2020 Alex. All rights reserved.
//

import SwiftUI

func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { context in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}

let redImage = UIImage(named: "number-0.png")
let greenImage = UIImage(named: "number-0.png")
let blueImage = UIImage(named: "number-0.png")
// The compile error occurs here: the array elements are UIImage?, not UIImage
let image = combineHorizontally([redImage, greenImage, blueImage])

struct ContentView: View {
    var body: some View {
        Image(image)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
You can use UIGraphicsImageRenderer and draw the images one after another:
func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
You said that there were only three images and they were 100 × 100, but the above should work regardless of the number and size (memory permitting, of course).
Anyway, this
let image = combineHorizontally([redImage, greenImage, blueImage])
results in the three images drawn side by side in a single 300 × 100 image.
To use that in a context where you don't want an optional, you can use the ! forced-unwrapping operator, the ?? nil-coalescing operator, or some other unwrapping pattern, e.g. guard let, if let, etc.
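For instance (note the compactMap, which also resolves the "Cannot convert value of type 'UIImage?'" error from the comment above, since UIImage(named:) returns an optional):

let images = [redImage, greenImage, blueImage].compactMap { $0 }  // [UIImage?] -> [UIImage]

let forced = combineHorizontally(images)!                  // crashes if images is empty
let coalesced = combineHorizontally(images) ?? UIImage()   // falls back to an empty image
if let combined = combineHorizontally(images) {
    // use combined here
}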
Alternatively, if you don't want to deal with optionals at all, you can write a rendition that doesn't return an optional (but also doesn't detect the error scenario where an empty array was provided):
func combineHorizontally(_ images: [UIImage]) -> UIImage {
    var size = CGSize.zero
    var scale: CGFloat = 1
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
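With that rendition, the view from the question could be written roughly like this (a sketch; note that Image(_:) takes an asset name, so a UIImage has to go through Image(uiImage:)):

struct ContentView: View {
    var body: some View {
        Image(uiImage: combineHorizontally([
            UIImage(named: "number-0")!,
            UIImage(named: "number-0")!,
            UIImage(named: "number-0")!
        ]))
    }
}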

iOS Swift how to incrementally draw to a UIImage?

How can I use iOS Core Graphics to incrementally draw a large dataset into a single image?
I have code which processes the entire dataset at once (over 100,000 rectangles) and produces a single image. This is a very long-running operation, and I want the dataset to be drawn incrementally, 1,000 rectangles at a time, displaying these small image updates (like images downloading from the internet in the '90s).
My questions are: Should I keep a reference to the same context throughout the operation and simply add elements to it? Or should I capture the current image using UIGraphicsGetImageFromCurrentImageContext(), then draw it into a new context and draw additional rectangles on top of it?
Bonus question - is this the right approach if I want to use multiple threads to append to the same image?
let context = UIGraphicsGetCurrentContext()!
context.setStrokeColor(borderColor)
context.setLineWidth(CGFloat(borderWidth))

for elementIndex in 0 ..< data.count {
    context.setFillColor(color.cgColor)
    let marker = CGRect(x: toX(elementIndex),
                        y: toY(elementIndex),
                        width: rectWidth,
                        height: rectHeight)
    context.addRect(marker)
    context.drawPath(using: .fillStroke)
}

// Save the context as a new UIImage
let myImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

if let cgImage = myImage?.cgImage,
   let orientation = myImage?.imageOrientation {
    return UIImage(cgImage: cgImage, scale: 2, orientation: orientation)
}
You should:
Dispatch the whole thing to some background queue;
periodically call UIGraphicsGetImageFromCurrentImageContext and dispatch the image view update to the main queue
E.g., this will update the image view every ¼ second:
DispatchQueue.global().async {
    var lastDrawn = CACurrentMediaTime()
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    for _ in 0 ..< 100_000 {
        // draw whatever you want
        let now = CACurrentMediaTime()
        if now - lastDrawn > 0.25 {
            self.updateImageView()
            lastDrawn = now
        }
    }
    self.updateImageView()
    UIGraphicsEndImageContext()
}
Where:
func updateImageView() {
    guard let image = UIGraphicsGetImageFromCurrentImageContext() else { return }
    DispatchQueue.main.async {
        self.imageView.image = image
    }
}
Thus:
func buildImage(of size: CGSize) {
    DispatchQueue.global().async {
        var lastDrawn = CACurrentMediaTime()
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        for _ in 0 ..< 100_000 {
            self.someColor().setFill()
            UIBezierPath(rect: self.someRectangle(in: size)).fill()
            let now = CACurrentMediaTime()
            if now - lastDrawn > 0.25 {
                self.updateImageView()
                lastDrawn = now
            }
        }
        self.updateImageView()
        UIGraphicsEndImageContext()
    }
}

func updateImageView() {
    let image = UIGraphicsGetImageFromCurrentImageContext()
    DispatchQueue.main.async {
        self.imageView.image = image
    }
}

func someRectangle(in size: CGSize) -> CGRect {
    let x = CGFloat.random(in: 0...size.width)
    let y = CGFloat.random(in: 0...size.height)
    let width = CGFloat.random(in: 0...(size.width - x))
    let height = CGFloat.random(in: 0...(size.height - y))
    return CGRect(x: x, y: y, width: width, height: height)
}

func someColor() -> UIColor {
    return UIColor(red: .random(in: 0...1),
                   green: .random(in: 0...1),
                   blue: .random(in: 0...1),
                   alpha: 1)
}
This yields an image of random rectangles that fills in incrementally as the loop runs.
Now, I’m not calling CoreGraphics directly, but you can and it will work the same.
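For example, the body of the loop above could be replaced with direct Core Graphics calls along these lines (a sketch, assuming the bitmap context opened above is still current):

if let context = UIGraphicsGetCurrentContext() {
    // Fill one random rectangle per iteration, as the UIBezierPath version does
    context.setFillColor(self.someColor().cgColor)
    context.addRect(self.someRectangle(in: size))
    context.drawPath(using: .fill)
}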

NSWindow view capture to image

Update: Nov. 6
Thanks to pointum, I revised my question.
On 10.13, I'm trying to write a view snapshot function as a general-purpose NSView or window extension. Here's my take as a window delegate:
var snapshot: NSImage? {
    get {
        guard let window = self.window, let view = self.window!.contentView else { return nil }

        var rect = view.bounds
        rect = view.convert(rect, to: nil)
        rect = window.convertToScreen(rect)

        // Adjust for titlebar; kTitleUtility = 16, kTitleNormal = 22
        let delta: CGFloat = CGFloat(window.styleMask.contains(.utilityWindow) ? kTitleUtility : kTitleNormal)
        rect.origin.y += delta
        rect.size.height += delta * 2

        Swift.print("rect: \(rect)")

        let cgImage = CGWindowListCreateImage(rect, .optionIncludingWindow,
                                              CGWindowID(window.windowNumber), .bestResolution)
        let image = NSImage(cgImage: cgImage!, size: rect.size)
        return image
    }
}
What I'm after is a 'flattened' snapshot of the window; initially I'm using this image in a document icon drag.
It acts bizarrely: it seems to work initially, with the window in the center of the screen, but subsequently the resulting image is different (smaller), especially when the window is moved up or down the screen.
I think the capture rect is wrong?
Adding to pointum's answer I came up with this:
var snapshot: NSImage? {
    get {
        guard let window = self.window, let view = self.window!.contentView else { return nil }

        // CGRect.null (the infinite-origin null rectangle) tells
        // CGWindowListCreateImage to capture the minimal bounds
        // enclosing the specified window.
        let cgImage = CGWindowListCreateImage(.null, .optionIncludingWindow,
                                              CGWindowID(window.windowNumber), .bestResolution)
        let image = NSImage(cgImage: cgImage!, size: view.bounds.size)
        return image
    }
}
As I only want/need a single window, specifying the null rectangle does the trick. When all else fails, the docs, if you know where to look :o.
Use CGWindowListCreateImage:
let rect = /* view bounds converted to screen coordinates */
let image = CGWindowListCreateImage(rect, .optionIncludingWindow,
                                    CGWindowID(window.windowNumber), .bestResolution)
To save the image use something like this:
let destination = CGImageDestinationCreateWithURL(url as CFURL, "public.jpeg" as CFString, 1, nil)!
CGImageDestinationAddImage(destination, image!, nil)
CGImageDestinationFinalize(destination)
Note that screen coordinates are flipped. From the docs:
The coordinates of the rectangle must be specified in screen coordinates, where the screen origin is in the upper-left corner of the main display and y-axis values increase downward
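So after converting the view's bounds to screen coordinates, the Y origin still has to be flipped against the main display before calling CGWindowListCreateImage. A minimal sketch of that flip (the helper name is mine, assuming the first entry in NSScreen.screens is the primary display):

// Flip a Cocoa screen rect (bottom-left origin) into the top-left-origin
// space documented for CGWindowListCreateImage.
func flippedForCapture(_ rect: CGRect) -> CGRect {
    guard let primary = NSScreen.screens.first else { return rect }
    var flipped = rect
    flipped.origin.y = primary.frame.maxY - rect.maxY
    return flipped
}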

How can I make SKSpriteNode positions the same for any simulator/device?

In my game, the positions of my SKNodes change slightly when I run the app in a virtual simulator vs. on a real device (my iPad).
Here are pictures of what I am talking about.
This is the virtual simulator
This is my iPad
It is hard to see, but the two red boxes are slightly higher on my iPad than in the simulator
Here is how I declare the size and position of the red boxes and the green net. The following code is located in my GameScene.swift file:
func loadAppearance_Rim1() {
    Rim1 = SKSpriteNode(color: UIColor.redColor(), size: CGSizeMake((frame.size.width) / 40, (frame.size.width) / 40))
    Rim1.position = CGPointMake(((frame.size.width) / 2.23), ((frame.size.height) / 1.33))
    Rim1.zPosition = 1
    addChild(Rim1)
}

func loadAppearance_Rim2() {
    Rim2 = SKSpriteNode(color: UIColor.redColor(), size: CGSizeMake((frame.size.width) / 40, (frame.size.width) / 40))
    Rim2.position = CGPoint(x: ((frame.size.width) / 1.8), y: ((frame.size.height) / 1.33))
    Rim2.zPosition = 1
    addChild(Rim2)
}

func loadAppearance_RimNet() {
    RimNet = SKSpriteNode(color: UIColor.greenColor(), size: CGSizeMake((frame.size.width) / 7.5, (frame.size.width) / 150))
    RimNet.position = CGPointMake(frame.size.width / 1.99, frame.size.height / 1.33)
    RimNet.zPosition = 1
    addChild(RimNet)
}

func addBackground() {
    // background
    background = SKSpriteNode(imageNamed: "Background")
    background.zPosition = 0
    background.size = self.frame.size
    background.position = CGPoint(x: self.size.width / 2, y: self.size.height / 2)
    self.addChild(background)
}
Additionally, my GameViewController.swift looks like this:
import UIKit
import SpriteKit

class GameViewController: UIViewController {
    var scene: GameScene!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Configure the view
        let skView = view as! SKView

        // If finger is on iPhone, you can't tap again
        skView.multipleTouchEnabled = false

        // Create and configure the scene within the size of the SKView
        scene = GameScene(size: skView.bounds.size)
        scene.scaleMode = .AspectFill
        scene.size = skView.bounds.size
        //scene.anchorPoint = CGPointZero

        // Present the scene
        skView.presentScene(scene)
    }

    override func shouldAutorotate() -> Bool {
        return true
    }

    override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
        if UIDevice.currentDevice().userInterfaceIdiom == .Phone {
            return .Landscape
        } else {
            return .All
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Release any cached data, images, etc. that aren't in use.
    }

    override func prefersStatusBarHidden() -> Bool {
        return true
    }
}
How can I make the positions of my nodes be the same for each simulator/physical device?
You should round those floating-point values to integers via a call to round() so that the values snap to whole pixels. Any place where you use CGPoint or CGSize should use whole pixels as opposed to fractional floating-point values.
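A minimal sketch of that in Swift (the helper name is mine), applied to one of the positions from the question:

// Snap a computed position to whole pixels before assigning it to a node.
func pixelAligned(_ point: CGPoint) -> CGPoint {
    return CGPoint(x: round(point.x), y: round(point.y))
}

Rim1.position = pixelAligned(CGPoint(x: frame.size.width / 2.23,
                                     y: frame.size.height / 1.33))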
If you are making a Universal application you need to declare the size of the scene using integer values. Here is an example:
scene = GameScene(size:CGSize(width: 2048, height: 1536))
Then when you initialize the positions and sizes of your nodes using CGPoint and CGSize, make them dependent on the SKScene size. Here is an example:
node.position = CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2)
If you declare the size of the scene for a Universal App like this:
scene.size = skView.bounds.size
then your SKSpriteNode positions will be all messed up. You may also need to change the scaleMode to .ResizeFill. This worked for me.

Center image in UIScrollView with Swift

I have a long picture that I want to show in full, scrolling the page from top to bottom with a ScrollView.
My problem is that the image sits all the way to the right of the page, and there is no way to center it in the middle of the page. However, when I double-tap the page to zoom the photo, the image comes to the center of the page and there are no problems.
The error occurs only when I first enter the page: the photo is all the way to the right, cut off because part of it goes outside the page.
I am using Swift, with Auto Layout (my app doesn't work without it), and this is the code for the ScrollView.
I hope that you can help me...
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    navigationItem.title = nameString!
    imageView?.image = UIImage(named: imageName!)
    nameLabel?.text = nameString!

    // 1
    let image = UIImage(named: imageName!)!
    imageView = UIImageView(image: image)
    imageView.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
    scrollView.addSubview(imageView)

    // 2
    scrollView.contentSize = image.size

    // 3
    var doubleTapRecognizer = UITapGestureRecognizer(target: self, action: "scrollViewDoubleTapped:")
    doubleTapRecognizer.numberOfTapsRequired = 2
    doubleTapRecognizer.numberOfTouchesRequired = 1
    scrollView.addGestureRecognizer(doubleTapRecognizer)

    // 4
    let scrollViewFrame = scrollView.frame
    let scaleWidth = scrollViewFrame.size.width / scrollView.contentSize.width
    let scaleHeight = scrollViewFrame.size.height / scrollView.contentSize.height
    let minScale = min(scaleWidth, scaleHeight)
    scrollView.minimumZoomScale = minScale

    // 5
    scrollView.maximumZoomScale = 1.0
    scrollView.zoomScale = minScale

    // 6
    centerScrollViewContents()
}
func centerScrollViewContents() {
    let boundsSize = scrollView.bounds.size
    var contentsFrame = imageView.frame

    if contentsFrame.size.width < boundsSize.width {
        contentsFrame.origin.x = (boundsSize.width - contentsFrame.size.width) / 2.0
    } else {
        contentsFrame.origin.x = 0.0
    }

    if contentsFrame.size.height < boundsSize.height {
        contentsFrame.origin.y = (boundsSize.height - contentsFrame.size.height) / 2.0
    } else {
        contentsFrame.origin.y = 0.0
    }

    imageView.frame = contentsFrame
}
func scrollViewDoubleTapped(recognizer: UITapGestureRecognizer) {
    // 1
    let pointInView = recognizer.locationInView(imageView)

    // 2
    var newZoomScale = scrollView.zoomScale * 1.5
    newZoomScale = min(newZoomScale, scrollView.maximumZoomScale)

    // 3
    let scrollViewSize = scrollView.bounds.size
    let w = scrollViewSize.width / newZoomScale
    let h = scrollViewSize.height / newZoomScale
    let x = pointInView.x - (w / 2.0)
    let y = pointInView.y - (h / 2.0)
    let rectToZoomTo = CGRectMake(x, y, w, h)

    // 4
    scrollView.zoomToRect(rectToZoomTo, animated: true)
}

func viewForZoomingInScrollView(scrollView: UIScrollView!) -> UIView! {
    return imageView
}

func scrollViewDidZoom(scrollView: UIScrollView!) {
    centerScrollViewContents()
}
Where is the problem? Thank you all
Solved: it only needed unchecking size classes and deleting the LaunchScreen.
