According to the Cocoa Drawing Guide documentation for Images, NSImage can load a Windows cursor .cur file.
But how do I obtain the hotspot needed for NSCursor's -initWithImage:hotSpot: initializer?
As the documentation also says,
In OS X v10.4 and later, NSImage supports many additional file formats using the Image I/O framework.
So let's grab a sample cursor file and experiment in a Swift playground:
import Foundation
import ImageIO

// Load the cursor file and print the properties Image I/O reports for it
let url = Bundle.main.url(forResource: "BUSY_L", withExtension: "CUR")! as CFURL
let source = CGImageSourceCreateWithURL(url, nil)!
print(CGImageSourceCopyPropertiesAtIndex(source, 0, nil)!)
Output:
{
    ColorModel = RGB;
    Depth = 8;
    HasAlpha = 1;
    IsIndexed = 1;
    PixelHeight = 32;
    PixelWidth = 32;
    ProfileName = "sRGB IEC61966-2.1";
    hotspotX = 16;
    hotspotY = 16;
}
So, to get the hotspot safely:
import Foundation
import ImageIO

if let url = Bundle.main.url(forResource: "BUSY_L", withExtension: "CUR") as CFURL?,
    let source = CGImageSourceCreateWithURL(url, nil),
    let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
    let x = properties["hotspotX"] as? CGFloat,
    let y = properties["hotspotY"] as? CGFloat
{
    let hotspot = CGPoint(x: x, y: y)
    print(hotspot)
}
Output:
(16.0, 16.0)
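With the hotspot in hand, building the cursor is one more step. A minimal sketch, assuming the same BUSY_L.CUR file (the helper name is my own):

import AppKit

// Load a .cur file and build an NSCursor from it, reading the hotspot
// out of the Image I/O properties as shown above.
func cursor(fromCurAt url: URL) -> NSCursor? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
        let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
        let x = properties["hotspotX"] as? CGFloat,
        let y = properties["hotspotY"] as? CGFloat,
        let image = NSImage(contentsOf: url)
    else { return nil }
    return NSCursor(image: image, hotSpot: NSPoint(x: x, y: y))
}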
I'm trying to replicate macOS's screenshot functionality: dragging a selection onscreen to provide coordinates for cropping an image. It works fine on my desktop Mac (2560x1600), but when testing on my laptop (2016 rMBP 15", 2880x1800), the cropped image is completely wrong. I don't understand why I get the right results on my desktop but not on my laptop. I think it has something to do with Quartz coordinates differing from Cocoa coordinates, since on the laptop the resulting image looks as if the coordinates are flipped on the Y-axis.
Here is the code I am using to generate the cropping CGRect:
// Segment used to draw the CAShapeLayer:
private func handleDragging(_ event: NSEvent) {
    let mouseLoc = event.locationInWindow
    if let point = self.startPoint,
        let layer = self.shapeLayer {
        // Build a rectangular path from the drag start point to the current mouse location
        let path = CGMutablePath()
        path.move(to: point)
        path.addLine(to: NSPoint(x: point.x, y: mouseLoc.y))
        path.addLine(to: mouseLoc)
        path.addLine(to: NSPoint(x: mouseLoc.x, y: point.y))
        path.closeSubpath()
        layer.path = path
        self.selectionRect = path.boundingBox
    }
}
private func startDragging(_ event: NSEvent) {
    if let window = self.window,
        let contentView = window.contentView,
        let layer = contentView.layer,
        !self.isDragging {
        self.isDragging = true
        self.startPoint = window.mouseLocationOutsideOfEventStream
        // Create the translucent selection overlay
        shapeLayer = CAShapeLayer()
        shapeLayer.lineWidth = 1.0
        shapeLayer.fillColor = NSColor.white.withAlphaComponent(0.5).cgColor
        shapeLayer.strokeColor = NSColor.systemGray.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Then this is the code where I actually generate the screenshot and crop using the CGRect:
public func processResults(_ rect: CGRect) {
    if let windowID = self.globalWindow?.windowNumber,
        let screen = self.getScreenWithMouse(), rect.width > 5 && rect.height > 5 {
        self.delegate?.processingResults()
        let cgScreenshot = CGWindowListCreateImage(screen.frame, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)
        // Attempt to convert the Y origin from Cocoa (bottom-left) to Quartz (top-left) coordinates
        var rect2 = rect
        rect2.origin.y = NSMaxY(self.getScreenWithMouse()!.frame) - NSMaxY(rect)
        if let croppedCGScreenshot = cgScreenshot?.cropping(to: rect2) {
            let rep = NSBitmapImageRep(cgImage: croppedCGScreenshot)
            let image = NSImage()
            image.addRepresentation(rep)
            self.showPreviewWindow(image: image)
            let requests = [self.getTextRecognitionRequest()]
            let imageRequestHandler = VNImageRequestHandler(cgImage: croppedCGScreenshot, orientation: .up, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try imageRequestHandler.perform(requests)
                } catch let error {
                    print("Error: \(error)")
                }
            }
            DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
                self.hidePreviewWindow()
            }
        }
    }
    self.globalWindow = nil
}
Not 15 minutes after I asked this question, I tried one more thing and it works!
Relevant snippet:
var correctedRect = rect
// Set the Y origin properly (counteracting the flipped Y-axis)
correctedRect.origin.y = screen.frame.height - rect.origin.y - rect.height
// Check if we're on another screen
if screen.frame.origin.y < 0 {
    correctedRect.origin.y = correctedRect.origin.y - screen.frame.origin.y
}
// Finally, correct the x origin (if we're on another screen, the origin will be larger than zero)
correctedRect.origin.x = correctedRect.origin.x + screen.frame.origin.x
// Generate the screenshot inside the requested rect
let cgScreenshot = CGWindowListCreateImage(correctedRect, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)
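Factored into a helper, the same correction looks like this (a sketch; quartzRect(for:on:) is my own name, not an AppKit API):

// Convert a rect in a screen's Cocoa (bottom-left origin) coordinates into
// the global Quartz (top-left origin) coordinates that CGWindowListCreateImage
// expects. Mirrors the corrections applied above.
func quartzRect(for rect: CGRect, on screen: NSScreen) -> CGRect {
    var corrected = rect
    corrected.origin.y = screen.frame.height - rect.origin.y - rect.height
    if screen.frame.origin.y < 0 {
        corrected.origin.y -= screen.frame.origin.y
    }
    corrected.origin.x += screen.frame.origin.x
    return corrected
}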
The code below creates a red rectangle that is animated to move across the view from left to right. I would like to load an arbitrary shape from an image to either superimpose on or replace the rectangle. However, the circleLayer.contents = NSImage statement in the initializeCircleLayer function has no effect. The diagnostic print statement seems to verify that the image exists and has been found, but no image appears in the view. How do I get an image into the layer to replace the animated red rectangle? Thanks!
CODE BELOW:
import Cocoa

class ViewController: NSViewController {

    var circleLayer = CALayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.wantsLayer = true
        initializeCircleLayer()
        simpleCAAnimationDemo()
    }

    func initializeCircleLayer() {
        circleLayer.bounds = CGRect(x: 0, y: 0, width: 150, height: 150)
        circleLayer.position = CGPoint(x: 50, y: 150)
        circleLayer.backgroundColor = NSColor.red.cgColor
        circleLayer.cornerRadius = 10.0
        let testIm = NSImage(named: NSImage.Name(rawValue: "testImage"))
        print("testIm = \(String(describing: testIm))")
        circleLayer.contents = NSImage(named: NSImage.Name(rawValue: "testImage"))?.cgImage
        circleLayer.contentsGravity = kCAGravityCenter
        self.view.layer?.addSublayer(circleLayer)
    }

    func simpleCAAnimationDemo() {
        circleLayer.removeAllAnimations()
        let animation = CABasicAnimation(keyPath: "position")
        let startingPoint = NSValue(point: NSPoint(x: 50, y: 150))
        let endingPoint = NSValue(point: NSPoint(x: 600, y: 150))
        animation.fromValue = startingPoint
        animation.toValue = endingPoint
        animation.repeatCount = Float.greatestFiniteMagnitude
        animation.duration = 10.0
        circleLayer.add(animation, forKey: "linearMovement")
    }
}
Why it doesn't work
The reason why
circleLayer.contents = NSImage(named: NSImage.Name(rawValue: "testImage"))?.cgImage
doesn't work is that it's a reference to the cgImage(forProposedRect:context:hints:) method, meaning that its type is
((UnsafeMutablePointer<NSRect>?, NSGraphicsContext?, [NSImageRep.HintKey : Any]?) -> CGImage?)?
You can see this by assigning NSImage(named: NSImage.Name(rawValue: "testImage"))?.cgImage to a local variable and ⌥-clicking it to see its type.
The compiler allows this assignment because circleLayer.contents is an Any? property, so literally anything can be assigned to it.
How to fix it
As of macOS 10.6, you can assign NSImage objects to a layer's contents directly:
circleLayer.contents = NSImage(named: NSImage.Name(rawValue: "testImage"))
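Alternatively, if you specifically want a CGImage, call the method instead of referencing it:

circleLayer.contents = NSImage(named: NSImage.Name(rawValue: "testImage"))?
    .cgImage(forProposedRect: nil, context: nil, hints: nil)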
How can I convert an RGB image into its grayscale colour space? I can find a lot of code for iOS but none for macOS, and Apple's documentation is all in Objective-C...
let width = image.size.width
let height = image.size.height
let imageRect = NSMakeRect(0, 0, width, height)
let colorSpace = CGColorSpaceCreateDeviceGray()
let bits = image.representations.first as! NSBitmapImageRep
bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: nil)
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context.draw(image.cgImage!, in: imageRect) // and this line is wrong, obviously...
This is what I have so far, just copied and pasted from the internet, but I have no idea how to go further...
I have found an interesting way to do this. My code is simply combined from the three sources below:
how to create grayscale image from nsimage in swift?
Greyscale Image using COCOA and NSImage
Changing the Color Space of NSImage: The second reply
My Code:
func saveImage(image: NSImage, destination: URL) throws {
    let rep = greyScale(image: image)
    let data = rep.representation(using: NSBitmapImageFileType.JPEG, properties: [:])
    try data?.write(to: destination)
}

// rgb2gray: draw the image into a grayscale CGContext and wrap the result
func greyScale(image: NSImage) -> NSBitmapImageRep {
    let w = image.size.width
    let h = image.size.height
    let imageRect = NSMakeRect(0, 0, w, h)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let context: CGContext! = CGContext(data: nil, width: Int(w),
                                        height: Int(h), bitsPerComponent: 8,
                                        bytesPerRow: 0, space: colorSpace,
                                        bitmapInfo: CGImageAlphaInfo.none.rawValue)
    context.draw(nsImageToCGImage(image: image)!, in: imageRect)
    let greyImage: CGImage! = context.makeImage()
    return NSBitmapImageRep(cgImage: greyImage)
}

func nsImageToCGImage(image: NSImage) -> CGImage? {
    guard let imageData = image.tiffRepresentation,
        let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) else {
        return nil
    }
    return CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
}
I am still trying to understand the principle behind it.
You can try CIFilter. The annoyance is that you have to convert back and forth between NSImage and CIImage:
import Cocoa
import CoreImage
let url = Bundle.main.url(forResource: "image", withExtension: "jpg")!
let image = CIImage(contentsOf: url)!
let bwFilter = CIFilter(name: "CIColorControls", withInputParameters: ["inputImage": image, "inputSaturation": 0.0])!
if let ciImage = bwFilter.outputImage {
    let rep = NSCIImageRep(ciImage: ciImage)
    let nsImage = NSImage(size: rep.size)
    nsImage.addRepresentation(rep)
    // nsImage is now your black-and-white image
}
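If you then need the result on disk, one option is to render the CIImage to a CGImage and wrap it in a bitmap rep (a sketch continuing from bwFilter above; outputURL is hypothetical):

let ciContext = CIContext()
if let ciImage = bwFilter.outputImage,
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) {
    let rep = NSBitmapImageRep(cgImage: cgImage)
    let pngData = rep.representation(using: NSBitmapImageFileType.PNG, properties: [:])
    try? pngData?.write(to: outputURL)
}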
I have an NSViewController and a variable num. I want to change the size of the window dynamically according to that variable. Is there any way to do that in Swift?
Let's say your window has an IBOutlet named "window", and your dynamic number is named "myDynamicNumber":
func resize() {
    var windowFrame = window.frame
    let oldWidth = windowFrame.size.width
    let oldHeight = windowFrame.size.height
    let toAdd = CGFloat(myDynamicNumber)
    let newWidth = oldWidth + toAdd
    let newHeight = oldHeight + toAdd
    windowFrame.size = NSMakeSize(newWidth, newHeight)
    window.setFrame(windowFrame, display: true)
}
In Swift 3, you resize the window using setFrame.
An example from the ViewController:
func resizeWin(size: (CGFloat, CGFloat)) {
    self.view.window?.setFrame(NSRect(x: 0, y: 0, width: size.0, height: size.1), display: true)
}
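If you want the resize animated, NSWindow's setFrame(_:display:animate:) variant handles it; a sketch along the same lines:

func resizeWinAnimated(size: (CGFloat, CGFloat)) {
    guard let window = self.view.window else { return }
    var frame = window.frame
    frame.size = NSMakeSize(size.0, size.1)
    window.setFrame(frame, display: true, animate: true)
}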
I needed to toggle the visibility of a text view, so I overlaid the window with an invisible view whose frame (hideRect) stops just short of the text view; this way I can resize to the smaller hideRect and later restore the original size, origRect. Both rects are captured in viewDidLoad(). Swift 3/Xcode 8.3.3.
// class global constants
let kTitleUtility = 16
let kTitleNormal = 22

@IBOutlet var hideView: NSView!
var hideRect: NSRect?
var origRect: NSRect?

@IBAction func toggleContent(_ sender: Any) {
    // Toggle content visibility
    if let window = self.view.window {
        let oldSize = window.contentView?.bounds.size
        var frame = window.frame
        if toggleButton.state == NSOffState {
            // Shrink to the smaller hideRect
            frame.origin.y += ((oldSize?.height)! - (hideRect?.size.height)!)
            window.setFrameOrigin(frame.origin)
            window.setContentSize((hideRect?.size)!)
            window.showsResizeIndicator = false
            window.minSize = NSMakeSize((hideRect?.size.width)!, (hideRect?.size.height)! + CGFloat(kTitleNormal))
            creditScroll.isHidden = true
        } else {
            // Restore the original size
            let hugeSize = NSMakeSize(CGFloat(Float.greatestFiniteMagnitude), CGFloat(Float.greatestFiniteMagnitude))
            frame.origin.y += ((oldSize?.height)! - (origRect?.size.height)!)
            window.setFrameOrigin(frame.origin)
            window.setContentSize((origRect?.size)!)
            window.showsResizeIndicator = true
            window.minSize = NSMakeSize((origRect?.size.width)!, (origRect?.size.height)! + CGFloat(kTitleNormal))
            window.maxSize = hugeSize
            creditScroll.isHidden = false
        }
    }
}
This also preserves the window's visual origin and minimum size.
I'm building a Swift game and I need to store some images.
My code works with a string or an integer:
for var i = 0; i < globalCurrentMembers.count; ++i {
    var MembersDefaultName = NSUserDefaults.standardUserDefaults()
    MembersDefaultName.setValue(globalCurrentMembers[i].name, forKey: "globalCurrentMembersName\(i)")
    MembersDefaultName.synchronize()
}
But I get an error for an image:
for var i = 0; i < globalCurrentMembers.count; ++i {
    var MembersDefaultImage = NSUserDefaults.standardUserDefaults()
    MembersDefaultImage.setValue(globalCurrentMembers[i].image, forKey: "globalCurrentMembersImage\(i)")
    MembersDefaultImage.synchronize()
}
globalCurrentMembers is an array of Member, which looks like this:
class Member {
    var image = UIImage()
    var name = String()
    var progression = Int()
    var round = Int()
    var level = Int()
    var imageProgression = [UIButton]()

    func Init() {
        image = UIImage(named: "default.png")!
        name = "default"
        progression = 0
        round = 0
        level = 0
    }
}
So please, can you tell me how to do that?
NSUserDefaults can only store property-list objects (NSData, NSString, NSNumber, NSDate, NSArray, NSDictionary), so you need to convert the UIImage to NSData first.
Change your code to the following:
var imageData = UIImagePNGRepresentation(globalCurrentMembers[i].image)
var myEncodedImageData = NSKeyedArchiver.archivedDataWithRootObject(imageData)
MembersDefaultImage.setObject(myEncodedImageData, forKey: "globalCurrentMembersImage\(i)")
Of course, storing images in NSUserDefaults is not best practice; you should store them in a file directory instead.
You can't store images in NSUserDefaults directly. Have you considered storing just the path to the image, or the name of the image, instead? That would be enormously more efficient, and it would actually work.
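A minimal sketch of the file-based approach, in the same Swift 2-era style as the question (the member\(i).png naming scheme is my own assumption):

let defaults = NSUserDefaults.standardUserDefaults()
let documents = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as NSString
for (i, member) in globalCurrentMembers.enumerate() {
    // Write the PNG once to the Documents directory...
    let fileName = "member\(i).png"
    let path = documents.stringByAppendingPathComponent(fileName)
    UIImagePNGRepresentation(member.image)?.writeToFile(path, atomically: true)
    // ...and store only the file name in NSUserDefaults
    defaults.setObject(fileName, forKey: "globalCurrentMembersImage\(i)")
}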