macOS: how to resize a window across screens?

I'm trying to programmatically resize macOS windows. Similar to Rectangle.
I have the basic resizing code working, for example, moving the window to the right half of the screen. When there is only one screen it works fine; however, when I try to resize with two screens (in a vertical layout), the math does not work:
public func moveRight() {
    guard let frontmostWindowElement = AccessibilityElement.frontmostWindow() else {
        NSSound.beep()
        return
    }

    let screens = screenDetector.detectScreens(using: frontmostWindowElement)
    guard let usableScreens = screens else {
        NSSound.beep()
        print("Unable to obtain usable screens")
        return
    }

    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame
    print("Visible frame of current screen \(usableScreens.visibleFrameOfCurrentScreen)")

    let halfPosition = CGPoint(x: screenFrame.origin.x + screenFrame.width / 2, y: -screenFrame.origin.y)
    let halfSize = CGSize(width: screenFrame.width / 2, height: screenFrame.height)

    frontmostWindowElement.set(size: halfSize)
    frontmostWindowElement.set(position: halfPosition)
    frontmostWindowElement.set(size: halfSize)

    print("movedWindowRect \(frontmostWindowElement.rectOfElement())")
}
If my window is on the main screen, the resizing works correctly. However, if it is on a screen below it (#3 in the diagram below), the Y coordinate ends up on the top monitor (#2 or #1, depending on the X coordinate) instead of the original one.
The output of the code:
Visible frame of current screen (679.0, -800.0, 1280.0, 775.0)
Raw Frame (679.0, -800.0, 1280.0, 800.0)
movedWindowRect (1319.0, 25.0, 640.0, 775.0)
As far as I can see, the problem lies in how screens and windows are positioned:
I'm trying to understand how I should position the window so that it stays on the correct screen (#3), but I've had no luck so far; there doesn't seem to be any method that returns the absolute screen dimensions needed to place the window at the correct origin.
Any idea how this can be solved?

I figured it out; I had completely missed one of the functions in the AccessibilityElement class:
static func normalizeCoordinatesOf(_ rect: CGRect) -> CGRect {
    // Flip the Y axis: Accessibility positions use a top-left origin relative to the screen
    // that hosts the menu bar (NSScreen.screens[0]), while AppKit frames use bottom-left.
    var normalizedRect = rect
    let frameOfScreenWithMenuBar = NSScreen.screens[0].frame as CGRect
    normalizedRect.origin.y = frameOfScreenWithMenuBar.height - rect.maxY
    return normalizedRect
}
Basically, since everything is calculated relative to the main screen, there is no other option than to take that screen's coordinates and offset from them to get the real position of the element on screen.
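For illustration, here is a rough sketch of how moveRight might use that normalization: build the target rect in regular Cocoa screen coordinates first, then flip it into Accessibility (top-left origin) coordinates before handing it to the window element. The exact AccessibilityElement API surface is assumed from the snippets above.

public func moveRight() {
    guard let window = AccessibilityElement.frontmostWindow(),
          let usableScreens = screenDetector.detectScreens(using: window) else {
        NSSound.beep()
        return
    }
    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame

    // Right half of the current screen, expressed in Cocoa coordinates (bottom-left origin).
    let targetRect = CGRect(x: screenFrame.midX,
                            y: screenFrame.minY,
                            width: screenFrame.width / 2,
                            height: screenFrame.height)

    // Flip into Accessibility coordinates (top-left origin, relative to NSScreen.screens[0]).
    let normalized = AccessibilityElement.normalizeCoordinatesOf(targetRect)

    window.set(size: normalized.size)
    window.set(position: normalized.origin)
    window.set(size: normalized.size)
}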

Related

Frame animation from outside screen and TranslateTo inside screen

I'm new to animation with Xamarin Forms. I have a frame that I need to place outside the screen, like this:
The small frame is outside the device's screen
The small frame now inside the device screen
My problem is that I need to know how to place the frame like that (outside the screen) from the start, and how to get the width and the height of every device so I can use the TranslateTo() method to move the frame to the exact same position on every device.
Thanks in advance
You can try this from your .cs page:
Application.Current.MainPage.Width
Application.Current.MainPage.Height
You can use the Xamarin.Essentials NuGet package to achieve this. There is a useful class in there, DeviceDisplay, that should be helpful for you.
The documentation can be found here.
Usage example:
// Get Metrics
var mainDisplayInfo = DeviceDisplay.MainDisplayInfo;
// Orientation (Landscape, Portrait, Square, Unknown)
var orientation = mainDisplayInfo.Orientation;
// Rotation (0, 90, 180, 270)
var rotation = mainDisplayInfo.Rotation;
// Width (in pixels)
var width = mainDisplayInfo.Width;
// Height (in pixels)
var height = mainDisplayInfo.Height;
// Screen density
var density = mainDisplayInfo.Density;

I am able to draw on each ViewController to edit Original ImageView. Is there any way to fix this?

I am working on an animation app, and in every other ViewController I can draw on the image that is currently shown in the original ImageView. Is there any way to fix this?
This is exactly what is happening; I don't really know where in the code the problem lies.
https://drive.google.com/file/d/1M7qWKMugaqeDjGls3zvVitoRmwpOUJFY/view?usp=sharing
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The problem would appear to be that your gesture recognizer is still operational, even though you’ve presented another view on top of the current one.
This is a bit unusual. Usually when you present a view like that, the old one (and its gesture recognizers) are removed from the view hierarchy. I’m guessing that you’re just sliding this second view on top of the other. There are a few solutions:
One solution would be to make sure to define this new view such that (a) it accepts user interaction; and (b) write code so that it handles those gestures. That will avoid having the view behind it picking up those gestures.
Another solution is to disable your gesture recognizer when the menu view is presented, and re-enable it when the menu is dismissed (see the sketch after these options).
The third solution is to change how you present that menu view, making sure you remove the current view from the view hierarchy when you do so. A standard show/present transition generally does this, though, so we may need to see how you’re presenting this menu view to comment further.
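As a rough sketch of that second option (the panGesture outlet and menuViewController property here are placeholders, not names from your project):

import UIKit

class DrawingViewController: UIViewController {
    @IBOutlet var panGesture: UIPanGestureRecognizer!   // the recognizer that drives drawing
    let menuViewController = UIViewController()          // whatever menu scene you present

    func showMenu() {
        panGesture.isEnabled = false                     // stop drawing while the menu is up
        present(menuViewController, animated: true)
    }

    func menuWasDismissed() {
        panGesture.isEnabled = true                      // resume drawing
    }
}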
That having been said, a few unrelated observations:
you should use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext;
rather than
if pencil.eraser == true { ... }
you can
if pencil.eraser { ... }
I’d suggest giving the pencil a computed property:
var color: UIColor { return UIColor(red: red, green: green, blue: blue, alpha: opacity) }
Then you can just refer to pencil.color;
property names should start with a lowercase letter; and
drawingFrame is a confusing name, IMHO, because it’s not a “frame”, but rather likely a UIImageView. I’d call it drawingImageView or something like that.
Yielding:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    // begin an image context (and defer the ending of the context)
    UIGraphicsBeginImageContextWithOptions(drawingImageView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // draw the existing image as the starting point
    drawingImageView.image?.draw(in: drawingImageView.bounds)

    // grab the context
    guard let context = UIGraphicsGetCurrentContext() else { return }

    // draw the line
    context.move(to: fromPoint)
    context.addLine(to: toPoint)
    context.setLineCap(.round)

    if pencil.eraser {
        // eraser
        context.setBlendMode(.clear)
        context.setLineWidth(10)
        context.setStrokeColor(UIColor.white.cgColor)
    } else {
        // opacity, brush width, etc.
        context.setBlendMode(.normal)
        context.setLineWidth(pencil.pencilWidth)
        context.setStrokeColor(pencil.color.cgColor)
    }
    context.strokePath()

    // store the result back into the image view
    drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
}
Or, even better, retire UIGraphicsBeginImageContext altogether and use the modern UIGraphicsImageRenderer:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
        drawingImageView.image?.draw(in: drawingImageView.bounds)

        let path = UIBezierPath()
        path.move(to: fromPoint)
        path.addLine(to: toPoint)
        path.lineCapStyle = .round

        if pencil.eraser {
            path.lineWidth = 10
            UIColor.white.setStroke()
        } else {
            path.lineWidth = pencil.pencilWidth
            pencil.color.setStroke()
        }
        path.stroke()
    }
}
For more information on UIGraphicsImageRenderer, see the “Drawing off-screen” section of WWDC 2018 Image and Graphics Best Practices.
As an aside, once you get this problem behind you, you might want to revisit this "stroke from point a to point b and re-snapshot" logic: capture an array of points, build a path from the whole series, and don't re-snapshot after every point, only after a whole bunch have been added. This snapshotting process is slow, and you're going to find that the UX stutters more than it needs to. I personally re-snapshot after 100 points or so (at which point the time to re-stroke the whole path is slow enough that it's not much faster than the snapshot process, so if I snapshot and restart the path from where I left off, it speeds up again).
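As a very rough sketch of that idea (the drawingImageView outlet, the Pencil type, and the 100-point threshold are all assumptions, not code from your project):

import UIKit

struct Pencil {
    var color: UIColor
    var pencilWidth: CGFloat
}

class CanvasViewController: UIViewController {
    @IBOutlet var drawingImageView: UIImageView!
    var pencil: Pencil?

    private var currentPath = UIBezierPath()   // strokes not yet folded into the snapshot
    private var snapshot: UIImage?             // everything already committed
    private var pointsSinceSnapshot = 0

    func append(_ point: CGPoint) {
        if currentPath.isEmpty {
            currentPath.move(to: point)
        } else {
            currentPath.addLine(to: point)
        }
        pointsSinceSnapshot += 1

        // Redraw = committed snapshot + the in-progress path (cheap until the path grows long).
        drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
            snapshot?.draw(in: drawingImageView.bounds)
            currentPath.lineWidth = pencil?.pencilWidth ?? 1
            currentPath.lineCapStyle = .round
            (pencil?.color ?? .black).setStroke()
            currentPath.stroke()
        }

        // Fold the path into the snapshot only every ~100 points, then restart the path.
        if pointsSinceSnapshot >= 100 {
            snapshot = drawingImageView.image
            currentPath = UIBezierPath()
            currentPath.move(to: point)
            pointsSinceSnapshot = 0
        }
    }
}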
But you say:
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The above should draw only the image of drawingImageView and the stroke from fromPoint to toPoint. Your problem about drawing on “every single ViewController” rests elsewhere. We’d really need to see how precisely you are presenting this menu scene.

CGWindowListCreateImage yields blurred cgImage when zoomed

I'm developing a magnifying glass-like application for macOS. My goal is to be able to pinpoint individual pixels when zoomed in. I'm using this code in mouseMoved(with event: NSEvent):
let captureSize = self.frame.size.width / 9 // 9 is the scale factor
let screenFrame = (NSScreen.main()?.frame)!
let x = floor(point.x) - floor(captureSize / 2)
let y = screenFrame.size.height - floor(point.y) - floor(captureSize / 2)
let windowID = CGWindowID(self.windowNumber)

cgImageExample = CGWindowListCreateImage(CGRect(x: x, y: y, width: captureSize, height: captureSize),
                                         CGWindowListOption.optionOnScreenBelowWindow,
                                         windowID,
                                         CGWindowImageOption.bestResolution)
The cgImage is created in the CGWindowListCreateImage call. When I later draw it in an NSView, the result looks like this:
It looks blurred, as if some anti-aliasing was applied during the creation of the cgImage. My goal is to get a razor-sharp representation of each pixel. Can anyone point me in the right direction?
Ok, I figured it out. It was a matter of setting the interpolation quality to none on the drawing context:
context.interpolationQuality = .none
Result:
On request, some more code:
//get the context
guard let context = NSGraphicsContext.current()?.cgContext else { return }
//get the CGImage
let image: CGImage = //pass the result from CGWindowListCreateImage call
//draw
context.draw(image, in: (CGRect of choice))
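Putting those pieces together, a minimal sketch of a view that draws the captured image pixel-sharp could look like this (the MagnifierView name and capturedImage property are assumptions, and it uses the current NSGraphicsContext.current property syntax):

import Cocoa

class MagnifierView: NSView {
    var capturedImage: CGImage?   // the result of the CGWindowListCreateImage call

    override func draw(_ dirtyRect: NSRect) {
        guard let context = NSGraphicsContext.current?.cgContext,
              let image = capturedImage else { return }
        // Disable interpolation so each captured pixel stays a sharp square when scaled up.
        context.interpolationQuality = .none
        context.draw(image, in: bounds)
    }
}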

Sprite Particle System animation in viewController

I created a macOS single window application and added a Sprite Particle System file with the Stars template, and the visual effect looks like this:
I want to add it to my viewController. Following the guidance of this answer, I got the result below, which was not what I wanted:
override func viewDidLoad() {
    super.viewDidLoad()

    let scene = SCNScene()
    let particlesNode = SCNNode()
    let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "")
    particlesNode.addParticleSystem(particleSystem!)
    scene.rootNode.addChildNode(particlesNode)

    skView.backgroundColor = .black
    skView.scene = scene
}
So I'm wondering: what's wrong, and what should I do?
Here is the demo repo: Link Here
The particle system itself is the standard "star" SceneKit particle system available in Xcode, with no changes.
Well I made a little progress. If I swivel the camera around 180 degrees, I can see the stars receding, so we can tell that the particle system is running ok. In the default orientation, though, all I saw was blinking lights. So I think the particles are being generated with a Z position of 0, the same as the camera's.
If I move the system's node away from the camera
particlesNode.position = SCNVector3(0, 0, -20)
I still just see blinking lights. But if I click on the SCNView, the animation works correctly. I see stars coming at me.
I don't understand why I have to click the view to get it to work right. I tried isPlaying = true but that made no difference.
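For reference, here is the viewDidLoad from the question with that offset applied; it is only the change suggested above (in my test the view still needed a click before the animation ran):

override func viewDidLoad() {
    super.viewDidLoad()

    let scene = SCNScene()

    let particlesNode = SCNNode()
    // Push the emitter back so particles are born in front of the camera, not at its Z position.
    particlesNode.position = SCNVector3(0, 0, -20)
    if let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "") {
        particlesNode.addParticleSystem(particleSystem)
    }
    scene.rootNode.addChildNode(particlesNode)

    skView.backgroundColor = .black
    skView.scene = scene
}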

Horizontal scrollView/UIImageView layout issue

The goal: have a scroll view that displays an array of UIImageViews (photos) that you can scroll through horizontally.
How I understand to do this: make the frame (CGRect) of each UIImageView the width and height of the scroll view, set the y value of each to 0, and set the first image view's x value to 0. For every image view after that, add the width of the scroll view to the x value. In theory, this would line the image views (photos) up next to each other horizontally and not allow any vertical scrolling or zooming; purely a horizontal photo viewer.
The storyboard setup: I am creating my scrollview in a xib file (It’s a custom uiCollectionViewCell), with these constraints:
— Top space to cell (0)
— Trailing space to cell (0)
— Leading space to cell (0)
— Height of 400
— Bottom space to a view (0)
—— (see the image below)
Laying out the UIImgViews:
func layoutScrollView() {
for (index, img) in currentImages.enumerate() {
let imgView = UIImageView(frame: CGRect(x: CGFloat(index) * scrollView.bounds.width, y: CGFloat(0), width: scrollView.bounds.width, height: scrollView.bounds.height))
imgView.contentMode = .ScaleAspectFill
imgView.image = img
scrollView.addSubview(imgView)
scrollView.contentSize = CGSize(width: imgView.frame.width * CGFloat(index), height: scrollView.bounds.height)
scrollView.setNeedsLayout()
}
}
My suspicion: I suspect the issue stems from the Auto Layout constraints I've specified, but (considering I'm asking an SO question) I'm not sure.
If there is a better way to do this (really the correct way) please let me know! I have been trying to wrap my head around this for a few days now.
I appreciate all responses! Thanks for reading
EDIT #1
I tried paulvs' approach of calling setNeedsLayout and layoutIfNeeded before the for loop, and still no luck. Here is (out of the three images selected) the second photo displaying. It seems that both the first and second photos are much longer than the content view, which would push the middle view over (squished).
Your code looks fine except for a few details (that may be causing the problem):
Add:
view.setNeedsLayout()
view.layoutIfNeeded()
before accessing the scrollView's frame (a good place would be before the for-loop).
This is because when using Auto Layout, if you access a view's frame before the layout engine has performed a pass, you will get incorrect frame sizes/positions.
Remove these lines from inside the for-loop:
scrollView.contentSize = CGSize(width: imgView.frame.width * CGFloat(index), height: scrollView.bounds.height)
scrollView.setNeedsLayout()
and place this line after (outside) the for loop:
scrollView.contentSize = CGSize(width: imgView.frame.width * CGFloat(currentImages.count), height: scrollView.bounds.height)
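Put together, a sketch of layoutScrollView with both changes applied might look like this (keeping the Swift 2 syntax and the currentImages/scrollView names from the question):

func layoutScrollView() {
    // Force a layout pass so scrollView.bounds reflects the Auto Layout constraints.
    view.setNeedsLayout()
    view.layoutIfNeeded()

    for (index, img) in currentImages.enumerate() {
        let imgView = UIImageView(frame: CGRect(x: CGFloat(index) * scrollView.bounds.width,
                                                y: 0,
                                                width: scrollView.bounds.width,
                                                height: scrollView.bounds.height))
        imgView.contentMode = .ScaleAspectFill
        imgView.image = img
        scrollView.addSubview(imgView)
    }

    // Set the content size once, after all image views have been added.
    scrollView.contentSize = CGSize(width: scrollView.bounds.width * CGFloat(currentImages.count),
                                    height: scrollView.bounds.height)
}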
