How to convert a screen position to a world position in ARKit (Xcode)

I'm trying to implement a game in ARKit: when the user taps the screen, a bullet should be fired from the touched position.
I know how to get the 2D position on the screen, as below:
@objc func handleTapGesture(_ sender: UITapGestureRecognizer)
{
    let scnView = sender.view as! ARSCNView
    let holdLocation = sender.location(in: scnView)
}
but I don't know how to convert this position into a world position, so that I can use it in the scenario below:
let bullet = SCNNode(geometry: SCNSphere(radius: 0.08))
bullet.position = ?
The position property is declared as:
open var position: SCNVector3
Can anyone share any pointers? Many thanks.
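One common approach (a sketch of my own, not confirmed by the original post) is to unproject the 2D touch point back into the scene with SCNSceneRenderer's projectPoint/unprojectPoint pair: project a reference world point to learn its normalized screen-space depth, then unproject the touch at that same depth. The function name and the default depth reference here are assumptions:

```swift
// Hedged sketch: convert a 2D touch to a 3D world position at a chosen depth.
// Assumes `scnView` is your ARSCNView and `holdLocation` is the gesture's CGPoint.
func worldPosition(of holdLocation: CGPoint,
                   in scnView: ARSCNView,
                   depthReference: SCNVector3 = SCNVector3(0, 0, -1)) -> SCNVector3 {
    // projectPoint returns the screen-space position of a known world point;
    // its z component is the normalized depth we reuse for the touch.
    let projected = scnView.projectPoint(depthReference)
    let screenPoint = SCNVector3(Float(holdLocation.x), Float(holdLocation.y), projected.z)
    return scnView.unprojectPoint(screenPoint)
}
```

With that in place, `bullet.position = worldPosition(of: holdLocation, in: scnView)` would spawn the bullet at the tapped location at the reference depth.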

Related

macOS: how to resize a window across screens?

I'm trying to programmatically resize macOS windows, similar to Rectangle.
I have the basic resizing code working, for example moving the window to the right half, and with a single screen it works fine. However, when I try to resize with two screens in a vertical layout, the math does not work:
public func moveRight() {
    guard let frontmostWindowElement = AccessibilityElement.frontmostWindow() else {
        NSSound.beep()
        return
    }
    let screens = screenDetector.detectScreens(using: frontmostWindowElement)
    guard let usableScreens = screens else {
        NSSound.beep()
        print("Unable to obtain usable screens")
        return
    }
    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame
    print("Visible frame of current screen \(usableScreens.visibleFrameOfCurrentScreen)")
    let halfPosition = CGPoint(x: screenFrame.origin.x + screenFrame.width / 2, y: -screenFrame.origin.y)
    let halfSize = CGSize(width: screenFrame.width / 2, height: screenFrame.height)
    frontmostWindowElement.set(size: halfSize)
    frontmostWindowElement.set(position: halfPosition)
    frontmostWindowElement.set(size: halfSize)
    print("movedWindowRect \(frontmostWindowElement.rectOfElement())")
}
If my window is on the main screen, the resizing works correctly. However, if it is on a screen below (#3 in the diagram below), the Y coordinate ends up on the top monitor (#2 or #1, depending on the X coordinate) instead of on the original one.
The output of the code:
Visible frame of current screen (679.0, -800.0, 1280.0, 775.0)
Raw Frame (679.0, -800.0, 1280.0, 800.0)
movedWindowRect (1319.0, 25.0, 640.0, 775.0)
As far as I can see, the problem lies in how screens and windows are positioned:
I'm trying to understand how I should position the window so that it remains on the correct screen (#3), but I've had no luck so far; there doesn't seem to be any method that returns the absolute screen dimensions needed to place the window at the correct origin.
Any idea how this can be solved?
I figured it out: I had completely missed one of the functions used in the AccessibilityElement class:
static func normalizeCoordinatesOf(_ rect: CGRect) -> CGRect {
    var normalizedRect = rect
    let frameOfScreenWithMenuBar = NSScreen.screens[0].frame as CGRect
    normalizedRect.origin.y = frameOfScreenWithMenuBar.height - rect.maxY
    return normalizedRect
}
Basically, since everything is calculated relative to the main screen, the only option is to take that screen's coordinates and then offset from them to get the real position of the element.
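The flip is needed because Cocoa (NSScreen) uses a bottom-left origin while the Accessibility API uses a top-left origin, both anchored to the frame of screens[0]. A minimal standalone sketch of the same flip (hypothetical names, assuming a known main-screen height); note that it is its own inverse, which makes for a handy sanity check:

```swift
import Foundation

// Sketch with hypothetical names: flip a rect between Cocoa's bottom-left
// coordinate space and the Accessibility API's top-left space, both relative
// to the main screen (screens[0]).
func flipY(_ rect: CGRect, mainScreenHeight: CGFloat) -> CGRect {
    var flipped = rect
    flipped.origin.y = mainScreenHeight - rect.maxY
    return flipped
}
```

Applying flipY twice returns the original rect, so round-tripping a window frame is an easy way to verify a conversion like this.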

CGWindowListCreateImage yields blurred cgImage when zoomed

I'm developing a magnifying-glass-like application for the Mac. My goal is to be able to pinpoint individual pixels when zoomed in. I'm using this code in mouseMoved(with event: NSEvent):
let captureSize = self.frame.size.width / 9 // 9 is the scale factor
let screenFrame = (NSScreen.main()?.frame)!
let x = floor(point.x) - floor(captureSize / 2)
let y = screenFrame.size.height - floor(point.y) - floor(captureSize / 2)
let windowID = CGWindowID(self.windowNumber)
cgImageExample = CGWindowListCreateImage(CGRect(x: x, y: y, width: captureSize, height: captureSize),
                                         CGWindowListOption.optionOnScreenBelowWindow,
                                         windowID,
                                         CGWindowImageOption.bestResolution)
The CGImage is created in the CGWindowListCreateImage call. When I later draw it in an NSView, the result looks like this:
It looks blurred, as if some anti-aliasing was applied during the creation of the CGImage. My goal is a razor-sharp representation of each pixel. Can anyone point me in the right direction?
OK, I figured it out. It was a matter of setting the interpolation quality to none on the drawing context:
context.interpolationQuality = .none
Result:
On request, some more code:
// get the context
guard let context = NSGraphicsContext.current()?.cgContext else { return }
// get the CGImage
let image: CGImage = // pass in the result of the CGWindowListCreateImage call
// draw
context.draw(image, in: /* CGRect of choice */)
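Another thing that can help (an assumption on my part, not from the original answer): make sure the capture rect itself falls on whole device pixels, otherwise the capture may be resampled before you ever get to draw it. A hedged sketch with assumed names, where `scale` would come from the screen's backingScaleFactor:

```swift
import Foundation

// Sketch (assumed names): snap a capture rect to whole device pixels so the
// screenshot lands on pixel boundaries and needs no resampling.
func pixelAlignedRect(around point: CGPoint, captureSize: CGFloat, scale: CGFloat) -> CGRect {
    // Work in device pixels, round to integers, then convert back to points.
    let sizePx = (captureSize * scale).rounded()
    let xPx = (point.x * scale).rounded() - (sizePx / 2).rounded()
    let yPx = (point.y * scale).rounded() - (sizePx / 2).rounded()
    return CGRect(x: xPx / scale, y: yPx / scale,
                  width: sizePx / scale, height: sizePx / scale)
}
```

Passing the aligned rect to the capture call, together with interpolationQuality = .none when drawing, keeps each captured pixel crisp.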

Sprite Particle System animation in viewController

I created a macOS single window application and added a SceneKit particle system file based on the Stars template. The visual effect looks like this:
I want to add it to my view controller. Following the guide in this answer, I got the result below, which is not what I wanted:
override func viewDidLoad() {
    super.viewDidLoad()
    let scene = SCNScene()
    let particlesNode = SCNNode()
    let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "")
    particlesNode.addParticleSystem(particleSystem!)
    scene.rootNode.addChildNode(particlesNode)
    skView.backgroundColor = .black
    skView.scene = scene
}
So, I'm wondering what's wrong and what should I do?
Here is the demo repo: Link Here
The particle system itself is the standard "star" SceneKit particle system available in Xcode, with no changes.
Well, I made a little progress. If I swivel the camera around 180 degrees, I can see the stars receding, so we can tell the particle system is running OK. In the default orientation, though, all I saw was blinking lights, so I think the particles are being generated with a Z position of 0, the same as the camera's.
If I move the system's node away from the camera:
particlesNode.position = SCNVector3(0, 0, -20)
I still just see blinking lights. But if I click on the SCNView, the animation works correctly. I see stars coming at me.
I don't understand why I have to click the view to get it to work right. I tried isPlaying = true but that made no difference.
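One guess at the click mystery (an assumption, not something the original poster confirmed): SCNView only redraws when it believes the scene has changed, and a particle system alone may not mark the view dirty until some interaction, such as a click, does. Forcing the view to run like a game loop may help:

```swift
// Sketch, assuming `scnView` is the SCNView displaying the particle scene.
scnView.isPlaying = true            // run the view's render loop like a game
scnView.rendersContinuously = true  // redraw every frame (macOS 10.14+)
scnView.scene?.isPaused = false     // ensure the scene itself is not paused
```

Note that rendersContinuously requires macOS 10.14; on earlier systems, isPlaying alone is the documented way to request continuous rendering.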

How to enable MKMapView 3D view?

I have an MKMapView in a window, and pitchEnabled is true (and I've confirmed this in the debugger). The "3D" thingy in the middle of the compass is grayed out, and clicking or dragging it does nothing. Option-dragging the map (like I do in Maps.app) doesn't do anything, either.
From my interpretation of the docs, setting pitchEnabled should let me use the 3D view, like Maps.app does. Am I mistaken? Is there something else I need to do to allow my users to get a 3D map view?
You can get close to the experience of 3D mode by adjusting the camera angle from which you view the map and by making buildings visible. The example below lets you view the Eiffel Tower in 3D:
override func viewDidLoad() {
    super.viewDidLoad()
    mapView.mapType = .standard
    mapView.showsBuildings = true // displays buildings
    let eiffelTowerCoordinates = CLLocationCoordinate2DMake(48.85815, 2.29452)
    mapView.region = MKCoordinateRegion(center: eiffelTowerCoordinates, latitudinalMeters: 1000, longitudinalMeters: 100) // sets the visible region of the map
    // create a 3D camera
    let mapCamera = MKMapCamera()
    mapCamera.centerCoordinate = eiffelTowerCoordinates
    mapCamera.pitch = 45
    mapCamera.altitude = 500 // example altitude
    mapCamera.heading = 45
    // set the camera property
    mapView.camera = mapCamera
}
example from: this question
Since OS X El Capitan (v10.11), there is a new map type: 3D flyover mode.
For some reason this option doesn't show up in Xcode's attributes inspector for the map view; you have to set it programmatically. This makes the map look and behave like the one in the Maps app.
self.mapView.mapType = MKMapTypeSatelliteFlyover;
I was able to do this in Swift on iOS 11:
mapView.mapType = .hybridFlyover
That is giving me the 3D view.
Use this setup method to configure your map view in viewDidLoad:
func setup() {
    objMapView.showsUserLocation = true
    objMapView.delegate = self
    objMapView.showsBuildings = true
    objMapView.mapType = .hybridFlyover
    objLocationManager.startUpdatingLocation()
    if let center = self.objLocationManager.location?.coordinate {
        let currentLocationCoordinates = CLLocationCoordinate2DMake(center.latitude, center.longitude)
        objMapView.region = MKCoordinateRegion(center: currentLocationCoordinates, latitudinalMeters: 1000, longitudinalMeters: 100)
        // create a 3D camera
        let mapCamera = MKMapCamera()
        mapCamera.centerCoordinate = currentLocationCoordinates
        mapCamera.pitch = 45
        mapCamera.altitude = 100
        mapCamera.heading = 45
        // set the camera property
        objMapView.camera = mapCamera
    }
}
In the above code snippet:
The hybridFlyover map type displays the satellite map along with location names where available.
The MKMapCamera instance creates the 3D view of the map; its altitude property determines the altitude from which the location is projected.
See the screenshot below for the output:

How to make a button with an image and set its CGSize in SpriteKit (Swift)

How can I create a button with an image while still deciding the CGSize myself? Right now I can only do this:
let playNode = SKSpriteNode(color: .red, size: CGSize(width: 100, height: 44))
playNode.position = CGPoint(x: frame.midX, y: frame.midY)
playNode.name = "play"
addChild(playNode)
I would like to replace the red color with an actual image. So far I haven't found a way to create a button with an image AND decide its CGSize. I do know how to create a button with just an image, but then I can't determine its CGSize. Any help would be appreciated!
You can set the size property of your node after you've set the image, like this:
let playNode = SKSpriteNode(imageNamed: "yourImage")
// Set it after you've set the image.
playNode.size = CGSize(width: 200, height: 200)
playNode.position = CGPoint(x: frame.midX, y: frame.midY)
playNode.name = "play"
addChild(playNode)
