I have a SwiftUI MapKitView that follows the pattern in https://www.hackingwithswift.com/books/ios-swiftui/advanced-mkmapview-with-swiftui, adapted for macOS. In makeNSView I set the region for the interesting part of the location I want to display, irrespective of window size, and this code zooms appropriately by default whether the NSWindow is more landscape than portrait or vice versa.
func makeNSView(context: Context) -> MKMapView {
    let mapView = MKMapView()
    mapView.delegate = context.coordinator
    mapView.mapType = .satellite
    mapView.pointOfInterestFilter = .excludingAll
    let region = MKCoordinateRegion(center: centroid, latitudinalMeters: 1160, longitudinalMeters: 1260)
    let fittedRegion = mapView.regionThatFits(region)
    mapView.setRegion(fittedRegion, animated: false)
    return mapView
}
It appears that mapView.region is not actually updated upon return from setRegion().
The rub is I want to orient the map other than true north, so I have to set a camera.
However, the fromDistance: parameter used when creating an MKMapCamera has to be computed from the region that was set; what is the camera's field-of-view angle, so I can work out how high it needs to be to include the correct extent for the window once the region is set? I basically want the same zoom level as the fittedRegion provides, replicated in the camera with the changed heading (and pitch at 0).
It appears that MKMapViewDelegate has mapViewDidChangeVisibleRegion, and I think the Coordinator is the delegate in SwiftUI. I can see the region on multiple calls to updateNSView, though it takes a few calls before it's actually set. I suspect setting the camera there will trigger another updateNSView() call, which would pose problems.
How can I orient the map to include a given region extent regardless of window size, with the desired zoom level and heading on initial load (but then let the user manipulate the map as they see fit)?
Try the following
func makeNSView(context: Context) -> MKMapView {
    let mapView = MKMapView()
    mapView.delegate = context.coordinator
    mapView.mapType = .satellite
    mapView.pointOfInterestFilter = .excludingAll
    let region = MKCoordinateRegion(center: centroid, latitudinalMeters: 1160, longitudinalMeters: 1260)
    let fittedRegion = mapView.regionThatFits(region)
    DispatchQueue.main.async {
        mapView.setRegion(fittedRegion, animated: false)
    }
    return mapView
}
I think I figured it out.
Responding to func mapViewDidFinishLoadingMap(_ mapView: MKMapView) in the Coordinator and setting the mapView.camera.heading to an appropriate value there does what I'm intending.
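For reference, a minimal sketch of what that looks like in the Coordinator (the heading of 135 degrees here is just a placeholder value, not from the original code):

// In the Coordinator, which acts as the MKMapViewDelegate:
func mapViewDidFinishLoadingMap(_ mapView: MKMapView) {
    // Copy the camera MapKit derived from the fitted region and change only
    // the heading, so the altitude (zoom level) is preserved.
    let camera = mapView.camera.copy() as! MKMapCamera
    camera.heading = 135 // placeholder heading in degrees
    camera.pitch = 0
    mapView.setCamera(camera, animated: false)
}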
Related
I am trying to create an object that can be dragged and rotated in an NSView, and have been successful in doing so using NSBezierPath. I am creating multiple objects, storing them in a class, and using NSBezierPath.transform(using: AffineTransform) to modify the path in response to drag and rotation input.
This all works fine, but I now want to add text to the shape, and it seems there is a different set of rules for dealing with text.
I have tried using Core Text by creating a CTFrame but have no idea how to move or rotate this.
Is there a good reason why the handling of text is so different from NSBezierPath?
And then there is the difference between AffineTransform and CGAffineTransform. The whole thing is pretty confusing, and good documentation explaining the difference seems hard to come by.
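As far as I can tell, AffineTransform is the Foundation type that NSBezierPath.transform(using:) expects, while CGAffineTransform is the Core Graphics type used by CGContext, CGPath and CALayer; they describe the same 2x3 matrix, so, as far as I understand it, one can be rebuilt from the other, e.g.:

let rotation = AffineTransform(rotationByDegrees: 30)
// Rebuild the equivalent Core Graphics transform from the Foundation one.
let cgRotation = CGAffineTransform(a: rotation.m11, b: rotation.m12,
                                   c: rotation.m21, d: rotation.m22,
                                   tx: rotation.tX, ty: rotation.tY)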
Below is the code for creating and moving the shape, which seems to work perfectly. I have no idea how to move the text, ideally without having to recreate it. Is there any way to translate and rotate the CTFrame?
var path: NSBezierPath
var location: NSPoint {
    didSet {
        // move()
    }
}
var angle: CGFloat {
    didSet {
        let dx = angle - oldValue
        rotate(dx)
    }
}
func createPath() {
    // Create a simple path with a rectangle
    self.path = NSBezierPath(rect: NSRect(x: -1 * width / 2.0, y: -1 * height / 2.0, width: width, height: height))
    let line = NSBezierPath()
    line.move(to: NSPoint(x: width / 2.0, y: 0))
    line.line(to: NSPoint(x: width / 2.0 + leader, y: 0))
    self.path.append(line)
    // Label !!
    let rect = NSRect(x: width / 2.0, y: 0, width: leader, height: height / 2.0)
    let attrString = NSAttributedString(string: assortmentLabel, attributes: attributesForLeftText)
    self.labelFrame = textFrame(attrString: attrString, rect: rect)
    // ??? How to rotate the CTFrame - is this even possible
    move()
    rotate(angle)
}
func rotate(_ da: CGFloat) {
    // Move to origin
    let loc = AffineTransform(translationByX: -location.x, byY: -location.y)
    self.path.transform(using: loc)
    let rotation = AffineTransform(rotationByDegrees: da)
    self.path.transform(using: rotation)
    // Move back
    self.path.transform(using: AffineTransform(translationByX: location.x, byY: location.y))
}

func move() {
    let loc = AffineTransform(translationByX: location.x, byY: location.y)
    self.path.transform(using: loc)
}
func draw() {
    guard let context = NSGraphicsContext.current?.cgContext else {
        return
    }
    color.set()
    path.stroke()
    if isSelected {
        path.fill()
    }
    if let frame = self.labelFrame {
        CTFrameDraw(frame, context)
    }
}
// -----------------------------------
// Modify the item location
// -----------------------------------
func offsetLocationBy(x: CGFloat, y: CGFloat) {
    location.x = location.x + x
    location.y = location.y + y
    let loc = AffineTransform(translationByX: x, byY: y)
    self.path.transform(using: loc)
}
EDIT:
I have changed things around a bit: the shape is now drawn at origin (0, 0) and the transformation is applied to the CGContext prior to drawing.
This does the job, and using CTFrameDraw now works correctly...
Well almost...
On my test app it works perfectly, but when I integrated the exact same code into the production app the text appears upside-down with all the characters drawn on top of each other.
As far as I can tell there is nothing different about the views the drawing is taking place in - both use the same NSView subclass.
Is there something else that could upset the drawing of the text? It seems like an isFlipped issue, but why would this happen in one app and not the other?
Everything else seems to draw correctly. Tearing my hair out on this.
After much struggling, it seems I got lucky with the test app working at all; I needed to set the textMatrix in BOTH apps to ensure things work properly under all conditions.
It also seems one can't create the CTFrame once and later just scale and redraw it - well, it didn't work for me, so I had to recreate it in the draw() method each time!
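For anyone hitting the same thing, this is roughly what my draw() ended up looking like (a sketch of the idea rather than my exact production code): the path and the CTFrame are built around (0, 0), the context is translated/rotated before drawing, and textMatrix is reset before CTFrameDraw:

func draw() {
    guard let context = NSGraphicsContext.current?.cgContext else { return }
    context.saveGState()
    // Position and rotate the whole shape by transforming the context,
    // so both the path and the text can be built around (0, 0).
    context.translateBy(x: location.x, y: location.y)
    context.rotate(by: angle * .pi / 180)
    color.set()
    path.stroke()
    if isSelected { path.fill() }
    if let frame = self.labelFrame {
        // Core Text keeps its own text matrix; reset it so the glyphs aren't
        // flipped or stacked when the context has been transformed.
        context.textMatrix = .identity
        CTFrameDraw(frame, context)
    }
    context.restoreGState()
}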
I've run into a simple problem that I cannot solve, even after looking everywhere.
I made a grey table view, and at the top I have a cell with a white background.
Is it possible to make the area revealed when the user pulls to refresh also white (at the top)?
Try this code:

let refresh = UIRefreshControl()
let backgroundColor = UIColor.red // use .white in your case
refresh.backgroundColor = backgroundColor
refresh.addTarget(self, action: #selector(self.refreshs), for: .valueChanged)
tableView.addSubview(refresh)

// Park a colored view above the table's content so the area revealed by
// pulling to refresh shows this color instead of the table's grey.
var frame = tableView.bounds
frame.origin.y = -frame.size.height
let backgroundView = UIView(frame: frame)
backgroundView.autoresizingMask = .flexibleWidth
backgroundView.backgroundColor = backgroundColor // background color of the pull-to-refresh area
tableView.insertSubview(backgroundView, at: 0)
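The selector above assumes you have a refreshs method; a minimal sketch of such a handler:

@objc func refreshs(_ sender: UIRefreshControl) {
    // Reload whatever backs the table, then hide the spinner again.
    tableView.reloadData()
    sender.endRefreshing()
}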
Modus Operandi:
1) Use a UIImageView of a base Clock Image.
2) Add MinuteHand & HourHand sublayers (containing their respective images) to the UIImageView layer.
Problem: both sublayers disappear when attempting to perform a rotation transformation.
Note: 1) I've removed the 'hour' code & ancillary radian calculations to simplify the code.
2) The 'center' is the center of the clock. I had adjusted the coordinates to actually pin the hands to the clock's center.
3) viewDidLayoutSubviews() appears to be okay: I get the clock + hands.
class ClockViewController: UIViewController {
    private let minuteLayer = CALayer()

    @IBOutlet weak var clockBaseImageView: UIImageView!
    @IBOutlet weak var datePicker: UIDatePicker!

    override func viewDidLayoutSubviews() {
        guard var minuteSize = UIImage(named: "MinuteHand")?.size,
              var hourSize = UIImage(named: "HourHand")?.size
        else {
            return
        }
        var contentLayer: CALayer {
            return self.view.layer
        }
        var center = clockBaseImageView.center

        // Minute Hand:
        minuteLayer.setValue("*** Minute Hand ***", forKey: "id")
        minuteSize = CGSize(width: minuteSize.width / 3, height: minuteSize.height / 3)
        minuteLayer.contents = UIImage(named: "MinuteHand")?.cgImage
        center = CGPoint(x: 107.0, y: 40.0)
        var handFrame = CGRect(origin: center, size: minuteSize)
        minuteLayer.frame = handFrame
        minuteLayer.contentsScale = clockBaseImageView.layer.contentsScale
        minuteLayer.anchorPoint = center
        clockBaseImageView.layer.addSublayer(minuteLayer)
    }
Here's my problem: Attempting to rotate the minute hand via 0.01 radians:
func set(_ time: Date) {
    minuteLayer.setAffineTransform(CGAffineTransform(rotationAngle: 0.01)) // random value for test.
}
Before rotation attempt:
After attempting to rotate minute hand:
The hand shifted laterally to the right instead of rotating.
Why? Perhaps due to the pivot point?
I think this will solve your problem. Take a look and let me know.
import GLKit // Importing GLKit Framework

func set(_ time: Date) {
    minuteLayer.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(GLKMathDegreesToRadians(0.01))))
}
Note: this solution doesn't solve the issue about rotating a CALayer. Instead, it bypasses the issue by replacing the layer with a subview and rotating the subview via:
func set(_ time: Date) {
    minuteView.transform = CGAffineTransform(rotationAngle: 45 * CGFloat.pi / 180.0)
}
Here's the result:
Still, it would be nice to know how to rotate a CALayer.
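For what it's worth, my reading of the lateral shift (an assumption, not something confirmed in the answers above) is that CALayer.anchorPoint is expressed in the layer's unit coordinate space (0...1), so assigning the clock-centre point (107, 40) to it pushes the layer sideways. Something along these lines keeps the hand pinned and rotating about the clock centre:

// Sketch only: anchorPoint is in unit coordinates, so (0.5, 1.0) is the
// middle of the layer's bottom edge (where a minute hand would pivot).
minuteLayer.anchorPoint = CGPoint(x: 0.5, y: 1.0)
// position is where that anchor point sits in the superlayer's coordinates.
minuteLayer.position = CGPoint(x: clockBaseImageView.bounds.midX,
                               y: clockBaseImageView.bounds.midY)
minuteLayer.bounds = CGRect(origin: .zero, size: minuteSize)
// The rotation then pivots around the anchor point rather than shifting the layer.
minuteLayer.setAffineTransform(CGAffineTransform(rotationAngle: 0.01))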
I'm making a Cocoa app for OS X using Swift. The app is supposed to be very simple: it displays an image, listens for mouse clicks within that image, and returns the 2D mouse coordinates relative to the origin of the image (not the absolute mouse coordinates). I'm not sure I described that well, but for example, once it registers a mouse click event, it should tell me that the click occurred 23 pixels to the right and 57 pixels down from the (0, 0) point of the image (or whatever the units would be).
So far I have this, but all I've been able to do is get it to return the absolute mouse coordinates:
import Cocoa

class ViewController: NSViewController {
    @IBOutlet var ImageButton: NSButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        let fileName = "myTestImage.jpg"
        let path = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true).first! + "/TrainingSet/" + fileName
        //"/Users/dan/Documents/myTestImage.jpg"
        let myImage = NSImage(contentsOfFile: path)
        ImageButton.image = myImage
    }

    override var representedObject: AnyObject? {
        didSet {
            // Update the view, if already loaded.
        }
    }

    @IBAction func ImageButtonClicked(sender: NSButton) {
        //let x = sender
        //let coords = x.frame
        let mouseLocation = NSEvent.mouseLocation()
        print("Mouse Location X,Y = \(mouseLocation)")
        print("Mouse Location X = \(mouseLocation.x)")
        print("Mouse Location Y = \(mouseLocation.y)")
    }
}
How would I go about getting the information I need?
What about
sender.convertPoint(mouseLocation, fromView:nil)
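In current Swift syntax, and assuming the goal is the click point relative to the image button, a sketch of that idea using the click event instead of the global mouse location:

@IBAction func imageButtonClicked(_ sender: NSButton) {
    guard let event = NSApp.currentEvent else { return }
    // locationInWindow is in window coordinates; convert it into the button's
    // own coordinate space so (0, 0) is the button's origin.
    let pointInButton = sender.convert(event.locationInWindow, from: nil)
    print("Click at x=\(pointInButton.x), y=\(pointInButton.y) relative to the image")
}

Note that AppKit's default coordinate system puts the origin at the bottom-left, so y is measured up from the bottom of the button unless the view is flipped.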
There are lots of Watch apps that have rounded corners on their WKInterfaceImages. I'm trying to round some WKInterfaceImages in my test app too, but I can't understand how to do that.
I can't work with imageView.layer. ... as in normal iPhone apps, and I can't find an alternative way to do it in code or in the storyboard.
Do I have to mask all the PNGs, or is there a simpler way?
I solved it by removing the WKInterfaceImage from the storyboard and replacing it with a WKInterfaceGroup sized the same as the previous image. From the attributes inspector I set its radius (yes, with groups it's possible!), then I declared the group in the controller and set the image using row.flagView.setBackgroundImageNamed(imageName).
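A minimal sketch of that setup (FlagRowController and the flagView outlet are just illustrative names; the corner radius itself is set on the group in the storyboard's attributes inspector):

import WatchKit

class FlagRowController: NSObject {
    // WKInterfaceGroup sized like the old WKInterfaceImage, with its Radius set in the storyboard.
    @IBOutlet weak var flagView: WKInterfaceGroup!
}

// In the WKInterfaceController, after configuring the table rows
// (row is an instance of FlagRowController):
row.flagView.setBackgroundImageNamed(imageName)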
You are right, CALayer and UIView are not directly available on watchOS 2. But you can still use the graphics functions, and, for instance, this approach is perfectly acceptable on the Watch.
The analogue in Swift:
class ImageTools {
    class func imageWithRoundedCornerSize(_ cornerRadius: CGFloat, usingImage original: UIImage) -> UIImage {
        let frame = CGRect(x: 0, y: 0, width: original.size.width, height: original.size.height)
        // Begin a new image that will be the original image with rounded corners
        UIGraphicsBeginImageContextWithOptions(original.size, false, 1.0)
        // Add a clip before drawing anything, in the shape of a rounded rect
        UIBezierPath(roundedRect: frame, cornerRadius: cornerRadius).addClip()
        // Draw the original image into the clipped context
        original.draw(in: frame)
        // Get the new image
        let roundedImage = UIGraphicsGetImageFromCurrentImageContext()!
        // We're done drawing
        UIGraphicsEndImageContext()
        return roundedImage
    }
}
Somewhere in your WKInterfaceController class:
let originalImage = UIImage(named: "original-image")!
let roundedImage = ImageTools.imageWithRoundedCornerSize(60, usingImage: originalImage)
// Set `UIImage` for your `WKInterfaceImage`
imageOutlet.setImage(roundedImage)