SwiftUI DragGesture inconsistencies with location and startLocation on macOS

I'm building a SwiftUI macOS app.
I've got a basic Rectangle shape with a drag gesture on it.
In the onEnded handler, I want to determine whether the user has effectively tapped on the object. I do this by checking that the width and height of the translation are both zero.
(There are reasons I'm not using a tap gesture).
Rectangle()
    .size(.init(width: 50, height: 50))
    .fill(Color.blue.opacity(0.01))
    .gesture(DragGesture(minimumDistance: 0)
        .onChanged { gesture in
            // Omitted
        }
        .onEnded { gesture in
            print("startLocation", gesture.startLocation)
            print("start", gesture.location)
            print("translation", gesture.translation)
            if gesture.translation == .zero {
                print("tap")
            }
            print()
        }
    )
I'm getting issues where translations are being reported with unexpected values.
The values reported differ based on where I click in the rectangle.
Here's the output from a series of individual clicks. The translation is derived from the startLocation and location fields.
You can see variation between the startLocation and location fields. If it were a very small variation I could debounce it; however, the fact that I sometimes get a value of 3 makes me wonder how such a variation could happen (I'm being extremely careful to execute the click without movement).
Does anyone know why this variation is creeping in?
startLocation (263.5149841308594, 144.3092803955078)
start (263.51495361328125, 144.30926513671875)
translation (-3.0517578125e-05, -1.52587890625e-05)
startLocation (276.2882995605469, 144.43479919433594)
start (276.288330078125, 144.434814453125)
translation (3.0517578125e-05, 1.52587890625e-05)
startLocation (274.3827209472656, 162.3402557373047)
start (274.38275146484375, 162.34027099609375)
translation (3.0517578125e-05, 1.52587890625e-05)
startLocation (264.81805419921875, 167.47662353515625)
start (264.81805419921875, 167.47662353515625)
translation (0.0, 0.0)
tap
startLocation (254.5931396484375, 135.4690399169922)
start (254.5931396484375, 135.46905517578125)
translation (0.0, 1.52587890625e-05)
startLocation (259.1647033691406, 140.26919555664062)
start (259.16473388671875, 140.26919555664062)
translation (3.0517578125e-05, 0.0)
Edit
As pointed out below, the value of 3 is actually 3e-05 = 0.00003, which I missed at the time of writing. However, I'm still looking for an explanation of why the gesture has zero translation on repeated clicks at some points of the Rectangle, but a non-zero translation at others. (It may be worth noting that 1.52587890625e-05 is exactly 2^-16, so the deltas look like a fixed-point or float conversion artifact rather than genuine movement.)

How about this?
if abs(gesture.translation.width) / UIScreen.main.bounds.width < 0.05 &&
    abs(gesture.translation.height) / UIScreen.main.bounds.height < 0.05 {
    print("tap") // threshold somewhere around 0.01 ~ 0.05
}
This works regardless of the device's size: since the comparison is a percentage of the screen's width and height, the threshold stays constant across devices. (Note that UIScreen is a UIKit API; on macOS you would need the screen size from NSScreen instead.)
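Alternatively, you can avoid screen metrics entirely. The discrepancies in the logs above are on the order of 1e-05 points, so comparing the drag's magnitude against a small absolute tolerance is enough to distinguish a click from a real drag. A minimal sketch, assuming an arbitrary 0.5-point tolerance:
import SwiftUI

// Hypothetical helper: treat the gesture as a tap when the pointer moved
// less than `tolerance` points between mouse-down and mouse-up.
func isEffectivelyTap(_ value: DragGesture.Value, tolerance: CGFloat = 0.5) -> Bool {
    hypot(value.translation.width, value.translation.height) < tolerance
}

// Usage in the gesture above:
// .onEnded { gesture in
//     if isEffectivelyTap(gesture) { print("tap") }
// }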

Related

I am able to draw on every ViewController to edit the original ImageView. Is there any way to fix this?

I am working on an animation app, and in every other ViewController I can draw on the image currently shown in the original ImageView. Is there any way to fix this?
This is exactly what is happening. I don't really know where in the code the problem exists.
https://drive.google.com/file/d/1M7qWKMugaqeDjGls3zvVitoRmwpOUJFY/view?usp=sharing
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app
The problem would appear to be that your gesture recognizer is still operational, even though you’ve presented another view on top of the current one.
This is a bit unusual. Usually when you present a view like that, the old one (and its gesture recognizers) are removed from the view hierarchy. I’m guessing that you’re just sliding this second view on top of the other. There are a few solutions:
One solution would be to define this new view such that (a) it accepts user interaction, and (b) it handles those gestures itself. That will prevent the view behind it from picking up those gestures.
Another solution is to disable your gesture recognizer when the menu view is presented, and re-enable it when the menu is dismissed (see the sketch after this list).
The third solution is to change how you present that menu view, making sure you remove the current view from the view hierarchy when you do so. A standard show/present transition generally does this, though, so we may need to see how you're presenting this menu view to comment further.
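For the second option, a minimal sketch of what that could look like (CanvasViewController, menuViewController, and drawGestureRecognizer are assumed names standing in for whatever your app actually uses):
import UIKit

class CanvasViewController: UIViewController {
    // Hypothetical recognizer that drives the drawing code.
    let drawGestureRecognizer = UIPanGestureRecognizer()

    func presentMenu(_ menuViewController: UIViewController) {
        // Stop the canvas from receiving draw gestures while the menu is up.
        drawGestureRecognizer.isEnabled = false
        present(menuViewController, animated: true)
    }

    func menuDidDismiss() {
        // Resume drawing once the menu goes away.
        drawGestureRecognizer.isEnabled = true
    }
}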
That having been said, a few unrelated observations:
you should use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext;
rather than
if pencil.eraser == true { ... }
you can
if pencil.eraser { ... }
I’d suggest giving the pencil a computed property:
var color: UIColor { return UIColor(red: red, green: green, blue: blue, alpha: opacity) }
Then you can just refer to pencil.color;
property names should start with a lowercase letter; and
drawingFrame is a confusing name, IMHO, because it’s not a “frame”, but rather likely a UIImageView. I’d call it drawingImageView or something like that.
Yielding:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    // begin the image context (and defer the ending of the context)
    UIGraphicsBeginImageContextWithOptions(drawingImageView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // draw the existing image into the context
    drawingImageView.image?.draw(in: drawingImageView.bounds)

    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return }

    // stroke the line
    context.move(to: fromPoint)
    context.addLine(to: toPoint)
    context.setLineCap(.round)
    if pencil.eraser {
        // eraser
        context.setBlendMode(.clear)
        context.setLineWidth(10)
        context.setStrokeColor(UIColor.white.cgColor)
    } else {
        // opacity, brush width, etc.
        context.setBlendMode(.normal)
        context.setLineWidth(pencil.pencilWidth)
        context.setStrokeColor(pencil.color.cgColor)
    }
    context.strokePath()

    // store the composited image back into the image view
    drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
}
Or, even better, retire UIGraphicsBeginImageContext altogether and use the modern UIGraphicsImageRenderer:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
        drawingImageView.image?.draw(in: drawingImageView.bounds)

        let path = UIBezierPath()
        path.move(to: fromPoint)
        path.addLine(to: toPoint)
        path.lineCapStyle = .round
        if pencil.eraser {
            path.lineWidth = 10
            UIColor.white.setStroke()
        } else {
            path.lineWidth = pencil.pencilWidth
            pencil.color.setStroke()
        }
        path.stroke()
    }
}
For more information on UIGraphicsImageRenderer, see the “Drawing off-screen” section of WWDC 2018 Image and Graphics Best Practices.
As an aside, once you get this problem behind you, you might want to revisit this "stroke from point A to point B and re-snapshot" logic: capture an array of points, build a path from the whole series, and don't re-snapshot on every point, but only after a whole bunch have been added. This snapshotting process is slow, and you're going to find that the UX stutters more than it needs to. I personally re-snapshot after 100 points or so (at which point the amount of time to re-stroke the whole path is slow enough that it's not much faster than the snapshot process, so if I snapshot and restart the path from where I left off, it speeds up again). A sketch of that batching idea follows.
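A minimal sketch of that batching idea, reusing the drawingImageView and pencil names from above (currentStroke, snapshotImage, and the 100-point threshold are assumptions):
// Points accumulated since the last snapshot, and the flattened image behind them.
var currentStroke: [CGPoint] = []
var snapshotImage: UIImage?

func append(_ point: CGPoint) {
    currentStroke.append(point)
    redrawCurrentStroke()

    // Only flatten the path into a snapshot every ~100 points, so most touch
    // events just re-stroke a short path instead of re-compositing everything.
    if currentStroke.count >= 100 {
        snapshotImage = drawingImageView.image
        currentStroke = [point]  // restart the path from where we left off
    }
}

func redrawCurrentStroke() {
    drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
        snapshotImage?.draw(in: drawingImageView.bounds)
        guard let first = currentStroke.first, let pencil = pencil else { return }
        let path = UIBezierPath()
        path.move(to: first)
        currentStroke.dropFirst().forEach { path.addLine(to: $0) }
        path.lineCapStyle = .round
        path.lineJoinStyle = .round
        path.lineWidth = pencil.pencilWidth
        pencil.color.setStroke()
        path.stroke()
    }
}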
But you say:
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The above should draw only the image of drawingImageView and the stroke from fromPoint to toPoint. Your problem of drawing on "every single ViewController" lies elsewhere. We'd really need to see precisely how you are presenting this menu scene.

iOS 11: Scroll to top when "adjustedContentInset" changes with larger title bars?

I noticed that this code doesn't quite work as expected on iOS 11, because the "adjustedContentInset" property value changes as the "navigationBar" shrinks during a scroll:
CGFloat contentInsetTop = [scrollView contentInset].top;
if (@available(iOS 11.0, *)) {
    contentInsetTop = [scrollView adjustedContentInset].top;
}
[scrollView setContentOffset:CGPointMake(0, -contentInsetTop) animated:YES];
For example, contentInsetTop might start out as 140, then reduce to 88 after even a minimal scroll. This means that if you call this, it doesn't actually scroll all the way to the top.
Aside from preserving the original offset in memory from when the UIScrollView loads, is there a way to recover this value later to ensure that it does indeed scroll to top consistently, no matter the "adjustedContentInset"?
From what I have heard, there is currently no way to do this on iOS 11. The only option is to capture the initial value and store it for the life of the navigation/view controller.
I will update my answer if I hear otherwise, but unfortunately it will remain broken in the base iOS 11 release.
I had this same problem with a Large Title in iOS 11, and the following code worked for me.
The following code first scrolls the offset a reasonable distance above where you want to be. The value -204.666666666667 was the tallest value I measured after setting Accessibility > Larger Text > Larger Accessibility Sizes to the maximum. I'm sure this doesn't cover every possibility, but it has worked for me so far. (-CGFloat.greatestFiniteMagnitude is otherwise too problematic.)
tableView.setContentOffset(CGPoint(x: 0.0, y: -204.666666666667), animated: false)
This will now give you back the right adjusted content inset. To avoid ending up scrolled too far up, i.e. leaving white space, then set the offset from the recalculated inset as follows.
var contentOffset = CGPoint.zero // a variable we can set per iOS version below
if #available(iOS 11, *) {
    contentOffset = CGPoint(x: 0.0, y: -tableView.adjustedContentInset.top)
} else {
    contentOffset = CGPoint(x: 0.0, y: -tableView.contentInset.top)
}
tableView.setContentOffset(contentOffset, animated: false)
In summary: set the offset higher first (-204.666666666667 in my case, or just -300 or whatever); that readjusts adjustedContentInset.top to include the large title, scroll bar, etc.; then you can set the content offset as needed. Combined into one helper, it might look like the sketch below.
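A minimal sketch of that helper, under the assumptions above (the -300 overshoot is just an arbitrary "high enough" value):
import UIKit

// Scrolls a table view all the way to the top under iOS 11 large titles,
// using the two-step trick described above.
func scrollToTop(_ tableView: UITableView) {
    // Step 1: overshoot well above the top so adjustedContentInset.top
    // re-expands to include the large title, scroll bar, etc.
    tableView.setContentOffset(CGPoint(x: 0, y: -300), animated: false)

    // Step 2: snap to the now-correct top inset.
    let top: CGFloat
    if #available(iOS 11, *) {
        top = tableView.adjustedContentInset.top
    } else {
        top = tableView.contentInset.top
    }
    tableView.setContentOffset(CGPoint(x: 0, y: -top), animated: false)
}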

Override the Auto Layout center values of a UIButton in a subview in viewDidLayoutSubviews

I use Auto Layout for a fairly complex menu and really need it. All the buttons, UIViews, etc. of my menu are in a separate UIView called "menuSubview".
If the user presses a button, the whole "menuSubview" shifts to another position to reveal other parts of the menu. Sometimes the buttons in the menuSubview move as well. I always save the "menu state" (with UserDefaults, in the get-set variable "lastMenu") and have a function that sets the alphas and centers according to the saved "menu state".
I tried calling the "openLastMenu" function in viewDidAppear, viewDidLayoutSubviews, all the "viewDid" methods of the ViewController. The "menuSubview" center and the buttons' alphas always behave as expected... but the buttons' centers simply won't, no matter which "viewDid" method I call the function in.
(the code is a lot more complex - I boiled it down to debug and state my point)
override func viewDidAppear(animated: Bool) {
    if lastMenu != nil { openLastMenu() }
}

func openLastMenu() {
    menuSubview.center.x = view.center.x        // works
    menuSubview.center.y = view.center.y + 200  // works
    button1.center.x = view.center.x - 50       // why you no behave???
    button2.center.x = view.center.x + 50       // why you no behave???
    button3.alpha = 0                           // works
    button4.alpha = 0                           // works
}
For debugging I even made a button subclass that fetches the "center" values with a "didSet" when they change. It seems that after taking the correct values, they change once more back to their Auto Layout position.
...oh, and ignoring the constraints via "translatesAutoresizingMaskIntoConstraints" on the buttons always messes up the whole menu. I'm starting to go crazy here :)
If you position views using Auto Layout, any changes to the frame, like what you do here with the center property, will be ignored.
What you need to do is identify the constraints that you need to change to move the views into the desired position. Example:
You want to move button1 50 points to the left of view.center. Assuming view is the superview of menuSubview, you would
1) deactivate the constraint responsible for button1's horizontal placement. How you do this mainly depends on whether you created the constraints in code or in Interface Builder. The latter will require you to create outlets for some of the constraints.
2) create a new constraint between button1's centerX anchor and view's centerX anchor with a constant of -50, like so (iOS 9 code):
button1.centerXAnchor.constraintEqualToAnchor(view.centerXAnchor, constant: -50.0).active = true
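In later versions of Swift, the same idea reads as follows; a minimal sketch, where centerXConstraint is an assumed @IBOutlet (or stored property) holding button1's current horizontal constraint:
// Deactivate the old horizontal constraint, then install a replacement.
centerXConstraint.isActive = false
centerXConstraint = button1.centerXAnchor.constraint(equalTo: view.centerXAnchor, constant: -50)
centerXConstraint.isActive = true

// Animate the reposition if desired.
UIView.animate(withDuration: 0.3) {
    self.view.layoutIfNeeded()
}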

Convert a given point from the window’s base coordinate system to the screen coordinate system

I am trying to figure out how to convert a given point from the window's base coordinate system to the screen coordinate system. I mean something like - (NSPoint)convertBaseToScreen:(NSPoint)point.
But I want it from Quartz/Carbon.
I have a CGContextRef and its bounds, but the bounds are relative to the window to which the CGContextRef belongs. For example, if the window is at (100, 100, 50, 50) with respect to the screen, the contextRef for the window would be (0, 0, 50, 50). That is, I am at location (0, 0) in the context but actually at (100, 100) on the screen.
Any suggestions are appreciated.
Thank you.
The window maintains its own position in global screen space and the compositor knows how to put that window's image at the correct location in screen space. The context itself, however, doesn't have a location.
Quartz Compositor knows where the window is positioned on the screen, but Quartz 2D doesn't know anything more than how big the area it is supposed to draw in is. It has no idea where Quartz Compositor is going to put the drawing once it is done.
Similarly, when putting together the contents of a window, the frameworks provide the view system. The view system allows the OS to create contexts for drawing individual parts of a window and manages the placement of the results of drawing in those views, usually by manipulating the context's transform, or by creating temporary offscreen contexts. The context itself, however, doesn't know where the final graphic is going to be rendered.
I'm not sure you can use the CGContextRef directly; you need a window or view reference or something similar to do the conversion.
The code I use does the opposite, converting mouse coordinates from global (screen) to view-local, and it goes something like this:
Point mouseLoc;    // point you want to convert from global coordinates
HIPoint where;     // final view-local coordinates
PixMapHandle portPixMap;
// portPixMap is needed to get the correct offset; otherwise the y coordinate is off by at least the menu bar height
portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );
QDGlobalToLocalPoint( GetWindowPort( GetControlOwner( view ) ), &mouseLoc );
where.x = mouseLoc.h - (**portPixMap).bounds.left;
where.y = mouseLoc.v - (**portPixMap).bounds.top;
HIViewConvertPoint( &where, NULL, view );
so I guess the opposite is needed for you (I haven't tested whether it actually works):
void convert_point_to_screen(HIView view, HIPoint *where)
{
    Point point; // used for the QD calls
    PixMapHandle portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );
    HIViewConvertPoint( where, view, NULL ); // view-local to window-local coordinates
    point.h = where->x + (**portPixMap).bounds.left;
    point.v = where->y + (**portPixMap).bounds.top;
    QDLocalToGlobalPoint( GetWindowPort( GetControlOwner( view ) ), &point );
    // convert Point back to HIPoint
    where->x = point.h;
    where->y = point.v;
}
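If you can go through Cocoa instead, the modern AppKit route avoids QuickDraw entirely. A minimal sketch, assuming you have access to an NSView rather than only the raw CGContextRef:
import AppKit

// Convert a view-local point to global screen coordinates.
// Hypothetical helper; the view must be installed in a window.
func screenPoint(for localPoint: NSPoint, in view: NSView) -> NSPoint? {
    guard let window = view.window else { return nil }
    let windowPoint = view.convert(localPoint, to: nil)  // view -> window coordinates
    return window.convertPoint(toScreen: windowPoint)    // window -> screen (macOS 10.12+)
}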

Time Machine style Navigation

I've been doing some programming for iPhone lately and now I'm venturing into the iPad domain. The concept I want to realise relies on a navigation similar to Time Machine in OS X. In short, I have a number of views that can be panned and zoomed, as any normal view. However, the views are stacked upon each other using a third dimension (in this case, depth). The user then navigates to any view by, in this case, picking a letter, whereupon the app will fly through the views until it reaches the view of the selected letter.
My question is: can somebody give the complete final code for how to do this? Just kidding. :) What I need is a push in the right direction, since I'm unsure how to even start doing this, and whether it is at all possible using the frameworks available. Any tips are appreciated
Thanks!
Core Animation—or more specifically, the UIView animation model that's built on Core Animation—is your friend. You can make a Time Machine-like interface with your views by positioning them in a vertical line within their parent view (using their center properties), having the ones farther up that line be scaled slightly smaller than the ones below (“in front of”) them (using their transform properties, with the CGAffineTransformMakeScale function), and setting their layers’ z-index (get the layer using the view’s layer property, then set its zPosition) so that the ones farther up the line appear behind the others. Here's some sample code.
// animate an array of views into a stack at an offset position (0 has the first view in the stack at the front; higher values move "into" the stack)
// took the shortcut here of not setting the views' layers' z-indices; this will work if the backmost views are added first, but otherwise you'll need to set the zPosition values before doing this
int offset = 0;
[UIView animateWithDuration:0.3 animations:^{
    CGFloat maxScale = 0.8;      // frontmost visible view will be at 80% scale
    CGFloat minScale = 0.2;      // farthest-back view will be at 20% scale
    CGFloat centerX = 160;       // horizontal center
    CGFloat frontCenterY = 280;  // vertical center of frontmost visible view
    CGFloat backCenterY = 80;    // vertical center of farthest-back view
    for (int i = 0; i < [viewStack count]; i++)
    {
        float distance = (float)(i - offset) / [viewStack count];
        UIView *v = [viewStack objectAtIndex:i];
        v.transform = CGAffineTransformMakeScale(maxScale + (minScale - maxScale) * distance, maxScale + (minScale - maxScale) * distance);
        v.alpha = (i - offset >= 0) ? (1 - distance) : 0; // views that have slid off the front get no opacity; views still visible fade as their distance increases
        v.center = CGPointMake(centerX, frontCenterY + (backCenterY - frontCenterY) * distance);
    }
}];
And here's what it looks like, with a couple of randomly-colored views:
Do you mean something like this on the right?
If yes, it should be possible. You would have to arrange the views like in the image and animate them going forward and backward. As far as I know, there aren't any frameworks for this.
It's called Cover Flow and is also used in iTunes to view artwork/albums. Apple appears to have bought the technology from a third party and also to have patented it. However, if you google for "ios cover flow" you will get plenty of hits and code to point you in the right direction.
I have not looked, but I would think it may be in the iOS library; I do not know for sure.
