create a UIBezierPath of 3 pixels width on a retina @2x display - UIImage

I'm trying to create a line of 3 pixels width on a retina @2x display. The simple idea would be to create a 1.5-point-wide line:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(20, 20), NO, 0.0f);
CGContextRef aRef = UIGraphicsGetCurrentContext();
CGContextSetAllowsAntialiasing(aRef, NO);
CGContextSetShouldAntialias(aRef, NO);
UIBezierPath* bezierPath = UIBezierPath.bezierPath;
[bezierPath moveToPoint: CGPointMake(10, 0)];
[bezierPath addLineToPoint: CGPointMake(10, 10)];
bezierPath.lineWidth = 1.5;
[bezierPath stroke];
UIImage * myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But in the end, I get a line 4 pixels wide on screen.
The thing is, I'm using an iPad 3 (so retina @2x), and when I use a UIBarButtonItem with the predefined system item UIBarButtonSystemItemAdd, the two strokes of the cross are 3 pixels wide on my screen.

I suspect it's because your path is 1.5 points (3 pixels) wide and centered on a pixel boundary. Since the stroke is drawn half on one side of the path and half on the other, 1.5 pixels are drawn on each side of the line.
Treating the path as if it ran along x = 0, that means your stroke reaches from (in px):
left: (-1.5, 0) => (-1.5, 10)
center: (0, 0) => (0, 10)
right: (1.5, 0) => (1.5, 10)
so each side partially covers 2 pixels, and the stroke renders 4 pixels wide.
Instead, you probably want to shift the line by half a pixel, e.g. from (0.5, 0) => (0.5, 10), which aligns the stroke's edges to whole pixels on screen:
left: (-1, 0) => (-1, 10)
center: (0.5, 0) => (0.5, 10)
right: (2, 0) => (2, 10)

It worked with:
UIBezierPath* rectanglePath = [UIBezierPath bezierPathWithRect: CGRectMake(13, 21, 18, 1.5)];
(a 1.5-point-tall rect at an integral origin: at @2x its edges land on pixel boundaries, so when filled it covers exactly 3 whole rows of pixels)

Related

SceneKit shows only part of a large rotated SCNPlane

I'm trying to create a large SCNPlane to cover the whole screen. The test code is below: a red box (size 1×1×1) sits in the middle of a blue plane (size 200×200). Both are at the origin (0, 0, 0), and the camera is only 5 units away from that point.
When the plane node faces the camera at a large angle, it works well (figure 1), and both the left and right sides of the plane cover the left and right sides of the screen. However, when I rotate the plane to a small angle with the camera, only a small part of it is shown. In figure 2, the left side of the plane comes closer to the camera. That left side should be wide enough (a side of 100) to cover the whole left side of the screen, but it is not. Increasing the size of the plane tenfold (to 2000) did not help.
Any idea what the problem is and how to solve it? Thanks.
override func viewDidLoad() {
    super.viewDidLoad()
    let scnView = self.view as! SCNView
    scnView.backgroundColor = UIColor.darkGray
    scnView.autoenablesDefaultLighting = true
    scnView.allowsCameraControl = true
    scnView.scene = SCNScene()

    // Camera 5 units in front of the origin
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scnView.scene?.rootNode.addChildNode(cameraNode)
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)

    // Red 1x1x1 box at the origin
    let theBox = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
    theBox.firstMaterial?.diffuse.contents = UIColor.red
    let theBoxNode = SCNNode(geometry: theBox)
    theBoxNode.position = SCNVector3(0, 0, 0)
    scnView.scene?.rootNode.addChildNode(theBoxNode)

    // Blue 200x200 plane, also at the origin
    let plane = SCNPlane(width: 200, height: 200)
    plane.firstMaterial?.diffuse.contents = UIColor.blue
    let planeNode = SCNNode(geometry: plane)
    scnView.scene?.rootNode.addChildNode(planeNode)
}
You might want to check your camera's zNear property to ensure that the plane isn't being clipped: anything nearer to the camera than zNear is cut away, and when the huge plane rotates, its near edge can swing inside that distance. You can find an explanation of clipping planes here.
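A minimal sketch of that fix, using the question's cameraNode (0.01 is an arbitrary small distance, not a tuned value):
// Anything nearer to the camera than zNear is clipped away; the default is 1.0.
cameraNode.camera?.zNear = 0.01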

Transform matrices with Matrix.CreatePerspectiveOffCenter in XNA: vanishing point to center

I'm trying to get the following perspective view:
In essence I'm doing a 2D game with some 3D graphics, so I switched from Matrix.CreateOrthographicOffCenter to Matrix.CreatePerspectiveOffCenter.
I have drawn a primitive, and by decreasing its z value it moves further away, but it always vanishes toward (0, 0) (the top left), while the vanishing point should be the center.
My transform settings now look like this ((640, 360) is the center of the screen):
// The frustum spans 0..width and 0..height on the near plane
basicEffect.Projection = Matrix.CreatePerspectiveOffCenter(0, graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height, 0, 1, 10);
// Camera at the screen center, one unit back, looking straight ahead
basicEffect.View = Matrix.Identity * Matrix.CreateLookAt(new Vector3(640, 360, 1), new Vector3(640, 360, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);
I can't get the vanishing point to move to the center of the screen. I managed to (sort of) do it with CreatePerspective, but I want to keep using CreatePerspectiveOffCenter because it lets me translate normal pixel positions easily into 3D space. What am I missing?
In the end I used the following. If you're looking for a way to create a 3D view with a '2D feel', this might come in handy. With these settings, a z value of 0 exactly matches the screen's width and height, and the vanishing point is in the center of the screen. (The top-left vanishing point above comes from the frustum itself: CreatePerspectiveOffCenter(0, w, h, 0, ...) spans 0..w and 0..h on the near plane, so the camera's forward axis passes through the (0, 0) corner; a frustum symmetric around zero would also center it.)
// 90-degree vertical FOV, square aspect ratio, near 0.001, far 1000
basicEffect.Projection = Matrix.CreatePerspectiveFieldOfView((float)Math.PI / 2f, 1, 1f / 1000, 1000f);
// Camera one unit back on the z-axis, looking at the origin
basicEffect.View = Matrix.CreateLookAt(new Vector3(0, 0, 1f), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);

Why does a CATransform3DMakeRotation not take (0,0,1) as rotating around the Z-axis?

If I apply a transform to an NSView in a Cocoa app:
self.view.layer.transform = CATransform3DMakeRotation(30 * M_PI / 180, 0, 0, 1);
I see the square rotated not around the Z-axis, but as if that axis vector pointed downward and outward. I need to make it
self.view.layer.transform = CATransform3DMakeRotation(30 * M_PI / 180, 1, 1, 1);
to make it rotate around the Z-axis, as shown in the picture on this question.
However, if I set up an NSTimer and update the transform in the timer's update method, then using
-(void) update:(id) info {
    self.view.layer.transform =
        CATransform3DMakeRotation(self.degree * M_PI / 180, 0, 0, 1);
    self.degree += 10;
}
(this time using (0, 0, 1)) works: it rotates around the Z-axis. I wonder why (1, 1, 1) is needed inside applicationDidFinishLaunching, but (0, 0, 1) can be used in the update method?
It turns out the view needs to be added to window.contentView first:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    self.view = [[NSView alloc] initWithFrame:NSMakeRect(100, 100, 200, 200)];
    [self.window.contentView addSubview:self.view];
    self.view.wantsLayer = YES;
    self.view.layer = [CALayer layer];
    self.view.layer.backgroundColor = [[NSColor yellowColor] CGColor];
    self.view.layer.anchorPoint = CGPointMake(0.5, 0.5);
    self.view.layer.transform = CATransform3DMakeRotation(30 * M_PI / 180, 0, 0, 1);
}
Also, the line self.view.layer = [CALayer layer]; needs to come after the line self.view.wantsLayer = YES;. Why this order is required I am not sure, but it works as it is supposed to. I can't find docs that mention the requirement for this order yet, but the code above works. I will update the answer when I find more info about why the order is needed.
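For reference, a Swift sketch of the same working setup (assuming an app delegate with window and view properties; note the same order: add to the content view, set wantsLayer, then replace the layer):
func applicationDidFinishLaunching(_ aNotification: Notification) {
    view = NSView(frame: NSRect(x: 100, y: 100, width: 200, height: 200))
    window.contentView?.addSubview(view)
    view.wantsLayer = true
    view.layer = CALayer()
    view.layer?.backgroundColor = NSColor.yellow.cgColor
    view.layer?.anchorPoint = CGPoint(x: 0.5, y: 0.5)
    view.layer?.transform = CATransform3DMakeRotation(30 * .pi / 180, 0, 0, 1)
}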

what does bounds really mean in CALayer?

I was confused about the bounds property of CALayer:
- (void)viewDidLoad {
    [super viewDidLoad];

    CALayer *sublayer = [CALayer layer];
    sublayer.backgroundColor = [UIColor blueColor].CGColor;
    sublayer.frame = CGRectMake(18, 18, 154, 154);
    [self.view.layer addSublayer:sublayer];

    CALayer *sublayer2 = [CALayer layer];
    sublayer2.backgroundColor = [UIColor redColor].CGColor;
    sublayer2.frame = CGRectMake(20, 20, 150, 150);
    sublayer2.bounds = CGRectMake(0, 0, 50, 50);
    sublayer2.zPosition = 10;
    [self.view.layer addSublayer:sublayer2];
}
sublayer2 draws a small 50×50 rectangle in the center of the first sublayer's rectangle, but it draws a 150×150 rectangle if this line is commented out:
sublayer2.bounds = CGRectMake(0, 0, 50, 50);
After reading sch's guide, I think the behavior is due to the following reasons:
1. As mentioned in the guide:
The bounds rectangle is expressed in the view's own local coordinate system. The default origin of this rectangle is (0, 0) and its size matches the size of the frame rectangle.
(and this is what bounds really means!)
...
When you set the size of the bounds property, the size value in the frame property changes to match the new size of the bounds rectangle.
...
So when
sublayer2.bounds = CGRectMake(0, 0, 50, 50);
executes, the frame's size changes to 50×50 automatically. The (0, 0) in CGRectMake(0, 0, ...) could be any value here, because it has no visible effect in this example.
2. Because we didn't change anchorPoint, it is (0.5, 0.5) by default, and the corresponding position is (95, 95) (the center of the original 150×150 frame). So finally it draws a 50×50 rectangle whose center is (95, 95).
Please correct me if I am wrong.
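A minimal Swift sketch (assuming UIKit) of the frame/bounds/position arithmetic described above:
import UIKit

let layer = CALayer()
layer.frame = CGRect(x: 20, y: 20, width: 150, height: 150)
print(layer.position)   // (95.0, 95.0): anchorPoint (0.5, 0.5) maps to the frame's center
layer.bounds = CGRect(x: 0, y: 0, width: 50, height: 50)
print(layer.frame)      // (70.0, 70.0, 50.0, 50.0): position is still (95, 95)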

Cropping CIImage with CICrop isn't working properly

I'm having trouble cropping an image; for me, the CICrop filter is not working properly. If my CIVector x and y (the origin) are 0, everything works fine (the image is cropped from the bottom-left corner by my rectangle's width and height), but if the CIVector origin isn't 0, empty space appears in my cropped image (because the CICrop filter crops from the bottom-left corner no matter what the origin is).
I'm cropping the CIImage with a rectangle; source:
CIVector *cropRect = [CIVector vectorWithX:150 Y:150 Z:300 W:300]; // x, y, width, height
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:myCIImage forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];
CIImage *croppedImage = [cropFilter valueForKey:@"outputImage"];
Output image with CIVector X 150 and Y 150 (I drew the border for clarity):
Output image with CIVector X 0 and Y 0:
Original image:
What am I doing wrong? Or is it supposed to do this?
Are you sure the output image is the size you are expecting? How are you drawing the output image?
The CICrop filter does not reduce the size of the original image, it just blanks out the content you don't want.
To get the result you want, you probably just need to do this:
[image drawAtPoint:NSZeroPoint fromRect:NSMakeRect(150, 150, 300, 300) operation:NSCompositeSourceOver fraction:1.0];
If you want an actual CIImage as output rather than just drawing it, do this:
CIImage* croppedImage = [image imageByCroppingToRect:CGRectMake(150, 150, 300, 300)];
// you also need to translate the origin
CIFilter* transform = [CIFilter filterWithName:@"CIAffineTransform"];
NSAffineTransform* affineTransform = [NSAffineTransform transform];
[affineTransform translateXBy:-150.0 yBy:-150.0];
[transform setValue:affineTransform forKey:@"inputTransform"];
[transform setValue:croppedImage forKey:@"inputImage"];
CIImage* transformedImage = [transform valueForKey:@"outputImage"];
It's important to note that the coordinate system of a view has its origin at the top left, whereas CIImage's is at the bottom left. This will drive you crazy if you don't catch it when doing these transforms! This other post describes a one-directional conversion: Changing CGrect value to user coordinate system.
This is how CICrop works: it crops to the rect you specified, and the un-cropped area becomes transparent. If you print the extent you will see that it still uses the original image's coordinate space: the origin stays at (150, 150) rather than moving to (0, 0).
As suggested, you can do a translation. This is now just one line in Swift 5:
let newImage = myCIImage.transformed(by: CGAffineTransform(translationX: -150, y: -150))
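Putting the crop and the translation together (a sketch assuming the question's myCIImage and rect):
let cropped = myCIImage.cropped(to: CGRect(x: 150, y: 150, width: 300, height: 300))
let shifted = cropped.transformed(by: CGAffineTransform(translationX: -150, y: -150))
// shifted.extent is now (0, 0, 300, 300): same pixels, origin moved back to zero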
