Rotate CIImage in Swift 2 using CGAffineTransformMakeRotation

I'm trying to rotate a CIImage in Swift 2 using
let rotatedImage = someCIImage.imageByApplyingTransform(CGAffineTransformMakeRotation(CGFloat(M_PI / 2.0)))
When I look at the size of the resulting rectangle, it has been rotated (it was 1000x500 and is now 500x1000). However, the calculations I do subsequently (converting to a bitmap and accessing individual pixels) indicate otherwise. Am I right that the above transformation rotates around the center of the image, i.e. in the above example around 500/250?

That transform rotates around the image's origin. This version sets the pivot point to the centre:
var tx = CGAffineTransformMakeTranslation(
    image.extent.width / 2,
    image.extent.height / 2)
tx = CGAffineTransformRotate(
    tx,
    CGFloat(M_PI_2))
tx = CGAffineTransformTranslate(
    tx,
    -image.extent.width / 2,
    -image.extent.height / 2)
let transformImage = CIFilter(
    name: "CIAffineTransform",
    withInputParameters: [
        kCIInputImageKey: image,
        kCIInputTransformKey: NSValue(CGAffineTransform: tx)])!.outputImage!
Simon

In Swift 5 the code gets nicer. This is a CIImage extension method that rotates the image around its center:
func rotate(_ angle: CGFloat) -> CIImage {
    let transform = CGAffineTransform(translationX: extent.midX, y: extent.midY)
        .rotated(by: angle)
        .translatedBy(x: -extent.midX, y: -extent.midY)
    return applyingFilter("CIAffineTransform", parameters: [kCIInputTransformKey: transform])
}
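The translate-rotate-translate composition used above is language-agnostic. As a quick numeric sanity check, here is the same math in plain JavaScript (names are illustrative): the composed transform leaves the center fixed and moves the corners, using the 1000x500 image from the question:

```javascript
// Rotation about (cx, cy): shift the pivot to the origin, rotate, shift back.
// This is exactly translate(cx, cy) * rotate(angle) * translate(-cx, -cy).
function rotateAbout(p, cx, cy, angle) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  const x = p.x - cx, y = p.y - cy;
  return {
    x: cx + x * cos - y * sin,
    y: cy + x * sin + y * cos,
  };
}

const cx = 500, cy = 250;  // center of a 1000x500 image
const a = Math.PI / 2;

console.log(rotateAbout({ x: 500, y: 250 }, cx, cy, a)); // center stays at (500, 250)
console.log(rotateAbout({ x: 0, y: 0 }, cx, cy, a));     // corner moves to (750, -250)
```

Note that the corner lands at a negative y, which is why Core Image reports a shifted extent after this kind of rotation.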

Related

How to rotate UIBezierPath around center of its own bounds?

Let's say we have a UIBezierPath... the bounds of which are perfectly square... like this:
func getExponentPath(rotate180: Bool) -> UIBezierPath {
    // Establish a unit of measure (grid) based on this containing view's bounds
    // (not to be confused with this bezier path's bounds).
    let G = bounds.width / 5
    let exponentPath = UIBezierPath()
    let startPoint = CGPoint(x: 3.8 * G, y: 1.2 * G)
    exponentPath.move(to: startPoint)
    exponentPath.addLine(to: CGPoint(x: 5 * G, y: 1.2 * G))
    exponentPath.addLine(to: CGPoint(x: 4.4 * G, y: 0.2 * G))
    exponentPath.addLine(to: CGPoint(x: 5 * G, y: 0.2 * G))
    exponentPath.addLine(to: CGPoint(x: 5 * G, y: 0))
    exponentPath.addLine(to: CGPoint(x: 3.8 * G, y: 0))
    exponentPath.addLine(to: CGPoint(x: 3.8 * G, y: 0.2 * G))
    exponentPath.addLine(to: CGPoint(x: 4.4 * G, y: 0.2 * G))
    exponentPath.addLine(to: startPoint)
    exponentPath.close()
    // This does not work:
    // if rotate180 { exponentPath.apply(CGAffineTransform(rotationAngle: CGFloat.pi)) }
    return exponentPath
}
If rotated, this bezier path still needs to occupy the exact same area within its containing view.
I can only presume the rotation does not work because the center of rotation isn't what I intend, although I get the same (wrong) result even when rotating by 0.
So how can the path be rotated around its own center point?
It seems like there should be a simple linear algebra matrix multiplication type thingy that could be applied to the set of points. =T
extension UIBezierPath {
    func rotateAroundCenter(angle: CGFloat) {
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        var transform = CGAffineTransform.identity
        transform = transform.translatedBy(x: center.x, y: center.y)
        transform = transform.rotated(by: angle)
        transform = transform.translatedBy(x: -center.x, y: -center.y)
        apply(transform)
    }
}
I don't think you need the rotation. To draw the same shape upside down, just flip it:
exponentPath.apply(CGAffineTransform(scaleX: 1, y: -1))
exponentPath.apply(CGAffineTransform(translationX: 0, y: G))
So in case anyone else is trying to rotate a UIBezierPath around the center of its own bounding rectangle... this is the actual working solution, arrived at with help from the previous answers/comments:
func getExponentPath(rotationAngle: CGFloat) -> UIBezierPath {
    // ...
    let xTranslation = -(bounds.width - exponentPath.bounds.width / 2)
    let yTranslation = -exponentPath.bounds.height / 2
    exponentPath.apply(CGAffineTransform(translationX: xTranslation, y: yTranslation))
    exponentPath.apply(CGAffineTransform(rotationAngle: rotationAngle))
    exponentPath.apply(CGAffineTransform(translationX: -xTranslation, y: -yTranslation))
    // ...
}
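A note on why the translations here end up negated: apply() transforms every point of the path in place, so the sequence translate(t), rotate, translate(-t) pins the point at -t, not at t. A small plain-JavaScript model of that point-at-a-time sequence (illustrative, not the UIKit API) makes the fixed point visible:

```javascript
// Model of: path.apply(translate(tx, ty)); path.apply(rotate(a)); path.apply(translate(-tx, -ty))
function applySequence(p, tx, ty, angle) {
  // 1. translate by (tx, ty)
  let x = p.x + tx, y = p.y + ty;
  // 2. rotate about the origin
  const cos = Math.cos(angle), sin = Math.sin(angle);
  [x, y] = [x * cos - y * sin, x * sin + y * cos];
  // 3. translate back by (-tx, -ty)
  return { x: x - tx, y: y - ty };
}

// With tx = ty = -10, the fixed point of the whole sequence is (+10, +10),
// i.e. the path rotates around the point you translated *to* the origin.
console.log(applySequence({ x: 10, y: 10 }, -10, -10, Math.PI)); // ~(10, 10)
```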

SceneKit shows only partly a large rotated SCNPlane

I'm trying to create a large SCNPlane to cover the whole screen. In the test code below, a red box (size 1x1x1) sits in the middle of a blue plane (size 200x200). Both are at the central point (0, 0, 0), and the camera is only +5 from that point.
When the plane faces the camera (at a large angle), it works well (figure 1), and both the left and right sides of the plane cover the whole left and right sides of the screen. However, when I rotate the plane to a small angle (relative to the camera), only a small part is shown. In figure 2, the left side of the plane comes closer to the camera. That left side should be wide enough (a side of 100) to cover the whole left side of the screen, but it is not. Increasing the size of the plane tenfold (to 2000) did not help.
Any idea about the problem and solution? Thanks
override func viewDidLoad() {
    super.viewDidLoad()
    let scnView = self.view as! SCNView
    scnView.backgroundColor = UIColor.darkGray
    scnView.autoenablesDefaultLighting = true
    scnView.allowsCameraControl = true
    scnView.scene = SCNScene()

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scnView.scene?.rootNode.addChildNode(cameraNode)
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)

    let theBox = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
    theBox.firstMaterial?.diffuse.contents = UIColor.red
    let theBoxNode = SCNNode(geometry: theBox)
    theBoxNode.position = SCNVector3(0, 0, 0)
    scnView.scene?.rootNode.addChildNode(theBoxNode)

    let plane = SCNPlane(width: 200, height: 200)
    plane.firstMaterial?.diffuse.contents = UIColor.blue
    let planeNode = SCNNode(geometry: plane)
    scnView.scene?.rootNode.addChildNode(planeNode)
}
You might want to check your camera's zNear property to ensure that the plane isn't being clipped. You can find an explanation of clipping planes here.

How to convert world rotation to screen rotation?

I need to convert the position and rotation of a 3D object to a screen position and rotation. I can convert the position easily, but not the rotation. I've attempted to convert the rotation of the camera, but it does not match up.
Attached is an example plunkr & conversion code.
The white facebook button should line up with the red plane.
https://plnkr.co/edit/0MOKrc1lc2Bqw1MMZnZV?p=preview
function toScreenPosition(position, camera, width, height) {
  var p = new THREE.Vector3(position.x, position.y, position.z);
  var vector = p.project(camera);
  vector.x = (vector.x + 1) / 2 * width;
  vector.y = -(vector.y - 1) / 2 * height;
  return vector;
}

function updateScreenElements() {
  var btn = document.querySelector('#btn-share');
  var pos = plane.getWorldPosition();
  var vec = toScreenPosition(pos, camera, canvas.width, canvas.height);
  var translate = "translate3d(" + vec.x + "px," + vec.y + "px," + vec.z + "px)";
  var euler = camera.getWorldRotation();
  var rotate = "rotateX(" + euler.x + "rad)" +
      " rotateY(" + euler.y + "rad)" +
      " rotateZ(" + euler.z + "rad)";
  btn.style.transform = translate + " " + rotate;
}
... And a screenshot of the issue.
I would highly recommend not trying to match this to the camera space, but instead applying the image as a texture map to the red plane, and then using a raycast to see whether a click lands on the plane. You'll save yourself the headache of translating and rotating, and of hiding the symbol when it's behind the cube, etc.
Check out the three.js examples to see how to use the Raycaster. It's a lot more flexible and easier than trying to do rotations and matching. Then, whatever the 'btn' onclick function is, you just call it when you detect a raycast collision with the plane.
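A minimal sketch of the click handling this approach needs (element and handler names are illustrative). The only math involved is converting the click position to normalized device coordinates before handing it to raycaster.setFromCamera:

```javascript
// Convert a click position to normalized device coordinates (NDC),
// where both axes run from -1 to +1 and +y points up.
function clickToNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}

// Usage with three.js (assumes `camera`, `plane`, and `renderer` exist):
// const raycaster = new THREE.Raycaster();
// renderer.domElement.addEventListener('click', (e) => {
//   const ndc = clickToNDC(e.clientX, e.clientY, window.innerWidth, window.innerHeight);
//   raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
//   if (raycaster.intersectObject(plane).length > 0) {
//     // the click landed on the red plane: trigger the share button's handler
//   }
// });

console.log(clickToNDC(400, 300, 800, 600)); // center of an 800x600 canvas -> { x: 0, y: 0 }
```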

Projecting a point from world to screen. SO solutions give bad coordinates

I'm trying to place an HTML div element over a three.js object. Most stackoverflow solutions offer a pattern similar to this:
// var camera = ...
function toScreenXY(pos, canvas) {
  var width = canvas.width, height = canvas.height;
  var p = new THREE.Vector3(pos.x, pos.y, pos.z);
  var vector = p.project(camera);
  vector.x = (vector.x + 1) / 2 * width;
  vector.y = -(vector.y - 1) / 2 * height;
  return vector;
}
I've tried many variations on this idea, and all of them agree on giving me this result:
console.log(routeStart.position); // target mesh
console.log(toScreenXY(routeStart.position));
// output:
//
// mesh pos: T…E.Vector3 {x: -200, y: 200, z: -100}
// screen pos: T…E.Vector3 {x: -985.2267639636993, y: -1444.7267503738403, z: 0.9801980328559876}
The actual screen coordinates for this camera position and this mesh position are somewhere around x: 470, y: 80 - I determined them by hardcoding my div position.
-985, -1444 are not even close to the actual screen coords :)
Please don't offer links to existing solutions if they follow the same logic as the snippet I provided. I would be especially thankful if someone could explain why I get these negative values, even though this approach seems to work for everyone else.
Here are a couple of examples using the same principle:
Three.js: converting 3d position to 2d screen position
Converting World coordinates to Screen coordinates in Three.js using Projection
I've figured out the problem myself! It turns out you can't project things before calling renderer.render(). It's very confusing that it gives you back weird negative coords.
Hope other people will find this answer useful.
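For reference, the remap in the snippet is just a linear map from NDC (where x and y run from -1 to +1) to pixels; isolated from three.js it looks like this:

```javascript
// Map a projected NDC coordinate (-1..1, +y up) to canvas pixels (+y down).
function ndcToPixels(ndc, width, height) {
  return {
    x: (ndc.x + 1) / 2 * width,
    y: -(ndc.y - 1) / 2 * height,
  };
}

console.log(ndcToPixels({ x: 0, y: 0 }, 800, 600));  // { x: 400, y: 300 } (center)
console.log(ndcToPixels({ x: -1, y: 1 }, 800, 600)); // { x: 0, y: 0 } (top-left)
```

NDC values outside the -1..1 range land far off-canvas, which is consistent with the large negative coordinates in the question when project() runs against camera matrices that haven't been updated by a render yet.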

Rotate image on its own center in KineticJS

I'm trying to rotate an image added to my canvas using KineticJS.
I've got it almost working.
I know I need to set the offset to 'move' the rotation point, and that part is working.
But the image is also moving to the location of the offset.
After some rotating I can drag my image to another location on the canvas and continue rotating around its own center.
I don't want to rotate the whole canvas, because I have multiple images on a layer.
The relevant code:
function rotateLayer() {
  // Rotate bird image
  var rotation = 15;
  // Set rotation point:
  imageDict[1].setOffsetX(imageDict[1].width() / 2);
  imageDict[1].setOffsetY(imageDict[1].height() / 2);
  // rotation in degrees
  imageDict[1].rotate(rotation);
  imageDict[1].getLayer().draw();
}
A working demo is on jsfiddle: http://jsfiddle.net/kp61vcfg/1/
So in short I want the rotation but not the movement.
How do you want to rotate without movement?
KineticJS rotates objects around their "start point". For example, for Kinetic.Rect the start point is {0, 0}, the top-left corner. You can move that "start point" to any position with the offset params.
After a lot of trial and error I found the solution.
The trick is to set the offset during load to half the width and height, so the rotation point is the middle of the image, AND not to call image.cache():
function initAddImage(imgId, imgwidth, imgheight) {
  var imageObj = new Image();
  imageObj.src = document.getElementById(imgId).src;
  imageObj.onload = function () {
    var image = new Kinetic.Image({
      image: imageObj,
      draggable: true,
      shadowColor: '#787878',
      shadowOffsetX: 2,
      shadowOffsetY: 2,
      width: imgwidth,
      height: imgheight,
      x: 150, // half width of container
      y: 150, // half height of container
      offset: { x: imgwidth / 2, y: imgheight / 2 }, // Rotation point
      imgId: imgId
    });
    layer.add(image);
    //image.cache();
    layer.draw();
    imageDict[currentLayerHandle] = image;
    currentLayerHandle++;
  };
}
I've updated my demo to a working version:
http://jsfiddle.net/kp61vcfg/2/
