Assume I have a 2-D matrix filled with values, which represents a plane. Now I want to rotate the plane in 3-D, tilting it out of the image plane in the "z-direction". For a better understanding, see the following image:
I wondered if this is possible by a simple affine matrix, thus I created the following simple script:
%Create a random value matrix
A = rand*ones(200,200);
%Make a box in the image
A(50:200-50,50:200-50) = 1;
Now I can apply transformations in 2-D simply with an affine matrix, like this:
R = [1 0 0; .5 1 0; 0 0 1];
tform = affine2d(R);
transformed = imwarp(A,tform);
However, this will not produce the desired output above, and I am not quite sure how to create the 2-D affine matrix to create such behavior.
I guess that a 3-D affine matrix can do the trick. However, if I define a 3-D affine matrix, I can no longer work with the 2-D representation of the image, since MATLAB will throw the error:
The number of dimensions of the input image A must be 3 when the
specified geometric transformation is 3-D.
So how can I code the desired output with an affine matrix?
The answer from m3tho correctly addresses how you would apply the transformation you want: using fitgeotrans with a 'projective' transform, thus requiring that you specify 4 control points (i.e. 4 pairs of corresponding points in the input and output image). You can then apply this transform using imwarp.
The issue, then, is how you select these pairs of points to create your desired transformation, which in this case is to create a perspective projection. As shown below, a perspective projection takes into account that a viewing position (i.e. "camera") will have a given view angle defining a conic field of view. The scene is rendered by taking all 3-D points within this cone and projecting them onto the viewing plane, which is the plane located at the camera target which is perpendicular to the line joining the camera and its target.
Let's first assume that your image is lying in the viewing plane and that the corners are described by a normalized reference frame such that they span [-1 1] in each direction. We need to first select the degree of perspective we want by choosing a view angle and then computing the distance between the camera and the viewing plane. A view angle of around 45 degrees can mimic the sense of perspective of normal human sight, so using the corners of the viewing plane to define the edge of the conic field of view, we can compute the camera distance as follows:
camDist = sqrt(2)./tand(viewAngle./2);
Now we can use this to generate a set of control points for the transformation. We first apply a 3-D rotation to the corner points of the viewing plane, rotating around the y axis by an amount theta. This rotates them out of plane, so we now project the corner points back onto the viewing plane by defining a line from the camera through each rotated corner point and finding the point where it intersects the plane. I'm going to spare you the mathematical derivations (you can implement them yourself from the formulas in the above links), but in this case everything simplifies down to the following set of calculations:
term1 = camDist.*cosd(theta);
term2 = camDist-sind(theta);
term3 = camDist+sind(theta);
outP = [-term1./term2 camDist./term2; ...
term1./term3 camDist./term3; ...
term1./term3 -camDist./term3; ...
-term1./term2 -camDist./term2];
And outP now contains your normalized set of control points in the output image. Given an image of size s, we can create a set of input and output control points as follows:
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
And you can apply the transformation like so:
tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
outputView = imref2d(s);
newImage = imwarp(oldImage, tform, 'OutputView', outputView);
The only issue you may come across is that a rotation of 90 degrees (i.e. looking end-on at the image plane) would create a set of collinear points that would cause fitgeotrans to error out. In such a case, you would technically just want a blank image, because you can't see a 2-D object when looking at it edge-on.
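Before the full animation code, here is a minimal NumPy cross-check (mine, not part of the answer's MATLAB code) that reproduces the same outP values from the pinhole model described above; the normalized [-1 1] corner frame and the rotation about the y axis are the ones assumed in the derivation:

import numpy as np

def projected_corners(theta_deg, view_angle_deg=45.0):
    """Corners of the [-1,1]^2 plane, rotated by theta about the y axis,
    then projected back onto the viewing plane through the camera."""
    cam_dist = np.sqrt(2) / np.tan(np.radians(view_angle_deg) / 2)
    t = np.radians(theta_deg)
    corners = np.array([[-1, 1], [1, 1], [1, -1], [-1, -1]], dtype=float)
    x, y = corners[:, 0], corners[:, 1]
    # out-of-plane rotation: x' = x cos(t), depth z' = -x sin(t)
    xr, zr = x * np.cos(t), -x * np.sin(t)
    # perspective projection from the camera at distance cam_dist
    scale = cam_dist / (cam_dist - zr)
    return np.column_stack([xr * scale, y * scale])  # matches outP row by row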
Here's some code illustrating the above transformations by animating a spinning image:
img = imread('peppers.png');
s = size(img);
outputView = imref2d(s);
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
viewAngle = 45;
camDist = sqrt(2)./tand(viewAngle./2);
for theta = linspace(0, 360, 360)
    term1 = camDist.*cosd(theta);
    term2 = camDist-sind(theta);
    term3 = camDist+sind(theta);
    outP = [-term1./term2 camDist./term2; ...
            term1./term3 camDist./term3; ...
            term1./term3 -camDist./term3; ...
            -term1./term2 -camDist./term2];
    scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
    tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
    spinImage = imwarp(img, tform, 'OutputView', outputView);
    if (theta == 0)
        hImage = image(spinImage);
        set(gca, 'Visible', 'off');
    else
        set(hImage, 'CData', spinImage);
    end
    drawnow;
end
And here's the animation:
You can perform a projective transformation, which can be estimated from the positions of the corners in the first and second image.
originalP='peppers.png';
original = imread(originalP);
imshow(original);
s = size(original);
matchedPoints1 = [1 1;1 s(1);s(2) s(1);s(2) 1];
matchedPoints2 = [1 1;1 s(1);s(2) s(1)-100;s(2) 100];
transformType = 'projective';
tform = fitgeotrans(matchedPoints1,matchedPoints2,transformType);
outputView = imref2d(size(original));
Ir = imwarp(original,tform,'OutputView',outputView);
figure; imshow(Ir);
This is the result of the code above:
Original image:
Transformed image:
I've created a 3-D scene with Blender and computed the projection matrix P (I also have the translation matrix T and the rotation matrix R).
As mentioned in the title, I am trying to calculate the z-value, or depth, of a vertex (x,y,z) as seen from my given camera C, using these matrices.
Example:
Vertex v = [1.4, 1, 2.3] and position of camera c = [0, -0.7, 10]. The result should be approximately 10 - 2.3 = 7.7. Thank you for your help!
Usually the rotation matrix is applied before the translation. So
transform = R * T
R is the rotation matrix (usually 4 rows and 4 columns)
T is the translation matrix (4 rows and 4 columns)
* is the matrix multiplication, which applies first T and then R
Of course I'm assuming you already know how to perform matrix multiplication; I'm not providing any code because it is not clear if you need a Python snippet or you are using the exported model somewhere else.
After that you apply the final projection matrix (I'm assuming your projection matrix has already been multiplied by the view matrix):
final = P * transform
P is the projection matrix ( 4 rows and 4 columns)
transform is your previously obtained (4 rows and 4 columns) matrix
The final matrix is the one that will transform every vertex of your 3-D model; again you do a matrix multiplication (but in this case the second operand is a column vector whose 4th element is 1):
transformedVertex = final * Vec4(originalVertex,1)
transformedVertex is a column vector ( 4x1 matrix)
final is the final matrix (4x4)
a vertex has only 3 coordinates, so we append a 1 to make it (4x1)
* is still matrix multiplication
Once transformed, the vertex's Z value is the one that gets directly mapped into the Z buffer and hence into a depth value.
At this point there is one operation that is done "by convention": dividing Z by W to normalize it. Values outside the range [0..1] are then discarded (nearer than the near clip plane or farther than the far clip plane).
See also this question:
Why do I divide Z by W?
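Putting the pieces together, here is a small NumPy sketch of the pipeline just described (the function name and the row/column conventions are my own assumptions; Blender exports may need a transpose):

import numpy as np

def transform_vertex(P, R, T, vertex):
    transform = R @ T                 # first translate, then rotate, as above
    final = P @ transform             # then project
    v = np.append(np.asarray(vertex, dtype=float), 1.0)  # homogeneous coords
    clip = final @ v
    ndc_z = clip[2] / clip[3]         # the "divide Z by W" convention step
    return clip, ndc_z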
EDIT:
I may have misinterpreted your question; if you need the distance between the camera and a point, it is simply
from math import sqrt

def computeDistance(cam, pos):
    dx = cam[0] - pos[0]
    dy = cam[1] - pos[1]
    dz = cam[2] - pos[2]
    return sqrt(dx*dx + dy*dy + dz*dz)

For example:
cameraposition = (10, 0, 0)
vertexposition = (2, 0, 0)
computeDistance(cameraposition, vertexposition)   # returns 8.0
Thanks for your help, here is what I was looking for:
Data setup
R rotation matrix 4x4
T translation matrix 4x4
v any vertex as [x, y, z, 1], 4x1
Result
vec4 vector 4x1 (x,y,z,w)
vec4 = R * T * v
The vec4.z value is the result I was looking for. Thanks!
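For completeness, a small NumPy sketch of that recipe, using the numbers from the example above (R is left as the identity purely for illustration; the sign of the resulting z depends on your camera convention):

import numpy as np

R = np.eye(4)                          # rotation matrix (identity here)
T = np.eye(4)
T[:3, 3] = [0, 0.7, -10]               # translate by the negated camera position
v = np.array([1.4, 1.0, 2.3, 1.0])     # vertex in homogeneous coordinates

vec4 = R @ T @ v
print(vec4[2])                         # -7.7, i.e. a depth of 7.7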
I'm trying to code a general algorithm that can find a polygon from the area swept out by a circle (red line) that follows some known path (green line), and where the circle gets bigger as it moves further down the known path. Basically, can anyone point me down a direction to solve this, please? I can't seem to nail down which tangent points are part of the polygon for any point (and thus circle) on the path.
Any help is appreciated.
Well, the easiest is to approximate your path by small segments on which your path is linear, and your circle grows linearly.
Your segments and angles will likely be small, but for the sake of the example, let's take bigger (and more obvious) angles.
Going through the geometry
Good lines for the edges of your polygon are the common tangents to consecutive circles. Note that these aren't always close to the lines defined by the intersections between the circles and the line orthogonal to the path, especially at higher growth speeds. See the figure below, where (AB) is the path; we want the (OE) and (OF) lines, but not the (MN) one, for example:
The first step is to identify the point O. It is the only point that defines a homothetic transformation between both circles, with a positive ratio.
Thus ratio = OB/OA = (radius C')/(radius C), and O = A + AB/(1-ratio) (note that ratio > 1 for a growing circle, so O lies behind A).
Now let u be the vector from O to A normalized, and v a vector orthogonal to u (let us take it in the direction from A to M).
Let us call a the vector from O to E normalized, and beta the angle EOA. Then, since (OE) and (AE) are perpendicular, sin(beta) = (radius C) / OA. We also have the scalar product a.u = cos(beta) and since the norm of a is 1, a = u * cos(beta) + v * sin(beta)
Then it comes easily that with b the vector from O to F normalized, b = u * cos(beta) - v * sin(beta)
Since beta is an angle less than 90° (otherwise the circle would grow so much faster than it advances that the second circle would completely contain the first), we know that cos(beta) > 0.
Pseudo-code-ish solution
For the first and last circles you could fit the polygon more closely; for the sake of simplicity, I'm just going to use the intersection between the lines I'm building and the tangent to the circle that is orthogonal to the first (or last) path segment, as illustrated in the first figure of this post.
Along the path, you can make your polygon arbitrarily close to the real swept area by making the segments smaller.
Also, I assume you have a function find_intersection that, given two parametric equations of two lines, returns the point of intersection between them. First of all, it makes it trivial to see if they are parallel (which they should never be), and it allows to easily represent vertical lines.
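For illustration, a possible NumPy version of such a helper (the signature and tolerance are my own; each line is given as a point O and a direction d):

import numpy as np

def find_intersection(O1, d1, O2, d2, eps=1e-12):
    """Intersect the lines O1 + t*d1 and O2 + u*d2 in 2-D.
    Returns None for (near-)parallel lines."""
    A = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]], dtype=float)
    if abs(np.linalg.det(A)) < eps:
        return None   # parallel lines; should not happen on a valid path
    t, _ = np.linalg.solve(A, np.asarray(O2, float) - np.asarray(O1, float))
    return np.asarray(O1, float) + t * np.asarray(d1, float)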
w = 1;                // width (radius) of the first circle
C = {x : 0, y : 0};   // first circle center
while( (new_C, new_w) = next_step )
{
    // the vector seg directs the segment
    seg = new_C - C;
    norm_seg = sqrt( seg.x * seg.x + seg.y * seg.y );
    // the vector ortho is orthogonal to the segment, with the same norm
    ortho = { x : -seg.y, y : seg.x };
    // apply the formulas we devised: fact = ratio - 1
    fact = new_w / w - 1;
    // homothety center O (note: measured from C, the smaller circle's center)
    O = C - seg / fact;
    sin_beta = w * fact / norm_seg;
    cos_beta = sqrt(1 - sin_beta * sin_beta);
    // here you know the two lines; parametric equations are O+t*a and O+t*b
    a = cos_beta * seg + sin_beta * ortho;
    b = cos_beta * seg - sin_beta * ortho;
    if( first iteration )
    {
        // initialize both "old lines" to a line perpendicular to the first
        // segment that passes through the opposite side of the circle
        old_a = ortho;
        old_b = -ortho;
        old_O = C - seg * (w / norm_seg);
    }
    P = find_intersection(old_O, old_a, O, a);
    // add P to polygon construction clockwise
    Q = find_intersection(old_O, old_b, O, b);
    // add Q to polygon construction clockwise
    old_a = a;
    old_b = b;
    old_O = O;
    w = new_w;
    C = new_C;
}
// Similarly, finish with line orthogonal to last direction, that is tangent to last circle
O = C + seg * (w / norm_seg);
a = ortho;
b = -ortho;
P = find_intersection(old_O, old_a, O, a);
// add P to polygon construction clockwise
Q = find_intersection(old_O, old_b, O, b);
// add Q to polygon construction clockwise
Let's suppose the centers are along the positive x-axis, and the lines in the envelope are y=mx and y=-mx for some m>0. The distance from (x,0) to y=mx is mx/sqrt(1+m^2). So, if the radius is increasing at a rate of m/sqrt(1+m^2) times the distance moved along the x-axis, the enveloping lines are y=mx and y=-mx.
Inverting this, if you put a circle of radius cx centered at (x,0), then c = m/sqrt(1+m^2), so
m = c/sqrt(1-c^2).
If c=1 then you get a vertical line, and if c>1 then every point in the plane is included in some circle.
This is how you can tell how much faster than sound a supersonic object is moving from the Mach angle of the envelope of the disturbed medium.
You can rotate this to nonhorizontal lines. It may help to use the angle formulation mu = arcsin(c), where mu is the angle between the envelope and the path, and the Mach number is 1/c.
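As a quick numeric illustration of these formulas (helper names are mine):

import math

def envelope_slope(c):
    # slope m of the enveloping lines y = +/- m*x, for 0 < c < 1
    return c / math.sqrt(1 - c * c)

def mach_angle_deg(c):
    # mu = arcsin(c); the Mach number is 1/c
    return math.degrees(math.asin(c))

print(envelope_slope(1 / math.sqrt(2)))   # 1.0 -> 45-degree envelope lines
print(mach_angle_deg(1 / math.sqrt(2)))   # 45.0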
In a three-dimensional Cartesian coordinate system, object A's pose can be expressed as xyzwpr (green arrow). And in object A's coordinate frame, object B's pose can likewise be expressed as xyzwpr (blue arrow).
Then can anyone write down the C# code for calculating the xyzwpr of object B relative to the original coordinate system (red arrow)?
Say A's pose is (30,50,70, -15,44,-80) and B's is (60,90,110, 33,150,-90).
And say the order of the rotation is yaw(z)-> pitch(x) -> roll(y)
--- EDIT ---
Can anyone validate below assumptions?
Assumption for xyz of point B.
The xyz of point B, the smaller airplane, can be calculated by adding the xyz of point A, the first airplane, to the xyz of B, and then applying the 3-D rotation given by A's wpr to A's xyz.
The order of doing this is;
1) translate the A point to the origin (subtract A, i.e. translate by -Ax,-Ay,-Az)
2) rotate about the origin (can use the 3x3 matrix R0 of A)
3) then translate back (add A, i.e. translate by +Ax,+Ay,+Az)
Assumption for wpr of point B
is simply the succession of the rotations of the two poses: Aw -> Ap -> Ar -> Bw -> Bp -> Br.
--- SOLVED. A few references with detailed explanation and codes ---
Global frame-of-reference VS Local frame-of-reference
3D matrix rotation about an arbitrary point
Euler to matrix conversion
This question has some issues.
First, I think it's not good practice to ask directly for code. Instead, show code you tried, ask about errors in your code, or ask for a better approach or for libraries that may help you.
I would suggest rephrasing your question. Now it looks like "Can anyone do my homework, please?".
What problems are you facing? Maybe you don't want to implement matrix multiplication and you would like to know libraries that already do it, or you don't know how to make the call to atan2.
Once you have matrix multiplication, translation matrix build-up, rotation matrix build-up and atan2 (implemented by yourself or provided by a library), you just have to (pseudocode):
Matrix c = a;
Matrix yaw, pitch, roll;
Matrix pos;
buildTranslationMatrix(pos, x, y, z);
buildRotationZMatrix(yaw, w);
buildRotationXMatrix(pitch, p);
buildRotationYMatrix(roll, r);
mult (c, c, pos); //c = c*pos
mult (c, c, yaw); //c = c*yaw
mult (c, c, pitch);
mult (c, c, roll);
decomposePos(c, x, y, z); // obtain final xyz from c
decomposeAngles(c, w, p, r); // obtain final wpr from c
Note the post-multiplication.
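As an illustration of the decomposeAngles step, here is a hedged NumPy sketch for the question's yaw(z) -> pitch(x) -> roll(y) order, i.e. R = Rz(w)*Rx(p)*Ry(r); it is valid away from the gimbal-lock case cos(p) = 0:

import numpy as np

def decompose_zxy(R):
    # for R = Rz(w) @ Rx(p) @ Ry(r): R[2,1] = sin(p),
    # R[2,0] = -cos(p)sin(r), R[2,2] = cos(p)cos(r),
    # R[0,1] = -sin(w)cos(p), R[1,1] = cos(w)cos(p)
    p = np.arcsin(R[2, 1])
    r = np.arctan2(-R[2, 0], R[2, 2])
    w = np.arctan2(-R[0, 1], R[1, 1])
    return w, p, r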
Hope I made a constructive criticism. :)
EDIT
Second assumption is correct.
Maybe I misunderstood the first one, but I think it's wrong. As I am more used to transformation matrices than to Euler angles (and you pointed to that link), I understand it this way:
To obtain xyz (as well as wpr) I would compute the transformation matrix, which contains all the values. The final transformation matrix of the second plane, in the original coordinate system, is computed as:
M = TA * RA * TB * RB
(TA is A translation matrix of plane A and RA is its rotation matrix)
Transformation matrices can be understood this way:
        r r r t
        r r r t
    M = r r r t
        s s s w
We only care about rotation and translation. If you multiply TA*RA:
    1 0 0 x     r r r 0     r r r x
    0 1 0 y     r r r 0     r r r y
    0 0 1 z  *  r r r 0  =  r r r z
    0 0 0 1     0 0 0 1     0 0 0 1
which is how we understand the coordinate system of A. Remember that this means first rotating, as if it were at the origin, and then translating to position x, y, z. Post-multiplying means an internal transformation, a transformation in the mobile coordinate system. So, if we keep post-multiplying, we compose the final transformation matrix.
Also, matrix multiplication is associative, so
M = (TA * RA) * (TB * RB)
is the same as
M = ((TA * RA) * TB) * RB
Recapitulation
xyz will be in the last column of M, and wpr will have to be decomposed from the upper-left 3x3 submatrix of M.
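A hedged NumPy sketch of this composition (helper names and the degree-to-radian handling are mine; the rotation order follows the question's yaw(z) -> pitch(x) -> roll(y)):

import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def frame(x, y, z, w, p, r):
    # T * R, with R applied in yaw(z) -> pitch(x) -> roll(y) order
    return translation(x, y, z) @ rot_z(w) @ rot_x(p) @ rot_y(r)

A = (30, 50, 70, *np.radians([-15, 44, -80]))
B = (60, 90, 110, *np.radians([33, 150, -90]))

M = frame(*A) @ frame(*B)   # M = (TA*RA) * (TB*RB)
xyz = M[:3, 3]              # position of B in the original frame
R3 = M[:3, :3]              # decompose wpr from this 3x3 submatrix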
I have two cartesian coordinate systems with known unit vectors:
System A(x_A,y_A,z_A)
and
System B(x_B,y_B,z_B)
Both systems share the same origin (0,0,0). I'm trying to calculate a quaternion, so that vectors in system B can be expressed in system A.
I am familiar with the mathematical concept of quaternions. I have already implemented the required math from here: http://content.gpwiki.org/index.php/OpenGL%3aTutorials%3aUsing_Quaternions_to_represent_rotation
One possible solution could be to calculate Euler angles and use them for 3 quaternions. Multiplying them would lead to a final one, so that I could transform my vectors:
v(A) = q*v(B)*q_conj
But this would incorporate Gimbal Lock again, which was the reason NOT to use Euler angles in the beginning.
Any ideas how to solve this?
You can calculate the quaternion representing the best possible transformation from one coordinate system to another by the method described in this paper:
Paul J. Besl and Neil D. McKay
"Method for registration of 3-D shapes", Sensor Fusion IV: Control Paradigms and Data Structures, 586 (April 30, 1992); http://dx.doi.org/10.1117/12.57955
The paper is not open access but I can show you the Python implementation:
import numpy as np

def get_quaternion(lst1, lst2, matchlist=None):
    if not matchlist:
        matchlist = range(len(lst1))
    M = np.matrix([[0, 0, 0], [0, 0, 0], [0, 0, 0]])
    for i, coord1 in enumerate(lst1):
        x = np.matrix(np.outer(coord1, lst2[matchlist[i]]))
        M = M + x

    N11 = float(M[0][:, 0] + M[1][:, 1] + M[2][:, 2])
    N22 = float(M[0][:, 0] - M[1][:, 1] - M[2][:, 2])
    N33 = float(-M[0][:, 0] + M[1][:, 1] - M[2][:, 2])
    N44 = float(-M[0][:, 0] - M[1][:, 1] + M[2][:, 2])
    N12 = float(M[1][:, 2] - M[2][:, 1])
    N13 = float(M[2][:, 0] - M[0][:, 2])
    N14 = float(M[0][:, 1] - M[1][:, 0])
    N21 = float(N12)
    N23 = float(M[0][:, 1] + M[1][:, 0])
    N24 = float(M[2][:, 0] + M[0][:, 2])
    N31 = float(N13)
    N32 = float(N23)
    N34 = float(M[1][:, 2] + M[2][:, 1])
    N41 = float(N14)
    N42 = float(N24)
    N43 = float(N34)

    N = np.matrix([[N11, N12, N13, N14],
                   [N21, N22, N23, N24],
                   [N31, N32, N33, N34],
                   [N41, N42, N43, N44]])

    values, vectors = np.linalg.eig(N)
    w = list(values)
    mw = max(w)
    quat = vectors[:, w.index(mw)]
    quat = np.array(quat).reshape(-1,).tolist()
    return quat
This function returns the quaternion that you were looking for. The arguments lst1 and lst2 are lists of numpy.arrays where every array represents a 3D vector. If both lists are of length 3 (and contain orthogonal unit vectors), the quaternion should be the exact transformation. If you provide longer lists, you get the quaternion that is minimizing the difference between both point sets.
The optional matchlist argument is used to tell the function which point of lst2 should be transformed to which point in lst1. If no matchlist is provided, the function assumes that the first point in lst1 should match the first point in lst2 and so forth...
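For example, a hypothetical call with the three basis vectors of each system (the returned list is ordered scalar-first, [q0, q1, q2, q3], per the Besl-McKay formulation):

import numpy as np

lst1 = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
lst2 = [np.array([0., 1., 0.]), np.array([-1., 0., 0.]), np.array([0., 0., 1.])]
quat = get_quaternion(lst1, lst2)   # exact rotation between the two bases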
A similar function for sets of 3 Points in C++ is the following:
#include <Eigen/Dense>
#include <Eigen/Geometry>

using namespace Eigen;

/// Determine rotation quaternion from coordinate system 1 (vectors
/// x1, y1, z1) to coordinate system 2 (vectors x2, y2, z2)
Quaterniond QuaternionRot(Vector3d x1, Vector3d y1, Vector3d z1,
                          Vector3d x2, Vector3d y2, Vector3d z2) {

    Matrix3d M = x1*x2.transpose() + y1*y2.transpose() + z1*z2.transpose();

    Matrix4d N;
    N << M(0,0)+M(1,1)+M(2,2), M(1,2)-M(2,1),          M(2,0)-M(0,2),          M(0,1)-M(1,0),
         M(1,2)-M(2,1),        M(0,0)-M(1,1)-M(2,2),   M(0,1)+M(1,0),          M(2,0)+M(0,2),
         M(2,0)-M(0,2),        M(0,1)+M(1,0),         -M(0,0)+M(1,1)-M(2,2),   M(1,2)+M(2,1),
         M(0,1)-M(1,0),        M(2,0)+M(0,2),          M(1,2)+M(2,1),         -M(0,0)-M(1,1)+M(2,2);

    EigenSolver<Matrix4d> N_es(N);
    Vector4d::Index maxIndex;
    N_es.eigenvalues().real().maxCoeff(&maxIndex);

    Vector4d ev_max = N_es.eigenvectors().col(maxIndex).real();

    Quaterniond quat(ev_max(0), ev_max(1), ev_max(2), ev_max(3));
    quat.normalize();
    return quat;
}
What language are you using? If C++, feel free to use my open source library:
http://sourceforge.net/p/transengine/code/HEAD/tree/transQuaternion/
The short of it is, you'll need to convert your vectors to quaternions, do your calculations, and then convert your quaternion to a transformation matrix.
Here's a code snippet:
Quaternion from vector:
cQuat nTrans::quatFromVec( Vec vec ) {
    float angle = vec.v[3];
    float s_angle = sin( angle / 2 );
    float c_angle = cos( angle / 2 );
    return (cQuat( c_angle, vec.v[0]*s_angle, vec.v[1]*s_angle,
                   vec.v[2]*s_angle )).normalized();
}
And for the matrix from quaternion:
Matrix nTrans::matFromQuat( cQuat q ) {
    Matrix t;
    q = q.normalized();
    t.M[0][0] = ( 1 - (2*q.y*q.y + 2*q.z*q.z) );
    t.M[0][1] = ( 2*q.x*q.y + 2*q.w*q.z );
    t.M[0][2] = ( 2*q.x*q.z - 2*q.w*q.y );
    t.M[0][3] = 0;
    t.M[1][0] = ( 2*q.x*q.y - 2*q.w*q.z );
    t.M[1][1] = ( 1 - (2*q.x*q.x + 2*q.z*q.z) );
    t.M[1][2] = ( 2*q.y*q.z + 2*q.w*q.x );
    t.M[1][3] = 0;
    t.M[2][0] = ( 2*q.x*q.z + 2*q.w*q.y );
    t.M[2][1] = ( 2*q.y*q.z - 2*q.w*q.x );
    t.M[2][2] = ( 1 - (2*q.x*q.x + 2*q.y*q.y) );
    t.M[2][3] = 0;
    t.M[3][0] = 0;
    t.M[3][1] = 0;
    t.M[3][2] = 0;
    t.M[3][3] = 1;
    return t;
}
I just ran into this same problem. I was on the track to a solution, but I got stuck.
So, you'll need TWO vectors which are known in both coordinate systems. In my case, I have 2 orthonormal vectors in the coordinate system of a device (gravity and magnetic field), and I want to find the quaternion to rotate from device coordinates to global orientation (where North is positive Y, and "up" is positive Z). So, in my case, I've measured the vectors in the device coordinate space, and I'm defining the vectors themselves to form the orthonormal basis for the global system.
With that said, consider the axis-angle interpretation of quaternions: there is some vector V about which the device's coordinates can be rotated by some angle to match the global coordinates. I'll call my (negative) gravity vector G, and the magnetic field M (both are normalized).
V, G and M all describe points on the unit sphere.
So do Z_dev and Y_dev (the Z and Y bases for my device's coordinate system).
The goal is to find a rotation which maps G onto Z_dev and M onto Y_dev.
For V to rotate G onto Z_dev the distance between the points defined by G and V must be the same as the distance between the points defined by V and Z_dev. In equations:
|V - G| = |V - Z_dev|
The solution to this equation forms a plane (all points equidistant to G and Z_dev). But, V is constrained to be unit-length, which means the solution is a ring centered on the origin -- still an infinite number of points.
But, the same situation is true of Y_dev, M and V:
|V - M| = |V - Y_dev|
The solution to this is also a ring centered on the origin. These rings have two intersection points, where one is the negative of the other. Either is a valid axis of rotation (the angle of rotation will just be negative in one case).
Using the two equations above, and the fact that each of these vectors is unit length you should be able to solve for V.
Then you just have to find the angle to rotate by, which you should be able to do using the vectors going from V to your corresponding bases (G and Z_dev for me).
Ultimately, I got gummed up towards the end of the algebra in solving for V.. but either way, I think everything you need is here -- maybe you'll have better luck than I did.
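The algebra can actually be finished: for unit vectors, |V - G| = |V - Z_dev| expands to V . (Z_dev - G) = 0, and likewise |V - M| = |V - Y_dev| gives V . (Y_dev - M) = 0, so V must lie along the cross product of the two difference vectors. A hedged NumPy sketch (the function name is mine):

import numpy as np

def rotation_axis(G, Z_dev, M, Y_dev):
    # V is orthogonal to both difference vectors, hence their cross product
    V = np.cross(Z_dev - G, Y_dev - M)
    n = np.linalg.norm(V)
    if n < 1e-12:
        raise ValueError("degenerate case: vectors already aligned or opposite")
    return V / n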
Define 3x3 matrices A and B as you gave them, so the columns of A are x_A, y_A, and z_A, and the columns of B are similarly defined. Then the transformation T taking coordinate system A to B is the solution of TA = B, so T = BA^{-1}. From the rotation matrix T of the transformation you can calculate the quaternion using standard methods.
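In NumPy this is a one-liner (the identity vectors below are placeholders for your measured ones):

import numpy as np

x_A, y_A, z_A = np.eye(3)          # substitute your unit vectors here
x_B, y_B, z_B = np.eye(3)

A = np.column_stack([x_A, y_A, z_A])
B = np.column_stack([x_B, y_B, z_B])
T = B @ np.linalg.inv(A)           # for orthonormal columns, inv(A) == A.T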
You need to express the orientation of B with respect to A as a quaternion Q. Then any vector in B can be transformed to a vector in A, e.g. by using a rotation matrix R derived from Q: vectorInA = R*vectorInB.
There is a demo script for doing this (including a nice visualization) in the Matlab/Octave library available on this site: http://simonbox.info/index.php/blog/86-rocket-news/92-quaternions-to-model-rotations
You can compute what you want using only quaternion algebra.
Given two unit vectors v1 and v2, you can directly embed them into quaternion algebra and get the corresponding pure quaternions q1 and q2. The rotation quaternion Q that aligns the two vectors such that:
Q q1 Q* = q2
is given by:
Q = (q1 + q2) q1 / (||q1 + q2||)
The above product is the quaternion product.
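A quick NumPy check of this identity (not part of the original answer; quaternions are scalar-first [w, x, y, z] with the Hamilton product):

import numpy as np

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
q1, q2 = np.array([0.0, *v1]), np.array([0.0, *v2])   # pure quaternions
s = q1 + q2
Q = qmul(s, q1) / np.linalg.norm(s)                   # Q = (q1 + q2) q1 / ||.||
Qc = Q * np.array([1, -1, -1, -1])                    # conjugate
print(qmul(qmul(Q, q1), Qc))                          # ~ [0, 0, 1, 0], i.e. q2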