I'm playing around with the Open XML SDK to see if it's a viable solution for our PowerPoint needs. One thing that is required is the ability to position shapes in the PowerPoint file. I've been searching for a way to get the position of a Shape, but have only come across the MSDN "How To" http://msdn.microsoft.com/en-us/library/cc850828.aspx and a Position class (but no way to get it from a Shape) http://msdn.microsoft.com/en-us/library/office/documentformat.openxml.wordprocessing.position%28v=office.14%29.aspx.
How do I do something like:
PresentationDocument presentationDocument = PresentationDocument.Open("C:\\MyDoc.pptx", true);
IdPartPair pp = presentationDocument.PresentationPart.SlideParts.First().Parts.FirstOrDefault();
var shape = pp.OpenXmlPart;
// How do I get the position and dimensions?
You have two elements that describe the position and dimensions of the shape:
- Offset gives the position of the top-left corner of your shape
- Extents gives the size of your shape
shape.ShapeProperties.Transform2D.Offset.X   // gives the x position of the top-left corner
shape.ShapeProperties.Transform2D.Offset.Y   // gives the y position of the top-left corner
shape.ShapeProperties.Transform2D.Extents.Cx // gives the x size of the shape: the width
shape.ShapeProperties.Transform2D.Extents.Cy // gives the y size of the shape: the height
Go through the XML for the slide in question and look for xfrm elements, which should contain off (offset) and ext (extent) sub-elements. The measurements are in EMUs (see last page of Wouter van Vugt's document).
Sometimes ShapeProperties is not exposed directly as a property of the shape variable you are holding; in that case you must cast:
var sP = ((DocumentFormat.OpenXml.Presentation.Shape)shape).ShapeProperties;
Afterwards you can use Transform2D and find the coordinates as Deunz wrote.
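Putting the pieces together, here is a minimal sketch of reading the position and size of every shape on the first slide (one assumption worth noting: placeholder shapes can inherit their transform from the slide layout, in which case Transform2D may be null):

using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using P = DocumentFormat.OpenXml.Presentation;

using (PresentationDocument doc = PresentationDocument.Open("C:\\MyDoc.pptx", false))
{
    SlidePart slidePart = doc.PresentationPart.SlideParts.First();
    foreach (P.Shape shape in slidePart.Slide.Descendants<P.Shape>())
    {
        var xfrm = shape.ShapeProperties.Transform2D;
        if (xfrm == null)
            continue; // transform inherited from the layout/master, nothing local to read

        long x = xfrm.Offset.X;        // distance from the left edge of the slide, in EMUs
        long y = xfrm.Offset.Y;        // distance from the top edge of the slide, in EMUs
        long width = xfrm.Extents.Cx;  // width in EMUs (914400 EMUs = 1 inch)
        long height = xfrm.Extents.Cy; // height in EMUs
    }
}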
I am using the nvd3 boxplot for my charts. Is there any option to show the mean as an asterisk (*) on the boxplot? Can we also have the n value above the top whisker, similar to the image below?
This issue has been posted here.
Thanks in advance.
Edit
I would like to add a mean value which I calculate from the data points and not just the center of the box plot. The computed mean may not be in the center of the box plot due to outliers.
You can achieve this with the following algorithm:
Get all the rectangles
Find the middle point
Create a text element and place it at the calculated center
Code snippet:
function makeMarkOnMean() {
    d3.selectAll(".mean").remove(); // remove all existing * mean markers
    // get all the rectangles
    d3.selectAll(".nv-boxplot-box")[0].forEach(function(r) {
        window.setTimeout(function() {
            var x = parseFloat(d3.select(r).attr("x")) + parseFloat(d3.select(r).attr("width")) / 2 - 3; // x position of the star
            var y = parseFloat(d3.select(r).attr("y")) + parseFloat(d3.select(r).attr("height")) / 2 + 12; // y position of the star
            // now draw the star at the above x and y
            d3.select(r.parentNode).append("text")
                .attr("class", "mean")
                .style("font-size", "x-large")
                .text("*")
                .style("fill", "red")
                .attr("x", x)
                .attr("y", y);
        }, 500);
    });
}
Working code here.
So according to the Unity documentation RectTransform.anchoredPosition will return the screen coordinates of a UI element if the anchors are touching at the pivot point of the RectTransform. However, if they are separated (in my case positioned at the corners of the rect) they will give you the position of the anchors relative to the pivot point. This is wonderful unless you want to keep appropriate dimensions of a UI object through multiple resolutions and position a different object based on that position at the same time.
Let's break this down. I have object1 and object2. object1 is positioned at (322.5, -600), and when the anchor points meet at the center (pivot) of the object, anchoredPosition returns just that and object2 is positioned just fine. On the other hand, once I have placed the anchors at the 4 corners of object1, anchoredPosition returns (45.6, -21). That's just no good. I've even tried using Transform.position and then Camera.WorldToScreenPoint(), but that gets me just about as far toward my goal.
I was hoping that you might be able to help me find a way to get the actual screen coordinates of this object. If anyone has any insight into this subject it would be greatly appreciated.
Notes: I've already attempted to use RectTransform.rect.center and it returned (0, 0).
I've also looked into RectTransformUtility, and those helper functions have done squat for me.
anchoredPosition returns "The position of the pivot of this RectTransform relative to the anchor reference point." It has nothing to do with screen coordinates or world space.
If you're looking for the screen coordinates of a UI element in Unity, you can use either rectTransform.TransformPoint or rectTransform.GetWorldCorners to get any of the Vector3s you need in world space. Whichever you decide to go with, you can then pass the result into Camera.WorldToScreenPoint().
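For example, a minimal sketch of getting the screen-space center of a RectTransform this way (GetScreenCenter is just an illustrative name; it assumes the canvas is rendered by the camera you pass in — for a Screen Space - Overlay canvas the world corners are already in screen pixels and no conversion is needed):

// Sketch: screen-pixel center of a RectTransform.
Vector3 GetScreenCenter(RectTransform rectTransform, Camera camera)
{
    Vector3[] corners = new Vector3[4];     // bottom-left, top-left, top-right, bottom-right
    rectTransform.GetWorldCorners(corners);
    Vector3 worldCenter = (corners[0] + corners[2]) * 0.5f;
    return camera.WorldToScreenPoint(worldCenter);
}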
Here's a glimpse of how finding world-space coordinates of UI elements works, if you're stuck and need to roll your own transformations from view space to world space.
This may be beneficial if, say, you need something more than rectTransform.TransformPoint, or you want to know how this works.
Ok, so you want to do a transformation from normalised UI coordinates in the range [-1, 1], and de-project them back into world space coordinates.
To do this you could use something like Camera.main.ScreenToWorldPoint or Camera.main.ViewportToWorldPoint, or even rectTransform.position if you're a slacker.
This is how to do it with just the camera's projection matrix.
/// <summary>
/// Get the world position of an anchor/normalised device coordinate in the range [-1, 1]
/// </summary>
private Vector3 GetAnchor(Vector2 ndcSpace)
{
    Vector3 worldPosition;
    Vector4 viewSpace = new Vector4(ndcSpace.x, ndcSpace.y, 1.0f, 1.0f);
    // Transform to projection coordinate.
    Vector4 projectionToWorld = (_mainCamera.projectionMatrix.inverse * viewSpace);
    // Perspective divide.
    projectionToWorld /= projectionToWorld.w;
    // Z-component is backwards in Unity.
    projectionToWorld.z = -projectionToWorld.z;
    // Transform from camera space to world space.
    worldPosition = _mainCamera.transform.position + _mainCamera.transform.TransformVector(projectionToWorld);
    return worldPosition;
}
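For example, the function above could then be used to recover the view corners at that depth (a small usage sketch):

// Usage sketch: world-space positions of the bottom-left and top-right corners of the view.
Vector3 bottomLeft = GetAnchor(new Vector2(-1f, -1f));
Vector3 topRight   = GetAnchor(new Vector2( 1f,  1f));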
I've found that you can multiply your coordinate by two times the camera size and divide it by the screen height.
I have a panel placed at (0, 1080) on a Full HD screen (1920 x 1080), and the camera size is 7. So the Y coordinate in world space will be 1080 * 7 * 2 / 1080 = 14 -> (0, 14).
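In code that rule of thumb looks roughly like this (a sketch; it assumes an orthographic camera, where "camera size" is orthographicSize, and a canvas spanning the full screen):

// Convert a canvas-space Y coordinate (pixels) to world units for an orthographic camera.
// orthographicSize * 2 world units cover the full screen height.
float worldY = canvasY * Camera.main.orthographicSize * 2f / Screen.height;
// e.g. canvasY = 1080, orthographicSize = 7, Screen.height = 1080  ->  worldY = 14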
ScreenToWorldPoint converts a canvas position to a world position:
Camera.main.ScreenToWorldPoint(transform.position)
I'm working on a map zoom algorithm which changes the coordinates of the area (the visible part of the map) on click.
For example, at the beginning, the area has this coordinates :
(0, 0) for the corner upper left
(100, 100) for the corner lower right
(50, 50) for the center of the area
And when the user clicks somewhere in the area, at a (x, y) coordinate, I say that the new coordinates for the area are :
(x-(100-0)/3, y-(100-0)/3) for the corner upper left
(x+(100-0)/3, y+(100-0)/3) for the corner lower right
(x, y) for the center of the area
The problem is that this algorithm is not really satisfactory, because when the user clicks somewhere, the point which is under the mouse moves to the middle of the area.
So I would like to have an idea of the algorithm used in Google Maps to change the area coordinates because this algorithm is pretty good : when the user clicks somewhere, the point which is under the mouse stays under the mouse, but the rest of area around is zoomed.
Does somebody have an idea of how Google does it?
Let's say you have a rectangle windowArea which holds the drawing area coordinates (i.e. the web browser window area in pixels). For example, if you are drawing the map on the whole screen and the top-left corner has coordinates (0, 0), then that rectangle will have the values:
windowArea.top = 0;
windowArea.left = 0;
windowArea.right = maxWindowWidth;
windowArea.bottom = maxWindowHeight;
You also need to know visible map fragment, that will be longitude and latitude ranges, for example:
mapArea.top = 8.00;    //lat
mapArea.left = 51.00;  //lng
mapArea.right = 12.00; //lng
mapArea.bottom = 54.00;//lat
When zooming recalculate mapArea:
mapArea.left = mapClickPoint.x - (windowClickPoint.x- windowArea.left) * (newMapWidth / windowArea.width());
mapArea.top = mapClickPoint.y - (windowArea.bottom - windowClickPoint.y) * (newMapHeight / windowArea.height());
mapArea.right = mapArea.left + newMapWidth;
mapArea.bottom = mapArea.top + newMapHeight;
mapClickPoint holds map coordinates under mouse pointer(longitude, latitude).
windowClickPoint holds window coordinates under mouse pointer(pixels).
newMapHeight and newMapWidth hold new ranges of visible map fragment after zoom:
newMapWidth = zoomFactor * mapArea.width; // let's say that zoomFactor = <1.0, maxZoomFactor>
newMapHeight = zoomFactor * mapArea.height;
When you have the new mapArea values, you need to stretch it to cover the whole windowArea; that means mapArea.top/left should be drawn at windowArea.top/left and mapArea.right/bottom should be drawn at windowArea.right/bottom.
I am not sure if Google Maps uses the same algorithm, but this one gives similar results and is pretty versatile; you just need to know the window coordinates and some kind of coordinates for the visible part of the object that will be zoomed.
Let us state the problem in one dimension, with the input (left, right, clickx, ratio).
So basically, you want the ratio of the distances from the click to the left and right edges to stay the same:
(left' - clickx) / (left - clickx) = (right' - clickx) / (right - clickx)
and furthermore, the window is reduced, so:
(right' - left') / (right - left) = ratio
Therefore, the solution is:
left' = ratio*(left -clickx)+clickx
right' = ratio*(right-clickx)+clickx
And you can do the same for the other dimensions.
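Translated into code and applied to both axes, the whole zoom step is just a few lines (a sketch with made-up names; ratio < 1 zooms in, ratio > 1 zooms out):

// Zoom the visible area about the clicked point so that it stays under the cursor.
void ZoomAt(ref double left, ref double right, ref double top, ref double bottom,
            double clickX, double clickY, double ratio)
{
    left   = ratio * (left   - clickX) + clickX;
    right  = ratio * (right  - clickX) + clickX;
    top    = ratio * (top    - clickY) + clickY;
    bottom = ratio * (bottom - clickY) + clickY;
}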
I am trying to find an effective algorithm for the following 3D Cube Selection problem:
Imagine a 2D array of Points (let's make it a square of size x size) and call it a side.
For ease of calculation, let's declare max as size - 1.
Create a Cube of six sides, keeping (0,0) at the lower-left corner and (max,max) at the top-right.
Using z to track which side a single point is located on, y as up, and x as right:
public class Point3D {
    public int x, y, z;
    public Point3D() {}
    public Point3D(int X, int Y, int Z) {
        x = X;
        y = Y;
        z = Z;
    }
}

Point3D[,,] CreateCube(int size)
{
    Point3D[,,] Cube = new Point3D[6, size, size];
    for (int z = 0; z < 6; z++)
    {
        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                Cube[z, y, x] = new Point3D(x, y, z);
            }
        }
    }
    return Cube;
}
Now to select a random single point, we can just use three random numbers such that:
Point3D point = new Point3D(
    Random(0, size), // 0 to max
    Random(0, size), // 0 to max
    Random(0, 6));   // 0 to 5
To select a plus we could detect if a given direction would fit inside the current side.
Otherwise we find the cube located on the side touching the center point.
Using 4 functions with something like:
private T GetUpFrom<T>(T[,,] dataSet, Point3D point) where T : class {
    if (point.y < max)
        return dataSet[point.z, point.y + 1, point.x];
    else {
        switch (point.z) {
            case 0: return dataSet[1, point.x, max];       // x+
            case 1: return dataSet[5, max, max - point.x]; // y+
            case 2: return dataSet[1, 0, point.x];         // z+
            case 3: return dataSet[1, max - point.x, 0];   // x-
            case 4: return dataSet[2, max, point.x];       // y-
            case 5: return dataSet[1, max, max - point.x]; // z-
        }
    }
    return null;
}
Now I would like to find a way to select arbitrary shapes (like predefined random blobs) at a random point.
But I would settle for adjusting it to either a square or a jagged circle.
The actual surface area would be warped and folded onto itself on corners, which is fine and does not need compensating ( imagine putting a sticker on the corner on a cube, if the corner matches the center of the sticker one fourth of the sticker would need to be removed for it to stick and fold on the corner). Again this is the desired effect.
No duplicate selections are allowed, so cubes that would be selected twice need to be filtered somehow (or calculated in such a way that duplicates do not occur). This could be as simple as using a HashSet, or a List with a helper function that checks whether an entry is unique (which is fine, as selections will always stay far below 1000 cubes max).
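As an aside, the HashSet route only needs Point3D to have value equality; here is a rough sketch of the de-duplication (not part of the original code, and Deduplicate/candidates are just illustrative names):

using System.Collections.Generic;

// Sketch: filter duplicates out of a selection with a HashSet.
// Assumes Point3D overrides Equals and GetHashCode to compare x, y and z,
// which the class defined above does not do yet.
List<Point3D> Deduplicate(IEnumerable<Point3D> candidates)
{
    var seen = new HashSet<Point3D>();
    var result = new List<Point3D>();
    foreach (Point3D p in candidates)
        if (seen.Add(p))   // Add returns false when the point was already selected
            result.Add(p);
    return result;
}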
The delegate for this function in the class containing the Sides of the Cube looks like:
delegate T[] SelectShape(Point3D point, int size);
Currently I'm thinking of checking each side of the Cube to see which part of the selection is located on that side.
Calculating which part of the selection is on the same side of the selected Point3D, would be trivial as we don't need to translate the positions, just the boundary.
Next would be 5 translations, followed by checking the other 5 sides to see if part of the selected area is on that side.
I'm getting rusty in solving problems like this, so was wondering if anyone has a better solution for this problem.
@arghbleargh requested a further explanation:
We will use a Cube of 6 sides and use a size of 16. Each side is 16x16 points.
Stored as a three-dimensional array, I used z for the side, then y, then x, so that the array is initialized with new Point3D[z, y, x]. It would work almost identically for jagged arrays ([z][y][x]), which are serializable by default (so that would be nice too), but those would require separate initialization of each sub-array.
Let's select a square with the size of 5x5, centered around a selected point.
To find such a 5x5 square, subtract and add 2 to the axis in question: x-2 to x+2 and y-2 to y+2.
Randomly selecting a side, the point we select is z = 0 (the x+ side of the Cube), y = 6, x = 6.
Both 6-2 and 6+2 are well within the limits of 16 x 16 array of the side and easy to select.
Shifting the selection point to x=0 and y=6 however would prove a little more challenging.
As x - 2 would require a look up of the side to the left of the side we selected.
Luckily we selected side 0 or x+, because as long as we are not on the top or bottom side and not going to the top or bottom side of the cube, all axis are x+ = right, y+ = up.
So getting the coordinates on the side to the left only requires offsetting from max (size - 1). Remember size = 16, max = 15, x = 0 - 2 = -2, and max + x = 13.
The subsection on this side would thus be x = 13 to 15, y = 4 to 8.
Adding this to the part we could select on the original side would give the entire selection.
Shifting the selection to 6,0 would prove more complicated, as now we cannot hide behind the safety of knowing that all axes align easily; some rotation might be required. There are only 4 possible translations, so it is still manageable.
Shifting to 0,0 is where the problems really start to appear.
Now both left and down require wrapping around to other sides. Furthermore, even the subdivided parts would have areas falling outside.
The only salve on this wound is that we do not care about the overlapping parts of the selection.
So we can either skip them when possible or filter them from the results later.
Now that we move from a 'normal axis' side to the bottom one, we would need to rotate and match the correct coordinates so that the points wrap around the edge correctly.
As the axis of each side are folded in a cube, some axis might need to flip or rotate to select the right points.
The question remains whether there are better solutions for selecting all points on a cube that fall inside an area. Perhaps I could give each side a translation matrix and test coordinates in world space?
Found a pretty good solution that requires little effort to implement.
Create storage for a hollow cube with a size of n + 2, where n is the size of the cube contained in the data. This satisfies the requirement that the sides are touching but do not overlap or share points.
This will simplify calculations and translations by creating a lookup array that uses Cartesian coordinates.
A single translation function takes the coordinates of a selected point and returns its 'world position'.
Using that function we can store each point in the Cartesian lookup array.
When selecting a point, we can again use the same function (or use stored data) and subtract (to get AA or min position) and add (to get BB or max position).
Then we can just lookup each entry between the AA.xyz and BB.xyz coordinates.
Each null entry should be skipped.
Optimize if required by using a type of array that returns null if z is not 0 or size-1 and thus does not need to store the null references of the 'hollow cube' in the middle.
Now that the cube can select 3D cubes, the other shapes are trivial: given a 3D point, define a 3D shape and test each part of the shape against the lookup array; if the entry is not null, add it to the selection.
Each point is only selected once as we only check each position once.
A little calculation overhead due to testing against the empty inside and outside of the cube, but array access is so fast that this solution is fine for my current project.
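A rough sketch of that lookup idea (ToWorld is a placeholder for the translation function described above and is not implemented here; aa and bb are assumed to already be clamped to the lattice bounds):

using System.Collections.Generic;

// Build the (n+2)^3 Cartesian lookup lattice from the six sides.
Point3D[,,] BuildLookup(Point3D[,,] cube, int size)
{
    int n = size + 2;
    var lookup = new Point3D[n, n, n];
    for (int z = 0; z < 6; z++)
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
            {
                Point3D world = ToWorld(z, y, x, size); // hypothetical (side, y, x) -> lattice translation
                lookup[world.z, world.y, world.x] = cube[z, y, x];
            }
    return lookup;
}

// Select every stored point inside the axis-aligned box [aa, bb].
List<Point3D> SelectBox(Point3D[,,] lookup, Point3D aa, Point3D bb)
{
    var result = new List<Point3D>();
    for (int z = aa.z; z <= bb.z; z++)
        for (int y = aa.y; y <= bb.y; y++)
            for (int x = aa.x; x <= bb.x; x++)
                if (lookup[z, y, x] != null) // skip the hollow interior and the empty outside
                    result.Add(lookup[z, y, x]);
    return result;
}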
When laying out a figure in MATLAB, typing axis equal ensures that no matter what the figure dimensions are, the axes will always be square.
My current problem is that I want to add a second axes to this plot. Usually, that's no problem; I would just type axes([x1 y1 x2 y2]), and a new square figure would be added with corners at (x1, y1), (x2, y2), which is a fixed location relative to the figure. The problem is, I want this new axes to be located at a fixed location relative to the first axes.
So, my questions are:
Does anyone know how I can position an axes in a figure by specifying the location relative to another axes?
Assuming I can do 1, how can I have this new axes remain in the same place even if I resize the figure?
An axis position property is relative to its parent container. Therefore, one possibility is to create a transparent panel with the same size as the first axis, then inside it create the second axis, and set its location and size as needed. The position specified would be as if it were relative to the first axis.
Now we need to always maintain the panel to be the same size/location as the first axis. Usually this can be done using LINKPROP which links a property of multiple graphic objects (panel and axis) to be the same, namely the 'Position' property.
However, this would fail in your case: when calling axis image, it fixes the data units to be the same in every direction by setting aspect ratio properties like 'PlotBoxAspectRatio' and 'DataAspectRatio'. The sad news is that the 'Position' property will not reflect the change in size, thus breaking the above solution. Here is an example to illustrate the problem: if you query the position property before/after issuing the axis image call, it will be the same:
figure, plot(1:10,1:10)
get(gca,'Position')
pause(1)
axis image
get(gca,'Position')
Fortunately for us, there is a submission on FEX (plotboxpos) that solves this exact issue and returns the actual position of the plotting region of the axis. Once we have that, it's a matter of syncing the panel position to the axis position. One trick is to create an event listener for when the axis changes size (it appears that the 'TightInset' property changes, unlike the 'Position' property, so that could be the trigger in our case).
I wrapped the above in a function AXESRELATIVE for convenience: you call it as you would the builtin AXES function. The only difference is you give it as first argument the handle to the axis you want to relatively-position the newly created axis against. It returns handles to both the new axis and its containing panel.
Here is an example usage:
%# automatic resize only works for normalized units
figure
hParentAx = axes('Units','normalized');
axis(hParentAx, 'image')
%# create a new axis positioned in normalized units w.r.t. the previous axis
%# the axis should maintain its relative position on resizing the figure
[hAx hPan] = axesRelative(hParentAx, ...
'Units','normalized', 'Position',[0.7 0.1 0.1 0.1]);
set(hAx, 'Color','r')
And the function implementation:
function [hAx hPan] = axesRelative(hParentAx, varargin)
%# create panel exactly on top of parent axis
s = warning('off', 'MATLAB:hg:ColorSpec_None');
hPan = uipanel('Parent',get(hParentAx, 'Parent'), ...
'BorderType','none', 'BackgroundColor','none', ...
'Units',get(hParentAx,'Units'), 'Position',plotboxpos(hParentAx));
warning(s)
%# sync panel to always match parent axis position
addlistener(handle(hParentAx), ...
{'TightInset' 'Position' 'PlotBoxAspectRatio' 'DataAspectRatio'}, ...
'PostSet',@(src,ev) set(hPan, 'Position',plotboxpos(hParentAx)) );
%# create new axis under the newly created panel
hAx = axes('Parent',hPan, varargin{:});
end
On a completely different note: before your recent edit, I got the impression that you were trying to produce a scatter plot of images (i.e. like a usual scatter plot, but with full images instead of points).
What you suggested (from what I understand) is creating one axis for each image, and setting its position corresponding to the x/y coordinates of the point.
My solution is to use the IMAGE/IMAGESC functions and draw the small images by explicitly setting the 'XData' and 'YData' properties to shift and scale the images appropriately. The beauty of this is that it requires a single axis and doesn't suffer from having to deal with resizing issues.
Here is a sample implementation for that:
%# create fan-shaped coordinates
[R,PHI] = meshgrid(linspace(1,2,5), linspace(0,pi/2,10));
X = R.*cos(PHI); Y = R.*sin(PHI);
X = X(:); Y = Y(:);
num = numel(X);
%# images at each point (they don't have to be the same)
img = imread('coins.png');
img = repmat({img}, [num 1]);
%# plot scatter of images
SCALE = 0.2; %# image size along the biggest dimension
figure
for i=1:num
    %# compute XData/YData vectors of each image
    [h w] = size(img{i});
    if h>w
        scaleY = SCALE;
        scaleX = SCALE * w/h;
    else
        scaleX = SCALE;
        scaleY = SCALE * h/w;
    end
    xx = linspace(-scaleX/2, scaleX/2, h) + X(i);
    yy = linspace(-scaleY/2, scaleY/2, w) + Y(i);
    %# note: we are using the low-level syntax of the function
    image('XData',xx, 'YData',yy, 'CData',img{i}, 'CDataMapping','scaled')
end
axis image, axis ij
colormap gray, colorbar
set(gca, 'CLimMode','auto')
This is usually the sort of thing you can take care of with a custom 'ResizeFcn' for your figure, which will adjust the position and size of the smaller axes with respect to the larger one. Here's an example of a resize function that maintains the size of a subaxes so that it is always 15% the size of the larger square axes and located in the bottom right corner:
function resizeFcn(src,event,hAxes,hSubAxes)
    figurePosition = get(get(hAxes,'Parent'),'Position');
    axesPosition = get(hAxes,'Position').*figurePosition([3 4 3 4]);
    width = axesPosition(3);
    height = axesPosition(4);
    minExtent = min(width,height);
    newPosition = [axesPosition(1)+(width-minExtent)/2+0.8*minExtent ...
                   axesPosition(2)+(height-minExtent)/2+0.05*minExtent ...
                   0.15*minExtent ...
                   0.15*minExtent];
    set(hSubAxes,'Units','pixels','Position',newPosition);
end
And here's an example of its use:
hFigure = figure('Units','pixels'); %# Use pixel units for figure
hAxes = axes('Units','normalized'); %# Normalized axes units so it auto-resizes
axis(hAxes,'image'); %# Make the axes square
hSubAxes = axes('Units','pixels'); %# Use pixel units for subaxes
set(hFigure,'ResizeFcn',{@resizeFcn,hAxes,hSubAxes}); %# Set resize function