Eclipse GEF/Draw2d: Bounds vs. Constraints

Can someone explain the difference/relationship between bounds and constraints in Draw2d?
I'm trying to set up a GEF editor where instances of the same EditPart class are nested inside each other (I can post a simplified version of the code if necessary, but my question is really just conceptual). Each figure has an XYLayout and I'm setting the bounds and constraints (Rectangles) of each figure in refreshVisuals.
Right now my bounds and constraints are the same for each figure. Is that correct? Since I'm using XYLayout, are the coordinates of the bounds relative to the parent Figure? How about for the constraints?

The bounds of a child figure are relative to the parent figure only if isCoordinateSystem() of the parent returns true, which is hardly ever the case. So in practice, bounds are absolute coordinates.
The rectangles you provide as constraints in XYLayout are expected to contain coordinates relative to the parent; the layout then converts those relative coordinates as appropriate. If, for example, no figure in the parent chain has a local coordinate system, the resulting bounds will be absolute coordinates.
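To make the relationship concrete, here is a toy sketch of the conversion an XYLayout effectively performs when no parent uses local coordinates (plain Python rather than the Draw2d API, and ignoring client-area insets):

def absolute_bounds(parent_bounds, constraint):
    # constraint is (x, y, width, height) relative to the parent;
    # the result is in absolute coordinates, like Draw2d bounds in practice
    px, py, _, _ = parent_bounds
    x, y, w, h = constraint
    return (px + x, py + y, w, h)

# A child constrained to (10, 10, 50, 30) inside a parent whose absolute
# bounds start at (100, 100):
print(absolute_bounds((100, 100, 200, 200), (10, 10, 50, 30)))
# -> (110, 110, 50, 30)

So in refreshVisuals you would typically set only the constraint (via setLayoutConstraint on the parent edit part) and let the layout compute the child's bounds, rather than assigning the same rectangle to both.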

Related

Transition Between Curve Types Using D3.js

I'd like to transition between curve types using D3.js.
Take a look at this block. The data stay the same but the curve type changes. I was expecting the paths to maintain their approximate positions on the plane -- the data stay the same, after all -- but they don't. The paths appear to be redrawn, although I don't understand why: with basis to linear the paths seem to be redrawn from left to right, whilst with linear to basis they seem to be redrawn from right to left.
I've read Mike Bostock's post on Path Transitions, but I think this is a slightly different problem. There, the data change but the curve type remains the same. Here, the data stay the same but the curve type changes.
Thanks in advance for any help!
To understand why you have such a strange transition, let's compare the d attribute of the paths using a curveBasis and a curveLinear.
First, a curveBasis:
d="M0,101.2061594964L45.48756294826797,89.52282837400001C90.97512589653594,77.83949725160001,181.95025179307189,54.47283500680002,271.46268884480395,84.08731623460001C360.975125896536,113.70179746240001,449.0248741034641,196.2974221628,538.5373111551961,222.09899531679994C628.0497482069281,247.90056847079998,719.0248741034642,216.90809007840002,764.512437051732,201.4118508822L810,185.915611686"
Now a curveLinear (same data):
d="M0,101.2061594964L272.92537768960784,31.10617276200003L537.0746223103922,278.89304686319997L810,185.915611686"
As you can see, the path is far simpler with curveLinear: it has a different number of commands and coordinates than the curveBasis version. When the transition interpolates between the two d strings, the paired numbers do not correspond to matching points on the two curves, so the strange transition is the expected behaviour.
A possible solution is using a path interpolation, as proposed in this code from Mike Bostock.
Here is your bl.ocks with a path interpolation: http://blockbuilder.org/anonymous/02125b1fb145a979e53f369c4976a772
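For intuition, here is a minimal sketch of the idea behind such a path interpolation, written in plain Python rather than D3 (the function names are made up): resample both polylines to the same number of points by arc length, then blend the paired points, so intermediate shapes morph smoothly instead of depending on how the two d strings happen to align.

import math

def resample(points, n):
    # Cumulative arc length along the polyline.
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    out, j = [], 0
    for i in range(n):
        t = total * i / (n - 1)
        # Advance to the segment containing arc length t.
        while j < len(d) - 2 and d[j + 1] < t:
            j += 1
        seg = d[j + 1] - d[j]
        u = (t - d[j]) / seg if seg > 0 else 0.0
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
    return out

def blend(a, b, t, n=64):
    # Point-wise interpolation of the two resampled polylines at time t.
    ra, rb = resample(a, n), resample(b, n)
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(ra, rb)]

# Halfway between a 2-segment and a 4-segment polyline:
mid = blend([(0, 0), (1, 1), (2, 0)],
            [(0, 0), (0.5, 2), (1, 1), (1.5, 2), (2, 0)], 0.5)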
PS: If you want to avoid that strange transition when you load the page (all paths coming from the top left corner), draw them the first time using a regular attr method.

Field of view / convexity map

Given a shape in a logical image, I am trying to extract the field of view from any point inside the shape in Matlab:
I tried testing every line going through the point, but it is really slow. (I hope to do this for every point of the shape, or at least every point of its contour, which means running it quite a few times.)
I think a faster method would work iteratively, by expanding a disk from the point under consideration, but I am not sure how to implement it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on Matlab Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of endpoints uniformly distributed on a circle of radius maxdist around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint); % one endpoint per angle, at distance maxdist
intersec=zeros(numel(angles),2); % preallocate the intersection points
for k=1:numel(angles)
% sample the image along the ray from the viewpoint to the k-th endpoint
[CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
idx=find(C); % first nonzero sample: an obstacle pixel, or NaN once the ray leaves the image
intersec(k,:)=[CX(idx(1)), CY(idx(1))];
end
This draws a line from the viewpoint in each direction specified in the array angles and looks for the position of the first intersection with an obstacle or with the edge of the image.
This should help visualize the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):

drag drawLine jcanvas optimisation coordinates

Considering this basic case, one might expect the coordinates of the layer to be updated... but they are not.
Instead, it is possible to remember the starting point, compute the mouse offset and then update the coordinates, as in this test, but... the effect is quite extreme.
Expected: point x1,y1 is static
Result: point x1,y1 moves incredibly fast
If the coordinates are set to constants, the drag behaves the same.
The main problem here is that the drag action applies to the whole layer.
Fix: apply the modification at the end of the drag, as in this snippet.
But it is relatively ugly. Does anyone have a better way to
get the actual coordinates of the points of the line while dragging
keep one point of the line static while the others are moving
Looking forward to your suggestions.
In order to maintain the efficiency of dragging layers, jCanvas only offsets the x and y properties for any draggable layer (including paths). Therefore, when dragging, you can compute the absolute positions of any set of path coordinates using something along these lines:
var absX1 = layer.x + layer.x1; // the layer's drag offset plus the point's original x1
var absY1 = layer.y + layer.y1; // likewise for y
(assuming layer references a jCanvas layer, of course)

How to implement origin/anchor point in GLKit scene graph?

I'm trying to implement a simple scene graph on iOS using GLKit but handling origin/anchor points is giving me fits. The requirements are pretty straightforward:
There is a graph of nodes, each with translation, rotation, scale and origin point.
Each node combines the properties above into a single matrix (which is multiplied by its parent's matrix if it has a parent).
Nodes need to honor their parent's coordinate system, including the origin point (i.e. barring translations, etc., a child's origin should line up with the parent's origin).
So the question is:
What operations (e.g. translationMatrix * rotationMatrix * scaleMatrix, etc.) need to be performed and in what order so as to achieve the proper handling of origin/anchor points?
P.S. - If you are kind enough to post an answer, please mention whether it is based on column-major or row-major matrices - that's a perennial source of confusion for me.
Have a look at both SpriteKit and SceneKit. Both APIs provide the building blocks for creating scene graphs on iOS.
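Regarding the ordering question itself, here is a sketch of one common composition (in Python with numpy rather than GLKit, using the column-major/column-vector convention that OpenGL and GLKit follow, so v' = M · v and the rightmost matrix applies first; 2D homogeneous matrices keep the illustration short, and treating position as "where the origin point lands in the parent's space" is an assumed semantic, not something SpriteKit/SceneKit are claimed to do):

import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def scaling(sx, sy):
    return np.diag([sx, sy, 1.0])

def node_matrix(position, theta, scale, origin, parent=None):
    # Rightmost matrix applies first: move the origin point to (0,0),
    # scale, rotate, then place the origin point at `position` in the
    # parent's coordinate system.
    local = (translation(*position) @ rotation(theta)
             @ scaling(*scale) @ translation(-origin[0], -origin[1]))
    return local if parent is None else parent @ local

# The node's origin point maps exactly to `position` in parent space:
M = node_matrix(position=(50, 20), theta=0.3, scale=(2, 2), origin=(5, 5))
print(M @ np.array([5.0, 5.0, 1.0]))   # -> [50. 20.  1.]

In a row-major/row-vector convention (v' = v · M) the same composition is written in the reverse order.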

Best approach for specific Object/Image Recognition task?

I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all. So, no fill, just lines. The rectangle will always have the same length-to-width ratio, but it could appear at any size or rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You will have to try whether that is good enough for your application. If you get too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check whether there are lines (i.e. high contrast in the right direction) where you would expect them, and whether there is low contrast where the pattern has no lines. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings in the object you're looking for.
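Putting those two steps together, here is a hedged sketch (Python with OpenCV and SciPy; the input file name, the exact template layout and all thresholds are illustrative assumptions, not part of the original answer):

import cv2
import numpy as np
from scipy.spatial import cKDTree

# Step 1: candidate corner points via the Harris detector.
img = cv2.imread("photo.png")                      # hypothetical file name
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(response > 0.01 * response.max())
corners = np.column_stack([xs, ys]).astype(float)
# A real implementation would apply non-maximum suppression here so
# that each physical corner yields one point instead of a small blob.

# Step 2: search for the 9-point configuration. Hypothetical template
# in a frame where the upper-left rectangle corner is (0,0) and the
# lower-right is (1,1); the X endpoints are assumed slightly inset.
TEMPLATE = np.array([
    [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],    # rectangle corners
    [0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9],    # X line endings
    [0.5, 0.5],                                         # X center
])

def find_symbol(corners, tol=5.0):
    # Try each ordered pair (A, B) as the upper-left/lower-right corners;
    # accept if all 9 expected points have a detected corner within tol
    # pixels. As the answer suggests, restricting B to points near A
    # would keep this cheap on real images.
    tree = cKDTree(corners)
    for a in corners:
        for b in corners:
            za, zb = complex(*a), complex(*b)
            if za == zb:
                continue
            # Similarity transform (rotation + uniform scale) sending the
            # template's (0,0) to A and (1,1) to B, via complex numbers.
            s = (zb - za) / complex(1.0, 1.0)
            z = za + s * (TEMPLATE[:, 0] + 1j * TEMPLATE[:, 1])
            expected = np.column_stack([z.real, z.imag])
            dists, _ = tree.query(expected)
            if np.all(dists < tol):
                return expected    # candidate detection
    return None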
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The Generalized Hough Transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. See the feature detection wiki.
For example, you can calculate features on some reference image which contains only the object you're looking for and save the results, say, to a plain text file. After that you can search for the object just by comparing newly calculated features (on images of complex scenes containing the object) with the reference ones.
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey
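As a hedged sketch of that workflow (Python with OpenCV; the file names are placeholders, and a sparse line drawing may yield few ORB keypoints, so this particular detector may need tuning or swapping for another):

import cv2

reference = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)  # object only
scene = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)       # cluttered scene

orb = cv2.ORB_create()  # rotation- and scale-tolerant binary features
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_scene, des_scene = orb.detectAndCompute(scene, None)
if des_ref is None or des_scene is None:
    raise SystemExit("no keypoints found; ORB may struggle on sparse line art")

# Hamming distance is the appropriate metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_scene), key=lambda m: m.distance)

# Many low-distance matches suggest the object is present; the matched
# keypoint positions in the scene indicate where it is.
print(len(matches), "matches; best distance:",
      matches[0].distance if matches else None)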
