I am trying to adapt Mike Bostock's constrained zoom example (http://bl.ocks.org/mbostock/4987520) to fit my needs. Is there any way to calculate the geo bounds (in long/lat) of the projection when the map is zoomed? The d3.geo.bounds() method expects a 'feature' -- I really don't want to zoom on any particular feature. All I want is the geo bounds for the visible area of the projection.
Thanks in advance,
Kishore
My other answer was a misreading of the question, but I'll leave it there in case someone else misreads the question in the same way.
To find the bounding box of the visual area of your map on screen, simply use the projection.invert() function and feed it the top-left and bottom-right corners of your SVG. If you have a 500x500 SVG, then that looks like this:
projection.invert([0,0])
projection.invert([500,500])
This is a bounding box of your screen, in long/lat (note that d3 projections invert to [longitude, latitude] pairs).
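Since the original question involves a zoomed map, note that in Bostock's example the zoom behavior transforms the <g> element rather than the projection, so you have to undo the zoom transform before inverting. A minimal sketch, assuming d3 v3's zoom behavior (zoom.translate() and zoom.scale()) and a width-by-height SVG:
var t = zoom.translate(), s = zoom.scale();
// map the screen corners back into untransformed projection space, then invert
var topLeft = projection.invert([(0 - t[0]) / s, (0 - t[1]) / s]);
var bottomRight = projection.invert([(width - t[0]) / s, (height - t[1]) / s]);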
After that, you can get the bounds of your features and test to see if they are fully-contained or intersecting or have their centroid within those bounds. I'm not going to explain how to do that here, because that's a different question with many different answers depending on which definition of "within these bounds" you decide on.
I'm not aware of any built-in functionality to give the bounds of a set of features, but here's a pretty simple function that does it:
function boundingExtent(features) {
  // start from an empty extent, not [[0,0],[0,0]], so the result isn't
  // artificially stretched to include the origin
  var boundExtent = [[Infinity, Infinity], [-Infinity, -Infinity]];
  for (var x in features) {
    var thisBounds = d3.geo.bounds(features[x]);
    boundExtent[0][0] = Math.min(thisBounds[0][0], boundExtent[0][0]);
    boundExtent[0][1] = Math.min(thisBounds[0][1], boundExtent[0][1]);
    boundExtent[1][0] = Math.max(thisBounds[1][0], boundExtent[1][0]);
    boundExtent[1][1] = Math.max(thisBounds[1][1], boundExtent[1][1]);
  }
  return boundExtent;
}
With that, you can just pass the array of features to boundingExtent(featureArray) and it will give you back a bounding box for your entire set.
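For example, assuming geojson is a GeoJSON FeatureCollection you've already loaded:
var box = boundingExtent(geojson.features); // [[west, south], [east, north]]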
Related
From a shape in a logical (binary) image, I am trying to extract the field of view from any point inside the shape in MATLAB:
I tried something involving testing each line going through the point, but it is really, really slow. (I hope to do it for each point of the shape, or at least each point of its contour, which is quite a few points.)
I think a faster method would work iteratively, by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solutions would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on MATLAB Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500); % 500x500 black background
testscene(80:120,80:120)=1; % three white rectangular obstacles
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300]; % arbitrary point of view
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of end points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec=zeros(numel(angles),2); % preallocate the intersection points
for k=1:numel(angles)
    [CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
    idx=find(C,1); % first nonzero sample: an obstacle (or NaN once the profile leaves the image)
    if isempty(idx) % no intersection found: keep the last point of the profile
        idx=numel(CX);
    end
    intersec(k,:)=[CX(idx), CY(idx)];
end
This draws a line from the viewpoint in each direction specified in the array angles and looks for the position of the first intersection with an obstacle or the edge of the image.
This should help visualize the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):
Considering this basic case, one might expect the coordinates of the layer to be updated while dragging... but they are not.
Instead, it is possible to remember the starting point, compute the mouse offset, and then update the coordinates, like in this test, but... the effect is quite extreme.
Expected: point x1,y1 is static
Result: point x1,y1 moves incredibly fast
If the coordinates are set to constants, the drag behaves the same.
The main problem here is that the drag action applies to the whole layer.
Fix: apply the modification at the end of the drag, like in this snippet. But it is relatively ugly. Does anyone have a better way to
get the actual coordinates of the points of the line while dragging
keep one point of the line static while the others are moving
Looking forward to your suggestions,
In order to maintain the efficiency of dragging layers, jCanvas only offsets the x and y properties for any draggable layer (including paths). Therefore, when dragging, you can compute the absolute positions of any set of path coordinates using something along these lines:
var absX1 = layer.x + layer.x1;
var absY1 = layer.y + layer.y1;
(assuming layer references a jCanvas layer, of course)
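For example, here is a minimal sketch using jCanvas's drag event callback (the layer properties and coordinates below are just illustrative):
$('canvas').drawLine({
  layer: true,
  draggable: true,
  strokeStyle: '#000',
  strokeWidth: 2,
  x1: 50, y1: 50,
  x2: 150, y2: 100,
  drag: function(layer) {
    // absolute position of the line's first point while dragging
    var absX1 = layer.x + layer.x1;
    var absY1 = layer.y + layer.y1;
  }
});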
How can I transform the coordinates in a window from (0,0) at the top-left to (0,0) at the bottom-left?
I have tried various solutions with:
SetMapMode(hdc,MM_TEXT);
SetViewportExtEx(hdc,0,-clientrect.bottom,NULL);
SetViewportOrgEx(hdc,0,-clientrect.bottom,NULL);
SetWindowOrgEx(hdc,0,-clientrect.bottom,NULL);
SetWindowExtEx(hdc,0,-clientrect.bottom,NULL);
I have even tried Google for a solution, but to no avail, so I turn to you, the more experienced people on the internet.
The idea is that I'm creating a custom control for linear interpolation, and I could reverse the coordinate system by putting x,y in the top-right corner, but I want it done right. At the moment I get a reversed linear interpolation when I try to draw it, as I cannot get the coords to be bottom-left.
I'm using the Win32 API, and I suspect I can skip posting the code, as the screen coordinate system is almost identical on all systems; by that I mean (0,0) is "always" top-left on the screen if you are keeping to a standard 2D window and frames.
I really don't want a whole code sample (to ease the typing pain for you guys), but I want some direction, as it seems I cannot grasp the simple concept of flipping the coords in the Win32 API.
Thanks, and a merry Christmas!
EDIT!
I would like to add my own answer to this question, as I used simple math to reverse the view, so to say.
If, for example, I have the value pair x,y (150, 57) and another pair x,y (100, 75), then I use the formula height + (-1 * y), i.e. height - y, and voila, I get a proper Cartesian coordinate field :) For the first pair this gives a new y of 200 + (-1 * 57) = 143. Of course, in this example height is an undefined variable, but in my application it's 200px.
According to the documentation for SetViewportOrgEx, you generally want to use it or SetWindowOrgEx, but not both. That said, you probably want the viewport origin to be (0, clientrect.bottom), not (0, -clientrect.bottom).
Setting transforms with GDI always made me crazy. I think you're better off using GDI+. With it, you can create a matrix that describes a translation of (0, clientRect.bottom), and a scaling of (1.0, -1.0). Then you can call SetWorldTransform.
See the example at Using Coordinate Spaces and Transformations. For general information about transforms: Coordinate Spaces and Transformations.
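If it helps, a minimal sketch of that matrix approach in plain GDI terms (assuming clientRect holds the client area; SetWorldTransform only works after switching the DC to the advanced graphics mode):
SetGraphicsMode(hdc, GM_ADVANCED);
// eM22 = -1 flips the y axis; eDy moves the origin to the bottom of the client area
XFORM xf = { 1.0f, 0.0f, 0.0f, -1.0f, 0.0f, (FLOAT)clientRect.bottom };
SetWorldTransform(hdc, &xf);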
Additional information:
I've not tried this with direct Windows API calls, but if I do the following in C# using the Graphics class (which is a wrapper around GDI+), it works:
Graphics g = GetGraphics(); // gets a canvas to draw on
g.TranslateTransform(0, clientRect.Bottom);
g.ScaleTransform(1.0f, -1.0f);
That puts the origin at the bottom left, with x increasing to the right and y increasing as you go up. If you use SetWorldTransform as I suggested, the above will work for you.
If you have to use GDI, then you'll want to use SetViewportOrgEx(0, clientRect.bottom), and then set the scaling. I don't remember how to do scaling with the old GDI functions.
Note also that the documentation for SetViewportExtEx says:
When the following mapping modes are set, calls to the SetWindowExtEx
and SetViewportExtEx functions are ignored.
MM_HIENGLISH
MM_HIMETRIC
MM_LOENGLISH
MM_LOMETRIC
MM_TEXT
MM_TWIPS
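Putting those pieces together: since the extents are only honored in the MM_ISOTROPIC and MM_ANISOTROPIC modes, a minimal sketch of the classic GDI y-flip (untested here, but these are the standard calls) would be:
SetMapMode(hdc, MM_ANISOTROPIC);
SetWindowExtEx(hdc, clientrect.right, clientrect.bottom, NULL);    // logical extent
SetViewportExtEx(hdc, clientrect.right, -clientrect.bottom, NULL); // negative y flips the axis
SetViewportOrgEx(hdc, 0, clientrect.bottom, NULL);                 // origin at the bottom-left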
Here's my problem - I have a map of the world or some sort of region, like this:
I need to generate a "border points" table for this map of a region in order to generate imagemaps and dynamically highlight certain areas. All of the maps' regions will have borders of one color to define them (in the example image, white).
So far, I'm thinking of some sort of flood-fill based method - note that speed and efficiency are not that important, as the script is in no way intended to be used in real time.
Is there a better way to do this that I don't know of? Is my approach fundamentally wrong? Any suggestions would be appreciated!
If the regions are completely isolated from one another, looking at connected components will do the trick. In Mathematica it looks like this:
First create a binary image from the world map:
regions = ColorNegate[Binarize[img, .9]]
Then compute the connected components:
components = MorphologicalComponents[regions, CornerNeighbors -> False];
Now you may extract properties for each of the components (masks, perimeters, etc.). Here I colorized each region with a unique color:
Colorize[components]
To get the border of a given component, one can query for the binary mask of the component and then compute the perimeter.
This gets all the masks:
masks = ComponentMeasurements[components, "Mask"];
As an example, get the border, or contour, of one region:
country = Image[masks[[708, 2]], "Bit"]
border = MorphologicalPerimeter[country]
Getting 2D positions for the border is just a matter of extracting the white pixels in the image:
pos = Position[ImageData[border], 1]
If possible, try to get the vector data behind your map from another source. I understand this doesn't answer your question, but for world borders (and many others) you can find them publicly on the internet (google for "world borders shapefile"). This will give you more precise data, allow you to zoom at any level, reproject your map, use Google Maps or other layers, etc. You can display the vector data nicely with libraries like OpenLayers, but then you're slowly moving towards more complex GIS stuff.
If all you really need is based on an image, your flood fill approach might work (if the borders are indeed completely closed).
I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code, but I'm struggling to find out what will create similar effects. The closest reference I could find was the iWarp filter in GIMP, but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic, I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors.)
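As a rough illustration, here is a minimal HLSL pixel-shader sketch of that idea, assuming the two textures are bound as samplers and the offsets are stored biased by 0.5:
sampler2D inputImage;
sampler2D distortionMap;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // read the stored offset and undo the 0.5 bias so negative offsets work
    float2 offset = tex2D(distortionMap, uv).rg - 0.5;
    return tex2D(inputImage, uv + offset);
}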
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously, working on a proper liquify effect is quite complex and I'll leave it to someone more qualified).
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now when the user clicks on a location and moves the mouse, he's changing the grid locations.
The new grid is again projected into the 2D viewable space of the user.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
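As a rough sketch of that idea (the exact normalization in the tutorial may differ; this helper is hypothetical):
// map a point to polar coordinates around a center, distort the radius
// with Math.sqrt, and map back to Cartesian coordinates
function liquifyPoint(x, y, cx, cy, radius) {
  var dx = x - cx, dy = y - cy;
  var r = Math.sqrt(dx * dx + dy * dy);
  var a = Math.atan2(dy, dx);
  var rNew = Math.sqrt(r / radius) * radius; // sqrt pushes points outward (a bulge)
  return [cx + rNew * Math.cos(a), cy + rNew * Math.sin(a)];
}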