Zoom to show all locations in bing maps - windows-phone-7

Say I have 3 pushpins: (1) California, (2) Florida, (3) New York. For all 3 of them to be visible, I'd have to zoom out far enough to see pretty much the whole country. But say instead I had (1) California, (2) Nevada, (3) Texas. I'd only have to zoom out far enough to cover the southwest corner of the US. Is there any function in the Bing Maps for Windows Phone 7 API that helps me with this? Basically, I want to zoom out just enough to see a set of locations.
Thanks!

Yes, it is possible. CurrentItems is the source for my map:
var locations = CurrentItems.Select(model => model.Location);
map.SetView(LocationRect.CreateLocationRect(locations));

I don't know of a function that will do what you want directly, but you can find the bounding box that just surrounds all of your locations, and then set the viewport to that extent.
Start with an inverted box whose bottom-left corner is (maxVal, maxVal) and whose top-right corner is (-maxVal, -maxVal). Then loop over all your points, resetting the bottom-left corner if a point is less than its current value, and the top-right corner if a point is greater than its current value.
The end result is the smallest box that everything fits into. Add a little to its size to cope with rounding error and to make sure your pins are actually on the map, then set the extent of the viewport.
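The inverted-box loop described above is easy to sketch. Here is a minimal, library-free version (Python used only for illustration; in the WP7 API itself, LocationRect.CreateLocationRect does this for you, as the other answer shows). The padding value and the sample pin coordinates are made-up illustrations of the "add a little to the size" step.

```python
def bounding_box(points, padding=0.05):
    """Return (south, west, north, east) of the smallest box containing
    all (lat, lon) points, expanded by `padding` degrees on each side."""
    if not points:
        raise ValueError("no points given")
    # Start with an inverted box so the first point always shrinks it.
    south = west = float("inf")
    north = east = float("-inf")
    for lat, lon in points:
        south, north = min(south, lat), max(north, lat)
        west, east = min(west, lon), max(east, lon)
    return (south - padding, west - padding, north + padding, east + padding)

pins = [(34.05, -118.24),  # roughly California (Los Angeles)
        (36.17, -115.14),  # roughly Nevada (Las Vegas)
        (29.76, -95.37)]   # roughly Texas (Houston)
print(bounding_box(pins))
```

Setting the map's view to the returned box then shows exactly that southwest corner of the US.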

Related

Windows HtmlHelp popup on wrong monitor

I'm using the HTMLHelp function from C to provide short, context-sensitive help for the edit fields on various dialogs. It works perfectly except when the dialog is located on a monitor whose screen has negative X coordinates.
For example, I have three monitors and the top, left point on the CENTER of the three is the (0,0) point on the screen. That makes ALL X-coordinates on the LEFT screen have negative values. When I call HTMLHelp for HH_DISPLAY_TEXT_POPUP, the tooltip it displays shows up stuck to the left edge of the CENTER screen instead of on the LEFT screen, where it belongs. When the coordinate for the help popup is on the center or right screens, the popup is exactly where it should be.
Does anyone know a way to get the HTMLHelp function to work correctly and just use the given coordinates instead of applying an invalid range check and "fixing" my X location?
If not, I guess I will be forced to write my own version of a help popup function, but I'd much rather use the system feature.

How can I pick an intelligent location from a list of points/coordinates on 2D tile map?

So I have an ArrayList of the locations of each pixel (the pixels represent resources in my game). arrayList.get(0) gives me the top-right point; the list then continues by incrementing the x coordinate. When it reaches the end of a row, it increments the y coordinate and repeats until it reaches the last point (bottom-left) at the end of the list.
I want to build a factory site, but first I need to pick an intelligent location to build my factory/resource site. See picture (red is an ideal location).
If I simply take an average of all points, it would give me somewhere around orange arrow (I'm guessing).
Sounds like you're looking for the highest point density. This StackOverflow answer discusses a couple of algorithms, the first doing box convolution.
Of course if each pixel just has value 1 this may be more than you need.
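A hedged sketch of the box-convolution idea, assuming the resource points have been rasterized into a small occupancy grid; the grid and window sizes below are made-up illustration values, and a real implementation over a large map would use a summed-area table instead of this brute-force inner sum.

```python
def densest_window(points, grid_w, grid_h, win=3):
    """Return (x, y, count) for the top-left corner of the win x win
    window containing the most resource points."""
    grid = [[0] * grid_w for _ in range(grid_h)]
    for x, y in points:
        grid[y][x] = 1
    best = (0, 0, -1)
    for y in range(grid_h - win + 1):
        for x in range(grid_w - win + 1):
            # Brute-force box convolution: sum the occupancy in the window.
            count = sum(grid[y + dy][x + dx]
                        for dy in range(win) for dx in range(win))
            if count > best[2]:
                best = (x, y, count)
    return best

# Four clustered resources and one outlier; the dense cluster wins.
points = [(1, 1), (2, 1), (1, 2), (2, 2), (7, 7)]
print(densest_window(points, grid_w=10, grid_h=10, win=3))
```

The window with the highest count is a reasonable candidate site for the factory; the outlier at (7, 7) is ignored, which is the behaviour the question wants (unlike a plain average of all points).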

How do you get the view coordinates of a CListCtrl in LVS_REPORT style on Win XP

I'm trying to coordinate the scrolling of a CListCtrl with another control. On Windows 7, contrary to the documentation, you can call CListCtrl::GetViewRect or CListCtrl::GetOrigin to get the viewable-area coordinates.
e.g. If you're scrolled 10 units across, CListCtrl::GetOrigin will return x=10, y=0.
Unfortunately, Windows XP does follow the SDK documentation, which says "... if the control is in report view, the return value is always zero".
I'm sure this must be really simple, but what's the best way to get the top-left coordinates of a CListCtrl's viewable area?
It turns out that GetScrollInfo will do the trick. The nPos value matches the window coordinates (i.e. the min/max range represents the total size of the columns, not a fixed 0-100 range).

How can I make a Google Maps-like interface

Hi, I want to create a Google Maps-like interface from very high resolution maps, e.g. 11k x 11k. My goal is to be able to pan/zoom and put pins on desired locations on the map. I was able to implement zooming/panning of the image using the Mapbox plugin. My question is how to place a pin at an (x, y) location on the image and keep the pin in place while zooming in/out.
Thanks
The easiest way would be to treat the entire image as an 11k by 11k grid with (0,0) in the top-left corner. A pin is then located at a fixed (x, y) on that grid. When you scale or pan the image, you treat the current view as a subset of the main grid.
For example, the view might start at (5000, 3500) and be 500 by 500 pixels. If a pin's coordinates fall inside that window, you calculate where to place it by subtracting the view's origin.
Let's say you have two pins: {(5230, 3550), (5400, 3700)}.
Now, if you're zoomed in on (5000, 3500) and the viewport is 500x500, your pins' on-screen locations are:
{(230, 50), (400, 200)}
Exactly how you do the translations will depend on how you're handling zooming/panning, but that's the general idea.
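A minimal sketch of that world-to-viewport translation (Python used for illustration; the thread doesn't name a language). The viewport origin and size are the example values from the answer above; the `scale` parameter is a hypothetical zoom factor, not something from the thread.

```python
def world_to_view(pin, view_origin, view_size, scale=1.0):
    """Map a pin's world (x, y) to viewport coordinates, or return
    None if the pin falls outside the visible region."""
    vx = (pin[0] - view_origin[0]) * scale
    vy = (pin[1] - view_origin[1]) * scale
    if 0 <= vx < view_size[0] * scale and 0 <= vy < view_size[1] * scale:
        return (vx, vy)
    return None

# The two example pins, with the view at (5000, 3500), 500x500 pixels.
pins = [(5230, 3550), (5400, 3700)]
print([world_to_view(p, (5000, 3500), (500, 500)) for p in pins])
```

Because the pin is stored in world coordinates and only converted at draw time, it stays anchored to the same image feature no matter how you zoom or pan.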

Best approach for specific Object/Image Recognition task?

I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all: no fill, just lines. The rectangle will always have the same ratio of length to width, but it could be at any size or rotation in the photograph.
I've looked at a whole bunch of image-recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You'll have to try whether that is good enough for your application. If you get too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could additionally check whether there are lines (i.e. high contrast in the right direction) where you would expect them, and whether there is low contrast where the pattern has no lines. This should be relatively straightforward once you know which points in the image correspond to the corners/line endings of the object you're looking for.
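The pairwise corner-matching idea above can be sketched as follows. This is a simplified, brute-force Python illustration assuming an axis-aligned (unrotated) rectangle; the full version described in the answer would also hypothesize rotations and scales and use a nearest-neighbour library like FLANN instead of a linear scan. Note that for this shape, the X's line endings coincide with the rectangle's corners, so checking the four corners plus the X's centre is enough.

```python
import math

def match_rect(points, tol=2.0):
    """Try each pair (A, B) of detected corners as opposite rectangle
    corners; accept if the other two corners and the X-centre are also
    present (within `tol` pixels) among the detected points."""
    def has_near(p):
        return any(math.dist(p, q) <= tol for q in points)
    for a in points:
        for b in points:
            if b <= a:                 # consider each unordered pair once
                continue
            ax, ay = a
            bx, by = b
            expected = [(ax, by), (bx, ay),               # other corners
                        ((ax + bx) / 2, (ay + by) / 2)]   # X intersection
            if all(has_near(p) for p in expected):
                return (a, b)
    return None

# Corners of a 10x6 box, its X-centre, and one noise point.
corners = [(0, 0), (10, 0), (0, 6), (10, 6), (5, 3), (42, 17)]
print(match_rect(corners))
```

The noise point (42, 17) never completes a configuration, so it is rejected; in a real image you would also verify the connecting edges, as the answer suggests.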
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape, and the generalized Hough transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. Feature detection wiki
For example, you can calculate features on some reference image which contains only the object you're looking for and save the results to, let's say, a plain text file. After that you can search for the object just by comparing newly calculated features (on images of complex scenes containing the object) with the reference ones.
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey