How can I make a Google Maps-like interface? - ajax

Hi, I want to create a Google Maps-like interface from very high resolution maps, e.g. 11k x 11k resolution. My goal is to be able to pan/zoom and put pins on desired locations on the map. I was able to implement zooming/panning of the image using the Mapbox plugin. My question is how to place a pin at an (x,y) location on the image and keep the pin in place while zooming in/out.
Thanks

The easiest way would be to treat the entire image as an 11k by 11k grid with (0,0) in the top left corner. A pin is then located at a fixed (x,y) on that grid. When you scale or pan the image, you treat the new view as a subset of the main grid.
For example, the view may start at (5000, 3500) and be 500 by 500 pixels. If a pin falls inside those coordinates, you calculate where to place it on screen by subtracting the view's origin.
Let's say you have two pins: {(5230, 3550), (5400, 3700)}.
Now, if you're zoomed in on (5000, 3500) and the viewport is 500x500, your pins' on-screen locations are:
{(230, 50), (400, 200)}
Exactly how you do the translations will depend on how you're handling zooming/panning, but that should be the general idea.
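A minimal sketch of that translation in TypeScript (the viewport description and all names here are assumptions for illustration):

// Hypothetical types: pins live in fixed image coordinates on the 11k x 11k
// grid; the viewport is described by its top-left corner in image pixels
// plus a zoom scale (screen pixels per image pixel).
interface Pin { x: number; y: number; }
interface Viewport { left: number; top: number; scale: number; }

// Convert a pin's fixed image coordinates to screen coordinates for the
// current viewport. Re-run this on every pan/zoom event and reposition
// the pin's DOM element (or redraw it on a canvas).
function toScreen(pin: Pin, view: Viewport): { x: number; y: number } {
  return {
    x: (pin.x - view.left) * view.scale,
    y: (pin.y - view.top) * view.scale,
  };
}

// The example above: pin (5230, 3550), viewport origin (5000, 3500), 1:1 zoom.
console.log(toScreen({ x: 5230, y: 3550 }, { left: 5000, top: 3500, scale: 1 }));
// -> { x: 230, y: 50 }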

Related

Stack drone images in IDL - overlap degree

I have about 6000 aerial images taken by a 3DR drone over vegetation sites.
The images have to overlap to some extent because the drone flights cover the area going east-west and then again north-south, so the images show the same area from two directions. I need the overlap between the images for extra accuracy.
I don't know how to write code in IDL to combine the images and create that overlap. Can anyone help please?
Thanks
What you need is something identifiable that occurs in both images. Preferably you would have several such features across the field of view, so that you could recover the correct rotation as well as a simple x-y shift.
The basic steps you will need to follow are:
1. Source identification - identify sources in all images that will later be used to align them. Make sure the centering of these sources is good so that they align better later.
2. Basic alignment - start with a guess of where the images should align, then try to match the sources.
3. Match the sources (a sketch of this step follows the list) - there are several libraries that can do this for stars in astronomical images that could be adapted for this.
4. Shift and rotate the images - this can be done to the pixels themselves, or recorded in a header that is read in so a program can manipulate the pixels on the fly.
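The question asks about IDL, but the shift-matching step is easy to sketch in any language. Here is a rough TypeScript version of a brute-force x-y shift search that minimizes the mean squared difference over the overlap; the flat grayscale arrays and the search radius are assumptions for illustration, and rotation is left out:

// Find the (dx, dy) shift that best aligns image b onto image a by
// minimizing the mean squared difference over their overlap. Both images
// are grayscale, stored row-major in flat arrays of size w*h; maxShift
// must be smaller than both dimensions.
function bestShift(
  a: Float64Array, b: Float64Array,
  w: number, h: number, maxShift: number
): { dx: number; dy: number } {
  let best = { dx: 0, dy: 0 };
  let bestScore = Infinity;
  for (let dy = -maxShift; dy <= maxShift; dy++) {
    for (let dx = -maxShift; dx <= maxShift; dx++) {
      let sum = 0, n = 0;
      // Only compare the region where the shifted images overlap.
      for (let y = Math.max(0, dy); y < Math.min(h, h + dy); y++) {
        for (let x = Math.max(0, dx); x < Math.min(w, w + dx); x++) {
          const d = a[y * w + x] - b[(y - dy) * w + (x - dx)];
          sum += d * d;
          n++;
        }
      }
      const score = sum / n; // normalize so different overlap sizes compare fairly
      if (score < bestScore) { bestScore = score; best = { dx, dy }; }
    }
  }
  return best;
}

In practice you would run this on small cutouts around the identified sources rather than on whole frames, and add a rotation search on top of the shift.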

Create heightmap in C# based on elevation data

I recently came across a site,
http://www.daftlogic.com/sandbox-google-maps-find-altitude.htm
which gives me the elevation data of any point at any location on Earth.
Now I want to build a desktop application in C#, using the Google Maps API or otherwise, in which the user searches for a location and an image of that location is displayed, where the user can zoom in or out and select the required area. After selection, the tool will get elevation data for every point in the selected area and give each point a corresponding color, so the highest point will be white and the lowest point black, and then convert the data into an image format. If I can do this, I can basically create a heightmap in seconds.
Can anyone tell me how to achieve this? I don't have any idea how to get the elevation data for every point in the selected region, or how to calculate and create a color image from that data.
Thanks.
I've had a look at the Google address you've pasted. My answer here is not exactly about that, but it might help you understand how elevation works. Google, like many other websites, uses a DEM (Digital Elevation Model: http://en.wikipedia.org/wiki/Digital_elevation_model), which is a rasterized image of a certain area in which every pixel represents a real sample from the satellite/shuttle (or the interpolation between two sampled points).
These maps are quite huge, depending on the sampling frequency. For some areas of the globe the sampling is very dense, while for other areas it is sparser. The USA has the best detail.
If you want to download free DEMs, this is a good starting point: http://srtm.csi.cgiar.org/SELECTION/inputCoord.asp
You could download a sample of every part of the area you want to include in your application, convert the data into a database (latitude, longitude, altitude), and have your application query the DB and return a set of pixels that you can paint in different colors according to your altitude ranges.
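As a sketch of the coloring step (TypeScript rather than C#, since the mapping is language-agnostic; it assumes you already have the selected area's elevations in a 2-D array):

// Map a grid of elevation samples to 8-bit grayscale: the lowest point
// becomes black (0), the highest white (255), linearly in between.
function elevationToGrayscale(elev: number[][]): number[][] {
  let min = Infinity, max = -Infinity;
  for (const row of elev) {
    for (const v of row) {
      if (v < min) min = v;
      if (v > max) max = v;
    }
  }
  const range = max - min || 1; // guard against perfectly flat terrain
  return elev.map(row => row.map(v => Math.round(((v - min) / range) * 255)));
}

Writing each resulting value into the R, G and B channels of a bitmap gives you the grayscale heightmap.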
Hope this helped

What is the main idea of creating a click heatmap?

In one of my projects, I would like to create a heatmap of user clicks. I searched for a while and found this library - http://www.patrick-wied.at/static/heatmapjs/examples.html . That is basically exactly what I would like to make. The only difference is that I would like to create the heatmap in SVG, if possible.
I would like to build my own heatmap and I'm wondering how to do that. I have the XY position of each click. Clicks mostly have different XY positions, but there can be exceptions from time to time; a few clicks can have the same XY position.
I found a few solutions based on a grid over the website, where you check which clicks belong to the same cell of the grid, and based on that information you fill the most-clicked cells with red or orange and so on. But that seems a little complicated to me, and maybe slow for bigger grids.
So I'm wondering if there is another way to "calculate" heatmap colors, or I would like to know the main idea used in the library above.
Many thanks
To make this kind of heat map, you need some kind of writable array (or, as you put it, a "grid"). User clicks are added onto this array in a cumulative fashion, by adding a small "filter" sub-array (aligned around each click) to the writable array.
Unfortunately, this "grid" method seems to be the easiest, simplest way to get that kind of smooth, blobby appearance. Fortunately, this kind of operation is well-supported by software and hardware, under the name "computer graphics".
When considered as a computer graphics operation, the writable array is called an "accumulation buffer". The filter is what gives you the nice blobby appearance, even with a relatively small number of clicks -- you can tweak the size of the filter according to the needs of your application.
After accumulating the user clicks, you will need to convert from the raw accumulated values to some kind of visible color scale. This may involve looking through the entire accumulation buffer to find the largest value, and mapping your chosen color scale accordingly. Alternatively, you could adjust your scale according to the number of mouse clicks, or (as in the demo you linked to) just choose a fixed scale regardless of the content of the buffer.
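A minimal sketch of that pipeline in TypeScript (the buffer size, the Gaussian filter radius and the blue-to-red ramp are all arbitrary choices for illustration; click coordinates are assumed to be integers):

// Accumulation buffer covering the tracked page area, one cell per pixel.
const W = 800, H = 600;
const buffer = new Float64Array(W * H);

// Stamp a small Gaussian "filter" onto the buffer around each click, so a
// single click already produces a smooth blob rather than a single dot.
function addClick(cx: number, cy: number, radius = 20): void {
  const sigma = radius / 2;
  for (let y = Math.max(0, cy - radius); y <= Math.min(H - 1, cy + radius); y++) {
    for (let x = Math.max(0, cx - radius); x <= Math.min(W - 1, cx + radius); x++) {
      const d2 = (x - cx) ** 2 + (y - cy) ** 2;
      buffer[y * W + x] += Math.exp(-d2 / (2 * sigma * sigma));
    }
  }
}

// Map an accumulated value to a color, scaled by the buffer's current
// maximum: cold blue for rarely clicked cells, hot red for hotspots.
function valueToColor(v: number, max: number): [number, number, number] {
  const t = max > 0 ? v / max : 0; // normalize to 0..1
  return [Math.round(255 * t), 0, Math.round(255 * (1 - t))];
}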
Finally, I should mention that SVG is not well-adapted to representing this kind of graphic. It should probably be saved as some kind of image file (.jpg or .png) instead.

Algorithm to interpolate any view from individual views mapped on a sphere

I'm trying to create a graphics engine to show point cloud data (in first person for now). My idea is to precalculate individual views from different points in the space we are viewing and map them onto a sphere. Is it possible to interpolate that data to determine the view from any point in the space?
I apologise for my English and my poor explanation, but I can't figure out another way to explain it. If you don't understand my question, I'll be happy to reformulate it if needed.
EDIT:
I'll try to explain it with an example
(Image 1 and Image 2 omitted: two screenshots of the same scene, one with a far view of the pumpkin and one with a close-up view.)
In these images we can see two different views of the pumpkin (imagine that we have a sphere map of the 360° view in both cases). In the first case we have a far view of the pumpkin; we can see its surroundings, and imagine that we have a chest right behind the character (we'd have a detailed view of the chest if we looked behind).
So, first view: the surroundings and a low-detail image of the pumpkin, plus good detail of the chest but without its surroundings.
In the second view we have the exact opposite: a detailed view of the pumpkin and a non-detailed general view of the chest (still behind us).
The idea would be to combine the data from both views to calculate every view between them. So going towards the pumpkin would mean stretching the points of the first image and filling the gaps with the second one (forget all the other elements, just the pumpkin). At the same time, we would compress the image of the chest and fill in its surroundings with the data from the general view of the second one.
What I would like is an algorithm that dictates that stretching, compressing and combination of pixels (not only forwards and backwards, but also diagonally, using more than two sphere maps). I know it's fairly complicated; I hope I expressed myself well enough this time.
EDIT:
(I'm using the word "view" a lot, and I think that's part of the problem. Here is what I mean by "view": a matrix of colored points, where each point corresponds to a pixel on the screen. The screen only displays part of the matrix at a time (the matrix would be the 360° sphere and the display a fraction of that sphere). A view is the matrix of all the possible points you can see by rotating the camera without moving its position.)
Okay, it seems that people still don't understand the concept. The idea is to display environments in as much detail as possible by "precooking" the maximum amount of data before displaying it in real time. I'll deal with the preprocessing and the compression of the data for now; I'm not asking about that. The most "precooked" model would be to store the 360° view at each point in the displayed space (if the character is moving at, for example, 50 points per frame, then store a view every 50 points; the point is to precalculate the lighting and shading and to filter out the points that won't be seen, so that they are not processed for nothing). Basically, to calculate every possible screenshot (in a totally static environment). But of course that's just ridiculous; even if you could compress that data a lot, it would still be too much.
The alternative is to store only some strategic views, less frequently. Most of the points are repeated from frame to frame if we store all the possible ones, and the change in position of the points on screen is also mathematically regular. What I'm asking for is an algorithm to determine the position of each point in the view based on a few strategic viewpoints: how to use and combine data from strategic views at different positions to calculate the view from any place.
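To illustrate the naive version of the combination I mean, here is a TypeScript sketch (all types are made up; it just blends two precomputed sphere maps by distance to the camera, and ignores the stretching/reprojection part that I'm actually asking about):

type Vec3 = [number, number, number];
// A precomputed 360° view: a function from a view direction to a color.
type SphereMap = (dir: Vec3) => [number, number, number];

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Naive synthesis of the view at camera position p: sample both sphere
// maps in the same direction and blend by inverse distance to the points
// where they were captured. Correct reprojection would need per-point
// depth, which is the hard part.
function blendedSample(
  p: Vec3, dir: Vec3,
  posA: Vec3, mapA: SphereMap,
  posB: Vec3, mapB: SphereMap
): [number, number, number] {
  const wA = 1 / (dist(p, posA) + 1e-6);
  const wB = 1 / (dist(p, posB) + 1e-6);
  const t = wA / (wA + wB);
  const a = mapA(dir), b = mapB(dir);
  return [
    a[0] * t + b[0] * (1 - t),
    a[1] * t + b[1] * (1 - t),
    a[2] * t + b[2] * (1 - t),
  ];
}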

Zoom to show all locations in Bing Maps

Say I have 3 pushpins: (1) California, (2) Florida, (3) New York. For all 3 of them to be visible, I'd have to zoom out far enough to see pretty much the whole country. But say instead I had (1) California, (2) Nevada, (3) Texas. I'd only have to zoom out enough to cover the southwest corner of the US. Is there any function in the Bing Maps for Windows Phone 7 API that helps me with this? Basically, I want to zoom out just enough to see a set of locations.
Thanks!
Yes, it is possible. Here CurrentItems is the items source for my map:
// Collect the Location of every item bound to the map.
var locations = CurrentItems.Select(model => model.Location);
// CreateLocationRect builds the smallest LocationRect containing all of
// them; SetView pans/zooms the map so that rectangle is visible.
map.SetView(LocationRect.CreateLocationRect(locations));
I don't know of a function that will do what you want directly, but you can find the bounding box that just surrounds all of your locations and then set the viewport to that extent.
Start with an inverted box whose bottom left corner is (maxVal, maxVal) and whose top right is (-maxVal, -maxVal). Then loop over all your points, lowering the bottom left whenever a point is less than its current value and raising the top right whenever a point is greater than its current value.
The end result will be the smallest box that everything fits into. Add a little to the size to cope with rounding error, and to make sure your pins are actually on the map, then set the extent of the viewport.
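A sketch of that loop (TypeScript for illustration; in latitude/longitude terms the bottom left is the south-west corner, and the 5% padding is an arbitrary choice):

interface Point { lat: number; lon: number; }

// Start from an inverted box (corners at +/- infinity rather than +/-
// maxVal, which works the same way) and grow it point by point until it
// is the smallest rectangle containing every pushpin, then pad it.
function boundingBox(points: Point[], pad = 0.05) {
  let south = Infinity, west = Infinity, north = -Infinity, east = -Infinity;
  for (const p of points) {
    if (p.lat < south) south = p.lat;
    if (p.lat > north) north = p.lat;
    if (p.lon < west) west = p.lon;
    if (p.lon > east) east = p.lon;
  }
  const dLat = (north - south) * pad, dLon = (east - west) * pad;
  return { south: south - dLat, west: west - dLon,
           north: north + dLat, east: east + dLon };
}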
