How to center spatial data - ssrs-2012

I am mapping spatial data (lat/long) to data points around the country (Australia). I find that the map area changes every time I run my query, e.g. if there are no results in Western Australia, Western Australia will disappear off the map and the rest of the country will be zoomed in on.
I cannot for the life of me figure out how to anchor the map tile layer so it shows the entire country every single time, instead of moving according to where the returned data points are.

I've never done anything with this type of data, but my instinct would be to add two locations at the top-left and bottom-right corners of an imaginary rectangle drawn around your desired map area. As these would always be displayed, and would always be the most extreme points, the map would center as you expect.
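A minimal sketch of the idea in Python (in SSRS itself you would typically add the two extra rows in the dataset query, e.g. via a UNION); the bounding coordinates for Australia are approximate and purely illustrative:

```python
# Minimal sketch: pad the query results with two "anchor" points at opposite
# corners of a bounding box around Australia, so the map extent never
# collapses to just the returned data. Coordinates are approximate.

def with_anchor_points(points):
    """points: list of (lat, lon) tuples returned by the query."""
    anchors = [
        (-10.0, 112.0),  # roughly the north-west (top-left) corner
        (-44.0, 154.0),  # roughly the south-east (bottom-right) corner
    ]
    return points + anchors

print(with_anchor_points([(-33.87, 151.21)]))  # Sydney plus the two anchors
```

In the report you would also want to make the two anchor points invisible, e.g. by giving them a marker size of zero or a fully transparent color.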

Related

Plotting a star chart efficiently

I'd like to visualize astronomical star catalogues that can contain hundreds of thousands of entries. The catalogues usually consist of simply a list of stars, with spherical coordinates and other data for each star. By spherical coordinates I mean right ascension (0-360 degrees or 0-24 hours) and declination (-90 degrees to +90 degrees). This corresponds to longitude and latitude, just on the celestial sphere instead of Earth's surface.
I'd like to plot all the stars in the catalogue that are located inside a certain field of view, defined by the center (in spherical coordinates) and the size of the field of view (in degrees) and the projection (e.g. stereographic projection).
Plotting the stars by going through the whole catalogue and just checking whether each star is inside the field of view or not is very inefficient.
How could I make this more efficient? Is there a good algorithm or data structure for this kind of a problem?
For modern graphics cards, numbers like 300K stars (and more) are still manageable...
So you can try to load them all onto the GPU as a VBO/VAO and leave the rendering/clipping to the GPU alone. I use the Hipparcos catalogue (118,322 stars) this way without problems, with each star as a transparent quad. You just need to pre-compute the quads' view positions prior to rendering (just once). Here is a screenshot from one of my apps where Hipparcos is used in this manner as background stars (in real time).
You can also use geometry shaders to ease things up a lot (you can send just points, or even Ra, Dec, Distance instead of quads), but this will limit your target hardware to GPUs that support geometry shaders.
If you have more stars than your hardware can handle, use a sorted dataset.
Most catalogs are sorted by Ra or Dec. You can exploit this as follows (a Python sketch follows the steps):
select the view area: min(Ra,Dec), max(Ra,Dec)
let's assume your data is sorted by Ra ascending
find the first i0 where star[i0].Ra >= min.Ra (use binary search!)
find the first i1 where star[i1].Ra >= max.Ra (use binary search!)
process stars i0 <= i < i1: test whether min.Dec <= star[i].Dec <= max.Dec, and if so, render it.
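A sketch of those steps in Python, assuming the catalogue is an in-memory list of (Ra, Dec) tuples sorted by Ra ascending; the wrap-around at Ra = 0°/360° would need a second slice and is ignored here:

```python
import bisect

# Sketch only: `catalog` is a list of (ra, dec) tuples sorted by ra ascending.
def stars_in_view(catalog, ra_min, ra_max, dec_min, dec_max):
    ras = [ra for ra, _ in catalog]        # in practice, build this list once
    i0 = bisect.bisect_left(ras, ra_min)   # first star with Ra >= min.Ra
    i1 = bisect.bisect_left(ras, ra_max)   # first star with Ra >= max.Ra
    for ra, dec in catalog[i0:i1]:         # only the Ra-slice is scanned
        if dec_min <= dec <= dec_max:      # Dec test on the slice
            yield ra, dec                  # render this star
```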
If even this is not fast enough, you need to use spatial subdivision.
So divide your dataset into smaller ones, and prior to rendering, use only the datasets near the selected view area. This lowers the amount of data processed significantly.
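One possible form of that subdivision, sketched in Python: bucket the stars into fixed-size Ra/Dec tiles up front, then fetch only the tiles overlapping the view. The 10° tile size is an arbitrary, illustrative choice, and Ra wrap-around is again ignored:

```python
from collections import defaultdict

TILE = 10.0  # tile edge in degrees; tune to your data density

def build_tiles(catalog):
    tiles = defaultdict(list)
    for ra, dec in catalog:
        tiles[(int(ra // TILE), int(dec // TILE))].append((ra, dec))
    return tiles

def stars_near_view(tiles, ra_min, ra_max, dec_min, dec_max):
    # visit only the tiles that overlap the view rectangle
    for tx in range(int(ra_min // TILE), int(ra_max // TILE) + 1):
        for ty in range(int(dec_min // TILE), int(dec_max // TILE) + 1):
            yield from tiles.get((tx, ty), ())
```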

Invoice / OCR: Detect two important points in invoice image

I am currently working on OCR software and my idea is to use templates to try to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices, based on a single template, are correctly aligned under the scanner.
People can write on invoices
etc.
Example of invoice: (you'll have to google one; sadly I cannot add a more concrete version, as client data is obviously confidential)
I find my data in the invoices based on the x-values of the text.
However I need to know the scale of the invoice and the offset from left/right, before I can do any real calculations with all data that I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails because people can write on invoices.
2) Dividing the invoice into vertical sections and using the sections with the highest number of black pixels. This fails because the distribution is not always uniform among similar templates.
I could really use your help on (1) how to identify important points in invoices, and (2) which points I should focus on as the important ones.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair number of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least-squares method. Drop the outliers, and fit another line to the remaining points; iterate this as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this for a number of regions, and you should get a weighted list of lines.
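A rough Python sketch of that fit-and-drop-outliers loop, assuming `pts` is an (N, 2) NumPy array of black-pixel coordinates from one patch; a near-vertical line would need the transposed fit or total least squares:

```python
import numpy as np

def fit_line(pts, rounds=3, keep=0.8):
    a = b = 0.0
    for _ in range(rounds):
        x, y = pts[:, 0], pts[:, 1]
        a, b = np.polyfit(x, y, 1)                    # least squares: y = a*x + b
        resid = np.abs(y - (a * x + b))               # distance from the line
        pts = pts[resid <= np.quantile(resid, keep)]  # drop the worst 20 %
    return a, b, pts  # slope, intercept, surviving inliers
```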
For each line, you can compute the direction of the line itself and the direction orthogonal to it. One of these numbers can be chosen from the interval [0°, 90°); the other will be 90° plus that value, so storing one is enough. Take all these directions and find one angle which best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) interval and find a value where the maximal number of lines fall within the window, then compute the average or median of the angles within that window. All of this computation can take the weights of the lines into account.
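Here is one way that sliding-window consensus could look in Python, assuming `angles` holds the per-line directions reduced to [0°, 90°) and `weights` their fit qualities:

```python
import numpy as np

def dominant_angle(angles, weights, window=5.0):
    angles = np.asarray(angles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    best_score, best_center = -1.0, 0.0
    for center in np.arange(0.0, 90.0, 0.5):           # slide in 0.5° steps
        off = (angles - center + 45.0) % 90.0 - 45.0   # signed cyclic offset
        score = weights[np.abs(off) <= window / 2].sum()
        if score > best_score:
            best_score, best_center = score, center
    off = (angles - best_center + 45.0) % 90.0 - 45.0
    mask = np.abs(off) <= window / 2
    # weighted mean of the angles inside the winning window
    return (best_center + np.average(off[mask], weights=weights[mask])) % 90.0
```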
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
Detecting translation
Assuming the image wasn't scaled at any point, you can then try to use an FFT-based correlation to match the image to the template. Convert both images to gray, pad them with zeros until the originals take up at most half the edge length of the padded image, which preferably should be a power of two. FFT both images in both directions, multiply them element-wise (taking the complex conjugate of one of them, so you get a correlation rather than a convolution), and inverse-FFT back. The resulting image encodes how well the two images would agree for a given shift relative to one another. Simply find the maximum, and you know how to make them match.
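A minimal NumPy sketch of that correlation, assuming `scan` and `template` are 2-D grayscale arrays:

```python
import numpy as np

def find_shift(scan, template):
    # pad both to a common power-of-two size at least twice the originals
    H = 1 << int(np.ceil(np.log2(2 * max(scan.shape[0], template.shape[0]))))
    W = 1 << int(np.ceil(np.log2(2 * max(scan.shape[1], template.shape[1]))))
    F1 = np.fft.fft2(scan, s=(H, W))            # fft2 zero-pads to (H, W)
    F2 = np.fft.fft2(template, s=(H, W))
    corr = np.fft.ifft2(F1 * np.conj(F2)).real  # cross-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > H // 2: dy -= H                     # indices past the midpoint
    if dx > W // 2: dx -= W                     # correspond to negative shifts
    return dy, dx  # shift that best aligns the template to the scan
```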
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and assuming no scaling or shearing, you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
If rotation is solved already, I'd just sum up all pixel color values horizontally and vertically into a single horizontal/vertical "line". This should produce clear spikes where you have horizontal and vertical lines in the form.
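In NumPy the two projection profiles are one line each; assuming `gray` is a 2-D grayscale array where form lines are dark, the lines show up as sharp dips in these profiles:

```python
import numpy as np

def profiles(gray):
    col_profile = gray.sum(axis=0)  # one value per column -> vertical lines
    row_profile = gray.sum(axis=1)  # one value per row -> horizontal lines
    return col_profile, row_profile
```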
P.S. I generated a corresponding horizontal profile image with Gimp's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high, and it may get scaled down because it's more than 700 px wide; the URL is http://i.stack.imgur.com/Zy8zO.png ).

Algorithm to interpolate any view from individual view mapped on a sphere

I'm trying to create a graphics engine to show point cloud data (in first person for now). My idea is to precalculate individual views from different points in the space we are viewing and map them onto spheres. Is it possible to interpolate that data to determine the view from any point in the space?
I apologise for my English and my poor explanation, but I can't figure out another way to explain it. If you don't understand my question, I'll be happy to reformulate it.
EDIT:
I'll try to explain it with an example
Image 1:
Image 2:
In these images we can see two different views of the pumpkin (imagine that we have a sphere map of the full 360° view in both cases). In the first case we have a far view of the pumpkin and can see its surroundings; imagine also that we have a chest right behind the character (we'd have a detailed view of the chest if we looked behind us).
So, first view: the surroundings and a low-detail image of the pumpkin, plus good detail of the chest but without its surroundings.
In the second view we have the exact opposite: a detailed view of the pumpkin and a non-detailed general view of the chest (still behind us).
The idea would be to combine the data from both views to calculate every view between them. So going towards the pumpkin would mean stretching the points of the first image and filling the gaps with the second one (forget all the other elements, just the pumpkin). At the same time, we would compress the image of the chest and fill in its surroundings with the data from the general view of the second one.
What I would like is an algorithm that dictates that stretching, compressing and combining of pixels (not only forwards and backwards, but also diagonally, using more than two sphere maps). I know it's fairly complicated; I hope I expressed myself well enough this time.
EDIT:
(I'm using the word "view" a lot and I think that's part of the problem, so here is the definition of what I mean by "view": "A matrix of colored points, where each point corresponds to a pixel on the screen. The screen only displays part of the matrix at a time (the matrix would be the 360° sphere and the display a fraction of that sphere). A view is the matrix of all the possible points you can see by rotating the camera without moving its position.")
Okay, it seems that you still don't understand the concept. The idea is to display environments in as much detail as possible by "precooking" the maximum amount of data before displaying it in real time. I'll deal with the preprocessing and the compression of the data for now; I'm not asking about that. The most "precooked" model would be to store the 360° view at each point in the displayed space (if the character moves at, for example, 50 points per frame, then store a view every 50 points; the point is to precalculate the lighting and shading, and to filter out the points that won't be seen so they are not processed for nothing). Basically, to calculate every possible screenshot (in a totally static environment). But of course, that's just ridiculous: even if you could compress that data a lot, it would still be too much.
The alternative is to store only some strategic views, less frequently. Most of the points are repeated in each frame if we store all the possible ones, and the change in position of the points on screen is also mathematically regular. What I'm asking for is exactly that: an algorithm to determine the position of each point in the view based on a few strategic viewpoints, i.e. how to use and combine data from strategic views at different positions to calculate the view at any place.

Find street intersections within an area using the Google Maps API

Given a square area, what is the best way to find the approximate coordinates of every street intersection within the given area?
Since there is no description of your application, I can't tell if you need to use Google Maps or if another data source would answer your needs.
If http://openstreetmap.org fulfills the requirements of your application, then it's easy:
the OSM API has a request to pull data from a rectangular region. You get XML data.
filter this data to keep only the streets you are interested in, probably the "key=highway" tags
filter this to keep only the points belonging to two or more lines.
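A sketch of those two filtering steps in Python, assuming the API response was saved as map.osm; the file name and the "two or more ways" threshold are the illustrative parts:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def intersections(path="map.osm"):
    root = ET.parse(path).getroot()
    counts = Counter()
    for way in root.iter("way"):
        # keep only ways that carry a highway tag, i.e. streets
        if any(t.get("k") == "highway" for t in way.iter("tag")):
            # count each node once per way, even if the way revisits it
            for ref in {nd.get("ref") for nd in way.iter("nd")}:
                counts[ref] += 1
    shared = {ref for ref, n in counts.items() if n >= 2}
    return [(float(n.get("lat")), float(n.get("lon")))
            for n in root.iter("node") if n.get("id") in shared]
```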
Please disregard this if Google Maps is a requirement.
But still: since the roads exist independently of the database, the above method will yield road intersections (in lat/long coordinates) with a pretty high correlation to what you would get from Google Maps ;-) You can then display those points over a Google map, knowing that the two datasets aren't identical, so it won't be perfect.
It might not be the easiest method, but I used a separate database of our country's roads with their linestrings.
I took the first and last points of each linestring, then counted the number of roads within 50 m of each start/end point. I then took the nodes from a navigation route and used these to compare the number of roads intersecting each node. Next I looked at the direction from each start point to the next point along that road, which gives you a bearing; from that, with a bit of maths, you can work out the number and angle of the roads at the next intersection. I then made a road-rules application that tells you which vehicles to give way to.
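For the "bit of maths", the standard initial-bearing formula between two lat/long points, sketched in Python:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    # initial bearing in degrees clockwise from north
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```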

Finding Room for a Shape on a Grid

I'm working on a game where you have an 8x12 grid where each cell is the same size and all the cells are directly next to one another.
You drag around various Tetris-like shapes and place them on the grid in valid locations, a valid location being one where all the cells that the shape will occupy are not occupied by some other shape.
My problem is that I'm not sure how to search the grid space for valid locations. I've been searching for an algorithm that solves this sort of problem, but have come up empty-handed so far. It seems like it should be pretty straightforward to detect valid locations, but I haven't been able to arrive at a working solution.
Any processes, algorithm suggestions, or ideas for how to go about solving this would be extremely helpful. Thanks!
EDIT:
Here is the expected functionality: when the shape is in a valid location, it can be dragged freely between valid locations and follows the mouse pointer. However, when you try to drag the shape into an invalid area (i.e. movement in the specified direction would place one or more blocks of the shape in invalid locations), it stays in the last valid location.
At this point, when the mouse is in an invalid area, I want to do some "predictive" movement so that if the player moves the mouse cursor near a valid position, the shape then "snaps" into place, say if the valid position is two grid spaces away.
Thanks for your suggestion so far; I hadn't thought of that method!
As described, the algorithm would be quite simple. Arbitrarily choose a starting block on your shape and try to match it to each open cell of the grid. If "drawing" the shape with that block aligned to the current cell doesn't cause a collision, you've found a valid position.
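A minimal sketch of that search in Python, assuming an 8-column by 12-row boolean grid (True = occupied) and a shape given as (row, col) offsets from an arbitrary anchor block:

```python
ROWS, COLS = 12, 8

def fits(grid, shape, r, c):
    # the shape fits if every block lands on an in-bounds, unoccupied cell
    for dr, dc in shape:
        rr, cc = r + dr, c + dc
        if not (0 <= rr < ROWS and 0 <= cc < COLS) or grid[rr][cc]:
            return False
    return True

def valid_positions(grid, shape):
    return [(r, c) for r in range(ROWS) for c in range(COLS)
            if fits(grid, shape, r, c)]

# example: an L-tromino anchored at its corner block, on an empty grid
tromino = {(0, 0), (1, 0), (1, 1)}
empty = [[False] * COLS for _ in range(ROWS)]
print(len(valid_positions(empty, tromino)))
```

For the "snapping" behaviour, the same `fits` test can be run on the handful of cells within two grid spaces of the pointer, picking the nearest one that passes.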
