I've asked similar questions before and I'm still struggling.
I want to create geo-based infographics at the level of a city.
I need to be able to take some latitude/longitude values and project them such that they are centered and appropriately zoomed.
It would help me a great deal to see an example that plots a small number of points.
37.781040, -122.497681
37.720504, -122.495622
37.723220, -122.395028
This is roughly an L shape and all three points should be in San Francisco.
It could be as simple as 3 black dots on a white background. I hope to learn:
Which projection should I use?
How do you adjust the projection so that an area the size of San Francisco fits the canvas?
How do you translate those coordinates to position them on that canvas?
Could someone create such an example?
Thanks.
-Kelly
I created a simple example that works.
https://gist.github.com/kellyfelkins/9741723
I think I was making multiple mistakes at once, which made the problem really difficult to track down.
In case others have troubles too, here are some things to watch out for:
The projection function expects an array. For a while I was passing it two separate arguments, but it needs a single argument that is an array.
The projection expects the values in longitude, latitude order.
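For reference, here is a minimal sketch along the lines of the gist (hedged: the width, height and scale values are ones I picked by trial and error, and current d3 spells the projection d3.geoMercator(), while older releases use d3.geo.mercator()):

// Three points in [longitude, latitude] order -- note lon first.
var points = [
  [-122.497681, 37.781040],
  [-122.495622, 37.720504],
  [-122.395028, 37.723220]
];

var width = 500, height = 500;

// Mercator is a reasonable default at city scale.
var projection = d3.geoMercator()
  .center([-122.45, 37.75])            // [lon, lat] to center on
  .scale(200000)                       // larger scale = more zoomed in
  .translate([width / 2, height / 2]); // put the center mid-canvas

var svg = d3.select("body").append("svg")
  .attr("width", width)
  .attr("height", height);

svg.selectAll("circle")
  .data(points)
  .enter().append("circle")
  // the projection takes a single [lon, lat] array and returns [x, y]
  .attr("cx", function(d) { return projection(d)[0]; })
  .attr("cy", function(d) { return projection(d)[1]; })
  .attr("r", 4)
  .attr("fill", "black");

This touches all three questions above: Mercator is a fine projection at this scale, center()/scale() control the framing and zoom, and calling the projection with a [lon, lat] array does the translation to canvas coordinates.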
-Kelly
Related
Is there a way to plot data cubes with any kind of program? To plot image (1), I had to use Tinkercad, which by the way doesn't allow putting names along the data cube dimensions. Is there any kind of tool that allows doing something similar to (2) or (3)? Thanks in advance.
I don't think this is possible.
The cubes I play with all have at least 10 dimensions.
The problem is that the short name, "OLAP cube", suggests that some sort of pictorial representation is possible.
I think it is maybe better to remember them by the fuller name: multi-dimensional cubes.
Once past 3 dimensions, I don't understand how it would be possible to draw a representative picture.
Having said all the above, I have a book co-authored by Mosha Pasumansky which contains some simple cube pictures similar to yours. I also attended a course run by Chris Webb, and he used pictures of cubes to aid our understanding. The pictures they used were all very simplified, and covered only small spaces within a full cube.
I need some advice about how to improve the visualization of cartographic information.
Users can select different species and the web mapping app shows their geographical distribution (polygonal degree cells), each species with a range of one color (e.g. darker orange where we find more records, lighter orange where fewer).
The problem arises when more than one species overlaps. What I am currently doing is simply calculating the additive color mix of the two colors using http://www.xarg.org/project/jquery-color-plugin-xcolor/
As you can see in the image below, the resulting color where two species overlap (mixed blue and yellow) is not intuitive at all.
Does anyone have any ideas, or know of similar tools to take inspiration from? I use d3.js to create the polygons, so if more complex SVG features have to be created I can give that a try.
Some ideas I had are:
1) The more data in a polygon, the thicker the border (or each part of the border drawn in its corresponding color).
2) Add a label at the center of the polygon saying how many species overlap.
3) Divide the polygon into parts, each one with its corresponding species color.
Thanks in advance,
Pere
My suggestion is something along the lines of option #3 that you listed, with a twist. Rather than painting the entire cell with species colors, place a dot in each cell, one for each species. You can vary the color of each dot the same way you currently do: darker for more, lighter for less. This doesn't require you to blend colors, and it will expose more of your map, providing more context for the data. I'd try this approach both with and without the cell borders, and see which works best.
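A rough d3 sketch of the idea (the data shape and names here are hypothetical; I'm assuming each cell record carries a pixel-space centroid plus a per-species color and count, and that an svg selection already exists):

// cells = [{ centroid: [x, y], species: [{ name, color, count }, ...] }, ...]
var maxCount = 10;  // assumed maximum count, used to scale the shading
var dotGap = 10, dotRadius = 4;

var cell = svg.selectAll("g.cell")
  .data(cells)
  .enter().append("g")
  .attr("class", "cell")
  .attr("transform", function(d) {
    return "translate(" + d.centroid[0] + "," + d.centroid[1] + ")";
  });

cell.selectAll("circle")
  .data(function(d) { return d.species; })
  .enter().append("circle")
  // lay the dots out in a short row at the cell centroid
  .attr("cx", function(d, i) { return i * dotGap; })
  .attr("r", dotRadius)
  .attr("fill", function(d) { return d.color; })
  // stronger for more records, paler for fewer -- no color blending needed
  .attr("fill-opacity", function(d) {
    return 0.3 + 0.7 * Math.min(d.count / maxCount, 1);
  });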
Your visualization might also benefit from some interactivity. A tooltip providing more detailed information, and perhaps a further breakdown of it, could be displayed when the user hovers over a cell.
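Continuing the hypothetical sketch above, a bare-bones tooltip might look like this (d3 v6+ passes the event as the first handler argument; older versions use d3.event):

var tooltip = d3.select("body").append("div")
  .style("position", "absolute")
  .style("pointer-events", "none")
  .style("background", "#fff")
  .style("border", "1px solid #999")
  .style("padding", "4px")
  .style("visibility", "hidden");

cell.on("mouseover", function(event, d) {
    // per-species breakdown for the hovered cell
    tooltip.style("visibility", "visible")
      .html(d.species.map(function(s) {
        return s.name + ": " + s.count;
      }).join("<br>"));
  })
  .on("mousemove", function(event) {
    tooltip.style("left", (event.pageX + 10) + "px")
           .style("top", (event.pageY + 10) + "px");
  })
  .on("mouseout", function() {
    tooltip.style("visibility", "hidden");
  });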
All of this is very subjective. However one thing's for sure: when you're dealing with multi-dimensional data as you are, the less you project dimensions down onto the same visual/perceptual axis, the better. I've seen some examples of "4-dimensional heatmaps" succeed in doing this (here's an example of visualizing latency on a heatmap, identifying different sources with different colors), but I don't think any attempt's made to combine colors.
My initial thoughts about what you are trying to create (a customized variant of a heat map for a slightly crowded data set, I believe):
One strategy is to employ the commonly suggested formula of n + 1 breaks for n color bins when choosing bin spacing. What causes me some concern here is how many outliers your set has, since (from the tutorial linked below):
Equally-spaced breaks are ideal for compact data sets without
outliers. In many real data sets, especially proteomics data sets,
outliers can make this representation less effective.
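To make the outlier point concrete, here is a small sketch contrasting equally-spaced and quantile breaks (using d3's scaleQuantize/scaleQuantile; the data and colors are made up):

var values = [1, 2, 2, 3, 3, 4, 5, 90];  // one large outlier
var colors = ["#fee8c8", "#fdbb84", "#e34a33"];

// Equally-spaced breaks: the outlier stretches the domain,
// so almost every value falls into the first bin.
var equal = d3.scaleQuantize().domain([1, 90]).range(colors);

// Quantile breaks: each bin gets roughly the same number of
// values, which is far more robust to outliers.
var quantile = d3.scaleQuantile().domain(values).range(colors);

console.log(values.map(equal));     // mostly the palest color
console.log(values.map(quantile));  // spread across all three colors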
One suggestion would be to consider adding filters to your categories, if you have not already. This would slim down the rendered data, letting the user read it faster.
Another solution would be to use something like R (the Comprehensive R Archive Network has relevant packages), or maybe even DanteR.
Tutorial in displaying mass spectrometry-based proteomic data using heat maps
(Particularly worth noting, I felt, was the 'Color mapping' section.)
Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion, we will simplify.)
So when I am in SketchUp, the cube is X mm by X mm by X mm, where X is an integer. I then export a Collada file and subsequently load it into three.js.
Now if I look at the geometry bounding box, the values are floats, not integers.
So now assume I am putting cubes next to each other with a small space in between, say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes two, which causes a lack of uniformity.
I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with Sketchup, so any point in the process is fair game.
Is it possible?
Thanks.
Clarification: My app will have two views. The view this question concerns uses an OrthographicCamera looking straight down on the pieces, so it is really a 2D view. For the purposes of this question, after importing the model, it should look like a grid of squares with uniform spacing in between.
UPDATE: I would ask that you please not respond unless you can provide an actual answer. If I need help finding a way to accomplish something, I will post a new question. For this question, I am only interested in knowing if it is possible to align an imported Collada model to full pixels and if so how. At this point, this is mostly to serve my curiosity and increase my knowledge of what is and isn't possible. Thank you community for your kind help.
Now you have to learn this thing about 3D programming: numbers don't mean anything :)
In the real world, 1 mm, 2.13 cm and 100 kg specify something that can be measured and reproduced. But to a drawing library, those numbers don't mean anything.
In a drawing library, 3D points are always represented by three float values. You submit your points to the library, it transforms them into 2D points (they must be viewed on a 2D surface), and finally those 2D points are passed to a rasterizer, which translates the floating-point values into integer values (the screen has a resolution of N x M pixels, both N and M being integers) and colors the actual pixels.
Your problem simply is not a problem. A cube of 1 mm really means nothing: in an astronomical application that object would never be seen, while in a microscopic one it would be far larger than the screen. What matters are the coordinates of the points and the scale of the overall application.
Now back to your cubes: don't try to insert 1 px between two adjacent ones. Your cubes are defined in terms of mm, so choose a distance in mm appropriate to your world, and let the rasterizer do its job of translating it to pixels.
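That said, since the scale is yours to choose, an orthographic camera lets you make one world unit exactly one pixel, after which integer coordinates land on pixel boundaries. A minimal sketch using three.js's Box3 and OrthographicCamera (here `model` stands in for your loaded Collada scene, and the normalization step is one possible approach, not the only one):

var width = 800, height = 600;

// One world unit per pixel: the visible region is width x height units.
var camera = new THREE.OrthographicCamera(
  0, width,    // left, right
  height, 0,   // top, bottom
  -1000, 1000  // near, far
);

// Measure the imported model and rescale so its x-extent is a whole
// number of units (and therefore a whole number of pixels).
var box = new THREE.Box3().setFromObject(model);
var size = box.getSize(new THREE.Vector3());
var scale = Math.round(size.x) / size.x;
model.scale.multiplyScalar(scale);

// Snap the model's position to integer coordinates as well.
model.position.x = Math.round(model.position.x);
model.position.y = Math.round(model.position.y);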
Two co-workers I tracked down have informed me that this is indeed impossible using normal means.
Here's my problem - I have a map of the world or some sort of region, like this:
I need to generate a "border points" table for this map of a region in order to generate imagemaps and dynamically highlight certain areas. All of the maps' regions will have borders of one color to define them (in the example image, white).
So far, I'm thinking of some sort of flood-fill based method - note that speed and efficiency are not that important, as the script is in no way intended to be used in real time.
Is there a better way to do this that I don't know of? Is my approach fundamentally wrong? Any suggestions would be appreciated!
If the regions are completely isolated from each other, looking at connected components would do the trick. In Mathematica it looks like this:
First create a binary image from the world map:
regions = ColorNegate[Binarize[img, .9]]
Then compute the connected components:
components = MorphologicalComponents[regions, CornerNeighbors -> False];
Now you may extract properties for each of the components (masks, perimeters, etc.). Here I colorized each region with a unique color:
Colorize[components]
To get the border of a given component, one can query for the binary mask of the component and then compute the perimeter.
This gets all the masks:
masks = ComponentMeasurements[components, "Mask"];
As an example, get the border, or contour, of one region:
country = Image[masks[[708, 2]], "Bit"]
border = MorphologicalPerimeter[country]
Getting 2D positions for the border is just a matter of extracting the white pixels in the image:
pos = Position[ImageData[border], 1]
If possible, try to get the vector data behind your map from another source. I understand this doesn't answer your question, but for world borders (and many others) you can find such data publicly on the internet (google for "world borders shapefile"). This will give you more precise data, allow you to zoom to any level, reproject your map, use Google Maps or other layers, etc. You can display the vector data nicely with libraries like OpenLayers, but then you're slowly moving toward more complex GIS stuff.
If all you really need is based on an image, your flood fill approach might work (if the borders are indeed completely closed).
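For completeness, the fill itself is straightforward. A bare-bones sketch in JavaScript (names assumed; `pixels` is a 2D array of color values, and the borders are assumed closed):

// Returns all coordinates of the region containing (startX, startY).
function floodFill(pixels, startX, startY) {
  var target = pixels[startY][startX];
  var h = pixels.length, w = pixels[0].length;
  var visited = {};
  var region = [];
  var stack = [[startX, startY]];
  while (stack.length) {
    var p = stack.pop(), x = p[0], y = p[1];
    if (x < 0 || y < 0 || x >= w || y >= h) continue;
    var key = y * w + x;
    if (visited[key] || pixels[y][x] !== target) continue;
    visited[key] = true;
    region.push([x, y]);
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return region;
}

// The border points are then the region pixels that have at least
// one 4-connected neighbour outside the region.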
I want to identify a ball in a picture. I am thinking of using the Sobel edge detection algorithm; with it I can detect the round objects in the image.
But how do I differentiate between different objects? For example, one picture contains a football and another contains the moon; how do I tell which object has been detected?
When I use my algorithm I get a ball in both cases. Any ideas?
Well, if all the objects you would like to differentiate are round, you could even use a Hough transformation for round objects. This is a very good way of distinguishing round objects.
But your basic problem seems to be classification - sorting the objects on your image into different classes.
For this you don't really need a neural network; you could simply try a nearest-neighbour match. Its functionality is a bit like a neural network's, in that you give it several reference pictures, telling the system what can be seen in each, and it optimizes itself toward the best average values for each attribute you detected. This gives you a dictionary of clusters for the different types of objects.
But for this you'll of course first need something that distinguishes a ball from a moon.
Since they are all real round objects (which appear as circles), it is useless to compare circularity, circumference, diameter or area (these would only help if your camera were steady and you knew the moon would always have the same size in your images, unlike a ball).
So basically you need to look inside the objects themselves: you can try to compare their mean color or grayscale value, or the contrast inside the object (the moon will mostly have mid-gray values, whereas a soccer ball consists of black and white parts).
You could also run edge filters on the segmented objects just to determine which is more "edgy" in its texture. But for this there are better methods I guess...
So basically what you need to do first:
Find several attributes that help you distinguish the different round objects (assuming they are already separated).
Implement something that extracts these values from a picture of a round object (already segmented, of course, so it has a background of 0).
Build a supervised learning system and feed it several images of each type together with their classes (there are many implementations of this online).
Now you have your system running and can give other objects to it to classify.
For this you need to segment the objects in the image, e.g. with edge filters or a Hough transformation.
Then let each segmented object in an image run through your classification system, and it should tell you which class (type of object) it belongs to; a toy sketch of that matching step follows below.
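As a toy illustration of that matching step, a nearest-neighbour lookup over extracted attributes could be as simple as this (all names and numbers are made up; the feature values would come from your own extraction code, e.g. [mean gray value, internal contrast]):

var references = [
  { features: [0.50, 0.10], label: "moon" },  // mid-gray, low contrast
  { features: [0.55, 0.80], label: "ball" },  // black-and-white panels
  // ...more labelled examples of each class
];

// Euclidean distance in feature space.
function distance(a, b) {
  var sum = 0;
  for (var i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
  return Math.sqrt(sum);
}

// Return the label of the closest reference example.
function classify(features) {
  var best = null, bestDist = Infinity;
  references.forEach(function(ref) {
    var d = distance(features, ref.features);
    if (d < bestDist) { bestDist = d; best = ref.label; }
  });
  return best;
}

classify([0.48, 0.15]);  // -> "moon" (mid-gray, little contrast)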
Hope that helps... if not, please keep asking...
When you apply an edge detection algorithm you lose information.
Thus the moon and the ball become the same.
The moon has a different color, a different texture, ... you can use this information to differentiate which object has been detected.
That's a question in AI.
If you think about it, the reason you know it's a ball and not a moon, is because you've seen a lot of balls and moons in your life.
So, you need to teach the program what a ball is, and what a moon is. Give it some kind of dictionary or something.
The problem with a dictionary, of course, is that matching the object against everything in the dictionary would take time.
So the best solution would probably be using neural networks. I don't know what programming language you're using, but there are neural network implementations for most languages I've encountered.
You'll have to read a bit about it and decide what kind of neural network, and what architecture, you need.
After you have it implemented, it gets easy. You just give it a lot of pictures to learn from (neural networks take a vector as input, so you can give it the whole picture).
For each picture you give it, you tell it what it is. So you give it like 20 different moon pictures, 20 different ball pictures. After that you tell it to learn (built in function usually).
The neural network will go over the data you gave it, and learn how to differentiate the 2 objects.
Later you can use the network you taught: give it a picture, and it gives a mark of what it thinks it is, like 30% ball, 85% moon.
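Just to make that training loop concrete, here is a toy "single neuron" sketch in JavaScript (purely illustrative: a real network would come from a library, and every number here is made up). Each picture is flattened to a vector of pixel intensities in [0, 1], with target 1 for "ball" and 0 for "moon":

function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

// examples = [{ pixels: [ ...intensities... ], target: 0 or 1 }, ...]
function train(examples, epochs, rate) {
  var n = examples[0].pixels.length;
  var weights = new Array(n).fill(0), bias = 0;
  for (var e = 0; e < epochs; e++) {
    examples.forEach(function(ex) {
      var sum = bias;
      for (var i = 0; i < n; i++) sum += ex.pixels[i] * weights[i];
      var out = sigmoid(sum);
      // delta rule: error scaled by the sigmoid's derivative
      var delta = (ex.target - out) * out * (1 - out);
      for (var j = 0; j < n; j++) weights[j] += rate * delta * ex.pixels[j];
      bias += rate * delta;
    });
  }
  // the trained classifier: a score near 1 means "ball", near 0 "moon"
  return function(pixels) {
    var sum = bias;
    for (var i = 0; i < n; i++) sum += pixels[i] * weights[i];
    return sigmoid(sum);
  };
}

var score = train(examples, 1000, 0.1);  // then: score(somePicture)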
This has been discussed before. Have a look at this question. More info here and here.