Hi all.
I'm working on a project that involves overlaying a map of the state of Kansas (including county borders and some other geographic details) on top of a static, equirectangular base map that I've designed in Adobe Illustrator.
I'm having no problems generating the equirectangular projection. But when it comes to sizing the projection so that it fits exactly on top of the static base map, I can't seem to figure it out.
I'm guessing the secret lies within these functions.
var projection = d3.geo.equirectangular()
.scale(1)
.translate([0, 0]);
If the total size of my static base map image is, say, 1000px by 300px, and the outline of Kansas in which the projection must fit is 530px by 230px, how can I tweak the scale/translate and other functions to resize the projection?
Here is a link to a proof-of-concept illustrating what I'm trying to accomplish.
http://i.imgur.com/acRk6RF.png
The background image is obviously fixed in width/height. The transparent outline of Kansas that's overlaid would be generated by D3, but would have to be tweaked to fit the map.
Any help would be appreciated!
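A minimal sketch of the usual auto-fit approach, using the same D3 v3 API as the snippet above: project at scale 1 and translate [0, 0], measure the feature's pixel bounds, then derive the scale and translate that fit it into the target box. Here kansas is a hypothetical GeoJSON feature of the state outline, and the box offsets within the 1000px by 300px base image are assumed values:
var boxWidth = 530, boxHeight = 230,
    boxX = 400, boxY = 50;              // assumed position of the outline inside the base map

var projection = d3.geo.equirectangular()
    .scale(1)
    .translate([0, 0]);

var path = d3.geo.path().projection(projection);

var b = path.bounds(kansas),            // [[x0, y0], [x1, y1]] at scale 1
    s = Math.min(boxWidth / (b[1][0] - b[0][0]), boxHeight / (b[1][1] - b[0][1])),
    t = [boxX + (boxWidth - s * (b[1][0] + b[0][0])) / 2,
         boxY + (boxHeight - s * (b[1][1] + b[0][1])) / 2];

projection.scale(s).translate(t);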
I based some code on the "Add a 3D model" example at maplibre.org in order to draw just a horizontal plane on a map that uses setTerrain to add a terrain layer.
My intention is to draw a couple of semitransparent layers at given heights above sea level and have them intersect with mountains, somewhat similar to contour lines.
In my first test I just created a 1km-wide square at altitude 0.
I am somewhat confused by the behavior, since altitude 0 turns out to be the height of the terrain at the center of the visible map area. When I then drag the map and release the mouse, altitude 0 is somehow reset to the terrain height at the new center, making the plane change its apparent altitude.
The following animated GIF illustrates the problem:
The GIF has a mountain range to the right and the elevation is greatly exaggerated, in order to better illustrate the issue.
What do I need to do in order to be able to specify the height of the plane in meters above sea level and have it appear to be fixed at that height when dragging the map?
I think I need to get the height of the terrain at the center and then add/subtract it from the plane's z-position so that it stays put relative to the surrounding landmarks, but I have no idea how to do this inside a custom layer's render function.
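A sketch of the direction I would try, assuming the names modelOrigin, modelTransform and planeAltitude follow the "Add a 3D model" example. map.queryTerrainElevation exists in MapLibre GL JS, but its reference point (sea level vs. the current screen center) has differed between versions, so the compensation term and its sign may need adjusting:
var customLayer = {
    id: 'altitude-plane',
    type: 'custom',
    renderingMode: '3d',
    onAdd: function (map, gl) {
        // ...set up the plane geometry exactly as in the "Add a 3D model" example
    },
    render: function (gl, matrix) {
        // Terrain height the renderer treats as its local zero; may be null before terrain loads.
        var centerElevation = map.queryTerrainElevation(map.getCenter()) || 0;

        // Recompute the plane's mercator z every frame so it stays at planeAltitude
        // meters above sea level; planeAltitude and modelOrigin are assumed names.
        var merc = maplibregl.MercatorCoordinate.fromLngLat(
            modelOrigin,
            planeAltitude - centerElevation
        );
        modelTransform.translateZ = merc.z;

        // ...then compose the final matrix and draw as in the original example
    }
};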
In this jsfiddle I have a D3 map that I took from here, but I'm trying to fit it in an SVG that is half the original size. For that, I changed:
var width = 480;
var height = 300;
.....
var path = d3.geoPath(d3.geoIdentity().translate([width/2, height/2]).scale(height*.5));
But it's not working. How can I make the map fit the SVG?
D3's geoIdentity exposes almost all the standard methods of a d3 projection (off the top of my head, only rotation is not possible, as the identity assumes Cartesian data). Most importantly here, it exposes the fitSize and fitExtent methods. These methods set translate and scale based on the coordinate extent of the displayed geojson data and the pixel extent of the svg/canvas.
To scale your features with a geo identity you can use:
d3.geoIdentity().fitSize([width,height],geojsonObject)
Note that an array of geojson features won't work, but a geojson feature collection or any individual feature/geometry object will. The width and height are those of the svg/canvas.
If you want to apply a margin, you can use:
d3.geoIdentity().fitExtent([[margin,margin],[width-margin,height-margin]],geojsonObject)
The margin doesn't need to be uniform; the format is fitExtent([[left,top],[right,bottom]], geojsonObject).
If using fitSize or fitExtent, there is no need to set center, translate, or scale manually; setting these afterwards will re-center or re-scale the map.
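A minimal sketch putting this together, assuming geojsonObject is an already-loaded feature collection and the half-size SVG from the question:
var width = 480, height = 300, margin = 10;   // margin is an assumed value

var projection = d3.geoIdentity()
    .fitExtent([[margin, margin], [width - margin, height - margin]], geojsonObject);

var path = d3.geoPath(projection);

d3.select("svg")
    .attr("width", width)
    .attr("height", height)
  .selectAll("path")
  .data(geojsonObject.features)
  .enter().append("path")
    .attr("d", path);
If the shapes come out upside down (projected coordinates usually have y increasing northwards), geoIdentity also provides reflectY(true), and fitSize/fitExtent will take it into account.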
Hello, I am new to Three.js and texture mapping.
Let's say I have a 3D plane with a size of 1000x1000x1. When I apply a texture to it, it is either repeated or scaled so that it at least fills the full plane.
What I'm trying to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller so that it no longer fills the full plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to negative coordinates in the picture, so that those areas become transparent?
My question is:
I UV-mapped a model in Blender and imported it with the UV coordinates into my Three.js code. Now I need to scale the texture down, as described above. Do I have to remap the UV coordinates, or do I have to manipulate the image and add a transparent edge?
Further, will I be able to move the image across the surface in the same way?
I already achieved this kind of thing in Java 3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure this is possible using JavaScript, so I want to know whether it can be done with texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas on which the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture of the 3D model:
yourmodel.material.map = new THREE.CanvasTexture(document.getElementById("yourCanvas")); // wrap the canvas element in a texture
Hope it helps :)
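A slightly fuller sketch of the same idea, with hypothetical names (img is an already-loaded Image), showing how a runtime redraw reaches the GPU:
var canvas = document.getElementById("yourCanvas");
var ctx = canvas.getContext("2d");
var texture = new THREE.CanvasTexture(canvas);
yourmodel.material.map = texture;
yourmodel.material.transparent = true;   // so cleared areas of the canvas show through

function redraw(scale) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);                  // transparent background
  ctx.drawImage(img, 0, 0, img.width * scale, img.height * scale);   // draw the image smaller or larger
  texture.needsUpdate = true;                                        // re-upload the canvas to the GPU
}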
Yes. In THREE, there are some controls on the texture object:
texture.repeat and texture.offset; they are both Vector2s.
To repeat the texture twice you can do texture.repeat.set(2,2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" for the texture.
texture.wrapS (U axis) and texture.wrapT (V axis), and these can be set to:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture anywhere on the surface of your UV-mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those two options (including texture.rotation) you can position/repeat a texture pretty flexibly.
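For example, a sketch (assuming texture is already loaded onto the material) that shrinks the image to a quarter of the plane and slides it toward the middle:
texture.wrapS = THREE.ClampToEdgeWrapping;
texture.wrapT = THREE.ClampToEdgeWrapping;
texture.repeat.set(4, 4);        // the image now covers only 1/4 x 1/4 of the UV space
texture.offset.set(-1.5, -1.5);  // slide that quarter toward the middle of the surface
texture.needsUpdate = true;      // needed when the wrap mode changes after first use
If the source image has a transparent border, the clamped region comes out transparent as well (with material.transparent set to true).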
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.
I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, just the selected level of the building is shown, and a perspective camera is used. The screen is divided between the two views. The idea is that the user selects a level from the building view and a more detailed map of that selected level is shown on the other part of the screen.
I played around with sprites for a little bit and, as far as I understand it, if the sprite is being viewed with a perspective camera then the sprite's scale property is effectively its size property, and if the sprite is being viewed with an orthographic camera the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it and this seems to be the case. If I scale the sprite by 0.5, then the sprite takes up half the orthographic camera's viewport and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded to 0px, i.e. not rendered, or to 1px, which is effectively invisible). If I scale the sprite by, say, 50, then the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera's view is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels; when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers and it seems that if I scale the sprite by 4, it starts to show up in my building view, even though there's a 1000-unit distance between the views, and at that scale the sprite isn't visible with the perspective camera.
So, if my understanding is correct, then I need to either use separate scenes, have a much bigger gap between the views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites: with 2D screen coordinates or with 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the demo at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
Hope this helps!
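For reference, a minimal in-scene sprite with a current three.js build looks roughly like this (the demo above targets an older revision; "marker.png" and the numbers are placeholders):
var spriteMap = new THREE.TextureLoader().load("marker.png");            // placeholder image
var sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: spriteMap }));
sprite.scale.set(35, 35, 1);        // world units, so it shrinks/grows with camera distance
sprite.position.set(1000, 0, 0);    // place it near the detail view
scene.add(sprite);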
I am developing a map app for our school. Our school provided me with its own map image and coordinate information, so I want to use my map image as the map source and, according to the user's location, show a point on the map image. Can anybody give me some advice?
Thanks in advance.
There are 2 ways:
It is possible to change the source of the map tiles (e.g. from Bing to, say, Nokia or Google) of the Map Control. However, for this to work, it is important that the map-tile source implements mechanisms like quadkeys (e.g. see this). Therefore, to answer your question: if you would like to use the Bing Map Control with your school's map so that you can leverage the positioning features of the control, it would require that you have a properly designed map-tile server. AND, there might be some legal issues with altering the Bing Map control, if I am not mistaken.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest that it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you could do a simple mapping between these 2 systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply use the fact that you can overlay UI controls with Silverlight. So, for the latter, have a canvas with an image of the map of your school, and then on top of that canvas you can have an <image> representing the device and change its top-left coordinate with respect to the canvas.
So, in summary: as the geo-watcher gives geo x-y coordinates to your code, there is a mapping function to pixel X-Y (which you have pre-calculated), and you use that XY to position an overlay <image> or draw some "pin" on a WriteableBitmap where you have previously drawn the image of your school's map. Things get complicated with this approach when you want to have zooming as well but, even so, this solution is easily scalable.
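To make the mapping part concrete, here is a sketch in JavaScript (the same arithmetic applies in C#); the calibration numbers are hypothetical and would come from measuring two known points on your image:
var calibration = {
  refLat: 45.1230, refLon: 7.4560,  // a known geo point on campus (made-up values)
  refX: 120, refY: 340,             // the pixel on the map image where that point sits
  pxPerDegLon: 12000,               // measured image pixels per degree of longitude
  pxPerDegLat: -16000               // negative: latitude grows north, pixel Y grows down
};

function geoToPixel(lat, lon, cal) {
  return {
    x: cal.refX + (lon - cal.refLon) * cal.pxPerDegLon,
    y: cal.refY + (lat - cal.refLat) * cal.pxPerDegLat
  };
}

// when the geo-watcher fires, position the device marker at geoToPixel(lat, lon, calibration)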
Does this help clear things a bit?
Answering 2nd question in comment below:
Yes, you can zoom in and out of the canvas, but you would have to program it yourself; the canvas control itself does not have this capability. Hence, you would have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch and stretch gestures) and react to them by re-drawing a portion of the region on the canvas so that it now stretches over the entire canvas. That is zooming. For instance, for the zoom-in case, you would have to determine a geometrical area which corresponds to the zoom factor and is in proportion to the dimensions of the canvas object. Then you would have to scale that portion up so that the edges, and the empty spaces representing walls and the spaces between them, grow proportionately. Also, you have to determine the center point of that region, which you fix on the canvas so that everything grows away from it. Hence, you would achieve an appropriate zooming effect. At this point you would have to re-adjust your mapping function from geo-coordinates to pixel XY so that the "pin" or object of interest can be drawn precisely and accurately on the newly rendered surface.
I understand that this can appear quite involved, but it is straightforward once you appreciate the mechanics of what is required.
Another, easier option could be to use SVG (Scalable Vector Graphics) in a WebBrowser control. Note that you would still require the geo-coordinate to pixel-XY mapping. However, with this approach you get the zooming for free through the combination of SVG (which has transformation capabilities for the scale-up and scale-down operations) and the WebBrowser control, which renders the SVG and handles the zoom gestures on the map. For that, I believe the cost of work would be in re-creating the map of your school, which is a bitmap, as SVG. There are tools like Inkscape which you can use to load the image of your map and then trace the outlines over it. You can then save that outline document as an SVG. In fact, I would recommend this approach before tackling the canvas method, as I feel it would be the easiest path for your needs.