Reposition texture in real time - three.js

Is it possible in three.js to reposition a texture in real time?
I have a model of a heart and I'm projecting "color maps" (textures with colors) onto the model, but the position of the maps can differ slightly each time.
UPDATE
More info:
I have about 20 color maps. They are 80 by 160 pixels. I need to position them on the model. The position of the color maps may differ slightly. Currently I add all the color maps to a big texture and then I load the texture onto the model. That all works just fine.
But sometimes a surgeon feels like a color map needs to be moved over or rotated a little. I can't expect him to change the hard-coded locations in the code. I want him to be able to drag the color map to the right location.
I studied the three.js documentation and examples but I haven't found anything yet.
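For what it's worth, three.js textures expose offset, rotation, and center properties, which is exactly what dragging a color map would drive. A minimal sketch of converting a pointer drag in pixels into a UV offset — the drag-to-UV conversion below is an assumption about how the maps fill UV space, not something from the question:

```javascript
// Sketch: convert a drag of (dx, dy) pixels over a color map that is
// texW x texH pixels on screen into a fractional UV offset.
// The three.js properties (texture.offset, texture.rotation, texture.center)
// are the real API; the pixel-to-UV mapping is illustrative.
function dragToUVOffset(dxPx, dyPx, texWPx, texHPx) {
  return {
    u: dxPx / texWPx,
    // screen y grows downward, UV v grows upward
    v: -dyPx / texHPx,
  };
}

// Hypothetical usage: after a pointer drag of (8, -16) pixels over
// an 80 x 160 px color map:
const d = dragToUVOffset(8, -16, 80, 160);
// then, in a real app:
//   texture.offset.x += d.u; texture.offset.y += d.v;
//   texture.rotation = someAngle; // rotates about texture.center
console.log(d.u, d.v); // 0.1 0.1
```

Note that if the offset can push the map past the texture's edge, the wrap mode (texture.wrapS / texture.wrapT) decides what appears there.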

Related

Dimension changing of fbx model in unity 3D [duplicate]

This question already exists:
Change dimensions of Cubical Shower 3d model in unity 3D
Closed 2 years ago.
Is it possible to change the width of an fbx model in 3D without changing its realistic look, so that after changing its dimensions the model is not stretched?
If 2 objects are placed beside each other, I need to increase the size of one object and change the position of the other object relative to the first.
Thanks in advance.
This breaks down into two problems. If you want to scale an object in just one dimension it will always stretch; take your table, for example:
While the board looks fine, the legs will get stretched and look unrealistic.
Now the question is what can you do?
It depends on your model.
First of all, does your model have only one mesh, or does every component have its own mesh?
Preferably you want each component to be an independent mesh object. For your table it would be something like this:
This way you can scale only the board and then adjust the position of your legs so that they fit the new board size.
If you have only one mesh there is not a lot you can do in Unity. For that you would need to go into Blender or any other 3D modeling tool and split the components manually.
Now, if you stretched only the board and your model has a texture, you will notice that it looks stretched.
What can you do about that?
Go to your texture and first of all check the wrap mode;
in this case we want it set to Repeat. After that we need to change the material settings:
since we stretched the geometry, we need to change the tiling. Before, it was y = 1, but we scaled the y dimension, so now we need to adapt this number as well and make the texture repeat. For a table this is doable; if we are working with more complex textures that have specific parts, this will not work and you will have to change the texture manually.
Now the texture looks better, but you will probably have abrupt color changes; this is because the texture is repeated (I "circled" it in the picture). For this problem you have to edit the texture in an image-editing program and make it seamless.
I hope this helped a bit. I know this is only the basics, and to get a perfect texture and image you have to put in a bit more work, but for that I would highly recommend reading a tutorial.
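The tiling adjustment described above is just proportional arithmetic: if the geometry is stretched by some factor along an axis, the tiling along that axis must grow by the same factor to keep the texel density unchanged. A minimal sketch (in JavaScript for illustration only; in Unity the resulting value would go into the material's tiling setting):

```javascript
// Sketch: texture tiling needed after stretching geometry by `scaleFactor`
// along one axis, so the texture repeats instead of stretching.
function tilingAfterScale(originalTiling, scaleFactor) {
  return originalTiling * scaleFactor;
}

// e.g. the table above: tiling y was 1, geometry scaled 3x in y
console.log(tilingAfterScale(1, 3)); // 3
```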

Make Mapbox GL JS canvas render at higher dpi, or scale without losing quality

I have a Mapbox project in production where the street map the user customizes (location, zoom, and text) will ultimately be printed on a surface with rather small dimensions (3.5" x 2.25" at 600 dpi), keeping in mind that the zoom level affects the visibility of the different street types. The problem I am running into is this:
Since the canvas element renders at 72 dpi, in order to get an accurate representation of how the map will print I actually have to make the map's div container real size at 72 dpi (252px x 162px), which is of course quite small and far less detailed than the map will look when printed at 600 dpi.
In order to let people interact with the map at a reasonable size on screen, the cheap solution is of course to scale up the canvas using CSS transforms, i.e. #mapContainer {transform: scale(2.5)}. However, this results in a very pixelated map since, unlike SVG vector graphics (as seen in the text and graphics overlays in the images below), the browser does not re-render the canvas when it scales up.
Unscaled canvas
Scaled Canvas
I have spent a lot of time searching for a solution to this problem, and at best it looks like I may have to pull Mapbox data into tiling services like Nextzen with data-visualization libraries like D3.js, but I'd like to make one last-ditch effort to see if there is any way to trick the browser into rendering this element at a higher DPI without changing the map bounds or zoom.
I suspect the answer lies in a similar vein to this Stack Overflow question: Higher DPI graphics with HTML5 canvas. However, when I attempt it I get a null value for var ctx = canvas.getContext('2d'), since the Mapbox canvas context is "webgl", not "2d". I looked into the WebGL method of resizing a canvas for higher DPI here: https://www.khronos.org/webgl/wiki/HandlingHighDPI but I am having a hard time understanding how exactly to redraw the canvas after the resize.
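For reference, the print-size arithmetic in the question works out as follows (a plain sketch using the numbers given above):

```javascript
// Sketch: pixel dimensions needed at print resolution versus what a
// 72 dpi on-screen canvas actually provides.
function printPixels(inches, dpi) {
  return Math.round(inches * dpi);
}

const printW = printPixels(3.5, 600);  // 2100 px needed for printing
const printH = printPixels(2.25, 600); // 1350 px
const screenW = printPixels(3.5, 72);  // 252 px - the on-screen canvas
const screenH = printPixels(2.25, 72); // 162 px
const deficit = 600 / 72;              // ~8.33x more pixels per inch needed
console.log(printW, printH, screenW, screenH);
```

One trick sometimes suggested for WebGL maps — an assumption here, not something verified against the Mapbox GL JS docs — is to override window.devicePixelRatio (or use a higher-ratio environment) before constructing the map, so the WebGL backing store is allocated at the higher density while the canvas's CSS size stays the same.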

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, just the selected level of the building is shown, using a perspective camera. The screen is divided between the two views. The idea is that the user selects a level from the building view and a more detailed map of that level is shown on the other part of the screen.
I played around with sprites for a little bit, and as far as I understand it: if the sprite is viewed with a perspective camera, then the sprite's scale property is effectively its size property, and if it is viewed with an orthographic camera, the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded to 0px, i.e. not rendered, or to 1px, effectively invisible). If I scale the sprite by, say, 50, the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera's view is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels; when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers, and it seems that once I scale the sprite by 4 it starts to show up in my building view, even though there's a 1000-unit distance between the views, while the sprite isn't visible with the perspective camera.
So, if my understanding is correct, I need to either use separate scenes, leave a much bigger gap between views, use the same camera type for both views, or not use sprites.
There are basically two ways you can use sprites: with 2D screen coordinates or with 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the demo at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and out in that demo, notice that the in-scene sprites change size while the others do not.
Hope this helps!
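The behaviour described above can be quantified: under an orthographic camera the world-units-per-pixel ratio is fixed by the frustum, so the sprite scale needed for a given on-screen size can be computed directly. A sketch with hypothetical frustum numbers (not taken from the question):

```javascript
// Sketch: world units covered by one screen pixel under an
// orthographic camera whose frustum spans [bottom, top].
function worldUnitsPerPixel(top, bottom, viewportHeightPx) {
  return (top - bottom) / viewportHeightPx;
}

// Sprite scale (in world units) needed to cover `targetPx` pixels.
function spriteScaleForPixels(targetPx, top, bottom, viewportHeightPx) {
  return targetPx * worldUnitsPerPixel(top, bottom, viewportHeightPx);
}

// e.g. an ortho frustum 20 world units tall (-10..10), 400 px viewport:
console.log(worldUnitsPerPixel(10, -10, 400));       // 0.05
console.log(spriteScaleForPixels(35, 10, -10, 400)); // 1.75
```

This is why a single scale value cannot satisfy both cameras at once: the perspective camera's apparent sprite size also depends on distance, while the orthographic one does not.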

How to custom the map controller in Windows Phone to use my Map image

I am developing a map app for our school. Our school provided me with its own map image and coordinate information, so I want to use my map image as the map source and, according to the user's location, show a point on the map image. Can anybody give me some advice?
Thanks in advance.
There are 2 ways:
It is possible to change the source of the map tiles (e.g. from Bing to, say, Nokia or Google) of the Map control. However, for this to work it is important that the map-tile source implements mechanisms like quadkeys (e.g. see this). So, to answer your question: if you would like to use the Bing Map control with your school's map so that you can leverage the positioning features of the control, you would need a properly designed map-tile server. And, if I am not mistaken, there might be some legal issues with altering the Bing Map control.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you can do a simple mapping between these two systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply use the fact that you can overlay UI controls in Silverlight. For the latter, have a Canvas containing an image of your school's map, and on top of that Canvas place an <Image> representing the device, changing its top-left coordinate relative to the Canvas.
So, in summary: as the geo-watcher delivers geo coordinates to your code, a (pre-calculated) mapping function converts them to pixel X-Y, and you use that X-Y to position an overlay <Image> or draw a "pin" on a WriteableBitmap on which you have previously drawn the image of your school's map. Things get complicated with this approach when you also want zooming, but the solution is easily scalable.
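The mapping function mentioned above reduces to two linear interpolations once you have calibrated two reference points on the image against known geo-coordinates. A sketch (in JavaScript for brevity; the calibration numbers are made up, and it assumes a north-up image over an area small enough for a linear fit):

```javascript
// Sketch: build a geo -> pixel mapping from two reference points
// whose geo coordinates and pixel positions are both known.
function makeGeoToPixel(p1, p2) {
  // p1, p2: { lat, lon, x, y }
  const sx = (p2.x - p1.x) / (p2.lon - p1.lon); // px per degree longitude
  const sy = (p2.y - p1.y) / (p2.lat - p1.lat); // px per degree latitude
  return (lat, lon) => ({
    x: p1.x + (lon - p1.lon) * sx,
    y: p1.y + (lat - p1.lat) * sy,
  });
}

// Hypothetical calibration: two corners of the school map image.
const geoToPixel = makeGeoToPixel(
  { lat: 40.0010, lon: -75.0020, x: 0,   y: 0   },
  { lat: 40.0000, lon: -75.0000, x: 400, y: 600 }
);
console.log(geoToPixel(40.0005, -75.0010)); // roughly { x: 200, y: 300 }
```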
Does this help clear things a bit?
Answering 2nd question in comment below:
Yes, you can zoom in and out of the canvas, but you would have to program it yourself; the Canvas control does not have this capability. Hence, you would have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch and stretch gestures) and react by re-drawing a portion of the region on the canvas so that it now stretches over the entire canvas - that is, zooming. For instance, in the zoom-in case you would have to determine a geometrical area which corresponds to the zoom factor and is in ratio to the dimensions of the Canvas object. Then you would have to scale that portion up so that edges, and the empty spaces representing walls and the gaps between them, grow proportionately. You would also have to determine the center point of that region and fix it on the canvas so that everything grows away from it, achieving an appropriate zooming effect. At that point you would have to re-adjust your mapping function from geo-coordinates to pixel X-Y so that the "pin" or object of interest is drawn accurately on the newly rendered surface.
I understand that this can appear quite involved, but it is straightforward once you appreciate the mechanics of what is required.
Another, easier option could be to use SVG (Scalable Vector Graphics) in a WebBrowser control. Note that you would still require the geo-coordinate to pixel-XY mapping. However, with this approach you get the zooming for free from the combination of SVG (which has transformation capabilities for scale-up and scale-down operations) and the WebBrowser control, which renders the SVG and handles the zoom gestures for you. The cost, I believe, would be in re-creating your school's bitmap map as SVG. There are tools like Inkscape which you can use to load the image of your map and trace the outlines over it; you can then save that outline document as an SVG. In fact, I would recommend trying this approach before tackling the Canvas method, as I feel it would be the easiest path for your needs.

What are the pros and cons of a sprite sheet compared to an image sequence?

I come from a 2D animation background, so whenever I use an animated sequence I prefer a sequence of images. To me this makes a lot of sense, because you can easily export the image sequence from your compositing/editing software and easily define the aspect.
I am new to game development and am curious about the use of a sprite sheet. What are the advantages and disadvantages? Is file size an issue? To me it would seem that a bunch of small images would add up to the same as one massive one. Also, defining each individual sprite area seems cumbersome.
Basically, I don't get why you would use a sprite sheet - please enlighten me.
Thanks
Performance is better with sprite sheets because all your data is contained in a single texture. Let's say you have 1000 sprites playing the same animation from a sprite sheet. The drawing process would go something like:
Set the sprite sheet texture.
Adjust UVs to show a single frame of the animation.
Draw sprite 0
Adjust UVs
Draw sprite 1
.
.
.
Adjust UVs
Draw sprite 998
Adjust UVs
Draw sprite 999
Using a texture sequence could result in a worst case of:
Set the animation texture.
Draw sprite 0
Set the new animation texture.
Draw sprite 1
.
.
.
Set the new animation texture.
Draw sprite 998
Set the new animation texture.
Draw sprite 999
Gah! Before drawing every sprite you would have to change the render state to use a different texture, and this is much slower than adjusting a couple of UVs.
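The "adjust UVs" step is cheap because it is pure arithmetic. A sketch of computing the UV rectangle for frame i of a grid-packed sheet (the 4x4 layout is hypothetical):

```javascript
// Sketch: UV rectangle for frame `i` of a sprite sheet laid out as a
// grid of `cols` x `rows` equally sized frames. UV origin is bottom-left;
// frames are numbered left-to-right, top-to-bottom.
function frameUV(i, cols, rows) {
  const col = i % cols;
  const row = Math.floor(i / cols);
  const w = 1 / cols;
  const h = 1 / rows;
  return {
    u0: col * w,
    v0: 1 - (row + 1) * h, // bottom edge of the frame
    u1: (col + 1) * w,
    v1: 1 - row * h,       // top edge of the frame
  };
}

// e.g. frame 5 of a 4x4 sheet (second row, second column):
console.log(frameUV(5, 4, 4)); // { u0: 0.25, v0: 0.5, u1: 0.5, v1: 0.75 }
```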
Many (most?) graphics cards require power-of-two, square dimensions for images - for example 128x128, 512x512, etc. Many, if not most, sprites, however, do not have such dimensions. You then have two options:
Round the sprite image up to the nearest power-of-two square. A 16x32 sprite becomes twice as large, padded with transparent pixels to 32x32. (This is very wasteful.)
Pack multiple sprites into one image. Rather than padding with transparency, why not pad with other images? Pack in those images as efficiently as possible! Then just render segments of the image, which is totally valid.
Obviously the second choice is much better, with less wasted space. So if you must pack several sprites into one image, why not pack them all in the form of a sprite sheet?
So, to summarize: image files, when loaded into the graphics card, must be power-of-two and square. However, the program can choose to render an arbitrary rectangle of that texture to the screen; the rectangle doesn't have to be power-of-two or square. So, pack the texture with multiple images to make the most efficient use of texture space.
Sprite sheets tend to be smaller files (since there's only one header for the whole lot).
Sprite sheets load quicker, as there's just one disk access rather than several.
You can easily view or adjust multiple frames at once.
There's less wasted video memory when you load the whole lot into one surface (as Ricket has said).
Individual sprites can be delineated by offsets (eg. on an implicit grid - no need to explicitly mark or note each sprite's position).
There isn't one massive benefit to using sprite sheets, but several small ones. The practice also dates back to a time before most people were using proper 2D graphics software to make game graphics, so the artist's workflow wasn't necessarily the most important consideration back then.