Is it possible to change the width of an FBX model in 3D without changing its realistic look, so that after changing its dimensions the model does not look stretched?
If two objects are placed beside each other, I need to increase the size of one object and reposition the other relative to the first.
Thanks in advance.
This breaks down into two problems. If you scale an object in just one dimension it will always stretch; take your table as an example:
While the board looks fine, the legs get stretched and look unrealistic.
Now the question is what can you do?
It depends on your model.
First of all: does your model consist of only one mesh, or does every component have its own mesh?
Preferably you want each component to have an independent mesh object. For your table it would be something like this:
This way you can scale only the board and then adjust the positions of the legs so that they fit the new board size.
If you have only one mesh, there is not a lot you can do in Unity. You would need to go into Blender or another 3D modeling tool and split the components manually.
Now, if you stretched only the board and your model has a texture, you will notice that the texture looks stretched.
What can you do about that?
Go to your texture and first of all check the wrap mode.
In this case we want it set to Repeat. After that we need to change the material settings.
Since we stretched the geometry, we need to change the tiling: before, it was y = 1, but we scaled the y dimension, so now we need to adapt this number as well and make the texture repeat. For a table this is doable; if you work with more complex textures that have specific parts, this will not work and you will have to change the texture manually.
Now the texture looks better, but you will probably have abrupt color changes because the texture is repeated; I "circled" it in the picture. To fix this, you have to edit the texture in an image editing program and make it seamless.
I hope this helped a bit. I know this covers only the basics, and to get a perfect texture and image you have to put in a bit more work, but for that I would highly recommend reading a tutorial.
Related
Hello, I'm trying to achieve the effect in the image below (like a shine of light, but only on top of the raw image).
Unfortunately I cannot figure out how to do it. I tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it renders a render texture that is captured by a camera.
I'm totally lost here; any kind of help will be appreciated. How can I create that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in a couple of ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar method that maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some fresnel shaders could work with that too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated with a script from an image, so if you have a lot of images, it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
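As a rough illustration of that scripted generation (it is plain image processing, so sketched here in JavaScript; the alpha threshold, the search radius, and the brute-force search are assumptions for clarity, and a real implementation would use a faster distance transform):

// Rough sketch: brute-force (unsigned) distance field from an image's alpha
// channel. `maxDist` and the alpha threshold of 127 are assumptions.
function distanceField(imageData, maxDist = 16) {
  const { width, height, data } = imageData;
  const inside = (x, y) => data[(y * width + x) * 4 + 3] > 127;
  const field = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let best = maxDist;
      // distance to the nearest pixel on the other side of the edge
      for (let dy = -maxDist; dy <= maxDist; dy++) {
        for (let dx = -maxDist; dx <= maxDist; dx++) {
          const nx = x + dx, ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
          if (inside(nx, ny) !== inside(x, y)) {
            best = Math.min(best, Math.hypot(dx, dy));
          }
        }
      }
      field[y * width + x] = best / maxDist; // 0 at the edge, 1 far away
    }
  }
  return field; // bake this into a texture and sample it in the shader
}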
If you don't need the image to be that dynamic, then maybe just creating an inner glow black/white texture in Photoshop and overlaying it using an additive shader would work better for you. If you don't know how to write shaders, then maybe the two above approaches are a bit of a tall order.
Is it possible in THREE JS to re-position a texture in real time?
I have a model of a heart and I'm projecting "color maps"/"texture with colors" onto the model but the position of the maps can be a little different each time.
UPDATE
More info:
I have about 20 color maps. They are 80 by 160 pixels. I need to position them on the model. The position of the color maps may differ slightly. Currently I add all the color maps to a big texture and then I load the texture onto the model. That all works just fine.
But sometimes a surgeon feels like a color map needs to be moved over or rotated a little. I can't expect him to change the hard-coded locations in the code. I want him to be able to drag the color map to the right location.
I studied the THREE JS documentation and examples but I haven't found anything yet.
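For what it's worth, a texture can be moved at runtime in Three.js through its offset, rotation, and center properties. Here is a minimal sketch, assuming each color map is its own THREE.Texture and some drag handler supplies the deltas (the file name and handler signature are placeholders):

import * as THREE from 'three';

// Minimal sketch: one THREE.Texture per color map, repositioned at runtime.
const texture = new THREE.TextureLoader().load('colormap.png');
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
texture.center.set(0.5, 0.5); // rotate around the map's centre

function onDrag(dx, dy, dRotation) {
  texture.offset.x += dx;        // slide the map across the UVs
  texture.offset.y += dy;
  texture.rotation += dRotation; // radians
  // offset/rotation feed a uniform, so no texture.needsUpdate is required
}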
I am building quite a complex 3D environment in Three.js (FPS-a-like). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process, where models use materials and materials use textures.
loadTextures();
loadMaterials();
loadModels();
//start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a certain texture scale; you have to set the repeat to scale it appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now
As you can see, the textures are not uniform in size.
Is there an easy way to achieve this? Cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods: things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). Inside that function, new objects are instantiated.
You should be able to do this by modifying your geometry UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (as in, 0.0 = one edge, 1.0 = other edge), no matter what the actual size of texture or face is. But by modifying the UVs in geometry (multiply them by some factor based on face physical size), you can use the same material and texture in different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale, and your desired working units (e.g. mm or m). Sorry, I don't have (or know of) a ready algorithm to do it, but that's the approach you probably need to take. It should be quite doable with a bit of experimentation and google-fu.
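A rough sketch of that UV-scaling idea with current-style BufferGeometry attributes; the tileSize unit and the texture path are assumptions:

import * as THREE from 'three';

// Rescale a plane's UVs so one texture tile always covers `tileSize`
// world units, whatever the plane's dimensions are.
function fitUvsToWorldSize(geometry, width, height, tileSize = 1) {
  const uv = geometry.attributes.uv;
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(i, uv.getX(i) * (width / tileSize), uv.getY(i) * (height / tileSize));
  }
  uv.needsUpdate = true;
}

const texture = new THREE.TextureLoader().load('brownplank.jpg');
texture.wrapS = texture.wrapT = THREE.RepeatWrapping; // let UVs exceed 1

const material = new THREE.MeshBasicMaterial({ map: texture }); // shared

const geometry = new THREE.PlaneGeometry(4, 2);
fitUvsToWorldSize(geometry, 4, 2); // texture now tiles every world unit
const mesh = new THREE.Mesh(geometry, material);

The same shared material and texture then appear at a uniform scale on planes of any size, with no per-object cloning.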
I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face, but you aren't able to update those after the first render, so that wouldn't work. I've also looked into UV coordinates, but I don't understand how they would work in this situation, nor how to actually implement that in Three.js – there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
I'm not 100% sure what you are trying to do, i.e. whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can be also be used to do all sorts of weird things because of that, but that's another adventure entirely)
Generally mipmapping is automatic based on the filtering you use - however it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level in manually, rather than having it be generated automatically.
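In outline, the manual approach from that example looks something like this (a hedged sketch; the flat-color canvases stand in for real tile images at each zoom level):

import * as THREE from 'three';

// Draw a single flat-color canvas to act as one mipmap level.
function tile(size, color) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, size, size);
  return canvas;
}

const sizes  = [128, 64, 32, 16, 8, 4, 2, 1]; // complete chain down to 1x1
const colors = ['#f00', '#0f0', '#00f', '#ff0', '#f0f', '#0ff', '#fff', '#888'];

const texture = new THREE.CanvasTexture(tile(128, '#f00')); // level 0 image
texture.mipmaps = sizes.map((size, i) => tile(size, colors[i]));
texture.generateMipmaps = false; // we supply the chain ourselves
texture.minFilter = THREE.LinearMipmapLinearFilter;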
HTH
I have several images of a rotating object. Each image shows a different angle. Now I want to let the user rotate the object with his or her fingers. This works but there aren't enough frames to show a smooth rotation. It's too jumpy.
I want to make it smoother, and for that I probably need to generate more "steps": images at the missing in-between angles. Is there an existing algorithm or technique I could use?
I think that if you try to interpolate pixel values temporally between two consecutive images, it will give poor results, but it might be worth a try.
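For reference, that naive temporal blend is just a cross-fade between the two captured frames nearest the requested angle; a minimal sketch with the 2D canvas API (frameA/frameB are assumed to be already-loaded images):

// t in [0, 1] is the fractional position between the two captured angles.
function drawIntermediateFrame(ctx, frameA, frameB, t) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalAlpha = 1 - t;
  ctx.drawImage(frameA, 0, 0);
  ctx.globalAlpha = t;
  ctx.drawImage(frameB, 0, 0);
  ctx.globalAlpha = 1; // restore the default
}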
A more interesting approach would be to make a 3D estimation of your object using stereoscopic techniques and then project a synthetic view of the estimated scene at an intermediate position. For this to work, you will need to know the precise angle of the object at each frame. Occlusion is also an issue with stereoscopy.