Generating UV coordinates for texture atlas - sprite-sheet

I'm trying to find a tool which will generate UV coordinates from a texture atlas I already have.
For example, given an atlas image like mine, I need an XML or something similar which describes the individual frames in that image.
Is there any tool which will do this? The tools I found only create such XMLs after adding the individual frame images one by one.
Please correct me if my understanding is wrong.
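For context, whatever tool produces the frame rectangles, converting them from pixels to UV coordinates is simple arithmetic. A minimal JavaScript sketch, with made-up frame and atlas sizes:

```javascript
// Minimal sketch: convert one frame's pixel rectangle inside an atlas into
// normalized UV coordinates. The frame and atlas sizes below are made up.
function frameToUV(frame, atlasWidth, atlasHeight) {
  return {
    u0: frame.x / atlasWidth,                     // left edge
    v0: frame.y / atlasHeight,                    // top edge
    u1: (frame.x + frame.width) / atlasWidth,     // right edge
    v1: (frame.y + frame.height) / atlasHeight,   // bottom edge
  };
}

// Example: a 64x64 sprite at pixel (128, 0) in a 512x512 atlas.
console.log(frameToUV({ x: 128, y: 0, width: 64, height: 64 }, 512, 512));
// -> { u0: 0.25, v0: 0, u1: 0.375, v1: 0.125 }
```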

Related

Rendering images and voxelizing the images

I am using the ShapeNet dataset, which provides 3D models in .obj format. I rendered images of these 3D models using the pyrender library, which gives me an image like this:
Now I am using raycasting to voxelize this image. The voxel model I get looks like the one below:
I cannot understand why I am getting the white or light-brown artifacts along the boundary of the object.
The only explanation I could come up with is that the pixels at the boundary of the object contain a mix of two colors, so when I traverse the image as a numpy array I get an average of those two colors, which produces these artifacts. But I am not sure whether this is the correct explanation.
If anyone has any idea what the reason could be, please let me know.

How to flip image vertically in blender without dealing with UV?

I'm writing a script which imports models saved in a custom format. The textures are saved as vertically flipped DDS files, so when I open these textures in Blender the mapping doesn't look right. Is there any way to flip the image vertically without changing the UV coordinates?
P.S. Sorry for my bad English.
Using Blender Internal you can set the image mapping scale to -1.
In Python you can set that with material.texture_slots['Texture'].scale = (-1, -1, -1); note that this is a texture-slot property, not a texture property.
In Cycles you do the same thing with a texture Mapping node.
Using Python you can set the Mapping node's scale with map_node.scale = (-1, -1, -1).

Loading real terrain into three.js using free map data

Has anyone got any ideas on how to load real terrain data into a three.js scene?
I would like to have a 3D model on the actual terrain, i.e. the elevations with overlaid satellite imagery.
Create scene: OK
Load and animate models: OK
Terrain and satellite imagery: ???
Thanks in advance.
Jon
Three.js has an example on how to make a terrain, so that one's covered.
Regarding the satellite imagery, you'll use it as a texture on your terrain. The only important part is getting the texture coordinates right, which may end up being tricky.
This blog post gives a good example and its code is available online, too.
If you somehow have, or are able to calculate, the elevation data of the points you need in a regular grid,
you can use a plane geometry and a JavaScript XML loader to load your data into the plane geometry's vertices.
Use whatever material you need for the plane and set its "map" attribute to an image texture loaded with ImageLoader, as sketched below.
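Here is a minimal sketch of that idea, assuming the elevation values are already available as a flat row-major array and that a recent three.js build is used (so it uses TextureLoader and the BufferGeometry-based PlaneGeometry rather than the older ImageLoader/vertices API mentioned above); the grid size and file path are placeholders:

```javascript
// Minimal sketch: turn a row-major grid of elevation values into a textured
// terrain mesh. Assumes `heights` holds (segments + 1) * (segments + 1) values
// and that a recent three.js build is used (PlaneGeometry is a BufferGeometry).
const segments = 127;                                  // grid resolution (placeholder)
const geometry = new THREE.PlaneGeometry(1000, 1000, segments, segments);

// Write one elevation value into the z component of every vertex.
const position = geometry.attributes.position;
for (let i = 0; i < position.count; i++) {
  position.setZ(i, heights[i]);                        // `heights` comes from your own data
}
position.needsUpdate = true;
geometry.computeVertexNormals();

// Drape the satellite image over the plane through the material's "map" slot.
const texture = new THREE.TextureLoader().load('satellite.jpg');  // placeholder path
const material = new THREE.MeshLambertMaterial({ map: texture });

const terrain = new THREE.Mesh(geometry, material);
terrain.rotation.x = -Math.PI / 2;                     // lay the plane flat on the ground
scene.add(terrain);                                    // `scene` is your existing scene
```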
If you have randomly placed elevation data, you can use Face3 or another type of three.js geometry, together with an algorithm that builds a TIN (triangulated irregular network), to visualize the terrain.
For the geospatial part of the question you might also want to take a look at the Cesium library and the cesium.js documentation, as well as at this three.js terrain-loading method and this osg.js demo.

How to use UV mapping in three.js

I use a wood texture image in my model. By default the texture is stretched over the model, as you can see on woodark. When I change the repeat, the texture stretches even more and I don't understand why. I have been trying to learn how to use UV mapping correctly on my model, but I have only found basic examples that use colored pixels.
Thanks for any answers.
You should make sure your textures have power-of-two dimensions (i.e. 256x256, 512x512, etc.). Textures of arbitrary dimensions (NPOT) cause all kinds of mapping trouble in WebGL.
If you are unable to resize the textures server-side, you can do it client-side. This link has some sample JavaScript code, as well as other relevant information: http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
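A rough sketch of the client-side resize, combined with a repeating three.js texture; the helper names and file path are illustrative and not from any particular library:

```javascript
// Rough sketch of the client-side approach: scale an arbitrary-sized image to
// the nearest power-of-two dimensions with a canvas, then use it as a
// repeating three.js texture. Function and file names are just illustrative.
function nearestPowerOfTwo(n) {
  return Math.pow(2, Math.round(Math.log(n) / Math.LN2));
}

function toPowerOfTwo(image) {
  const canvas = document.createElement('canvas');
  canvas.width = nearestPowerOfTwo(image.width);
  canvas.height = nearestPowerOfTwo(image.height);
  canvas.getContext('2d').drawImage(image, 0, 0, canvas.width, canvas.height);
  return canvas;
}

new THREE.ImageLoader().load('wood.jpg', function (image) {    // placeholder path
  const texture = new THREE.Texture(toPowerOfTwo(image));
  texture.wrapS = texture.wrapT = THREE.RepeatWrapping;         // repeating needs POT sizes
  texture.repeat.set(4, 4);                                     // tile the wood 4 x 4 times
  texture.needsUpdate = true;
  material.map = texture;                                       // `material` is your model's material
  material.needsUpdate = true;
});
```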

Using depth image data to construct 3D model in WebGL

I have an image that is a combination of the RGB and depth data from a Kinect camera.
I'd like to do two things, both in WebGL if possible:
Create 3D model from the depth data.
Project RGB image onto model as texture.
Which WebGL JavaScript engine should I look at? Are there any similar examples, using image data to construct a 3D model?
(First question asked!)
Found that it is easy with 3D tools in Photoshop (3D > New Mesh From Grayscale): http://www.flickr.com/photos/forresto/5508400121/
I am not aware of any WebGL framework that resolves your problem specifically. I think you could potentially create a grid with your depth data, starting from a rectangular uniform grid and moving each vertex to the back or to the front (Z-axis) depending on the depth value.
Once you have this, you need to generate the texture-coordinate array. From the image you posted on Flickr I would infer that there is a one-to-one mapping between the depth image and the texture, so generating the array should be straightforward: you map the corresponding (s, t) coordinate on the texture to each vertex, so for every vertex you store two values in the texture-coordinate array. Then you bind it.
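A rough sketch of those two pieces (the displaced grid and the matching one-to-one texture coordinates), assuming the depth values are already available as a flat array; all names and the z scale are illustrative:

```javascript
// Rough sketch: build one vertex per depth pixel, displaced along Z, together
// with a one-to-one (s, t) texture coordinate per vertex. `depth` is assumed
// to be a flat array of width * height values; `zScale` is an arbitrary factor.
function buildGrid(depth, width, height, zScale) {
  const positions = new Float32Array(width * height * 3);
  const texCoords = new Float32Array(width * height * 2);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      positions[i * 3 + 0] = x / (width - 1) - 0.5;    // X in [-0.5, 0.5]
      positions[i * 3 + 1] = 0.5 - y / (height - 1);   // Y in [-0.5, 0.5], row 0 at the top
      positions[i * 3 + 2] = depth[i] * zScale;        // move the vertex back/front along Z
      texCoords[i * 2 + 0] = x / (width - 1);          // s follows the pixel column
      texCoords[i * 2 + 1] = 1 - y / (height - 1);     // t follows the pixel row (flipped for GL)
    }
  }
  return { positions, texCoords };
}
```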
Finally, you need to make sure you are using the texture to color your mesh. This is a two-step process:
First step: pass the texture coordinates as an attribute vec2 to the vertex shader and copy them into a varying vec2.
Second step: in the fragment shader, read the varying vec2 you created in step one and use it to compute gl_FragColor.
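As a concrete illustration of those two steps, here is a minimal WebGL 1 shader pair; the attribute, uniform, and varying names are assumptions, not taken from any particular engine:

```javascript
// Minimal sketch of the two shader steps in WebGL 1 GLSL (ES 1.0); the
// attribute, uniform and varying names are illustrative, not from any library.
const vertexShader = `
  attribute vec3 aPosition;     // grid vertex, already displaced along Z by the depth value
  attribute vec2 aTexCoord;     // one (s, t) pair per vertex
  uniform mat4 uMvpMatrix;
  varying vec2 vTexCoord;
  void main() {
    vTexCoord = aTexCoord;      // step one: save the attribute into a varying
    gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
  }
`;

const fragmentShader = `
  precision mediump float;
  varying vec2 vTexCoord;
  uniform sampler2D uColorMap;  // the Kinect RGB image
  void main() {
    // Step two: read the varying and use it to look up the color.
    gl_FragColor = texture2D(uColorMap, vTexCoord);
  }
`;
```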
I hope it helps.

Resources