Cannot map a texture to the object - opengl-es

Hi, I'm new to OpenGL ES and recently started working with textures, and now I'm facing a small problem with them: I'm not able to map textures onto my objects. To be clear, I'm using a PNG file and creating a texture from it. Up to that point everything goes fine, but I don't know what's wrong with the texture mapping. Only a few PNG files work, and those are mapped perfectly onto the objects, but when I use my own images, which are in the same .png format, I don't get the desired result. Please help me fix this issue. Thanks in advance.

I finally found the solution to the issue I faced: it was the pixel dimensions. The dimensions of the image were not a power of 2, so I resized the images I was using and it started working.
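For anyone hitting the same wall: OpenGL ES 1.x and 2.0 (without the NPOT extension) require power-of-two texture dimensions. Here is a minimal sketch, assuming Python with Pillow installed (the helper names `next_pow2` and `pad_to_pow2` are my own), that pads an image onto a transparent power-of-two canvas before it is uploaded as a texture:

```python
from PIL import Image

def next_pow2(n):
    # Smallest power of two >= n.
    p = 1
    while p < n:
        p *= 2
    return p

def pad_to_pow2(img):
    """Paste the image onto a transparent canvas whose sides are powers of two."""
    w, h = img.size
    canvas = Image.new("RGBA", (next_pow2(w), next_pow2(h)), (0, 0, 0, 0))
    canvas.paste(img, (0, 0))
    return canvas
```

Note that padding shifts the usable texture coordinates, so the UVs would need to be scaled to the sub-rectangle that holds the real image; a plain `img.resize()` to power-of-two dimensions avoids that, at the cost of stretching the pixels.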

Related

Conversion of .dae to .glb / .gltf

I am having success converting .usdz models to .dae using Xcode. However, I eventually want the files to end up as .glb / .gltf.
I'm using Blender to encode .dae into .glb, as well as a tool made by the Khronos Group here: https://github.com/KhronosGroup/COLLADA2GLTF
The problem is that the Xcode output is a .dae file plus a folder of .png texture files. Xcode can read this just fine and reconstitute the original model, but Blender seems to be incapable of using these texture files, and the same goes for the Khronos CLI converter. Using these tools, the .dae shows up without textures: colorless, constituted only in shape from the 3D coordinates.
Does anybody know how to use this folder of .png texture files to render color on the .dae in Blender?
Ultimately I want to convert .usdz to .glb / .gltf, and this is the route I have found, but I'm running into this hiccup. Searching Google did not resolve it, hence my question here.
I've been experimenting and have found another way you can possibly do the conversion to GLB outside of Blender. Basically, as you have found, you convert the .usdz to .dae using Xcode, which results in a .dae file with a folder of textures (.png's). Now zip the file and the folder together, and rename the .zip file ending to .zae. Take this .zae archive to https://products.aspose.app/3d/conversion and you can convert it to GLB.

Once you have the GLB file, test it out in Scene Viewer and you should now have the colours, textures etc. on the model.

I have had a few issues on one model, namely that not all the textures seem to be there; the majority are, but a few more detailed elements are missing. For example, I have a 3D framed-artwork model: the black shiny frame, the brown paper backing, and the art colour background are fine, but the details of the artwork are missing! Very weird. The artwork itself is just one texture file as a .png, together with the other files for metallic, colours, etc., and it shows correctly in Photoshop or a viewer. The .dae file also shows correctly in Preview; it's only when it's all brought together in GLB that some detail goes missing. Kind of 80% sorted though.

Hope this is of help!! And I also hope I have saved another fellow human being from tearing out the remainder of their hair...lol
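The zip-and-rename step can be scripted so the relative texture paths inside the .dae still resolve. A minimal sketch using only Python's standard library (`make_zae` is a hypothetical helper name, not part of any tool mentioned above):

```python
import os
import zipfile

def make_zae(dae_path, texture_dir, out_path):
    """Bundle a .dae file and its texture folder into a .zae archive.

    A .zae is just a renamed zip with the .dae at the archive root and the
    texture folder kept under its original name, so relative references
    inside the .dae continue to work.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(dae_path, os.path.basename(dae_path))
        for name in os.listdir(texture_dir):
            z.write(os.path.join(texture_dir, name),
                    os.path.basename(texture_dir) + "/" + name)
```

Usage would be something like `make_zae("model.dae", "textures", "model.zae")`, after which the .zae can be fed to the converter.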

why bodymovin result animation looks more stretched than the source frames

I'm making an animation of a product that listens to you and reacts accordingly, and I want to upload the animation to my Webflow project.
My animation resolution is 1080x720. I export the keyframes as PNG images (as the Webflow tutorial recommends), import those images into a new After Effects project, and then export the animation (I follow each step of the tutorial exactly as written). The problem comes when I test the resulting JSON in the LottieFiles previewer: the animation looks stretched. I can't explain it well, so I'll upload two images to show the problem.
The original frame is a PNG image used in the Bodymovin sequence.
The JSON output frame is a base64 image (the first frame of the animation) stored in the Bodymovin result, data.json.
The two images above are the same resolution but look different; I want to know why, and how to fix it.
Thanks in advance.
Here is a link to the original Webflow tutorial that I followed.
Sorry, this was just a configuration problem, and I figured out how to fix it: in the Bodymovin settings, under Assets, turn on "Copy Original Assets". By default, Bodymovin appears to post-process the frames, trimming the white/transparent padding and stretching the content to fill; enabling "Copy Original Assets" forces Bodymovin to export the images untouched.

Working PDF artwork for textures in SpriteKit

SK is supposed to be able to support vector artwork as long as it's in valid PDF format. But what the heck is "valid"?
I found a simple SVG and converted it to PDF using convert.io. I put it in an xcassets, set the scale to Single Scale, and off it went.
Then I got another SVG, converted it to PDF using convert.io, put it into the same xcassets, set the scale to Single Scale and got:
Error loading image resource: "tank"
The PDF seems perfectly OK: it loads fine in Preview and Gapplin, and I can't see any difference between it and the one that does work.
Does anyone know how to debug this?

PILLOW enhance module messing up HTML Canvas destination out image

As part of an experiment I'm trying to process images edited in an HTML canvas. I am erasing parts of the image, like a brush, by drawing on it with
ctx.globalCompositeOperation = "destination-out";
I'm converting the canvas to an image with ctx.toDataURL() and saving it on the server after decoding the base64 data. The saved image at this stage looks like this:
The white areas are actually transparent. Now I'm putting this same image through the Pillow ImageEnhance module:
from PIL import Image, ImageEnhance

imObj = Image.open(imgName)  # imgName: path to the saved canvas PNG
enhObj = ImageEnhance.Contrast(imObj)
enhObj.enhance(factor).show()  # factor > 1.0 increases contrast
Though the contrast adjustment has happened properly, this is how the image looks:
Any idea why this is happening and how to tackle this?
It's a problem with the application that show() invokes, which in this case is an ImageMagick viewer. Once saved to disk, the image rendered properly. The problem of lost transparency still exists, however, when desaturating with the ImageEnhance.Color() module.
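One way to sidestep the transparency loss is to enhance only the RGB channels and re-attach the original alpha afterwards. A minimal sketch, assuming Pillow and an RGBA input (`enhance_preserving_alpha` is a hypothetical helper, not a Pillow API):

```python
from PIL import Image, ImageEnhance

def enhance_preserving_alpha(img, factor):
    """Apply a contrast enhancement to the RGB channels only,
    then restore the untouched alpha channel."""
    rgb = img.convert("RGB")           # drop alpha so the enhancer can't touch it
    alpha = img.getchannel("A")        # keep the original transparency mask
    enhanced = ImageEnhance.Contrast(rgb).enhance(factor)
    enhanced.putalpha(alpha)           # re-attach alpha; result is RGBA again
    return enhanced
```

The same pattern should work with ImageEnhance.Color for desaturation, since only the RGB image ever passes through the enhancer.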

How to subset raster image using gdal?

I have read the pixel values of a raster image using the GDAL libraries in Visual Studio 2010 (VC++).
Next, I have to crop (subset) the image according to the grid given in a shapefile.
Forget about the grid for now.
I just want to clip a square or rectangular area and save it to a new file.
I have read some documents that suggest using gdal_translate and gdalwarp, but those are presented as Python/command-line tools, whereas I want to do this from C++.
Please help me as early as possible.
I have solved the problem of cropping the image using VC++ with the GDAL libraries: I created a VRTDataset of the desired crop size and then saved it using CreateCopy().
