I can see in the logger that Kudan recognizes that the model has textures; it shows this in the log:
2016-04-29 19:34:01.645 MyApp[1313:525371] WARNING: Could not find file for texture texture_0001.png
2016-04-29 19:34:01.646 MyApp[1313:525371] WARNING: Could not find file for texture texture_0002.png
Is there a way to specify the path where Kudan should look for these textures?
Or is there a way to load and apply the textures to the meshNodes automatically?
The framework checks whether the filename specified in the model exists in [NSBundle mainBundle].
As such, it shouldn't matter too much where you place your texture files, as long as they're part of the main bundle.
The key thing to note is that when your model is converted to the KudanAR-compatible version, it passes on the filenames of the textures that were assigned to the nodes. So when you add your textures to the project, they need to keep their original names.
Is anyone else having issues with Apple's Reality Converter? Mainly, when I add an .obj file, it displays the white object. However, when I go ahead and add texture .png files to the Materials folder, nothing gets updated. I end up with a plain white 3D object (even after restarting/exporting).
The only way it works is if I upload a .glTF folder, where it actually adds in the textures/color.
Not sure if this is a glitch or if I'm doing something wrong?
To apply a texture to an .obj file you need not only the texture file but also its inseparable companion: an .mtl (Material Template Library) file, a dedicated material-definitions file for .obj.
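For illustration, a minimal pairing might look like this (all file and material names here are hypothetical):

# model.mtl -- the material library
newmtl BodyMaterial
Kd 1.0 1.0 1.0            # diffuse colour multiplier
map_Kd body_diffuse.png   # diffuse texture map

# and inside model.obj:
mtllib model.mtl
usemtl BodyMaterial

Without the mtllib/usemtl references and the .mtl file itself, an importer has no way of knowing which texture belongs on which part of the mesh, so you end up with a plain white object.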
I started a new Xcode project with the ARKit template and simply replaced the "ship.scn" asset with my "test.scn" file. The object is about 16.5mm wide and 4.8mm tall. The ship worked fine, of course, but my test object, which reads "test", does not rotate as I move around it or scale as I move towards or away from it, yet it does track in one location.
I compared the ship and test attribute panels and can't find anything different about them, except that the ship is textured and my test text is not. What is inherently special about .scn objects that makes them behave correctly in ARKit, besides their size? I've read through the documentation about anchoring, but it seems like I shouldn't have to do this in code if it's already an .scn object.
In case anyone wants to test the file I'm using in the ARKit template to see how it's behaving, the file is here: https://ufile.io/ey49t
I'm answering my own question because I think it could help others who are making a similar mistake, but it can be closed if it doesn't make sense.
The problem was that the size was much too large, so I could not see scaling or rotation no matter how much I moved around the room. Compare the scaled size shown in the last attribute panel of the ship model (not its actual size), then scale your own model down enough that it is actually reasonable.
I was thinking that you need a deformer to read the clusters etc. to be able to get the original (usually T-pose) position of the skeleton.
Also, FBX supports poses etc., but I've never had a file that implemented them.
But to my surprise, when I import an FBX file without any mesh inside into 3ds Max and uncheck "animation", I get the T-pose.
Any idea about this?
Thank you in advance.
FbxCluster has GetTransformMatrix and GetTransformLinkMatrix. The former returns the original transform of the bone (that should be used to initialize the skinning), and the latter the corresponding orientation of the skinned node. Additionally, the skin node can also have "geometric rotation". I don't think there's anything more than that.
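As a rough sketch of how those two matrices are typically combined, here is the standard inverse-bind ("offset") computation using the FBX SDK's Python bindings; the call pattern (passing in an FbxAMatrix to be filled) mirrors the C++ API, but treat the exact binding signatures as assumptions:

from fbx import FbxAMatrix

def cluster_bind_matrices(cluster):
    # Global transform of the mesh at bind time (used to initialize skinning).
    transform = FbxAMatrix()
    cluster.GetTransformMatrix(transform)
    # Global transform of the bone (the cluster's link) at bind time.
    link_transform = FbxAMatrix()
    cluster.GetTransformLinkMatrix(link_transform)
    # Standard inverse-bind ("offset") matrix; the geometric transform
    # mentioned above would multiply on the right if present.
    offset = link_transform.Inverse() * transform
    return transform, link_transform, offset

Evaluating the skeleton with these bind-time matrices instead of the animated node transforms is effectively what gives you back the T-pose.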
I have some JSON files that define geometries, as required by r42 of Three.js's JSONLoader class. The files are version 2 of the format.
Is it feasible to manually update these files between versions?
The first obvious difference is that the file is no longer JavaScript, but actual JSON. This was easy to correct for. However, the new format has a metadata section, and thirty minutes of experimentation with it isn't getting me anywhere.
I'm seeing problems because the Geometry object's materials property is an empty array. The geometry has multiple parts with different materials, as it did in r42.
Does anyone know how I might manually tweak these .json files to work with r55?
I don't have complete knowledge of how the internals of the format have changed, but here are a couple of hints:
If you have the source object, the best way would be to re-export/convert. There should now also be more converters and exporters to use if your source format is obscure. If the source is unknown, some Googling might be worth it.
The metadata section doesn't matter; it isn't used for anything in the loader.
There is no more Geometry.materials. Instead, JSONLoader passes the loaded materials as a separate parameter to its callback. See the Migration Guide (r52->r53) for more on that. In fact, the loader interface also changed in r46.
The git log of some example model files (searching for changes there could be your way forward if you really need to migrate manually) suggests there may be, e.g., UV flipping, which would be difficult to fix by hand but could be worked around in code. But first you'd need to get something to display on screen.
Try dragging the file into the editor, then do File/Export Geometry.
Here is a fix for drag-and-drop into the editor. Add this code before the drop event handler in index.html. I tested this in Chrome (24.0.1312.56), Safari, and Firefox on OS X.
// Cancel the browser's default dragover behaviour; without this the
// "drop" event never fires and the browser just opens the file itself.
document.addEventListener("dragover", function(e) {
    e.preventDefault();
});
@mrdoob's answer worked after a few patches to the editor (here and here).
However, it's worth noting that upgrading the files by hand in a text editor is possible, especially if you don't have any UV coords.
Earlier versions did not use JSON; they were JavaScript files. That conversion is straightforward.
You can either ignore the metadata section or port it from the comments into an object.
If you do have UV coords, then they must be mapped differently (I believe an axis is flipped).
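If you do go the manual route with UVs, a small script is less error-prone than a text editor. A minimal sketch in Python, assuming the newer file stores UV layers as flat [u, v, u, v, ...] arrays under a "uvs" key (verify against your own file first):

import json

with open("model.json") as f:
    data = json.load(f)

# Flip the V axis of every UV layer: v becomes 1 - v.
for layer in data.get("uvs", []):
    for i in range(1, len(layer), 2):
        layer[i] = 1.0 - layer[i]

with open("model.flipped.json", "w") as f:
    json.dump(data, f)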
I am trying to do animations on iPhone using OpenGL ES. I am able to create the animation in the Blender 3D software, and I can export a .obj file from Blender that works in OpenGL on the iPhone.
But I am not able to export my animation work from Blender to OpenGL. Can anyone please help me solve this?
If you have a look at this article by Jeff LaMarche, you'll find a Blender script that will output a 3D model to a C header file. There's also a follow-up article that improves upon the aforementioned script.
After you've run the script, it's as simple as including the header in your source, and passing the array of vertices through your drawing function. Ideally you'd want a method of loading arbitrary model files at runtime, but for prototyping this method is the simplest to implement.
Seeing as you already have a method of importing models (.obj), the above may not apply. However, the advantage of using a Blender script is that you can modify it to suit your own needs, perhaps also exporting bone information or model keyframes.
Well, first off, I wouldn't recommend .obj for this purpose, since the .obj file format doesn't support animation, only static 3D models. So you'll need to export the animation data as a separate file that you load at the same time as the .obj.
Which file format I would recommend depends on what exactly your animations are. I don't remember off the top of my head which file formats Blender supports, but as I recall it does not export COLLADA files with animation, which would otherwise be the most general recommendation. Other options would be .md2 for character animations, or .3ds for simple "rigid objects moving around" animations. I think Blender's FBX exporter will work, although that file format may be too complicated for your needs.
That said, and assuming you only need simple rigid-object movements, you could use .obj for the 3D model shapes and then write a simple Python script that exports a file from Blender listing the keyframes, with the frame number, position, and rotation of each. Then load that data in your code and play the keyframes back on the 3D model, as in the sketch below.
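A rough sketch of such an export script using Blender's bpy API (the object name and output path are placeholders, and this samples every frame rather than pulling the true keyframes out of the action's F-curves):

import bpy

scene = bpy.context.scene
obj = bpy.data.objects["MyObject"]  # hypothetical object name

with open("/tmp/anim.txt", "w") as out:
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)  # evaluate the animation at this frame
        loc, rot = obj.location, obj.rotation_euler
        out.write("%d %f %f %f %f %f %f\n" %
                  (frame, loc.x, loc.y, loc.z, rot.x, rot.y, rot.z))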
This is an old question, and since then some new iOS frameworks such as GLKit have been released. I recommend relying on them as much as possible, since they take care of many inherent conversions like this, though I haven't researched the specifics. Also, while not on iOS, the new scene graph technology for OS X (which will likely arrive on iOS in the future) takes all this quite a bit further, and a crafty individual could do some conversions with that tool and then take the output to iOS.
Also have a look at SIO2.
I haven't used recent versions of Blender, but my understanding is that it supports exporting mesh animation as a sequence of .obj files. If you can already display a single .obj in your app, then displaying several of them one after another will achieve what you want.
Now, note that this is not the most efficient way to export this type of animation, since each .obj file will duplicate a lot of information. If your mesh stays fixed over time (i.e. only the vertices move, while the polygon structure, UV coords, etc. all stay fixed), then you can import the entire first .obj and, for the rest, read just the vertex array (see the sketch below).
If you wanted to optimize this even more, you could compress the vertex arrays so that you only store the differences from the previous frame of the animation.
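A minimal sketch of that per-frame vertex extraction in Python, assuming a hypothetical frame_0001.obj, frame_0002.obj, ... naming scheme:

def read_vertices(path):
    # Collect only the "v x y z" position lines; faces, normals and
    # UVs are assumed identical to the first frame.
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                _, x, y, z = line.split()[:4]
                verts.append((float(x), float(y), float(z)))
    return verts

frames = [read_vertices("frame_%04d.obj" % i) for i in range(1, 11)]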
Edit: I see that Blender 2.59 has export to COLLADA. According to the Blender manual, you can export object transformations, and you can also export baked animation for rigged objects. The benefit for you in supporting the COLLADA format in your iPhone app is that you are free to switch between animation tools, since most of them export this format.