3D Models in SceneKit - Xcode

I purchased 3D models to use in SceneKit, but I am having trouble making them look like the final product shown on the sites I bought them from. I have been purchasing .obj files and converting them in Xcode. I was able to successfully complete one model, but I have 5 others all running into the same problem.
As you can see, I would like it to look like this (picture from the site I purchased it from): [Image 1]
But when I import the .obj file (which came with many others as well), this is where it gets confusing. The model has a lot of materials (which I also don't understand), and when I try to add one of the textures through "Diffuse", it doesn't work at all. This is the best I got: [Image 2]
The textures also don't seem right. These are all of them, but even if they were linked up correctly, I don't understand how they would achieve the shiny metal look. Thanks.
[Image 3]
The materials look like this, and there are tons that are repetitive (over 100): [Image 4]
Any guidance will be appreciated. Thank you!

You will need to understand how a material is applied to a 3D object. A .obj file will not have a material applied to it, but it will come with image files, which are then UV-mapped around the 3D object. The diffuse image that you just added to the object is, in simple terms, the colour of the surface of the material. There are different components that can be applied to the 3D object, like specular, normal, occlusion, etc. Of course, just applying the diffuse component alone was not going to give you a good enough result.
This Unity doc is what made me understand what each of these components is and what it does when applied to an object.
https://docs.unity3d.com/Manual/StandardShaderMaterialParameters.html
This is pretty much similar to what we use in SceneKit, and you should be able to pick up how each map is to be applied to your 3D model.
Basically, correctly applying all of these maps together is what produces the final look you saw on the store page.
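Here is a minimal sketch of the idea, written with Three.js's PBR material since the concept is engine-agnostic (SceneKit's SCNMaterial exposes equivalent slots such as diffuse, normal, metalness and roughness; the texture file names below are placeholders for whatever shipped with your model):

```
import * as THREE from 'three';

// Placeholder file names; match your purchased textures by suffix
// (diffuse/albedo, normal, metalness, roughness, etc.).
const loader = new THREE.TextureLoader();
const material = new THREE.MeshStandardMaterial({
  map: loader.load('metal_diffuse.png'),            // colour of the surface
  normalMap: loader.load('metal_normal.png'),       // fakes fine surface detail
  metalnessMap: loader.load('metal_metalness.png'), // which areas read as metal
  roughnessMap: loader.load('metal_roughness.png'), // shiny vs. matte areas
});
// The "shiny metal look" comes from the metalness/roughness (or specular)
// components plus an environment to reflect, not from the diffuse map alone.
```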
Another thing that you might want to look into is PBR (Physically Based Rendering). Depending on the 3D model you purchased, you may find this helpful.
https://developer.apple.com/videos/play/wwdc2016/609/
This WWDC video should give you an understanding of how PBR works.
Also, this Medium article: https://medium.com/@avihay/amazing-physically-based-rendering-using-the-new-ios-10-scenekit-2489e43f7021

Related

THREE.js Imported Model Size and Performance

I was hoping to display as many as 3,000 instances of the same model at the same time in my app, but it's really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 Vertices - which by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being that I’m new to THREE.js - and 3D modeling in general, I just wanted to first ask the community if there are any suggestions or tricks to try out first before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials; I might say it's the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size is a blocker, but only for the initial load, if we employ instancing on its geometry and material. That way we simply re-use the already generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
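In practice, instancing one glTF mesh looks roughly like this (a minimal sketch; 'model.gltf', the random placements, and the bare scene are placeholder assumptions):

```
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// One geometry + one material, drawn 3,000 times in a single draw call.
new GLTFLoader().load('model.gltf', (gltf) => {
  const source = gltf.scene.getObjectByProperty('type', 'Mesh') as THREE.Mesh;
  const count = 3000;
  const instanced = new THREE.InstancedMesh(source.geometry, source.material, count);

  const dummy = new THREE.Object3D();
  for (let i = 0; i < count; i++) {
    dummy.position.set(Math.random() * 100, 0, Math.random() * 100);
    dummy.updateMatrix();
    instanced.setMatrixAt(i, dummy.matrix); // per-instance transform
  }
  instanced.instanceMatrix.needsUpdate = true;
  scene.add(instanced);
});
```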
Our biggest worry here would be the number of triangles or faces rendered: lower counts are more performant, as is rendering fewer models at a time. To manage this, you can use some degree of LOD to progressively decrease your models' detail with distance, until you stop rendering them entirely (a minimal sketch follows the resource list below).
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
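And the LOD sketch referenced above (the icosahedron geometries are placeholders; in practice each level would be a decimated export of your 4,832-vertex model):

```
import * as THREE from 'three';

const scene = new THREE.Scene();
const material = new THREE.MeshStandardMaterial();

// The renderer picks a level each frame based on camera distance.
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(1, 4), material), 0);   // near: full detail
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(1, 2), material), 50);  // mid distance
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(1, 0), material), 150); // far: very coarse
scene.add(lod);
```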

Three.js and 3D Tiles (from Cesium)

I am currently in charge of exploring options to display large 3D geological models on a web page. They are built by geologists with GeoModeller and exported via Cinema 4D to .DAE or .OBJ. Once displayed, the model should be interactive and link to a database (this part is manageable from my side).
The issue: the models can be really big and I'm concerned that they could cause crashes and render slowly.
Solution considered so far: threejs + 3D Tiles (from cesium).
Questions: Is combining Three.js and 3D Tiles actually doable? It is, according to the 3D Tiles presentation page, but I am not a programmer and I have no idea how to implement it.
Is there another obvious solution to my problem?
Resources: What these 3D models look like: http://advancedgwt.com/wp-content/uploads/blog/63.jpg
What 3D Tiles does when combined with Cesium (but we don't want a globe here!): http://cesiumjs.org/NewYork
Three.js has everything needed to implement a 3D Tiles viewer.
Here's an implementation (by me): https://github.com/ebeaufay/3DTilesViewer
Here's another one by NASA: https://github.com/NASA-AMMOS/3DTilesRendererJS
The viewer is not too difficult to implement, but tiling and multi-leveling gigabytes of mesh data is a real challenge. Luckily, I have code to do just that, so hit me up if you're interested.
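For reference, basic usage of the NASA-AMMOS renderer looks roughly like this (a sketch based on that repository's README; the tileset URL is a placeholder for your own tiled model):

```
import * as THREE from 'three';
import { TilesRenderer } from '3d-tiles-renderer';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 10000);
const renderer = new THREE.WebGLRenderer();
document.body.appendChild(renderer.domElement);

// Placeholder URL: point this at the root tileset of your geological model.
const tiles = new TilesRenderer('http://example.com/tileset.json');
tiles.setCamera(camera);
tiles.setResolutionFromRenderer(camera, renderer);
scene.add(tiles.group);

function renderLoop(): void {
  requestAnimationFrame(renderLoop);
  camera.updateMatrixWorld();
  tiles.update(); // streams tiles in and out to match the current view
  renderer.render(scene, camera);
}
renderLoop();
```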

Make a mesh unprintable, but still viewable with three.js

Is there a way to make a mesh unprintable with a 3D printer, but still viewable with three.js?
Motivation is that I want to show users a preview of a mesh before they buy it. But as the JS code is viewable, they could download the mesh without paying for it. Degrading the quality of the preview mesh would be one way, but as the quality of the mesh is a selling point, I would like to avoid that.
My idea was to add some kind of triangulation defects which would prevent the printing of the mesh, but which would not prevent threejs from showing the mesh.
Tools like Netfabb or Meshlab should also not be able to automatically repair the mesh.
Is there something like a bad sector copy protection equivalent for 3d models?
Just a few ideas.
1) Augment your shaders to ignore some interval of vertices from the buffer (like every 3rd or so). This way you can add "garbage" to the model file so it cannot be lifted easily from the network.
2) Once in the buffer, it can still be pulled out by a savvy user, unless you split the model up into many chunks and render them out of order, or only render the front half of the model, making it less useful for 3D printing. One could also render in split views, or use stereoscopic interlacing with a separation of zero.
3) Only render a non-symmetrical half of your model, with the camera controls locked to that half :P
Kinda wonky, a ton of work to implement, and still someone will find a way I'm sure. But that's my two cents worth anyway, hope it helps.
I've seen some online shops do previews with renders taken every 10-30 degrees around the model. That way you only send the resulting images, not the model.
Why not show a detailed HD video of your model?
If the mesh is non-manifold it will not print.
a) Render server-side and stream the results as interactive video.
b) Destroy the mesh while keeping the normals intact for shading. You can randomly flip faces and render double-sided. You can "extrude" edges to mess up the topology. As long as you map the normals correctly, it will shade without any of these defects showing.
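A sketch of idea (b) in Three.js terms: the helper below randomly reverses triangle winding (which slicers and repair tools rely on) while preserving the pre-flip vertex normals, so shading is unchanged. The function name and the 50% flip rate are my own placeholders.

```
import * as THREE from 'three';

// Randomly reverse triangle winding so the surface orientation is
// inconsistent for printing, while keeping normals captured BEFORE the
// flips so the viewer still shades the mesh cleanly.
function scrambleWinding(source: THREE.BufferGeometry): THREE.BufferGeometry {
  const geo = source.index ? source.toNonIndexed() : source.clone();
  geo.computeVertexNormals(); // capture correct normals before flipping

  // Swap all attributes (position, normal, uv) of two vertices, so each
  // vertex keeps its own normal while the triangle's winding reverses.
  const swap = (i: number, j: number) => {
    for (const name of ['position', 'normal', 'uv']) {
      const attr = geo.getAttribute(name) as THREE.BufferAttribute;
      if (!attr) continue;
      for (let c = 0; c < attr.itemSize; c++) {
        const tmp = attr.getComponent(i, c);
        attr.setComponent(i, c, attr.getComponent(j, c));
        attr.setComponent(j, c, tmp);
      }
      attr.needsUpdate = true;
    }
  };

  const triCount = geo.getAttribute('position').count / 3;
  for (let t = 0; t < triCount; t++) {
    // Arbitrary 50% flip rate; swapping vertices 2 and 3 reverses winding.
    if (Math.random() < 0.5) swap(t * 3 + 1, t * 3 + 2);
  }
  return geo;
}

// Render double-sided so the flipped faces are invisible to the viewer.
const material = new THREE.MeshStandardMaterial({ side: THREE.DoubleSide });
```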

Which concepts should be used to render road lighting from a vehicle point of view using Blender?

I have never used Blender except for quick trials when I installed it on Linux, but I wonder if it can be used to solve a very specific problem.
I want to render some images showing a vehicle projecting light onto a road with some objects (people, posts, signs). I need a bird's-eye view (top-down, orthographic) and the view from inside the vehicle (first-person, perspective), i.e. the image that would be seen by the driver or rider.
My question is: which CONCEPTS should I look for when searching Blender tutorials, in order to:
Select and use the proper rendering algorithm;
Model a scene with surfaces, materials, light sources and cameras;
Add photorealistic behavior regarding light diffusion, reflection, etc.?
Sorry if that is too obvious or too basic, but I am not even sure whether Blender is able to model such a thing with an acceptable degree of photorealism (not super-realistic; that is not my intention).
Also, if there is another more appropriate Stack Exchange site to post this question, please let me know.
A nice First-Person viewport would be similar to this (without contour lines):
And a nice bird's eye viewport (without color-mapping) would be this:
Cycles is Blender's newer render engine; it is fully raytraced and can easily create realistic results. On the other hand, the older Blender Internal renderer can give you more control over lights, like length and angle from the source, but also the ability to subtract light from areas. It also supports volumetric rendering (if you want a foggy lit area), which is still being worked on for Cycles. This may be key to the results you want. As you want control over the area that is lit, I would run a couple of tests with lights over a plane to see whether Cycles or Blender Internal can more easily give the results you're after.
As for the final render, you can set the camera to perspective (with control over focal length) or orthographic (adjusting its scale), and there is also the option of a panoramic camera, to get the final image you want.
Blender includes a ruler and protractor feature, and there are also a couple of add-ons that may help. The scene settings offer metric or imperial display of measurements within Blender.
For concepts, it sounds like your final scene would be fairly simple, so any basic modelling and texturing tutorials would help. Blendswap could be a good resource for free models to help get you started.
For tutorials, Blender Cookie is a great site for tutorials on specific tasks and has a good introduction-to-Blender tutorial, while Blenderguru tutorials focus more on the final image.
Blender has also had its own Stack Exchange site, blender.stackexchange.com, for a few months now.

Transform a set of 2D images representing all dimensions of an object into a 3D model

Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle-mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary: no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching: tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings (profile, plan view, etc.), PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.
