I'm writing an OpenGL ES 1.0 Android 2D game. In it I want to create a rolling ball, but I have no idea how to do this. Of course I can load a ball texture and move it along the x or y axis, but the rolling effect is not there. How can I create such an effect? The game is in a bird's-eye perspective, so you are looking down at the field from above.
The proper way to do this sort of animation with texture coordinates mapped to geometry is to use an authoring tool, such as Maya, 3DSMax or Blender. Blender is free and fantastic. I highly recommend it. You will also need some middleware to import the geometry, textures and animations created in Blender. For this, look into the PowerVR SDK or Ogre3D or Valve tools.
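One quicker alternative, offered here as a hedged sketch rather than a tested recipe: viewed from directly above, rolling mostly reads as the ball texture scrolling in the direction of motion, so you can offset the texture coordinates in proportion to the distance travelled. A minimal sketch in JavaScript (all names are illustrative; in OpenGL ES 1.0 the offset could be applied via the texture matrix):

```js
// Hedged sketch: fake a top-down "rolling" look by scrolling the ball
// texture in the direction of motion. `ballRadius` and the update hook
// are assumptions, not part of any real API.
const ballRadius = 16;            // world units
let u = 0, v = 0;                 // current texture-coordinate offset

function updateRoll(dx, dy) {     // dx, dy = movement this frame
  // Rolling a distance d turns the ball through d / (2*pi*r) revolutions,
  // so scroll the texture by that fraction of its width/height.
  const circumference = 2 * Math.PI * ballRadius;
  u = (u + dx / circumference) % 1;
  v = (v + dy / circumference) % 1;
  // In OpenGL ES 1.0 you would then apply (u, v) with the texture matrix:
  //   glMatrixMode(GL_TEXTURE); glLoadIdentity(); glTranslatef(u, v, 0);
}
```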
I have a basic THREE.js scene, created in Blender, including cubes and rotated planes. Is there any way that I can automatically convert this THREE.js scene into a CANNON.js world?
Thanks
Looking at the Three.js Blender exporter, it looks like it only exports mesh data, with no information about the mathematical shapes (boxes, planes, spheres, etc.) that Cannon.js needs to work. You could try to import your meshes directly into Cannon.js using its Trimesh class, but this would sadly only work for collisions against spheres and planes.
What you need to feed Cannon.js is mathematical geometry data, telling it which of the triangles in your mesh represent a box (or plane) and where its center of mass is.
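For a box-shaped mesh, that mapping can be done by hand. A hedged sketch (the function name and mass value are illustrative; note THREE.Box3 is world-axis-aligned, so pre-rotated meshes need extra care):

```js
// Derive a CANNON.Box body from a mesh's bounding box.
function meshToBoxBody(mesh) {
  const bbox = new THREE.Box3().setFromObject(mesh);
  const size = bbox.getSize(new THREE.Vector3());

  // CANNON.Box takes half extents.
  const halfExtents = new CANNON.Vec3(size.x / 2, size.y / 2, size.z / 2);
  const body = new CANNON.Body({ mass: 1, shape: new CANNON.Box(halfExtents) });

  // Use the bounding-box center as the body's position (center of mass).
  const center = bbox.getCenter(new THREE.Vector3());
  body.position.set(center.x, center.y, center.z);
  return body;
}
```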
A common (manual) workflow for creating 3D WebGL physics, is importing the 3D models into a WebGL-enabled game engine (like Unity, Goo Create or PlayCanvas). In the game engine you can add collider shapes to your models (boxes, planes, spheres, etc), so the physics engine can work efficiently. You can from there preview your physics simulation and export a complete WebGL experience.
Going to post another answer since there are a few new options to consider here...
I wrote a simple mesh2shape(...) helper that can convert (one object at a time) from THREE.Mesh to CANNON.Shape objects. It doesn't support certain features, such as heightmaps/terrain.
Example:
var shape = mesh2shape(object3D, {type: mesh2shape.Type.BOX})
There is an (experimental!) BLENDER_physics extension for the glTF format to include physics data with a model. You could add physics data in Blender, export to glTF, and then modify THREE.GLTFLoader to pass along the physics data to your application, helping you construct CANNON.js objects.
Does Three.js have any AI (artificial intelligence) functionality or capability? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is this possible in three.js? Does it have such a system or functionality?
WebGL (a minimal code sketch of these steps follows the lists below):
- create buffer
- bind buffer
- allocate data
- set up state
- issue draw call
- run GLSL shaders

three.js:
- create a 3D context using WebGL
- create 3-dimensional objects
- create a scene graph
- create primitives like spheres, cubes, toruses
- move objects around, rotate them, scale them
- test for intersections between rays, triangles, planes, spheres, etc.
- create 'materials' (rather than shaders)

JavaScript:
- write algorithms
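For contrast, here is a minimal hedged sketch of those raw WebGL steps, with no three.js involved (the triangle data and shaders are placeholders):

```js
const gl = document.createElement('canvas').getContext('webgl');
document.body.appendChild(gl.canvas);

// compile the GLSL shaders that the draw call will run
function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER,
  'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }'));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
  'precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }'));
gl.linkProgram(program);
gl.useProgram(program);

// create buffer, bind buffer, allocate data
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 1, 0, 0, 1]), gl.STATIC_DRAW);

// set up state
const loc = gl.getAttribLocation(program, 'pos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

// issue draw call
gl.drawArrays(gl.TRIANGLES, 0, 3);
```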
"I want enemies to look for me and try to kill me"
Yes, three.js is capable of doing this; you just have to write an algorithm using three's classes. Your enemies would be 3D objects casting rays, intersecting with other objects, and so on.
You would be building a game engine, and you could use three.js as your rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
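To make that concrete, here is a minimal hedged sketch of one such algorithm (every name is illustrative): an enemy that raycasts toward the player and only chases when the line of sight is clear.

```js
const raycaster = new THREE.Raycaster();

// Called every frame; `enemy` and `player` are THREE.Object3D instances,
// `obstacles` is an array of meshes, `delta` is seconds since last frame.
function updateEnemy(enemy, player, obstacles, delta) {
  const toPlayer = new THREE.Vector3().subVectors(player.position, enemy.position);
  const distance = toPlayer.length();
  toPlayer.normalize();

  // Line of sight: does any obstacle block the ray before it reaches the player?
  raycaster.set(enemy.position, toPlayer);
  const hits = raycaster.intersectObjects(obstacles, true);
  const canSeePlayer = hits.length === 0 || hits[0].distance > distance;

  if (canSeePlayer) {
    enemy.lookAt(player.position);                        // face the player
    enemy.position.addScaledVector(toPlayer, 2 * delta);  // chase at 2 units/s
  }
}
```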
Any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'WebGL engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, without equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3d engine. Wikipedia says:
"Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser. Three.js uses WebGL."
So if I just need to draw a car, or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this: with a few draw instructions, you could render a full-screen quad with a very elaborate pixel shader. Three gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3D' graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and manually construct a plane. No transformations need to happen, so there's no need for matrix classes. You'd use shaders and textures, and perhaps something like the EffectComposer.
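A hedged sketch of that stripped-down setup (the oversized single triangle and the hard-coded resolution are illustrative choices, not requirements):

```js
// A hand-built full-screen 'quad': one oversized triangle that covers
// the whole viewport, drawn with a raw shader and no transformations.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(
  new Float32Array([-1, -1, 0,  3, -1, 0,  -1, 3, 0]), 3));

const material = new THREE.RawShaderMaterial({
  vertexShader: `
    attribute vec3 position;
    void main() { gl_Position = vec4(position, 1.0); }`,
  fragmentShader: `
    precision mediump float;
    void main() {
      // placeholder effect: a gradient based on pixel position
      gl_FragColor = vec4(gl_FragCoord.xy / vec2(800.0, 600.0), 0.0, 1.0);
    }`,
});

const mesh = new THREE.Mesh(geometry, material);
mesh.frustumCulled = false; // no camera transform, so skip culling

const scene = new THREE.Scene();
scene.add(mesh);
const camera = new THREE.Camera(); // identity; the shader ignores it

const renderer = new THREE.WebGLRenderer();
renderer.setSize(800, 600);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```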
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few frameworks do ship with pre-coded features like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't, but https://mugen87.github.io/yuka/ is a great AI engine that can work in collaboration with three.js to create AI.
It has line-of-sight and shooting game logic, as well as vehicle logic, which I've been playing around with recently. There is a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o
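For a flavor of how the two fit together, here is a sketch based on my reading of Yuka's documented pattern (treat the details as assumptions and verify against the docs): Yuka steers an entity, and a sync callback copies its transform onto a three.js mesh.

```js
import * as THREE from 'three';
import * as YUKA from 'yuka';

const mesh = new THREE.Mesh(new THREE.ConeGeometry(0.2, 1),
                            new THREE.MeshNormalMaterial());
mesh.matrixAutoUpdate = false; // Yuka owns this transform

const enemy = new YUKA.Vehicle();
enemy.setRenderComponent(mesh, (entity, renderComponent) => {
  renderComponent.matrix.copy(entity.worldMatrix); // sync Yuka -> three.js
});

// Steer toward a target point (e.g. the player's position).
enemy.steering.add(new YUKA.SeekBehavior(new YUKA.Vector3(5, 0, 5)));

const entityManager = new YUKA.EntityManager();
entityManager.add(enemy);

const time = new YUKA.Time();
function animate() {
  entityManager.update(time.update().getDelta()); // advance the AI
  requestAnimationFrame(animate);
}
animate();
```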
I'm new to OpenGL ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict, so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the 'pole', or central vertex, is removed, along with a couple of the surrounding edge loops).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0 (which is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any additional steepness in the learning curve).
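That said, for something this small you may not need skinning at all: the repositioning function the question describes can operate directly on the vertex data. A hedged sketch in JavaScript for readability (the original context is Objective-C, and every name here is an assumption): scale the ring of vertices bordering the hole radially about the pupil center.

```js
// positions: flat [x, y, z, ...] vertex array of the eye mesh
// ringIndices: indices of the vertices that border the pupil hole
// center: [x, y, z] of the pupil center; scale > 1 dilates, < 1 constricts
function setPupilSize(positions, ringIndices, center, scale) {
  for (const i of ringIndices) {
    positions[3 * i]     = center[0] + (positions[3 * i]     - center[0]) * scale;
    positions[3 * i + 1] = center[1] + (positions[3 * i + 1] - center[1]) * scale;
    positions[3 * i + 2] = center[2] + (positions[3 * i + 2] - center[2]) * scale;
  }
  // Re-upload `positions` to your vertex buffer after this. Note large
  // scales pull vertices off the sphere surface; fine for small changes.
}
```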
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL. But what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it. The texture coordinate is an (X, Y) pair which maps to a location on the texture image. The coordinates range from 0.0 to 1.0, so, since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom-left corner, {1, 1} maps to the top-right corner, and {0.5, 0.5} maps to the exact center of the image.
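For illustration, here is that mapping spelled out for a simple quad (JavaScript-style arrays; the layout is the same idea in any language):

```js
// Four vertices, each pairing a 3D position with a 2D texture coordinate.
const positions = [
  -1, -1, 0,   // bottom-left corner of the quad
   1, -1, 0,   // bottom-right
   1,  1, 0,   // top-right
  -1,  1, 0,   // top-left
];
const texCoords = [
  0, 0,        // samples the texture's bottom-left corner
  1, 0,        // bottom-right
  1, 1,        // top-right
  0, 1,        // top-left
];
```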
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although your exported mesh won't contain any of the image content, it will retain the texture coordinates which map to your image content. This will allow you to apply the texture in OpenGL so that it is applied the same way it appeared in Blender.
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as Blender's API seems to change significantly with each release. I've had more success with an .obj converter: I export from Blender to .obj, then use a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/
I'm searching for a 3D physics / transforms animation editor. It should be able to import 3D meshes from OBJ or FBX, and then it should be able to animate transforms. I need such a tool for my 3D games, in which many dynamic, inorganic elements appear: doors, traps, robots, lifts, and so on. Where can I find such a tool? Thanks in advance for the reply.
Try Blender. It's open-source software for 3D modelling, texturing, animation, etc.
Blender supports physical simulations using well-known Bullet physics engine:
http://www.blender.org/development/release-logs/blender-240/bullet-physics/
I am trying to make a 3D car race for iPhone using OpenGL ES 1.x.
I do not know how to draw the background sky in my scene. I tried using only planes for the background, but where should I place that plane? I mean, if I place the plane outside the whole track, the frustum is not big enough to show the plane in the scene.
Any suggestions will be of great help.
You can make a small skysphere or box, as suggested by Davido and turbovonce's link, which is centered around the viewer and fits into the frustum. You draw this first, without writing into the depth buffer. Then you draw the other stuff, and as the skybox has not written to the depth buffer it is simply overwritten, except in the parts where no scene objects are rendered, which are exactly the parts of the image where the sky should be visible.
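The draw order looks roughly like this (WebGL-style JavaScript for illustration; OpenGL ES 1.x has the same call as glDepthMask, and drawSky/drawScene are hypothetical helpers):

```js
function renderFrame(gl) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  gl.depthMask(false); // sky writes no depth...
  drawSky(gl);         // ...so scene geometry simply paints over it
  gl.depthMask(true);

  drawScene(gl);       // normal depth-tested rendering
}
```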
You want a sky dome. Take a look at this website, it contains tons of references that should help you.
http://www.vterrain.org/Atmosphere/
Create a sphere in a 3D modeling app such as Maya or Blender and map a sky texture to the sphere. Export the model, then load it and its texture into the app and place it in the scene. You should now have a background sky rendering in your game.