Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model which I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and the three unit vectors describing the block axes are produced by the program, and I can write them out in whatever way I want.
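To make the data handoff concrete, here is a minimal NumPy sketch (not tied to any particular viewer; the array names and sample values are made up) of how each block's world-space vertices could be rebuilt every frame from its centroid and its three axis unit vectors:

```python
import numpy as np

def place_block(ref_vertices, centroid, ax1, ax2, ax3):
    """Map a block's local (reference) vertices into world space.

    ref_vertices : (24, 3) vertex coordinates relative to the block centroid
    centroid     : (3,) current block location
    ax1, ax2, ax3: (3,) unit vectors of the block's principal axes
    """
    R = np.column_stack([ax1, ax2, ax3])   # local -> world rotation
    return ref_vertices @ R.T + centroid   # rotate, then translate

# One block, one animation frame (values are illustrative only).
ref = np.random.rand(24, 3) - 0.5
world = place_block(ref,
                    np.array([1.0, 2.0, 0.5]),   # centroid
                    np.array([1.0, 0.0, 0.0]),   # axis 1
                    np.array([0.0, 1.0, 0.0]),   # axis 2
                    np.array([0.0, 0.0, 1.0]))   # axis 3
```

At 180 blocks x 24 vertices this is a tiny amount of geometry, so any environment that can redraw a few thousand transformed vertices per frame should stay interactive.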
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing I would like to be able to do is perhaps distribute the result as a 3D PDF file, but I'm not sure whether this kind of animation can be done in a 3D PDF.
Any insight or guidance is greatly appreciated.

Don't let the name fool you: BluffTitler DX9, a commercial product, may be what you're looking for.
Its simple interface provides a fast learning curve, and there are many quick tutorials to either watch or dissect. Depending on how fast your GPU is, the real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)

Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. Not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video and embed it in a PDF, but this is a function of the multimedia tooling in Acrobat Pro, i.e., it is not specific to 3D.

Related

How to manage 2D data in a procedural game world

I'm building a Starflight-inspired 2D space exploration game with a procedural world. The gameplay is divided into different "scenes" (to use Godot terminology) to manage the different "depths" of the game. For example, interstellar flight is a scene where the star systems are simply represented by star sprites. When the player gets in range, the view is moved to the solar system scene, where the player moves his ship inside the actual solar system.
So far so good, I generate the universe (the solar systems) from a hard coded array of coordinates and seeds. Now I also want to make the universe generation procedural, but I’m guessing that loading a whole universe (there is no real limit to the number of solar systems once it becomes procedural) in memory won’t be efficient.
I'm thinking of generating the universe on the first run and saving the data to a file, but I'm wondering how to load the relevant data in an efficient way that would let me load only a certain "radius" of data around the player's ship. I feel like that would be the way to go if I use my generation algorithms that produce "realistic" galaxy shapes, since they involve many steps of data processing (different cluster shapes are generated: arms, blobs, etc., and then stars are spun around the center to simulate the galaxy rotation) that would probably be too slow to calculate in real time.
I'm wondering which approach I should take on this problem. It's not really language or engine dependent, so references to generic articles and algorithms on the subject would suffice.
I also read a bit about QuadTrees and I think I’m getting to something there, but I’m not exactly sure how to use that with a file on disk.
Thanks in advance for your help!
I have some suggestions:
Do not generate the whole universe on the first run; generate only the areas that are somehow visible. Then, instead of loading the whole universe from disk, you just generate an area whenever your spaceship (or whatever) comes within view distance of it. This makes game initialization much faster and allows an (almost) infinite universe.
If you want the universe to be modifiable, store only the 'edits' that a player makes. So if you want to show a part of the universe, generate that part from your seed and then overlay the stored edits. This makes storage much smaller.
For storage on disk, have a look at R-trees, especially the R*-tree and R+-tree; they are designed for storing data in disk pages.
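A rough sketch of the "seed plus stored edits" idea from the second suggestion; the names and data layout here are made up for illustration:

```python
import random

edits = {}  # (cell_x, cell_y) -> list of player modifications

def load_cell(universe_seed, cx, cy):
    """Regenerate one cell deterministically from the seed, then overlay
    any edits the player has made in it."""
    rng = random.Random(f"{universe_seed}:{cx}:{cy}")  # same input -> same cell
    stars = [(cx + rng.random(), cy + rng.random())
             for _ in range(rng.randint(0, 5))]
    return {"stars": stars, "edits": edits.get((cx, cy), [])}

def record_edit(cx, cy, change):
    """Only the player's changes ever need to be saved to disk."""
    edits.setdefault((cx, cy), []).append(change)
```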
As TilmannZ suggested, you should not be generating the whole dataset for the galaxy when you start the game, because there is likely no need (unless the player needs to see/interact with all the data at once, e.g. all stars). If that is the case, for example for a star map, then you may be better off loading all the data once and saving the result in an image file.
Instead, you should only generate the data as needed around the player. The most obvious way to do this would be to construct a grid around the player and keep this grid centered on the player as they move around. As the player moves, you only need to update the conceptual galaxy coordinates of each cell (not the rendered coordinates). For each cell you can then use its coordinates as the input to a value or gradient noise function like Perlin noise to determine what features should spawn at that location.
As for 'shaping' the galaxy or universe, one effective way is to sample the pixel data of a greyscale image of a galaxy which has the shape you want. You could load the image's pixel data at run time and, as you generate the stars, use the coordinates of your grid cells to look up the corresponding pixel value, which you can use as a density factor for the star generation: the whiter the pixel, the higher the star density at that location, and vice versa for black pixels. This method effectively lets you draw the shape of the galaxy in a paint program.
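A small sketch combining both ideas, deterministic per-cell generation plus a greyscale density image read with Pillow; the file name, constants, and hash mix are placeholders:

```python
import random
from PIL import Image  # Pillow

density_map = Image.open("galaxy_shape.png").convert("L")  # greyscale image

def stars_in_cell(cx, cy, seed=42, max_stars=10):
    """Deterministically spawn stars in grid cell (cx, cy), scaled by the
    brightness of the density image at that location."""
    w, h = density_map.size
    brightness = density_map.getpixel((cx % w, cy % h)) / 255.0
    rng = random.Random(seed * 73856093 ^ cx * 19349663 ^ cy * 83492791)
    return [(cx + rng.random(), cy + rng.random())
            for _ in range(int(brightness * max_stars))]
```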
Maybe think about different layers of abstraction. Each layer uses the parent layer, designer input, events & procedural generation algorithms to generate the needed data.
The Universe layer contains user-placed or randomly placed galaxy polygons & types.
The Galaxy layer can add more details (number & density of spiral arms) or a density map.
A cluster of solar systems.
The solar system adds the stars & planets.
And only create the details for the elements that are currently needed.
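One way to wire such layers together is to derive each child's seed from its parent's seed, so any element can be regenerated on demand without storing it; a sketch (the function and names are illustrative only):

```python
import hashlib

def child_seed(parent_seed, *path):
    """Derive a reproducible seed for a child element (galaxy, cluster,
    solar system, ...) from its parent's seed and its index/path."""
    key = f"{parent_seed}/" + "/".join(map(str, path))
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

universe_seed = 2024
galaxy_seed = child_seed(universe_seed, "galaxy", 7)
system_seed = child_seed(galaxy_seed, "system", 42)
# Each layer only expands the children the player actually visits.
```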

Where to find information on 3D algorithms?

I am interested in learning about 3D video game development, but am not sure where to start really.
Instead of just making a game, which could be done with various game-maker tools, I am more interested in how it is done.
Ideally, I would like to know in which formats general 3D models, etc. are stored (coordinate format, etc.), and to find information on how to represent the 3D data on the screen from a certain perspective, as in free-roaming 3D video games like Devil May Cry.
I have seen some links regarding 3D matrices but I really don't understand how they are used. Any help for beginners would be much appreciated.
Thanks
Video game development is a huge field requiring knowledge of game theory, computer science, math, physics and art. Depending on what you want to specialize in, there are different starting points. But as this is a site for programming questions, here are some insights on the programming part of it:
File formats
Assets (models, textures, sounds) are created using dedicated 3rd party tools (think of Gimp, Photoshop, Blender, 3ds Max, etc), which offer a wide range of different export formats. These formats usually have one thing in common: They are optimized for simple communication between applications.
Video games have high performance requirements and assets have to be loaded and unloaded all the time during gameplay. So the content has to be in a format that is compact and loads fast. Often 3rd party formats do not meet the specific requirements you have in your game project. For optimal performance you would want to consider developing your own format.
Examples of assets and common 3rd party formats:
Textures: PNG, JPG, BMP, TGA
3D models: OBJ, 3DS, COLLADA
Sounds: WAV, MP3
Additional examples
Textures in Direct3D
In my game project I use an importer that converts my textures from one of the aforementioned formats to DDS files. This is not a format I developed myself, still it is one of the fastest available for loading with Direct3D (Graphics API).
Static 3D models
The Wavefront OBJ file format is a very simple to understand, text-based format, and most 3D modelling applications support it. But since it is text based, the files are much larger than equivalent binary files, and they require lots of expensive parsing and processing. So I developed an importer that converts OBJ models to my custom high-performance binary format.
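As an illustration of that kind of importer, here is a heavily simplified sketch: it only handles vertex positions and already-triangulated faces, and the binary layout is just an example, not the format the answer refers to.

```python
import struct

def obj_to_binary(obj_path, bin_path):
    """Convert a very simple Wavefront OBJ file (vertex positions and
    triangular faces only) into a compact little-endian binary file."""
    positions, indices = [], []
    with open(obj_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":          # vertex position: v x y z
                positions.append(tuple(map(float, parts[1:4])))
            elif parts[0] == "f":        # face: indices are 1-based, may be "i/j/k"
                indices.extend(int(p.split("/")[0]) - 1 for p in parts[1:4])
    with open(bin_path, "wb") as out:
        out.write(struct.pack("<II", len(positions), len(indices) // 3))
        for x, y, z in positions:
            out.write(struct.pack("<3f", x, y, z))
        out.write(struct.pack(f"<{len(indices)}I", *indices))
```

At load time the game only has to read the two counts and then copy the vertex and index blocks straight into GPU buffers, with no text parsing.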
Wave sound files
WAV is a very common sound file format. Additionally it is quite ideal for using it in a game. So no custom format is necessary in this case.
3D graphics
Rendering a 3D scene at least 30 times per second at an average screen resolution requires quite a lot of calculations. For this purpose GPUs were built. While it is possible to write any kind of program for the GPU using very low-level languages, most developers use an abstraction like Direct3D or OpenGL. These APIs, while restricting the way of communicating with the GPU, greatly simplify graphics-related tasks.
Rendering using an API
I have only worked with Direct3D so far, but some of this should apply to OpenGL as well.
As I said, the GPU can be programmed. Direct3D and OpenGL both come with their own GPU programming language, a.k.a. shading language: HLSL (Direct3D) and GLSL (OpenGL). A program written in one of those languages is called a shader.
Before rendering a 3D model the graphics device has to be prepared for rendering. This is done by binding the shaders and other effect states to the device. (All of this is done using the API.)
A 3D model is usually represented as a set of vertices. For example, 4 vertices for a rectangle, 8 for a cube, etc. These vertices consist of multiple components. The absolute minimum in this case would be a position component (3 floating point numbers representing the X, Y and Z offsets in 3D space). Also, a position is just an infinitely small point, so additionally we need to define how the points are connected to form a surface.
When the vertices and triangles are defined, they can be written to the memory of the GPU. If everything is set up correctly, we can issue a draw call through the API. The GPU then executes your shaders and processes all the input data. In the last step the rendered triangles are written to the defined output (the screen, for example).
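To make the "vertices plus connectivity" idea concrete, here is the raw data for a single quad as plain Python lists; these two arrays are the kind of thing you would upload to a vertex buffer and an index buffer through Direct3D or OpenGL (no API calls are shown here):

```python
# Four vertices (x, y, z) describing a unit quad in the XY plane ...
vertices = [
    (-0.5, -0.5, 0.0),   # 0: bottom-left
    ( 0.5, -0.5, 0.0),   # 1: bottom-right
    ( 0.5,  0.5, 0.0),   # 2: top-right
    (-0.5,  0.5, 0.0),   # 3: top-left
]
# ... and two triangles that connect them into a surface.
indices = [
    0, 1, 2,   # first triangle
    0, 2, 3,   # second triangle
]
# A graphics API would copy these into GPU buffers and render them
# with a single draw call.
```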
Matrices in 3D graphics
As I said before, a 3D mesh consists of vertices with a position in 3D space. These positions are all embedded in a coordinate system called object space.
In order to place the object in the world, move, rotate or scale it, these positions have to be transformed. In other words, they have to be embedded in another coordinate system, which in this case would be called world space.
The simplest and most efficient way to do this transformation is matrix multiplication: From the translation, rotation and scaling amounts a 4x4 matrix is constructed. This matrix is then multiplied with each and every vertex. (The math behind it is quite interesting, but not in the scope of this question.)
Besides object and world space, there are also view space (the coordinate system of the 'camera'), clip space, screen space and tangent space (on the surface of an object). Vectors have to be transformed between those coordinate systems quite a lot, so you can see why matrices are so important in 3D graphics.
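A small NumPy sketch of the object-space to world-space step: build a 4x4 matrix from a scale, a rotation about Z, and a translation, then multiply it with a vertex in homogeneous coordinates (the values are arbitrary):

```python
import numpy as np

def world_matrix(tx, ty, tz, angle_z, scale):
    """Build a 4x4 world matrix: scale, then rotate about Z, then translate."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    S = np.diag([scale, scale, scale, 1.0])
    R = np.array([[c, -s, 0, 0],
                  [s,  c, 0, 0],
                  [0,  0, 1, 0],
                  [0,  0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T @ R @ S            # applied right-to-left to column vectors

vertex_object_space = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates
M = world_matrix(5.0, 0.0, 0.0, np.pi / 2, 2.0)
vertex_world_space = M @ vertex_object_space            # ~ (5, 2, 0, 1)
```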
How to continue from here
Find a topic that you think is interesting and start googling. I think I gave you quite a few keywords and I hope I gave you some idea of the topics you mentioned specifically.
There is also a Game Development site in the Stack Exchange network which might be better suited for this kind of question. The top-voted questions are always a good read on any SE site.
Basically, the first decision is whether to use OpenGL or DirectX.
I suggest you use OpenGL, because it's platform independent and can also be used on mobile devices.
For OpenGL here are some good tutorials to get you started:
http://www.opengl-tutorial.org/

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension.)
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2d images onto something. However, you still will need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images: being able to say that this spot in image A is this spot in image B, and that they both match this spot in image C, and so on.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00 US, supports manual matching and coded targets. You print out a bunch of coded targets, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00 US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera: you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc. PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.

What are the standard techniques for removing a segmentation (such as a human or bird) from a video?

Let's say you are taking a video (with the camera in a steady position) and a bird flies through the view of the camera. It should be possible to do image segmentation and automatically remove this bird from the video.
What are these styles of algorithms called and how are they normally accomplished?
There's a technique called Simple Interactive Object Extraction (SIOX) that identifies foreground vs. background objects in still and video images. The open source GIMP editor has an implementation of it, and there's more information about it here.
From the overview:
SIOX stands for Simple Interactive Object Extraction and is a solution for extracting foreground from still images with very little user interaction. SIOX is fast, noise robust, and can therefore also be used for the segmentation of videos. It avoids many of the drawbacks of graph-based segmentation methods but performs about equally well on different benchmarks. SIOX is open and free (Apache License) and the authors have intentionally not patented any part of the technology. As a result, it has been integrated into several open-source image manipulation programs over the past years. SIOX is the underlying algorithm of the foreground extraction tool in the GNU Image Manipulation Program (GIMP) and is part of the tracer tool in Inkscape. SIOX originates from E-Chalk where an instructor standing in front of an electronic chalkboard is segmented. Variants of SIOX are being used for robotic vision and for improving 3D time-of-flight camera segmentation.
Here's a link to the Java Reference Implementation of SIOX.
Here's a link to the PDF with details about how a variation of the algorithm works.
You should be able to adapt it to use inter-frame interpolation to remove a specific foreground object from each frame of a video by using temporal data from surrounding frames.
If the camera is fixed and there isn't too much motion in the scene, then I would suggest a method based on background subtraction.
Step 1: Compute background for each frame of the video. There are complicated algorithms for doing this, but a very simple and effective one would be to compute the median value of every pixel in the image across a 3 second time window. Longer if the object in question is moving slowly. Incidentally, if you just perform this kind of filtering it will remove most moving objects from the video if the camera is fixed, hence my earlier question about all objects vs. one object.
Step 2: Mark the regions you want to remove in each frame with a brush tool, and replace them with the background pixels. Don't bother with a fine brush or lasso tool as any non-object pixels you mark will just be replaced with their filtered version. You could probably use the same brush marks for several frames since the boundary is not so important. If the object is the only thing moving in the scene, you could just mark the entire frame and have it replaced with the background.
Anyway, to answer your more general question, the topic you want to research is called inpainting, for both images and video. There is quite a bit of literature out there on the subject; what I described above is just a super simple method that you could implement in an hour or so with OpenCV, roughly as sketched below.
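A rough sketch of that method with OpenCV and NumPy; the file name, the 3-second window, and the hard-coded rectangle standing in for brush strokes are all placeholders, and a real tool would stream the video rather than load it whole:

```python
import cv2
import numpy as np

# Load the clip into memory (fine for short clips; stream long ones instead).
cap = cv2.VideoCapture("input.mp4")            # placeholder file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
frames = np.stack(frames)                       # shape (n, height, width, 3)

# Step 1: the per-pixel median over a ~3 second window (here the first
# 90 frames at 30 fps) approximates the static background.
background = np.median(frames[:90], axis=0).astype(np.uint8)

# Step 2: wherever a roughly painted mask marks the object, copy the
# background pixels back in. The rectangle below stands in for brush strokes.
mask = np.zeros(frames.shape[1:3], dtype=bool)
mask[100:200, 150:300] = True
cleaned = frames.copy()
cleaned[:, mask] = background[mask]
```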

Image recognition and 3d rendering

How hard would it be to take an image of an object (in this case a predefined object) and develop an algorithm to cut just that object out of a photo with a background of varying complexity?
Further to this, a photo's object (say a house, car, dog - but always of one type) would need to be transformed into a 3d render. I know there are 3d rendering engines available (at a cost, free, or with some clause), but for this to work the object (subject) would need to be measured in all sorts of ways - e.g. if this is a person, we need to measure height, the curvature of the shoulder, radius of the face, length of each finger, etc.
What would the feasibility of solving this problem be? Does anyone know any good links specializing in this research area? I've seen open-source solutions to this problem, which leaves me with the question of how easy it is to measure the object while tracing around it to crop it out.
Thanks
Essentially I want to take a 2D image (a typical image, which is easier than a complex photo containing multiple objects, etc.), but effectively I want to turn that into a 3D image. So wouldn't what I want to do involve building a 3D rendering/modelling engine?
Furthermore, that link I have provided goes into 3ds max, with a few properties set, and a render is made.
It sounds like you want to do several things, all in the domain of computer vision.
Object Recognition (i.e. find the predefined object)
3D Reconstruction (make the 3d model from the image)
Image Segmentation (cut out just the object you are worried about from the background)
I've ranked them in order of easiest to hardest (according to my limited understanding). Altogether, I would say it is a very complicated problem. I would look at the following Wikipedia links for more information:
Computer Vision Overview (Wikipedia)
The Eight Point Algorithm (for 3d reconstruction)
Image Segmentation
You're right, this is an extremely hard set of problems, particularly that of inferring 3D information from a 2D image. Only a very limited understanding exists of how our visual system extrapolates 3D information from 2D images; one such approach is known as "Shape from Shading", and the linked Google search shows how much (and consequently how little) we know.
Rob
This is a very difficult task. The hardest part is not recognising or segmenting the object from the image, but rather inferring the 3-D geometry of the object from the 2-D image. You will have more success if you can use a stereoscopic camera (or a laser scanner, if you have access to one ;).
For the case of 2-D images, try googling for "shape-from-shading". This is a method for inferring 3-D shape from a 2-D image. It does make assumptions about illumination conditions and surface properties (BRDF and geometry) that may fail in many cases, but if you are using it for only a predefined class of objects (e.g. human faces) it can work reasonably well.
Assuming it's possible, that would be extremely difficult, especially with only one image of the object. The rasterizer has to guess at the depth and distances of objects.
What you describe sounds very similar to Microsoft PhotoSynth.
