Tools for 3D rendering - Hadoop

It's well known that 3D rendering is computationally expensive.
I want to use Apache Hadoop for distributed 3D rendering (of images or videos) to reduce rendering time. After learning about Hadoop, I understand that I need two things:
Data to be visualized (probably some kind of file containing instructions such as draw a rectangle, set coordinates, set a color, etc.)
Some tool/program/utility to render the file described above. I want to invoke it from my program and point it at the data file (ideally it has a command-line interface).
But I don't know anything about 3D rendering, so I need help choosing open-source tools for rendering 3D images/videos. I also don't know anything about the input data, so it would be nice if you could suggest a rendering tool together with a file format it renders.
I have heard of using Hadoop with the .rib file format as the data to visualize and the rndr program to render it, so I am looking for something analogous.
Please note that my goal is to learn more about Hadoop and distributed computation, not about 3D rendering, so please suggest the simplest solution.
Thanks.

KISS: Gnuplot
If you really only care about using Hadoop, and your only requirement of the rendering is that it takes an input file and makes an image so you can show a completed animation, then I would suggest gnuplot. It is actually a graph-drawing program, but it takes scripts and produces image files, and, most usefully for you, you can enter mathematical formulae to draw rather than constructing 3D worlds to render.
You would simply prepare n files which are all the same except for an offset value for the time since start, and gnuplot would produce the appropriate frame from each one.
This is the simplest option, and lets you concentrate on the Hadoop side. To show you how simple it is, this script would produce one frame of an animation of a three-blade fan spinning:
# $FRAME is a placeholder, replaced with the frame number in each generated script
set xrange[-1:1]
set yrange[-1:1]
set polar
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_$FRAME.png"
plot cos(3*t+$FRAME/5)    # three-petal rose whose phase advances each frame
A great thing about Gnuplot is that the commands you type at its interactive prompt to manipulate a graph are the same commands you put in a script. So once you have something you're happy with, you can either run save 'newscript.gpt' or copy out the commands you used, and recreate the graph by running gnuplot newscript.gpt at a prompt.
Incidentally, it is easier to simulate hard-to-render scenes than to actually construct them, so just put a pause (or a shell sleep) command in the gnuplot script to make each frame take 15 seconds or however long you like.
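For the distributed side, each Hadoop task only needs to write out its per-frame script and invoke gnuplot on it. Below is a minimal sketch of such a driver in Python; the file names, the frame count, and writing the divisor as 5.0 (to avoid gnuplot's integer division) are my own choices for illustration, and it assumes gnuplot is on the PATH.

import subprocess

# Template matching the script above; $FRAME is replaced with the frame number.
# 5.0 rather than 5 so gnuplot does floating-point division on the phase offset.
TEMPLATE = """\
set xrange[-1:1]
set yrange[-1:1]
set polar
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_$FRAME.png"
plot cos(3*t + $FRAME/5.0)
"""

def render_frame(frame):
    """Write the per-frame gnuplot script and run gnuplot on it."""
    script_path = "frame_%04d.gpt" % frame
    with open(script_path, "w") as f:
        f.write(TEMPLATE.replace("$FRAME", str(frame)))
    subprocess.run(["gnuplot", script_path], check=True)

if __name__ == "__main__":
    for frame in range(120):        # 120 frames of the spinning fan
        render_frame(frame)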
The whole banana: Blender
Blender is a 3D rendering system. I believe it is used as a rendering system for some mainstream animations on server farms, in exactly the manner you describe. It has quite a learning curve, but I think you should be able to pick up a tutorial or other pre-existing animation project files. From there you would need to work out how to invoke the rendering engine as a command for a specific frame. I've only ever done static rendering in Blender, so I can't take you further there. This would be more impressive visually, but no more impressive academically, and you'll waste more time on that side of things.
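For what it's worth, Blender can be driven headlessly from the command line once you have a .blend file. A hypothetical invocation (the scene file name is made up; -b renders in background mode, -o sets the output pattern and -f picks the frame), wrapped in Python for consistency with the sketch above:

import subprocess

# Render frame 42 of an existing scene.blend without opening the Blender GUI.
# "//frame_####" writes e.g. frame_0042.png next to the .blend file (the exact
# image format depends on the scene's output settings).
subprocess.run(
    ["blender", "-b", "scene.blend", "-o", "//frame_####", "-f", "42"],
    check=True,
)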
My choice
Personally I would go with gnuplot. You can make 3D plots with the splot command rather than the 2D plot command, and although it's not 3D scene rendering as you might be imagining, it achieves the purpose of making a picture from a script. You can begin with something totally basic like the above until you have your setup going, then introduce more complexity. From an implementation perspective, running a gnuplot script is easier than running a script that also requires a data file, but it's still easy to pre-generate the data and have gnuplot read it when you want to move to command + script + data instead of command + script. The key is to increase the difficulty incrementally, not to run before you can walk.
If you ultimately find you have spare time at the end and switch to Blender, then that's all a free win, and you haven't jeopardised your Hadoop project by making it pretty.

Related

What is a good script-driven 3d image drawing program?

I want to use a Makefile to produce lots of images by script in the same manner as TeX tikz. It can be a different script language, but it should be either simple or worth the extra power.
I use graphviz and gnuplot and love them both. Both are driven by text scripts describing what is to be drawn. Absolutely no GUI should be necessary.
The script should be able to do the usual things like drawing a red line.
\draw[line width=2px,color=red] (0,0,0) -- (1,1,1);
Are there any good choices for script-driven 3D image generating programs with the look and feel of graphviz or gnuplot?

Calculations and rendering in MATLAB, GUI in Anything Else

At the Hebrew University of Jerusalem there are a few MATLAB applications, consisting of both calculations and a UI. Since the UI is becoming increasingly complex, it's getting very hard to maintain.
What I'd like to do is keep the calculations and the rendering of 2D and 3D graphs in MATLAB, but control the entire UI from elsewhere. I know MATLAB exports a COM interface, which is OK for using MATLAB calculations, but I couldn't find a way to pass rendered data (MATLAB plots, basically) back through it.
Is there a way to do that?
The simplest thing for you to do would be to issue an instruction to MATLAB to create the plot (perhaps creating it offscreen, to avoid an unwelcome popup window), adjust its appearance and size, then save it to an image file. Pass the filename back, then load it in from your UI code and display it.
However, that will not of course get you a plot that is "live", so you won't be able to edit it, or click on it/interact with it, or even resize it nicely.
If you need that, I'm afraid there's no documented or supported way to do it. But if you're willing to go undocumented, then MATLAB also has a Java interface (jmi.jar) that you can call from Java, and you can embed a live MATLAB plot within a Java GUI, attaching MATLAB or Java callbacks to plot elements.
Note that that capability is completely undocumented, and may well change from release to release without warning. If you'd like to learn how to approach that, I'd recommend reading through the blog Undocumented MATLAB, and probably buying a copy of the book by that blog's author.
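As a concrete illustration of the first (save-to-file) approach, here is a minimal sketch driven from Python via the MATLAB Engine API. This is just one way to send the commands (the same eval calls could be issued over the COM interface mentioned in the question), and the surf(peaks) plot and file name are placeholders.

import matlab.engine

eng = matlab.engine.start_matlab()

# Build the plot in an invisible figure so no window pops up, then export it
# to a PNG the GUI can load. surf(peaks) is only a stand-in for the real plot.
eng.eval(
    "fig = figure('Visible', 'off');"
    "surf(peaks);"
    "set(fig, 'Position', [100 100 800 600]);"
    "print(fig, '-dpng', '-r150', 'plot.png');"
    "close(fig);",
    nargout=0,
)
eng.quit()

# The UI side then simply loads plot.png and displays it.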

Camera calibration patterns

I would like to know if there is a process to generate camera calibration patterns.
We can use Paint or any other graphics tool and set the precise measurements, but then we need to hard-code the point positions or create a txt/xml file.
Is there software that exports the data to a file that we can load into our own software?
What about 3D targets like boxes and/or cubes? Is there a method to generate the correct data points?
Cheers.
For 2D targets such as checkerboards, I used to do it the way user469049 describes, which was quite time consuming. In the end I gave up and created a web tool that does all of the legwork:
https://calib.io/pages/camera-calibration-pattern-generator
I'm using Inkscape:
http://dominoc925.blogspot.co.uk/2012/06/create-camera-calibration-chess-board.html
I usually create a PDF file for printing and also save the file as LaTeX with PSTricks extensions.
The .tex file contains paths, so for a square there is a \moveto command setting the starting point and \lineto commands setting the next points.
In the dominoc925 example they define black and white squares, but I just define the black squares to avoid repeated points.
I have a simple file loader in my code to get the points: it just searches for the \moveto and \lineto commands and works out the points from there.
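A rough sketch of such a loader in Python, assuming the export contains commands like \moveto(12.3,45.6) and \lineto(78.9,10.1) (the exact output from Inkscape may differ, and the file name is a placeholder):

import re

# Match \moveto(x,y) and \lineto(x,y) commands in the exported .tex file.
POINT_RE = re.compile(r"\\(moveto|lineto)\((-?[\d.]+),(-?[\d.]+)\)")

def load_squares(tex_path):
    """Return one list of corner points per path (i.e. per black square)."""
    squares, current = [], []
    with open(tex_path) as f:
        for m in POINT_RE.finditer(f.read()):
            cmd, x, y = m.group(1), float(m.group(2)), float(m.group(3))
            if cmd == "moveto":             # a new path starts here
                if current:
                    squares.append(current)
                current = [(x, y)]
            else:                           # lineto extends the current path
                current.append((x, y))
    if current:
        squares.append(current)
    return squares

# Example: corners = load_squares("pattern.tex")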
For the 3D targets I treat each pattern as one view, because I don't have the tools to build a precise 3D target.
So instead of having different views of one pattern, as in the MATLAB toolbox, I treat each detected pattern as a view.
In other words, if you have a 3D object, then the target on each face is treated as an independent view.
There is probably a more professional way to do the job but this is my process :)
I hope this helps.

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis has the ability to produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no possibility to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model which I have and then produce the animation by feeding in the location and the three principal axis of each block at regular intervals to produce the animation on the fly. The blocks are described as prisms with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and three unit vectors describing the block axis are produced by the program and I can write them out in a way that I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model but I don't think it is the right tool for this purpose. I was thinking that Processing might be able to handle this but I don't have any experience with it. Another thing that I would like to be able to do is to maybe have a 3D PDF file for distribution. But I'm not sure if this can be done with 3D PDF.
Any insight or guidance is greatly appreciated.
Don't let the name fool you: BluffTitler DX9, a commercial package, may be what you're looking for.
Its simple interface gives a fast learning curve, and there are many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. Not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video and embed it in a PDF, but this is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.
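Whichever tool ends up drawing the frames, the per-frame update described in the question is just a rigid transform per block: the three unit axis vectors form a rotation matrix and the location is a translation. A minimal numpy sketch of that step, with the array shapes, names and stand-in values as assumptions for illustration:

import numpy as np

def block_to_world(local_vertices, position, axis1, axis2, axis3):
    """Map one block's local vertices (24 x 3) to world coordinates for a frame."""
    R = np.column_stack([axis1, axis2, axis3])   # 3x3 rotation built from the axes
    return local_vertices @ R.T + position       # rotate, then translate

# Example for a single block in a single frame:
local_verts = np.random.rand(24, 3)              # stand-in for the real prism vertices
pos = np.array([1.0, 2.0, 0.5])                  # block location from the text file
a1, a2, a3 = np.eye(3)                           # three principal axes from the file
world_verts = block_to_world(local_verts, pos, a1, a2, a3)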

Restoring an old manuscript with image processing

Say I have this old manuscript. What I am trying to do is process the manuscript so that all the characters in it can be perfectly recognized. What should I keep in mind while approaching such a problem, and are there any methods for it?
Please help, thank you.
Some graphics applications have macro recorders (e.g. Paint Shop Pro). They can record a sequence of operations applied to an image and store them as a macro script. You can then run the macro in a batch process in order to process all the images contained in a folder automatically. This might be a better option than re-inventing the wheel.
I would start by playing around with the different functions manually, in order to see what they do to your image. There are an awful lot of things you can try: sharpening, smoothing and removing noise with many different methods and options. You can work on the contrast in many different ways (stretch, gamma correction, expand, and so on).
In addition, if your image has a yellowish background, then working on the red or green channel alone would probably lead to better results, because the blue channel will have poor contrast.
Do you mean that you want to make it easier for people to read the characters, or are you trying to improve image quality so that optical character recognition (OCR) software can read them?
I'd recommend that you select a specific goal for readability. For example, you might want readers to be able to read the text 20% faster if the image has been processed. If you're using OCR software to read the text, set a read rate you'd like to achieve. Having a concrete goal makes it easier to keep track of your progress.
The image processing book Digital Image Processing by Gonzalez and Woods (3rd edition) has a nice example showing how to convert an image like this to a black-on-white representation. Once you have black text on a white background, you can perform a few additional image processing steps to "clean up" the image and make it a little more readable.
Sample steps:
Convert the image to black and white (grayscale)
Apply a moving average threshold to the image. If the characters are usually about the same size in an image, then you shouldn't have much trouble selecting values for the two parameters of the moving average threshold algorithm.
Once the image has been converted to just black characters on a white background, try simple operations such as morphological "close" to fill in small gaps (a code sketch for these first three steps follows this list).
Present the original image and the cleaned image to adult readers, and time how long it takes for them to read each sample. This will give you some indication of the improvement in image quality.
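A rough sketch of steps 1-3 using OpenCV in Python; cv2.adaptiveThreshold with the mean method is used here as a stand-in for the moving average threshold, the input file name is a placeholder, and the block size, offset and kernel size are guesses that will need tuning per image:

import cv2

img = cv2.imread("manuscript.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # step 1: grayscale

# Step 2: local (moving-average-like) threshold. THRESH_BINARY_INV makes the
# text white so that the morphological close in step 3 bridges broken strokes.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 35, 15)

# Step 3: close small gaps in the strokes, then flip back to black-on-white.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("manuscript_clean.png", 255 - closed)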
A technique called the Stroke Width Transform has been discussed on SO previously. It can be used to extract character strokes from even very complex backgrounds. The SWT would be harder to implement, but could work for quite a wide variety of images:
Stroke Width Transform (SWT) implementation (Java, C#...)
The texture in the paper could present a problem for many algorithms. However, there are techniques for denoising images based on the Fast Fourier Transform (FFT), an algorithm that you can use to find 1D or 2D sinusoidal patterns in an image (e.g. grid patterns). About halfway down the following page you can see examples of FFT-based techniques for removing periodic noise:
http://www.fmwconcepts.com/misc_tests/FFT_tests/index.html
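To give a flavour of those techniques, here is a minimal numpy/OpenCV sketch that zeroes isolated peaks in the magnitude spectrum away from the DC term (where periodic texture shows up); the file name, notch radius and peak threshold are illustrative guesses.

import cv2
import numpy as np

gray = cv2.imread("manuscript.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

F = np.fft.fftshift(np.fft.fft2(gray))          # 2D spectrum, DC at the centre
mag = np.abs(F)

# Keep low frequencies (the text itself) and suppress isolated bright peaks
# far from the centre, which correspond to periodic patterns in the paper.
h, w = gray.shape
Y, X = np.ogrid[:h, :w]
far_from_dc = (Y - h // 2) ** 2 + (X - w // 2) ** 2 > 30 ** 2
F[(mag > 50 * np.median(mag)) & far_from_dc] = 0

restored = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
cv2.imwrite("manuscript_denoised.png", np.clip(restored, 0, 255).astype(np.uint8))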
If you find a technique that works for the images you're testing, I'm sure a number of people would be interested to see the unprocessed and processed images.
