I would like to know if there is a process to generate camera calibration patterns.
We can use Paint or any other graphics tool and set the precise measurements, but then we need to hard-code the point positions or create a txt/xml file.
Is there software that exports the data to a file that we can load into our own software?
What about 3D targets like boxes and/or cubes? Is there a method to generate the correct data points?
Cheers.
For 2D targets such as checkerboards, I used to do it like user469049 describes, which was quite time-consuming. In the end I gave up and created a web tool that does all of the legwork:
https://calib.io/pages/camera-calibration-pattern-generator
I'm using Inkscape:
http://dominoc925.blogspot.co.uk/2012/06/create-camera-calibration-chess-board.html
I usually create a PDF file for printing and save the file as LaTeX with PSTricks extensions.
The tex file contains paths, so for a square it has a \moveto command to set the starting point and \lineto commands to set the next points.
In the dominoc925 example they define both black and white squares, but I just define the black squares to avoid repeated points.
I have a simple file loader in my code to get the points: it just searches for the \moveto and \lineto commands and works out the points from there.
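As a rough illustration, a loader along those lines might look like this in Python (the regex assumes \moveto(x,y)-style coordinates; adjust the pattern to match how your exporter actually writes the file):

import re

# Collect one list of corner points per square, starting a new square
# at each \moveto and extending it at each \lineto.
point_re = re.compile(r"\\(moveto|lineto)\(([-0-9.]+),([-0-9.]+)\)")

def load_squares(path):
    squares, current = [], []
    with open(path) as f:
        for cmd, x, y in point_re.findall(f.read()):
            if cmd == "moveto" and current:   # a \moveto starts a new square
                squares.append(current)
                current = []
            current.append((float(x), float(y)))
    if current:
        squares.append(current)
    return squares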
For 3D targets I treat each pattern as one view, because I don't have the tools to build a precise 3D target.
So instead of having different views of one pattern, like in the Matlab toolbox, I treat each detected pattern as a view.
In other words, if you have a 3D object, then the target on each face is treated as an independent view.
There is probably a more professional way to do the job but this is my process :)
I hope this helps.
I have a graph that I've written down as a DOT file. I picked this because it's pretty easy to read and write programmatically, and I have a fair amount of tooling that uses the DOT file as input.
Graphviz does a decent job drawing it, but not a great job. (And that's all it's really meant to do, as far as I know.)
I am looking for, and cannot find, a tool that will read in the DOT file and let me manually drag around the vertices and edges I've already described there, similar to what https://www.draw.io offers.
The thing that I really do not want to do is manually re-enter the graph I've already written down (or computed as output from a program or whatever) into draw.io and then have two different files that may or may not have the same set of edges and vertices because of transcription errors.
Ideally, I want something that will write its own file of only the metadata about where things are drawn, without adding a bunch of cruft to the DOT file, so that the tooling I have there still works and I can still use it as the unified representation of the graph between a bunch of different tasks.
You can run dot requesting output as another dot file with the command dot -Tdot. dot will then calculate the layout, but instead of outputting a pictorial representation it will output another dot file with exactly the same information as the input, plus the layout information as additional attributes (pos on nodes and edges, among others). You can then edit the layout information by hand and run it through Graphviz a second time to obtain the pictorial representation (use neato -n2 for that second pass, so the stored positions are honored rather than recomputed).
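Since your tooling is scripted anyway, the round trip is easy to automate; a minimal Python sketch, with file names as placeholders:

import subprocess

# Let dot compute the layout and emit the same graph back as DOT,
# now annotated with pos/width/height attributes.
laid_out = subprocess.run(["dot", "-Tdot", "graph.dot"],
                          capture_output=True, text=True, check=True).stdout
with open("graph_laid_out.dot", "w") as f:
    f.write(laid_out)

# ...hand-edit the pos attributes in graph_laid_out.dot...

# Render without recomputing the layout: -n2 makes neato trust the
# node and edge positions already present in the file.
subprocess.run(["neato", "-n2", "-Tsvg", "-o", "graph.svg",
                "graph_laid_out.dot"], check=True)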
If your tools process a dot file properly, they should be able to process a dot file with layout attributes.
If you want a WYSIWYG tool to aid in the hand-layout process, take a look at dotty.
Here is what I have done when I want to manually adjust the node positioning and edges that are drawn by dot:
Tell dot to generate an SVG image:
dot -Tsvg < graph_text.dot > output_image.svg
Then use an SVG editor such as Inkscape or Sketch (two that I have used) to open the SVG.
When you open the SVG in an editor that can handle the format, each node and edge in the graph will be an independent draggable component.
It's well known that 3D rendering is computationally expensive.
I want to use Apache Hadoop for distributed 3D rendering (of images or videos) to reduce rendering time. After learning about Hadoop, I understand that I need two things:
Data to be visualized (probably some kind of file containing instructions like draw rectangle, set coordinates, set color, etc.)
Some tool/program/utility to render the file described above. I want to invoke it from my program, pointing it at the data file (it's good if this program has a command-line API).
But I don't know anything about 3D rendering, so I need your help in suggesting (open-source) tools for rendering 3D images/videos. I also don't know anything about the input data, so it would be nice if you could suggest a rendering tool plus a file format to render.
I heard about using Hadoop with the .rib file format as the data to visualize, and the rndr program to render this data. So I need some analogue.
Please note that my goal is to learn more deeply about Hadoop and distributed computation, not about 3D rendering, so please suggest the simplest solution.
Thanks.
KISS: Gnuplot
If you really only care about using Hadoop, and your only requirement of the rendering is that it takes an input file and makes an image so you can show a completed animation, then I would suggest gnuplot. It is actually a graph-drawing program, but it takes scripts and produces image files, and, most usefully for you, you can enter mathematical formulae to draw rather than constructing 3D worlds to render.
You would simply prepare n files which are all the same except for an offset value for the time since start, and gnuplot would produce the appropriate frame.
This is the simplest option, and lets you concentrate on the Hadoop side. To show you how simple, this would produce a frame for an animation of a 3-blade fan spinning:
set xrange [-1:1]
set yrange [-1:1]
set polar                        # plot r = f(t) in polar coordinates
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_$FRAME.png"    # $FRAME is substituted per frame before running
plot cos(3*t+$FRAME/5)           # three-petal rose, rotated by the frame offset
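To make the "prepare n files" step concrete, here is a rough Python sketch of the substitution-and-render loop (file names are placeholders; on the Hadoop side, each mapper would run one such gnuplot invocation):

import pathlib
import subprocess

template = pathlib.Path("fan_template.gpt").read_text()
for frame in range(120):
    # Substitute the rotation offset as a float first, so gnuplot does
    # not do integer division in $FRAME/5, then the bare frame number.
    script = (template.replace("$FRAME/5", str(frame / 5))
                      .replace("$FRAME", str(frame)))
    path = pathlib.Path(f"frame_{frame}.gpt")
    path.write_text(script)
    subprocess.run(["gnuplot", str(path)], check=True)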
A great thing about Gnuplot is that you type in commands in an interactive prompt to manipulate the graph, and these are the same commands you put in the script. So once you have something you're happy with, you can either do save 'newscript.gpt' or copy out the commands you used. You can recreate the graph by just running gnuplot newscript.gpt at a prompt.
Incidentally, it is easier to simulate hard-to-render scenes than to actually construct them, so just put a sleep command in the gnuplot script to make it take 15 seconds or however long.
The whole banana: Blender
Blender is a 3d rendering system. I believe it is used as a rendering system for some mainstream animations on server farms in exactly the manner you describe. It has quite a learning curve, but I think you should be able to pick up a tutorial or other pre-existing animation project files. From there you would need to work out how to invoke the rendering engine as a command for a specific frame. I've only ever done static rendering in blender, so I can't take you further there. This would be more impressive visually, but no more impressive academically, and you'll waste more time on that side of things.
My choice
Personally I would go with gnuplot. You can make 3D plots with the splot command rather than the 2D plot command, and although it's not 3D scene rendering as you might be imagining, it achieves the purpose of making a picture using a script. You can begin with something totally basic like the above until you have your setup going, then introduce more complexity; from an implementation perspective, running a gnuplot script is easier than running a script that also requires a data file, but it's still easy to pre-generate the data and have gnuplot read it when you want to do command + script + data instead of command + script. The key is incrementally increasing the difficulty, not running before you can walk.
If you ultimately find you have spare time at the end and switch to Blender, then that's all free win, and you haven't jeopardised your Hadoop project making it pretty.
I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model I have and then produce the animation by feeding in the location and the three principal axes of each block at regular intervals, producing the animation on the fly. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and the three unit vectors describing the block axes are produced by the program, and I can write them out in any form I want.
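The per-frame update this describes is just a rotation plus a translation per block; a minimal numpy sketch, with the array shapes as assumptions:

import numpy as np

# A block's world-space vertices are its centroid plus its local vertex
# offsets expressed in the block's three principal axes.
def block_vertices(centroid, axes, local_vertices):
    # centroid: (3,); axes: (3, 3) with the three unit vectors as rows;
    # local_vertices: (24, 3) offsets in the block's own frame.
    # Each offset (a, b, c) maps to a*axis1 + b*axis2 + c*axis3.
    return np.asarray(centroid) + np.asarray(local_vertices) @ np.asarray(axes)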
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing I would like is to distribute the result, maybe as a 3D PDF file, but I'm not sure whether that kind of animation can be done in a 3D PDF.
Any insight or guidance is greatly appreciated.
Don't let the name fool you: BluffTitler DX9, a commercial product, may be what you're looking for.
Its simple interface provides a fast learning curve, with many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that lets you manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. I'm not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.
I am working on a project in which I need to highlight the differences between a pair of scanned images of text.
Example images are here and here.
I am building a web app based on HTML/JS for this.
I found that OpenCV supports highlighting differences between two images.
I also saw that ImageMagick has such support.
Does OpenCV have support for automatic registration of images?
And is there a JS module for OpenCV?
Which one is more suited for my purpose?
1. Simplistic way:
Suppose the images are perfectly aligned and similarly illuminated: subtract one image from the other pixel by pixel, then threshold the result, filter out noisy blobs, and select the biggest ones. Good for a school project.
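A rough OpenCV sketch of that pipeline in Python (file names and thresholds are placeholders):

import cv2

a = cv2.imread("scan_a.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("scan_b.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(a, b)                      # per-pixel difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Morphological opening filters out small noisy blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Keep only the biggest difference regions (OpenCV 4.x signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]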
2. A bit more complicated:
Align the images, then find a way to even out the illumination, then apply the simplistic way.
How to align:
Find the text area in the two images, as the region darker than the background color.
Find its corners
Use getPerspectiveTransform() to find the transform between images.
warpPerspective() one image to another.
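Put together, a rough sketch of those steps (the corner coordinates here are placeholders for whatever your detection step finds):

import cv2
import numpy as np

img = cv2.imread("scan_a.png")

# Corners of the text area in the source scan, and where they should land.
src = np.float32([[12, 10], [612, 18], [605, 792], [8, 785]])
dst = np.float32([[0, 0], [600, 0], [600, 780], [0, 780]])

M = cv2.getPerspectiveTransform(src, dst)
aligned = cv2.warpPerspective(img, M, (600, 780))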
Another way to register the two images is feature matching, which has quite extensive support in OpenCV. findHomography() will then estimate the transform between the two images from a larger set of matching points.
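For example, a minimal sketch using ORB features (the parameters are illustrative):

import cv2
import numpy as np

img1 = cv2.imread("scan_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features in both images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Estimate the homography from the best matches; RANSAC rejects
# the inevitable mismatches.
src = np.float32([kp1[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))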
3. Canonical answer:
Align the images.
Convert them to text with an OCR engine.
Compare the text in the two images.
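A rough sketch of the OCR-and-compare steps, assuming the pytesseract wrapper around the Tesseract OCR engine:

import difflib

import pytesseract
from PIL import Image

text_a = pytesseract.image_to_string(Image.open("aligned_a.png"))
text_b = pytesseract.image_to_string(Image.open("scan_b.png"))

# A plain text diff then pinpoints the changed lines.
for line in difflib.unified_diff(text_a.splitlines(),
                                 text_b.splitlines(), lineterm=""):
    print(line)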
Well, besides the great help given by vasile, you also need the web app part of the answer.
To make it work on a server, you will probably need a file upload form, as well as a response from the server with the algorithm applied. There are several ways to do it, depending on the server restrictions you have. If you can run command-line programs, you would probably implement the highlighting algorithm in OpenCV and pass the two input files and an output file to the program. A PHP script could handle uploading the files, calling the command-line program, and returning the result to the user.
Another approach could be using Java and JavaCV in a web container such as Apache Tomcat.
Best regards,
Daniel
Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform these into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe it is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle-mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for, automagically, is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.