I am working on a project where I need to process around 10,000 3D images. Can you suggest how I can do this using Hadoop MapReduce, so that I can process the images in parallel and get the results as quickly as possible?
Thank you!
When working with images on Hadoop you can use HIPI (Hadoop Image Processing Interface).
HIPI also ships with some tools and example programs.
You can get started with those.
And yes, how you actually process each image is entirely up to you; a minimal job sketch is below.
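To give an idea of the overall shape of such a job, here is a minimal, untested sketch loosely based on HIPI's "hello world" tutorial: a map-only job that reads images out of a HIPI Image Bundle (.hib) and emits one value per image (here, the mean pixel value, purely as a placeholder for whatever processing your 3D images need). The class names (HibInputFormat, HipiImageHeader, FloatImage) are from HIPI 2.x, so verify them against the version you download.

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;
    import org.hipi.image.FloatImage;
    import org.hipi.image.HipiImageHeader;
    import org.hipi.imagebundle.mapreduce.HibInputFormat;

    import java.io.IOException;

    public class ImageJob extends Configured implements Tool {

      // One map() call per image stored in the HIB bundle.
      public static class ImageMapper
          extends Mapper<HipiImageHeader, FloatImage, Text, FloatWritable> {
        @Override
        public void map(HipiImageHeader header, FloatImage image, Context context)
            throws IOException, InterruptedException {
          if (image == null) return;
          // Placeholder per-image work: mean of all pixel values.
          float[] data = image.getData();
          float sum = 0;
          for (float v : data) sum += v;
          float mean = sum / data.length;
          // Key by image dimensions just to have something; HipiImageHeader also
          // carries metadata (e.g. the source file name) you could use instead.
          context.write(new Text(image.getWidth() + "x" + image.getHeight()),
              new FloatWritable(mean));
        }
      }

      @Override
      public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "image-processing");
        job.setJarByClass(ImageJob.class);
        job.setInputFormatClass(HibInputFormat.class); // read images from a .hib bundle
        job.setMapperClass(ImageMapper.class);
        job.setNumReduceTasks(0);                      // map-only: each image is independent
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FloatWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));  // path to images.hib
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new ImageJob(), args));
      }
    }

You would first pack the 10,000 images into a .hib bundle with HIPI's import tool (hibImport in the tutorial), which is what lets Hadoop split the work evenly across mappers instead of choking on thousands of small files.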
I believe stereoscopy, or stereoscopic (3D) imaging, is a technique used to record and display 3D (three-dimensional) images, or an illusion of depth in an image. Stereoscopic images provide spatial information that tricks a viewer's brain into perceiving depth in the images.
So there must be sets of images you can process.
I have some images whose file names correspond to GPS data (long/lat), and I'm trying to use (preferably) MATLAB to put them together and make a bigger image. In other words, I want to create a local map from several aerial pictures.
Does anyone have an idea, or know of any software that really works, for stitching images together into a bigger image?
Thanks
It depends on what type of images you have.
If you have georeferenced images, in GeoTIFF or another format, the Matlab Mapping Toolbox is very good for loading geographic imagery, manipulating it, and displaying it.
If you just have images that aren't georeferenced, you can use the Matlab Computer Vision Toolbox to register the images and stitch them together like a panorama. Photoshop also has some powerful panorama stitching tools if you don't need to do it programmatically.
I have a set of images in which I want to separate graphs from other images. I am looking at OpenCV, but since I don't have any experience in image processing, I don't know which technique or set of techniques should be applied in order to get this done.
I need to know which image-processing techniques are suitable for this task.
Sample graphs are as follows:
The other images can be of any type, e.g.:
Thanks,
Jawad.
I want to implement an image-processing method in order to distinguish scanner images from camera images. How can I collect images from scanners and cameras? The images should vary widely in scale, content, and capture device, and include both RGB and gray-scale images.
thanks a lot.
The most famous database for machine learning is the UCI Machine Learning Repository, where you can find tons of open source datasets organized by learning objective, data type, number of instances and more.
I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis has the ability to produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no possibility to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model which I have and then produce the animation by feeding in the location and the three principal axes of each block at regular intervals, generating the animation on the fly. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4,320 vertices). The location and the three unit vectors describing the block axes are produced by the program, and I can write them out in whatever form I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing that I would like to be able to do is distribute the result as a 3D PDF file, but I'm not sure whether this kind of animation can be done in a 3D PDF.
Any insight or guidance is greatly appreciated.
Don't let the name fool you: BluffTitler DX9, a commercial program, may be what you're looking for.
Its simple interface provides a fast learning curve, and there are many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. Not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tools in Acrobat Pro, i.e., it is not specific to 3D.
I am working on a project in which I need to highlight the differences between a pair of scanned images of text.
Example images are here and here.
I am building a web app based on HTML/JS for this.
I found that OpenCV supports highlighting differences between two images.
I also saw that ImageMagick has such support.
Does OpenCV have support for automatic registration of images?
And is there a JS module for OpenCV?
Which one is more suited for my purpose?
1. Simplistic way:
Suppose the images are perfectly aligned and similarly illuminated: subtract one image from the other pixel by pixel, then threshold the result, filter out noisy blobs, and select the biggest ones. Good for a school project; a minimal sketch follows below.
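As an illustration only, here is a rough sketch of that idea using the OpenCV 3.x/4.x Java bindings (Java is also where the JavaCV suggestion later in this thread points). The file names are placeholders, and the threshold and minimum blob area are arbitrary values you would need to tune for your scans.

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class SimpleDiff {
      public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder file names; assumes the scans are already aligned and similarly lit.
        Mat a = Imgcodecs.imread("scanA.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat b = Imgcodecs.imread("scanB.png", Imgcodecs.IMREAD_GRAYSCALE);

        // 1. Pixel-by-pixel difference.
        Mat diff = new Mat();
        Core.absdiff(a, b, diff);

        // 2. Threshold the difference (40 is arbitrary; tune it).
        Mat bin = new Mat();
        Imgproc.threshold(diff, bin, 40, 255, Imgproc.THRESH_BINARY);

        // 3. Filter out small noisy blobs and highlight the bigger ones.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(bin, contours, new Mat(), Imgproc.RETR_EXTERNAL,
            Imgproc.CHAIN_APPROX_SIMPLE);
        Mat highlight = Imgcodecs.imread("scanA.png"); // color copy to draw on
        for (MatOfPoint c : contours) {
          if (Imgproc.contourArea(c) > 100) { // minimum blob area, also arbitrary
            Rect r = Imgproc.boundingRect(c);
            Imgproc.rectangle(highlight, r.tl(), r.br(), new Scalar(0, 0, 255), 2);
          }
        }
        Imgcodecs.imwrite("diff_highlight.png", highlight);
      }
    }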
2. A bit more complicated:
Align the images, then find a way to equalize the illumination, then apply the simplistic way.
How to align:
Find the text area in the two images, as being darker than the background (paper) color.
Find its corners
Use getPerspectiveTransform() to find the transform between images.
Use warpPerspective() to warp one image onto the other.
Another way to register the two images is feature matching, which has quite extensive support in OpenCV; findHomography() will estimate the transform between the two images from a larger set of matching points. (A sketch of this approach appears after section 3 below.)
3. Canonical answer:
Align the image.
Convert it to text with an OCR engine.
Compare the text in the two images.
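To make the alignment step from approach 2 concrete, here is a hedged sketch of the feature-matching route (ORB keypoints, a brute-force Hamming matcher, findHomography() with RANSAC, then warpPerspective()), again with the OpenCV 3.x/4.x Java bindings and placeholder file names. Once the second scan is warped into the first one's frame, the simplistic differencing above can be applied to the aligned pair.

    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.*;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class AlignScans {
      public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat ref = Imgcodecs.imread("scanA.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat mov = Imgcodecs.imread("scanB.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Detect ORB keypoints and descriptors in both scans.
        ORB orb = ORB.create(2000);
        MatOfKeyPoint kpRef = new MatOfKeyPoint(), kpMov = new MatOfKeyPoint();
        Mat descRef = new Mat(), descMov = new Mat();
        orb.detectAndCompute(ref, new Mat(), kpRef, descRef);
        orb.detectAndCompute(mov, new Mat(), kpMov, descMov);

        // Match descriptors (Hamming distance suits ORB's binary descriptors).
        // In practice you would also filter matches by distance before the fit.
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descMov, descRef, matches);

        // Collect matched point pairs.
        List<Point> ptsMov = new ArrayList<>(), ptsRef = new ArrayList<>();
        List<KeyPoint> kpMovList = kpMov.toList(), kpRefList = kpRef.toList();
        for (DMatch m : matches.toList()) {
          ptsMov.add(kpMovList.get(m.queryIdx).pt);
          ptsRef.add(kpRefList.get(m.trainIdx).pt);
        }
        MatOfPoint2f src = new MatOfPoint2f(ptsMov.toArray(new Point[0]));
        MatOfPoint2f dst = new MatOfPoint2f(ptsRef.toArray(new Point[0]));

        // Robustly estimate the homography and warp the second scan onto the first.
        Mat H = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0);
        Mat aligned = new Mat();
        Imgproc.warpPerspective(mov, aligned, H, ref.size());
        Imgcodecs.imwrite("scanB_aligned.png", aligned);
        // 'aligned' can now be compared against 'ref' with the simplistic differencing above.
      }
    }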
Well, besides the great help given by vasile, you also need an answer for the web-app part.
In order to make it work on a server, you will probably need a file-upload form, as well as a response from the server with the algorithm applied. There are several ways you can do it, depending on the server restrictions you have. If you can run command-line programs, you would probably implement the highlighting algorithm with OpenCV and pass the two input files and an output file to the program. A PHP script could then handle uploading the files, calling the command-line program, and returning the result to the user.
Another approach could be using Java and JavaCV in a web container such as Apache Tomcat, for instance; a rough sketch of the upload-and-invoke part is below.
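As a purely hypothetical sketch of that route (the servlet name, the form field names, and the external "diffTool" command are made up for illustration), the container-side code only needs to accept the two uploads, hand them to whatever command-line highlighter you build (for example, the OpenCV program from the earlier answer, or ImageMagick's compare), and stream the resulting image back:

    import javax.servlet.ServletException;
    import javax.servlet.annotation.MultipartConfig;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Hypothetical servlet: "imageA"/"imageB" are the upload form's field names,
    // and "diffTool" stands in for your own command-line highlighter.
    @WebServlet("/diff")
    @MultipartConfig
    public class DiffServlet extends HttpServlet {
      @Override
      protected void doPost(HttpServletRequest req, HttpServletResponse resp)
          throws ServletException, IOException {
        Path a = Files.createTempFile("scanA", ".png");
        Path b = Files.createTempFile("scanB", ".png");
        Path out = Files.createTempFile("diff", ".png");
        Files.copy(req.getPart("imageA").getInputStream(), a, StandardCopyOption.REPLACE_EXISTING);
        Files.copy(req.getPart("imageB").getInputStream(), b, StandardCopyOption.REPLACE_EXISTING);

        // Call the command-line highlighter with the two inputs and the output file.
        Process p = new ProcessBuilder("diffTool", a.toString(), b.toString(), out.toString())
            .redirectErrorStream(true)
            .start();
        try {
          if (p.waitFor() != 0) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "diff failed");
            return;
          }
        } catch (InterruptedException e) {
          throw new ServletException(e);
        }

        // Return the highlighted image to the browser.
        resp.setContentType("image/png");
        Files.copy(out, resp.getOutputStream());
      }
    }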
Best regards,
Daniel