3D image comparison in TestComplete

How do I compare 3D image files in TestComplete? My application processes some 3D images, and I want them to be compared with a reference. The image file types are .spt, .vtk, .mdb, .dcm.
Can someone help me?

You can probably use checkpoints for this purpose. For example:
To verify an image displayed on screen, use a region checkpoint.
To verify the actual file that holds the image data, use a file checkpoint.
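If the checkpoint engine can't read one of these formats, a plain byte-level comparison does essentially what a file checkpoint does under the hood. A minimal sketch in Python (all paths are placeholders):

```python
import hashlib

def files_match(actual_path, reference_path):
    """Compare two files byte-for-byte via their SHA-256 digests."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()
    return digest(actual_path) == digest(reference_path)

# Compare a produced .vtk file against the stored reference.
print(files_match("output/scan.vtk", "baselines/scan.vtk"))
```

Note that a byte-level check will flag files that differ only in metadata (e.g. embedded timestamps) even when the image data is identical.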

Well, for DICOM images you could think about converting them into bitmaps and having TestComplete compare the bitmaps. Admittedly, there is one additional step you have to take care of, and that is choosing a (command-line) tool that does the conversion for you. I think IrfanView does the job. Give it a try and post your results.
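As a sketch of that idea, assuming IrfanView (with its format plugins) can read your DICOM files, you could drive the conversion via its /convert command-line switch and do the comparison from Python; the executable path and file names below are placeholders:

```python
import subprocess
from PIL import Image, ImageChops  # pip install pillow

IRFANVIEW = r"C:\Program Files\IrfanView\i_view64.exe"  # adjust to your install

def dicom_to_bmp(dcm_path, bmp_path):
    # IrfanView's /convert switch writes the input file out in the target format.
    subprocess.run([IRFANVIEW, dcm_path, f"/convert={bmp_path}"], check=True)

def bitmaps_equal(path_a, path_b):
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    # difference() has an empty bounding box iff the images are identical.
    return a.size == b.size and ImageChops.difference(a, b).getbbox() is None

dicom_to_bmp("output/scan.dcm", "output/scan.bmp")
print(bitmaps_equal("output/scan.bmp", "baselines/scan.bmp"))
```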

Related

Identify logos on PNG without AI Services

So, I don't believe there is a way to "read" a PNG through its binary code, or something like that, but I have zero knowledge of image processing or computer vision, so I can't be sure whether there are ways to do it or not.
To be clear: I want to know if there are ways to identify the image of a logo, using an image of the logo as a reference, through methods that use only the binary of the image.
Thanks in advance
If the logo is known, and not distorted much (a logo printed on a scarf is not as good as one on a flat surface), there are technologies that can achieve that:
It is a template matching problem, see https://docs.opencv.org/4.5.2/d4/dc6/tutorial_py_template_matching.html.
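A minimal template-matching sketch with OpenCV's Python bindings (the file names and the score threshold are placeholders to tune on your data):

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # image to search
logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)    # reference logo

# Slide the template across the scene, scoring the match at each position.
result = cv2.matchTemplate(scene, logo, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # a high peak means a confident match
    print(f"Logo found at {max_loc} with score {max_val:.2f}")
else:
    print("Logo not found")
```

Keep in mind that plain template matching is not scale- or rotation-invariant; for a logo that appears at different sizes or angles you would need feature-based matching (e.g. ORB or SIFT) instead.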

I want to batch extract GPS data (EXIF), then convert it to an address and save that text to a JPG

I have 1500 pictures that need the address where they were taken to be shown in the corner of the picture. I have the pictures geo-tagged.
I need help extracting the GPS data and converting that to an address.
Then getting that address and saving it into the picture in the bottom right corner. Can anyone help or point me in the right direction please?
You're going to need two things. First, you need an application that will extract the EXIF data you are interested in. You should be able to write this yourself, as it is fairly simple to do. You will need the JPEG standard, and just enough of it to identify the markers, specifically the APPn markers. You are also going to need the EXIF and (possibly the) TIFF standards to figure out how to extract the data you need from the EXIF APPn marker.
Writing the information to the corner of the image is the tough part. There are probably command line applications that will allow you to do that already. If worst comes to worst, there are various language API's that will allow you to read a JPEG stream into a buffer; draw text to the buffer; then write the buffer back to a JPEG stream.
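To illustrate the marker-scanning part, here is a rough Python sketch that walks a JPEG's segments to find the APP1 (EXIF) payload; it parses only the segment headers, not the EXIF/TIFF structure inside:

```python
import struct

def find_exif_segment(path):
    """Scan JPEG segment markers and return the raw APP1 (EXIF) payload, or None."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":           # SOI marker; otherwise not a JPEG
            return None
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return None                     # malformed file or end of segments
            (length,) = struct.unpack(">H", f.read(2))
            payload = f.read(length - 2)        # the length field includes itself
            if marker == b"\xff\xe1" and payload.startswith(b"Exif\x00\x00"):
                return payload                  # APP1 segment holding the EXIF data
            if marker == b"\xff\xda":           # SOS: compressed data starts, give up
                return None

exif = find_exif_segment("photo.jpg")
print(len(exif) if exif else "no EXIF found")
```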
You will most likely need to use a programming language for this; I think Python would be suitable, as it's easy to get started with and has the libraries needed for your task.
For example, in order to extract the location (coordinates) from the JPEG files you can use pyexiv2.
To transform those coordinates into addresses you need to use a geocoding service such as Google's Geocoding API - you can use their Python library directly or code your own using something like requests.
Now that you have the address data you can overlay it onto the images using Python's Pillow library.
If you're looking for some code to get started, let me shamelessly plug my own project called photomap; you can find code to read GPS information from images here: https://github.com/iticus/photomap/blob/master/handlers.py#L170
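As a rough end-to-end sketch of those steps: this uses Pillow's getexif() instead of pyexiv2 (a substitution on my part; it needs a reasonably recent Pillow), and the geocoding call is a placeholder you would point at your chosen service. All paths and the API key are hypothetical:

```python
import requests
from PIL import Image, ImageDraw

def to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def read_coordinates(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(0x8825)            # the GPSInfo sub-IFD
    lat = to_degrees(gps[2], gps[1])      # GPSLatitude / GPSLatitudeRef
    lon = to_degrees(gps[4], gps[3])      # GPSLongitude / GPSLongitudeRef
    return lat, lon

def reverse_geocode(lat, lon):
    # Placeholder: substitute your geocoding service, parameters and key.
    resp = requests.get("https://example.com/geocode",
                        params={"latlng": f"{lat},{lon}", "key": "YOUR_KEY"})
    return resp.json()["address"]

def stamp_address(path, out_path):
    img = Image.open(path)
    address = reverse_geocode(*read_coordinates(path))
    draw = ImageDraw.Draw(img)
    # Measure the text, then anchor it at the bottom-right with a 10px margin.
    left, top, right, bottom = draw.textbbox((0, 0), address)
    draw.text((img.width - (right - left) - 10,
               img.height - (bottom - top) - 10), address, fill="white")
    img.save(out_path)

stamp_address("photos/IMG_0001.jpg", "stamped/IMG_0001.jpg")
```

Running this over your 1500 files is then just a matter of iterating over the directory.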

Detect image that only contains text

I have a database with two kinds of images:
Photos, with or without text in them
Images that contain only a background color with text over it
I have a Delphi web service and I want to send only the photos to the clients. Is there any simple and fast algorithm to detect whether an image is only a background with text over it?
What type of approach should I use?
Thanks in advance
Since the background is a single color, you could probably do it quickly by counting image colors.
See the CountColors function in the ImageProcessingPrimitives.PAS unit.
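The same color-counting idea sketched in Python with Pillow (a substitution for the Delphi routine; the threshold is a guess you would tune on your data):

```python
from PIL import Image

def looks_like_text_image(path, max_colors=64):
    """Heuristic: flat background + text uses far fewer distinct colors than a
    photo. getcolors() returns None once the count exceeds maxcolors."""
    img = Image.open(path).convert("RGB")
    return img.getcolors(maxcolors=max_colors) is not None

print(looks_like_text_image("sample.png"))
```

Anti-aliased text edges add extra colors, so you may need a higher threshold or a quantization pass first.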
You can use an OCR (Optical Character Recognition) library. Take a look at this question.
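If you go the OCR route, a quick sketch with pytesseract (my suggestion, not from the answer; it requires the Tesseract binary to be installed) could flag images whose content is mostly recognizable text:

```python
import pytesseract  # pip install pytesseract
from PIL import Image

def has_text(path, min_chars=20):
    """Treat an image as 'text over background' if OCR recovers enough characters."""
    text = pytesseract.image_to_string(Image.open(path))
    return len(text.strip()) >= min_chars

print(has_text("sample.png"))
```

OCR is much slower than color counting, so it makes more sense as a second pass on the images the cheap heuristic can't decide.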

Very large images in web browser

We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe they are, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is if a solution exists which will allow us to view the image without a conversion beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50mb file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that, because it will place a significant load on your server.
Alternatively, you might use something like Google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you should generate a zoomable image format, as mentioned here. Then follow the getting-started guide here.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 MB files across the line when someone only wants to get an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels; the server opens the larger image, creates a smaller image that fits the desired view, and sends that back to the web browser.
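A sketch of that server-side approach with Pillow (the file names and request parameters are made up for illustration; note that Pillow still decodes the full image in memory, so a production version would want a tiled reader):

```python
from PIL import Image

# Lift Pillow's decompression-bomb guard, since the source really is huge.
Image.MAX_IMAGE_PIXELS = None

def render_view(path, x, y, width, height, out_width, out_height):
    """Crop the requested region from the large source image and scale it
    down to the viewport size the browser asked for."""
    with Image.open(path) as img:
        region = img.crop((x, y, x + width, y + height))
        return region.resize((out_width, out_height), Image.LANCZOS)

# e.g. the browser asks for an 800x600 view of a 4000x3000 region:
view = render_view("huge_scan.tif", 2000, 1500, 4000, 3000, 800, 600)
view.save("view.jpg", quality=85)
```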
As far as compression goes, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are the exact values you had prior to compression). So consider changing the compression you are using, and don't just rely on the default settings in an image editor.
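For instance, with Pillow the trade-off is a single parameter (file names are placeholders):

```python
from PIL import Image

img = Image.open("huge_scan.tif").convert("RGB")
img.save("out_q95.jpg", quality=95)  # lossy, but barely visible at high quality
img.save("out.png")                  # lossless: decoded pixels match the source exactly
```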

Save matrix of double values in OpenCV

I have an OpenCV matrix of floating-point (CV_32F) values. I'd like to save it to disk. I know I could convert it to a 1-channel 8-bit IplImage and save that, but then I lose precision. Is there a way to save it directly in the 32-bit format, without having to convert it first? It would also be nice if the resulting file had an image format, so I can view the result as an image.
You can always save any "object" (CvMat, IplImage, anything...) from OpenCV "as is" by using cvSave() and load it back with cvLoad(). In my experience, most floating-point image formats do not work correctly, so I usually save my floating-point data this way.
However, you cannot directly view the stored data.
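The cvSave()/cvLoad() calls are the old C API; with the modern Python bindings the equivalent is cv2.FileStorage, roughly like this:

```python
import cv2
import numpy as np

mat = np.random.rand(4, 4).astype(np.float32)  # a CV_32F matrix

# Write the matrix to an XML/YAML container.
fs = cv2.FileStorage("matrix.xml", cv2.FILE_STORAGE_WRITE)
fs.write("my_matrix", mat)
fs.release()

# Read it back.
fs = cv2.FileStorage("matrix.xml", cv2.FILE_STORAGE_READ)
restored = fs.getNode("my_matrix").mat()
fs.release()
print(np.allclose(mat, restored))
```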
Another possibility we have used frequently is including our own build of OpenEXR. You can easily store full-precision floating-point images using this library, and many third-party applications are able to open EXR files. Note that OpenCV includes OpenEXR, if I am not mistaken, but the last time I tried, saving/loading floating-point images did not work correctly. However, you should first try to save a floating-point image as *.exr; maybe that already does the magic with recent versions.
You could always iterate over the matrix and write it out yourself. If you want it to be viewable as an image, you can use a variant of PPM. I'm not sure what programs would be able to natively read your image format if you use values out of the 0-255 range though.
This is old, but thought I'd throw in my two cents.
If you just want to save float images to disk, and you don't need to view them, you may want to look at Portable Float Map (PFM) image format. Very simple format, just saves floats to disk, no compression, minimal header. You can write your own read/write code for this very quickly. That's what I'm using for HDR research.
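For reference, a minimal grayscale PFM writer from the format description (a header line, the dimensions, a scale factor whose sign encodes endianness, then the raw floats, bottom row first):

```python
import numpy as np

def write_pfm(path, data):
    """Write a 2-D float32 array as a grayscale PFM file."""
    data = np.asarray(data, dtype=np.float32)
    with open(path, "wb") as f:
        f.write(b"Pf\n")  # "PF" would mean a 3-channel color image
        f.write(f"{data.shape[1]} {data.shape[0]}\n".encode())
        f.write(b"-1.0\n")  # negative scale marks little-endian float data
        f.write(np.flipud(data).tobytes())  # PFM stores rows bottom-to-top

write_pfm("depth.pfm", np.random.rand(240, 320).astype(np.float32))
```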
As the others pointed out, to "view" float images you need to ask yourself some questions about their contents and how to sensibly scale them back into an 8-bit range you can display on your monitor. You might consider Matlab's image viewer (the imshow function), which can scale double-valued images for display.
You might also consider saving to either EXR or HDR format and using Photomatix's built-in HDR image viewer, which gives you a little separate window showing a real-time tonemapped view around your current cursor position. It's a good way to navigate an HDR or floating-point image to get a sense of "what's really there" without tonemapping the whole thing.
