Qt widgets for HDR image display

I'm currently developing control software for a camera that delivers 14 bit/sample grayscale images (specifically, it's a scientific X-ray camera).
So far I just used the upper 8 bits and passed those to a QImage so that I could see something. However, now I need to show all the detail, so a widget that supports HDR and tone/pseudocolour mapping is needed.
Before I start developing such a widget and subclassing QImage for HDR support, I'd like to know if someone has already done this for Qt and published the source under the LGPL.
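For context, the kind of mapping I have in mind is a simple window/level transform from the 14-bit data into an 8-bit QImage; a minimal sketch (assuming Qt 5.5+ for QImage::Format_Grayscale8; the function name and parameters are illustrative, not an existing API):

```cpp
// Hypothetical sketch: map a 14-bit grayscale buffer into an 8-bit QImage
// with a window/level transform instead of just discarding the low bits.
#include <QImage>
#include <algorithm>
#include <cstdint>

QImage mapTo8Bit(const uint16_t *data, int width, int height,
                 uint16_t windowMin, uint16_t windowMax)
{
    QImage img(width, height, QImage::Format_Grayscale8);
    const double scale = 255.0 / std::max(1, windowMax - windowMin);
    for (int y = 0; y < height; ++y) {
        uchar *line = img.scanLine(y);
        for (int x = 0; x < width; ++x) {
            // Clamp into the window, then rescale to 0..255.
            int v = std::clamp<int>(data[y * width + x], windowMin, windowMax);
            line[x] = static_cast<uchar>((v - windowMin) * scale);
        }
    }
    return img;
}
```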

VTK has really good Qt widgets and supports everything you may need for medical imaging.
Currently we use ParaView. ParaView is based on VTK and Qt, fully open source and easily extensible.
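As a rough illustration of that route outside of ParaView: VTK's vtkImageViewer2 already provides window/level display of high-bit-depth images. A minimal sketch (class and method names as in VTK 8/9; treat them as assumptions to verify against your VTK version):

```cpp
// Hedged sketch: vtkImageViewer2 displays a 16-bit grayscale image with
// built-in window/level control.
#include <vtkSmartPointer.h>
#include <vtkImageViewer2.h>
#include <vtkImageData.h>

void showImage(vtkImageData *image)   // e.g. VTK_UNSIGNED_SHORT scalars
{
    auto viewer = vtkSmartPointer<vtkImageViewer2>::New();
    viewer->SetInputData(image);
    viewer->SetColorWindow(16383.0);  // full 14-bit range...
    viewer->SetColorLevel(8191.5);    // ...centered on its midpoint
    viewer->Render();
}
```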

Related

Run time AR image target creation for Unity

Is there an SDK that supports run time image target creation for Unity?
I have tried User Defined Target from Vuforia but it does not give me the option to save the image for future use.
8th Wall SDK has image target support. You can provide an image at runtime by simply providing the RGB pixels to the engine configure call.
8th Wall calls into your phone's available native support. So if you have ARKit, it uses ARKit image detection. If you have ARCore, it will call into ARCore (ARCore 1.2 has image detection support, but it just got released, so it will take the 8th Wall team a bit of time to roll out support for it). The nice thing is you write your code once and it will just work.
Note: I worked on this product as my day job and was involved with this specific feature. Feel free to add a comment to this answer if you would like more info.
Creating your own with tools like OpenCV is an option, and relatively easy. You can do this using packages like OpenCV for Unity on the Unity Asset Store or EmguCV (paid).
This requires more work and a bit more understanding of computer vision, but there are useful tutorials available, and I believe both packages provide examples that would cover what you're after.
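If you go the roll-your-own route, the core task is natural-feature matching between a stored target and the camera frame. A minimal sketch using OpenCV's C++ API (the Unity packages wrap equivalent calls; the distance threshold is illustrative):

```cpp
// Illustrative sketch: score how well a camera frame matches a stored
// target image via ORB feature matching.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

int countGoodMatches(const cv::Mat &target, const cv::Mat &frame)
{
    auto orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpTarget, kpFrame;
    cv::Mat descTarget, descFrame;
    orb->detectAndCompute(target, cv::noArray(), kpTarget, descTarget);
    orb->detectAndCompute(frame,  cv::noArray(), kpFrame,  descFrame);

    // Brute-force Hamming matching with cross-checking, as is usual
    // for binary descriptors like ORB.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descTarget, descFrame, matches);

    // Count matches under an (illustrative) distance threshold.
    int good = 0;
    for (const auto &m : matches)
        if (m.distance < 40.0f) ++good;
    return good;
}
```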

Direct3D 11 issues in Windows 10 (D3DX11_IMAGE_LOAD_INFO is removed)

As I understand it, the D3DX11_IMAGE_LOAD_INFO structure is deprecated in DX11 for Windows 8.1 and up. What kind of structure can I use as a replacement?
D3DX11_IMAGE_LOAD_INFO is part of the D3DX11 utility library from the DirectX SDK.
D3DX9, D3DX10, and D3DX11 are all deprecated along with the legacy DirectX SDK. See MSDN for the full details here.
Depending on what exactly you were wanting to do with D3DX11 here, there are a number of different options (all of which are open source under the MIT license).
The DirectXTex library provides the functionality in D3DX for loading bitmaps, resizing and converting them, generating mipmaps, compressing, and then writing them out as .DDS files. This is usually overkill for most applications to do at run-time, and not a particularly good use of the end user's time anyhow, but it's great for writing custom content tool pipelines for texture processing. The DirectXTex package includes a 'sample', which is the venerable texconv command-line tool written to use DirectXTex instead of D3DX.
The DDSTextureLoader module is intended to handle efficient loading of .DDS files and creating Direct3D 11 resources from them. It does not perform any runtime conversions, so some legacy files with pixel formats that do not map directly to a DXGI format will fail to load; files whose DXGI format is not supported by the device will also fail to load. For these cases, you will want to use DirectXTex to convert them offline to something that you can rely on being able to load on your target machine. This code supports the full range of Direct3D 11 resources including 1D, 2D, 3D, cubemaps, and texture arrays with mipmaps. The DDSTextureLoader module is included in both the DirectXTK library and in the DirectXTex package.
For very simple cases, there is also a WICTextureLoader module which can load standard bitmap files, does some runtime conversions and resizing, and then creates a Direct3D 11 texture 2D from it. It can optionally enable the 'auto-gen mipmaps' feature of Direct3D 11 to provide some basic mipmap support as well (standard bitmap files can't store mipmaps with the base image the way a .DDS file can). This makes use of the Windows Imaging Component (WIC), but is much more 'heavyweight' than DDSTextureLoader. This gives you less control over the quality of the filtering (particularly mipmaps), and does not support complex textures like volume maps, cubemaps, or texture arrays. The WICTextureLoader module is also included in both the DirectXTK library and in the DirectXTex package.
The ScreenGrab module is intended as a light-weight texture saver for creating 'screen shot' bitmap files from render target textures. The ScreenGrab module is included in the DirectXTK library and DirectXTex package.
-- excerpt from this post
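As a concrete illustration of the DDSTextureLoader path described above, here is a minimal sketch (it assumes the DirectX Tool Kit headers are available; the file name is a placeholder):

```cpp
// Minimal sketch: create a shader resource view from a .DDS file with
// DirectXTK's DDSTextureLoader.
#include <d3d11.h>
#include <wrl/client.h>
#include "DDSTextureLoader.h"

using Microsoft::WRL::ComPtr;

HRESULT LoadTexture(ID3D11Device *device,
                    ComPtr<ID3D11ShaderResourceView> &srv)
{
    // Unlike D3DX11 with D3DX11_IMAGE_LOAD_INFO, no runtime conversion
    // happens here; convert offline with DirectXTex if needed.
    return DirectX::CreateDDSTextureFromFile(
        device, L"texture.dds",
        nullptr,              // optional ID3D11Resource** out
        srv.GetAddressOf());  // ID3D11ShaderResourceView** out
}
```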
For a complete catalog of replacements for legacy D3DX, see this post. There are similar posts for samples, tools, and the DirectX components.
Since you've marked this question with the VS 2013 tag, I'm assuming you are using Visual Studio 2013. You should read about the Windows 8.1 SDK that comes with it. There's a NuGet package for DirectX Tool Kit that works with VS 2013 Update 5, as well as a "Direct3D Game" template package for VS 2013 that you might want to check out.

Is there a library similar to PixTools for Mac?

Is there a library similar to PixTools for capturing images from a scanner and then processing them with image recognition, for OS X operating systems?
PixTools/Scan gives developers programmatic control of the entire scanning process and every scanner feature.
I am programming a system on Mac that uses a scanner; I am programming in Swift.
What I require is a library that allows me to manipulate the images and perform OCR, like PixTools for .NET.
Yes; what you're looking for are the ImageKit and ImageCaptureCore frameworks.
Note that ImageKit is a bit more general than PixTools; it handles transferring images from cameras as well as running scanners. However, it does not support some of the more complex image enhancement and recognition features supported by PixTools.

Is there some easy-to-use image processing/editing library for Cocoa?

Like OpenCV
I hope the library can do several simple image edit operations, like DrawLine(UiImage, startPoint, endPoint) or ConvertToGray(UiImage).
CoreImage is the built-in image manipulation library in Cocoa.
For example: What is the best Core Image filter to produce black and white effects?
I'd suggest using OpenCV, which is a great algorithms and image-processing library.
Choosing OpenCV would give you more options in the future.
Try this
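For reference, a rough sketch of the two operations asked about, using OpenCV's C++ API (on Mac/iOS you would first convert the UIImage/NSImage to a cv::Mat; the file names are placeholders):

```cpp
// Rough sketch of the two requested operations in OpenCV's C++ API.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png");
    if (img.empty()) return 1;                       // nothing to edit

    // DrawLine(image, startPoint, endPoint) equivalent:
    cv::line(img, cv::Point(10, 10), cv::Point(200, 120),
             cv::Scalar(0, 0, 255), /*thickness=*/2);

    // ConvertToGray(image) equivalent:
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    cv::imwrite("output.png", gray);
    return 0;
}
```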
OpenCV is not meant for image editing. You can do that, but it's like buying a big truck to carry your groceries from the market.
The best way is to look into image editing libraries that are already integrated; in Cocoa there are several of them. Core Image, mentioned by Dor, is one of them.
And there are some specialized image editing / UI toolkits that may help you better than OpenCV. You may check whether ImageMagick or Qt are available for Mac/iOS.

Hardware acceleration / performance and linkage of different Mac OS X graphics APIs, frameworks and layers

The more I read about the different types of views/contexts/rendering backends, the more confused I get.
Regarding http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29 :
Mac OS X offers Quartz (Extreme) as a rendering backend, which itself is part of Core Graphics.
The Apple docs, and some books too, say that in any case you somehow use OpenGL (obviously, since this operating system uses OpenGL to render all of its UI).
I currently have an application that should capture real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa), and I would like to further process the frames (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter performance-wise whether you
a) draw the captured frame via Quartz and implicitly via OpenGL, or
b) set up an OpenGL context and a DisplayLink and draw the buffered image explicitly via OpenGL?
What would be the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and documents from Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play. Does going with b) automatically mean I use Core Video? And with which approach can I use Core Animation?
Additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: BTW, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely that you will need to drop down to the OpenGL layer.
If they do offer everything you need, you should see comparable speed; perhaps a small (almost negligible) degradation given the Objective-C overhead.
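On the Core Video sub-question: the DisplayLink in option b) is itself a Core Video object (CVDisplayLinkRef), so going that way does mean using at least that piece of Core Video. A minimal sketch of the setup, using the plain C API (so it compiles as C++/Objective-C++; the actual GL drawing is only indicated by a comment):

```cpp
// Hedged sketch of option b): a CVDisplayLink (Core Video) driving
// explicit OpenGL drawing.
#include <CoreVideo/CoreVideo.h>

static CVReturn renderCallback(CVDisplayLinkRef link,
                               const CVTimeStamp *now,
                               const CVTimeStamp *outputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *userInfo)
{
    // Make your NSOpenGLContext current and draw the latest buffered
    // frame here (this is called on a high-priority background thread).
    return kCVReturnSuccess;
}

void startDisplayLink()
{
    CVDisplayLinkRef link = nullptr;
    CVDisplayLinkCreateWithActiveCGDisplays(&link);
    CVDisplayLinkSetOutputCallback(link, renderCallback, nullptr);
    CVDisplayLinkStart(link);
}
```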
