Qt - Modify the qsvg plugin to support animation with AnimatedImage, possible?

In a Qt Quick Controls 2 project, I need to draw several animated SVGs. As AnimatedImage doesn't support SVG animation, I found a workaround: I created my own C++ component that uses a timer to refresh the animation at a fixed interval. Thanks to that, I know my animated SVGs are rendered correctly by the QtSvg renderer.
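For reference, here is a minimal sketch of that kind of component (class and property names are illustrative; it assumes QSvgRenderer advances the SVG animation over time, so periodic repaints show successive frames):

#include <QPainter>
#include <QQuickPaintedItem>
#include <QSvgRenderer>
#include <QTimer>

// Illustrative QML item that repaints an animated SVG on a timer.
class AnimatedSvgItem : public QQuickPaintedItem
{
    Q_OBJECT
    Q_PROPERTY(QString source READ source WRITE setSource)

public:
    explicit AnimatedSvgItem(QQuickItem *parent = nullptr)
        : QQuickPaintedItem(parent)
    {
        // Refresh at ~60 FPS; each update() triggers paint(), and the
        // renderer draws the frame matching the elapsed animation time.
        connect(&m_timer, &QTimer::timeout, this, [this] { update(); });
        m_timer.start(16);
    }

    QString source() const { return m_source; }

    void setSource(const QString &source)
    {
        m_source = source;
        m_renderer.load(source);
        update();
    }

    void paint(QPainter *painter) override
    {
        if (m_renderer.isValid())
            m_renderer.render(painter, boundingRect());
    }

private:
    QString m_source;
    QSvgRenderer m_renderer;
    QTimer m_timer;
};

Registered with qmlRegisterType<AnimatedSvgItem>(), it can be used from QML in place of AnimatedImage.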
However, I want a solution natively supported by the AnimatedImage Qt Quick control. I'm already able to compile and modify the code of the QtSvg module, as well as its qsvg plugin, which is the bridge between the QtSvg module and the QImageIOHandler interface that AnimatedImage uses to draw the content.
But even if I add the necessary code to the plugin to support animation (support for the Animation option, the animation-related functions, ...), the AnimatedImage component continues to draw a static SVG.
I dug into the plugin caller's source code and came to the following conclusions:
The plugin caller caches the frames in several QPixmaps and reads the animation loop only once (after which the cache is replayed)
The plugin's read() function is called several times if, and only if, the source image file is divided into frames, as is the case e.g. in GIF images or cursor (.cur) files: a frame is read and cached, the file cursor is moved to the next frame, that frame is read and cached, and so on
For that reason, the way SVG images are animated may be incompatible with the Qt animated image engine: the whole SVG data is read at once, which tells the engine that the image contains only one frame, instead of deriving the frame count from the animation duration and the FPS
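If those conclusions hold, an animated handler would at minimum have to advertise the Animation option and report a frame count, frame delay and loop count, with read() rendering one frame per call. Here is a hedged sketch of that surface; deriving the frame count from duration x FPS is my assumption, whether QSvgRenderer::setCurrentFrame() really steps the animation clock is exactly what needs verifying, and loading from device() plus plugin registration are omitted:

#include <QImageIOHandler>
#include <QPainter>
#include <QSvgRenderer>

class AnimatedSvgHandler : public QImageIOHandler
{
public:
    bool canRead() const override { return m_frame < m_frameCount; }

    bool supportsOption(ImageOption option) const override
    { return option == Animation || option == Size; }

    QVariant option(ImageOption option) const override
    {
        if (option == Animation)
            return true;                       // advertise animation
        if (option == Size)
            return m_renderer.defaultSize();
        return QVariant();
    }

    int imageCount() const override { return m_frameCount; }
    int loopCount() const override { return -1; }          // infinite
    int nextImageDelay() const override { return 1000 / m_fps; }
    int currentImageNumber() const override { return m_frame; }

    bool read(QImage *image) override
    {
        if (m_frame >= m_frameCount)
            return false;
        QImage frame(m_renderer.defaultSize(),
                     QImage::Format_ARGB32_Premultiplied);
        frame.fill(Qt::transparent);
        QPainter painter(&frame);
        m_renderer.setCurrentFrame(m_frame);   // step the animation
        m_renderer.render(&painter);
        *image = frame;
        ++m_frame;
        return true;
    }

private:
    QSvgRenderer m_renderer;   // to be loaded from device() contents
    int m_frame = 0;
    int m_fps = 30;            // assumed fixed frame rate
    int m_frameCount = 0;      // duration x FPS, computed on load
};

Note that this is consistent with the observation above: unless imageCount() and the Animation option report multiple frames, the caller has no reason to call read() more than once.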
This leads me to the following questions:
Are my conclusions above correct? If not, how do AnimatedImage and the Qt image plugins work under the hood?
Is there a way to create a Qt image plugin that supports animated SVG images? If yes, how can I do that?
I could not find a good, well-explained document about how to create an animated image plugin; can someone recommend one?
Why doesn't Qt support animated SVG natively, and is anything planned to fix this situation in the future?
NOTE: I know I could use QtWebEngine to support animated SVGs, but it's not an option for me, because:
QtWebEngine adds ~300 MB to my application. As its current size is ~10 MB, such an increase isn't reasonable just to show a few animated images.
QtWebEngine brings several side effects in my case which have no answer, e.g.: https://forum.qt.io/topic/114326/webengine-how-to-draw-an-animated-svg-above-a-transparent-web-view
So please don't propose that alternative as a solution.

Related

How to prevent rendering artifacts in PDFKit/NSImage?

I'm trying to create a tool to rasterise vector images, stored in PDF files, on macOS, but the resulting images contain artifacts around the edges of some shapes. Preview.app, on the other hand, always renders the same PDF flawlessly.
I've tried:
Loading the PDF document using PDFKit, and rendering the page using both draw(with:to:) and thumbnail(of:for:)
Loading the PDF document into an NSImage (which creates an NSPDFImageRep), and using cgImage(forProposedRect:context:hints:)
In both cases I get these aliasing-like artifacts. The PDF file is out of my control, so it can't be changed to fix any issues it might have. I'm currently trying to migrate away from Cairo (which renders correctly) to Apple's PDF rendering for performance reasons: PDFKit renders much more quickly, albeit with these artifacts.
Is there anything I've missed which would fix the output?
It looks like the issue was caused by my rasterising PDFs on multiple threads (specifically, my tool rasterises each PDF at multiple resolutions, so I thought: why not do them simultaneously?).
Performing the operations sequentially on the main thread instead fixed it. I thought I had come up with a way to run it concurrently by initialising the CGContext manually (instead of using NSImage's lockFocus()/unlockFocus() and NSGraphicsContext.current), but alas, as soon as I add a context.scaleBy (to generate the images at different sizes), it fails again.
So for now I'm just doing it on the main thread until another solution comes along.
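For reference, a hedged sketch of that sequential approach using the Core Graphics C API (callable from plain C/C++; rasterizePage is an illustrative name), with each scale rendered into its own explicitly created bitmap context:

#include <CoreGraphics/CoreGraphics.h>

// Rasterise one PDF page at the given scale; the caller releases the
// returned CGImageRef. The page comes from
// CGPDFDocumentCreateWithURL + CGPDFDocumentGetPage.
CGImageRef rasterizePage(CGPDFPageRef page, double scale)
{
    CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
    size_t width  = (size_t)(box.size.width  * scale);
    size_t height = (size_t)(box.size.height * scale);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(
        nullptr, width, height, 8 /* bits per component */, 0,
        space, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    if (!ctx)
        return nullptr;

    CGContextSetShouldAntialias(ctx, true);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextScaleCTM(ctx, scale, scale);  // the scaling step reported
                                           // fragile under concurrency
    CGContextDrawPDFPage(ctx, page);

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return image;
}

Calling this once per resolution, sequentially, mirrors the working configuration described above.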

What is the best way to display and apply filters to RAW images on macOS?

I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising: loading thumbnails for around 600 45 MB RAW images using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in an NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast, and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images, returning error code 108, if I call the API in a continuous loop. I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator call, so maybe there is a timing issue, or not enough file handles, when the API is called too quickly and for too long.
However, I am still having trouble rendering a full-sized image to the display. Here I am using an NSScrollView with a layer-backed NSView as the documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
At this point the entire main thread hangs until the image has loaded, and this may take a few seconds.
Once it has loaded it's fine, and zooming in and out by changing the documentView frame size is very fast; scrolling around the full-size image is also super smooth, without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze?
I've seen the recent WWDC 2020 session where they demonstrate similar scrolling of large numbers of images, but I haven't been able to find anything useful on loading large images other than CATiledLayer, and it's not really clear whether that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated and it seems one now has to use MetalKit instead of GLKit; unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI, I tried using some of the new SwiftUI collection views and List, but they seem to be significantly slower than AppKit, and I found some of the collection view items never render; of course these could just be bugs in the macOS 11 beta.
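A plausible explanation for the hang is that Core Animation decodes the CGImage lazily, on the main thread, the first time the layer is committed. Here is a hedged sketch of pre-decoding a bounded-size bitmap off the main thread using ImageIO's C API (makeDecodedImage and the maxPixel bound are illustrative names), so that the later layer.contents assignment is cheap:

#include <CoreFoundation/CoreFoundation.h>
#include <ImageIO/ImageIO.h>

// Force ImageIO to decode a downsampled bitmap now, on whatever
// thread this runs on; the caller releases the returned image.
CGImageRef makeDecodedImage(CFURLRef url, int maxPixel)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL(url, nullptr);
    if (!source)
        return nullptr;

    CFNumberRef size = CFNumberCreate(nullptr, kCFNumberIntType, &maxPixel);
    const void *keys[] = {
        kCGImageSourceCreateThumbnailFromImageAlways,  // decode eagerly
        kCGImageSourceThumbnailMaxPixelSize,           // bound the size
        kCGImageSourceCreateThumbnailWithTransform     // honour rotation
    };
    const void *values[] = { kCFBooleanTrue, size, kCFBooleanTrue };
    CFDictionaryRef options = CFDictionaryCreate(
        nullptr, keys, values, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CGImageRef image =
        CGImageSourceCreateThumbnailAtIndex(source, 0, options);
    CFRelease(options);
    CFRelease(size);
    CFRelease(source);
    return image;
}

Decoding at (or slightly above) the display resolution on a background thread, then assigning the result to layer.contents on the main thread, should avoid the multi-second stall.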
OK, well, I finally figured it out, and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event, I think I have solved most if not all of the issues related to using Metal-backed CALayers and rendering real-time updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle, and I'm happy to share if anyone is looking for help.
Some key pointers:
I am using CAMetalLayer and NSView
I override the CAMetalLayer display(layer:) method and call layer.setNeedsDisplay() when the user slides an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageURL:).
Most importantly, I use the RAW filter's scaleFactor parameter to size the image; I encountered major performance issues using any other method to resize the image for the view's size.
Don't expect high performance if the image is zoomed right in; 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
A short video of the result is here https://youtu.be/5wp0CIWAoIM

Is there any way to increase resolution of the image generated via toDataURL?

This is more of a feature request. I am easily able to generate the PNG by calling toDataURL on the returned canvas, but the quality of the image is rather blurry/poor. I did some googling and found out that by default it just returns an image at 96 DPI, and there doesn't seem to be a standard way of improving this. Also, toDataURLHD is experimental and does not work anyway.
Is there any way html2canvas can return an image at a higher resolution? Or, if it can at least provide a way to get the DOM being rendered (with all the computed styles applied to it), I can use some other library to generate whatever image I want from that.
I am unable to comment, so I'm writing an answer instead. @shiv, you should probably look at the question "Setting canvas toDataURL jpg quality".

How to render and retrieve image data on Google Project Tango?

I'd like to render the live image data on a GL surface (as shown in various Project Tango samples), and at the same time record (encode) it via a MediaCodec.
(On an Android Lollipop device, I've accomplished that using the camera2 interface and multiple surface targets, which works fine, but thus far Tango is pre-Lollipop...)
From other answers, it appears that you have to use the C API to access the image data.
The C API provides two camera frame functions -- TangoService_connectTextureId() and TangoService_connectOnFrameAvailable(). However, the documentation states "Use either TangoService_connectTextureId() or TangoService_connectOnFrameAvailable() but not both."
Why not both?
How do I best render and retrieve the image data?
The Pythagoras release now allows simultaneous use of the color and color texture callbacks. That said, you want to use connectOnFrameAvailable if you want to process the image; you'd end up doing extra unnecessary work if you tried to peel it out of the texture.
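For reference, a hedged sketch of registering the frame callback through the C API (I'm recalling the signatures from the Tango client headers, so treat the details as assumptions):

#include "tango_client_api.h"  // Tango C API header from the device SDK

// Called by the Tango service for every new color camera frame.
static void onFrameAvailable(void *context, TangoCameraId id,
                             const TangoImageBuffer *buffer)
{
    // buffer->data, buffer->width, buffer->height and buffer->format
    // can be copied into a MediaCodec input buffer here; keep this
    // callback short, as it runs on a service thread.
}

bool connectColorCamera()
{
    TangoErrorType err = TangoService_connectOnFrameAvailable(
        TANGO_CAMERA_COLOR, nullptr /* context */, onFrameAvailable);
    return err == TANGO_SUCCESS;
}

Rendering can still use TangoService_connectTextureId() on the GL side now that both are allowed, but per the above, the encoder path is better fed from the callback.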

Render css3 transformed elements to canvas / image

Basically, I want to save a certain DOM element of my page as an image and store it on a server (and also allow the user to save the image to a local disk). I reckon the only way of doing this currently is to render to a canvas, which allows me to send the image data via AJAX and also create image elements in the DOM. I found a promising library for this; however, my DOM element has
multiple transparent backgrounds
CSS 3D transforms
And html2canvas simply fails there. Is there currently any way to neatly save an image representation of the current state of a DOM element, with all its CSS3 glory?
Browsers may never allow a DOM element to be truly rendered as it is to a canvas, because there are very serious security concerns around being able to do that.
Your best bet is html2canvas plus your own hacks. You may simply need to implement your own render code in a customised way. Multiple backgrounds should be doable with drawImage calls, and you may be able to work in CSS3 transforms when canvas 2D gets setTransform() (which I think is only in the next version of the spec).
At this stage of CSS3 development and cross-browser support, this is probably not possible without writing your own html2canvas extension.
You can try digging into the Google Chrome bug reporter, as it allows you to send a snapshot of the current web page. But I think that's an internal function which isn't available in the JS API.
Also, I think this can be easily abused for spying on users, so don't expect any official support from browser development teams.
