I'm trying to create a tool to rasterise vector images (stored in PDF files) on macOS, but the resulting images contain artifacts around the edges of some shapes. Preview.app, on the other hand, always renders the PDF flawlessly, as shown in this example:
I've tried:
Loading the PDF document using PDFKit, and rendering the page using both draw(with:to:) and thumbnail(of:for:)
Loading the PDF document into an NSImage (which creates an NSPDFImageRep), and using cgImage(forProposedRect:context:hints:)
In both cases I get these aliasing-like artifacts, as seen on the left-hand side of the image above. The PDF file is out of my control, so it can't be changed to fix any issues it might have. I'm currently trying to migrate away from Cairo (which renders the file correctly) to Apple's PDF rendering for performance reasons: PDFKit renders it much more quickly, albeit with these artifacts.
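For reference, a minimal sketch of the PDFKit path I'm using (simplified to the first page and a fixed output size; names are illustrative):

    import AppKit
    import PDFKit

    // Simplified version of the PDFKit approach listed above.
    func rasterize(pdfURL: URL, size: NSSize) -> NSImage? {
        guard let document = PDFDocument(url: pdfURL),
              let page = document.page(at: 0) else { return nil }
        // thumbnail(of:for:) renders the page into an NSImage of the given size.
        return page.thumbnail(of: size, for: .mediaBox)
    }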
Is there anything I've missed which would fix the output?
So it looks like the issue was caused by my rasterising PDFs on multiple threads (specifically, my tool rasterises each PDF at multiple resolutions, so I figured: why not do them simultaneously?).
Performing the operations sequentially on the main thread instead fixed it. I thought I had found a way to do it concurrently by initialising the CGContext manually (instead of using NSImage's lockFocus()/unlockFocus() and NSGraphicsContext.current), but alas, as soon as I add a context.scaleBy call (to generate the images at different sizes), it fails again.
So for now I'm just doing it on the main thread until another solution comes along.
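A minimal sketch of the sequential approach that works for me (the scales and function name are illustrative):

    import AppKit
    import PDFKit

    // Rasterise one page at several resolutions, sequentially on the main thread.
    // Running these concurrently is what produced the artifacts for me.
    func rasterize(page: PDFPage, scales: [CGFloat]) -> [NSImage] {
        assert(Thread.isMainThread)
        let bounds = page.bounds(for: .mediaBox)
        return scales.map { scale in
            let size = NSSize(width: bounds.width * scale,
                              height: bounds.height * scale)
            return page.thumbnail(of: size, for: .mediaBox)
        }
    }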
Related
I am searching for a performant way to generate a PNG based on a layout. These layouts will mostly consist of text and one or two icons. The data source for this information is JSON. However, the JSON won't be normalised to fit the layout/screen size. Let me clarify: the JSON will contain an attribute "Title". The title may be too long, so the font size has to be decreased. Or the description has too many attributes and only some of them should be displayed, and so on.
We currently have a system in place for creating these layouts and generating a PNG, but creating new layouts is very time-consuming and, frankly speaking, a pain. However, the current solution is extremely performant: it can generate a PNG in around 1-2 ms. For my PoC to be deemed successful, I need to reach 10 ms or lower. If there is a solution that takes slightly longer per image but can be scaled horizontally, that's fine as well.
TL;DR:
I'm searching for a way to generate a PNG based on a layout I create. The PNG generation needs to be performant (< 10 ms) and the implementation of new layouts should be as hassle-free as possible.
What technologies are suited for this use case?
Here is an example of what a layout might look like:
Edit: I can't post images yet, but please search for "electronic shelf labeling" on Google Images.
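To illustrate the kind of fitting logic I mean, here is a sketch in Swift with AppKit (purely as an example; the function name and sizes are made up, and any drawing stack with text measurement would work the same way):

    import AppKit

    // Hypothetical fitting step: shrink the font until the title fits the
    // label width, mirroring the "title may be too long" case above.
    func fittedFont(for title: String, maxWidth: CGFloat,
                    startingAt size: CGFloat = 24, floor: CGFloat = 8) -> NSFont {
        var pointSize = size
        while pointSize > floor {
            let font = NSFont.systemFont(ofSize: pointSize)
            let width = (title as NSString)
                .size(withAttributes: [.font: font]).width
            if width <= maxWidth { return font }
            pointSize -= 1
        }
        return NSFont.systemFont(ofSize: floor)
    }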
Also:
I already asked a similar question yesterday, but it was pointed out that my approach probably wouldn't lead to success: Original Post
I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising: loading thumbnails for around 600 45 MB RAW images using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in an NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast, and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images, returning error code 108, if I call the API in a continuous loop. I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator call, so maybe there is a timing issue, or not enough file handles, or something similar if the API is called too quickly for too long.
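For reference, a sketch of the QLThumbnailGenerator call I'm using (the size and scale are illustrative):

    import AppKit
    import QuickLookThumbnailing

    // Generate a thumbnail for one image file (macOS 10.15+).
    // Note: the completion handler is called on a background queue.
    func loadThumbnail(for url: URL, size: CGSize,
                       completion: @escaping (NSImage?) -> Void) {
        let request = QLThumbnailGenerator.Request(fileAt: url,
                                                   size: size,
                                                   scale: 2.0,
                                                   representationTypes: .thumbnail)
        QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { rep, error in
            completion(rep?.nsImage)
        }
    }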
However, I am still having trouble rendering a full-sized image to the display. Here I am using an NSScrollView with a layer-backed NSView as the documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
At this point the entire main thread hangs until the image has loaded, and this may take a few seconds.
Once it has loaded it's fine and zooming in and out by changing the documentView frame size is very fast - scrolling around the full size image is also super smooth without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze?
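Update: one thing I've been experimenting with is forcing the decode on a background queue by drawing into an offscreen bitmap context first, so that the layer assignment doesn't trigger the decode. A rough sketch (names are illustrative, and I'm not sure this is the intended approach):

    import AppKit

    // Pre-decode the image off the main thread, then hand the already-decoded
    // bitmap to the layer on the main thread.
    func setDecoded(_ cgImage: CGImage, on layer: CALayer) {
        DispatchQueue.global(qos: .userInitiated).async {
            let context = CGContext(data: nil,
                                    width: cgImage.width,
                                    height: cgImage.height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: 0,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
            context?.draw(cgImage, in: CGRect(x: 0, y: 0,
                                              width: cgImage.width,
                                              height: cgImage.height))
            let decoded = context?.makeImage() ?? cgImage
            DispatchQueue.main.async {
                layer.contents = decoded  // no decode stall on assignment
            }
        }
    }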
I've seen the recent WWDC 2020 session where they demonstrate similar scrolling of large numbers of images, but I haven't been able to find anything useful on loading large images other than CATiledLayer, and it's not really clear whether that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated, and it seems one now has to use MetalKit instead of GLKit; unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI: I tried some of the new SwiftUI collection and List views, but they seem to be significantly slower than AppKit, and I found some of the collection view items never render; of course these could just be bugs in the macOS 11 beta.
OK, well I finally figured it out, and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event, I think I have solved most if not all of the issues related to using Metal-backed CALayers and rendering real-time updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle, and I'm happy to share if anyone is looking for help.
Some key pointers:
I am using CAMetalLayer and NSView
I override the CAMetalLayer display() method and call layer.setNeedsDisplay() when the user moves an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageURL:options:) (see the sketch after this list)
Most importantly, I use the RAW filter's scaleFactor parameter to size the image; I encountered major performance issues using any other method to resize the image for the view's size
Don't expect high performance if the image is zoomed right in: 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
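A rough sketch of the filter chain (using the pre-macOS 12 CIFilter RAW API; the exposure adjustment is just an example of a chained filter, and the function name is made up):

    import CoreImage

    // Build the chained CIImage: RAW decode (at a given scale factor)
    // followed by an example adjustment filter.
    func previewImage(for url: URL, scale: CGFloat, ev: Double) -> CIImage? {
        let rawFilter = CIFilter(imageURL: url, options: nil)
        // The scale factor drives the decode size: the key performance lever.
        rawFilter.setValue(scale, forKey: kCIInputScaleFactorKey)
        guard let rawOutput = rawFilter.outputImage else { return nil }

        let exposure = CIFilter(name: "CIExposureAdjust")!
        exposure.setValue(rawOutput, forKey: kCIInputImageKey)
        exposure.setValue(ev, forKey: kCIInputEVKey)
        return exposure.outputImage
    }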
A short video of the result is here https://youtu.be/5wp0CIWAoIM
I am able to use ImageMagick to generate animated images properly, and I can post them to Twitter without issue. However, upon posting, Twitter seems to destroy the animation component, I'm guessing through its re-encoding of the image.
This is less than desirable in my situation, as I need to post a statistical compilation of images daily to an account, and I need the animation to retain its integrity. I suppose this is a function of the image host (Photobucket or the like) that they're now using.
How does one encode an image and upload it such that it retains its integrity? I have wondered about uploading directly to TwitPic or other options, or perhaps exploring the ImageMagick encoding options more fully so that they line up precisely with Twitter's requirements and produce an image that needs no re-encoding. I'm looking for help in this regard.
We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe they are, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is whether a solution exists which will allow us to view the image without converting it beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50mb file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that, because it will place a significant load on your server.
Alternatively, you might use something like Google's Maps tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you first generate one of the zoomable image formats it supports (these are listed in its documentation), then follow its getting-started guide.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 MB files across the line when someone only wants an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels; the server opens the larger image, creates a smaller image that fits the desired view, and sends that back to the web browser.
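To make that concrete, here is a sketch of the crop-and-scale step (shown with Swift and CoreGraphics purely as an illustration; any server-side imaging library works the same way):

    import CoreGraphics

    // Extract the requested region from the master image and scale it down
    // to the viewport size requested by the browser.
    func renderView(master: CGImage, region: CGRect, viewport: CGSize) -> CGImage? {
        guard let cropped = master.cropping(to: region) else { return nil }
        let context = CGContext(data: nil,
                                width: Int(viewport.width),
                                height: Int(viewport.height),
                                bitsPerComponent: 8,
                                bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        context?.interpolationQuality = .high
        context?.draw(cropped, in: CGRect(origin: .zero, size: viewport))
        return context?.makeImage()
    }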
As far as compression goes, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are exactly the values you had before compression). So consider changing the compression you use, and don't just rely on the default settings in an image editor.
I am a scriptmonkey working with a lot of graphic designers who know not a thing about the web.
Despite my objections, I frequently find myself with problems such as a 100 KB background image, several textual items they have turned into glossy images, and three separate lengthy FLVs loading into a page, etc.
I would really like to define a stack to control the flow of items loading. Eg, render the background, then the HTML, then the page images, then load the FLVs.
I assume this exists and I have been searching badly.
Can anyone point me to good resources on this?
For the images, you can use each image's load event to know when it has finished loading, and chain the loading that way. Have a look at this: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5214317.html