Silverlight canvas freedraw underperforming - performance

I'm making a Silverlight website that includes paint-like features, including freehand drawing. To achieve this I used the technique described on the following website: http://codeding.com/articles/freehand-drawing-in-silverlight .
The problem is that when I run the demo project, it starts to lag badly after just a few seconds of drawing. I realise that this is probably caused by the number of shapes this technique requires. However, and this is my main question:
How on earth does the demo on the website not lag no matter how much I draw, while my local project, which should have the EXACT same code, lags right away?
I tried to find something about improving canvas performance in general, but the only thing I found was converting the drawing into a static image, which is not really ideal since I need undo/redo functionality.

The number of shapes added to the Canvas shouldn't be the reason for the lag; there must be something else, such as converting the drawing into an image for the undo/redo functionality. For undo/redo, you can save the stroke information instead of images. Creating and storing images on each undo/redo operation will consume too much memory.
A stroke is nothing but a set of points from the start (mouse-down event) to the end (mouse-up event), and a set of strokes forms a complete drawing. You can always recreate the drawing from the saved stroke information (just as you could from saved images). A simple data structure like List<List<Point>> can store a complete drawing, which is far more memory efficient than creating and storing the image itself.
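To make that concrete, here is a minimal sketch of stroke-based undo/redo, assuming the points are collected between MouseLeftButtonDown and MouseLeftButtonUp. The class and member names are illustrative, not from the linked article.

using System.Collections.Generic;
using System.Windows;

// Hypothetical helper: keeps strokes as point lists so undo/redo never touches images.
public class DrawingHistory
{
    private readonly List<List<Point>> strokes = new List<List<Point>>();
    private readonly Stack<List<Point>> redoStack = new Stack<List<Point>>();
    private List<Point> currentStroke;

    // Call on MouseLeftButtonDown.
    public void BeginStroke(Point start)
    {
        currentStroke = new List<Point> { start };
        redoStack.Clear(); // starting a new stroke invalidates the redo history
    }

    // Call on MouseMove while the button is held down.
    public void AddPoint(Point p)
    {
        if (currentStroke != null) currentStroke.Add(p);
    }

    // Call on MouseLeftButtonUp.
    public void EndStroke()
    {
        if (currentStroke != null) strokes.Add(currentStroke);
        currentStroke = null;
    }

    public void Undo()
    {
        if (strokes.Count == 0) return;
        redoStack.Push(strokes[strokes.Count - 1]);
        strokes.RemoveAt(strokes.Count - 1);
        // Clear the Canvas and redraw the remaining strokes after this call.
    }

    public void Redo()
    {
        if (redoStack.Count == 0) return;
        strokes.Add(redoStack.Pop());
        // Redraw the strokes after this call.
    }

    public IList<List<Point>> Strokes
    {
        get { return strokes; }
    }
}

Undo then just pops the last stroke and redraws the Canvas from the remaining lists, which is cheap compared to snapshotting images.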

Related

What is the best way to display and apply filters to RAW images on macOS?

I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising, and loading thumbnails for around 600 45 MB RAW images using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in an NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast, and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images and starts returning error code 108 if I call the API in a continuous loop. I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator API call, so maybe there is a timing issue or not enough file handles or something if the API is called too quickly and for too long.
However, I am still having trouble rendering a full-sized image to the display. Here I am using an NSScrollView with a layer-backed NSView documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
And at this point the entire main thread hangs until the image has loaded - and this may take a few seconds.
Once it has loaded it's fine and zooming in and out by changing the documentView frame size is very fast - scrolling around the full size image is also super smooth without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze ?
I've seen the recent WWDC2020 session where they demonstrate similar scrolling of large numbers of images but I haven't been able to find anything useful on loading large images other than CATiledLayer - but it's not really clear if that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated and it seems one has to use MetalKit instead of GLKit. Unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI, I tried using some of the new SwiftUI CollectionView and List, but they seem to be significantly slower than AppKit, and I found some of the collection view items never render; of course these could just be bugs in the macOS 11 beta.
OK - well I finally figured it out and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event I think I have solved most if not all the issues related to using metal backed CALayers and rendering realtime updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle and happy to share if anyone is looking for help.
Some key pointers:
I am using CAMetalLayer and NSView
I override the CAMetalLayer.display(layer:) method and call the layer.setNeedsDisplay() when the user slides an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageUrl:)
Most importantly, I use the RAW filter's scaleFactor parameter to size the image; I encountered major performance issues using any other method to resize the image for the view's size.
Don't expect high performance if the image is zoomed right in; 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
A short video of the result is here https://youtu.be/5wp0CIWAoIM

Monogame Extended Tiled

I'm making an isometric city builder using MonoGame.Extended and Tiled. I've got everything set up, and now I need to somehow access specific tiles so I can change them at runtime as the user clicks on a tile to build an object. The problem is, I can't seem to find a map.GetLayer("Layername").GetTile(x,y) or .SetTile(x,y) function or anything similar.
What I can do is edit the XML (.tmx) file, which contains a matrix that represents the map and its drawn tiles. The problem with this is that I need to rebuild the map in the content pipeline after editing for the changes to be displayed. I can't really build at runtime, or can I?
Thanks in advance!
Something like this will get you part way there.
var tileLayer = map.GetLayer<TiledMapTileLayer>("layername");
TiledMapTile tile;
if (tileLayer.TryGetTile(x, y, out tile))
{
    // do something with tile
}
However, there's only a limited amount of things you can actually do with the tile once you've got it from the map.
There's no such thing as a SetTile method because changing tile data at runtime is not currently supported. This is a limitation of the renderer, which has been optimized for rendering very large maps by building static geometry that can't be changed once it's loaded into the graphics card.
There has been some discussion about building another renderer that would handle dynamic map changes, but at this stage nothing like that has been implemented in the library. You could always have a go at implementing a simple renderer yourself; a really basic one is not as hard as you might think.
An alternative approach to dealing with this kind of problem might be to pre-process the map data before giving it to the renderer. The idea would be to effectively separate the layers of the map that are static from those that are dynamic and render the dynamic tiles as normal sprites. Just a thought, I'm not sure about the details of how this might work.
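If you go down that road, a rough sketch of the idea might look like the following. It keeps the dynamic layer in a plain array and draws it with a SpriteBatch; the class name, tileset layout, tile size, and the isometric projection are assumptions you would adapt to your map, and none of this is part of the MonoGame.Extended Tiled API.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical dynamic layer drawn as sprites on top of the static Tiled layers.
public class DynamicTileLayer
{
    private readonly int[,] tiles;        // tile index per cell, 0 = empty
    private readonly Texture2D tileset;   // tileset texture loaded through the Content Pipeline
    private readonly int tileWidth;
    private readonly int tileHeight;

    public DynamicTileLayer(int width, int height, Texture2D tileset, int tileWidth, int tileHeight)
    {
        tiles = new int[width, height];
        this.tileset = tileset;
        this.tileWidth = tileWidth;
        this.tileHeight = tileHeight;
    }

    // This is the SetTile the Tiled API doesn't provide: just record the change yourself.
    public void SetTile(int x, int y, int tileIndex)
    {
        tiles[x, y] = tileIndex;
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        int columns = tileset.Width / tileWidth;

        for (int x = 0; x < tiles.GetLength(0); x++)
        {
            for (int y = 0; y < tiles.GetLength(1); y++)
            {
                int index = tiles[x, y];
                if (index == 0) continue;

                // Source rectangle of the tile inside the tileset texture (1-based indices).
                var source = new Rectangle(
                    ((index - 1) % columns) * tileWidth,
                    ((index - 1) / columns) * tileHeight,
                    tileWidth,
                    tileHeight);

                // Simple isometric projection; adjust to match your map's origin and tile size.
                var position = new Vector2(
                    (x - y) * tileWidth / 2f,
                    (x + y) * tileHeight / 2f);

                spriteBatch.Draw(tileset, position, source, Color.White);
            }
        }
    }
}

You would draw the static Tiled layers with the normal renderer first, then call Draw on this layer inside the same SpriteBatch Begin/End block.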
I plan to eventually revisit the Tiled API in the next major version of MonoGame.Extended. Don't hold your breath, these things can take a lot of time, but I am paying attention to the feedback and kinds of problems people are experiencing with the existing API.
Since the map data is stored in an XML (or CSV) file that runs through the Content Pipeline, you cannot change it at runtime.
Anyway, in a city builder you usually do not change existing tiles; you place objects on top of existing tiles.

Best Practice for laying out images for printing in a WYSIWYG Mac app?

I'm in the concept phase of a Mac application that should let the user easily select and layout images for printing. It's a document-based app and a document can have multiple pages with lots of pictures in different sizes and rotations on it. The UI would kind of be like the UI of Pages.app.
Those pictures can possibly be large hi-res images. The user should also be able to print them in the best quality that the images offer.
I have re-watched some WWDC sessions about Quartz, 2D drawing optimization and NSView.
I know that there are a few different ways of accomplishing what I want to do, namely:
Use a custom view for a "page" and draw the images in drawRect: with Core Graphics/Quartz. Use CG transforms to rotate and scale images.
Also use a custom view for a "page", but use NSImageView-subviews to display the images. Use Core Animation and layer transforms to scale/rotate images.
What is the best practice for this? Drawing with Core Graphics or using NSViews? Why?
Thank you so much!
Johannes
It depends on how interactive these pages should be. If there is a lot of mouse interaction, e.g. dragging, selecting, etc., I'd go with views. If you want fluid animations I'd even use plain CALayers with their contents set to one image each. This would also let you use zPosition to order the images in case they overlap; a view-based solution makes z-ordering hard.
The drawRect: approach should be fastest, but it is harder to integrate user interaction and you must handle z-ordering manually.
This is a reply I got from opening one of my two Apple Technical Support Incidents:
Hi Johannes,
Thanks for contacting Apple DTS regarding your question about printing
and the different ways to construct your applications general UI (with
views).
There is a trend toward using layer-backed views in OS X (utilizing
Core Animation layers) which is motivated by the ability to easily
animate your application's user interface, with little work, when
needed. However in terms of printing, you would be better off to
implement drawRect for custom views so that the view contents can be
drawn at "full resolution" when rendered into the context for
printing.
If instead you use layer backed views and ask those layers to
"renderInContext" the layer contents would be used to render, which
commonly will not be set to the full resolution of your source
documents/images. This is because layer backed views take additional
memory to store those bitmaps (cached layer contents), and because of
that, they are recommended to be sized appropriately for the screen
(which may not necessarily be sized appropriately for the printed
page).
Does this help guide your application architecture? Please let me
know.
So basically this means that using layer-backed views might result in sub-optimal printing quality. I've replied with some follow-up questions ("Would setting wantsLayer = NO on the rootView right before printing help?") and will post the answers as soon as I get them.
All three approaches should work. Since you should be using scaled-down representations of large images anyway, I don't think there will be much difference. Do what you feel most comfortable doing.
My guess is that just using layer-backed NSViews (one per draggable image) will probably work best for starters. If you find performance lacking, you can always micro-optimize. Note that you may have to make your views a tad larger than the images so you can draw the selection handles outside them.
This is all assuming that you will never want to do a more complex drawing.

WP7: IsolatedStorage vs. WriteableBitmap

I have a scenario that I need some good solid advice on. The question is really about speed of WriteableBitmap vs. images in IsolatedStorage on the Windows Phone.
I have an app that displays a UserControl (#1) which is a little graphically heavy. When the user swipes it, it transitions in a push-left type of transition to bring in a new UserControl (#2) which is also a little graphically heavy. If the user swipes the other way, control #1 is brought in in the same type of push-transition, this time from the right.
What I do today is take a snapshot of #1, load #2 off screen and take a snapshot of it, put both side-by-side in a Canvas control and animate that control either left or right. One of the reasons I don't just use the controls and animate them is they may have animation that starts when they are loaded - my current technique allows me to capture a screen shot of pre-animation and post-animation, depending on which direction they go in.
What I'm wondering, however, is whether it would be better/faster to just do the above the first time, save the WriteableBitmap to IsolatedStorage with Extensions.SaveJpeg, and just use that in subsequent transition animations.
Would load/render/WriteableBitmap each time generally be faster, or would loading the JPEG from IsolatedStorage be faster each time? I see that the Transitions control in the SDK doesn't really do either of these, so I'm open to different suggestions that might also improve performance.
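For reference, the snapshot-and-cache idea described above could look roughly like this. It is only a sketch: the SnapshotCache name and the JPEG quality value are mine, while WriteableBitmap, SaveJpeg, and IsolatedStorageFile are the standard WP7 APIs.

using System.IO;
using System.IO.IsolatedStorage;
using System.Windows;
using System.Windows.Media.Imaging;

// Hypothetical helper for caching rendered UserControl snapshots.
public static class SnapshotCache
{
    // Render a loaded (possibly off-screen) control into a bitmap.
    public static WriteableBitmap CaptureSnapshot(UIElement control)
    {
        var bitmap = new WriteableBitmap(control, null);
        bitmap.Invalidate();
        return bitmap;
    }

    // Persist the snapshot so later transitions can skip the render step.
    public static void SaveSnapshot(WriteableBitmap bitmap, string fileName)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile(fileName))
        {
            // SaveJpeg is the Extensions.SaveJpeg extension method from System.Windows.Media.Imaging.
            bitmap.SaveJpeg(stream, bitmap.PixelWidth, bitmap.PixelHeight, 0, 85);
        }
    }

    // Load a previously saved snapshot for use as an Image source.
    public static BitmapImage LoadSnapshot(string fileName)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile(fileName, FileMode.Open, FileAccess.Read))
        {
            var image = new BitmapImage();
            image.SetSource(stream);
            return image;
        }
    }
}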
I expect this to be very dependent on the hardware and the application, so it is pretty hard to give an answer based on this input. It doesn't look too hard to test (on actual hardware and with the actual application), so my advice is to build both and test.
The applications I have been working with use both approaches and to be honest I haven't noticed much difference.
Also you might try and enable bitmap caching on the controls. This will give you a writeable bitmap implementation that is very fast.
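In case it helps, bitmap caching is just a property on the element; something along these lines (the control name is illustrative):

// Cache the control's rendered output as a bitmap on the GPU so it is reused
// during animations instead of being re-rendered every frame.
// BitmapCache lives in System.Windows.Media; the XAML equivalent is CacheMode="BitmapCache".
frontControl.CacheMode = new System.Windows.Media.BitmapCache();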

Windows Phone 7 Image Looping

I would like to loop through a sequence of images. I have tried using a Pivot control, but I don't like the blank space in between image transitions. I would prefer to use something that will animate between images smoothly. I also looked at the LoopingSelector control, but I can't seem to set the orientation to horizontal.
I'm assuming you're interested in a kind of image viewer like iOS offers, swiping right or left to navigate through the photos. If that's the case, I hate to say it, but I think you're looking at building your own control.
I think to implement it properly these are the essential things you need to think about and address:
For performance's sake, load all the images you have into MemoryStream objects and store the binary data (you can get creative with this and only store the first 10-15 images, depending on how large the images are; doing this would enable your control to support thousands of images and still perform like a champ).
Once an image is about to be on screen, set the source of the image to the saved MemoryStream object that has the bytes loaded into it (this minimizes the work the UI thread does, keeping the control performant and responsive).
Use the Manipulation events to track the delta X of the swipe gesture in order to actually move the items
Move the images by changing their Canvas.Left property (you can go negative I think, otherwise just make your canvas the width of all the images you have combined); a rough sketch of these two steps follows the list
Look up some of the available libraries for momentum support so you can have a natural, smooth transition between images
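Here is the rough sketch of the manipulation and Canvas.Left parts. The control and element names (ImageStrip, imageCanvas) and the fixed image width are assumptions, and the snap at the end is deliberately simple; a Storyboard would give you the momentum mentioned above.

using System;
using System.Windows.Controls;
using System.Windows.Input;

// Hypothetical swipeable strip of images laid out side by side on a Canvas.
public partial class ImageStrip : UserControl
{
    private double panelOffset;              // current Canvas.Left of the strip
    private const double ImageWidth = 480;   // assumes full-screen-width images

    public ImageStrip()
    {
        InitializeComponent();
        ManipulationDelta += OnManipulationDelta;
        ManipulationCompleted += OnManipulationCompleted;
    }

    private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // Follow the finger while the user drags left or right.
        // imageCanvas is a Canvas assumed to be defined in the control's XAML.
        panelOffset += e.DeltaManipulation.Translation.X;
        Canvas.SetLeft(imageCanvas, panelOffset);
    }

    private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
    {
        // Snap to the nearest image; replace this with a Storyboard animation for momentum.
        int index = (int)Math.Round(-panelOffset / ImageWidth);
        panelOffset = -index * ImageWidth;
        Canvas.SetLeft(imageCanvas, panelOffset);
    }
}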
