Implementing Document-based Applications with Core Animation Layers (macOS)

I am working on a diagram drawing application that is based on the Core Animation framework. I have a common set of functionality that includes creating diagram objects, editing their geometrical properties, moving them around, etc. Each object is represented as a separate CALayer (or a group of layers).
My application is also document-based which means I am following the design imposed by Cocoa for document management.
Here is an example of what the application looks like:
(Screenshot: http://guitar.rizo.me/views/main.view/image3.png)
Although I do understand the fundamental principles of how things should work, I can't figure out how to make a clean design separation between the model and view implementations.
Is the CALayer class a view class, or can I also treat it as the model (since its properties are the only data the application has)?
What would be the ideal organization for such an application given the document-based architecture?
I can't see any clean way to solve this design problem, what would you recommend?
Thanks in advance.

Related

Nested MVC communication patterns

This is entirely a best-practices question, so the language is irrelevant. I understand the basic principles of MVC, and that there are different, subtle flavors of it (e.g. views having a direct reference to models vs. a data delegate off the controller).
My question is around cross MVC communication, when those MVCs are nested. An example of this would be a drawing program (like Paint or something). The Canvas itself could be an MVC, but so could each drawn entity (e.g. Shapes, Text). From a model perspective, it makes sense for the CanvasModel to have a collection of entities, but should the CanvasView and CanvasController have corresponding collections of entity views and controllers respectively?
Also, what's the best/cleanest way to add a new drawn entity? Say the user has the CircleTool active, they click in the Canvas view and start drawing the shape. The CanvasView could fire relevant mouse down/move/up events that the CanvasController could listen to. The controller could then basically proxy those events to the CircleTool (state pattern). On mouse down, the CircleTool would want to create a new Circle. Should the Tool create a new CircleEntityController outright and call something like canvasController.addEntity(circleController)? Where should the responsibility of creating the Circle's model and view then lie?
Sorry if these questions are somewhat nebulous :)
--EDIT--
Here's a pseudo-codish example of what I'm talking about:
CircleTool {
    ...
    onCanvasMouseDown: function(x, y) {
        // should this tool/service create the new entity's model, view, and controller?
        var model = new CircleModel(x, y);
        var view = new CircleView(model);
        var controller = new CircleController(model, view);

        // should the canvasController's add method take in all 3 components
        // and then add them to their respective endpoints?
        this.canvasController.addEntity(model, view, controller);
    }
    ...
}

CanvasController {
    ...
    addEntity: function(model, view, controller) {
        // this doesn't really feel right...
        this.entityControllers.add(controller);
        this.model.addEntityModel(model);
        this.view.addEntityView(view);
    }
    ...
}
Wow, well I have perhaps a surprising answer to this question: I have a long-standing rant about how MVC is considered this beatific symbol of perfection in programming that no one sees any issues with. A favorite interview question is 'what are some problems, or challenges that you might encounter in MVC?' It's amazing how often the question is greeted with a puzzled, queasy look.
The answer is really quite simple: MVC relies on the notion of multiple consumers having their needs met from a single shared model object. Things really start to go to hell when the various views make different demands. There was an article a few years ago where the authors advanced the notion of Hierarchical MVC. Someone else came on in the comments and told them that what they thought they were inventing already existed: a pattern called Presentation-Abstraction-Controller (PAC). Probably the only place you see this in the pattern literature is in the Buschmann book, sometimes referred to as the Gang of Five, or POSA (Pattern-Oriented Software Architecture). Interestingly, whenever I explain PAC, I use a paint program as the perfect example.
The main difference is that in PAC, the presentation elements tend to have their own models; that's the A of PAC: for each component, you don't have to, but can, have an abstraction. Per some of the other responses here, what happens then is that you have a coordinating controller, in this case one that would rule over the canvas. Let's say we want to add a small view to the side of the canvas that shows the count of the various shapes (e.g. Squares 3, Circles 5, etc.). That component's controller would register with the coordinating controller to listen for two events: elementAdded and elementRemoved. As it received each notification, it would simply update a map it keeps in its own Abstraction. Imagine how absurd it would be to alter a shared model that a bunch of components are using just to add support for such a thing. Furthermore, the person who wrote the ShapesSummary component didn't have to learn about anything but the event protocols, and of course all its interactions with collaborators are immutable.
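To make that concrete, here is a minimal Swift sketch (all type and event names are invented for illustration) of a ShapesSummary component with its own abstraction: it registers with a coordinating controller for elementAdded/elementRemoved events and maintains a private count map, without ever touching a shared canvas model.

import Foundation

// Hypothetical event types broadcast by the coordinating (canvas-level) controller.
enum CanvasEvent {
    case elementAdded(kind: String)
    case elementRemoved(kind: String)
}

// The coordinating controller only knows how to broadcast events to registered listeners.
final class CoordinatingController {
    private var listeners: [(CanvasEvent) -> Void] = []

    func register(_ listener: @escaping (CanvasEvent) -> Void) {
        listeners.append(listener)
    }

    func broadcast(_ event: CanvasEvent) {
        for listener in listeners { listener(event) }
    }
}

// The ShapesSummary component keeps its own abstraction (a count map);
// it never reaches into the shared canvas model.
final class ShapesSummaryController {
    private(set) var counts: [String: Int] = [:]   // this component's Abstraction

    init(coordinator: CoordinatingController) {
        coordinator.register { [weak self] event in
            guard let self = self else { return }
            switch event {
            case .elementAdded(let kind):
                self.counts[kind, default: 0] += 1
            case .elementRemoved(let kind):
                self.counts[kind, default: 0] -= 1
            }
            // A real presentation element would refresh its summary view here.
        }
    }
}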
The hierarchical part is that there can be layers of controllers in PAC: for instance, the coordinating one at the Canvas level would not know about any of the various components with specialized behaviors. We might make a shape that can contain other things, which would need logic for accepting drags, etc. That component would have its own Abstraction and Controller, and that Controller would coordinate its activities with the CanvasController, etc. It might even become necessary at some point for it to communicate with its contained components.
Here's the POSA Book.
There are many different ways to attack this. I don't disagree with the other answers posted here.
However, one way that I've done this is by using the observer pattern. And let me explain why.
The observer pattern is necessary here because these drawing tools are nothing without the canvas. So, if you have no canvas, you can't (or shouldn't) invoke the circle tool. So instead, my canvas has a host of observers on it.
Each tool that can be used on the canvas is added as an observable event. Then, when an event is fired - like "begin draw" - the tool is sent as the context (in this case 'circle'). From there, the actions of the circle tool execute.
Another way to imagine this is that each layer has its own service, model, and view. The controller really is at the exterior level and associated with the canvas. So, services are only called by other services or by a controller. There are no circle tool controllers - so only another service (in our case the observed event) can call it. That service is responsible for aggregating the data, building the model, and supplying a view.
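As a rough illustration of that observer wiring (a sketch only, with hypothetical Canvas and CircleTool types): the canvas holds the observers, fires a "begin draw" event on mouse down, and passes the active tool along as the context.

import CoreGraphics

// Hypothetical tool protocol: each tool reacts when drawing begins on the canvas.
protocol CanvasTool {
    var name: String { get }
    func beginDraw(at point: CGPoint)
}

struct CircleTool: CanvasTool {
    let name = "circle"
    func beginDraw(at point: CGPoint) {
        print("circle tool starts a new circle at \(point)")
    }
}

// The canvas owns the observers; the tools are nothing without it.
final class Canvas {
    private var observers: [(CanvasTool, CGPoint) -> Void] = []
    var activeTool: CanvasTool?

    func addObserver(_ observer: @escaping (CanvasTool, CGPoint) -> Void) {
        observers.append(observer)
    }

    // Fired on "begin draw"; the active tool is passed along as the context.
    func mouseDown(at point: CGPoint) {
        guard let tool = activeTool else { return }   // no active tool, no drawing
        for observer in observers { observer(tool, point) }
    }
}

// Wiring: the observed "begin draw" event simply hands control to the tool.
let canvas = Canvas()
canvas.activeTool = CircleTool()
canvas.addObserver { tool, point in tool.beginDraw(at: point) }
canvas.mouseDown(at: CGPoint(x: 10, y: 20))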
The RenderingService (for lack of a better name; the thing that manages the interaction of shapes) would instantiate a new Circle domain object and inform the view about it (either directly, or when the view requests new data).
It seems that you are still in the habit of dumping all your application logic (the interaction between storage abstractions and domain objects) in the presentation layer (in your case, the controllers).
P.S. I am assuming that you are not talking about HMVC.
If I were you I would opt for the Composite pattern when working with shapes. Regardless of whether you have circles, squares, rectangles, triangles, letters, etc., I would treat everything as a shape. Some shapes can be simple, like lines; other shapes can be more complex composites, like graphs, pie charts, etc. A good idea would be to define a base class from which both basic shapes and advanced (complex) shapes derive. Basic shapes and advanced shapes are the same type of object; it's just that advanced shapes can have children that help define the complex object.
Following this logic, each shape draws itself and each shape knows its own location; based on that, you can apply logic and an algorithm to ask which shape was clicked, and each shape can respond to your event.
According to the GoF book:
Intent
Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.
Motivation
Graphics applications like drawing editors and schematic capture systems let users build complex diagrams out of simple components. The user can group components to form larger components. [...]
The key to the Composite pattern is an abstract class that represents both primitives and their containers. For the graphics system, this class is Graphic. Graphic declares operations like Draw that are specific to graphical objects. It also declares operations that all composite objects share, such as operations for accessing and managing its children.
Now, back to your problem. One of the base functions, as previously mentioned, is a Draw method. This method could be declared as an abstract method in the base class with the following signature:
public virtual void Draw(YourDrawingCanvas canvas);
Each shape would implement its own version of Draw. The argument that you pass in is a reference to the drawing canvas where each shape will draw itself. Each shape could also store its own coordinates in its internal structure, and these can be compared with the mouse-click location to tell you which shape was clicked.
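A compact Swift sketch of that Composite arrangement (the class names here are illustrative, not from any real framework): primitives and containers share one base class, every shape can draw itself and answer a hit test, and composite shapes simply forward both operations to their children.

import CoreGraphics

// Minimal stand-in for the drawing surface, just for this sketch.
protocol DrawingCanvas {
    func strokeEllipse(in rect: CGRect)
}

// Base class shared by primitives and containers (the Composite pattern).
class Shape {
    var frame: CGRect = .zero
    func draw(on canvas: DrawingCanvas) {}
    func hitTest(_ point: CGPoint) -> Shape? {
        frame.contains(point) ? self : nil
    }
}

final class Circle: Shape {
    override func draw(on canvas: DrawingCanvas) {
        canvas.strokeEllipse(in: frame)
    }
}

// A composite shape keeps children and forwards drawing and hit testing to them.
final class Group: Shape {
    private(set) var children: [Shape] = []

    func add(_ child: Shape) {
        frame = children.isEmpty ? child.frame : frame.union(child.frame)
        children.append(child)
    }

    override func draw(on canvas: DrawingCanvas) {
        for child in children { child.draw(on: canvas) }
    }

    override func hitTest(_ point: CGPoint) -> Shape? {
        // The topmost child that contains the point wins; otherwise the group itself.
        for child in children.reversed() {
            if let hit = child.hitTest(point) { return hit }
        }
        return super.hitTest(point)
    }
}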
From my point of view, using nested MVC components is kind of overkill here: at each point in time the model contains multiple elements (different circles, squares, etc., which may be nested constructs using the Composite pattern, as mentioned in another answer). However, the canvas that displays the elements is just a single view!
(And corresponding to that single view, only a single controller would be needed.)
One case where you would have several views is a list of elements (shown, for example, next to the canvas); then you could implement the canvas and the element list as two distinct views on one and the same model.
Regarding the question of how to "best" implement adding an element, I would consider the following sequence of events (sketched in code after the list):
1. The view notifies its listeners that a new circle element has been drawn (with the middle point and an initial radius as parameters, for example).
2. The controller is registered as a listener on the view, so the controller's "draw-circle(point, radius)" listener method is invoked.
3. The controller creates a new circle instance in the model (either directly, or via a factory class which is part of the model; there are lots of different ways of implementing the creation of new elements). The controller is "in control" (literally), so I believe it is the controller's responsibility to instantiate a new element (or at least trigger the instantiation).
4. In the model, some kind of "add element" method is invoked by the previous step.
5. The model raises a "new element created" notification to all of its listeners (probably passing on a reference to the newly created element).
6. The canvas is registered as a listener on the model, so the canvas' "new element created (element)" listener method is invoked.
7. In response to the latter notification, the canvas draws the circle (on itself).
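Here is a small Swift sketch of that sequence, with made-up type and method names; the numbered comments match the steps above.

import CoreGraphics

// Model: owns the elements and notifies listeners when one is created.
final class DiagramModel {
    struct Circle { let center: CGPoint; let radius: CGFloat }
    private(set) var circles: [Circle] = []
    var onElementCreated: ((Circle) -> Void)?

    func addCircle(center: CGPoint, radius: CGFloat) {
        let circle = Circle(center: center, radius: radius)   // step 4: "add element"
        circles.append(circle)
        onElementCreated?(circle)                             // step 5: "new element created"
    }
}

// View: reports gestures to its listener and draws whatever the model announces.
final class CanvasView {
    var onCircleDrawn: ((CGPoint, CGFloat) -> Void)?

    func userFinishedDrawingCircle(center: CGPoint, radius: CGFloat) {
        onCircleDrawn?(center, radius)                        // step 1: notify listeners
    }

    func render(_ circle: DiagramModel.Circle) {
        print("drawing circle at \(circle.center), r=\(circle.radius)")  // step 7
    }
}

// Controller: wires the two together and is the one that triggers model changes.
final class CanvasController {
    init(model: DiagramModel, view: CanvasView) {
        view.onCircleDrawn = { center, radius in              // steps 2-3
            model.addCircle(center: center, radius: radius)
        }
        model.onElementCreated = { [weak view] circle in      // step 6
            view?.render(circle)
        }
    }
}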
This is just an idea, but consider whether the Mediator pattern is applicable.
From the gang of four:
Intent
Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.
Applicability
Use the Mediator pattern when
a set of objects communicate in well-defined but complex ways. The resulting interdependencies are unstructured and difficult to understand.
reusing an object is difficult because it refers to and communicates with many other objects.
a behavior that's distributed between several classes should be customizable without a lot of subclassing.
Consequences
The Mediator pattern has the following benefits and drawbacks:
It limits subclassing. A mediator localizes behavior that otherwise would be distributed among several objects. Changing this behavior requires subclassing Mediator only; Colleague classes can be reused as is.
It decouples colleagues. A mediator promotes loose coupling between colleagues. You can vary and reuse Colleague and Mediator classes independently.
It simplifies object protocols. A mediator replaces many-to-many interactions with one-to-many interactions between the mediator and its colleagues. One-to-many relationships are easier to understand, maintain, and extend.
It abstracts how objects cooperate. Making mediation an independent concept and encapsulating it in an object lets you focus on how objects interact apart from their individual behavior. That can help clarify how objects interact in a system.
It centralizes control. The Mediator pattern trades complexity of interaction for complexity in the mediator. Because a mediator encapsulates protocols, it can become more complex than any individual colleague. This can make the mediator itself a monolith that's hard to maintain.
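As a hedged illustration only, here is a small Swift sketch of how a mediator might sit between colleagues in a drawing app (all names are hypothetical): the tool palette and the drawing surface talk only to the mediator, which centralizes the interaction protocol.

// Colleagues know only their mediator, never each other (Mediator pattern).
protocol CanvasMediator: AnyObject {
    func toolSelected(_ name: String)
    func shapeFinished(_ description: String)
}

final class ToolPalette {
    weak var mediator: CanvasMediator?
    func select(_ name: String) { mediator?.toolSelected(name) }
}

final class StatusBar {
    func show(_ message: String) { print("status: \(message)") }
}

final class DrawingSurface {
    weak var mediator: CanvasMediator?
    var activeTool = "none"
    func finishShape() { mediator?.shapeFinished("\(activeTool) shape") }
}

// The mediator centralizes the interaction protocol between the colleagues.
final class CanvasCoordinator: CanvasMediator {
    let palette = ToolPalette()
    let surface = DrawingSurface()
    let status = StatusBar()

    init() {
        palette.mediator = self
        surface.mediator = self
    }

    func toolSelected(_ name: String) {
        surface.activeTool = name
        status.show("active tool: \(name)")
    }

    func shapeFinished(_ description: String) {
        status.show("added \(description)")
    }
}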
I am assuming MSPaint-like behavior, where the active tool creates a vector graphic glyph that the user can manipulate until he's satisfied. When the user is satisfied, the glyph gets written to the image, which is a raster of pixels.
When the Circle Tool gets selected, I'd have the CanvasController deselect the previously active tool's MVC trio (if another tool was active) and create a new CircleToolController, CircleModel and CircleView. The previously active glyph becomes final and draws itself to the CanvasModel.
The CanvasView will need to be passed a reference to the CircleView so it can draw the CanvasModel's pixels to the screen before the Circle gets drawn. The actual drawing of the circle to the screen, I'd delegate to the CircleView.
The CircleView will therefore need to know about and observe other, more general, model classes besides the CircleModel; I'm thinking of a color selection/palette model, and a model for fill style and line thickness, etc. These other models live as long as the application does and have their own View and Controller. They are quite separate from the rest of the application, after all.
As a sidenote: You could actually split off the drawing of the CanvasModel (the raster of pixel colors) by the CanvasView from the coordination of the updating of the entire screen. Have a higher level PaintView which knows the CanvasView and the active GlyphView (for example the CircleView) coordinate the drawing between the CanvasView and the GlyphView.

Best Practice for laying out images for printing in a WYSIWYG Mac app?

I'm in the concept phase of a Mac application that should let the user easily select and layout images for printing. It's a document-based app and a document can have multiple pages with lots of pictures in different sizes and rotations on it. The UI would kind of be like the UI of Pages.app.
Those pictures can possibly be large hi-res images. The user should also be able to print them in the best quality that the images offer.
I have re-watched some WWDC sessions about Quartz, 2D drawing optimization and NSView.
I know that there are a few different ways of accomplishing what I want to do, namely:
Use a custom view for a "page" and draw the images in drawRect: with Core Graphics/Quartz. Use CG transforms to rotate and scale images.
Also use a custom view for a "page", but use NSImageView-subviews to display the images. Use Core Animation and layer transforms to scale/rotate images.
What is the best practice for this? Drawing with Core Graphics or using NSViews? Why?
Thank you so much!
Johannes
It depends on how interactive these pages should be. If there is a lot of mouse interaction, e.g. dragging, selecting, etc., I'd go with views. If you want fluid animations I'd even use plain CALayers with their contents set to one image each. This would also let you use zPosition to order the images in case they overlap; a view-based solution makes z-ordering hard.
The drawRect: approach should be fastest, but you'll have a hard time integrating user interaction, and you must handle z-ordering manually.
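For what it's worth, a brief sketch of the plain-CALayer approach described above (assuming you already have an NSImage loaded; the helper name and values are made up): each image gets its own layer with contents, a transform, and a zPosition to control overlap.

import AppKit
import QuartzCore

// Hypothetical helper: wrap each image in its own CALayer so it can be moved,
// scaled, rotated, and z-ordered cheaply by Core Animation.
func makeImageLayer(for image: NSImage, frame: CGRect,
                    rotation: CGFloat, zPosition: CGFloat) -> CALayer {
    let layer = CALayer()
    layer.frame = frame
    layer.contents = image.layerContents(forContentsScale: 2.0)  // e.g. a Retina backing scale
    layer.zPosition = zPosition           // controls overlap; no view re-ordering needed
    layer.transform = CATransform3DMakeRotation(rotation, 0, 0, 1)
    return layer
}

// Usage inside a layer-hosting NSView (wantsLayer = true), e.g.:
// pageView.layer?.addSublayer(makeImageLayer(for: image,
//                                            frame: CGRect(x: 40, y: 40, width: 200, height: 150),
//                                            rotation: .pi / 12,
//                                            zPosition: 1))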
This is a reply I got from opening one of my two Apple Technical Support Incidents:
Hi Johannes,
Thanks for contacting Apple DTS regarding your question about printing and the different ways to construct your application's general UI (with views).

There is a trend toward using layer-backed views in OS X (utilizing Core Animation layers), which is motivated by the ability to easily animate your application's user interface, with little work, when needed. However, in terms of printing, you would be better off implementing drawRect for custom views so that the view contents can be drawn at "full resolution" when rendered into the context for printing.

If instead you use layer-backed views and ask those layers to "renderInContext", the layer contents would be used to render, which commonly will not be set to the full resolution of your source documents/images. This is because layer-backed views take additional memory to store those bitmaps (cached layer contents), and because of that, they are recommended to be sized appropriately for the screen (which may not necessarily be sized appropriately for the printed page).

Does this help guide your application architecture? Please let me know.
So basically this means that using layer-backed views might result in sub-optimal printing quality. I've replied with some follow-up questions ("Would setting wantsLayer = NO on the rootView right before printing help?") and will post the answers as soon as I get them.
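Until those answers arrive, here is a rough Swift sketch of what the DTS reply suggests (types and property names are invented): a custom page view that draws its images in draw(_:), so a print operation renders them at full resolution into the printing context instead of reusing screen-sized layer bitmaps.

import AppKit

final class PageView: NSView {
    struct PlacedImage {
        let image: NSImage
        let frame: NSRect
        let rotation: CGFloat      // radians
    }

    var placedImages: [PlacedImage] = []

    override func draw(_ dirtyRect: NSRect) {
        NSColor.white.setFill()
        bounds.fill()

        for placed in placedImages {
            NSGraphicsContext.current?.saveGraphicsState()
            // Rotate around the image's center, then draw into the current context;
            // when printing, that context is the print context, so the original
            // full-resolution NSImage is used rather than a cached layer bitmap.
            let transform = NSAffineTransform()
            transform.translateX(by: placed.frame.midX, yBy: placed.frame.midY)
            transform.rotate(byRadians: placed.rotation)
            transform.translateX(by: -placed.frame.midX, yBy: -placed.frame.midY)
            transform.concat()
            placed.image.draw(in: placed.frame)
            NSGraphicsContext.current?.restoreGraphicsState()
        }
    }
}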
All three approaches should work. Since you should be using scaled-down representations of large images anyway, I don't think there will be much difference. Do what you feel most comfortable doing.
My guess is that just using layer-backed NSViews (one per draggable image) will probably work best for starters. If you find performance lacking, you can always micro-optimize. Note that you may have to make your views a tad larger than the images so you can draw the selection handles outside them.
This is all assuming that you will never want to do more complex drawing.

in a Drawing application I want to use MVC approach but I have some doubts about Model's task

I want to develop a piece of software using the MVC approach. I am familiar with MVC and how to implement it, especially in database programs, but here is my doubt:
I want to create a graphical application on the iPhone, where in this case I don't really have any choice except MVC, but implementing 100% MVC is sometimes hard and the rules can easily be violated.
I have put my drawing function (the calculation) inside the view.
I have a controller, as usual, which is responsible for calling the subview (V) and my main class (M).
And my main class (M) doesn't do much for me; it only stores some numbers and variables.
This is where my doubt started:
Do I need to move the calculation part of the drawing to the model? The calculation part currently resides in the view, and the reason for that is that I need to access some properties of the view, like its height and width, etc.
So I decided to put the calculation and drawing inside the view.
Please help me clarify this problem; I want to practice software engineering using MVC, and this is a kind of self-training.
I see this as a design issue that you can decide yourself. You can either say that image width/height are part of the picture, in which case all image attributes would be returned as absolute X and Y coordinates; or you could say that the image is 100% scalable, let the view determine the size it is drawn into, and keep the calculation in the view.
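A tiny Swift sketch of the second option (names invented here): the model stores geometry in unit coordinates, and the view-side calculation converts them to absolute coordinates using its own width and height, so the same model works at any size.

import CoreGraphics

// Model: stores geometry in unit coordinates (0...1), independent of any view size.
struct DotModel {
    var unitPosition = CGPoint(x: 0.25, y: 0.75)
    var unitRadius: CGFloat = 0.1
}

// View-side calculation: converts the scalable model into absolute coordinates
// using the view's current width/height, so this math stays out of the model.
func circleRect(for dot: DotModel, inViewOfSize size: CGSize) -> CGRect {
    let center = CGPoint(x: dot.unitPosition.x * size.width,
                         y: dot.unitPosition.y * size.height)
    let radius = dot.unitRadius * min(size.width, size.height)
    return CGRect(x: center.x - radius, y: center.y - radius,
                  width: radius * 2, height: radius * 2)
}

// Example: the same model drawn into two differently sized views.
let dot = DotModel()
let small = circleRect(for: dot, inViewOfSize: CGSize(width: 100, height: 100))
let large = circleRect(for: dot, inViewOfSize: CGSize(width: 400, height: 300))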

How are the classes in the Sketch example AppKit application separated into Models/Views/Controllers?

I am a little confused as to the definition of classes as Models or Views in the Sketch example AppKit application (found at /Developer/Examples/AppKit/Sketch). The classes SKTRectangle, SKTCircle etc. are considered Model classes but they have drawing code.
I am under the impression that Models should be free of any view/drawing code.
Can someone clarify this?
Thanks.
The developer of Sketch outlines his/her rationale a bit in the ReadMe file:
Model-View-Controller Design
The Model layer of Sketch is mainly the SKTGraphic class and its subclasses. A Sketch Document is made up of a list of SKTGraphics. SKTGraphics are mainly data-bearing classes. Each graphic keeps all the information required to represent whatever kind of graphic it is. The SKTGraphic class defines a set of primitive methods for modifying a graphic and some of the subclasses add new primitives of their own. The SKTGraphic class also defines some extended methods for modifying a graphic which are implemented in terms of the primitives.
The SKTGraphic class defines a set of methods that allow it to draw itself. While this may not strictly seem like it should be part of the model, keep in mind that what we are modelling is a collection of visual objects. Even though a SKTGraphic knows how to render itself within a view, it is not a view itself.
I don't know if that is a satisfying answer for you or not. My personal experience with MVC is that while separation of model, view and controller is a "good thing" oftentimes in practice the lines between layers become blurred. I think compromises of design are frequently made for convenience.
For Sketch in particular, it makes sense to me that the model knows how to draw itself inside a view. The alternative to having each SKTGraphic subclass know how to draw itself would be to have a view class with knowledge of each SKTGraphic subclass and how to render it. In that case, adding a new SKTGraphic subclass would require editing the view class (likely adding a new clause to an if/else or switch statement). With the current design, an SKTGraphic subclass can be added with no changes to the view classes required to get things working.
I'm not aware of the particular example you're referring to, but here's my guess on the reason for that design:
Perhaps the Models (the SKTRectangle, SKTCircle that you mention) know enough to draw themselves but not enough to actually perform the drawing to the screen. The drawing to the screen is handled by the View, where the View will call the Models to find out how to draw them on the screen.
By taking this approach, the View won't have to know how to draw every single Model that it may encounter -- the View only needs to know how to ask the Model to draw itself on the screen.
I'm thinking that it's a trade-off between the MVC model and the object-oriented programming model: strictly separating along the lines of MVC would mean that the View becomes extremely large and not very flexible when it comes to adding support for other Models that need to be displayed. With object-oriented design, we want the Models themselves to be able to draw themselves on the screen, and we want the View to be able to handle new types of Models through facilities such as interfaces.
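To illustrate that trade-off, here is a hedged Swift sketch (illustrative types, not the real SKT classes): each graphic model renders itself when asked, so the view stays ignorant of concrete graphic types and new graphics require no view changes.

import AppKit

// Each "graphic" model knows how to render itself into whatever view asks it to.
protocol Graphic {
    var bounds: NSRect { get }
    func draw(in view: NSView)
}

struct RectangleGraphic: Graphic {
    let bounds: NSRect
    func draw(in view: NSView) { NSBezierPath(rect: bounds).stroke() }
}

struct CircleGraphic: Graphic {
    let bounds: NSRect
    func draw(in view: NSView) { NSBezierPath(ovalIn: bounds).stroke() }
}

// The view never switches on concrete graphic types; adding a new Graphic
// needs no changes here (the alternative would be a type switch in draw(_:)).
final class GraphicsView: NSView {
    var graphics: [Graphic] = []

    override func draw(_ dirtyRect: NSRect) {
        NSColor.black.setStroke()
        for graphic in graphics { graphic.draw(in: self) }
    }
}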

How to make my applications "skinnable"?

Is there some standard way to make my applications skinnable?
By "skinnable" I mean the ability of the application to support multiple skins.
I am not targeting any particular platform here. Just want to know if there are any general guidelines for making applications skinnable.
It looks like skinning web applications is relatively easy. What about desktop applications?
Skins are just Yet Another Level Of Abstraction (YALOA!).
If you read up on the MVC design pattern then you'll understand many of the principles needed.
The presentation layer (or skin) only has to do a few things:
Show the interface
Trigger actions when certain user actions are taken (clicking, entering text in a box, etc.)
Receive notices from the model and controller when it needs to change
In a normal program this abstraction is done by having code which connects the text boxes to the methods and objects they are related to, and having code which changes the display based on the program commands.
If you want to add skinning you need to take that ability and make it so that it can be done without compiling the code again.
Check out, for instance, XUL and see how it's done there. You'll find a lot of skinning projects use XML to describe the various 'faces' of the skin (playing music, or organizing the library, in the case of an MP3 player skin), and then where each control is located and what data and methods it should be attached to in the program.
It can seem hard until you do it, then you realize it's just like any other level of abstraction you've dealt with before (from a program with gotos, to control structures, to functions, to structures, to classes and objects, to JIT compilers, etc).
The initial learning curve isn't trivial, but do a few projects and you'll find it's not hard.
-Adam
Keep all your styles in one or more separate CSS files
Stay away from any inline styling
It really depends on how "skinnable" you want your apps to be. Letting the user configure colors and images is going to be a lot easier than letting them hide/remove components or even write their own components.
For most cases, you can probably get away with writing some kind of Resource Provider that serves up colors and images instead of hardcoding them in your source file. So, this:
Color backgroundColor = Color.BLUE;
Would become something like:
Color backgroundColor = ResourceManager.getColor("form.background");
Then, all you have to do is change the mappings in your ResourceManager class and all clients will be consistent. If you want to do this in real-time, changing any of the ResourceManager's mappings will probably send out an event to its clients and notify them that something has changed. Then the clients can redraw their components if they want to.
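One possible shape for that idea, sketched in Swift with invented names: colors are looked up by key, and swapping in a new mapping posts a notification so clients can redraw.

import AppKit

// Hypothetical skin-aware resource provider: clients ask for colors by key
// instead of hardcoding them, and get notified when the skin changes.
final class ResourceManager {
    static let shared = ResourceManager()
    static let skinDidChange = Notification.Name("ResourceManagerSkinDidChange")

    private var colors: [String: NSColor] = [
        "form.background": .blue,
        "form.text": .white,
    ]

    func color(_ key: String) -> NSColor {
        colors[key] ?? .magenta   // loud fallback for missing keys
    }

    // Swap in a whole new mapping (e.g. loaded from a skin file) at runtime.
    func applySkin(_ newColors: [String: NSColor]) {
        colors = newColors
        NotificationCenter.default.post(name: Self.skinDidChange, object: self)
    }
}

// A client would observe the notification and redraw, e.g.:
// NotificationCenter.default.addObserver(forName: ResourceManager.skinDidChange,
//                                        object: nil, queue: .main) { _ in
//     someView.needsDisplay = true
// }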
Implementation varies by platform, but here are a few general cross-platform considerations:
It is good to have an established overall layout into which visual elements can be "plugged." It's harder (but still possible) to support completely different general layouts through skinning.
Develop a well-documented naming convention for the assets (images, HTML fragments, etc.) that comprise a skin.
Design a clean way to "discover" existing skins and add new ones. For example: Winamp uses a ZIP file format to store all the images for its skins. All the skin files reside in a well-known folder off the application folder.
Be aware of scaling issues. Not everyone uses the same screen resolution.
Are you going to allow third-party skin development? This will affect your design.
Architecturally, the Model-View-Controller pattern lends itself to skinning.
These are just a few things to be aware of. Your implementation will vary between web and fat client, and by your feature requirements. HTH.
The basic principle is that used by CSS in web pages.
Rather than ever specifying the formatting (colour / font / layout[to some extent]) of your content, you simply describe what kind of content it is.
To give a web example, in the content for a blog page you might mark different sections as being an:
Title
Blog Entry
Archive Pane
etc.
The Entry might be made of several subsections such as "heading", "body" and "timestamp".
Then, elsewhere you have a stylesheet which specifies all the properties of each kind of element: size, alignment, colour, background, font, etc. When rendering the page or drawing/initialising the components in your UI, you always consult the current stylesheet to look up these properties.
Then skinning, and indeed editing your design, becomes MUCH easier. You simply create a different stylesheet and tweak the values to your heart's content.
Edit:
One key point to remember is the distinction between a general style (like classes in CSS) and a specific style (like ID's in CSS). You want to be able to uniquely identify some items in your layout, such as the heading, as being a single identifiable item that you can apply a unique style to, whereas other items (such as an entry in a blog, or a field in a database view) will all want to have the same style.
It's different for each platform/technology.
For WPF, take a look at what Josh Smith calls structural skinning: http://www.codeproject.com/KB/WPF/podder2.aspx
This should be relatively easy; follow these steps:
Strip out all styling for your entire web application or website
Use CSS to change the way your app looks.
For more information, visit CSS Zen Garden for ideas.
You shouldn't. Or at least you should ask yourself if it's really the right decision.
Skinning breaks the UI design guidelines. It "jars" the user because your skinned app operates and looks totally different from all the other apps they're using. Things like command shortcut keys won't be consistent, and they'll lose productivity. It will also be less accessible, because screen readers will have a harder time understanding it.
There are a ton of reasons NOT to skin. If you just want to make your application visually distinct, that's a poor reason in my opinion. You will make your app harder to use and less likely that people will ever go beyond the trial period.
Having said all that, there are some classes of apps where skinning is basically expected, such as media players and immersive full-screen games. But if your app isn't in a class where skinning is largely mandated, I would seriously consider finding other ways to make your app better than your competition.
Depending on how deep you wish to dig, you can opt to use a 'formatting' framework (e.g. Java's PLAF, the web's CSS), or an entirely decoupled multiple tier architecture.
If you want to define a pluggable skin, you need to consider that from the very beginning. The presentation layer knows nothing about the business logic but its API, and vice versa.
It seems most of the people here refer to CSS, as if it's the only skinning option.
Windows Media Player (and Winamp, AFAIR) use XML as well as images (if necessary) to define a skin.
The XML references hooks, events, etc. and handles how things look and react. I'm not sure how they handle the back end, but loading a given skin is really as simple as locating the appropriate XML file, loading the images, then placing them where they need to go.
XML also gives you far more control over what you can do (i.e. create new components, change component sizes, etc.).
XML combined with CSS could give wonderful results for a skinning engine of a desktop or web application.

Resources