How can I use CGAffineTransformMakeScale in Cocoa? On the iPhone I do it like this:
something.transform = CGAffineTransformMakeScale(2, 2);
But how can I do it on the Mac?
CGAffineTransform, including all the related helper functions, works the same on Mac OS X as on iOS.
You do need to be linking against the Core Graphics or Application Services framework, and importing the header for whichever one you link against.
An NSView doesn't have a transform property like a UIView has, so if something was a UIView in your example, you will have to send your NSView a scaleUnitSquareToSize: message instead.
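As a rough sketch of that suggestion (assuming `someView` is a hypothetical NSView instance in your code):

```objc
#import <Cocoa/Cocoa.h>

// Sketch: scale an NSView's coordinate system by 2x, roughly the AppKit
// counterpart of setting a 2x CGAffineTransform on a UIView.
// Note that scaleUnitSquareToSize: is cumulative across calls.
[someView scaleUnitSquareToSize:NSMakeSize(2.0, 2.0)];
[someView setNeedsDisplay:YES];
```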
Your question is unclear. The CGAffineTransformMakeScale function is available on the Mac and it works in exactly the same way as on iOS. It is part of the Application Services framework, so you will need to add that framework to your project and import it with:
#import <ApplicationServices/ApplicationServices.h>
If on the other hand you're referring to view transforms, then on the Mac, NSView objects are not layer-backed by default and do not have a transform property.
If you make the view layer-backed then you can access the view's CALayer object via the layer property of the view, and you can then apply a transform to that:
aView.layer.transform = CATransform3DMakeScale(2.0, 2.0, 1.0);
Note that both iOS and Mac OS X use CATransform3D structures for their layer transform property. If you want to set the layer transform to a CGAffineTransform then you need to use the ‑setAffineTransform: method of CALayer.
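Putting those two steps together, a minimal sketch (assuming `aView` is your NSView) might look like:

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: make the view layer-backed, then apply a plain
// CGAffineTransform to its backing layer via -setAffineTransform:.
[aView setWantsLayer:YES];
[aView.layer setAffineTransform:CGAffineTransformMakeScale(2.0, 2.0)];
```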
I am wondering what type of drawing canvas is used by SketchApp, PaintCode or Monodraw ...
Image View, OpenGL View, a Custom View ?
I like the fact that we can zoom, translate and select object in this canvas (but I guess it's handmade features).
So, what do you think is the best way to achieve this in Cocoa ?
As for PaintCode, we use NSScrollView with OpenGL view inside for custom multithreaded tiling. The actual content is drawn using CoreGraphics, so what you see in PaintCode is what you get in your app.
I directly asked Monodraw developers and they told me :
The grid is an NSScrollView with custom rulers.
:)
You can check their ASCII art editor here : https://monodraw.helftone.com/
I have a UITextField in a UIToolbar and it doesn't seem to want to stretch from end to end like it does on an iPhone. So this is what I have for the w:Any h:Any size class.
That works just fine on iPhone devices. But when I align it in the w:Regular h:Regular size class for iPad, it completely ignores it on the device.
In the preview it looks fine, but once it is on a device (iPad) the UITextField is tiny; it's only as wide as the one on an iPhone.
Judging by the information you provided in your comment, it sounds like you've set the constraints to fixed constants. I would suggest setting your constraints up using ratios and multipliers instead. This makes sure the constrained objects are sized relative to the superview (or rather, a percentage of it).
Check out the following links for more information:
This SketchyTech Tutorial
This Make App Pie Tutorial
This Ray Wenderlich Tutorial
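As a hedged sketch of the multiplier approach (names are illustrative, not from your project): pin the text field's width to a fraction of its superview's width so it stretches on both iPhone and iPad.

```objc
#import <UIKit/UIKit.h>

// Sketch: constrain a text field's width to 90% of its superview's
// width using a multiplier rather than a fixed constant.
textField.translatesAutoresizingMaskIntoConstraints = NO;
NSLayoutConstraint *widthConstraint =
    [NSLayoutConstraint constraintWithItem:textField
                                 attribute:NSLayoutAttributeWidth
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:textField.superview
                                 attribute:NSLayoutAttributeWidth
                                multiplier:0.9
                                  constant:0.0];
[textField.superview addConstraint:widthConstraint];
```

The same effect can be achieved in Interface Builder by editing a width constraint and setting its Multiplier field instead of its Constant.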
I'm developing a plugin that implements an image viewer application using FireBreath on Mac OS X; the image is located in the local filesystem. I have the following code snippet:
Please help me implement the getDrawingPrimitive function.
FB::PluginWindowMac *wnd = dynamic_cast<FB::PluginWindowMac*>(win);
wnd->getDrawingPrimitive();
/* code related to OpenGL */
CALayer *layer = [CALayer new];
[layer setContents:(id)[ImageHandler setImageWithURL:@"somePath.jpg"]];
setContents receives the argument of type (CGImageRef).
Is this the right way to set the image to a CALayer object?
On calling wnd->StartAutoInvalidate(sometimelapse), FireBreath makes a call to the onDraw function, depending on the drawing model and event model selected. There the window context, of type CGContextRef, is obtained; this context can then be used to draw whatever is required. To draw an image, convert it to a CGImageRef and draw it into the context with CGContextDrawImage.
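A minimal sketch of that drawing step (assuming `ctx` is the CGContextRef obtained from the plugin window, `bounds` is the target CGRect, and `image` is the CGImageRef from your image loader):

```objc
#import <ApplicationServices/ApplicationServices.h>

// Sketch: draw a CGImageRef into the window's CGContextRef from onDraw.
CGContextSaveGState(ctx);
// Core Graphics uses a flipped coordinate system relative to most image
// data; flip vertically so the image is not drawn upside down.
CGContextTranslateCTM(ctx, 0, bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, bounds, image);
CGContextRestoreGState(ctx);
```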
Hey all, I'm trying to make a very interactive UI with lots of animations and effects.
But I don't know if:
Core Graphics can support user interaction (touches, drags, etc.)
Core Graphics supports object rotation
Core Graphics can interact with UIKit and Core Animation in any way
Thanks!
Assuming you are talking about the iPhone, not the Mac (because you mention touches).
1) Core Graphics is mostly associated with drawing. User interaction comes in via your view and its touches* methods. Within the view's drawRect: method, you can use Core Graphics to do custom drawing.
2) Yes, you can get rotation, but the easiest way is to set the view's transform property using CGAffineTransformMakeRotation. Drop down to the layer and you can even use 3D transforms (which is, I think, how effects like Cover Flow are done).
3) See #1.
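To illustrate points 1 and 2 together, here is a hedged sketch of a custom UIView (`ScribbleView` is an invented name) that draws with Core Graphics in drawRect: and reacts to touches:

```objc
#import <UIKit/UIKit.h>

// Sketch: a custom view that does Core Graphics drawing in -drawRect:
// and handles user interaction via the touches* methods.
@interface ScribbleView : UIView
@property (nonatomic) CGPoint lastTouch;
@end

@implementation ScribbleView
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    // Draw a dot wherever the user last touched.
    CGContextFillEllipseInRect(ctx, CGRectMake(self.lastTouch.x - 5,
                                               self.lastTouch.y - 5,
                                               10, 10));
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.lastTouch = [[touches anyObject] locationInView:self];
    [self setNeedsDisplay];  // trigger a Core Graphics redraw
}
@end
```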
In Core Animation or in AppKit, when we say a view is layer-backed, or we simply add a layer to a view, what do we actually mean by "the layer"?
A simple Google search:
The CALayer is the canvas upon which everything in Core Animation is painted. When you define movement, color changes, image effects, and so on, those are applied to CALayer objects. From a code perspective, CALayers are a lightweight representation similar to an NSView. In fact, NSView objects can be manipulated via their CALayer. This is referred to as being layer-backed.
A CALayer is an object which manages and draws upon a GL surface, and can manipulate that surface's location in three dimensions, without needing to redraw its contents.
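A brief sketch of the two ways a layer appears in AppKit (`someView` and `otherView` are hypothetical NSView instances):

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Layer-backed: AppKit creates and manages the backing CALayer for you;
// you keep drawing in the view as usual.
[someView setWantsLayer:YES];

// Layer-hosting: you create the layer yourself and hand it to the view,
// then manipulate the layer directly instead of drawing in the view.
// Note: call setLayer: before setWantsLayer: for a layer-hosting view.
CALayer *hostedLayer = [CALayer layer];
hostedLayer.backgroundColor = [NSColor blueColor].CGColor;
[otherView setLayer:hostedLayer];
[otherView setWantsLayer:YES];
```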