Using the many drawing functions in Cocoa and Quartz, it's rather easy to draw paths and fill them with a gradient. What I can't seem to find an acceptable way to do, however, is stroke a path with a line width of a few pixels and fill that stroke with a gradient. How is this done?
Edit: Apparently the question wasn't clear enough. Thanks for the responses so far, but I already figured that out. What I want to do is this:
[image] (source: emle.nl)
The left square is an NSGradient drawn inside a path, followed by a stroke message on the path. The right square is what I want to do: fill the stroke itself with the gradient.
If you convert the NSBezierPath to a CGPath, you can use the CGContextReplacePathWithStrokedPath() function to retrieve a path that is the outline of the stroked path. Graham Cox's excellent GCDrawKit has a -strokedPath category method on NSBezierPath that will do this for you without needing to drop down to Core Graphics.
Once you have the outlined path, you can fill that path with an NSGradient.
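A minimal sketch of that approach in plain Core Graphics, assuming a custom NSView subclass; createPath is a hypothetical helper returning a CGPathRef, and the colors and line width are placeholders:

- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef context = [[NSGraphicsContext currentContext] CGContext];

    CGContextSaveGState(context);
    CGContextAddPath(context, [self createPath]);  // hypothetical path-building helper
    CGContextSetLineWidth(context, 4.0);
    CGContextReplacePathWithStrokedPath(context);  // current path becomes the stroke outline
    CGContextClip(context);                        // confine all further drawing to that outline

    // Fill the clipped region -- i.e. only the stroke -- with the gradient.
    NSGradient *gradient = [[NSGradient alloc] initWithStartingColor:[NSColor blueColor]
                                                         endingColor:[NSColor redColor]];
    [gradient drawInRect:self.bounds angle:0.0];
    CGContextRestoreGState(context);
}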
What I can't seem to find an acceptable way to do, however, is stroke a path with a line width of a few pixels and fill that stroke with a gradient. How is this done?
Ah, I see. You want to apply the gradient to the stroke.
To do that, you use a blend mode. I explained how to do this in an answer on another question. Here's the list of steps, adapted to your goal:
Begin a transparency layer.
Stroke the path with any non-transparent color.
Set the blend mode to source in.
Draw the gradient.
End the transparency layer.
Following Peter Hosey's answer, I managed to draw a simple gradient curve, which looks like this:
I did this in the drawRect(_:) method of a UIView subclass with the code below:
override func drawRect(rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()

    // 1. Begin a transparency layer.
    CGContextBeginTransparencyLayer(context, nil)

    // 2. Stroke the path with any non-transparent color.
    let path = createCurvePath()
    UIColor.blueColor().setStroke()
    path.stroke()

    // 3. Set the blend mode to source in.
    CGContextSetBlendMode(context, .SourceIn)

    // 4. Draw the gradient; it now only shows where the stroke was drawn.
    let colors = [UIColor.blueColor().CGColor, UIColor.redColor().CGColor]
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let colorLocations: [CGFloat] = [0.0, 1.0]
    let gradient = CGGradientCreateWithColors(colorSpace, colors, colorLocations)
    let startPoint = CGPoint(x: 0.0, y: rect.size.height / 2)
    let endPoint = CGPoint(x: rect.size.width, y: rect.size.height / 2)
    CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, .DrawsBeforeStartLocation)

    // 5. End the transparency layer.
    CGContextEndTransparencyLayer(context)
}
The createCurvePath() function returns a UIBezierPath object. I've also set path.lineWidth to 5 points.
I'm developing a magnifying-glass-like application for the Mac. My goal is to be able to pinpoint individual pixels when zoomed in. I'm using this code in mouseMoved(with event: NSEvent):
let captureSize = self.frame.size.width / 9 //9 is the scale factor
let screenFrame = (NSScreen.main()?.frame)!
let x = floor(point.x) - floor(captureSize / 2)
let y = screenFrame.size.height - floor(point.y) - floor(captureSize / 2)
let windowID = CGWindowID(self.windowNumber)
cgImageExample = CGWindowListCreateImage(
    CGRect(x: x, y: y, width: captureSize, height: captureSize),
    CGWindowListOption.optionOnScreenBelowWindow,
    windowID,
    CGWindowImageOption.bestResolution)
The CGImage is created by the CGWindowListCreateImage call. When I later draw it in an NSView, the result looks like this:
It looks blurred, as if some anti-aliasing was applied during the creation of the CGImage. My goal is to get a razor-sharp representation of each pixel. Can anyone point me in the right direction?
Ok, I figured it out. It was a matter of setting the interpolation quality to none on the drawing context:
context.interpolationQuality = .none
Result:
On request some more code:
// Get the drawing context.
guard let context = NSGraphicsContext.current()?.cgContext else { return }

// Get the CGImage.
let image: CGImage = … // pass the result from the CGWindowListCreateImage call

// Disable interpolation so each captured pixel stays sharp, then draw.
context.interpolationQuality = .none
context.draw(image, in: /* CGRect of choice */)
I've been shooting in the dark with all the GL functions, to no avail, on what I'm sure has to be pretty simple. I'm new to this.
I want to take a sprite and simply darken it. My thought was to load the sprite, add a grey layer on top of it (using CCLayerColor with a grey color), apply some kind of GL function, and then grab the output and display it on screen. However, in every variant I tried where the source image was correctly darkened, the transparency around it was also affected, showing the grey. I need the darkening effect to be masked to the shape of the source sprite.
Here's the code I have so far. How can I correctly mask the darkening effect?
CCSprite* sprite = [CCSprite spriteWithSpriteFrameName: mod.backgroundImagePath];
CCLayerColor* tint = [CCLayerColor node];
[tint setColor:ccc3(205, 205, 205)];
//[tint setBlendFunc: (ccBlendFunc){GL_SRC_COLOR, GL_ONE_MINUS_DST_ALPHA}]; Need help here?
[sprite addChild:tint];
CCRenderTexture* rt = [CCRenderTexture renderTextureWithWidth:sprite.boundingBox.size.width height:sprite.boundingBox.size.height];
[rt begin];
[sprite visit];
[rt end];
You can just set the color property of CCSprite (which is inherited from CCNode):
CCSprite* sprite = [CCSprite spriteWithSpriteFrameName:@"foobar"];
sprite.color = [CCColor colorWithRed:r green:g blue:b alpha:a];
Where the r, g, b, a variables are the color you want to use, normalized between 0 and 1. For example, setting them to 0, 0, 0, 1 will make the sprite completely black.
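For instance, to darken a sprite to 80% of its original brightness while leaving its transparent padding untouched (the tint is multiplied with the texture, so fully transparent pixels stay transparent):

// Values are illustrative; pick whatever darkening factor you need.
sprite.color = [CCColor colorWithRed:0.8f green:0.8f blue:0.8f alpha:1.0f];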
I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
The problem is that I need to do exactly the opposite: I want everything outside of the specified rectangle to be displayed, and everything inside to be cut out.
In my test the clipping rect is based on the position of a sprite, which updates every frame, so a new clipping rect will need to be defined every frame as well.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocates objects internally, but I'm not using ARC (the project is too big and complex to migrate to ARC), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask ends up covering the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what cocos2d provides out of the box.
Define a CCClippingNode with a stencil, then set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites to the clipping node so that they are taken into account.
@implementation ClippingTestScene
{
    CCClippingNode *_clip;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2 , self.boundingBox.size.height/2);
[self addChild:_clip];
_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work, but Cocos will spit out what you need to do in the console.
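For reference, the flag it complains about is the depth/stencil buffer: CCClippingNode needs a stencil-capable GL context. In cocos2d 3.x that is requested at startup, along these lines (a sketch only; the exact key and constant depend on your cocos2d and GL ES version, so treat them as assumptions and prefer whatever the console message tells you):

// In your CCAppDelegate subclass, when setting up cocos2d:
[self setupCocos2dWithOptions:@{
    CCSetupDepthFormat: @GL_DEPTH24_STENCIL8_OES, // combined depth + stencil buffer
}];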
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So for you it may work like this: if you know the rectangular area that you don't want to show (i.e. the area to invert) and you know the screen area, you can subtract the rectangle from the screen area, which gives you the inverse area. I haven't done this myself, but a sketch of the idea follows.
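Here is that subtraction idea spelled out (untested, and it assumes the CCScissorNode class linked above): the inverse of a rectangle within the screen is just the four rectangles surrounding it, so you can scissor four copies of the content.

CGRect hole = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
// The complement of `hole` within the screen, as four rects.
CGRect parts[4] = {
    CGRectMake(0, 0, SCREENWIDTH, CGRectGetMinY(hole)),                                                   // below
    CGRectMake(0, CGRectGetMaxY(hole), SCREENWIDTH, SCREENHEIGHT - CGRectGetMaxY(hole)),                  // above
    CGRectMake(0, CGRectGetMinY(hole), CGRectGetMinX(hole), hole.size.height),                            // left
    CGRectMake(CGRectGetMaxX(hole), CGRectGetMinY(hole), SCREENWIDTH - CGRectGetMaxX(hole), hole.size.height) // right
};
for (int i = 0; i < 4; i++) {
    CCScissorNode *part = [CCScissorNode scissorNodeWithRect:parts[i]];
    // Each scissor node needs its own copy of the clipped content,
    // since a node can only have one parent.
    [self addChild:part];
}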
I am using a kineticjs regular polygon (a hexagon in this case) and I am filling it with an image using setFillPatternImage. This is working. I'm creating a dynamic implementation so I need to scale the source image depending on the current size of the polygon. This involves calculating the setFillPatternOffset and the setFillPatternScale since the dimensions of a regular polygon are relative to the center. There is no clear documentation that I can find regarding the reference point for the fill image, nor whether the scaling factor should use the radius as a proxy for the width and height ratios or not. The following code results in a misplaced image on the polygon. Anyone know what the alignment rules are for fillPatternImage?
imageObj.onload = function() {
    var whex = hexagon.getRadius() * 2;
    var xratio = whex / imageObj.width;
    var yratio = whex / imageObj.height;
    hexagon.setFillPatternImage(imageObj);
    hexagon.setFillPatternOffset(-whex/2, -whex/2);
    hexagon.setFillPatternScale([xratio, yratio]);
};
Thanks!
Looks like I was over-thinking this. There is no need to use the width of the destination polygon when setting the offset; KineticJS handles the scaling of the offset for you. As a result, you simply set the offset from the image dimensions:
hexagon.setFillPatternOffset(-imageObj.width/2, -imageObj.height/2);
I want to change the perspective of a UIView in my view controller. I think I have to transform the view's layer, but I don't know how.
I've tried the following code, but it isn't working:
UIView *myView = [[self subviews] objectAtIndex:0];
CALayer *layer = myView.layer;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = 1.0 / -500;
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, 45.0f * M_PI / 180.0f, 0.0f, 1.0f, 0.0f);
layer.transform = rotationAndPerspectiveTransform;
I've also tried with the following code:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform.b = -0.1;
    transform.a = 0.9;
    CGContextConcatCTM(ctx, transform);
    // do drawing on the context
}
and this too:
CALayer *layerA = [[self.view.layer sublayers] objectAtIndex:0];
layerA.transform = CATransform3DConcat(layerA.transform, CATransform3DMakeRotation(DEGREES_TO_RADIANS(45), 1.0, 0.0, 0.0));
None of them worked. How can I change the perspective of a UIView?
To give an example: imagine the rotating pie from the RotationPie sample. I would like to change its perspective on the x or z axis.
Your first solution works on my end. It appears like this:
Can you show your whole class code, if it doesn't work the same on your end?
EDIT
OK, I've reconfigured the provided code example to show how it is possible:
(download the updated code example here: http://www.speedyshare.com/dz469/download/Wheel-demo.zip)
And it looks like this:
I am only applying the transformation to the base subview. All views that are subviews of that view will be transformed as well. If you want a particular subview to have a different transformation, it gets harder, because you must then take the parent view's transformation into account when calculating the new one, and that can get really difficult.
But I've done some simple multi-level view transformations. For example, to achieve an effect where a view scales, moves, and rotates (see the sketch after this list):
I applied a movement transformation to the parentView;
I applied a rotation transformation to the parentView's first subview;
I applied a scale transformation to the parentView's first subview's subview.
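A sketch of that layered setup (the view names are placeholders; each hierarchy level carries exactly one transformation, and children inherit their parents' transforms automatically):

parentView.transform = CGAffineTransformMakeTranslation(100.0, 0.0);       // move
subview.transform    = CGAffineTransformMakeRotation(45.0 * M_PI / 180.0); // rotate
subSubview.transform = CGAffineTransformMakeScale(1.5, 1.5);               // scale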
EDIT
OK, I've reconfigured the provided code example again, this time to show how to leave the wheel in its transformed position:
(download the updated code example here: http://www.speedyshare.com/5d8Xq/download/Wheel-demo2.zip)
The problem was that, in this case, I was adding the transformation to the wheel itself, and it appears that the wheel is based on transformations as well. Therefore, when you touched it, it replaced the existing transformation and applied its own (to rotate the arrows when the user swipes the wheel).
So, to leave it in perspective while we interact with it, we need another layer of views.
I created a new view (let's call it the parent view) and added the wheel as a subview of it.
Then I apply the transformation to the parent view instead of the wheel, and it works!
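A minimal sketch of that parent-view trick (wheelView stands in for the wheel from the demo project):

// Wrap the wheel in a container and apply the perspective to the container's
// layer; the wheel's own rotation transforms then stay intact.
UIView *parentView = [[UIView alloc] initWithFrame:wheelView.frame];
[self.view addSubview:parentView];
wheelView.frame = parentView.bounds;
[parentView addSubview:wheelView];

CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = 1.0 / -500.0;
perspective = CATransform3DRotate(perspective, 45.0f * M_PI / 180.0f, 0.0f, 1.0f, 0.0f);
parentView.layer.transform = perspective;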
I hope this helps, and that you now understand more about transformations :)