I generate the scene above using the OnDraw method below:
protected override void OnDraw(SKCanvas canvas, int width, int height)
{
    int i = 0;
    int step = 0;
    List<SKRect> rects = new List<SKRect>();

    // get the 2D equivalent of the 3D matrix
    var rotationMatrix = rotationView.Matrix;

    // get the properties of the rectangle
    var length = Math.Min(width / 6, height / 6);

    canvas.Clear(EffectMedia.Colors.XamarinLightBlue);

    foreach (var n in numbers)
    {
        var rect = new SKRect(0 + step, 0, 100 + step, 100);
        rects.Add(rect);
        step += 120;
    }

    //var sideHoriz = rotationMatrix.MapPoint(new SKPoint(0, 1)).Y > 0;
    var sideVert = rotationMatrix.MapPoint(new SKPoint(1, 0)).X > 0;

    var paint = new SKPaint
    {
        Color = sideVert ? EffectMedia.Colors.XamarinPurple : EffectMedia.Colors.XamarinGreen,
        Style = SKPaintStyle.Fill,
        IsAntialias = true
    };

    // first do 2D translation to the center of the screen
    canvas.Translate((width - (120 * numbers.Count)) / 2, height / 2);

    // The following line is disabled because it makes the whole canvas rotate!
    // canvas.Concat(ref rotationMatrix);

    foreach (var n in numbers)
    {
        canvas.RotateDegrees((float)-3);
        canvas.DrawRoundRect(rects[i], 30, 30, paint);

        var shadow = SKShader.CreateLinearGradient(
            new SKPoint(0, 0), new SKPoint(0, length * 2),
            new[] { paint.Color.WithAlpha(127), paint.Color.WithAlpha(0) },
            null,
            SKShaderTileMode.Clamp);

        var paintShadow = new SKPaint
        {
            Shader = shadow,
            Style = SKPaintStyle.Fill,
            IsAntialias = true,
            BlendMode = SKBlendMode.SoftLight
        };

        foreach (var r in rects)
        {
            r.Offset(0, 105);
            canvas.DrawRoundRect(r, 30, 30, paintShadow);
        }

        i++;
    }
}
The idea is to make each of those rounded boxes rotate (vertically) around its own axis.
I tried using SKPath + Transform, and saving & restoring the rotationMatrix and/or the canvas, but I can't find a way to get 6 independently rotating boxes (canvas.Concat(ref rotationMatrix); makes the whole canvas rotate [*]).
Do you have any hint on how that can be achieved?
Note [*]: there's a call to rotationView.RotateYDegrees(5) every X milliseconds to update the rotationMatrix used by OnDraw.
This is what I'd like to achieve, any hints / directions would be really appreciated... :-)
The following piece of code rotates those shapes around their Z-axis:
canvas.Save();
canvas.RotateDegrees(degrees, rects[i].MidX, rects[i].MidY);
canvas.DrawRoundRect(rects[i], 30, 30, paint);
canvas.Restore();
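For reference, here is a minimal, untested sketch of one way a per-box rotation could work, reusing the rects, paint and rotationMatrix from the OnDraw above: instead of concatenating the 3D matrix once for the whole canvas, save the canvas, move the origin to each box's centre, apply the matrix there, draw, and restore.
// Hypothetical sketch (not from the original post): rotate each box around its own centre.
for (int j = 0; j < rects.Count; j++)
{
    canvas.Save();
    canvas.Translate(rects[j].MidX, rects[j].MidY);   // move the origin to this box's centre
    canvas.Concat(ref rotationMatrix);                // apply the Y-axis rotation there
    canvas.Translate(-rects[j].MidX, -rects[j].MidY); // back into the rect's own coordinates
    canvas.DrawRoundRect(rects[j], 30, 30, paint);
    canvas.Restore();
}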
Thanks
I have a Xamarin project where I am using SkiaSharp. I am relatively new to the drawing utility. I've spent a few days trying to figure out this issue with no luck. After scaling and transforming the canvas, when I touch the SKCanvasView on the phone screen and look at the 'location' point in the touch event, it's not the same location at which the canvas drew something. I need the exact location where I drew the rectangle.
It's a lot of code below, and granted it's not all the code, but these are the important parts. I am absolutely baffled why I draw at one (X, Y) location, yet when I touch the screen the touch event for the canvas gives me a completely different location than the (X, Y) the widget was drawn at.
'''
public static void DrawLayout(SKImageInfo info, SKCanvas canvas, SKSvg svg,
SetupViewModel vm)
{
var layout = vm.SelectedReticleLayout;
float yRatio;
float xRatio;
float widgetHeight = 75;
float widgetWidth = 170;
float availableWidth = 720;
float availableHeight = 1280;
var currentZoomScale = getScale();
canvas.Translate(info.Width / 2f, info.Height / 2f);
SKRect bounds = svg.ViewBox;
xRatio = (info.Width / bounds.Width) + ((info.Width / bounds.Width) * currentZoomScale);
yRatio = (info.Height / bounds.Height) + ((info.Height / bounds.Height) *
currentZoomScale);
float ratio = Math.Min(xRatio, yRatio);
canvas.Scale(ratio);
canvas.Translate(-bounds.MidX, -bounds.MidY);
canvas.DrawPicture(svg.Picture, new SKPaint { Color = SKColors.White, Style =
SKPaintStyle.Fill });
// now set the X,Y and Width and Height of the large Red Rectangle
float imageCenter = canvas.LocalClipBounds.Width / 2;
layout.RedBorderXOffSet = imageCenter - (imageCenter / 2.0f) +
canvas.LocalClipBounds.Left;
float redBorderYOffSet = (float)(svg.Picture.CullRect.Top +
Math.Ceiling(.0654450261780105f * svg.Picture.CullRect.Bottom));
layout.RedBorderYOffSet = (float)(canvas.LocalClipBounds.Top +
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom));
layout.RedBorderWidth = canvas.LocalClipBounds.Width / 2.0f;
layout.RedBorderWidthXOffSet = layout.RedBorderWidth + layout.RedBorderXOffSet;
layout.RedBorderHeight = (float)(canvas.LocalClipBounds.Bottom -
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom * 2)) -
canvas.LocalClipBounds.Top;
layout.RedBorderHeightYOffSet = layout.RedBorderYOffSet + layout.RedBorderHeight;
// draw the large red rectangle
canvas.DrawRect(layout.RedBorderXOffSet, layout.RedBorderYOffSet, layout.RedBorderWidth,
layout.RedBorderHeight, RedBorderPaint);
// clear the tracked widgets, tracked widgets are updated every time we draw the widgets
// base widgets contain the default size and location relative to the scope. base line widgets
// will need to be multiplied by the node scale height and width
layout.TrackedWidgets.Clear();
var widget = new widget
{
X = layout.RedBorderXOffSet + 5,
Y = layout.RedBorderYOffSet + layout.TrackedReticleWidgets[0].Height + 15,
Height = layout.RedBorderHeight * (widgetHeight / availableHeight),
Width = layout.RedBorderWidth * (widgetWidth / availableWidth)
};
// define colors for text and border colors for small rectangles (widgets)
public static SKPaint SelectedWidgetColor => new SKPaint { Color = SKColors.LightPink,
Style = SKPaintStyle.StrokeAndFill, StrokeWidth = 3 };
public static SKPaint EmptyWidgetBorder => new SKPaint { Color = SKColors.DarkGray,
Style = SKPaintStyle.Stroke, StrokeWidth = 3 };
public static SKPaint EmptyWidgetText => new SKPaint { Color = SKColors.Black, TextSize
= 10, FakeBoldText = false, Style = SKPaintStyle.Stroke, Typeface =
SKTypeface.FromFamilyName("Arial") };
public static SKPaint DefinedWidgetText => new SKPaint { Color = SKColors.DarkRed,
FakeBoldText = false, Style = SKPaintStyle.Stroke };
// create small rectangle (widget) and draw the widget
var widgetRectangle = SKRect.Create(widget.X, widget.Y, widget.Width, widget.Height);
canvas.DrawRect(widgetRectangle, widget.IsSelected ? SelectedWidgetColor :
EmptyWidgetBorder);
// now lets create the text to draw in the widget
string text = EnumUtility.GetDescription(widget.WidgetDataType);
float textWidth = EmptyWidgetText.MeasureText(text);
EmptyWidgetText.TextSize = widget.Width * GetUnscaledWidgetWith(widget) *
EmptyWidgetText.TextSize / textWidth;
SKRect textBounds = new SKRect();
EmptyWidgetText.MeasureText(text, ref textBounds);
float xText = widgetRectangle.MidX - textBounds.MidX;
float yText = widgetRectangle.MidY - textBounds.MidY;
canvas.DrawText(text, xText, yText, EmptyWidgetText);
'''
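A common way to handle this kind of mismatch (a sketch under assumptions, not code from this project) is to capture the canvas's TotalMatrix at the end of DrawLayout, after all the Translate/Scale calls, invert it, and push the raw touch point through the inverse so it lands in the same coordinate space the rectangles were drawn in. The drawMatrix field and MapTouchToCanvas helper below are hypothetical names; depending on the view, the touch location may also need converting from view units to canvas pixels first.
// Hypothetical sketch: map a raw touch location back into drawing coordinates.
// At the end of DrawLayout, after all Translate/Scale calls, record the transform:
//     drawMatrix = canvas.TotalMatrix;
private SKMatrix drawMatrix = SKMatrix.CreateIdentity(); // updated at the end of DrawLayout

private SKPoint MapTouchToCanvas(SKPoint touchLocation)
{
    // Invert the matrix used for drawing and run the touch point through it,
    // so it matches the (X, Y) values the widgets were drawn at.
    if (drawMatrix.TryInvert(out SKMatrix inverse))
        return inverse.MapPoint(touchLocation);
    return touchLocation; // non-invertible matrix: fall back to the raw point
}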
I'm trying to implement a simple turn-around-and-move feature with Three.js. On mouse click, the object is supposed to first turn around and then move to the clicked location.
Codepen
The rotation is achieved with raycasting and lookAt(). It works by itself and it always works on the first click. If you remove the translation, it works continuously. The issue occurs when rotation and translation are implemented together. If you click a second time, after the object has moved to the previous clicked location, it doesn't rotate as expected. Depending on the mouse location it can flip to the other side without rotating at all.
Clarification: When you click the first time, notice how the object slowly and steadily turns around to face that direction? But the second time, after the object has moved, the rotation is quicker and/or flimsier or it simply flips over and there is no rotation at all. It depends on where you click in relation to the object.
I believe the issue stems from calling lookAt while the object is already located at the current lookAt target. If I stop the translation halfway, the next rotation works better. But of course I need it to go all the way.
I'm somewhat lost on how to proceed with this issue. Any help would be appreciated.
/*** Setup scene ***/
let width = 800
let height = 600
let scene
let renderer
let worldAxis
let box
let angle
let boxAxes
scene = new THREE.Scene()
worldAxis = new THREE.AxesHelper(200);
scene.add(worldAxis);
// Setup renderer
renderer = new THREE.WebGLRenderer({alpha: true, antialias: true})
renderer.setPixelRatio(window.devicePixelRatio)
renderer.setSize(width, height)
document.body.appendChild(renderer.domElement)
// Setup camera
const camera = new THREE.OrthographicCamera(
width / - 2, // left
width / 2, // right
height / 2, // top
height / - 2, // bottom
0, // near
1000 ); // far
camera.position.set(0, 0, 500)
camera.updateProjectionMatrix()
// Setup box
let geometry = new THREE.BoxGeometry( 15, 15, 15 );
let material = new THREE.MeshBasicMaterial( { color: "grey" } );
box = new THREE.Mesh( geometry, material );
box.position.set(100, 150, 0)
box.lookAt(getPointOfIntersection(new THREE.Vector2(0, 0)))
addAngle()
boxAxes = new THREE.AxesHelper(50);
box.add(boxAxes)
scene.add(box)
renderer.render(scene, camera);
/*** Setup animation ***/
let animate = false
let currentlyObservedPoint = new THREE.Vector2();
let rotationIncrement = {}
let translationIncrement = {}
let frameCount = 0
document.addEventListener('click', (event) => {
let mousePosForRotate = getMousePos(event.clientX, event.clientY)
rotationIncrement.x = (mousePosForRotate.x - currentlyObservedPoint.x)/100
rotationIncrement.y = (mousePosForRotate.y - currentlyObservedPoint.y)/100
let mousePosForTranslate = getMousePosForTranslate(event)
translationIncrement.x = (mousePosForTranslate.x - box.position.x)/100
translationIncrement.y = (mousePosForTranslate.y - box.position.y)/100
animate = true
})
function animationLoop() {
if (animate === true) {
if (frameCount < 100) {
rotate()
} else if (frameCount < 200) {
translate()
} else {
animate = false
frameCount = 0
}
frameCount++
renderer.render(scene, camera)
}
requestAnimationFrame(animationLoop)
}
function rotate() {
currentlyObservedPoint.x += rotationIncrement.x
currentlyObservedPoint.y += rotationIncrement.y
let pointOfIntersection = getPointOfIntersection(currentlyObservedPoint)
box.lookAt(pointOfIntersection)
addAngle()
}
function translate() {
box.position.x += translationIncrement.x
box.position.y += translationIncrement.y
}
function getMousePos(x, y) {
let mousePos = new THREE.Vector3(
(x / width) * 2 - 1,
- (y / height) * 2 + 1,
0)
return mousePos
}
function getMousePosForTranslate(event) {
let rect = event.target.getBoundingClientRect();
let mousePos = { x: event.clientX - rect.top, y: event.clientY - rect.left }
let vec = getMousePos(mousePos.x, mousePos.y)
vec.unproject(camera);
vec.sub(camera.position).normalize();
let distance = - camera.position.z / vec.z;
let pos = new THREE.Vector3(0, 0, 0);
pos.copy(camera.position).add(vec.multiplyScalar(distance));
return pos
}
function getPointOfIntersection(mousePos) {
let plane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0);
let pointOfIntersection = new THREE.Vector3()
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mousePos, camera)
raycaster.ray.intersectPlane(plane, pointOfIntersection)
return pointOfIntersection
}
function addAngle() {
let angle = box.rotation.x - 32
box.rotation.x = angle
}
animationLoop()
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/105/three.min.js'></script>
I'm trying to piece together a sphere with individual slices. Basically, I have multiple SphereGeometry slices that form a sphere, used to project a panorama. Slices are used for lazy loading very large panoramas.
With the default texture wrapping mode (THREE.ClampToEdgeWrapping) on these slices, from far away the panorama looks fine, but if you zoom in it's very clear that the edges of the meshes are stretching, causing visible seams. It makes sense, since it's stretching the last pixel at the edge.
I also tried changing wrapping mode to THREE.RepeatWrapping, however, the seam becomes completely visible:
So my question is, what's the best method here for piecing together meshes? Or is this just unavoidable?
Off the top of my head, you'd have to make each texture contain one border row and border column in each direction that's a repeat of its neighbor, then adjust the UV coordinates appropriately.
For example if the big image is 8 pixels wide and 6 pixels tall
ABCDEFGH
IJKLMNOP
QRSTUVWX
YZ123456
789abcde
fghijklm
And you want to divide it into 4 parts (each 4 x 3),
then you'd need these 4 parts
ABCDE DEFGH
IJKLM LMNOP
QRSTU TUVWX
YZ123 23456
QRSTU TUVWX
YZ123 23456
789ab abcde
fghij ijklm
Also, to make it easy, repeat the edges, so
AABCDE DEFGHH
AABCDE DEFGHH
IIJKLM LMNOPP
QQRSTU TUVWXX
YYZ123 234566
QQRSTU TUVWXX
YYZ123 234566
7789ab abcdee
ffghij ijklmm
ffghij ijklmm
Repeating the edges is because I'm assuming you're splitting into more than 2x2, so technically, if you were going to split something 50 pixels wide into 5 parts, you could do parts that are 11, 12, 12, 12, 11 in width. The edges being only 11 pixels instead of 12 would need a different UV adjustment. But, by repeating the edges, we can make them all 12, 12, 12, 12, 12 so everything is consistent.
Testing: left is the normal split, showing the seam. Right is the fixed one, with no seam.
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
<canvas id="c"></canvas>
<script type="module">
import * as THREE from 'https://threejsfundamentals.org/threejs/resources/threejs/r115/build/three.module.js';
function main() {
const canvas = document.querySelector('#c');
const renderer = new THREE.WebGLRenderer({canvas});
const fov = 75;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 5;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 1;
const scene = new THREE.Scene();
// make our texture using a canvas to test
const bigImage = document.createElement('canvas');
{
const ctx = bigImage.getContext('2d');
const width = 32;
const height = 16;
ctx.canvas.width = width;
ctx.canvas.height = height;
const gradient = ctx.createLinearGradient(0, 0, width, height);
gradient.addColorStop(0, 'red');
gradient.addColorStop(0.5, 'yellow');
gradient.addColorStop(1, 'blue');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, width, height);
}
const forceTextureInitialization = function() {
const material = new THREE.MeshBasicMaterial();
const geometry = new THREE.PlaneBufferGeometry();
const scene = new THREE.Scene();
scene.add(new THREE.Mesh(geometry, material));
const camera = new THREE.Camera();
return function forceTextureInitialization(texture) {
material.map = texture;
renderer.render(scene, camera);
};
}();
// bad
{
const ctx = document.createElement('canvas').getContext('2d');
// split the texture into 4 parts across 4 planes
const across = 2;
const down = 2;
const pixelsAcross = bigImage.width / across;
const pixelsDown = bigImage.height / down;
ctx.canvas.width = pixelsAcross;
ctx.canvas.height = pixelsDown;
for (let y = 0; y < down; ++y) {
for (let x = 0; x < across; ++x) {
ctx.clearRect(0, 0, pixelsAcross, pixelsDown);
ctx.drawImage(bigImage,
x * pixelsAcross, (down - 1 - y) * pixelsDown, pixelsAcross, pixelsDown,
0, 0, pixelsAcross, pixelsDown);
const texture = new THREE.CanvasTexture(ctx.canvas);
// see https://threejsfundamentals.org/threejs/lessons/threejs-canvas-textures.html
forceTextureInitialization(texture);
const geometry = new THREE.PlaneBufferGeometry(1 / across, 1 / down);
const material = new THREE.MeshBasicMaterial({map: texture});
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);
plane.position.set(-1 + x / across, y / down - 0.25, 0);
}
}
}
// good
{
const ctx = document.createElement('canvas').getContext('2d');
// split the texture into 4 parts across 4 planes
const across = 2;
const down = 2;
const pixelsAcross = bigImage.width / across;
const pixelsDown = bigImage.height / down;
ctx.canvas.width = pixelsAcross + 2;
ctx.canvas.height = pixelsDown + 2;
// just draw the image at all these offsets.
// it would be more efficient to draw the edges
// 1 pixel wide but I'm lazy
const offsets = [
[ 0, 0],
[ 1, 0],
[ 2, 0],
[ 0, 1],
[ 2, 1],
[ 0, 2],
[ 1, 2],
[ 2, 2],
[ 1, 1],
];
for (let y = 0; y < down; ++y) {
for (let x = 0; x < across; ++x) {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
let srcX = x * pixelsAcross - 1;
let srcY = (down - 1 - y) * pixelsDown - 1;
let dstX = 0;
let dstY = 0;
let width = pixelsAcross + 2;
let height = pixelsDown + 2;
ctx.drawImage(bigImage,
srcX, srcY, width, height,
dstX, dstY, width, height);
// handle edges
if (srcX < 0) {
// repeat left edge
ctx.drawImage(bigImage,
0, srcY, 1, height,
0, dstY, 1, height);
}
if (srcY < 0) {
// repeat top edge
ctx.drawImage(bigImage,
srcX, 0, width, 1,
dstX, 0, width, 1);
}
if (srcX + width > bigImage.width) {
// repeat right edge
ctx.drawImage(bigImage,
bigImage.width - 1, srcY, 1, height,
ctx.canvas.width - 1, dstY, 1, height);
}
if (srcY + height > bigImage.height) {
// repeat bottom edge
ctx.drawImage(bigImage,
srcX, bigImage.height - 1, width, 1,
dstX, ctx.canvas.height - 1, width, 1);
}
// TODO: handle corners
const texture = new THREE.CanvasTexture(ctx.canvas);
texture.minFilter = THREE.LinearFilter;
// offset UV coords 1 pixel to skip the edge pixel
texture.offset.set(1 / ctx.canvas.width, 1 / ctx.canvas.height);
// only textureSize - 2 of the pixels in the texture
texture.repeat.set(pixelsAcross / ctx.canvas.width, pixelsDown / ctx.canvas.height);
// see https://threejsfundamentals.org/threejs/lessons/threejs-canvas-textures.html
forceTextureInitialization(texture);
const geometry = new THREE.PlaneBufferGeometry(1 / across, 1 / down);
const material = new THREE.MeshBasicMaterial({map: texture});
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);
plane.position.set(1 + x / across - 0.5, y / down - 0.25, 0);
}
}
}
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
function render(time) {
time *= 0.001;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
}
main();
</script>
I have a canvas element on which I am drawing a number of images and overlaying them with text. Unfortunately the problem requires that some of these images and their corresponding text be rotated. Added to this is the problem that some of the images must have a corresponding background color (the images are simple outlines of desks for a floor plan).
Here is the function I have built to handle adding a single desk to the plan. The problem I am having is that when I use the rotation, neither the text nor the background colors show up, while they appear correctly if I do not rotate the image, except that they are not rotated and the background fillRect() is oriented 90 degrees off.
function redrawDesk(desk, ctx, color) {
var rotate = desk.rotation == 90 || desk.rotation == 270;
if (rotate) {
ctx.save();
ctx.rotate(Math.PI / 2);
ctx.clearRect(desk.left, desk.top, desk.width, desk.height);
ctx.restore()
}
var img = $("#desk_" + desk.rowID)[0];
ctx.drawImage(img, desk.left, desk.top, desk.height, desk.width);
var x = desk.left;
var y = desk.top;
var h = desk.height;
var w = desk.width;
if (rotate) {
//ctx.save()
ctx.rotate(Math.PI / 2);
var tmp=x;
x=y;
y=tmp;
tmp=h;
h=w;
w=tmp;
}
ctx.textAlign = "center";
ctx.fillText(desk.deskID, x + w / 2,y + h/ 2);
if (color) {
ctx.fillStyle = color;
ctx.fillRect(x, y, w, h);
}
//ctx.restore();
if (rotate) {
ctx.rotate(Math.PI / -2);
}
}
Thank you
The main problem is that you are defining the desk and text in absolute coordinates.
Define objects in their local coordinate system. E.g. the desk has a height and width but not a position; it's drawn relative to itself (around 0,0).
const desk = {
w : 10, h : 10,
color : "blue",
draw() {
ctx.fillStyle = this.color;
ctx.fillRect(-this.w / 2, -this.h / 2, this.w, this.h);
}
};
You can then position the desk into the world coordinate system (the canvas) by defining where its center will be.
function drawObj(obj, x, y) { // what to draw and where
ctx.setTransform(1,0,0,1,x,y); // Same as ctx.translate if 2D API is in default context
// This means you do not have to use ctx.save and
// ctx.restore in this function
obj.draw(); // draw desk
}
For a full transform it's much the same:
function drawObj(obj, x, y, scale, rotate) { // rotate is in radians
ctx.setTransform(scale, 0, 0, scale, x, y);
ctx.rotate(rotate);
obj.draw();
}
To add text, you can add it as an object to the desk and draw it in its own local coordinate system:
desk.name = {
text : "Desk",
color : "black",
font : "bold " + 20 + "px Calibri",
draw() {
ctx.font = this.font;
ctx.textAlign = "center";
ctx.fillStyle = this.color;
ctx.fillText(this.text, 0,0);
}
};
You can now draw the desk and name using the draw object function
drawObj(desk,200, 200, 1, Math.PI / 2); // Draw at 200,200 rotated 90deg CW
drawObj(desk.name, 200, 200, 1, Math.PI / 2); // draw the text rotated same and centered over desk
// Or if the text should be above and not rotated
drawObj(desk.name, 200, 200 - 30, 1, 0);
As the above functions use setTransform you may need to restore the transform. There are two ways to do this.
ctx.resetTransform(); // Check browser support for this call
ctx.setTransform(1,0,0,1,0,0); // same as above, just does it manually
In my code I tested to see if a rotation is needed. If so, I set a translate on the canvas to give me a new start point: ctx.translate(x, y); This allowed me to simplify my location settings for placing the text and the background colors, which means they now show up correctly. Here is the changed code to compare with the original:
if (rotate) {
ctx.save();
tmp = h;
h = w;
w = tmp;
ctx.translate(x, y);
}
if (color) {
ctx.fillStyle = color;
ctx.fillRect(0, 0, w, h);
}
ctx.font = "bold " + w / 2 + "px Calibri";
ctx.textAlign = "center";
ctx.fillStyle = "#000";
var c=ctx.canvas;
ctx.rotate(Math.PI / -2);
ctx.fillText(desk.deskID, 0-h/2, w/2); //x + w / 2, y + h / 2);
ctx.restore();
I have a rectangle with bounds (10, 20, 100, 200), and the CGPoints are StartPoint (0.5, 0.5) and EndPoint (1, 1). From these points, how do I calculate the segment bounds? I need to apply these bounds to a CGGradient's start and end points.
E.g. code:
GradientColor gradientColor1 = new GradientColor(){StartPoint = new CGPoint(0.5, 0), EndPoint= new CGPoint(0.5, 1)};
GradientStop stop1 = new GradientStop() { Color = UIColor.Red, Offset = 0.1f };
GradientStop stop2 = new GradientStop() { Color = UIColor.Blue, Offset = 0.9f };
Can you please help me out with this?
Here is an example that will create a left-to-right linear gradient within the current CGContext.
using (var context = UIGraphics.GetCurrentContext ()) {
context.SaveState();
var startPoint = new CGPoint(rect.Left, 0);
var endPoint = new CGPoint(rect.Right, 0);
var components = new CGColor[] { UIColor.Red.CGColor, UIColor.Blue.CGColor };
using (var rgb = CGColorSpace.CreateDeviceRGB()) {
var gradient = new CGGradient(rgb, components);
context.DrawLinearGradient(gradient, startPoint, endPoint, CGGradientDrawingOptions.DrawsBeforeStartLocation);
};
context.RestoreState();
}
By changing the start and end points you can have the gradient paint right to left, up/down, diagonally, etc.
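To tie this back to the question's relative points: assuming StartPoint and EndPoint are fractions of the rectangle (0..1), they can be mapped into absolute coordinates before being handed to DrawLinearGradient. A hypothetical helper (the name ToAbsolute is made up for illustration):
// Hypothetical helper: map a relative (0..1) gradient point into a rectangle.
static CGPoint ToAbsolute(CGPoint relative, CGRect rect)
{
    return new CGPoint(rect.X + rect.Width * relative.X,
                       rect.Y + rect.Height * relative.Y);
}

// For bounds (10, 20, 100, 200): StartPoint (0.5, 0.5) maps to (60, 120)
// and EndPoint (1, 1) maps to (110, 220); these are the absolute points
// that DrawLinearGradient expects as its start and end.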