I am using a KineticJS regular polygon (a hexagon in this case) and I am filling it with an image using setFillPatternImage. This is working. I'm creating a dynamic implementation, so I need to scale the source image depending on the current size of the polygon. This involves calculating the setFillPatternOffset and the setFillPatternScale, since the dimensions of a regular polygon are relative to its center. I can't find any clear documentation on the reference point for the fill image, nor on whether the scaling factor should use the radius as a proxy for the width and height ratios. The following code results in a misplaced image on the polygon. Does anyone know what the alignment rules are for fillPatternImage?
imageObj.onload = function() {
var whex = hexagon.getRadius() * 2;
var xratio = whex / imageObj.width;
var yratio = whex / imageObj.height;
hexagon.setFillPatternImage(imageObj);
hexagon.setFillPatternOffset(-whex/2,-whex/2);
hexagon.setFillPatternScale( [ xratio, yratio ] );
};
Thanks!
Looks like I was over-thinking this. You don't need the width of the destination polygon when setting the offset; KineticJS handles the scaling of that offset for you. As a result you simply set the offset with:
hexagon.setFillPatternOffset(-imageObj.width/2, -imageObj.height/2);
In a nutshell: I need to draw a complex object (an arrow) that consists of a certain number of primitives, for instance five (or more) lines. More importantly, that object must be transformed to particular (dynamic) coordinates, possibly including scaling.
My question is whether SkiaSharp has anything I can use to transform this complex object as a whole (some sort of grouping, etc.), or whether I still need to calculate every single point manually (with a matrix, for instance).
This question relates specifically to SkiaSharp, as I use it with Xamarin, but general answers about Skia may also help.
I realize the question might be too general (and possibly not a perfect fit for Stack Overflow), but I just can't find any specific information on Google.
Yes, I know how to use SkiaSharp for drawing primitives.
Create an SKPath and add lines and other shapes to it:
SKPath path = new SKPath();
path.LineTo(...);
...
...
Then draw the SKPath on your canvas:
canvas.DrawPath(path,paint);
You can also apply a transform to the entire path before drawing:
var rot = new SKMatrix();
SKMatrix.RotateDegrees(ref rot, 45.0f);
path.Transform(rot);
If you are drawing something more complex than a path, SKPicture is perfect for this. You can set it up so that you construct it once and then reuse it easily and efficiently. In the example below, the SKPicture's origin is at the center of a 100 x 100 rectangle, but that choice is arbitrary.
SKPicture myPicture;
SKPicture MyPicture {
get {
if(myPicture != null) {
return myPicture;
}
using(SKPictureRecorder recorder = new SKPictureRecorder())
using(SKCanvas canvas = recorder.BeginRecording(new SKRect(-50, -50, 50, 50))) {
// draw using primitives
...
myPicture = recorder.EndRecording();
}
return myPicture;
}
}
Then you apply your transforms to the canvas, draw the picture and restore the canvas state. offsetX and offsetY correspond to where the origin of the SKPicture will be rendered.
canvas.Save();
canvas.Translate(offsetX, offsetY);
canvas.Scale(scaleAmount);
canvas.RotateDegrees(degrees);
canvas.DrawPicture(MyPicture);
canvas.Restore();
I create a polygon on image_area in MATLAB using impoly. After the polygon (ROI) is created, I need to block the ability to move and drag it. I don't know how to do this. I would appreciate any help.
You can use makeConstrainToRectFcn to build a constraint rectangle that exactly encompasses your ROI, so any attempt to move the ROI has no effect. You can also, after creating the ROI, call setVerticesDraggable with false to prevent the vertices from being dragged.
Sample code (adapted from an example by MathWorks):
clc
clear
figure
imshow('gantrycrane.png');
h = impoly(gca, [188,30; 189,142; 93,141; 13,41; 14,29]);
%// Get current position
Pos = getPosition(h);
%// Prevent draggable vertices
setVerticesDraggable(h,0);
%// Set up rectangle to prevent movement of ROI
fcn = makeConstrainToRectFcn('impoly', [min(Pos(:,1)) max(Pos(:,1))], [min(Pos(:,2)) max(Pos(:,2))]);
%// Apply function
h.setPositionConstraintFcn(fcn);
which results in this kind of situation (with red rectangle for illustration):
I've got a question about libGDX collision detection. Because it's a rather specific question I have not found any good solution on the internet yet.
So, I already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players when there are skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem in the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a rather heavy approach, but since only overlapping pixels are checked and the rest of the game is really lightweight, it works completely fine.
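For illustration, here is a minimal sketch of the kind of check I mean, assuming the skill is available as a libGDX Pixmap and both bounding boxes are axis-aligned Rectangles (the helper and its names are made up, and the y-axis flip between world and Pixmap coordinates is ignored for brevity):
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Rectangle;

public class PixelCollision {
    // Hypothetical helper, not the code from my old engine.
    public static boolean hitsPixelPerfect(Rectangle skillBounds, Pixmap skillPixmap, Rectangle targetBounds) {
        Rectangle overlap = new Rectangle();
        // Steps 1 + 2: rectangle collision and the overlapping section
        if (!Intersector.intersectRectangles(skillBounds, targetBounds, overlap)) {
            return false;
        }
        // Steps 3 + 4: check every pixel of the overlapping part for transparency
        int startX = (int) (overlap.x - skillBounds.x);
        int startY = (int) (overlap.y - skillBounds.y);
        for (int x = startX; x < startX + (int) overlap.width; x++) {
            for (int y = startY; y < startY + (int) overlap.height; y++) {
                int rgba = skillPixmap.getPixel(x, y); // RGBA8888
                if ((rgba & 0x000000ff) > 0) {         // non-transparent pixel found -> collision
                    return true;
                }
            }
        }
        return false;
    }
}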
At the moment my skill images are loaded as "TextureRegion", where it is not possible to access single pixels.
I have found out that libGDX has a Pixmap class, which would allow such pixel checks. The problem is that loading the images as Pixmaps in addition would (1) be even heavier and (2) defeat the whole purpose of the Texture system.
An alternative could be to load all skills as Pixmaps only. What do you think: would this be a good approach? Is it possible to draw many Pixmaps on the screen without issues or lag?
Approach 2:
Another way would be to create Polygons with the shape of the skills and use them for the collision detection.
a)
But how would I define a Polygon shape for every single skill (there are over 150 of them)? Well, after browsing a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows you to create Polygon shapes by hand and then save them as JSON files, readable by the libGDX application. Now here come the difficulties:
The Physics Body Editor is connected to Box2d (which I am not using). I would either have to add the whole Box2d physics engine (which I do not need at all) just for one tiny collision detection, or I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite differ a lot in their shapes (like the second skill sprite example). When working with the BodyEditor tool, I would have to define not only the shape of every single skill, but the shape of several images (up to 12) of every single skill. That would be extremely time-intensive and a huge mess when implementing these dozens of polygon shapes.
b)
If there is any smooth way to automatically generate Polygons out of images, that could be the solution. I could simply connect every sprite section to a generated polygon and check for collisions that way. There are a few problems though:
Is there any tool which can generate Polygon shapes out of an image (and does not take too much time to do so)?
I don't think that a tool like this (if one exists) can work directly with Textures. It would probably need Pixmaps. The Pixmaps would not need to stay loaded after the polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches, but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could share some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds; as I am really new to libGDX, I just don't know a lot about it yet.
Currently I think I would go with approach 1 and work with pixel detection. That is how I made exact collision detection possible in my previous Android game.
What do you think?
Greetings
Felix
Personally, I feel that pixel-to-pixel collision would be overkill performance-wise and would still produce moments where I'd feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "hitbox" to each skill. Street Fighter is a popular game that uses this technique (newer versions are in 3D, but hitbox collision is still 2D). Hitboxes can change frame by frame along with the animation.
Empty spot here to add example images - google "Streetfighter hitbox" in the meantime
For your axe, there could be a defined rectangular hitbox along the edge of one or both ends, or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but it also isn't overly performance-heavy, unlike treating every single pixel as its own hitbox.
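A minimal sketch of what frame-based hitboxes could look like with libGDX's Rectangle class (the class and field names here are made up for illustration):
import com.badlogic.gdx.math.Rectangle;

// Each animation frame of a skill carries its own set of hitboxes;
// a hit is registered when any of them overlaps a body-part rectangle.
public class SkillHitboxes {
    private final Rectangle[][] framesHitboxes; // one array of hitboxes per animation frame

    public SkillHitboxes(Rectangle[][] framesHitboxes) {
        this.framesHitboxes = framesHitboxes;
    }

    public boolean hits(int currentFrame, Rectangle bodyPart) {
        for (Rectangle hitbox : framesHitboxes[currentFrame]) {
            if (hitbox.overlaps(bodyPart)) {
                return true;
            }
        }
        return false;
    }
}
For the axe example, a frame's array might hold just one rectangle over the metal head.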
I've used that exact body editor you referenced and it has the ability to generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you since you'd have to implement Box2d, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to the 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath JSON file
 * @param width the width of the sprite, used to scale fixtures and find origin
 * @param height the height of the sprite, used to find origin
*/
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
JsonNode collisionShapes = null;
try {
collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
} catch (JsonProcessingException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
if (node.path("name").asText().equals(lookupName)) {
Array<PolygonShape> polyShapes = new Array<PolygonShape>();
Array<CircleShape> circleShapes = new Array<CircleShape>();
for (JsonNode polygon : node.findPath("polygons")) {
Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
for (JsonNode vector : polygon) {
vertices.add(new Vector2(
(float)vector.path("x").asDouble() * width,
(float)vector.path("y").asDouble() * width)
.sub(width/2, height/2));
}
polyShapes.add(new PolygonShape());
polyShapes.peek().set(vertices.toArray());
}
for (final JsonNode circle : node.findPath("circles")) {
circleShapes.add(new CircleShape());
circleShapes.peek().setPosition(new Vector2(
(float)circle.path("cx").asDouble() * width,
(float)circle.path("cy").asDouble() * width)
.sub(width/2, height/2));
circleShapes.peek().setRadius((float)circle.path("r").asDouble() * width);
}
for (PolygonShape shape : polyShapes) {
Vector2 vectors[] = new Vector2[shape.getVertexCount()];
for (int i = 0; i < shape.getVertexCount(); i++) {
vectors[i] = new Vector2();
shape.getVertex(i, vectors[i]);
}
shape.set(vectors);
fixtureDef.shape = shape;
body.createFixture(fixtureDef);
}
for (CircleShape shape : circleShapes) {
fixtureDef.shape = shape;
body.createFixture(fixtureDef);
}
}
}
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
ContactListener listener = new ContactListener() {
@Override
public void beginContact(Contact contact) {
Body bodyA = contact.getFixtureA().getBody();
Body bodyB = contact.getFixtureB().getBody();
for (Ship ship : ships) {
if (ship.body == bodyA) {
ship.setExplode();
}
if (ship.body == bodyB) {
ship.setExplode();
}
}
}
};
return listener;
}
Then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small since you're dealing in meters, not pixels, once you start using Box2d. One sprite's height was 0.8f and width was 1.2f, for example. If you made the sprites' width and height in pixels, the physics engine would hit the built-in speed limits: http://www.iforce2d.net/b2dtut/gotchas
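For example, a simple way to handle this is a pixels-per-meter constant applied whenever sprite sizes are handed to Box2d (the class and the value 100 are just an illustration, not from my project):
// Hypothetical conversion helper: Box2d works best with bodies roughly
// between 0.1 and 10 meters, so pixel sizes are divided by a constant.
public final class Units {
    public static final float PIXELS_PER_METER = 100f; // example value

    public static float toMeters(float pixels) {
        return pixels / PIXELS_PER_METER;
    }

    public static float toPixels(float meters) {
        return meters * PIXELS_PER_METER;
    }
}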
Don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of the image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
image = Image.open(filename + ".png")

# Run an edge-detection filter and save the result for inspection
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
rows = image.height

# Collect the (x, y) position of every non-transparent edge pixel
points = []
w = 1
h = 1
i = 0
for pixel in list(image.getdata()):
    if pixel[3] > 0:
        points.append((w, h))
    if i == cols:
        w = 0
        i = 0
        h += 1
    w += 1
    i += 1

# Write the points as "x, y" pairs, one per line (text mode, not binary)
with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % x for x in points))
In case of updates you can find them here: export positions
var bd:BitmapData=new BitmapData(file.width,file.height);
bd.setPixels(new Rectangle(0,0,file.width,file.height),file.raw);
var scale_x_percents:Number = (w / bd.width);
var scale_y_percents:Number = (h / bd.height);
if(!stretch) {
if(bd.width*scale_y_percents>=w) {
scale_x_percents=scale_y_percents;
}
else if(bd.height*scale_x_percents>=h) {
scale_y_percents=scale_x_percents;
}
}
var matrix:Matrix = new Matrix();
matrix.scale(scale_x_percents,scale_y_percents);
var resizedBd:BitmapData = new BitmapData(Math.floor(bd.width*scale_x_percents), Math.floor(bd.height*scale_y_percents), true, 0x000000);
resizedBd.draw(bd, matrix, null, null, null, true); // true is smoothing option, it will blur sharpened pixels
I'm having a problem with image resizing. It looks like smoothing is not working, or something is missing in the code. Maybe the Matrix should have something more?
Original image:
http://imageshack.us/a/img28/4784/dc7f2ec4b0f3323cdc4e01e.jpg
and it's result:
http://imageshack.us/a/img855/4784/dc7f2ec4b0f3323cdc4e01e.jpg
I can link a bunch of other images. Some strange pixel artifacts appear.
Can it be fixed somehow?
I have tested JPEG quality at 100% and stage.quality = 'best', but neither gives the required quality.
It seems that your problem is the "nearest" sampling mode used when drawing one BitmapData onto another. Perhaps the following might help:
var sh:Shape=new Shape();
sh.graphics.beginBitmapFill(bd,matrix,false,true);
sh.graphics.lineStyle(0,0,0); // no lines border this shape
sh.graphics.drawRect(0,0,resizedBd.width,resizedBd.height);
sh.graphics.endFill();
resizedBd.draw(sh); // or with smoothing on
Flash's native graphics renderer will most likely perform at least a bilinear interpolation on a drawn bitmap, which seems to be your desired result. Also, stage.quality applies if that shape is added to the stage (by the way, you can use the shape to display an uploaded pic, then draw it onto a BitmapData in order to save). But this might not work; I can't test it right now.
Using the plethora of drawing functions in Cocoa or Quartz it's rather easy to draw paths and fill them with a gradient. However, I can't seem to find an acceptable way to stroke a path with a line width of a few pixels and fill that stroke with a gradient. How is this done?
Edit: Apparently the question wasn't clear enough. Thanks for the responses so far, but I already figured that out. What I want to do is this:
(source: emle.nl)
The left square is an NSGradient drawn in a path, followed by a path stroke message. The right one is what I want to do: fill the stroke itself with the gradient.
If you convert the NSBezierPath to a CGPath, you can use the CGContextReplacePathWithStrokedPath() function to retrieve a path that is the outline of the stroked path. Graham Cox's excellent GCDrawKit has a -strokedPath category method on NSBezierPath that will do this for you without needing to drop down to Core Graphics.
Once you have the outlined path, you can fill that path with an NSGradient.
I can't seem to find an acceptable way to stroke a path with a line width of a few pixels and fill that stroke with a gradient. How is this done?
[Original answer replaced with the following]
Ah, I see. You want to apply the gradient to the stroke.
To do that, you use a blend mode. I explained how to do this in an answer on another question. Here's the list of steps, adapted to your goal:
Begin a transparency layer.
Stroke the path with any non-transparent color.
Set the blend mode to source in.
Draw the gradient.
End the transparency layer.
Following Peter Hosey's answer, I've managed to draw a simple gradient curve, which looks like this:
I've done this in the drawRect(_:) method of a UIView subclass with the code below:
override func drawRect(rect: CGRect) {
let context = UIGraphicsGetCurrentContext()
CGContextBeginTransparencyLayer (context, nil)
let path = createCurvePath()
UIColor.blueColor().setStroke()
path.stroke()
CGContextSetBlendMode(context, .SourceIn)
let colors = [UIColor.blueColor().CGColor, UIColor.redColor().CGColor]
let colorSpace = CGColorSpaceCreateDeviceRGB()
let colorLocations :[CGFloat] = [0.0, 1.0]
let gradient = CGGradientCreateWithColors(colorSpace, colors, colorLocations)
let startPoint = CGPoint(x: 0.0, y: rect.size.height / 2)
let endPoint = CGPoint(x: rect.size.width, y: rect.size.height / 2)
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, CGGradientDrawingOptions.DrawsBeforeStartLocation)
CGContextEndTransparencyLayer(context)
}
The function createCurvePath() returns a UIBezierPath object. I've also set path.lineWidth to 5 points.