Kinect Color and Depth Stream Performance

I am developing an XNA game. In my game I use the Kinect's color stream and depth stream to extract the image of the player only. To do this, I check the depth pixels and find those with PlayerIndex > 0, map these depth points to color points with the MapDepthPointToColorPoint method, and then read the colors of those pixels from the color video.
This works, but performance is very bad (especially for a game). When I close the color stream and simply paint the player-index pixels black, everything runs smoothly; with the color stream enabled it is far less efficient.
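For reference, the per-pixel extraction described above looks roughly like the following sketch (Kinect SDK 1.x; the depthData buffer, the sensor reference, and both resolution formats are assumptions, and the color copy is left as a comment):

for (int y = 0; y < depthHeight; y++)
{
    for (int x = 0; x < depthWidth; x++)
    {
        int i = y * depthWidth + x;
        // the low bits of each depth value hold the player index
        if ((depthData[i] & DepthImageFrame.PlayerIndexBitmask) > 0)
        {
            DepthImagePoint depthPoint = new DepthImagePoint
            {
                X = x,
                Y = y,
                Depth = depthData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth
            };
            ColorImagePoint colorPoint = sensor.CoordinateMapper.MapDepthPointToColorPoint(
                DepthImageFormat.Resolution640x480Fps30,
                depthPoint,
                ColorImageFormat.RgbResolution640x480Fps30);
            // copy the color pixel at (colorPoint.X, colorPoint.Y) into the player texture
        }
    }
}

One thing worth knowing: calling the mapper once per pixel is expensive; the SDK also offers frame-level mapping (MapDepthFrameToColorFrame), which converts the entire depth frame in a single call and is typically much cheaper.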
I have tried two approaches in the AllFramesReady handler:
1)
using (ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame())
{
    // color logic
}
using (DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame())
{
    // depth logic
}
and
2)
using (ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame())
{
    if (colorVideoFrame != null)
    {
        colorVideoFrame.CopyPixelDataTo(colorData); // copy the data, then release the frame quickly
        colorReceived = true;
    }
}
if (colorReceived)
{
    // color logic (runs on the copied colorData)
}
using (DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame())
{
    if (depthVideoFrame != null)
    {
        depthVideoFrame.CopyPixelDataTo(depthData);
        depthReceived = true;
    }
}
if (depthReceived)
{
    // depth logic (runs on the copied depthData)
}
The second approach seems to perform better, because the color and depth logic run outside the using blocks, returning the frames to the Kinect as soon as possible. But when a second player comes in, performance still decreases drastically, and sometimes the player's image simply disappears for one or two frames with this second approach.
What more can I do to increase the performance of the color and depth streams? Thanks for any help.

Related

Unity 3D 5: Scale images from 16:9 to other resolutions

So I created a snake game with a border made of 2D sprites. My game window is set to 16:9, and at this resolution the images look fine. However, scaling to anything else makes the game look weird. I want the game window to be resizable. How can I make my sprites stretch and shrink based on the current resolution?
I have already tried creating a sprite that is 120 in width and 1 in height, then using the x/y/z scales to change the scale to 16, but this produced a huge sprite.
I am experimenting with a Canvas Scaler, so far without success.
My end goal isn't to have the game fit pre-defined resolutions like 16:9, but to scale according to the current window size, so that if the window is made extremely thin, only the top and bottom borders become extremely thin while the gameplay stays confined inside the borders.
Below is how my sprites are set up and how they are placed in the hierarchy (screenshots omitted):
Border sprite: this sprite's width is 70 pixels, because that is how it was given to me.
Border in hierarchy: position, scale, and rotation are at their defaults; BorderTop, for example, is moved 25 on the y axis to sit at the top of the screen.
(Further screenshots showed the camera setup and the current output at example resolutions of 16:9 and 5:4.)
Add a simple script to every border:
public class Border : MonoBehaviour {

    enum BorderTypes
    {
        bottom, top, left, right
    }

    [SerializeField] float borderOffset = 0.1f;
    [SerializeField] BorderTypes type = BorderTypes.top;

    // Use this for initialization
    void Start () {
        switch (type)
        {
            case BorderTypes.bottom:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, borderOffset, 10));
                break;
            case BorderTypes.top:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 1 - borderOffset, 10));
                break;
            case BorderTypes.left:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(borderOffset, 0.5f, 10));
                break;
            case BorderTypes.right:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(1 - borderOffset, 0.5f, 10));
                break;
            default:
                break;
        }
    }
}
Alternatively, a simple UI-based way:
First, create a Canvas.
In the Canvas component in the Inspector, set Render Mode to Screen Space - Camera and drag your main camera into the Render Camera field.
Then, in the Canvas Scaler component, set UI Scale Mode to Scale With Screen Size (the original answer showed the remaining settings in a screenshot).
Now drag your game objects onto the Canvas.
First, you should not change your sprites; your problem is the game viewport. You simply cannot have a fixed 16:9 aspect ratio full-screen on every device. You have two options here:
Don't care about the aspect ratio and adapt your gameplay: set the Canvas Scaler mode to "Scale With Screen Size" with a reference resolution such as 1920x1080, 1600x900, or 800x450. Your game logic must then take into account that screens come in different shapes; you can experiment in the editor by switching aspect ratios in the Game view.
Maintain 16:9 and calculate where to add "bars" (at the sides or top and bottom) when the game is initialized; a sketch of this follows below.
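A minimal sketch of the second option, attached to the main camera (the class name AspectEnforcer is illustrative): it shrinks the camera's normalized viewport rect so the rendered area keeps 16:9, producing bars elsewhere.

using UnityEngine;

public class AspectEnforcer : MonoBehaviour {

    const float targetAspect = 16f / 9f;

    void Start () {
        float windowAspect = (float)Screen.width / Screen.height;
        float scale = windowAspect / targetAspect;
        Camera cam = GetComponent<Camera>();
        if (scale < 1f) {
            // window too narrow: letterbox (bars on top and bottom)
            cam.rect = new Rect(0f, (1f - scale) / 2f, 1f, scale);
        } else {
            // window too wide: pillarbox (bars on the sides)
            cam.rect = new Rect((1f - 1f / scale) / 2f, 0f, 1f / scale, 1f);
        }
    }
}

For a resizable window you would re-run this check whenever the screen size changes (for example from Update).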

Is it possible to save a generated image in Codename One?

My question is related to this previous question. What I want to achieve is to stack images (they have transparency), write a string on top, and save the photomontage / photo collage at full resolution.
@Override
protected void beforeMain(Form f) {
    Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
    Image watermark = fetchResourceFile().getImage("Watermark.png");
    f.setLayout(new LayeredLayout());
    final Label drawing = new Label();
    f.addComponent(drawing);
    // Mutable image we will draw into (white background)
    Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
    drawing.getUnselectedStyle().setBgImage(mutableImage);
    drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);
    // Paint all the stuff
    paints(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
    // Save the collage
    Image screenshot = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
    f.revalidate();
    f.setVisible(true);
    drawing.paintComponent(screenshot.getGraphics(), true);
    String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "screenshot.png";
    try (OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
        ImageIO.getImageIO().save(screenshot, os, ImageIO.FORMAT_PNG, 1);
    } catch (IOException err) {
        err.printStackTrace();
    }
}
public void paints(Graphics g, Image background, Image watermark, int width, int height) {
    g.drawImage(background, 0, 0);
    g.drawImage(watermark, 0, 0);
    g.setColor(0xFF0000);
    // Upper left corner
    g.fillRect(0, 0, 10, 10);
    // Lower right corner
    g.setColor(0x00FF00);
    g.fillRect(width - 10, height - 10, 10, 10);
    g.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    g.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    g.drawString("HelloWorld", 848, 610);
}
This is the saved screenshot I get with the iPhone 6 skin (the payload image is smaller than the original and centered). With the Xoom skin, the payload image is still smaller than the original, but it has moved to the left.
To sum it all up: why is the screenshot saved with the Xoom skin different from the one saved with the iPhone skin? Is there any way to directly save the graphics I paint on in the paints method, so that the saved image keeps the original dimensions?
Thanks a lot to anyone who can help :-)
Cheers,
You can save an image in Codename One using the ImageIO class. Notice that you can draw a container hierarchy into a mutable image using the paintComponent(Graphics) method.
You can take either approach: draw onto a mutable image, or use layouts. Personally I always prefer layouts, as I like the abstraction, but I wouldn't say the mutable image approach is right or wrong.
Notice that if you change/repaint a lot, mutable images are slower (this will not be noticeable for regular code or in the simulator), as they are forced to use the software renderer and can't fully use the GPU.
In the previous question it seems you placed the image with a "FIT" style, which naturally drew it smaller than the containing container, and then drew the image on top of it manually... This is problematic.
One solution is to draw everything manually, but then you will need to do the "fit" aspect of the drawing yourself; a sketch follows below. If you use layouts, you should position everything based on the layouts, including your drawing/text.
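A minimal sketch of the manual route, reusing the names from the question (paints, photoBase, watermark): everything is painted at the photo's native resolution and the mutable image itself is saved, so the result no longer depends on the skin or screen size:

Image collage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
// paint directly at the photo's native size
paints(collage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "collage.png";
try (OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
    ImageIO.getImageIO().save(collage, os, ImageIO.FORMAT_PNG, 1);
} catch (IOException err) {
    err.printStackTrace();
}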

Visual C++: Good way to draw and animate filled paths on screen?

I want to use Visual C++ to animate filling paths on screen. I have done it in C# before, but I am switching to C++ for better performance and want to do more complex work in the future.
Here is the concept in C#:
In a Canvas I have a number of Paths. These paths are closed geometries combined from LineTo and QuadraticBezierTo segments.
1. First, I fill all paths with Silver.
2. Then, for each path, I fill Green from one end to the other (in an up/down/left/right direction), like a progress bar increasing from min to max. I do this by setting the path's Fill brush to a LinearGradientBrush with two stops, Green and Silver, at the same offset, then increasing that offset from 0 to 1 with a Timer (see the sketch below).
3. When a path is fully green, I continue with the next path.
4. When all paths are filled with Green, I start over from step 1.
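For reference, a minimal WPF sketch of the trick in step 2 (the path variable and the timer wiring are placeholders): two gradient stops at the same offset give a hard Green/Silver edge that the timer sweeps across the shape.

var brush = new LinearGradientBrush
{
    StartPoint = new Point(0, 0),
    EndPoint = new Point(1, 0)
};
brush.GradientStops.Add(new GradientStop(Colors.Green, 0.0));
brush.GradientStops.Add(new GradientStop(Colors.Silver, 0.0));
path.Fill = brush;

// called from a timer with t running from 0 to 1
void OnTick(double t)
{
    brush.GradientStops[0].Offset = t;
    brush.GradientStops[1].Offset = t;
}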
I want to do the same thing in Visual C++. I need to know an effective way to:
Create and store paths in a collection for reuse, because each path has quite a lot of points, and recreating them repeatedly takes a lot of CPU.
Draw all paths to a window.
Animate the fill as in steps 2-4 of the concept above.
So, what I need is:
A suitable way to create and store closed paths. Note: the paths combine points connected by functions equivalent to C#'s LineTo and QuadraticBezierTo.
A way to draw and animate filling the paths on screen.
Can you please suggest a way to do the above? (An outline of what I have to read is enough; I can study it myself.) I know the basics of Visual C++ and the Win32 GUI, and a little about device contexts (HDC) and GDI, but I am only starting to learn graphics/drawing.
Sorry about my English! If anything I explained is unclear, please let me know.
How many is "quite a lot of points"? And what is the target framerate? For low enough counts you can use GDI for this; otherwise you need hardware acceleration such as OpenGL or DirectX.
I assume 2D, so you need to:
Store your path as a list of segments,
for example like this:
struct path_segment
{
    int p0[2], p1[2], p2[2]; // control points
    int type;                // line / bezier
    float length;            // segment length in pixels or whatever
};
const int MAX = 1024;        // max number of segments
path_segment path[MAX];      // list of segments; any template like List<path_segment> path; works too
int paths = 0;               // actual number of segments
float length = 0.0;          // whole path length in pixels or whatever
Write functions to load and render path[].
The render is just a visual check that your loading is OK... for now at least.
Rewrite the render so that
it takes float t = <0,1> as an input parameter and renders the part of the path below t with one color and the rest with the other. Something like this:
int i;
float l = 0.0, q, l0 = t * length; // l0 = separation length
for (i = 0; i < paths; i++)
{
    q = l + path[i].length;
    if (q >= l0)
    {
        // split/render path[i] over <0, l0-l> with color1
        // split/render path[i] over <l0-l, q-l> with color2
        // if you need the split parameter in <0,1> then it is (l0-l)/path[i].length
        i++; break;
    }
    else
    {
        // render path[i] with color1
    }
    l = q;
}
for (; i < paths; i++)
{
    // render path[i] with color2
}
Use a backbuffer for speedup:
Render the whole path with color1 into a bitmap once. On each animation step, render only the newly added color1 geometry, and on each redraw just copy the bitmap to the screen instead of rendering the same geometry over and over. Of course, if you have zoom/pan/resize capabilities, you need to redraw the bitmap fully on each of those changes...
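A minimal Win32/GDI backbuffer sketch under these assumptions (hwnd, clientW, clientH, and the path-rendering step are placeholders; error handling omitted):

// create a backbuffer compatible with the window
HDC hdcScreen = GetDC(hwnd);
HDC hdcBack = CreateCompatibleDC(hdcScreen);
HBITMAP hbmBack = CreateCompatibleBitmap(hdcScreen, clientW, clientH);
HBITMAP hbmOld = (HBITMAP)SelectObject(hdcBack, hbmBack);

// ... render the path segments into hdcBack here ...

// on each redraw: copy the backbuffer to the window in one blit
BitBlt(hdcScreen, 0, 0, clientW, clientH, hdcBack, 0, 0, SRCCOPY);

// cleanup when done
SelectObject(hdcBack, hbmOld);
DeleteObject(hbmBack);
DeleteDC(hdcBack);
ReleaseDC(hwnd, hdcScreen);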

libGDX: Exact collision detection - polygon creation?

I've got a question about libGDX collision detection. Because it's a rather specific question, I have not found any good solution on the internet yet.
I have already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this: (skill example image omitted)
Problem
Working with rectangles in collision detection would be really frustrating for players with skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That is a rather heavy approach, but since only overlapping pixels are checked and the rest of the game is really light, it works completely fine.
At the moment my skill images are loaded as TextureRegions, which give no access to single pixels.
I have found that libGDX has a Pixmap class, which would allow such pixel checks. The problem: loading the images as Pixmaps in addition would (1) be even heavier and (2) defeat the whole purpose of the texture system.
An alternative could be to load all skills as Pixmaps only. What do you think: would that be a good way? Is it possible to draw many Pixmaps on the screen without issues and lag?
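For what it's worth, the per-pixel check itself is short once a Pixmap is available (keeping the Pixmap only for collision while the Texture still does the rendering is also an option); a sketch, where the class name is illustrative:

import com.badlogic.gdx.graphics.Pixmap;

public final class PixelCollision {
    private PixelCollision() {}

    // true if the frame-local pixel (x, y) of the skill image is not fully transparent
    public static boolean isOpaque(Pixmap pixmap, int x, int y) {
        int rgba8888 = pixmap.getPixel(x, y); // packed as RGBA8888
        return (rgba8888 & 0xff) > 0;         // the lowest byte is the alpha channel
    }
}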
Approach 2:
Another way would be to create polygons with the shapes of the skills and use those for collision detection.
a)
But how would I define a polygon shape for every single skill (there are over 150 of them)? After browsing a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows creating polygon shapes by hand and saving them as JSON files readable by a libGDX application. Now here come the difficulties:
The Physics Body Editor is tied to Box2D (which I am not using). I would either have to add the whole Box2D physics engine (which I do not need at all) just for one tiny collision check, or write a custom BodyEditorLoader, which would be a tough, complicated, and time-intensive task.
Some frames of the same skill sprite differ greatly in shape (like the second skill sprite example). With the Body Editor tool I would have to define not only the shape of every skill but the shapes of up to 12 frames of every single skill. That would be extremely time-intensive and a huge mess when implementing those dozens of polygon shapes.
b)
If there were a smooth way to automatically generate polygons from images, that could be the solution: I could simply connect every sprite frame to a generated polygon and check for collisions that way. There are a few problems, though:
Is there any tool that can generate polygon shapes from an image (without needing too much time to do so)?
I don't think such a tool (if one exists) could work directly with Textures; it would probably need Pixmaps. The Pixmaps would not need to stay loaded after the polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches, but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could share some of your ideas and knowledge.
There might be helpful classes and code in libGDX that solve my problems within seconds; as I am really new to libGDX, I just don't know a lot about it yet.
Currently I am leaning towards approach 1, pixel detection; that is how I achieved exact collision detection in my previous Android game.
What do you think?
Greetings
Felix
Personally, I feel pixel-to-pixel collision would be overkill for performance and would still produce moments where I'd feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "hitbox" to each skill. Street Fighter is a popular game which uses this technique (newer versions are in 3D, but hitbox collision is still 2D). Hitboxes can change frame-by-frame along with the animation.
(Example images omitted; search for "Street Fighter hitbox" in the meantime.)
For your axe, there could be a defined rectangle hitbox along the edge of one or both ends, or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but it also isn't overly performance-heavy like having every single pixel be its own hitbox.
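A minimal sketch of per-frame hitboxes (the class and field names are illustrative, not libGDX API; only Rectangle and Intersector come from libGDX):

import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Rectangle;

public class SkillHitboxes {
    private final Rectangle[][] frameHitboxes; // frame-local rectangles, one set per animation frame
    private final Rectangle tmp = new Rectangle();

    public SkillHitboxes(Rectangle[][] frameHitboxes) {
        this.frameHitboxes = frameHitboxes;
    }

    // true if any hitbox of the given frame, drawn at (x, y), overlaps the target
    public boolean hits(int frame, float x, float y, Rectangle target) {
        for (Rectangle local : frameHitboxes[frame]) {
            tmp.set(x + local.x, y + local.y, local.width, local.height);
            if (Intersector.overlaps(tmp, target)) {
                return true;
            }
        }
        return false;
    }
}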
I've used that exact body editor you referenced, and it can generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you, since you'd have to use Box2D, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body       the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath   the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath JSON file
 * @param width      the width of the sprite, used to scale fixtures and find the origin
 * @param height     the height of the sprite, used to find the origin
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    // json is assumed to be a Jackson ObjectMapper field
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();
            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }
            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }
            for (PolygonShape shape : polyShapes) {
                Vector2 vectors[] = new Vector2[shape.getVertexCount()];
                for (int i = 0; i < shape.getVertexCount(); i++) {
                    vectors[i] = new Vector2();
                    shape.getVertex(i, vectors[i]);
                }
                shape.set(vectors);
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA || ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }

        // ContactListener is an interface, so the remaining callbacks
        // must be implemented even when unused
        @Override
        public void endContact(Contact contact) {}

        @Override
        public void preSolve(Contact contact, Manifold oldManifold) {}

        @Override
        public void postSolve(Contact contact, ContactImpulse impulse) {}
    };
    return listener;
}
Then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2D; one sprite was 0.8f high and 1.2f wide, for example. If you set sprite width and height in pixels, the physics engine hits its built-in speed limits: http://www.iforce2d.net/b2dtut/gotchas
I don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of an image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
# edge-detect the sprite; convert so every pixel has an alpha channel
image = Image.open(filename + ".png").convert("RGBA")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

# collect the (x, y) position of every non-transparent edge pixel
points = []
for i, pixel in enumerate(image.getdata()):
    if pixel[3] > 0:
        points.append((i % image.width, i // image.width))

with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % p for p in points))
In case of updates you can find them here: export positions

Alpha channel blending issue in GLES2

I need separate blending for the alpha channel, but it's not working.
Let me explain with my test.
Alpha blending is enabled, and I have set the color and alpha write masks to true.
1) I clear my buffer to (0, 0, 0, 1) via glClearColor followed by glClear.
2) I draw a big 2D quad which should blend its color to half and "set the destination alpha values to zero" (this is what I need to achieve!).
The source color is (1, 1, 1, 0.5).
Here are the blend parameters I set:
transparencyData.color_blendfunc = BlendInfo.BLEND_FUNC_ADD
transparencyData.alpha_blendfunc = BlendInfo.BLEND_FUNC_ADD
transparencyData.colorSrc_blendfactor = BlendInfo.BLEND_FACTOR_SRC_ALPHA
transparencyData.colorDst_blendfactor = BlendInfo.BLEND_FACTOR_ONE_MINUS_SRC_ALPHA
transparencyData.alphaSrc_blendfactor = BlendInfo.BLEND_FACTOR_ZERO
transparencyData.alphaDst_blendfactor = BlendInfo.BLEND_FACTOR_ZERO
3) I draw a smaller 2D quad inside the quad above.
The source color is (1, 1, 1, 1).
Here are the blend parameters I set:
transparencyData.color_blendfunc = BlendInfo.BLEND_FUNC_ADD
transparencyData.alpha_blendfunc = BlendInfo.BLEND_FUNC_ADD
transparencyData.colorSrc_blendfactor = BlendInfo.BLEND_FACTOR_ONE_MINUS_DST_ALPHA
transparencyData.colorDst_blendfactor = BlendInfo.BLEND_FACTOR_DST_ALPHA
transparencyData.alphaSrc_blendfactor = BlendInfo.BLEND_FACTOR_ONE
transparencyData.alphaDst_blendfactor = BlendInfo.BLEND_FACTOR_ZERO
Now, while the quad rendered in step 2 appears half-blended, the desired effect of setting the destination alpha to zero is not happening.
I know this because the quad rendered in step 3 should appear if the destination alpha is zero and should not appear if the destination alpha is one.
This makes me believe that either the blended alpha is never written to the destination, or I am doing something wrong!
I am using glBlendEquationSeparate and glBlendFuncSeparate for this purpose.
Can you please shed some light on what I am trying to do?
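For reference, a sketch of the GL calls the settings above should translate to, assuming the transparencyData fields map one-to-one onto GL enums (note also that destination alpha can only be stored if the framebuffer/EGL config actually has an alpha channel):

/* step 2: halve the color, force destination alpha to zero */
glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, /* color */
                    GL_ZERO, GL_ZERO);                    /* alpha */
/* ... draw the big quad ... */

/* step 3: show the quad only where destination alpha is zero */
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA,
                    GL_ONE, GL_ZERO);
/* ... draw the small quad ... */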
