I have a question regarding changing the visibility of path segments via VG_LINE_TO_ABS and VG_MOVE_TO_ABS.
First, I've been told it's resource-expensive to create and destroy OpenVG paths, and that it's much faster to create a path once and then modify it.
Therefore, in my Init function I have
vg3DPath = vgCreatePath(VG_PATH_FORMAT_STANDARD, VG_PATH_DATATYPE_F, 1.0f, 0.0f, seg_pts, seg_pts * 2, VG_PATH_CAPABILITY_ALL);
vgAppendPathData(vg3DPath, seg_pts, (const VGubyte *)vg3DPathSegments, points);
And in my Draw function I have,
vgModifyPathCoords(vg3DPath, 0, seg_pts, points);
The number of points, seg_pts, does not change; only the locations of the points do, stored in the points array (of size 2*seg_pts, holding the X and Y coordinates of each point).
This works fine.
My issue is that vgModifyPathCoords() does not take the segment description array, vg3DPathSegments
(of size seg_pts+1, holding VG_MOVE_TO_ABS, VG_LINE_TO_ABS ... VG_LINE_TO_ABS, VG_CLOSE_PATH).
If I want to change the visibility of some segments, i.e. change some of the VG_LINE_TO_ABS commands to VG_MOVE_TO_ABS, I cannot pass the modified array to vgModifyPathCoords().
My initial thinking was to make vg3DPathSegments a private class member, so that changing values in it would automatically change those properties in the path, but it is passed as a const, so this does not work.
How can I change these properties of the path?
Is there any better approach?
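One direction I have been considering (untested sketch; the name Update3DPath is just illustrative): since vgModifyPathCoords() only touches coordinates, keep vg3DPathSegments as a class member, flip the segment commands there, and rebuild the path with vgClearPath() + vgAppendPathData() only on frames where a segment type actually changes. This keeps the VGPath object alive, so only the data is re-uploaded.
// Untested sketch: rebuild the path only when segment types change,
// otherwise keep using vgModifyPathCoords() as before.
// vg3DPath, vg3DPathSegments, points and seg_pts are class members.
void Update3DPath(bool segmentTypesChanged)
{
    if (segmentTypesChanged)
    {
        // e.g. a segment was hidden beforehand with
        // vg3DPathSegments[i] = VG_MOVE_TO_ABS;
        vgClearPath(vg3DPath, VG_PATH_CAPABILITY_ALL);
        vgAppendPathData(vg3DPath, seg_pts, (const VGubyte *)vg3DPathSegments, points);
    }
    else
    {
        vgModifyPathCoords(vg3DPath, 0, seg_pts, points);
    }
}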
The language is C++11
The platform is i.MX6, Yocto.
Thank you very much
-D
I want to use Visual C++ to animate filled paths on screen. I have done it with C# before, but I am now switching to C++ for better performance and to do more complex work in the future.
Here is the concept in C#:
In a Canvas I have a number of Path objects. These paths are closed geometries built from LineTo and QuadraticBezierTo segments.
1. First, I fill all the paths with Silver.
2. Then, for each path, I fill Green from one end to the other (up/down/left/right direction), like a progress bar increasing from min to max. I do this by setting the path's Fill brush to a LinearGradientBrush with two stops, Green and Silver, at the same offset, and then increasing the offset from 0 to 1 with a Timer.
3. When a path is fully green, I continue with the next path.
4. When all paths are filled with Green, I go back to the first step.
I want to do the same thing in Visual C++. I need to know an efficient way to:
Create and store paths in a collection for reuse. Because the paths have quite a lot of points, recreating them repeatedly takes a lot of CPU.
Draw all paths to a window.
Animate the fill as in steps 2, 3 and 4 of the concept above.
So, what I need is:
A suitable way to create and store closed paths. Note: the paths are made of points connected by the equivalents of the C# LineTo and QuadraticBezierTo functions.
Draw the paths to the screen and animate their fill.
Can you please suggest one way to do the above? (Just outline what I have to read; I can study it myself.) I know the basics of Visual C++ and Win32 GUI programming, and a little about device contexts (HDC) and GDI, but I am only starting to learn graphics/drawing.
Sorry about my English! If anything I explained is not clear, please let me know.
How many is "quite a lot of points"? What is the target framerate? For low enough counts you can use GDI for this; otherwise you need HW acceleration like OpenGL or DirectX.
I assume 2D, so you need:
Store your path as a list of segments.
For example, like this:
struct path_segment
{
    int p0[2],p1[2],p2[2];  // points
    int type;               // line/bezier
    float length;           // length in pixels or whatever
};
const int MAX=1024;         // max number of segments
path_segment path[MAX];     // list of segments; can use any template like List<path_segment> path; instead
int paths=0;                // actual number of segments
float length=0.0;           // whole path length in pixels or whatever
Write functions to load and render path[].
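For the load side, filling in path[i].length and the whole length could look something like this (just a sketch; it assumes type==0 means line and type==1 means quadratic Bezier, which is my own encoding choice, and approximates the Bezier length by sampling):
#include <cmath>

// compute path[i].length for every segment and the whole path length
void compute_lengths()
{
    length = 0.0f;
    for (int i = 0; i < paths; i++)
    {
        path_segment &s = path[i];
        if (s.type == 0)                        // straight line p0 -> p1
        {
            float dx = float(s.p1[0] - s.p0[0]);
            float dy = float(s.p1[1] - s.p0[1]);
            s.length = sqrtf(dx*dx + dy*dy);
        }
        else                                    // quadratic Bezier p0,p1,p2: sample it
        {
            const int N = 16;                   // samples per curve
            float px = float(s.p0[0]), py = float(s.p0[1]);
            s.length = 0.0f;
            for (int j = 1; j <= N; j++)
            {
                float t = float(j)/float(N), u = 1.0f - t;
                float x = u*u*s.p0[0] + 2.0f*u*t*s.p1[0] + t*t*s.p2[0];
                float y = u*u*s.p0[1] + 2.0f*u*t*s.p1[1] + t*t*s.p2[1];
                s.length += sqrtf((x - px)*(x - px) + (y - py)*(y - py));
                px = x; py = y;
            }
        }
        length += s.length;
    }
}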
The render is just a visual check that your load is OK ... for now at least.
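A quick GDI render for that check might look like this (again only a sketch, drawing into a window HDC with whatever pen is currently selected and flattening the Beziers by uniform sampling; same type encoding assumption as above):
#include <windows.h>

// draw every segment with the currently selected pen
void render_path(HDC hdc)
{
    for (int i = 0; i < paths; i++)
    {
        const path_segment &s = path[i];
        MoveToEx(hdc, s.p0[0], s.p0[1], NULL);
        if (s.type == 0)                        // line
        {
            LineTo(hdc, s.p1[0], s.p1[1]);
        }
        else                                    // quadratic Bezier, flattened
        {
            const int N = 16;
            for (int j = 1; j <= N; j++)
            {
                float t = float(j)/float(N), u = 1.0f - t;
                float x = u*u*s.p0[0] + 2.0f*u*t*s.p1[0] + t*t*s.p2[0];
                float y = u*u*s.p0[1] + 2.0f*u*t*s.p1[1] + t*t*s.p2[1];
                LineTo(hdc, int(x + 0.5f), int(y + 0.5f));
            }
        }
    }
}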
Rewrite the render so that
it takes float t=<0,1> as an input parameter, rendering the part of the path below t with one color and the rest with the other. Something like this:
int i;
float l=0.0,q,l0=t*length; // separation length
for (i=0;i<paths;i++)
{
    q=l+path[i].length;
    if (q>=l0)
    {
        // split/render path[i] over <0,l0-l> with color1
        // split/render path[i] over <l0-l,path[i].length> with color2
        // if you need the split parameter in <0,1> then ts=(l0-l)/path[i].length;
        i++; break;
    }
    else
    {
        // render path[i] with color1
    }
    l=q;
}
for (;i<paths;i++)
{
    // render path[i] with color2
}
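The "split/render" part can be done by cutting the segment at the split parameter ts: a line by linear interpolation, a quadratic Bezier by de Casteljau subdivision. A rough helper (illustrative only, same type encoding assumption as before; it returns the front part, the back part is symmetric; note that for Beziers the curve parameter is not exactly proportional to arc length, which is usually acceptable for this kind of animation):
// return the part of segment s from parameter 0 to ts (ts in <0,1>)
path_segment split_front(const path_segment &s, float ts)
{
    path_segment r = s;
    if (s.type == 0)                            // line: move the end point
    {
        for (int k = 0; k < 2; k++)
            r.p1[k] = int(s.p0[k] + ts*(s.p1[k] - s.p0[k]));
    }
    else                                        // quadratic Bezier: de Casteljau at ts
    {
        for (int k = 0; k < 2; k++)
        {
            float a = s.p0[k] + ts*(s.p1[k] - s.p0[k]);   // p0 -> p1
            float b = s.p1[k] + ts*(s.p2[k] - s.p1[k]);   // p1 -> p2
            r.p1[k] = int(a);                             // new control point
            r.p2[k] = int(a + ts*(b - a));                // new end point, lies on the curve
        }
    }
    r.length = ts*s.length;                     // exact for lines, an approximation for Beziers
    return r;
}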
Use a backbuffer for speedup.
So render the whole path (color2, plus whatever is already color1) into some bitmap once. On each animation step render only the newly added color1 part into that bitmap, and on each redraw just copy the bitmap to the screen instead of rendering the same geometry over and over. Of course, if you have zoom/pan/resize capabilities, you need to redraw the bitmap fully on each of those changes ...
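A minimal GDI backbuffer setup could look like this (a sketch; hwnd, width and height stand for your window and its client size, and the g_ names are mine):
#include <windows.h>

HDC     g_hdcBack = NULL;   // memory DC that keeps the rendered path
HBITMAP g_bmpBack = NULL;

// create the backbuffer once (and again after a resize)
void backbuffer_create(HWND hwnd, int width, int height)
{
    HDC hdcWin = GetDC(hwnd);
    g_hdcBack  = CreateCompatibleDC(hdcWin);
    g_bmpBack  = CreateCompatibleBitmap(hdcWin, width, height);
    SelectObject(g_hdcBack, g_bmpBack);
    ReleaseDC(hwnd, hdcWin);
    // render the whole path with color2 into g_hdcBack here, once;
    // each animation step then only adds the newly reached color1 part
}

// on each redraw just copy the bitmap to the window
void backbuffer_present(HDC hdcWin, int width, int height)
{
    BitBlt(hdcWin, 0, 0, width, height, g_hdcBack, 0, 0, SRCCOPY);
}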
If I do:
actor.setOrigin(0, 0);
actor.setRotation(45);
actor.setOrigin(actor.getWidth() / 2, actor.getHeight() / 2);
It appears that after the last setOrigin call, the actor gets repositioned to the location it would have been at if actor.setRotation(45) had been called after its latest origin was set.
What do I do to make it so that the latest origin of the actor is only used for future "scale" and "rotation" actions?
Okay, so I looked in the source code of libgdx, and I'll give you the short answer.
Basically, when you set the origin or the rotation, you just change variables named "originX", "originY" and "rotation". So every call to setOrigin overwrites the values set by previous calls.
And every time you draw the actor, it recalculates the bounds using the current values.
To be clear, setOrigin looks like this:
public void setOrigin (float originX, float originY) {
    this.originX = originX;
    this.originY = originY;
}
So the previous setOrigin is lost.
The actor's position itself does not change in your case, but the position of the displayed sprite or texture will.
It is calculated in this order:
Position -> Origin -> Scale -> Rotation
See: Sprite.java (method: "getVertices ()")
When you change the Origin point of an already rotated element, the point in the plane around which the rotation occurs changes, and the sprite will be drawn in a different place (the actor's position in this case does not change).
I've got a two-dimensional array of values that I want to visualize in 3D, and I'm using SceneKit under OS X for it. I've done it in a clumsy manner by using each column as a point on the X axis, each row as a point on the Z axis, and each value as a normalized point on the Y axis -- I place a sphere at the vector defined by each data point. It works but it doesn't look too good.
I've also done this by building a mesh of lines based on @Matthew's function in Drawing a line between two points using SceneKit (the answer he posted, not the original question). For each point I use his function to draw two lines - one between my current point and the next point to the right and another between my current point and the next point towards the front (except when there is no additional column/row, of course).
Using the second method, my results look much better... however the performance is quite hideous! It takes quite a long time to complete the initial rendering, and if I use a trackpad/mouse to rotate or translate the scene, I might as well get a cup of coffee to wait until my system is usable again (and this is not much hyperbole). Using the sphere method, things render and update very quickly.
Any advice on how to improve the performance when using the lines method? (Note that I am not trying to add both lines and spheres at the same time.) Code-wise, the only difference between the two approaches is which of the following methods gets called (and that for each point, addPixelAt... is called once, but addLineAt... is called twice for most points).
- (SCNNode *)addPixelAtRow:(CGFloat)row Column:(CGFloat)column size:(CGFloat)size color:(NSColor *)color
{
    CGFloat radius = 0.5;
    SCNSphere *ball = [SCNSphere sphereWithRadius:radius*1.5];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [ball setMaterials:@[material]];
    SCNNode *ballNode = [SCNNode nodeWithGeometry:ball];
    [ballNode setPosition:SCNVector3Make(column, size, row)];
    [_baseNode addChildNode:ballNode];
    return ballNode;
}
- (SCNNode *)addLineFromRow:(CGFloat)row1 Column:(CGFloat)column1 size:(CGFloat)size1
                     toRow2:(CGFloat)row2 Column2:(CGFloat)column2 size2:(CGFloat)size2 color:(NSColor *)color
{
    SCNVector3 positions[] = {
        SCNVector3Make(column1, size1, row1),
        SCNVector3Make(column2, size2, row2)
    };
    int indices[] = {0, 1};
    SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
    NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                                primitiveType:SCNGeometryPrimitiveTypeLine
                                                               primitiveCount:1
                                                                bytesPerIndex:sizeof(int)];
    SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [line setMaterials:@[material]];
    SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
    [_baseNode addChildNode:lineNode];
    return lineNode;
}
From the data that you've shown in your question, I would say that your main problem is the number of draw calls. Yours is in the tens of thousands, which is way too many; it should probably be a lot closer to ~100.
The reason you have so many draw calls is that you have so many distinct objects in your scene (one per line). The better (but more advanced) solution would probably be to generate a single geometry element for the entire mesh consisting of all the lines. If you want to achieve the same rendering with that mesh (with a color from cold to warm based on the height), you could do that in a shader modifier.
However, in your case I would start by flattening all the lines, since that is the smallest code change and should still give a significant performance improvement.
(Optimizing performance is always an iterative process. Once you fix one thing there will be another thing which is the most expensive operation. Without your code I can only say what would help with the current performance problem)
Create an empty node (without adding it to your scene) and generate all the lines, adding them to this node. Then create a flattened copy of that node by calling flattenedClone on the node that contains all the lines:
SCNNode *nodeWithAllTheLines = [SCNNode node];
// create all the lines and add them to it...
SCNNode *flattenedNode = [nodeWithAllTheLines flattenedClone];
[_baseNode addChildNode:flattenedNode];
When you do this you should see a significant drop in the number of draw calls (the number after the diamond in the statistics) and hopefully a big increase in performance.
I'm new to XNA and would like to develop a light-weight 2D engine on top of it, with the entities organized into a parent-child hierarchy. I think of using matrices when drawing children, because their position, rotation and scale depend on their parent.
If I use SpriteBatch.Begin(), my rectangles are drawn on the screen, but when I change it to:
this.DrawingMatrix = Matrix.Identity;
this.SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullClockwise, null, this.DrawingMatrix);
nothing is drawn anymore. I even tried new Matrix() or Matrix.CreateTranslation(0, 0, 0) for DrawingMatrix.
My first question is: why doesn't it work? I'm not working with any camera or viewport.
Secondly, before drawing an entity, I call PreDraw to transform the matrix (I then reset it to its original state in PostDraw):
protected virtual void PreDraw(Engine pEngine)
{
    pEngine.DrawingMatrix *=
        Matrix.CreateTranslation(this.X, this.Y, 0) *
        Matrix.CreateScale(this.ScaleX, this.ScaleY, 1) *
        Matrix.CreateRotationZ(this.Rotation);
}
Please point out how to correct the above code. Also, I need to scale not around the origin but around ScaleCenterX and ScaleCenterY; how can I achieve this?
ADDED: Here is an example of my engine's draw process:
Call this code:
this.DrawingMatrix = Matrix.CreateTranslation(0, 0, 0);
this.SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullClockwise, null, this.DrawingMatrix);
Call PreDraw(), which is:
protected virtual void PreDraw(Engine pEngine)
{
    pEngine.DrawingMatrix *=
        Matrix.CreateTranslation(this.X, this.Y, 0) *
        Matrix.CreateScale(this.ScaleX, this.ScaleY, 1) *
        Matrix.CreateRotationZ(this.Rotation);
}
Call Draw(), for example, in my Rect class:
protected override void Draw(Engine pEngine)
{
    pEngine.SpriteBatch.Draw(pEngine.RectangleTexture, new Rectangle(0, 0, (int)this.Width, (int)this.Height), new Rectangle(0, 0, 1, 1), this.Color);
}
If I replace the above Begin code with this.SpriteBatch.Begin(), the rectangle is drawn correctly, so I guess the problem is with the matrix.
The first issue is a simple bug: the default for SpriteBatch is CullCounterClockwise, but you have specified CullClockwise, causing all your sprites to be back-face-culled. You can pass null if you just want to use the default render states - you don't need to specify them explicitly.
(You would need to change the cull mode if you used a negative scale.)
To answer your second question: you need to translate "back" so that the scaling origin (your ScaleCenterX and ScaleCenterY) lands at the world origin (0,0), because transformations always happen around (0,0). So the usual order is: translate the sprite origin back to the world origin, scale, rotate, then translate to place the sprite origin at the desired world position.
Also, I hope that your PostDraw is not applying the reverse transformations (you made it sound like it does). That is very likely to cause precision problems. You should save and restore the matrix instead.
I'm working with the Graphviz API in Visual C++. Before I call gvLayout to calculate node coordinates, I have to set the node width and height (for each node in the graph). The problem is that the ND_width and ND_height macro approach just does not seem to have any effect, while setting the same values with agsafeset works as expected. I don't want to use string-based APIs like agsafeset, because I'm processing a bunch of nodes in a loop and would prefer to set the width and height values with ND_width(pNode) and ND_height(pNode) (or directly as pNode->u.width and pNode->u.height). What am I doing wrong?
The following code does not work (it has no effect):
const DWORD dwPixelsPerInch = 96;
ND_width(pGvzNode) = (double)dwWidthInPixels / dwPixelsPerInch;
ND_height(pGvzNode) = (double)dwHeightInPixels / dwPixelsPerInch;
While the following code works:
CStringA csaValue;
csaValue.Format("%f", (double)dwWidthInPixels / dwPixelsPerInch);
agsafeset(pGvzNode, "width", csaValue.GetBuffer(), "");
csaValue.Format("%f", (double)dwHeightInPixels / dwPixelsPerInch);
agsafeset(pGvzNode, "height", csaValue.GetBuffer(), "");
P.S.: I use Graphviz solely for layout, I do custom rendering, so all I need is calculation of nodes' and edges' coordinates (in pixels) given nodes' width and height (in pixels). I'm setting these values just before calling gvLayout (for "dot"). I'm setting agsafeset(pGvzNode, "fixedsize", "1", "") as well.
agsafeset sets the node attributes, which gvLayout reads to calculate the layout, while ND_width and ND_height expose the computed layout sizes. You can write to ND_width and ND_height before gvLayout is called, but whatever you set there will be overwritten by gvLayout from the attributes. So you must use agsafeset; writing ND_width and ND_height directly cannot work.
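For completeness, a small wrapper along those lines might look like this (only a sketch; the helper names are mine, it assumes the usual gvc.h include and the cgraph node iteration API, reuses the 96 DPI conversion and the "fixedsize" flag from the question, and uses one size for all nodes just to keep it short):
#include <graphviz/gvc.h>
#include <cstdio>

// push pixel sizes into the attributes that gvLayout actually reads
static void set_node_size(Agnode_t *node, double widthPx, double heightPx, double dpi = 96.0)
{
    char buf[32];
    snprintf(buf, sizeof(buf), "%f", widthPx / dpi);
    agsafeset(node, const_cast<char *>("width"), buf, const_cast<char *>(""));
    snprintf(buf, sizeof(buf), "%f", heightPx / dpi);
    agsafeset(node, const_cast<char *>("height"), buf, const_cast<char *>(""));
    agsafeset(node, const_cast<char *>("fixedsize"), const_cast<char *>("1"), const_cast<char *>(""));
}

// call this for every node before gvLayout
static void set_all_node_sizes(Agraph_t *g, double widthPx, double heightPx)
{
    for (Agnode_t *n = agfstnode(g); n != NULL; n = agnxtnode(g, n))
        set_node_size(n, widthPx, heightPx);
}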