I have read a lot about influencing the initial animation of the force layout but am afraid I have not understood it yet.
I have found (and could implement) this example about how to effectively "stop" it.
But my question is: is it possible to control it, i.e. to influence how long it takes until the force stops?
From the documentation it seems that alpha is the parameter to change, but it does not make a difference (I tried negative values, zero, and positive values without any noticeable difference).
Here is a jsfiddle of what I am trying to do.
var force = d3.layout.force()
.nodes(d3.values(datax.nodes))
.links(datax.links)
.size([xViewPortArea.Width, xViewPortArea.Height])
.linkDistance(xGraphParameters.xLinkDistance)
.charge(xGraphParameters.xCharge)
.on("tick", tick)
.alpha(-5) // HERE
.start();
My questions:
Which value of alpha would actually influence the number of iterations? (I thought that is what is meant by "*If value is nonpositive, and the force layout is running, this method stops the force layout on the next tick and dispatches an "end" event*" in the documentation.)
In this posting a function is proposed by @JohnS which apparently can help, but I have not understood where one is supposed to call it.
P.S.: another cool option is to have an image on the screen and then compute the optimal layout in the background, like here. But I gave up trying to implement it.
One way to cool down the force layout is to force the tick events:
var k = 0;
while ((force.alpha() > 1e-2) && (k < 150)) {
    force.tick();
    k = k + 1;
}
The alpha value measures the temperature of the force layout; lower values indicate that the layout is more stable. The number of ticks needed to consider it stable will depend on the number of nodes; I have had good results with 50-200. The friction coefficient will also help to stabilize the force, but the layout will be less optimal.
This is a small equation that's giving me a headache; I'm close to solving it, but... ugh.
I'll try to be prompt.
I have this:
As you can see, it is a slider that goes from 0.1x to 3x difficulty.
I have other sliders like this, for audio for example that just go from 0% to 100%.
That works fine.
However, with a minimum value greater than 0 my math breaks a bit, and I'm stuck unable to slide the bar all the way to the bottom, because the minimum isn't 0 but 0.1 instead.
I want to make it to where even if the minimum value isn't 0, the bar goes all the way to empty.
Here is the relevant equations/calculations at play:
var percent = val/val_max
var adjustment = ((x2-x1)*val_min)-((((x2-x1)*val_min)*percent)*val_min)
var x2_final = (x1+((x2-x1)*percent))-adjustment
percent is the percentage of the current value relative to the max value (0.0 to 1.0)
adjustment is trying to find how much to additionally add/remove from x2_final based on the current value to keep the slider properly scaled when the minimum value isn't 0. (This is where the problem is)
x2_final is the final (in pixels) coordinate where the slider should stop based on the previous calculations.
Initially the slider would over fill when full (that was fixed by the current adjustments) but now the slider doesn't go all the way empty and leaves a "0.1" worth of slider.
I don't usually use forums or Stack Overflow, as I try to figure things out on my own, so I apologize if my explanation needs work.
Here is what the slider looks like when I set it as low as it will go:
Also, if I have more math-related problems, are there any good tools I can use to simulate my calculations like this, so people can run them for themselves?
Thanks in advance!
Solved it!
So my equation was a bit wrong for the adjustment.
As now it looks like this:
var percent = val/val_max
var percent_min = val_min/val_max
var adjustment = ((x2-x1)*percent_min)-(((x2-x1)*percent_min)*percent)
var x2_final = (x1+((x2-x1)*percent))-adjustment
And now the slider properly fills to full as well as empties to the bottom,
regardless of what the minimum value is.
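To double-check the corrected formula numerically, here is the same math as a standalone function (a sketch; the sample values for x1, x2, val_min, and val_max are mine, not from the game). One caveat: with a nonzero minimum, the fill at val_min lands within a fraction of a pixel of x1 rather than exactly on it, which is visually indistinguishable:

```javascript
// Corrected slider fill from the post: the adjustment shifts the
// fill by the minimum value's share of the range, scaled down as
// the slider approaches full.
function sliderFill(val, val_min, val_max, x1, x2) {
  var percent = val / val_max;
  var percent_min = val_min / val_max;
  var adjustment = ((x2 - x1) * percent_min) - (((x2 - x1) * percent_min) * percent);
  return (x1 + ((x2 - x1) * percent)) - adjustment;
}

// Difficulty slider from 0.1x to 3x, drawn from pixel 0 to pixel 290.
console.log(sliderFill(3.0, 0.1, 3.0, 0, 290)); // full: 290
console.log(sliderFill(0.1, 0.1, 3.0, 0, 290)); // nearly empty: under 1 pixel
```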
But I also noticed that the bar wasn't following my mouse as I was sliding it.
So to fix that I had to go later in my code where I update the current value of the slider as the user clicks and changes it...
var mouse_x_percent = round2((mouse_x-x1+adjustment)/(x2-x1+adjustment),2)
Just had to add the adjustment to both sides of that calculation (getting the mouse_x's percentage relative to the beginning and end of the slider itself which would then be used to calculate the new value by multiplying the mouse_x_percent with the max value).
(round2() takes two numbers: the first is the number to round, and the second is the decimal place to round to.)
Always love solving a problem, I hope this helps someone else.
I'm trying to set a slider (actually a kitchen timer) using a pan gesture in Ionic 2; see: http://ionicframework.com/docs/v2/components/#gestures
The slider/timer has an open upper-bound that could be set by a panright, but go down to zero on a panleft.
How can I best translate the pan event to be speed-sensitive to allow an upper bounds near 36000 but sensitive enough to set increments as small as 10? The max deltaX would be around 400px, but I suppose the user could use a few pan gestures to reach a large value.
Is there some ready-built easing function that I can use to achieve this?
Just thinking abstractly:
You could detect the magnitude between 2 consecutive pan events. If it's small enough, then you can allow the smaller granularity of incrementation. On the other hand, if you decide it's big enough, then you can allow for larger increments. In theory you can do this check continuously during the pan event(s), even affecting the upper bound dynamically (though I'm not sure why this is relevant).
I don't see why you need to worry about the upper bound in this case. Can you explain why?
Using Riron's comment, you can just multiply the velocityX by the deltaX. I saw in a quick test that the velocity can easily get higher than 8 and lower than 1 (0.5, for example). The delta can be in the tens or in the hundreds, so it allows for both required sensitivities.
Here's the code for that:
hammerObj.on("panright panleft", function(ev) {
    if (ev.type == "panright") {
        timer += ev.velocityX * ev.deltaX;
        //timer = Math.min(36000, ev.velocityX * ev.deltaX + timer); for an upper bound of 36000 instead of unbounded
    } else if (ev.type == "panleft") {
        timer = Math.max(0, timer - (ev.velocityX * ev.deltaX));
    }
});
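The increment logic can also be pulled out and checked on its own, without Hammer.js (a sketch; velocityX and deltaX mirror the Hammer event fields, and the sample numbers are just plausible values like those from the quick test described above):

```javascript
// Speed-sensitive increment: slow, short pans give fine-grained
// steps; fast, long pans give coarse ones.
function panIncrement(velocityX, deltaX) {
  return Math.abs(velocityX * deltaX);
}

function applyPan(timer, type, velocityX, deltaX) {
  var step = panIncrement(velocityX, deltaX);
  if (type === 'panright') return timer + step; // unbounded upward
  return Math.max(0, timer - step);             // clamped at zero
}

console.log(applyPan(100, 'panright', 0.5, 20));  // slow pan:  100 + 10  = 110
console.log(applyPan(100, 'panright', 8, 400));   // fast pan:  100 + 3200 = 3300
console.log(applyPan(100, 'panleft', 0.5, -400)); // clamped:   max(0, 100 - 200) = 0
```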
So I'm creating some animations for a client and I've been playing with two.js because I liked its SVG capabilities. Unfortunately I'm having issues with performance.
I'm plotting 100 circles on the screen. Each circle contains 6 further circles for a total of 700 circles being rendered on load.
The circles react to mouse movements on the x-axis and cascade slowly downwards on the y-axis.
Currently in Chrome it's only running at around 32 FPS. In Firefox it barely even works!
I've also tried two.js's webgl renderer but while there is a slight performance increase, the elements just don't look as good as SVG.
The source is here: https://github.com/ashmore11/verifyle/tree/develop
path to file: src/coffee/elements/dots
Let me know if there's any more info I can provide.
What you've made is very beautiful!
Hmmm, so there are many factors as to why the performance isn't as stellar as you'd like.
The size of the drawable area matters (i.e., the <svg /> or <canvas /> element). The bigger the area, the more pixels to render.
The number of elements added to the DOM. Yes, there are 100 dots, but each dot is comprised of many elements.
Of those elements, the number of changes an element has on any given frame.
Finally, of the elements changing, how many operations each change involves (i.e., opacity, scale, translation, etc.).
These considerations compound in most computer generated imagery to affect real-time rendering. The goal is basically to reduce the load on any one of those dimensions and see if it's enough to give you the performance you're looking for. You're gonna have to get creative, but there are options. Here are a few things off the top of my head that you can do to try to speed things up:
Reducing the number of shapes will make it run faster ^^
For something like this Two.Types.canvas might be fastest.
Instead of moving each dot split the translation into 2 or 3 groups and move them based on the container groups. Kind of like a foreground and background parallax.
If you're sticking with Two.Types.svg try animating only a handful of dots on any given frame. This way you're not doing entire traversal of the whole scene every frame and each dot isn't animating every frame.
Pseudo code for this might look like:
// ... some array of dots inferred ... //
var index = 0, batchSize = 12;
two.bind('update', function() {
    var now = Date.now(); // sample the clock each frame so the animation advances
    for (var i = index; i < Math.min(index + batchSize, dots.length); i++) {
        var dot = dots[i];
        dot.scale = (Math.sin(now / 100) + 1) / 4 + 0.75;
    }
    index = (index + batchSize) % dots.length; // next frame animates the next batch
});
If none of these give you anything substantial, then I would highly recommend turning each dot into a texture and drawing those textures either directly through canvas2d or with WebGL and a library. Three.js will be able to render hundreds of thousands of these: http://threejs.org/examples/#webgl_particles_sprites You'll have to rethink a lot of how the texture itself is generated and how the opacity varies between the lines, and of course it'll look slightly different, as you described in your question. Bitmap is different from vector >_<
Hope this helps!
I want to write a paint program in the style of MS Paint.
For painting things on screen when the user moves the mouse, I have to wait for mouse move events and draw whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
        positionOld = positionNew
Now my question: interpolating with straight line segments looks too jagged for my taste, can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop implement?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive? The GUI framework I'm using is wxWidgets.
(Programming language: Haskell, but that's irrelevant here)
EDIT: Clarification: I want something that looks smoother than straight line segments, see the picture (original size):
EDIT2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1) -- wxBitmap
dc <- memoryDCCreate -- wxMemoryDC
memoryDCSelectObject dc bitmap
...
-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
...
line dc posOld posNew [color := white
, penJoin := JoinRound
, penWidth := 2]
repaint sw -- a wxScrolledWindow
-- handle paint event
onPaint ... = do
...
-- draw bitmap on the wxScrolledWindow
drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is why I'm getting a rather low frequency of mouse move events.
Live demos
version 1 - more smooth, but more changing while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is spline interpolation of the points: store the coordinates of the points and then compute a spline through them.
I took the solution demonstrated here and modified it. They computed the spline after you stop drawing; I modified the code so that it draws immediately. You might notice, though, that the spline changes during drawing. For a real application, you will probably need two canvases: one with the old drawings and one with just the current stroke, which will change constantly until your mouse stops.
Version 1 uses spline simplification (it deletes points that are close to the line), which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable result (and is computationally less expensive).
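The point-deletion step of version 1 can be sketched as a simple distance filter: drop any sampled point that is too close to the last point kept. This is an assumption about how the linked demo simplifies; the real code may differ:

```javascript
// Keep a point only if it is at least minDist away from the last
// point we kept; fewer points give smoother (but less stable) splines.
function simplify(points, minDist) {
  if (points.length === 0) return [];
  var kept = [points[0]];
  for (var i = 1; i < points.length; i++) {
    var last = kept[kept.length - 1];
    var dx = points[i].x - last.x;
    var dy = points[i].y - last.y;
    if (Math.sqrt(dx * dx + dy * dy) >= minDist) kept.push(points[i]);
  }
  return kept;
}

var trace = [{x:0,y:0},{x:1,y:0},{x:5,y:0},{x:6,y:0},{x:12,y:0}];
console.log(simplify(trace, 4).length); // keeps (0,0), (5,0), (12,0) -> 3 points
```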
You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points available for the calculation.
As I see it, the problem of the jagged edge of a freehand curve when the mouse is moved very fast is not solved! In my opinion you either need to work around the polling frequency of the mousemove event in the system (i.e. use a different mouse driver or similar), or use math: some kind of algorithm to accurately bend the straight line between two polled points. For a clear comparison, look at how a freehand line is drawn in Photoshop versus MS Paint. Thanks, folks ;)
I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
// screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if (event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this:
which seems significantly smoother than your example.
Interpolating mouse movements with line segments is fine, GIMP does it that way, too, as the following screenshot from a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. wxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich. Namely, drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30 fps. Copying a double buffer is no problem for modern machines, but wxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.
I agree with harviz - the problem isn't solved. It should be solved on the operating system level by recording mouse movements in a priority thread, but no operating system I know of does that. However, the app developer can also work around this operating system limitation by interpolating better than linear.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of putting a line between two points A and D, look at the point P preceding A and use a cubic spline with the following control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we arrive at the end point at the same angle the line interpolation would, but leave the starting point at the same angle the previous segment came in, making the joint smooth. We use the length of the segment for both control-point distances so the size of the bend fits its proportion.
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness here, which might be alleviated by using both the previous and the next point to adjust both angles, but that would also mean drawing one segment less than what one has got. I find this result already satisfactory, so I didn't try.
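The control-point construction described above can be written out directly (a sketch following the formulas in this answer; the point names match the text):

```javascript
// Given previous point P, segment start A, and segment end D, build
// the cubic Bezier control points B and C described above:
//   v = A - P,  w = D - A,  w' = w / 4,  v' = v * |w| / |v| / 4
//   B = A + v',  C = D - w'
function controlPoints(P, A, D) {
  var vx = A.x - P.x, vy = A.y - P.y;
  var wx = D.x - A.x, wy = D.y - A.y;
  var lenV = Math.sqrt(vx * vx + vy * vy);
  var lenW = Math.sqrt(wx * wx + wy * wy);
  var scale = lenW / lenV / 4; // v' = v * |w| / |v| / 4
  return {
    B: { x: A.x + vx * scale, y: A.y + vy * scale },
    C: { x: D.x - wx / 4, y: D.y - wy / 4 } // w' = w / 4
  };
}

// Straight-through case: P, A, D collinear, so both control points
// stay on the line and the spline degenerates to a straight segment.
var cp = controlPoints({x: -4, y: 0}, {x: 0, y: 0}, {x: 8, y: 0});
console.log(cp.B); // { x: 2, y: 0 }
console.log(cp.C); // { x: 6, y: 0 }
```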
I am looking for an efficient algorithm for checking whether one point is near another in 3D.
sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) < radius
This doesn't seem to be too fast, and actually I don't need such big accuracy. How else could I do this?
Square the distance, and drop the call to sqrt(), that's much faster:
(x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 < radius * radius
Of course, in many cases at least radius * radius can be computed ahead of time and stored as e.g. squaredRadius.
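A minimal sketch of the squared-distance test (function and variable names are mine):

```javascript
// Compare squared distance against squared radius; no sqrt needed,
// since both sides of the inequality are non-negative.
function isNear(p1, p2, squaredRadius) {
  var dx = p2[0] - p1[0];
  var dy = p2[1] - p1[1];
  var dz = p2[2] - p1[2];
  return dx * dx + dy * dy + dz * dz < squaredRadius;
}

var squaredRadius = 2 * 2; // precompute once for radius 2
console.log(isNear([0, 0, 0], [1, 1, 1], squaredRadius)); // true  (d^2 = 3 < 4)
console.log(isNear([0, 0, 0], [3, 0, 0], squaredRadius)); // false (d^2 = 9)
```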
Well, if you can be content with a cube distance rather than a spherical distance, a pretty naive implementation would look like this:
Math.Abs(x2-x1) < radius && Math.Abs(y2-y1) < radius && Math.Abs(z2-z1) < radius
You can use your own favourite methods of optimising Math.Abs if it proves a bottleneck.
I should also add that if one of the dimensions generally varies less than other dimensions then putting that one last should lead to a performance gain. For example if you are mainly dealing with objects on a "ground" x-y plane then check the z axis last, as you should be able to rule out collisions earlier by using the x and y checks.
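A sketch of the cube test with that ordering applied (assuming z is the least discriminating axis, as in the ground-plane example):

```javascript
// Axis-aligned cube test with short-circuit evaluation: the z check
// runs last, so x/y rejections bail out before it is evaluated.
function isNearCube(p1, p2, radius) {
  return Math.abs(p2.x - p1.x) < radius
      && Math.abs(p2.y - p1.y) < radius
      && Math.abs(p2.z - p1.z) < radius;
}

console.log(isNearCube({x: 0, y: 0, z: 0}, {x: 1, y: 1, z: 1}, 2)); // true
console.log(isNearCube({x: 0, y: 0, z: 0}, {x: 5, y: 0, z: 0}, 2)); // false (rejected on x)
```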
If you do not need big accuracy maybe you can check if 2nd point is inside cube (side length '2a'), not sphere, where the 1st point is in center:
|x2-x1|<a && |y2-y1|<a && |z2-z1|<a
Because of pipelined processor architectures, it is nowadays in most cases more efficient to do the FPU calculation twice than to branch once. In the case of a branch misprediction you stall for ages (in CPU terms).
So, I would rather go the calculation-way, not the branching-way.
If you don't need the accuracy, you can check whether you are in a cube rather than a sphere.
There are options here as well: you can pick the cube that encloses the sphere (no false negatives), the cube with the same volume as the sphere (some false positives and negatives, but the maximum error is minimized), or the cube that is contained within the sphere (no false positives).
This technique also extends well to higher dimensions.
If you want to get all the points near another one, some form of spatial indexing may also be appropriate (a k-d tree, perhaps).
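The three cube choices can be made concrete by their half-widths for a sphere of radius r: the enclosing cube uses r, the equal-volume cube uses r·cbrt(4π/3)/2 ≈ 0.806·r, and the inscribed cube uses r/√3 ≈ 0.577·r. These follow from the sphere's volume and the cube's space diagonal (a sketch; worth double-checking before use):

```javascript
// Half-widths of the three candidate cubes for a sphere of radius r.
function cubeHalfWidths(r) {
  return {
    enclosing: r,                                      // no false negatives
    equalVolume: r * Math.cbrt((4 / 3) * Math.PI) / 2, // max error minimized
    inscribed: r / Math.sqrt(3)                        // no false positives
  };
}

var hw = cubeHalfWidths(1);
console.log(hw.inscribed);   // ~0.577
console.log(hw.equalVolume); // ~0.806
console.log(hw.enclosing);   // 1
```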
If you have to check against many other points, you could consider using a spatial ordering method to quickly discover points, that are near a certain location. Have a look at this link:
wiki link
If we were going to optimise this because it was being run billions of times, I would solve it using unwind's method and then parallelize it using SIMD. There are a few different ways to do that: you might do all the subtractions (x2-x1, y2-y1, z2-z1) in one op, and then the multiplies in one op as well. That way you parallelize inside the method without redesigning your algorithm.
Or you could write a bulk version which calculates (x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 - r^2 for many elements and stores the results in an array. You can maybe get better throughput, but it means redesigning your algorithm, and it depends what the tests are used for.
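A plain-JavaScript sketch of that bulk version (no SIMD intrinsics here, but a structure-of-arrays layout like this is what a SIMD loop would consume):

```javascript
// Compute (dx^2 + dy^2 + dz^2 - r^2) for many points against one
// center; negative entries mean the point is inside the radius.
function bulkSquaredDistances(xs, ys, zs, cx, cy, cz, r) {
  var out = new Float64Array(xs.length);
  var r2 = r * r; // hoist the squared radius out of the loop
  for (var i = 0; i < xs.length; i++) {
    var dx = xs[i] - cx, dy = ys[i] - cy, dz = zs[i] - cz;
    out[i] = dx * dx + dy * dy + dz * dz - r2;
  }
  return out;
}

var out = bulkSquaredDistances([1, 3], [1, 0], [1, 0], 0, 0, 0, 2);
console.log(out[0]); // -1: d^2 = 3, inside radius 2
console.log(out[1]); //  5: d^2 = 9, outside
```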
You could also easily optimize this using something like OpenMP if you were really doing lots of tests in a row.
Use max(abs(x1-x2), abs(y1-y2), abs(z1-z2)).
After dropping the square root, if the values get larger, it's better to apply log.
This does the cube distance, and if you are testing a lot of points, most of the time it only evaluates the first test.
close = (abs(x2-x1) < r && abs(y2-y1) < r && abs(z2-z1) < r);