I'm trying to create some snake-like movement, but I can't work out an algorithm that moves each body part directly behind the one in front of it, and so on.
I want an automatically moving snake that consists of separate blocks (spheres). The snake should move along a path. I generate the path with a Bezier spline and have already implemented moving one future snake part along it. The point for the head is obtained from the spline through the following API:
class BezierSpline
{
    Vector3 GetPoint(float progress) // 0 to 1
}
Then I have a SnakeMovement script:
public class SnakeMovement : MonoBehaviour
{
    public BezierSpline Path;
    public List<Transform> Parts;
    public float minDistance = 0.25f;
    public float speed = 1;
    //.....
    void Update()
    {
        Vector3 position = Path.GetPoint(progress);
        Parts.First().localPosition = position;
        Parts.First().LookAt(position + Path.GetDirection(progress));
        for (int i = 1; i < Parts.Count; i++)
        {
            Transform curBody = Parts[i];
            Transform prevBody = Parts[i - 1];
            float dist = Vector3.Distance(prevBody.position, curBody.position);
            Vector3 newP = prevBody.position;
            newP.y = Parts[0].position.y;
            float t = Time.deltaTime * dist / minDistance * curspeed;
            curBody.position = Vector3.Slerp(curBody.position, newP, t);
            curBody.rotation = Quaternion.Slerp(curBody.rotation, prevBody.rotation, t);
        }
        //....
    }
Right now, if I stop the head's movement, the other parts don't preserve their distance and keep moving toward the head's position. Another problem with the algorithm above is that the parts don't exactly follow the head's path: they can "cut" corners while turning.
The main idea is that the user/AI controls only the head (the first body part), and each following part should exactly repeat the head's path while preserving the distance to its neighbours.
For snake-like motion you are likely to get lots of strange behaviours if you treat the spheres as separate objects. While I can imagine it's possible to get it to work, I think this is not the best approach.
The first solution that comes to mind is to create a List onto which, on every frame, you push the position of the head of the snake at index 0.
The list grows, and all the other segments wait their turn: each segment lags x frames behind the one before it, so on each update segment y takes the position stored at list[x*y].
If Count() of the list is greater than number_of_segments * lag, you RemoveAt(Count() - 1).
This can be optimized, as changing the list is somewhat costly (a ring buffer would be better suited, but a Queue could also work; for starters I find Lists much easier to follow, and you can always optimize later). This may behave a bit awkwardly if your framerate varies a lot, but it should be very stable in general (as in: no unpredictable motion, we only re-use the same values over and over).
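A minimal sketch of that idea as a Unity C# script (the lag value and the field names are assumptions, not taken from the question):

using System.Collections.Generic;
using UnityEngine;

public class SnakeHistory : MonoBehaviour
{
    public List<Transform> Parts;  // head first, as in the question
    public int lag = 10;           // frames between two consecutive segments (assumed)

    private readonly List<Vector3> history = new List<Vector3>();

    void Update()
    {
        // Record the head position at index 0 on every frame.
        history.Insert(0, Parts[0].position);

        // Segment y re-uses the position the head had lag*y frames ago.
        for (int y = 1; y < Parts.Count; y++)
        {
            int index = Mathf.Min(lag * y, history.Count - 1);
            Parts[y].position = history[index];
        }

        // Trim entries no segment will ever read again.
        if (history.Count > Parts.Count * lag)
            history.RemoveAt(history.Count - 1);
    }
}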
Second method:
You mentioned using a Bezier spline to generate a path. Beziers are parametrized by a float t, so you have something like
SplineAt(t).
If you take your bezier_path_length and distance_between_segments, then segment n should have the position
SplineAt(t - n * distance_between_segments / bezier_path_length)
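A sketch of this second method against the question's SnakeMovement fields (it assumes GetPoint is roughly uniform in arc length and that a precomputed pathLength is available; both are assumptions):

void Update()
{
    // progress and pathLength are assumed fields; progress advances the head.
    progress += speed * Time.deltaTime / pathLength;

    for (int n = 0; n < Parts.Count; n++)
    {
        // Each segment trails the head by a fixed amount of the spline parameter.
        float t = Mathf.Max(progress - n * minDistance / pathLength, 0f);
        Parts[n].position = Path.GetPoint(t);
        Parts[n].LookAt(Parts[n].position + Path.GetDirection(t));
    }
}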
I recently converted a DirectX 9 application that was using D3DXIntersect to find ray/mesh intersections to DirectX 11. Since D3DXIntersect is not available in DX11, I wrote my own code to find the intersection, which just loops over all the triangles in the mesh and tests them, keeping track of the closest hit to the origin. This is done on the CPU side and works fine for picking via the GUI, but I have another part of the application that creates a new mesh from an existing one based on several different viewpoints, and I need to check line of sight for every triangle in the mesh many times. This gets pretty slow.
Does it make sense to use a DX11 compute shader to do this (i.e. would there be a significant speedup over doing it on the CPU)? I searched the internet but could not find an existing example.
Assuming the answer is yes, here is the approach I am thinking of:
Launch a thread for every triangle in my mesh
Each thread computes the distance to a hit on that triangle, or returns max float on a miss. Store one value per thread in a buffer.
Then do a reduction and return the minimum (non-negative) value.
I wish I had access to something like CUDA Thrust in DirectX, because I think coding up that reduction is going to be a pain. That's why I'm asking, so I don't do a bunch of work for nothing!
This is totally doable; here is some HLSL code that performs it (and also handles the case where you hit two triangles at the same distance).
I assume that you know how to create resources (Structured Buffers) and bind them to the compute pipeline.
I'll also assume that your geometry is indexed.
The first step is to collect the triangles that pass the test. Instead of using a "Hit" flag, we will use an Append buffer and only push the elements that pass the test.
First create 2 structured buffers (positions and triangle indices), and copy your model data into them.
Then create a Structured Buffer with an Appendable Unordered view.
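For reference, creating that append buffer might look roughly like this on the C++ side (a sketch; RayHit mirrors the HLSL struct below, and error handling is omitted):

// CPU-side struct matching the HLSL rayHit below.
struct RayHit { UINT triangleID; float distanceToTriangle; };

D3D11_BUFFER_DESC bd = {};
bd.ByteWidth = sizeof(RayHit) * triangleCount;    // worst case: every triangle is hit
bd.Usage = D3D11_USAGE_DEFAULT;
bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bd.StructureByteStride = sizeof(RayHit);
device->CreateBuffer(&bd, nullptr, &hitBuffer);

D3D11_UNORDERED_ACCESS_VIEW_DESC uavd = {};
uavd.Format = DXGI_FORMAT_UNKNOWN;                // required for structured buffers
uavd.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavd.Buffer.NumElements = triangleCount;
uavd.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND; // makes the UAV an append view
device->CreateUnorderedAccessView(hitBuffer, &uavd, &hitUAV);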
To perform hit detection, you can use the following compute shader:
struct rayHit
{
    uint triangleID;
    float distanceToTriangle;
};

cbuffer cbRaySettings : register(b0)
{
    float3 rayFrom;
    float3 rayDir;
    uint TriangleCount;
};

StructuredBuffer<float3> positionBuffer : register(t0);
StructuredBuffer<uint3> indexBuffer : register(t1);
AppendStructuredBuffer<rayHit> appendRayHitBuffer : register(u0);
// The original post left this as a stub; below is one standard formulation
// (Möller-Trumbore). Any ray/triangle test with the same signature works.
void TestTriangle(float3 p1, float3 p2, float3 p3, out bool hit, out float d)
{
    hit = false;
    d = 0.0f;
    float3 e1 = p2 - p1;
    float3 e2 = p3 - p1;
    float3 pv = cross(rayDir, e2);
    float det = dot(e1, pv);
    if (abs(det) < 1e-6f) return; // ray parallel to the triangle plane
    float invDet = 1.0f / det;
    float3 tv = rayFrom - p1;
    float u = dot(tv, pv) * invDet;
    if (u < 0.0f || u > 1.0f) return;
    float3 qv = cross(tv, e1);
    float v = dot(rayDir, qv) * invDet;
    if (v < 0.0f || u + v > 1.0f) return;
    d = dot(e2, qv) * invDet;
    hit = (d > 0.0f);
}
[numthreads(64,1,1)]
void CS_RayAppend(uint3 tid : SV_DispatchThreadID)
{
    if (tid.x >= TriangleCount)
        return;

    uint3 indices = indexBuffer[tid.x];
    float3 p1 = positionBuffer[indices.x];
    float3 p2 = positionBuffer[indices.y];
    float3 p3 = positionBuffer[indices.z];

    bool hit;
    float d;
    TestTriangle(p1, p2, p3, hit, d);

    if (hit)
    {
        rayHit hitData;
        hitData.triangleID = tid.x;
        hitData.distanceToTriangle = d;
        appendRayHitBuffer.Append(hitData);
    }
}
Please note that you need to provide a sufficient size for appendRayHitBuffer (the worst case is TriangleCount, e.g. every triangle is hit by the ray).
Once this is done, the beginning of the buffer contains the hit data, and the unordered view's counter holds the number of triangles that passed the test.
Then you need to create an argument buffer, and a small Byte Address Buffer (size 16, since I don't think the runtime will allow 12).
You also need a small structured buffer (one element is enough), which will be used to store the minimum distance.
Use CopyStructureCount to copy the unordered view's counter into those buffers (please note that the second and third elements of the argument buffer both need to be set to 1, as they will be the Y and Z arguments of the dispatch).
Clear the small structured buffer using UINT_MAX, and use the argument buffer with DispatchIndirect.
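The CPU-side sequence might look like this (a sketch; the resource names are assumptions, and the argument buffer must be created with D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS):

// UINT_MAX acts as "+infinity" for the asuint/InterlockedMin trick below.
const UINT clearValue[4] = { 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF };
context->ClearUnorderedAccessViewUint(minBufferUAV, clearValue);

// Copy the append buffer's hidden counter into the X component of the
// argument buffer (Y and Z were initialized to 1) and into the count buffer.
context->CopyStructureCount(argumentBuffer, 0, hitUAV);
context->CopyStructureCount(rayHitCountBuffer, 0, hitUAV);

// Launch one thread group per recorded hit, without a CPU read-back.
// (Binding of the SRVs/UAVs is omitted here.)
context->CSSetShader(csCalcMin, nullptr, 0);
context->DispatchIndirect(argumentBuffer, 0);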
I assume that you will not have many hits, so for the next part numthreads is set to (1,1,1). (If you want to use larger groups, you will need to run another compute shader to build the argument buffer.)
Then, to find the minimum distance:
StructuredBuffer<rayHit> rayHitbuffer : register(t0);
ByteAddressBuffer rayHitCount : register(t1);
RWStructuredBuffer<uint> rwMinBuffer : register(u0);

[numthreads(1,1,1)]
void CS_CalcMin(uint3 tid : SV_DispatchThreadID)
{
    uint count = rayHitCount.Load(0);
    if (tid.x >= count)
        return;

    rayHit hit = rayHitbuffer[tid.x];
    uint dummy;
    InterlockedMin(rwMinBuffer[0], asuint(hit.distanceToTriangle), dummy);
}
Since we expect the hit distance to be greater than zero, we can use asuint with InterlockedMin in this scenario (the bit pattern of a non-negative IEEE float orders the same way as the float itself). Also, since we use DispatchIndirect, this part is now only applied to the elements that previously passed the test.
Now your single-element buffer contains the minimum distance, but not the index (or indices).
Last part: we finally need to extract the triangle index that sits at the minimum hit distance.
Again you need a new StructuredBuffer with an unordered (append) view to store the minimum index.
Use the same dispatch arguments as before (indirect), and perform the following:
StructuredBuffer<rayHit> rayHitbuffer : register(t0); // used below; missing from the original snippet
ByteAddressBuffer rayHitCount : register(t1);
StructuredBuffer<uint> MinDistanceBuffer : register(t2);
AppendStructuredBuffer<uint> appendMinHitIndexBuffer : register(u0);

[numthreads(1,1,1)]
void CS_AppendMin(uint3 tid : SV_DispatchThreadID)
{
    uint count = rayHitCount.Load(0);
    if (tid.x >= count)
        return;

    rayHit hit = rayHitbuffer[tid.x];
    uint minDist = MinDistanceBuffer[0];
    uint d = asuint(hit.distanceToTriangle);
    if (d == minDist)
    {
        appendMinHitIndexBuffer.Append(hit.triangleID);
    }
}
Now appendMinHitIndexBuffer contains the index of the closest triangle (or several indices in case of a tie), and you can copy it back using a staging resource and Map that resource for reading.
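The read-back might look roughly like this (a sketch with assumed names; a real implementation would also CopyStructureCount the append buffer's counter to know how many tied indices to read):

// One-element staging buffer the CPU is allowed to map.
D3D11_BUFFER_DESC sd = {};
sd.ByteWidth = sizeof(UINT);
sd.Usage = D3D11_USAGE_STAGING;
sd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
device->CreateBuffer(&sd, nullptr, &stagingBuffer);

// Copy the first appended index and map it for reading.
D3D11_BOX box = { 0, 0, 0, sizeof(UINT), 1, 1 };
context->CopySubresourceRegion(stagingBuffer, 0, 0, 0, 0, minHitIndexBuffer, 0, &box);

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(stagingBuffer, 0, D3D11_MAP_READ, 0, &mapped);
UINT closestTriangle = *static_cast<const UINT*>(mapped.pData);
context->Unmap(stagingBuffer, 0);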
Actually it makes a lot of sense. Here is also a whitepaper which has some useful shader snippets: http://www.graphicon.ru/html/2012/conference/EN2%20-%20Graphics/gc2012Shumskiy.pdf . Also, you can use DirectCompute/CUDA/OpenCL with DirectX, but if I may give you a hint, do it in DirectCompute, because I think it is the least hassle to set up and get running.
What is the proper way to add a sphere constraint to a cloth sim?
I am trying to add a sphere (or capsule) constraint to Skeel Lee's cloth simulation source code, but I am not sure how to do it properly.
I created a rather simple constraint which "kicks" the particle back out of the sphere in the opposite direction (opposite to the vector towards the center):
void SatisfySphereConstraints()
{
    foreach (var simObj in this.simObjects)
        simObj.CurrPosition += SphereConstraint(simObj.CurrPosition, _center, _radius);
}

Vector3 SphereConstraint(Vector3 position, Vector3 center, float radius)
{
    var delta = position - center;
    var distance = delta.Length();
    if (distance < radius)
        return (radius - distance) * delta / distance;
    return Vector3.Zero;
}
And then I inserted the method in the existing code:
ApplyForces();
Integrate();
for (var i = 0; i < constraintIterations; i++)
{
    foreach (Constraint constraint in constraints)
        constraint.SatisfyConstraint();
    SatisfySphereConstraints(); // <-- I added it here
}
The collision code works fairly well for situations like this (C is the center of the sphere, P is the current particle position, P' is the resolved position):
But the problem occurs if particles are moving very quickly, because then the particle basically jumps to the other side of the sphere (P1 is the previous position, P2 is the current position, P' is how I think it should be resolved), instead of returning back to the previous position:
Since this is a cloth simulation, the cloth basically jumps over the sphere in that case, instead of being "stopped" by the sphere.
Now, I could try to push the particle back in the direction of the previous point, but since the sphere might also be moving, I am not sure P1 is even a valid position (or that it will make sense). It also seems more computationally expensive - is this how I am supposed to do it, or not?
Cloth-like things snapping to the wrong side of an obstacle and getting stuck there is not too uncommon. Even more common is fast-moving objects overlapping far too much by the time the collision is detected.
A common solution is, on detecting a collision, to subdivide the previous step until the collision is less severe, and then resolve it. I think you will find detecting how deep the collision is to be difficult in your case, but if you could limit the top speed of spheres in your system, you could binary-split the frames in which collisions occur a fixed number of times and assume it will be good enough:
if (it collides at time T and didn't at T-1)
    if (it collides at T-0.5)
        try T-0.75
    else
        try T-0.25
    etc...
Are you prepared to accept that it will sometimes be wrong, or does it have to be always a good result?
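A minimal sketch of that idea against the question's code (it assumes each particle also keeps its previous position, and uses a fixed number of splits; both are assumptions):

// Binary-search the time of impact between the previous and current position,
// then apply the same push-out at the approximate impact point.
Vector3 SweptSphereConstraint(Vector3 prevPos, Vector3 currPos, Vector3 center, float radius)
{
    if ((currPos - center).Length() >= radius)
        return currPos; // no collision at the end of the step

    float lo = 0f, hi = 1f; // lo = last known "outside" time, hi = known "inside" time
    for (int i = 0; i < 4; i++) // fixed number of splits, as suggested above
    {
        float mid = 0.5f * (lo + hi);
        if ((Vector3.Lerp(prevPos, currPos, mid) - center).Length() < radius)
            hi = mid;
        else
            lo = mid;
    }

    // Resolve from the last outside position so we stay on the correct side.
    Vector3 p = Vector3.Lerp(prevPos, currPos, lo);
    var delta = p - center;
    var distance = delta.Length();
    if (distance < radius)
        p += (radius - distance) * delta / distance;
    return p;
}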
I'm coding a game using Box2D and SFML, and I'd like to let my users import their own textures to use as physics polygons. The polygons are created from the images' alpha layer. It doesn't need to be pixel-perfect, and this is where my problem is: if it's pixel-perfect, it's going to be way too buggy when the player gets stuck between two rather complex shapes. I have a working edge-detection algorithm, and it produces something like this. It's pixel-by-pixel (and the shape it's tracing is simply a square with a dip). After that, I have a simplifying algorithm that produces this. That works fine for me, but if every single corner is traced like that, I'm going to have some problems. The code for the vector simplifying is this:
//borders is a std::vector containing simple Box2D b2Vec2 (2D vector class containing an x and a y)
//vector shortener: collapse runs of collinear points
for(unsigned int i = 1; i < borders.size(); i++)
{
    //get the step in x and y from the previous point to the current one
    int x = borders[i].x - borders[i - 1].x;
    int y = borders[i].y - borders[i - 1].y;
    int counter = 0;
    //while points are aligned (and we stay inside the vector)..
    while(i + counter < borders.size()
        && borders[i].x + x * counter == borders[i + counter].x
        && borders[i].y + y * counter == borders[i + counter].y)
    {
        counter++;
    }
    //erase the intermediate points, keeping both endpoints of the aligned run
    if(counter > 1)
    {
        borders.erase(borders.begin() + i, borders.begin() + i + counter - 1);
    }
}
So my question is, how can I transform the previous set of vectors into something a bit less precise? Are there any rounding algorithms out there? If so, which is best? Any tips you can give me? It doesn't matter whether the resulting polygon is convex or concave, since I'm triangulating it anyway.
Thanks,
AsterAlff
I am trying to implement Bezier curves for an assignment. I am trying to move a ball (using Bezier curves) by giving my function an array of key frames. The function should give me all the frames in between the key frames (or control points), but although I'm using the formula found on Wikipedia, it is not really working :s
Here's my code:
private void interpolate(){
    float x, y, b, t = 0;
    frames = new Frame[keyFrames.length];
    for(int i = 0; i < keyFrames.length; ++i){
        t += 0.001;
        b = Bint(i, keyFrames.length, t);
        x = b * keyFrames[i].x;
        y = b * keyFrames[i].y;
        frames[i] = new Frame(x, y);
    }
}
private float Bint(int i, int n, float t){
    float Cni = fact(n) / (fact(i) * fact(n - i));
    return Cni * pow(1 - t, n - i) * pow(t, i);
}
I've also noticed that the frames[] array should be much bigger, but I can't find any text on this that is more programmer-friendly.
Thanks in advance.
There are lots of things that don't look quite right here.
Doing it this way, your interpolation will pass exactly through the first and last control points, but not through the others. Is that what you want?
If you have lots of key frames, you're using a very high-degree polynomial for your interpolation. Polynomials of high degree are notoriously badly behaved: you may get your position oscillating wildly in between the key frame positions. (This is one reason why the answer to question 1 should probably be no.)
Assuming for the sake of argument that you really do want to do this, your value of t should go from 0 at the start to 1 at the end. Do you happen to have exactly 1001 of these key frames? If not, you'll be doing the wrong thing.
Evaluating these polynomials with lots of calls to fact and pow is likely to be inefficient, especially if n is large (see the sketch just below).
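For illustration, De Casteljau's algorithm evaluates the same Bezier point with repeated linear interpolation, with no factorials or pow calls. This sketch reuses the asker's Frame class and assumes it exposes public float x, y:

// De Casteljau: repeatedly lerp adjacent control points until one remains.
private Frame deCasteljau(Frame[] pts, float t) {
    float[] x = new float[pts.length];
    float[] y = new float[pts.length];
    for (int i = 0; i < pts.length; i++) {
        x[i] = pts[i].x;
        y[i] = pts[i].y;
    }
    for (int k = pts.length - 1; k > 0; k--) {
        for (int i = 0; i < k; i++) {
            x[i] = (1 - t) * x[i] + t * x[i + 1];
            y[i] = (1 - t) * y[i] + t * y[i + 1];
        }
    }
    return new Frame(x[0], y[0]);
}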
I'm reluctant to go into much detail about what you should do without knowing more about the scope of your assignment -- it will do no one any good for Stack Overflow to do your homework for you! What have you already been told about Bezier curves? What exactly does your assignment ask you to do?
EDITED to add:
The simplest way to do interpolation using Bezier curves is probably this. Have one (cubic) Bezier curve between each pair of key-points. The endpoints (first and last control points) of each Bezier curve are those keypoints. You need two more control points. For motion to be smooth as you move through a given keypoint, you need (keypoint minus previous control point) = (next control point minus keypoint). So you're choosing a single vector at each keypoint, which will determine where the previous and subsequent control points go. As you move through each keypoint, you'll be moving in the direction of that vector, and the longer the vector is the faster you'll be moving. (If the vector is zero then your cubic Bezier degenerates into a simple straight-line path.)
Choosing that vector so that everything looks nice is highly nontrivial, but you probably aren't really being asked to do that at this stage. So something pretty simple will probably be good enough. You might, e.g., take the vector to be proportional to (next keypoint minus previous keypoint). You'll need to do something a bit different at the start and end of your path if you do that.
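A sketch of that scheme in the same Java-ish style as the question (the 0.25f tangent scale and the helper names are assumptions; cubic() evaluates a standard cubic Bezier):

// One cubic Bezier between key[k] and key[k+1]; tangents proportional
// to (next keypoint minus previous keypoint) keep the motion smooth.
private Frame cubic(Frame p0, Frame c0, Frame c1, Frame p1, float t) {
    float s = 1 - t;
    float b0 = s * s * s, b1 = 3 * s * s * t, b2 = 3 * s * t * t, b3 = t * t * t;
    return new Frame(b0 * p0.x + b1 * c0.x + b2 * c1.x + b3 * p1.x,
                     b0 * p0.y + b1 * c0.y + b2 * c1.y + b3 * p1.y);
}

// Evaluate the segment between key[k] and key[k+1]; requires 0 <= k < key.length - 1.
private Frame evalSegment(Frame[] key, int k, float t) {
    // Central-difference tangents, clamped at the ends of the path.
    int kp = Math.max(k - 1, 0), kn = Math.min(k + 2, key.length - 1);
    float vx0 = 0.25f * (key[k + 1].x - key[kp].x), vy0 = 0.25f * (key[k + 1].y - key[kp].y);
    float vx1 = 0.25f * (key[kn].x - key[k].x),     vy1 = 0.25f * (key[kn].y - key[k].y);
    // Inner control points: keypoint plus/minus its tangent vector.
    Frame c0 = new Frame(key[k].x + vx0, key[k].y + vy0);
    Frame c1 = new Frame(key[k + 1].x - vx1, key[k + 1].y - vy1);
    return cubic(key[k], c0, c1, key[k + 1], t);
}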
Finally got what I needed! Here's what I did:
private void interpolate() {
    float t = 0;
    float x, y, b;
    for(int f = 0; f < frames.length; f++) {
        x = 0;
        y = 0;
        for(int i = 0; i < keyFrames.length; i++) {
            b = Bint(i, keyFrames.length - 1, map(t, 0, time, 0, 1));
            x += b * keyFrames[i].x;
            y += b * keyFrames[i].y;
        }
        frames[f] = new Frame(x, y);
        t += partialTime;
    }
}

private void createInterpolationData() {
    time = keyFrames[keyFrames.length - 1].time - keyFrames[0].time;
    noOfFrames = 60 * time;
    partialTime = time / noOfFrames;
    frames = new Frame[ceil(noOfFrames)];
}
I have up to 10,000 randomly positioned points in a space, and I need to be able to tell which one the cursor is closest to at any given time. To add some context, the points form a vector drawing, so they can be constantly and quickly added and removed by the user, and may also be unbalanced across the canvas space.
I am therefore trying to find the most efficient data structure for storing and querying these points. I would like to keep this question language agnostic if possible.
After the Update to the Question
Use two Red-Black Tree or Skip List maps. Both are compact self-balancing data structures giving you O(log n) time for search, insert and delete operations. One map will use the X-coordinate of every point as a key and the point itself as a value, and the other will use the Y-coordinate as a key and the point itself as a value.
As a trade-off, I suggest initially restricting the search area around the cursor to a square. For a perfect match the square side should equal the diameter of your "sensitivity circle" around the cursor, i.e. if you're interested only in the nearest neighbour within a 10-pixel radius of the cursor, then the square side needs to be 20px. As an alternative, if you're after the nearest neighbour regardless of proximity, you might try finding the boundary dynamically by evaluating floor and ceiling relative to the cursor.
Then retrieve the two subsets of points from the maps that fall within the boundaries, and merge them so as to keep only the points present in both subsets.
Loop through the result and calculate the proximity to each point (dx^2 + dy^2; avoid the square root, since you're not interested in the actual distance, just the proximity), then find the nearest neighbour.
Take the square root of the proximity figure to measure the distance to the nearest neighbour, and see whether it's greater than the radius of the "sensitivity circle"; if it is, there are no points within the circle.
I suggest benchmarking every approach; it's too easy to go over the top with optimisations. On my modest hardware (Core 2 Duo), a naïve single-threaded search for the nearest neighbour among 10K points, repeated a thousand times, takes 350 milliseconds in Java. As long as the overall UI reaction time is under 100 milliseconds it will seem instant to a user; with that in mind, even a naïve search might give you a sufficiently fast response.
Generic Solution
The most efficient data structure depends on the algorithm you're planning to use, the time-space trade-off, and the expected relative distribution of points:
If space is not an issue, the most efficient way may be to pre-calculate the nearest neighbour for each point on the screen and then store the nearest neighbour's unique id in a two-dimensional array representing the screen.
If time is not an issue, storing the 10K points in a simple 2D array and doing a naïve search every time, i.e. looping through each point and calculating the distance, may be a good, simple, easy-to-maintain option.
For a number of trade-offs between the two, here is a good presentation on various Nearest Neighbour Search options available: http://dimacs.rutgers.edu/Workshops/MiningTutorial/pindyk-slides.ppt
A bunch of good detailed materials for various Nearest Neighbour Search algorithms: http://simsearch.yury.name/tutorial.html, just pick one that suits your needs best.
So it's really impossible to evaluate the data structure in isolation from the algorithm, which in turn is hard to evaluate without a good idea of the task's constraints and priorities.
Sample Java Implementation
import java.util.*;
import java.util.concurrent.ConcurrentSkipListMap;

class Test
{
    public static void main (String[] args)
    {
        Drawing naive = new NaiveDrawing();
        Drawing skip = new SkipListDrawing();
        long start;

        start = System.currentTimeMillis();
        testInsert(naive);
        System.out.println("Naive insert: "+(System.currentTimeMillis() - start)+"ms");

        start = System.currentTimeMillis();
        testSearch(naive);
        System.out.println("Naive search: "+(System.currentTimeMillis() - start)+"ms");

        start = System.currentTimeMillis();
        testInsert(skip);
        System.out.println("Skip List insert: "+(System.currentTimeMillis() - start)+"ms");

        start = System.currentTimeMillis();
        testSearch(skip);
        System.out.println("Skip List search: "+(System.currentTimeMillis() - start)+"ms");
    }

    public static void testInsert(Drawing d)
    {
        Random r = new Random();
        for (int i=0;i<100000;i++)
            d.addPoint(new Point(r.nextInt(4096),r.nextInt(2048)));
    }

    public static void testSearch(Drawing d)
    {
        Point cursor;
        Random r = new Random();
        for (int i=0;i<1000;i++)
        {
            cursor = new Point(r.nextInt(4096),r.nextInt(2048));
            d.getNearestFrom(cursor,10);
        }
    }
}
// A simple point class
class Point
{
    public Point (int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    public final int x,y;

    public String toString()
    {
        return "["+x+","+y+"]";
    }
}

// Interface will make the benchmarking easier
interface Drawing
{
    void addPoint (Point p);
    Set<Point> getNearestFrom (Point source,int radius);
}
class SkipListDrawing implements Drawing
{
    // Helper class to store an index of points by a single coordinate.
    // Unlike a standard Map it's capable of storing several points against the same coordinate, i.e.
    // [10,15] [10,40] [10,49] can all be stored against the X-coordinate and retrieved later.
    // This is achieved by storing a list of points against each key, as opposed to storing just a point.
    private class Index
    {
        final private NavigableMap<Integer,List<Point>> index = new ConcurrentSkipListMap<Integer,List<Point>>();

        void add (Point p,int indexKey)
        {
            List<Point> list = index.get(indexKey);
            if (list==null)
            {
                list = new ArrayList<Point>();
                index.put(indexKey,list);
            }
            list.add(p);
        }

        HashSet<Point> get (int fromKey,int toKey)
        {
            final HashSet<Point> result = new HashSet<Point>();
            // Use NavigableMap.subMap to quickly retrieve all entries matching
            // the search boundaries, then flatten the resulting lists of points into
            // a single HashSet of points.
            for (List<Point> s: index.subMap(fromKey,true,toKey,true).values())
                for (Point p: s)
                    result.add(p);
            return result;
        }
    }

    // Store each point indexed by its X and Y coordinates in two separate indices.
    final private Index xIndex = new Index();
    final private Index yIndex = new Index();

    public void addPoint (Point p)
    {
        xIndex.add(p,p.x);
        yIndex.add(p,p.y);
    }

    public Set<Point> getNearestFrom (Point origin,int radius)
    {
        final Set<Point> searchSpace;
        // The search space is going to contain only the points that are within
        // the "sensitivity square". First get all points whose X coordinate
        // is within the given range.
        searchSpace = xIndex.get(origin.x-radius,origin.x+radius);
        // Then get all points whose Y is within the range, and keep in
        // searchSpace the intersection of the two sets, i.e. only the
        // points where both X and Y are within range.
        searchSpace.retainAll(yIndex.get(origin.y-radius,origin.y+radius));

        // Loop through the search space and calculate the proximity to each point.
        // Don't take the square root, as it's expensive and really unnecessary
        // at this stage.
        //
        // Keep a set of nearest points in case there are several
        // at the same distance.
        int dist,dx,dy, minDist = Integer.MAX_VALUE;
        Set<Point> nearest = new HashSet<Point>();
        for (Point p: searchSpace)
        {
            dx=p.x-origin.x;
            dy=p.y-origin.y;
            dist=dx*dx+dy*dy;
            if (dist<minDist)
            {
                minDist=dist;
                nearest.clear();
                nearest.add(p);
            }
            else if (dist==minDist)
            {
                nearest.add(p);
            }
        }

        // Ok, now we have the set of nearest points; it might be empty.
        // But let's check whether they are still within the sensitivity radius:
        // the area we evaluated was a square with a side equal to the
        // diameter of the actual circle. If the points we've found are
        // in the corners of the square they might be outside the circle.
        // Let's see what the distance is, and if it is greater than the radius
        // then we don't have a single point within the proximity boundaries.
        if (Math.sqrt(minDist) > radius) nearest.clear();
        return nearest;
    }
}
// Naive approach: just loop through every point and see if it's nearest.
class NaiveDrawing implements Drawing
{
    final private List<Point> points = new ArrayList<Point>();

    public void addPoint (Point p)
    {
        points.add(p);
    }

    public Set<Point> getNearestFrom (Point origin,int radius)
    {
        int prevDist = Integer.MAX_VALUE;
        int dist;
        Set<Point> nearest = Collections.emptySet();
        for (Point p: points)
        {
            int dx = p.x-origin.x;
            int dy = p.y-origin.y;
            dist = dx * dx + dy * dy;
            if (dist < prevDist)
            {
                prevDist = dist;
                nearest = new HashSet<Point>();
                nearest.add(p);
            }
            else if (dist==prevDist) nearest.add(p);
        }
        if (Math.sqrt(prevDist) > radius) nearest = Collections.emptySet();
        return nearest;
    }
}
I would like to suggest creating a Voronoi Diagram and a Trapezoidal Map (Basically the same answer as I gave to this question). The Voronoi Diagram will partition the space in polygons. Every point will have a polygon describing all points that are closest to it.
Now when you get a query of a point, you need to find in which polygon it lies. This problem is called Point Location and can be solved by constructing a Trapezoidal Map.
The Voronoi Diagram can be created using Fortune's algorithm which takes O(n log n) computational steps and costs O(n) space.
This website shows you how to make a trapezoidal map and how to query it. You can also find some bounds there:
Expected creation time: O(n log n)
Expected space complexity: O(n)
But most importantly, expected query time: O(log n).
(This is (theoretically) better than O(√n) of the kD-tree.)
Updating will be linear (O(n)) I think.
My source (other than the links above) is: Computational Geometry: Algorithms and Applications, chapters six and seven.
There you will find detailed information about the two data structures (including detailed proofs). The Google Books version only has part of what you need, but the other links should be sufficient for your purpose. Just buy the book if you are interested in that sort of thing (it's a good book).
The most efficient data structure would be a kd-tree.
Are the points uniformly distributed?
You could build a quad-tree up to a certain depth, say 8. At the top you have a tree node that divides the screen into four quadrants. Store at each node:
The top-left and the bottom-right coordinates
Pointers to four child nodes, which divide the node into four quadrants
Build the tree to a depth of 8, say, and at the leaf nodes store a list of the points associated with that region. That list you can search linearly.
If you need more granularity, build the quad-tree to a greater depth.
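A minimal sketch of such a fixed-depth quadtree, reusing the Point class from the Java sample above (eager node allocation and all names are assumptions):

class QuadNode
{
    final int x0, y0, x1, y1;   // top-left and bottom-right coordinates
    final QuadNode[] children;  // null at leaf level
    final List<Point> points;   // only used at leaves

    QuadNode(int x0, int y0, int x1, int y1, int depth)
    {
        this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1;
        if (depth == 0)
        {
            children = null;
            points = new ArrayList<Point>();
        }
        else
        {
            int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
            children = new QuadNode[] {
                new QuadNode(x0, y0, mx, my, depth - 1),  // top-left
                new QuadNode(mx, y0, x1, my, depth - 1),  // top-right
                new QuadNode(x0, my, mx, y1, depth - 1),  // bottom-left
                new QuadNode(mx, my, x1, y1, depth - 1)   // bottom-right
            };
            points = null;
        }
    }

    void add(Point p)
    {
        if (children == null) { points.add(p); return; }
        int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        children[(p.x < mx ? 0 : 1) + (p.y < my ? 0 : 2)].add(p);
    }

    // Find the leaf whose region contains (x, y); its list is then searched linearly.
    QuadNode leafFor(int x, int y)
    {
        if (children == null) return this;
        int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        return children[(x < mx ? 0 : 1) + (y < my ? 0 : 2)].leafFor(x, y);
    }
}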
It depends on the frequency of updates and queries. For fast queries and slow updates, a quadtree (which is a form of k-d tree for 2D) would probably be best. Quadtrees are also very good for non-uniform points.
If you have a low resolution you could consider using a raw width x height array of pre-computed values.
If you have very few points or fast updates, a simple array is enough, or maybe a simple partitioning (which tends toward the quadtree).
So the answer depends on the parameters of your dynamics. I would also add that nowadays the algorithm isn't everything; making it use multiple processors or CUDA can give a huge boost.
You haven't specified the dimensionality of your points, but if it's a 2D line drawing, then a bitmap bucket - a 2D array of lists of the points in a region, where you scan the buckets corresponding to and near the cursor - can perform very well. Most systems will happily handle bitmap buckets from 100x100 to 1000x1000, the small end of which would put a mean of one point per bucket. Although the asymptotic performance is O(N), real-world performance is typically very good. Moving individual points between buckets is fast; moving objects around can also be made fast if you put the objects into the buckets rather than the points (so a polygon of 12 points would be referenced by 12 buckets; moving it becomes 12 times the insertion and removal cost of a bucket list, and looking up a bucket is constant time in the 2D array). The major cost is reorganising everything if the canvas size grows in many small jumps.
If it is in 2D, you can create a virtual grid covering the whole space (width and height matching your actual point space) and assign every 2D point to the cell that contains it. Each cell then becomes a bucket in a hashtable.
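A sketch of that grid-as-hashtable idea, again reusing the Point class from the sample above (the cell size and non-negative coordinates are assumptions):

// Each cell of the virtual grid becomes one bucket in the hashtable.
class GridIndex
{
    private final int cellSize;
    private final Map<Long,List<Point>> buckets = new HashMap<Long,List<Point>>();

    GridIndex(int cellSize) { this.cellSize = cellSize; }

    // Pack the two cell coordinates into a single hashtable key.
    private long key(int cellX, int cellY)
    {
        return ((long)cellX << 32) | (cellY & 0xffffffffL);
    }

    void add(Point p)
    {
        long k = key(p.x / cellSize, p.y / cellSize);
        List<Point> bucket = buckets.get(k);
        if (bucket == null)
        {
            bucket = new ArrayList<Point>();
            buckets.put(k, bucket);
        }
        bucket.add(p);
    }

    // Candidates near the cursor: the cursor's cell and its 8 neighbours.
    List<Point> near(int x, int y)
    {
        List<Point> result = new ArrayList<Point>();
        int cx = x / cellSize, cy = y / cellSize;
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                List<Point> bucket = buckets.get(key(cx + dx, cy + dy));
                if (bucket != null) result.addAll(bucket);
            }
        return result;
    }
}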