I am trying to memcpy the TangoImageBuffer data field. If the image is YUV, I would expect its size to be (buffer->width * buffer->height * 3 * sizeof(uint8_t)) (sizeof just for kicks, I know it is 1), but copying that much segfaults. If I copy only height*width bytes it works, and height*width*2 also works and seems to give valid data; I just don't know how large this field actually is.
My (relevant) code:
void onImageCallback(void *context, TangoCameraId id, const TangoImageBuffer *buffer)
{
    memcpy(img_struct->image_buffer->getBuffer(), buffer->data, buffer->width * buffer->height * 3 * sizeof(uint8_t));
}
Here image_buffer is a Java ByteBuffer wrapper class that I use from C++. Internally it allocates memory by calling new with the specified size (in this case the same size I am trying to memcpy), and it holds a jobject reference created with env->NewGlobalRef(env->NewDirectByteBuffer(buffer, this->bufferSize)), where this->bufferSize equals (buffer->width * buffer->height * 3 * sizeof(uint8_t)).
I am pretty sure it is allocating the right amount of memory, as I have also used it to memcpy the XYZij buffer in another function (with the corresponding size difference, since those are floats) and it works just fine (I have also tried overallocating), so I know the problem is not the destination being too small.
In case it is relevant: I am using the regular color camera, so the resolution should be 1280 × 720 if I recall correctly.
Edit: After manually searching for the maximum amount of data I can copy without a segfault, it seems to top out at 1384448 bytes (that works, 1384449 segfaults), which is roughly 1.5 times the number of pixels in the image, adding to my confusion.
The pixel format is documented as YV12, a YUV 4:2:0 format. It has a full-resolution Y plane with 2x2 subsampling for U and V, so U and V each have 1/4 as many samples as Y. The total number of samples is therefore width*height*(1 + 1/4 + 1/4) = 1.5*width*height.
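For example, a minimal sketch of the copy with that size (mirroring your callback; if the struct reports a row stride larger than the width, the planes may be padded and the real total will be slightly larger):

#include <cstdint>
#include <cstring>
// TangoImageBuffer comes from tango_client_api.h

// Copy one YUV 4:2:0 frame out of the Tango callback buffer.
// dst must point to at least width * height * 3 / 2 bytes.
void copyYuvFrame(uint8_t* dst, const TangoImageBuffer* buffer)
{
    // Full-resolution Y plane plus quarter-resolution U and V planes.
    const size_t yuvSize = (size_t)buffer->width * buffer->height * 3 / 2;
    memcpy(dst, buffer->data, yuvSize);
}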
I have a cable winch system, and I would like to know how much cable is left on it given the number of rotations that have occurred, and vice versa. This system will run on a low-cost microcontroller with limited computational resources and should be able to update quickly; long for/while loop iterations are not ideal.
The inputs are cable diameter, inner drum diameter, inner drum width, and drum rotations. The output should be the length of the cable on the drum.
At first, I was calculating the maximum number of wraps of cable per layer based on the cable diameter and the inner drum width; I could then use this to calculate the length of cable per layer. The issue comes when I calculate the total length, as I have to loop through each layer, which is a costly operation (there could be hundreds of layers).
My next approach was to precalculate a table with each layer and then perform a 3rd-to-5th degree polynomial regression to reduce it to an easy-to-calculate formula.
This appears to work for the most part; however, there are slight inaccuracies at the low and high ends (0 rotations could be plus or minus a few units of cable length). The real issue comes when I try to reverse the function to get the current rotations of the drum given the length. So far, my reversed formula does not seem to match the forward formula (I am swapping X and Y before fitting the polynomial).
I have looked high and low and cannot find any formulas for cable length to rotations that do not use recursion or loops, and I can't figure out how to reverse my polynomial function without losing precision. If anyone has insights or ideas, or can help guide me in the right direction, that would be most helpful. Please see my attempt below.
// Units are not important
CableLength = 15000
CableDiameter = 5
DrumWidth = 50
DrumDiameter = 5
CurrentRotations = 0
CurrentLength = 0
CurrentLayer = 0
PolyRotations = Array
PolyLengths = Array
PolyLayers = Array
WrapsPerLayer = DrumWidth / CableDiameter
While CurrentLength < CableLength // Calculate layer length for each layer up to the cable length
    CableStackHeight = CableDiameter * CurrentLayer
    DrumDiameterAtLayer = DrumDiameter + (CableStackHeight * 2) // Assumes cables stack vertically
    WrapDiameter = DrumDiameterAtLayer + CableDiameter // Center point of cable
    WrapLength = WrapDiameter * PI
    LayerLength = WrapLength * WrapsPerLayer
    CurrentRotations += WrapsPerLayer // 1 rotation per wrap
    CurrentLength += LayerLength
    CurrentLayer++
    PolyRotations.Push(CurrentRotations)
    PolyLengths.Push(CurrentLength)
    PolyLayers.Push(CurrentLayer)
End
// Using 5 degree polynomials, any lower = very low precision
PolyLengthToRotation = CreatePolynomial(PolyLengths, PolyRotations, 5) // 5 Degrees
PolyRotationToLength = CreatePolynomial(PolyRotations, PolyLengths, 5) // 5 Degrees
// 40 Rotations should equal about 3141.593 units
RealRotation = 40
RealLength = 3141.593
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 3141.593 // Good
// CalculatedRotations = 41.069 // No good
// CalculatedRotations != RealRotation // These should equal
// 0 Rotations should equal 0 length
RealRotation = 0
RealLength = 0
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 1.172421e-9 // Very close
// CalculatedRotations = 1.947 // No good
// CalculatedRotations != RealRotation // These should equal
Side note: I have a "spool factor" parameter (not shown here) to calibrate for the actual cable spooling efficiency, since the cable is not guaranteed to lie mathematically perfectly.
@Bathsheba may have meant cable, but a table is a valid option (and experimental numbers are probably more interesting in the real world).
A bit slow, but you could always do it manually. There are only 40 rotations (though optionally, for better experimental results, repeat 3 times and take the average). Reel it completely in. Then do a rotation (or half a rotation, depending on the diameter of your drum). Measure and mark how far it spooled out (tape), and record it. Repeat for the next 39 rotations. You now have a lookup table in which you can find the length in O(log N) via binary search (the data comes out sorted) plus a bit of interpolation (e.g. 1.5 rotations is about halfway between 1 and 2 rotations); a sketch of that lookup appears at the end of this answer.
You can also use this to derive your own experimental data. Do the same thing, but with a cable half as thick (perhaps proportional to the ratio of the inner diameter and the cable radius?). What effect does it have on the numbers? How about twice or half the diameter? Math says circumference is linear (2πr), so half the radius means half the amount per rotation. It might be easier to adjust the table data.
The gist is that it may be easier for you to have a real-world reference for your numbers rather than relying purely on an abstract mathematical model (not to say the model would be wrong, but cables don't always wind up perfectly; who knows, perhaps you will find a quirk of your winch that would have led to errors in a purely mathematical approach). You might even be able to derive the formula yourself :) with a fudge factor for the real world.
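For reference, a minimal sketch of the table lookup with interpolation (assuming the measured pairs are stored in ascending order of rotations; going from length back to rotations is the same with the two columns swapped):

#include <algorithm>
#include <vector>

struct Sample { double rotations; double length; };

// Interpolated lookup: table must be sorted by rotations (ascending).
double lengthForRotations(const std::vector<Sample>& table, double rotations)
{
    if (rotations <= table.front().rotations) return table.front().length;
    if (rotations >= table.back().rotations)  return table.back().length;
    // Binary search for the first sample at or past the requested rotations.
    auto hi = std::lower_bound(table.begin(), table.end(), rotations,
        [](const Sample& s, double r) { return s.rotations < r; });
    auto lo = hi - 1;
    // Linear interpolation between the two neighbouring samples.
    double t = (rotations - lo->rotations) / (hi->rotations - lo->rotations);
    return lo->length + t * (hi->length - lo->length);
}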
I am writing an OpenCL app on macOS using C++, and it crashes in certain cases depending on the work size.
The program crashes due to a SIGABRT.
Is there any way to get more information about the error?
Why is SIGABRT being raised? Can I catch it?
EDIT:
I realize that this program is a doozy, but I will try to explain it in case anyone would like to take a stab at it.
Through debugging I discovered that the cause of the SIGABRT was one of the kernels timing out.
The program is a tile-based 3D renderer. It is an OpenCL implementation of this algorithm: https://github.com/ssloy/tinyrenderer
The screen is divided into 8x8 tiles. One of the kernels (the tiler) computes which polygons overlap each tile, storing the results in a data structure called tilePolys. A subsequent kernel (the rasterizer), which runs one work item per tile, iterates over the list of polys occupying the tile and rasterizes them.
The tiler writes to an integer buffer which is a list of lists of polygon indices. Each list is of a fixed size (polysPerTile + 1 for the count) where the first element is the count and the subsequent polysPerTile elements are indices of polygons in the tile. There is one such list per tile.
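For illustration, walking one tile's list on the host side looks roughly like this (the function name is made up for the example):

#include <algorithm>

// Per-tile layout: [count, poly0, poly1, ..., poly(polysPerTile-1)]
void forEachPolyInTile(const int* tilePolys, int tileIndex, int polysPerTile)
{
    const int base  = tileIndex * (polysPerTile + 1);
    const int count = std::min(tilePolys[base], polysPerTile); // clamp as a safety net
    for (int i = 0; i < count; ++i) {
        const int polyIndex = tilePolys[base + 1 + i];
        // ... rasterize polyIndex ...
    }
}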
For some reason in certain cases the tiler writes a very large poly count (13172746) to one of the tile's lists in tilePolys. This causes the rasterizer to loop for a long time and time out.
The strange thing is that the index to which the large count is written is never accessed by the tiler.
The code for the tiler kernel is below:
// this kernel is executed once per polygon
// it computes which tiles are occupied by the polygon and adds the index of the polygon to the list for that tile
kernel void tiler(
    // number of polygons
    ulong nTris,
    // width of screen
    int width,
    // height of screen
    int height,
    // number of tiles in x direction
    int tilesX,
    // number of tiles in y direction
    int tilesY,
    // number of pixels per tile (tiles are square)
    int tileSize,
    // size of the polygon list for each tile
    int polysPerTile,
    // 4x4 matrix representing the viewport
    global const float4* viewport,
    // vertex positions
    global const float* vertices,
    // indices of vertices
    global const int* indices,
    // array of array-lists of polygons per tile
    // structure of list is an int representing the number of polygons covering that tile,
    // followed by [polysPerTile] integers representing the indices of the polygons in that tile
    // there are [tilesX*tilesY] such arraylists
    volatile global int* tilePolys)
{
    size_t faceInd = get_global_id(0);
    // compute vertex position in viewport space
    float3 vs[3];
    for(int i = 0; i < 3; i++) {
        // indices are vertex/uv/normal
        int vertInd = indices[faceInd*9+i*3];
        float4 vertHomo = (float4)(vertices[vertInd*4], vertices[vertInd*4+1], vertices[vertInd*4+2], vertices[vertInd*4+3]);
        vertHomo = vec4_mul_mat4(vertHomo, viewport);
        vs[i] = vertHomo.xyz / vertHomo.w;
    }
    float2 bboxmin = (float2)(INFINITY,INFINITY);
    float2 bboxmax = (float2)(-INFINITY,-INFINITY);
    // size of screen
    float2 clampCoords = (float2)(width-1, height-1);
    // compute bounding box of triangle in screen space
    for (int i=0; i<3; i++) {
        for (int j=0; j<2; j++) {
            bboxmin[j] = max(0.f, min(bboxmin[j], vs[i][j]));
            bboxmax[j] = min(clampCoords[j], max(bboxmax[j], vs[i][j]));
        }
    }
    // transform bounding box to tile space
    int2 tilebboxmin = (int2)(bboxmin[0] / tileSize, bboxmin[1] / tileSize);
    int2 tilebboxmax = (int2)(bboxmax[0] / tileSize, bboxmax[1] / tileSize);
    // loop over all tiles in bounding box
    for(int x = tilebboxmin[0]; x <= tilebboxmax[0]; x++) {
        for(int y = tilebboxmin[1]; y <= tilebboxmax[1]; y++) {
            // get index of tile
            int tileInd = y * tilesX + x;
            // get start index of polygon list for this tile
            int counterInd = tileInd * (polysPerTile + 1);
            // get current number of polygons in list
            int numPolys = atomic_inc(&tilePolys[counterInd]);
            // if list is full, skip tile
            if(numPolys >= polysPerTile) {
                // decrement the count because we will not add to the list
                atomic_dec(&tilePolys[counterInd]);
            } else {
                // otherwise add the poly to the list
                // the index is the offset + numPolys + 1 as tilePolys[counterInd] holds the poly count
                int ind = counterInd + numPolys + 1;
                tilePolys[ind] = (int)(faceInd);
            }
        }
    }
}
My theories are that either:
I have incorrectly implemented the atomic functions for reading and incrementing the count
I am using an incorrect number format causing garbage to be written into tilePolys
One of my other kernels is inadvertently writing into the tilePolys buffer
I do not think it is the last one, though, because if I write a constant value into tilePolys instead of faceInd, the large poly count disappears.
tilePolys[counterInd+numPolys+1] = (int)(faceInd); // this is the problem line
tilePolys[counterInd+numPolys+1] = (int)(5); // this fixes the issue
It looks like your kernel is crashing on the GPU itself. You can't really get any extra diagnostics about that directly, at least not on macOS. You'll need to start narrowing down the problem. Some suggestions:
As the crash is currently happening in clFinish(), you don't know which asynchronous command is causing it. Try switching all your enqueue calls to blocking mode. This should cause it to crash in the call that's actually going wrong.
Check return/error codes on all OpenCL API calls. Sometimes, ignoring an error from an earlier call causes problems in a later call that relies on the earlier result. For example, if creating a buffer fails, passing the result of that buffer creation as a kernel argument will cause problems when trying to run the kernel. (A minimal error-checking sketch follows these suggestions.)
The most likely reason for the crash is that your OpenCL kernel is accessing memory out of bounds or is otherwise misusing pointers. Re-check any array index calculations.
Check if the problem occurs with smaller work batches. Scale up from one workgroup (or work item if not using groups) and see if it only occurs beyond a certain work size. This may give you a clue about buffer sizes and array indices that might be causing the crash.
Systematically comment out parts of your kernel. If the crash goes away if you comment out a specific piece of code, there's a good chance the problem is in that code.
If you've narrowed the problem down to a small area of code but can't work out where it's coming from, start recording diagnostic output to check that variables have the values you're expecting.
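As a rough sketch of what I mean by checking every call (the context, queue, kernel, and sizes here are placeholders for your own objects):

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

// Abort with a location message whenever an OpenCL call reports an error.
#define CL_CHECK(err)                                         \
    do {                                                      \
        cl_int e_ = (err);                                    \
        if (e_ != CL_SUCCESS) {                               \
            fprintf(stderr, "OpenCL error %d at %s:%d\n",     \
                    e_, __FILE__, __LINE__);                  \
            abort();                                          \
        }                                                     \
    } while (0)

// Run a kernel with every call checked.
void runKernelChecked(cl_context context, cl_command_queue queue,
                      cl_kernel kernel, size_t bufferSize, size_t globalSize)
{
    cl_int err;
    cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE, bufferSize, NULL, &err);
    CL_CHECK(err);
    CL_CHECK(clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf));
    CL_CHECK(clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL,
                                    0, NULL, NULL));
    CL_CHECK(clFinish(queue));
    CL_CHECK(clReleaseMemObject(buf));
}

This way the first failing call is the one that gets reported, rather than a later call that merely inherits the bad state.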
Without seeing any code, I can't give you any more specific advice than that.
Note that OpenCL is deprecated on macOS, so if you're specifically targeting that platform and don't need to support Linux, Windows, etc. I recommend learning Metal Compute instead. Apple has made it clear that this is the GPU programming platform they want to support, and the tooling for it is already much better than their OpenCL tooling ever was.
I suspect Apple will eventually stop implementing OpenCL support when they release a Mac with a new type of GPU, so even if you're targeting the Mac as well as other platforms, you will probably need to switch to Metal on the Mac somewhere down the line anyway. As of macOS 10.14, the minimum system requirements of the OS already include a Metal-capable GPU, so you only need OpenCL as a fallback if you wish to support all Mac models able to run 10.13 or an even older OS version.
I would like to convert a JPEG image into Y'UV420p using TurboJPEG. I think this format uses 2x2 blocks, with more samples in the Y' component than in each of the U and V components, but I haven't been able to find an example that does this.
How do I extract these individual components from tjDecompressToYUV2, and what is the format of the buffer that is allocated for, say, an image whose dimensions are a multiple of a power of 2?
Say I have an image that is 1024 wide by 512 pixels high. How would I extract each Y' U and V value from the following code:
constexpr auto height = 512u;
constexpr auto width = 1024u;
unsigned char buffer[height * width * 3 / 2];
...
tjDecompressToYUV2(jpegDecompressor, jpegImage, jpegSize, buffer, width, 2, height, TJFLAG_FASTDCT);
...
ie. How are the components extracted from buffer?
Try using the tjDecompressToYUV function, which gives you the three components in three different output buffers. The Y buffer should contain one byte (an eight-bit value) per Y sample. I have never used the chroma components, so I cannot tell you how they are stored.
What is it that you are trying to do? There might be another way to solve this problem.
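That said, the buffer size in your example (width * height * 3 / 2) suggests a planar 4:2:0 layout. If tjDecompressToYUV2 packs the planes consecutively (Y, then U/Cb, then V/Cr) with each row padded up to the pad argument, which is my understanding but worth verifying against the TurboJPEG documentation, the extraction would look roughly like this:

#include <cstddef>

// Locate the Y, U (Cb) and V (Cr) samples for pixel (x, y) in the buffer
// filled by tjDecompressToYUV2 (pad must be a power of two; it was 2 above).
void samplePixel(const unsigned char* buffer, int width, int height, int pad,
                 int x, int y,
                 unsigned char& Y, unsigned char& U, unsigned char& V)
{
    const int yStride = (width + pad - 1) & ~(pad - 1);
    const int cStride = (width / 2 + pad - 1) & ~(pad - 1);
    const unsigned char* yPlane = buffer;
    const unsigned char* uPlane = yPlane + (size_t)yStride * height;
    const unsigned char* vPlane = uPlane + (size_t)cStride * (height / 2);
    Y = yPlane[y * yStride + x];
    U = uPlane[(y / 2) * cStride + (x / 2)];
    V = vPlane[(y / 2) * cStride + (x / 2)];
}

The tjPlaneWidth, tjPlaneHeight, and tjPlaneSizeYUV helpers can compute these plane dimensions for you instead of doing the arithmetic by hand.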
I am trying to edit the point cloud (stored in a FloatBuffer) in order to keep recorded points on the screen. However, when I display the points, they all lie on the x, y, or z axis. I am using Google's example Point Cloud program, so all I'm doing right now is copying the buffer so I can edit it, since the current buffer is read-only. I haven't changed anything else, because I need to get my copy working first. Here is my code for copying the buffer (adapted from transferring bytes from one ByteBuffer to another):
private FloatBuffer cloneBuffer(FloatBuffer original) {
    // multiplied by 4 and added 3 so the capacity would be the
    // same when converted to a FloatBuffer
    final ByteBuffer byteClone = (original.isDirect()) ?
            ByteBuffer.allocateDirect(original.capacity() * 4 + 3) :
            ByteBuffer.allocate(original.capacity() * 4 + 3);
    final FloatBuffer clone = byteClone.asFloatBuffer();
    final FloatBuffer readOnlyCopy = original.asReadOnlyBuffer();
    readOnlyCopy.rewind();
    clone.put(readOnlyCopy);
    clone.flip();
    clone.position(original.position());
    return clone;
}
Looking at:
https://developers.google.com/project-tango/apis/known-issues
In the "Depth" section, notice that:
The IJ buffer of the XYZij struct is under development and not yet populated via the API.
Occasionally, or when under high CPU load, the depth flash may appear in the color image, or no depth points are returned. Let the device cool down and/or reboot.
Your problem may lie in the first point mentioned.
I have a point that moves (in one dimension), and I need it to move smoothly. So I think that its velocity has to be a continuous function, and I need to control the acceleration and then calculate its velocity and position.
The algorithm doesn't seem obvious to me, but I guess this must be a common problem; I just can't find the solution.
Notes:
The final destination of the object may change while it's moving and the movement needs to be smooth anyway.
I guess that a naive implementation would produce bouncing, and I need to avoid that.
This is a perfect candidate for using a "critically damped spring".
Conceptually you attach the point to the target point with a spring, or piece of elastic. The spring is damped so that you get no 'bouncing'. You can control how fast the system reacts by changing a constant called the "SpringConstant". This is essentially how strong the piece of elastic is.
Basically you apply two forces to the position, then integrate this over time. The first force is that applied by the spring, Fs = SpringConstant * DistanceToTarget. The second is the damping force, Fd = -CurrentVelocity * 2 * sqrt( SpringConstant ).
The CurrentVelocity forms part of the state of the system, and can be initialised to zero.
In each step, you multiply the sum of these two forces by the time step. This gives you the change in CurrentVelocity. Multiplying the updated velocity by the time step again gives you the displacement, which we add to the actual position of the point.
In C++ code:
float CriticallyDampedSpring( float a_Target,
                              float a_Current,
                              float & a_Velocity,
                              float a_TimeStep )
{
    float currentToTarget = a_Target - a_Current;
    float springForce = currentToTarget * SPRING_CONSTANT;
    float dampingForce = -a_Velocity * 2 * sqrt( SPRING_CONSTANT );
    float force = springForce + dampingForce;
    a_Velocity += force * a_TimeStep;
    float displacement = a_Velocity * a_TimeStep;
    return a_Current + displacement;
}
In the systems I was working with, a value of around 5 was a good starting point for experimenting with the spring constant. Setting it too high will result in too fast a reaction, and setting it too low means the point will react too slowly.
Note: you might be best off making a class that keeps the velocity state rather than having to pass it into the function over and over.
I hope this is helpful, good luck :)
EDIT: In case it's useful for others, it's easy to apply this to 2 or 3 dimensions. In this case you can just apply the CriticallyDampedSpring independently once for each dimension. Depending on the motion you want you might find it better to work in polar coordinates (for 2D), or spherical coordinates (for 3D).
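For example, a minimal sketch of the 3D case (using a simple vector struct of your own):

struct Vec3 { float x, y, z; };

void UpdatePoint(const Vec3& target, Vec3& position, Vec3& velocity, float timeStep)
{
    // Apply the spring independently on each axis.
    position.x = CriticallyDampedSpring(target.x, position.x, velocity.x, timeStep);
    position.y = CriticallyDampedSpring(target.y, position.y, velocity.y, timeStep);
    position.z = CriticallyDampedSpring(target.z, position.z, velocity.z, timeStep);
}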
I'd do something like Alex Deem's answer for trajectory planning, but with limits on force and velocity:
In pseudocode:
xtarget: target position
vtarget: target velocity
x: object position
v: object velocity
dt: timestep
F = Ki * (xtarget - x) + Kp * (vtarget - v);
F = clipMagnitude(F, Fmax);
v = v + F * dt;
v = clipMagnitude(v, vmax);
x = x + v * dt;

clipMagnitude(y, ymax):
    r = magnitude(y) / ymax
    if (r <= 1)
        return y;
    else
        return y * (1/r);
where Ki and Kp are tuning constants, Fmax and vmax are maximum force and velocity. This should work for 1-D, 2-D, or 3-D situations (magnitude(y) = abs(y) in 1-D, otherwise use vector magnitude).
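Here is a 1-D C++ translation of the pseudocode, with the constants and limits passed in (the names are just illustrative):

#include <cmath>

// One integration step of the force- and velocity-limited controller.
void step(float xtarget, float vtarget, float& x, float& v, float dt,
          float Ki, float Kp, float Fmax, float vmax)
{
    float F = Ki * (xtarget - x) + Kp * (vtarget - v);
    F = std::fmax(-Fmax, std::fmin(Fmax, F));   // clipMagnitude(F, Fmax) in 1-D
    v += F * dt;
    v = std::fmax(-vmax, std::fmin(vmax, v));   // clipMagnitude(v, vmax) in 1-D
    x += v * dt;
}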
It's not quite clear exactly what you're after, but I'm going to assume the following:
There is some maximum acceleration;
You want the object to have stopped moving when it reaches the destination;
Unlike velocity, you do not require acceleration to be continuous.
Let A be the maximum acceleration (by which I mean the acceleration is always between -A and A).
The equation you want is
v_f^2 = v_i^2 + 2 a d
where v_f = 0 is the final velocity, v_i is the initial (current) velocity, and d is the distance to the destination (when you switch from acceleration A to acceleration -A -- that is, from speeding up to slowing down; here I'm assuming d is positive).
Solving:
d = v_i^2 / (2A)
is the distance. (The negatives cancel).
If the current distance remaining is greater than d, speed up as quickly as possible. Otherwise, begin slowing down.
Let's say you update the object's position every t_step seconds. Then:
new_position = old_position + old_velocity * t_step + (1/2)a(t_step)^2
new_velocity = old_velocity + a * t_step.
If the destination is between new_position and old_position (i.e., the object reached its destination in between updates), simply set new_position = destination.
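A rough sketch of one update step following this reasoning (1-D; edge cases such as starting out while moving away from the destination are left out):

#include <cmath>

// A is the maximum acceleration, t_step the update interval.
void update(float destination, float& position, float& velocity, float A, float t_step)
{
    float toTarget = destination - position;
    float dir = (toTarget >= 0.0f) ? 1.0f : -1.0f;
    float stoppingDistance = velocity * velocity / (2.0f * A);

    // Speed up toward the target while there is still room to stop, otherwise slow down.
    float a = (std::fabs(toTarget) > stoppingDistance) ? dir * A : -dir * A;

    float old_position = position;
    position += velocity * t_step + 0.5f * a * t_step * t_step;
    velocity += a * t_step;

    // If the destination was crossed during this step, snap to it and stop.
    if ((destination - old_position) * (destination - position) <= 0.0f) {
        position = destination;
        velocity = 0.0f;
    }
}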
You need an easing formula, which you would call at a set interval, passing in the time elapsed, the start point, the end point, and the duration you want the animation to take.
Doing time-based calculations will account for slow clients and other random hiccups. Since it calculates based on time elapsed versus the time in which it has to complete, it will account for slow intervals between calls when returning how far along your point should be in the animation.
The jquery.easing plugin has a ton of easing functions you can look at:
http://gsgd.co.uk/sandbox/jquery/easing/
I've found it best to pass in 0 and 1 as my start and end points; since the function returns a floating-point value between the two, you can easily apply it to the real value you are modifying by multiplication.
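The idea is language-agnostic; as a minimal sketch (a quadratic ease-in-out, not taken from the plugin), such a function looks roughly like this:

// Quadratic ease-in-out: t is the elapsed time, d is the total duration.
// Returns a progress value between 0 and 1.
float easeInOutQuad(float t, float d)
{
    t /= d / 2.0f;
    if (t < 1.0f) return 0.5f * t * t;
    t -= 1.0f;
    return -0.5f * (t * (t - 2.0f) - 1.0f);
}

// Usage: value = start + (end - start) * easeInOutQuad(elapsed, duration);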