Can I manipulate functions/code inside UnityEngine in Unity 5? - unityscript

Is it possible to change some of the fundamental code inside UnityEngine?
For example, I want to know if I can change how acceleration responds to the force applied to an object,
changing from F = m*a to F = m*a^2.
I tried doing it this way:
void Update () {
    if (enableKinematic) {
        // Standard Newtonian integration: a = F / m, i.e. F = m * a.
        // Under the proposed F = m * a^2 law the magnitude would instead be a = sqrt(F / m).
        accelerationVector = forceVector / mass;
        velocityVector += accelerationVector * Time.deltaTime;
        positionVector = velocityVector * Time.deltaTime; // displacement for this frame
        // Debug.Log ("a:" + accelerationVector + " v:" + velocityVector + " d:" + positionVector);
        // Apply position
        transform.position += positionVector;
    }
}
But I realized that collision detection gets pretty complicated this way.

Related

When creating an angle, how do I control the attributes of the automatically created points?

I'm working with a polygon and attempting to create angles with labels, but when the angles are created, so are the points used to define them. This would be fine, except that I can't control the labels on the automatically created points (and I don't know what they are called or how to find out).
var points = [
    [0, 0],
    [0, 5],
    [3, 0]
];
for (k = 0; k < showAngle.length; k++) {
    if (showAngle[k] == 1) {
        var angle = board.create('angle', [points[k], points[((k + 1) % points.length)], points[((k + 2) % points.length)]], {fixed: true});
    } else if (showAngle[k] == 2) {
        var angle = board.create('angle', [points[k], points[((k + 1) % points.length)], points[((k + 2) % points.length)]], {
            fixed: false,
            name: function() {
                return ((180 / Math.PI) * JXG.Math.Geometry.rad(points[k], points[((k + 1) % points.length)], points[((k + 2) % points.length)])).toFixed(1) + '°';
            }
        });
    }
}
https://jsfiddle.net/jscottuq/acyrLxfh/12/ contains what I've got so far.
The arrays showLen and showAngle are setting what labels are shown for each side/angle (0 - no label, 1 - name , 2 - measurement).
These will be set when the jsxgraph is created.
For the time being, the possibility to control the style of the newly created points of an angle is missing. We will add this soon.
However, a solution would be to use the already existing points, which are hidden in this example. For this it would be helpful to keep a list of these points, e.g. jxg_points:
var jxg_points = [];
for (i = 0; i < points.length; i++) {
    var rise = points[(i + 1) % points.length][1] - points[i][1];
    var run = points[(i + 1) % points.length][0] - points[i][0];
    var point = board.create('point', [points[i][0], points[i][1]], {
        fixed: true,
        visible: false
    });
    jxg_points.push(point); // Store the point
    points[i].pop();
    len[i] = Math.round((Math.sqrt(rise * rise + run * run) + Number.EPSILON) * 100) / 100;
}
Then the points can be reused for the angles without creating new points:
for (k = 0; k < showAngle.length; k++) {
    if (showAngle[k] == 1) {
        angle = board.create('angle', [
            jxg_points[k],
            jxg_points[((k + 1) % jxg_points.length)],
            jxg_points[((k + 2) % jxg_points.length)]
        ], {fixed: true});
    } else if (showAngle[k] == 2) {
        var angle = board.create('angle', [
            jxg_points[k],
            jxg_points[((k + 1) % jxg_points.length)],
            jxg_points[((k + 2) % jxg_points.length)]
        ], {
            fixed: false,
            name: function() {
                return ((180 / Math.PI) * JXG.Math.Geometry.rad(points[k], points[((k + 1) % points.length)], points[((k + 2) % points.length)])).toFixed(1) + '°';
            }
        });
    }
}
See it live at https://jsfiddle.net/d8an0epy/.

Compute missing box dimensions from given ones

Given the structure:
structure box_dimensions:
    int? left
    int? right
    int? top
    int? bottom
    point? top_left
    point? top_right
    point? bottom_left
    point? bottom_right
    point? top_center
    point? bottom_center
    point? center_left
    point? center_right
    point? center
    int? width
    int? height
    rectangle? bounds
where each field can be defined or not.
How would you implement the function check_and_complete(box_dimensions)?
That function should return an error if there are not enough fields defined to describe a box, or too many.
If the input is consistent, it should compute the undefined fields.
You can describe a box by its center, width and height, or by its top_left and bottom_right corners, etc.
The only solution I can think of contains way too many if-elses. I'm sure there's a smarter way to do it.
EDIT
If you wonder how I ended up with a structure like that, here is why:
I'm toying with the idea of a "layout by constraints" system:
The user defines a bunch of boxes, and for each box defines a set of constraints like "box_a.top_left = box_b.bottom_right" or "box_a.width = box_b.width / 2".
The real structure fields are actually expression ASTs, not values.
So I need to check whether a box is "underconstrained" or "overconstrained", and if it's OK, create the missing expression ASTs from the given ones.
Yes, certainly there will be too many if-elses.
Here's my attempt to keep them reasonably organized:
howManyLefts = 0
if (left is set) { realLeft = left; howManyLefts++; }
if (top_left is set) { realLeft = top_left.left; howManyLefts++; }
if (bottom_left is set) { realLeft = bottom_left.left; howManyLefts++; }
if (center_left is set) { realLeft = center_left.left; howManyLefts++; }
if (bounds is set) { realLeft = bounds.left; howManyLefts++; }
if (howManyLefts > 1) return error;
Now, repeat that code block for center, right and width.
Now you end up with howManyLefts, howManyCenters, howManyRights and howManyWidths, all of them being either zero or one, depending on whether the value was provided or not. You need exactly two values set and two unset, so:
if (howManyLefts + howManyRights + howManyCenters + howManyWidths != 2) return error
if (howManyWidths == 0)
{
    // howManyWidths is 0, so we look for the remaining 0 and assume the rest is 1s
    if (howManyCenters == 0)
        { realWidth = realRight - realLeft; realCenter = (realRight + realLeft) / 2; }
    else if (howManyLefts == 0)
        { realWidth = 2 * (realRight - realCenter); realLeft = realRight - realWidth; }
    else
        { realWidth = 2 * (realCenter - realLeft); realRight = realLeft + realWidth; }
}
else
{
    // howManyWidths is 1, so we look for the remaining 1 and assume the rest is 0s
    if (howManyCenters == 1)
        { realLeft = realCenter - realWidth / 2; realRight = realCenter + realWidth / 2; }
    else if (howManyLefts == 1)
        { realRight = realLeft + realWidth; realCenter = (realRight + realLeft) / 2; }
    else
        { realLeft = realRight - realWidth; realCenter = (realRight + realLeft) / 2; }
}
Now, repeat everything for the vertical axis (i.e. replacing { left, center, right, width } with { top, center, bottom, height }).
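To avoid writing the whole thing twice, the per-axis resolution can also be factored into a single function that you call once for the horizontal axis and once for the vertical one. Here is a minimal sketch in Scala, assuming the howMany* counting above has already collapsed each slot to at most one candidate value (the names are illustrative, not part of the original structure):

def resolveAxis(lo: Option[Double], center: Option[Double],
                hi: Option[Double], size: Option[Double]): Either[String, (Double, Double, Double, Double)] = {
    // Exactly two of the four values must be given, as in the check above.
    val defined = List(lo, center, hi, size).count(_.isDefined)
    if (defined != 2) Left(s"axis is under- or over-constrained ($defined of 4 values given)")
    else (lo, center, hi, size) match {
        case (Some(l), Some(c), None, None) => Right((l, c, 2 * c - l, 2 * (c - l)))
        case (Some(l), None, Some(h), None) => Right((l, (l + h) / 2, h, h - l))
        case (Some(l), None, None, Some(s)) => Right((l, l + s / 2, l + s, s))
        case (None, Some(c), Some(h), None) => Right((2 * c - h, c, h, 2 * (h - c)))
        case (None, Some(c), None, Some(s)) => Right((c - s / 2, c, c + s / 2, s))
        case (None, None, Some(h), Some(s)) => Right((h - s, h - s / 2, h, s))
        case _                              => Left("unreachable: exactly two values are defined")
    }
}

// Horizontal axis: resolveAxis(left, center.map(_.x), right, width)
// Vertical axis:   resolveAxis(top,  center.map(_.y), bottom, height)
// e.g. resolveAxis(Some(0), None, Some(10), None) == Right((0.0, 5.0, 10.0, 10.0))

Each case returns the completed (lo, center, hi, size) tuple for that axis, so the if-else nest appears exactly once instead of once per axis.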

Scala implementation of a Sobel filter

I'm looking for some help with an IT school project. We need to create a program which can detect roads in a satellite photograph. Our group decided to use a function to detect edges. We searched for different solutions and filters on the Internet and decided to use the Sobel filter.
We have tried to implement this filter in Scala but it didn't work. We used different web pages to help us, some of them on Stack Overflow (here). We used this one to help us and tried to translate the code: Sobel filter in Ruby.
Start Code --
codeGrey(); // This function converts the RGB image to grey levels
var sobel_x: Array[Array[Double]] = Array(
    Array(-1, 0, 1),
    Array(-2, 0, 2),
    Array(-1, 0, 1))
var sobel_y: Array[Array[Double]] = Array(
    Array(1, 2, 1),
    Array(0, 0, 0),
    Array(-1, -2, 1))
for (x <- 1 to wrappedImage.height - 2) {
    for (y <- 1 to wrappedImage.width - 2) {
        var a = (image2D(x - 1)(y - 1) & 0x00FF0000) >> 16
        var b = (image2D(x)(y - 1) & 0x00FF0000) >> 16
        var c = (image2D(x + 1)(y - 1) & 0x00FF0000) >> 16
        var d = (image2D(x - 1)(y) & 0x00FF0000) >> 16
        var e = (image2D(x)(y) & 0x00FF0000) >> 16
        var f = (image2D(x + 1)(y) & 0x00FF0000) >> 16
        var g = (image2D(x - 1)(y + 1) & 0x00FF0000) >> 16
        var h = (image2D(x)(y + 1) & 0x00FF0000) >> 16
        var i = (image2D(x + 1)(y + 1) & 0x00FF0000) >> 16
        var pixel_x =
            (sobel_x(0)(0) * a) + (sobel_x(0)(1) * b) + (sobel_x(0)(2) * c) +
            (sobel_x(1)(0) * d) + (sobel_x(1)(1) * e) + (sobel_x(1)(2) * f) +
            (sobel_x(2)(0) * g) + (sobel_x(2)(1) * h) + (sobel_x(2)(2) * i);
        var pixel_y =
            (sobel_y(0)(0) * a) + (sobel_x(0)(1) * b) + (sobel_x(0)(2) * c) +
            (sobel_y(1)(0) * d) + (sobel_x(1)(1) * e) + (sobel_x(1)(2) * f) +
            (sobel_y(2)(0) * g) + (sobel_x(2)(1) * h) + (sobel_x(2)(2) * i);
        var res = (Math.sqrt((pixel_x * pixel_x) + (pixel_y * pixel_y)).ceil).toInt
        image2D(x)(y) = 0xFF000000 + (res * 65536 + res * 256 + res);
    }
}
End Code --
The image returned by this implementation is just an image with black and white pixels and I don't know why. I have no experience in image processing and we only learned Scala 8 weeks ago, so that doesn't help.
I'm sorry, my English is not perfect, so please forgive me if I didn't write correctly.
I'm not sure I grasp all the details of your solution; anyway, here are some observations:
Consider using vals instead of vars: Scala prefers immutability, and you are not really changing any of those variables.
In Scala you can write nested for loops as a single for comprehension over two variables (check here for details: Nested iteration in Scala). I think it makes the code cleaner.
I presume image2D is the array of arrays in which you are holding your image. In the last line of your nested for loop you are changing the current pixel value. This is not good, because you will access that same pixel later when you calculate your a, b, ..., h, i values: the center pixel of the current iteration is a side pixel of the next iteration. I think you should write the result into a different matrix, as in the sketch below.
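Here is a minimal sketch of that idea, assuming image2D is an Array[Array[Int]] of packed ARGB grey pixels as in your code. Note that it also uses sobel_y for pixel_y, with the usual -1, -2, -1 row; in your version pixel_y reuses sobel_x, which may be part of the problem:

val sobelX = Array(Array(-1.0, 0.0, 1.0), Array(-2.0, 0.0, 2.0), Array(-1.0, 0.0, 1.0))
val sobelY = Array(Array(-1.0, -2.0, -1.0), Array(0.0, 0.0, 0.0), Array(1.0, 2.0, 1.0))

val height = image2D.length
val width  = image2D(0).length
// write into a separate buffer, so neighbouring pixels still read the original values
val output = Array.ofDim[Int](height, width)

for (x <- 1 until height - 1; y <- 1 until width - 1) {
    // red channel of the (already grey) pixel
    def grey(i: Int, j: Int): Int = (image2D(i)(j) & 0x00FF0000) >> 16
    var px = 0.0
    var py = 0.0
    for (i <- -1 to 1; j <- -1 to 1) {
        val g = grey(x + i, y + j)
        px += sobelX(i + 1)(j + 1) * g
        py += sobelY(i + 1)(j + 1) * g
    }
    // clamp so the grey value stays inside one 8-bit channel
    val res = math.min(255, math.sqrt(px * px + py * py).toInt)
    output(x)(y) = 0xFF000000 | (res << 16) | (res << 8) | res
}

With val kernels and a separate output array, the filter never reads a pixel it has already overwritten.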

Particle system running slowly

Here is my update function. As soon as I turn update on, my program gets slower. I'm not even able to render 25000 particles at a time. voxels is a 3-dimensional array. How do I change my update function so that the calculation is done faster? I want to be able to render at least 100000 particles.
function update() {
    newTime = Date.now();
    elapsedTime = newTime - oldTime;
    oldTime = newTime;
    for (var index = 0; index < particles.vertices.length; index++) {
        // particle's old position
        var oldPosition = particles.vertices[index];
        // making sure particles do not go out of the boundary
        if (oldPosition.x > screenSquareLength || oldPosition.x < -screenSquareLength) {
            oldPosition.x = 2 * screenSquareLength * Math.random() - screenSquareLength;
        }
        if (oldPosition.y > screenSquareLength || oldPosition.y < -screenSquareLength) {
            oldPosition.y = 2 * screenSquareLength * Math.random() - screenSquareLength;
        }
        if (oldPosition.z > screenSquareDepth / 2 || oldPosition.z < -screenSquareDepth / 2) {
            oldPosition.z = screenSquareDepth * Math.random() - screenSquareDepth / 2;
        }
        var oldVelocity = particlesExtraInfo[index].velocity;
        var fieldVelocity;
        var xIndex, yIndex, zIndex;
        var particleColor;
        var activeVoxel;
        try {
            // calculating the index of the voxel
            xIndex = Math.floor((oldPosition.x + screenSquareLength) / voxelSize);
            yIndex = Math.floor((oldPosition.y + screenSquareLength) / voxelSize);
            zIndex = Math.floor((screenSquareDepth / 2 - oldPosition.z) / voxelSize);
            // getting velocity and color for the particle, and whether the voxel is visible
            fieldVelocity = voxels[zIndex][xIndex][yIndex].userData["velocity"];
            particleColor = voxels[zIndex][xIndex][yIndex].userData["color"];
            activeVoxel = voxels[zIndex][xIndex][yIndex].userData["visible"];
        } catch (e) {
            console.log("indexX = " + xIndex + " \t Yindex = " + yIndex + " \t zIndex = " + zIndex);
        }
        try {
            var vx = ((oldVelocity.x + fieldVelocity.x) * elapsedTime);
            var vy = ((oldVelocity.y + fieldVelocity.y) * elapsedTime);
            var vz = ((oldVelocity.z + fieldVelocity.z) * elapsedTime);
            var magnitude = Math.abs(vx) + Math.abs(vy) + Math.abs(vz); // Math.sqrt(vx*vx + vy*vy + vz*vz);
            var normalized = new THREE.Vector3(vx / magnitude, vy / magnitude, vz / magnitude);
            if ((particles.vertices[index].x < 0.1 && particles.vertices[index].x > -0.1) && (particles.vertices[index].y < 0.1 && particles.vertices[index].y > -0.1) && (particles.vertices[index].z < 0.1 && particles.vertices[index].z > -0.1)) {
                particles.vertices[index].x = 2 * screenSquareLength * Math.random() - screenSquareLength;
                particles.vertices[index].y = 2 * screenSquareLength * Math.random() - screenSquareLength;
                particles.vertices[index].z = 2 * screenSquareLength * Math.random() - screenSquareLength;
            }
            // if the voxel is not part of the model, update the particle position and velocity
            if (activeVoxel == 0) {
                particles.colors[index] = new THREE.Color(particleColor); // new THREE.Color(0, 0, 1);
                particles.colorsNeedUpdate = true;
                particles.vertices[index].x += normalized.x / slowingFactor;
                particles.vertices[index].y += normalized.y / slowingFactor;
                particles.vertices[index].z += normalized.z / slowingFactor;
                particles.verticesNeedUpdate = true;
                particlesExtraInfo[index].velocity = normalized;
            } else {
                // the voxel is part of the model, so update the color property of the particle
                particles.colors[index] = new THREE.Color(0, 0, 1);
                particles.colorsNeedUpdate = true;
                particles.vertices[index].x += normalized.x / (slowingFactor * 200);
                particles.vertices[index].y += normalized.y / (slowingFactor * 200);
                particles.vertices[index].z += normalized.z / (slowingFactor * 200);
                particles.verticesNeedUpdate = true;
                particlesExtraInfo[index].velocity = new THREE.Vector3(normalized.x / slowingFactor, normalized.y / slowingFactor, normalized.z / slowingFactor);
            }
        } catch (e) {
        }
    }
}
I don't know much about what exactly happens when you update a buffer like this, but I know that it can be slow.
While 25k may be a lot for what you're trying to do (I experimented with 5k and had trouble), there is no reason why you can't optimize your JS before trying to move everything to the GPU (for example).
var foo = 0;
foo += normalized.x / someFactor;
// better done this way:
var invSomeFactor = 1 / someFactor;
// now you avoid dividing by the same thing many times in your loop
foo += normalized.x * invSomeFactor;
Math.random() is pretty expensive; you could make a lookup table (a large one) and fetch precomputed values from it.
var myLookupTable = [];
var MAX_VALUES = 2048;
for (var i = 0; i < MAX_VALUES; i++) {
    myLookupTable.push(Math.random());
}
// and then you can have a stride for example
var RAND_STRIDE = 0;
// and in the loop
someVec.x += something.x * myLookupTable[RAND_STRIDE++];
RAND_STRIDE %= MAX_VALUES; // read from the beginning
Finally, you could write a fragment shader that reads from one buffer and writes into another, doing all of this logic in the process. Each fragment is one of your particles; once you run this pass and compute the positions, you read that buffer in your particle vertex shader and just assign those positions.

OpenCL (JOCL) - 2D calculus over two arrays in Kernel

I'm asking this here because I thought I understood how OpenCL works, but... I think there are several things I don't get.
What I want to do is to get the difference between all the values of two arrays, then calculate the hypot of each pair and finally get the maximum hypot value. So if I have:
double[] arrA = new double[]{1, 2, 3};
double[] arrB = new double[]{6, 7, 8};
Calculate
dx1 = 1 - 1; dx2 = 2 - 1; dx3 = 3 - 1; dx4 = 1 - 2; ... dxLast = 3 - 3
dy1 = 6 - 6; dy2 = 7 - 6; dy3 = 8 - 6; dy4 = 6 - 7; ... dyLast = 8 - 8
(The extreme dx and dy will be 0, but I don't mind ignoring those cases for now.)
Then calculate each hypot based on hypot(dx(i), dy(i)).
And once all these values are obtained, get the maximum hypot value.
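(For these sample arrays the expected maximum would come from the pair of values 1 and 3 in arrA and 6 and 8 in arrB: hypot(3 - 1, 8 - 6) = hypot(2, 2) ≈ 2.83.)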
So, I have the following kernel defined:
String programSource =
"#ifdef cl_khr_fp64 \n"
+ " #pragma OPENCL EXTENSION cl_khr_fp64 : enable \n"
+ "#elif defined(cl_amd_fp64) \n"
+ " #pragma OPENCL EXTENSION cl_amd_fp64 : enable \n"
+ "#else "
+ " #error Double precision floating point not supported by OpenCL implementation.\n"
+ "#endif \n"
+ "__kernel void "
+ "sampleKernel(__global const double *bufferX,"
+ " __global const double *bufferY,"
+ " __local double* scratch,"
+ " __global double* result,"
+ " __const int lengthX,"
+ " __const int lengthY){"
+ " const int index_a = get_global_id(0);"//Get the global indexes for 2D reference
+ " const int index_b = get_global_id(1);"
+ " const int local_index = get_local_id(0);"//Current thread id -> Should be the same as index_a * index_b + index_b;
+ " if (local_index < (lengthX * lengthY)) {"// Load data into local memory
+ " if(index_a < lengthX && index_b < lengthY)"
+ " {"
+ " double dx = (bufferX[index_b] - bufferX[index_a]);"
+ " double dy = (bufferY[index_b] - bufferY[index_a]);"
+ " scratch[local_index] = hypot(dx, dy);"
+ " }"
+ " } "
+ " else {"
+ " scratch[local_index] = 0;"// Infinity is the identity element for the min operation
+ " }"
//Make a Barrier to make sure all values were set into the local array
+ " barrier(CLK_LOCAL_MEM_FENCE);"
//If someone can explain the offset thing to me I'll really appreciate that...
//I just know there is always a division by 2
+ " for(int offset = get_local_size(0) / 2; offset > 0; offset >>= 1) {"
+ " if (local_index < offset) {"
+ " float other = scratch[local_index + offset];"
+ " float mine = scratch[local_index];"
+ " scratch[local_index] = (mine > other) ? mine : other;"
+ " }"
+ " barrier(CLK_LOCAL_MEM_FENCE);"
//A barrier to make sure that all values where checked
+ " }"
+ " if (local_index == 0) {"
+ " result[get_group_id(0)] = scratch[0];"
+ " }"
+ "}";
For this case, the defined global work group (GWG) size is (100, 100, 0) and the local work item (LWI) size is (10, 10, 0).
So, for this example, both arrays have size 10, and the GWG and LWI sizes are obtained as follows:
//clGetKernelWorkGroupInfo(kernel, device, CL.CL_KERNEL_WORK_GROUP_SIZE, Sizeof.size_t, Pointer.to(buffer), null);
long kernel_work_group_size = OpenClUtil.getKernelWorkGroupSize(kernel, device.getCl_device_id(), 3);
//clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_ITEM_SIZES, Sizeof.size_t * numValues, Pointer.to(buffer), null);
long[] maxSize = device.getMaximumSizes();
maxSize[0] = ( kernel_work_group_size > maxSize[0] ? maxSize[0] : kernel_work_group_size);
maxSize[1] = ( kernel_work_group_size > maxSize[1] ? maxSize[1] : kernel_work_group_size);
maxSize[2] = ( kernel_work_group_size > maxSize[2] ? maxSize[2] : kernel_work_group_size);
// maxSize[2] =
long xMaxSize = (x > maxSize[0] ? maxSize[0] : x);
long yMaxSize = (y > maxSize[1] ? maxSize[1] : y);
long zMaxSize = (z > maxSize[2] ? maxSize[2] : z);
long local_work_size[] = new long[] { xMaxSize, yMaxSize, zMaxSize };
int numWorkGroupsX = 0;
int numWorkGroupsY = 0;
int numWorkGroupsZ = 0;
if(local_work_size[0] != 0)
numWorkGroupsX = (int) ((total + local_work_size[0] - 1) / local_work_size[0]);
if(local_work_size[1] != 0)
numWorkGroupsY = (int) ((total + local_work_size[1] - 1) / local_work_size[1]);
if(local_work_size[2] != 0)
numWorkGroupsZ = (int) ((total + local_work_size[2] - 1) / local_work_size[2]);
long global_work_size[] = new long[] { numWorkGroupsX * local_work_size[0],
numWorkGroupsY * local_work_size[1], numWorkGroupsZ * local_work_size[2]};
The thing is, I'm not getting the expected values, so I decided to run some tests based on a smaller kernel, changing the [VARIABLE TO TEST VALUES] object returned in the result array:
/**
* The source code of the OpenCL program to execute
*/
private static String programSourceA =
"#ifdef cl_khr_fp64 \n"
+ " #pragma OPENCL EXTENSION cl_khr_fp64 : enable \n"
+ "#elif defined(cl_amd_fp64) \n"
+ " #pragma OPENCL EXTENSION cl_amd_fp64 : enable \n"
+ "#else "
+ " #error Double precision floating point not supported by OpenCL implementation.\n"
+ "#endif \n"
+ "__kernel void "
+ "sampleKernel(__global const double *bufferX,"
+ " __global const double *bufferY,"
+ " __local double* scratch,"
+ " __global double* result,"
+ " __const int lengthX,"
+ " __const int lengthY){"
//Get the global indexes for 2D reference
+ " const int index_a = get_global_id(0);"
+ " const int index_b = get_global_id(1);"
//Current thread id -> Should be the same as index_a * index_b + index_b;
+ " const int local_index = get_local_id(0);"
// Load data into local memory
//Only print values if index_a < ArrayA length
//Only print values if index_b < ArrayB length
//Only print values if local_index < (lengthX * lengthY)
//Only print values if this is the first work group.
+ " if (local_index < (lengthX * lengthY)) {"
+ " if(index_a < lengthX && index_b < lengthY)"
+ " {"
+ " double dx = (bufferX[index_b] - bufferX[index_a]);"
+ " double dy = (bufferY[index_b] - bufferY[index_a]);"
+ " result[local_index] = hypot(dx, dy);"
+ " }"
+ " } "
+ " else {"
// Infinity is the identity element for the min operation
+ " result[local_index] = 0;"
+ " }"
The returned values are far from the expected ones, but if the [VARIABLE TO TEST VALUES] is (index_a * index_b) + index_a, almost every value of the returned array has the correct (index_a * index_b) + index_a value, I mean:
result[0] -> 0
result[1] -> 1
result[2] -> 2
....
result[97] -> 97
result[98] -> 98
result[99] -> 99
but some values are: -3.350700319577517E-308....
What am I not doing correctly???
I hope this is well explained and not so long that it makes you angry with me....
Thank you so much!!!!
TomRacer
You have many problems in your code, and some of them are conceptual. I think you should read the standard or an OpenCL guide completely before starting to code, because some of the calls you are using behave differently from what you expect.
1. Work-groups and work-items are NOT like CUDA. If you want 100x100 work-items separated into 10x10 work-groups, you use (100, 100) as the global size and (10, 10) as the local size (see the sketch after these points). Unlike CUDA, the global size is not multiplied by the local size internally.
1.1. In your test code, if you are using 10x10 with 10x10, then you are not filling the whole space, and the non-filled area will still contain garbage like -X.xxxxxE-308.
2. You should not use lengthX and lengthY and put a lot of ifs in your code. OpenCL has a way to enqueue kernels with an offset and with a specific number of work-items, so you can control this from the host side. BTW, doing it with ifs is a performance loss and is never good practice, since the code is less readable.
3. get_local_size(0) gives you the local size of axis 0 (10 in your case). What is it that you don't understand about this call? Why do you always divide it by 2?
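As a minimal sketch of point 1 (shown here with the JOCL binding from Scala, but the call is identical from Java; commandQueue and kernel are assumed to be already set up):

import org.jocl.CL._

// 100x100 work-items in total, executed in 10x10 work-groups
val globalWorkSize = Array(100L, 100L)
val localWorkSize  = Array(10L, 10L)
// the 4th argument is the global work offset mentioned in point 2 (null means no offset)
clEnqueueNDRangeKernel(commandQueue, kernel, 2, null, globalWorkSize, localWorkSize, 0, null, null)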
I hope this can help you in your debugging process.
Cheers
Thank you for your answer. First of all, this kernel code is based on the commutative reduction code explained here: http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-simple-reductions/.
So I'm using that code, but I added some things like the 2D operations.
Regarding the points you mentioned before:
1.1- Actually the global work group size is (100, 100, 0)... That 100 is the result of multiplying 10 x 10, where 10 is the current array size, so my global work group size is based on this rule... then the local work item size is (10, 10, 0).
The global work group size must be a multiple of the local work item size; I have read this in many examples and I think this is OK.
1.2- In my test code I'm using the same arrays; in fact, if I change the array size, the GWG size and LWI size will change dynamically.
2.1- There are not so many "if"s there, there are just 3: the first one checks whether I must compute the hypot() based on the array objects or fill that slot with zero.
The second and third "if"s are just part of the reduction algorithm, which seems to be fine.
2.2- Regarding lengthX and lengthY, yeah, you are right, but I haven't got that yet; how should I use that?
3.1- Yeah, I know that, but I realized that I'm not using the Y axis id so maybe there is another problem here.
3.2- The reduction algorithm iterates over pairs of elements stored in the scratch array, keeping the maximum value of each pair, so each pass of the "for" loop reduces the number of elements still to be compared to half of the previous quantity.
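(For example, with a local size of 8, offset takes the values 4, 2, 1: after the first pass, items 0 to 3 hold the maxima of the pairs (0,4) ... (3,7), and after the last pass item 0 holds the maximum of the whole work-group.)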
Also, I'm going to post some changes to the main kernel code and the test kernel code, because there were some errors.
Greetings...!!!

Resources