Control ball movement by tilting device in Unity3D - unityscript

I'm using the following script to control a ball but it doesn't do exactly what I want.
Our 3D game will be played in landscape mode with the home button (or bottom) of the device in the right hand. Tilting (not turning) the device to the left should make the ball roll left, tilting to the right should make it roll right. Tilting the device down (top of device going down) should make the ball roll faster and tilting the device upward should slow it down.
I don't want the ball to indefinitely accelerate either.
The code below expects the device to be held upright rather than lying flat, and it moves the ball by turning the device, not by tilting it.
void FixedUpdate()
{
    // Player movement on mobile devices:
    // build the force vector from the accelerometer
    Vector3 movement = new Vector3(-Input.acceleration.x, 0.0f, -Input.acceleration.z);
    // Add force to the rigidbody
    var move = movement * speed * Time.deltaTime;
    rigidbdy.AddForce(move);
}

For your tilting problem, you likely just need to choose something other than (-Input.acceleration.x, 0.0f, -Input.acceleration.z). In the example in the documentation they use (-Input.acceleration.y, 0.0f, Input.acceleration.x) for tilt controls.
For the max speed issue, just add a check for rigidbdy.velocity.magnitude > maxSpeed in your code and clamp the velocity if it exceeds that limit.
public float speed;    // force multiplier (already a field in your script)
public float maxSpeed;

void FixedUpdate()
{
    // Player movement on mobile devices:
    // build the force vector from the accelerometer
    Vector3 movement = new Vector3(-Input.acceleration.y, 0.0f, Input.acceleration.x);
    // Add force to the rigidbody
    var move = movement * speed * Time.deltaTime;
    rigidbdy.AddForce(move);
    // Limit the max speed
    if (rigidbdy.velocity.magnitude > maxSpeed)
    {
        rigidbdy.velocity = rigidbdy.velocity.normalized * maxSpeed;
    }
}
That will cause the velocity to be capped to whatever value you have set for maxSpeed in the inspector.

Related

How to apply two-finger touch anywhere in the image, such that the point stays at the same location

I have a 2D texture image that is zoomed in/out via 2-finger touch and pinch.
Currently the image is not panned, i.e. the center of the image is always in the middle.
I want the center point between the two-finger touch to stay between the 2 fingers.
If I pinch exactly in the center of the image, the center image point stays between the fingers - good!
But if I pinch near the corner of the image, the point moves away relative to the 2 fingers, because of the zoom.
So I need to apply some pan in addition to the zoom, to make the point appear in the same place.
I basically need to transform the camera position such that, for every zoom, the same world coordinate is projected to the same screen coordinate.
Figures 1-3 illustrate the problem.
Figure 1 is the original image.
Currently, when I zoom in, the camera stays in the same position, so the object between the 2 fingers (the cat's eye on the right) drifts from between the 2 fingers as the image zooms (Figure 2).
I want to pan the camera such that the object between the 2 fingers stays between the 2 fingers even after zooming the image (Figure 3).
I used the code below, but the object still drifts as the image zooms in/out.
How should I calculate the amount of shift that needs to be applied to the camera?
Thanks
Code to calculate the amount of shift that needs to be applied to the camera:
handleTwoFingerTouchMove( p3_inScreenCoord ) {
    // normalize the screen coord to be in the range of [-1, 1]
    // (See method1 in https://stackoverflow.com/questions/13542175/three-js-ray-intersect-fails-by-adding-div/)
    let point2dNormalizedX = ( ( p3_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1;
    let point2dNormalizedY = -( ( p3_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1;

    // calc p3 before zoom (in world coords)
    let p3_beforeZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // Apply zoom
    this.dollyInOut( this.getZoomScale(), true );

    // calc p3 after zoom (in world coords)
    let p3_afterZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // calc the required shift in camera position
    let deltaX = p3_afterZoom.x - p3_beforeZoom.x;
    let deltaZ = p3_afterZoom.z - p3_beforeZoom.z;

    // shift the camera position
    this.pan( deltaX, deltaZ );
};
I was able to solve my problem. Here is my solution in the hope that it helps others.
When first applying a 2-finger touch (i.e. on the touchstart event), the code computes:
- centerPoint3d_inWorldCoord0 (Vector3): the world-coordinate of the object pointed at (e.g. the cat's eye on the right) when the two-finger touch starts
- centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized (Vector2): the screen-coordinate anchor for zooming via two-finger touch
While zooming in/out via two-finger pinch (on the touchmove event), in the event listener function, immediately after applying the zoom, I call the following code:
// Calculate centerPoint3d_inWorldCoord2, which is the new world-coordinate for
// centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized given the new zoom setting.
let centerPoint3d_inWorldCoord2 = new THREE_Vector3(
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
    -1 ).unproject( camera );

// compute the shift between the new and the original world-coordinate
let delta_inWorldCoords = new THREE_Vector2(
    centerPoint3d_inWorldCoord2.x - centerPoint3d_inWorldCoord0.x,
    centerPoint3d_inWorldCoord2.z - centerPoint3d_inWorldCoord0.z );

// pan the camera to compensate for the shift
pan_usingWorldCoords( delta_inWorldCoords );
The function pan_usingWorldCoords shifts the camera in the x axis (panLeft), and then in the y axis (panUp):
pan_usingWorldCoords( delta_inWorldCoord ) {
    panLeft( delta_inWorldCoord.x );
    panUp( delta_inWorldCoord.y );
};
The functions panLeft and panUp are similar to the ones used in three.js-r114/examples/jsm/controls/OrbitControls.js.
Initially the object pointed at still drifted from between the 2 fingers as the image zoomed in/out. To fix this, I added this.camera.updateProjectionMatrix() at the end of each function, so that the projection matrix is updated at the end of panLeft before it is used again in panUp.
With the code above, and after updating the projection matrix at the end of panLeft and panUp, the object pointed at when starting the two-finger touch (e.g. the eye on the right) stays between the 2 fingers while zooming via two-finger pinch.

How can I optimize an animation in Processing, and keep it from leaving a trail of images?

I am creating a model of a solar system in Processing, and after removing the background I noticed the planets were leaving a trail of their image behind them. The program runs fine with the background back in, but I want to add a lot more, and I am sure this is inefficient and will bog things down.
I am very new to Processing and I am really not sure how to solve this. Maybe delete previous images after a delay to create a shortened trail?
These are just the parts I think are important, cherry-picked from the code; this is the example for one planet. Sorry if the code is clunky; any suggestions are happily accepted.
Planet p1;
// the texture is used in Planet.display1(), so it needs to be declared here
PImage mercury;

void setup() {
  mercury = loadImage("mercury.png");
  p1 = new Planet(40, random(TWO_PI), 0.05);
}

void draw() {
  //background(0);
  translate(width / 2, height / 2);
  p1.display1();
  p1.orbit();
}

class Planet {
  float radius;
  float angle;
  float distance;
  float orbitSpeed;

  Planet(float r, float d, float o) {
    radius = r;
    distance = d;
    orbitSpeed = o;
    angle = random(TWO_PI);
  }

  void orbit() {
    angle = angle + orbitSpeed;
  }

  void display1() {
    pushMatrix();
    rotate(angle);
    translate(distance, 0);
    imageMode(CENTER);
    image(mercury, radius, radius, 10, 10);
    popMatrix();
  }
}
I realized that this would probably happen, and I am not sure how to stop it.
The behavior you describe is simply the nature of computer graphics; it's how games, operating systems, and hardware displays all work: they clear and redraw everything every frame.
In Processing, graphic objects that are pushed to a buffer remain there indefinitely until the buffer is cleared or something is drawn on top of them (this is why the planets leave a trail when background() isn't called: previous frames remain in the buffer).
You are worried about background() being inefficient. Don't be; it's one of the fastest operations (it simply sets every pixel to the value given by the user).
Processing does provide a clear() function, but this is equivalent to background(0).
If you are still concerned about efficiency and speed, one way to speed up Processing is to use the FX2D renderer rather than the default AWT renderer. Another way is to cache drawn objects in PGraphics objects to prevent successive rasterization (since your planets are image files and not drawn with Processing, you needn't worry about this).
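For illustration, here is a minimal, hypothetical sketch of the PGraphics caching idea (again, your image-based planets don't need it): a static starfield is rasterized once into an offscreen buffer and then cheaply blitted every frame.
PGraphics stars;

void setup() {
  size(300, 300);
  // draw the static starfield once into an offscreen buffer
  stars = createGraphics(width, height);
  stars.beginDraw();
  stars.background(0);
  stars.stroke(255);
  for (int i = 0; i < 200; i++) {
    stars.point(random(width), random(height));
  }
  stars.endDraw();
}

void draw() {
  // blit the cached buffer instead of re-drawing 200 points per frame
  image(stars, 0, 0);
}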
Your code is simple enough that it doesn't need optimisations at this stage.
As micycle mentions, you are drawing an image at a translated position, pretty similar to blitting.
In terms of the trails, one common trick is to not clear the screen completely, but to draw a transparent rectangle as the background instead. The more transparency, the longer the trails.
Here's a tweaked version of your code:
// planet object
Planet p1;
// planet texture
PImage mercury;

void setup() {
  size(300, 300);
  // draw images from their center
  imageMode(CENTER);
  // clear to black once
  background(0);
  // remove strokes (we'll use rect() later)
  noStroke();
  // set the fill to black with alpha 9 out of 255 (~3.5% opacity)
  fill(0, 9);
  // init texture
  mercury = loadImage("mercury.png");
  // init planet
  p1 = new Planet(40, random(TWO_PI), 0.05);
}

void draw() {
  // draw a transparent rectangle instead of completely clearing the screen
  rect(0, 0, width, height);
  // render planet
  translate(width / 2, height / 2);
  p1.display1();
  p1.orbit();
}

class Planet {
  float radius;
  float angle;
  float distance;
  float orbitSpeed;

  Planet(float r, float d, float o) {
    radius = r;
    distance = d;
    orbitSpeed = o;
    angle = random(TWO_PI);
  }

  void orbit() {
    angle = angle + orbitSpeed;
  }

  void display1() {
    pushMatrix();
    rotate(angle);
    translate(distance, 0);
    image(mercury, radius, radius, 10, 10);
    popMatrix();
  }
}
It's an efficient, quick'n'dirty hack, as you won't need to store previous positions and redraw multiple times; however, it has its limitations in terms of the flexibility of the trails. Hopefully tweaking the fill() alpha parameter will get you the desired effect.
Later on, if you're drawing many, many planets and things start running slow, have a peek at VisualVM. Profile the CPU and see which methods take the longest to complete, then focus on those. You don't need to optimise everything, just the slowest calls. Remember that Processing has multiple renderers: JAVA2D is the default one, but there's also FX2D and P2D/P3D, and they behave differently. I strongly recommend optimising at the last moment (otherwise code might be less flexible and readable, which will slow down development/iteration).
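For reference, switching renderers is a one-line change in size(); a minimal sketch (FX2D ships with Processing 3):
void setup() {
  // the third argument picks the renderer: FX2D (JavaFX) or P2D (OpenGL)
  // instead of the default JAVA2D
  size(300, 300, FX2D);
}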

Using camera to focus on object after translations and rotations in Processing 3.0

I have a program where a box (player) can land on the ground (environment), move, and jump around. In the process of trying to make the box's forward direction independent from the environment, I used a couple of translations and rotations. This works fine for the box's movement, but now I do not know the coordinates of the box, which I need for the camera function to focus on it.
void setup() {
  size(600, 600, P3D);
  PVector eye = new PVector(0, 0, 0);
  PVector translations = new PVector(0, 0, 0);
  background(200);
  translate(width/2, height/2);
  rotateZ(mouseX/(width/PI));
  // ground would be drawn here
  translate(0, 0, -400);
  translate(50, 100, 490);
  rotateZ(-mouseX/(width/PI));
  box(150);
  translations.add(0, 0, 0); // translations occurring from rotations and translation
  eye.add(translations);
  eye.add(500, 1000, 500); // move the eye away from the point it's looking at
  camera(eye.x, eye.y, eye.z, translations.x, translations.y, translations.z, 0, 0, -1);
  eye.mult(0);
  translations.mult(0);
}
How can I find the position of the box after the translations? I know that the use of sin and cos is necessary, but I cannot for the life of me work out an algorithm that works perfectly.
Edit: I changed the code to be complete and verifiable.
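Not a full answer, but a minimal sketch of one way to sidestep the hand-rolled trigonometry, assuming the transforms are exactly the ones in the code above: Processing's modelX()/modelY()/modelZ() return the world-space position of a point under the current transform stack, so the box's position can be sampled right where it is drawn.
translate(width/2, height/2);
rotateZ(mouseX/(width/PI));
translate(0, 0, -400);
translate(50, 100, 490);
rotateZ(-mouseX/(width/PI));
// sample the box's world position while its transforms are on the stack
float boxX = modelX(0, 0, 0);
float boxY = modelY(0, 0, 0);
float boxZ = modelZ(0, 0, 0);
box(150);
// then aim the camera at the sampled position, e.g.:
// camera(boxX + 500, boxY + 1000, boxZ + 500, boxX, boxY, boxZ, 0, 0, -1);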

Flipping images of different widths with a smooth transition

I'm trying to flip some animations in LibGDX, but because they are of different widths, the animation plays weird. Here's the problem:
(the red dot marks the X/Y coordinate {0,0})
As you can see, when the punch animation plays to the left, the feet start way behind where they were, but when you punch right, the animation plays fine, because the origin of both animations is the left corner, so the transition is smooth.
The only way I can think of to achieve what I want is to check which animation is playing and adjust the coordinates accordingly.
This is the code:
public static float draw(Batch batch, Animation animation, float animationState,
                         float delta, int posX, int posY, boolean flip) {
    animationState += delta;
    TextureRegion r = animation.getKeyFrame(animationState, true);
    float width = r.getRegionWidth() * SCALE;
    float height = r.getRegionHeight() * SCALE;
    if (flip) {
        batch.draw(r, posX + width, posY, -width, height);
    } else {
        batch.draw(r, posX, posY, width, height);
    }
    return animationState;
}
Any suggestion on how to approach this is welcome.
Use one of the other batch.draw overloads (with more parameters). You can set the "origin" parameters; the origin is like a hot spot (e.g. the center of the image), so if you rotate, for instance, the rotation will be done around that hot spot.
https://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g2d/Batch.html
I didn't use it for flipping, but it should work the same way. If it doesn't, you'll have to adjust the coordinates on your own: make a list with an X offset for every frame and add it for flipped images, as in the sketch below.
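A sketch of that fallback against the draw() method from the question; the offset table is purely hypothetical (the real values would have to be measured per frame), and it assumes the animation's play mode is LOOP so the frame index matches getKeyFrame(state, true):
// hypothetical per-frame X offsets (in texture pixels) that line the
// character's anchor up when a frame is mirrored
private static final float[] FLIP_OFFSET_X = { 0f, 4f, 8f, 4f, 0f };

public static float draw(Batch batch, Animation animation, float animationState,
                         float delta, int posX, int posY, boolean flip) {
    animationState += delta;
    TextureRegion r = animation.getKeyFrame(animationState, true);
    float width = r.getRegionWidth() * SCALE;
    float height = r.getRegionHeight() * SCALE;
    if (flip) {
        int frame = animation.getKeyFrameIndex(animationState);
        // shift the mirrored frame so the character stays anchored
        batch.draw(r, posX + width - FLIP_OFFSET_X[frame] * SCALE, posY, -width, height);
    } else {
        batch.draw(r, posX, posY, width, height);
    }
    return animationState;
}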
Another solution would be to use wider frame images and keep the center of the character always matching the center of the image. That way your images will be wider than they have to be and you'll have some empty space, but for a sane number of frames it's acceptable.

Example of an OpenGL game coordinate system - done right?

Well, it is no surprise that the default OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) in the center of the screen.
So I decided to write a wrapper for local game coords with the following main ideas:
- Screen coords will be 0..100.0 on the x-axis and 0..100.0 on the y-axis, with (0.0, 0.0) in the bottom-left corner of the screen.
- There are different screens with different aspect ratios.
- If we draw a quad, it must stay a quad, not a squashed rectangle.
By a quad I mean:
quad_vert[0].x = -0.5f; quad_vert[0].y = -0.5f; quad_vert[0].z = 0.0f;
quad_vert[1].x =  0.5f; quad_vert[1].y = -0.5f; quad_vert[1].z = 0.0f;
quad_vert[2].x = -0.5f; quad_vert[2].y =  0.5f; quad_vert[2].z = 0.0f;
quad_vert[3].x =  0.5f; quad_vert[3].y =  0.5f; quad_vert[3].z = 0.0f;
I will use glm::ortho and glm::mat4 to achieve this:
#define LOC_SCR_SIZE 100.0f

typedef struct coords_manager
{
    float SCREEN_ASPECT;
    mat4 ORTHO_MATRIX; // glm 4*4 matrix
} coords_manager;

glViewport(0, 0, screen_width, screen_height);
coords_manager CM;
CM.SCREEN_ASPECT = (float) screen_width / screen_height;
For example, our aspect will be 1.7:
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE, 0.0f, LOC_SCR_SIZE);
Now the bottom left is (0, 0) and the top right is (100.0, 100.0).
And it works, well, mostly. Now we can translate our quad to (25.0, 25.0), scale it to (50.0, 50.0), and it will sit at the bottom-left corner at 50% of the screen size.
But the problem is that it's not a quad anymore; it looks like a rectangle, because our screen width does not equal its height.
So we use our screen aspect:
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE * CM.SCREEN_ASPECT, 0.0f, LOC_SCR_SIZE);
Yeah, we get the right shape, but there's another problem: if we position the quad at (50, 25) we get it kinda left of the screen center, because our local system is no longer 0..100 on the x-axis; it's now 0..170 (because we multiply by our aspect of 1.7). So we use the following function before setting our quad's translation:
void loc_pos_to_gl_pos(vec2* pos)
{
    pos->x = pos->x * CM.SCREEN_ASPECT;
}
And voilà, we get the right quad in the right place.
But the question is: am I doing this right?
the default OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) in the center of the screen
Yes, but you will never use them directly. There's almost always a projection matrix that transforms your coordinates into the right space.
we get it kinda left of the screen center, because our local system is no longer 0..100 on the x-axis
That's why OpenGL maps NDC space (0,0,0) to the screen center. If you draw a quad with coordinates symmetrically around the origin it will stay in the center.
But question is - am i doing this right?
Depends on what you want to achieve.
