UI Elements not scaling while using Canvas Scaler - user-interface

I'm trying to make my UI elements stay the same across different resolutions. I added a Canvas Scaler to my Canvas and played around with the settings until it looked finished.
I then built the game and ran it at a few different resolutions to confirm that it was working. However, the Canvas Scaler doesn't seem to work.
http://prntscr.com/d1afz6
Above is some arbitrary resolution, but that's how big my editor screen is and that's what I'm using as my reference resolution. That's also the hierarchy for this specific Canvas: http://prntscr.com/d1aggx. The UI takes up almost the whole screen when run at 640x480. I have no clue why this is not working. I've read most of the Unity guides on this, but none of them seem to cover this problem.

OK, to fit something no matter the size of the screen, you have to use a coordinate system other than Unity's absolute (pixel) system. One of Unity's models is the View: in view coordinates, 0,0 is the top left and 1,1 is the bottom right. A basic Rect wrapper that handles this is the following.
using UnityEngine;

namespace SeaRisen.nGUI
{
    // A Rect expressed in normalized screen coordinates ([0..1] on both axes).
    public class RectAnchored
    {
        public float x, y, width, height;

        public RectAnchored(float x, float y, float width, float height)
        {
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
        }

        // Converts to an absolute Rect by scaling against the current screen size.
        public static implicit operator Rect(RectAnchored r)
        {
            return new Rect
            {
                x = r.x * Screen.width,
                y = r.y * Screen.height,
                width = r.width * Screen.width,
                height = r.height * Screen.height
            };
        }
    }
}
Here, we take the normal Rect floats, the x,y coordinates along with a width and height, but these values are normalized to the range [0..1]. I don't clamp them, so the rect can be tweened on and off the screen with animation, if desired.
The following is a simple script that creates a button in the lower right corner of the screen and resizes it as the screen grows or shrinks.
void MoveMe()
{
    RaycastHit hit;
    if (Physics.Raycast(transform.position, -Vector3.up, out hit, float.MaxValue)
        || Physics.Raycast(transform.position, Vector3.up, out hit, float.MaxValue))
        transform.position = hit.point + Vector3.up * 2;
}

void OnGUI()
{
    if (GUI.Button(new RectAnchored(.9f, .9f, .1f, .1f), "Fix me"))
    {
        MoveMe();
    }
}
The X is .9 of the way to the right and Y is .9 of the way down from the top, with a width and height of .1, so the button is 1/10th of the screen in height and width, positioned in the bottom-right 1/10th of the screen.
Since OnGUI is rendered every frame (or so), the button rect updates automatically when the screen is resized. The same would work in a typical UI, if you are using Update() to render your windows.
I hope this explains the difference from what I meant by absolute coordinates. Writing the previous example with absolute coordinates at 640x480, it'd be something like new Rect(576, 432, 64, 48), and it wouldn't scale. By using new RectAnchored(.9f, .9f, .1f, .1f) and having it rendered into UI space based on the screen size, it scales automatically.

Related

Recalibrate ball in Pong so the edge hits first, not the centre, in Processing

I've recently built a game of Pong for a uni assignment in Processing, but whenever the ball hits the top, bottom, or side of the screen or the 'paddle', it only bounces back once half the ball is off the screen. I just want the edge of the ball to hit first rather than the centre, but I'm unsure where my code is going wrong. I hope this makes sense; I'm a definite beginner.
Here is my code for reference:
//underwater pong
float x, y, speedX, speedY;
float diam = 10;
float rectSize = 200;
float diamHit;
PImage bg;
PImage img;
int z;

void setup() {
  size(920, 500);
  smooth();
  fill(255);
  stroke(255);
  imageMode(CENTER);
  bg = loadImage("underthesea.jpg");
  img = loadImage("plastic.png");
  reset();
}

void reset() {
  x = width/2;
  y = height/2;
  //allows plastic to bounce
  speedX = random(5, 5);
  speedY = random(5, 5);
}

void draw() {
  background(bg);
  image(img, x/2, y);
  rect(0, 0, 20, height);
  rect(width/2, mouseY-rectSize/2, 50, rectSize);
  //allows plastic to bounce
  x += speedX;
  y += speedY;
  // if plastic hits movable bar, invert X direction
  if (x > width-30 && x < width-20 && y > mouseY-rectSize/2 && y < mouseY+rectSize/2) {
    speedX = speedX * -1;
  }
  // if plastic hits wall, change direction of X
  if (x < 25) {
    speedX *= -1.1;
    speedY *= 1.1;
    x += speedX;
  }
  // if plastic hits up or down, change direction of Y
  if (y > height || y < 0) {
    speedY *= -1;
  }
}

void mousePressed() {
  reset();
}
I wasn't able to run your code because I'm missing the background and plastic images, but here's what's probably going wrong. I'm not 100% sure, since I don't know the dimensions of your images either.
You are using imageMode(CENTER). See the documentation for details.
From the docs:
imageMode(CENTER) interprets the second and third parameters of image() as the image's center point. If two additional parameters are specified, they are used to set the image's width and height.
This treats the coordinates you input into the image function as the center of the image.
Your first issue is that you are placing your image at x/2 but doing all your collision checks with x in mind. x does not represent the middle of your image, because you're drawing it at x/2.
Then, I'm not sure whether you are doing your horizontal collision checks right, since you are checking against hardcoded values, but I do know your vertical collision checks are wrong. You are checking whether the center of the image is at the top of the canvas, 0, or the bottom of the canvas, height. This means your image will already have moved halfway off the screen.
If you want to treat the image coordinates as the center of your image, you need to check the left edge of the image at x - imageWidth / 2, the right edge at x + imageWidth / 2, the top edge at y - imageHeight / 2 (remember the y coordinates are 0 at the top of the canvas and increase towards the bottom) and the bottom edge at y + imageHeight / 2. There's a great website that goes into more detail on 2D collision detection; I'd highly recommend you give it a read.
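To make that concrete, here is a minimal sketch of edge-based bounce checks in Processing. It assumes the plastic image is drawn centered at (x, y) (i.e. after the x/2 issue is fixed), and imgW/imgH are illustrative names for the drawn width and height; none of these names come from the original code.
// edge-based checks, assuming image(img, x, y) with imageMode(CENTER)
float imgW = img.width;   // drawn width of the plastic image
float imgH = img.height;  // drawn height of the plastic image

// left wall: the 20px-wide rect at the left of the canvas,
// checked against the image's left edge rather than its center
if (x - imgW/2 < 20) {
  speedX *= -1;
}
// top and bottom of the canvas, checked against the image's edges
if (y - imgH/2 < 0 || y + imgH/2 > height) {
  speedY *= -1;
}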

Processing rect() causes an image to be drawn instead of a rectangle?

In my game there are enemies wandering around; their draw() method is simple (core.displayBuffer is a PGraphics object that is drawn onto the screen at the end of draw()):
if (facingRight) {
  core.displayBuffer.image(image,
    x, y + offsetY, 80, 80);
} else {
  float tX = -core.camera.x + core.game.width/2f + x;
  float tY = -core.camera.y + core.game.height/2f + y;
  core.displayBuffer.pushMatrix();
  core.displayBuffer.translate(core.camera.x - core.game.width/2f,
    core.camera.y - core.game.height/2f);
  core.displayBuffer.translate(tX, tY);
  core.displayBuffer.scale(-1, 1);
  core.displayBuffer.image(image,
    -80, offsetY, 80, 80);
  core.displayBuffer.popMatrix();
}
Then when we are going to draw walls, we just draw a coloured rectangle like this:
core.displayBuffer.noStroke();
if (destroyed) {
  core.displayBuffer.fill(0, 0, 0, 16);
  core.displayBuffer.rect(x, y, w, h);
} else {
  core.displayBuffer.fill(64);
  core.displayBuffer.rect(x, y - WALL_HEIGHT, w, h);
  core.displayBuffer.fill(32);
  core.displayBuffer.rect(x, y + h - WALL_HEIGHT, w, WALL_HEIGHT);
}
But for some reason, the walls have the texture of the enemies? Here's the loop in which the objects are drawn:
PMatrix displayMatrix = displayBuffer.getMatrix();
PMatrix bloomMatrix = bloomLayer.getMatrix();
PStyle displayStyle = displayBuffer.getStyle();
PStyle bloomStyle = bloomLayer.getStyle();
onScreenObjects.forEach(o -> {
  displayBuffer.setMatrix(displayMatrix);
  bloomLayer.setMatrix(bloomMatrix);
  displayBuffer.style(displayStyle);
  bloomLayer.style(bloomStyle);
  o.draw(this);
});
displayBuffer.setMatrix(displayMatrix);
bloomLayer.setMatrix(bloomMatrix);
displayBuffer.style(displayStyle);
bloomLayer.style(bloomStyle);
Here's an example of the results; the red rectangles mark the walls that are drawn incorrectly.
Also, the bullets are flickering for some reason. These two bugs don't appear when I don't draw the enemies onto the screen (or when I draw just rectangles instead), so does that mean image() is doing something weird in the background?
Project's source code is at https://github.com/Matrx007/TheLostBits
Ask for additional info if needed!
Nvidia Quadro 4000.
Graphics card driver is from 2016 and can't be upgraded; all other games are working fine though.
Processing version: 3.5.3 (Library)
Operating System and OS version: Windows 10 build 17134
Possible causes / solutions:
Maybe image() changes the texture currently in use, and rect() then picks up that texture?
SOLVED
The solution was that Processing can't draw onto more than one PGraphics at a time. I had beginDraw() called on two PGraphics and was drawing to both of them at the same time; now I've separated them and the bug is gone! Better explanation here: https://github.com/processing/processing/issues/5863
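For illustration, a minimal sketch of what that separation could look like, assuming displayBuffer and bloomLayer are the two PGraphics from the question; the actual draw calls inside each block are placeholders, not the project's real drawing code.
// finish one PGraphics completely before starting the other,
// instead of keeping beginDraw() open on both at the same time
displayBuffer.beginDraw();
displayBuffer.background(0);
displayBuffer.rect(10, 10, 50, 50);   // placeholder for the enemy/wall drawing
displayBuffer.endDraw();

bloomLayer.beginDraw();
bloomLayer.ellipse(30, 30, 20, 20);   // placeholder for the bloom drawing
bloomLayer.endDraw();

// composite the finished layers onto the screen
image(displayBuffer, 0, 0);
image(bloomLayer, 0, 0);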

Flip images of different widths with a smooth transition

I'm trying to flip some animations in LibGDX, but because they are of different widths, the animation plays back oddly. Here's the problem:
(the red dot marks the X/Y coordinate {0,0})
As you can see, when the punch animation plays facing left, the feet start way behind where they were, but when you punch facing right, the animation plays fine because the origin of both animations is the left corner, so the transition is smooth.
The only way I can think of to achieve what I want is to check which animation is playing and adjust the coordinates accordingly.
This is the code:
public static float draw(Batch batch, Animation animation, float animationState,
                         float delta,
                         int posX, int posY, boolean flip) {
    animationState += delta;
    TextureRegion r = animation.getKeyFrame(animationState, true);
    float width = r.getRegionWidth() * SCALE;
    float height = r.getRegionHeight() * SCALE;
    if (flip) {
        batch.draw(r, posX + width, posY, -width, height);
    } else {
        batch.draw(r, posX, posY, width, height);
    }
    return animationState;
}
Any suggestion on how to approach this is welcome.
Use one of the other batch.draw overloads (the ones with more parameters). You can set the "origin" parameters; the origin is like a hot spot, e.g. the center of the image, so if you rotate, for instance, the rotation is done around that hot spot.
https://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g2d/Batch.html
I haven't used it for flipping, but it should work the same way. If it doesn't, then you have to adjust the coordinates on your own: keep a list with an X offset for every frame and add the offset when drawing flipped images (see the sketch below).
Another solution would be to use wider frame images and keep the center of the character aligned with the center of the image. That way your images will be wider than they need to be - you'll have some empty space - but for a sane number of frames it's acceptable.
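A minimal sketch of the per-frame offset idea in LibGDX, assuming a hypothetical flipOffsets array with one X offset per key frame of the animation (the array and its values are made up for illustration; the rest mirrors the draw method above):
// hypothetical per-frame X offsets (in texture pixels), one entry per key frame
float[] flipOffsets = { 0f, 4f, 12f, 4f, 0f };

TextureRegion r = animation.getKeyFrame(animationState, true);
int frameIndex = animation.getKeyFrameIndex(animationState);
float width = r.getRegionWidth() * SCALE;
float height = r.getRegionHeight() * SCALE;

if (flip) {
    // shift the flipped frame so the character stays anchored at posX
    batch.draw(r, posX + width - flipOffsets[frameIndex] * SCALE, posY, -width, height);
} else {
    batch.draw(r, posX, posY, width, height);
}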

Parallax scrolling behind a Sprite in LibGDX

I want to parallax scroll a Texture behind a Sprite with a fixed width and height.
The problem is I need to scroll the Texture only within a given width and height, not across the whole screen. I need something like a window view onto this texture.
I could overlay the rest of the screen with black areas, but there has to be a better solution, I guess ;-)
Currently I'm doing this:
sprite.setX(sprite.getX() + (OVERLAY_ANIMATION_SPEED * delta));
sprite2.setX(sprite2.getX() + (OVERLAY_ANIMATION_SPEED * delta));
and resetting the sprite when x is bigger than the screen width. But I have a smaller area inside the screen in which the scrolling should appear, not from the beginning to the end of the screen.
I hope somebody has a hint for me on how to achieve this.
I'm using a glViewport to achieve something similar:
public void setViewPort(float dx, float dy, float sx, float sy)
{
    Gdx.gl.glViewport((int) (screenWidth * dx), (int) (screenHeight * dy),
                      (int) (screenWidth * sx), (int) (screenHeight * sy));
}
So:
setViewPort(0, 0, 1, 1);
would render fullscreen and:
setViewPort(0.2f, 0.2f, 0.6f, 0.6f);
would render a 60%-sized 'sub-window' viewport at the 20% position
(thus centered); nothing is rendered outside that window (it is clipped by OpenGL). Hope this helps someone!

Direct2D image viewer: how to convert screen coordinates to image coordinates?

I'm trying to figure out how to convert the mouse position (screen coordinates) to the corresponding point on the underlying transformed image drawn on a Direct2D surface.
The code here should be considered pseudo code, as I'm using a modified C++/CLI wrapper around Direct2D for C#; you won't be able to compile this in anything but my own project.
Render()
{
    // The transform matrix combines a rotation, followed by a scaling, then a translation
    renderTarget.Transform = _rotate * _scale * _translate;
    RectF imageBounds = new RectF(0, 0, _imageSize.Width, _imageSize.Height);
    renderTarget.DrawBitmap(this._image, imageBounds, 1, BitmapInterpolationMode.Linear);
}

Zoom(float zoomfactor, PointF mousePos)
{
    // mousePos is in screen coordinates. I need to convert it to image coordinates.
    Matrix3x2 t = _translate.Invert();
    Matrix3x2 s = _scale.Invert();
    Matrix3x2 r = _rotate.Invert();
    PointF center = (t * s * r).TransformPoint(mousePos);
    _scale = Matrix3x2.Scale(zoomfactor, zoomfactor, center);
}
This is incorrect: the scale center starts moving around wildly when the zoom factor increases or decreases smoothly, and the resulting zoom is not smooth and flickers a lot even though the mouse pointer is immobile at the center of the client surface. I tried all the combinations I could think of but could not figure it out.
If I set the scale center point to (imageWidth/2, imageHeight/2), the resulting zoom is smooth but is always centered on the image center, so I'm pretty sure the flicker isn't due to some other buggy part of the program.
Thanks.
I finally got it right.
This gives me perfectly smooth (incremental? relative?) zooming centered on the client center.
(I abandoned the mouse-position idea, since I wanted to use mouse movement input to drive the zoom.)
protected float zoomf
{
    get
    {
        // extract the scale factor from the scale matrix
        return (float)Math.Sqrt((double)((_scale.M11 * _scale.M11)
            + (_scale.M21 * _scale.M21)));
    }
}

public void Zoom(float factor)
{
    factor = Math.Min(zoomf, 1) * 0.006f * factor;
    factor += 1;
    Matrix3x2 t = _translation;
    t.Invert();
    PointF center = t.TransformPoint(_clientCenter);
    Matrix3x2 m = Matrix3x2.Scale(new SizeF(factor, factor), center);
    _scale = _scale * m;
    Invalidate();
}
Step 1: Put android:scaleType="matrix" on the ImageView in the XML file.
Step 2: Map the screen touch points through the inverse of the image matrix.
Step 3: Divide each mapped value by the screen density parameter to get the same coordinate value on all screens.
**XML**
<ImageView
    android:id="@+id/myImage"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:scaleType="matrix"
    android:src="@drawable/ga"/>
**JAVA**
@Override
public boolean onTouchEvent(MotionEvent event) {
    float[] point = new float[]{event.getX(), event.getY()};
    Matrix inverse = new Matrix();
    getImageMatrix().invert(inverse);
    inverse.mapPoints(point);
    float density = getResources().getDisplayMetrics().density;
    int[] imagePointArray = new int[2];
    imagePointArray[0] = (int) (point[0] / density);
    imagePointArray[1] = (int) (point[1] / density);
    // 20 is the offset value around the touch point
    Rect rect = new Rect(imagePointArray[0] - 20, imagePointArray[1] - 20,
            imagePointArray[0] + 20, imagePointArray[1] + 20);
    // 267,40 are the predefined image coordinates
    boolean b = rect.contains(267, 40);
    Log.e("Touch inside ", b + "");
    return true;
}
