Does anyone know how to convert from pixel coordinates to UI coordinates and vice versa in Unity? Let's say, for example, I want to click somewhere on the screen with the mouse and have a UI Image move to that click position. If I do this, it won't work:
Image img = null; // I assign it via the inspector

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        // Input.mousePosition is in screen space, not the parent's local space
        img.rectTransform.anchoredPosition = Input.mousePosition;
    }
}
Convert the screen point into the parent RectTransform's local space first, then assign that local point to anchoredPosition:
Image img = null; // I assign it via the inspector
Camera canvasCamera; // the camera rendering the canvas (null for Screen Space - Overlay)

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Vector2 point;
        RectTransformUtility.ScreenPointToLocalPointInRectangle((RectTransform)img.rectTransform.parent, Input.mousePosition, canvasCamera, out point);
        img.rectTransform.anchoredPosition = point;
    }
}
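Which camera to pass depends on the canvas render mode. A minimal sketch of that choice (the Canvas field and helper name are assumptions, not in the original code):
Canvas canvas; // assumed reference to the UI canvas, assigned via the inspector

void MoveImageToMouse()
{
    // Screen Space - Overlay canvases require a null camera; other modes use the canvas's worldCamera.
    Camera cam = canvas.renderMode == RenderMode.ScreenSpaceOverlay ? null : canvas.worldCamera;

    Vector2 point;
    RectTransformUtility.ScreenPointToLocalPointInRectangle(
        (RectTransform)img.rectTransform.parent, Input.mousePosition, cam, out point);
    img.rectTransform.anchoredPosition = point;
}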
I want to pick up a color from a drawn canvas.
I found the get() function, but it can only get a color from an image.
Is there some way to get a color from the current canvas?
You can get() a colour from your current canvas: just address the PGraphics instance you need (even the global one) and be sure to call loadPixels() first.
Here's a tweaked version of Processing > Examples > Basics > Image > LoadDisplayImage:
/**
 * Load and Display
 *
 * Images can be loaded and displayed to the screen at their actual size
 * or any other size.
 */

PImage img;  // Declare a variable "img" of type PImage

void setup() {
  size(640, 360);
  // The image file must be in the data folder of the current sketch
  // to load successfully (a URL also works, as used here)
  img = loadImage("https://processing.org/examples/moonwalk.jpg");  // Load the image into the program
}

void draw() {
  // Display the image at its actual size at point (0,0)
  image(img, 0, 0);
  // Display the image at point (0, height/2) at half of its size
  image(img, 0, height/2, img.width/2, img.height/2);

  // Load pixels so they can be read via get()
  loadPixels();
  // Colour pick
  int pickedColor = get(mouseX, mouseY);

  // Display for demo purposes
  fill(pickedColor);
  ellipse(mouseX, mouseY, 30, 30);
  fill(brightness(pickedColor) > 127 ? color(0) : color(255));
  text(hex(pickedColor), mouseX + 21, mouseY + 6);
}
It boils down to calling loadPixels(); before get().
Above we're reading pixels from the sketch's global PGraphics buffer.
You can apply the same logic but reference a different PGraphics buffer depending on your setup.
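For example, here is a minimal sketch of picking from a separate off-screen buffer (the buffer pg and its red fill are assumptions for illustration, not part of the original question):
PGraphics pg;  // assumed off-screen buffer

void setup() {
  size(640, 360);
  pg = createGraphics(200, 200);
  pg.beginDraw();
  pg.background(255, 0, 0);  // give the buffer some content to pick from
  pg.endDraw();
}

void draw() {
  image(pg, 0, 0);
  pg.loadPixels();                 // load the buffer's pixels before reading
  int c = pg.get(mouseX, mouseY);  // pick from the off-screen buffer rather than the sketch
  fill(c);
  rect(10, height - 40, 30, 30);   // show the picked colour
}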
I have a question regarding rendering with Box2D and libGDX.
As you can see in the screenshot below, I have a problem when changing the window resolution.
Box2D gets scaled over the entire screen although the viewport only uses a small portion of it. The lights also get scaled and no longer match the real positions (but I think this is related to the same issue).
My idea is that I somehow need to adjust the matrix (b2dCombinedMatrix) for Box2D before rendering, but I have no idea how.
Personally I think I need to tell it to use the same "render boundaries" as the viewport, but I cannot figure out how to do that.
Here is the render method (the issue starts after the "draw lights" comment):
public void render(final float alpha) {
    viewport.apply();
    spriteBatch.begin();
    AnimatedTiledMapTile.updateAnimationBaseTime();
    if (mapRenderer.getMap() != null) {
        mapRenderer.setView(gameCamera);
        for (TiledMapTileLayer layer : layersToRender) {
            mapRenderer.renderTileLayer(layer);
        }
    }

    // render game objects first because they are in the same texture atlas as the map,
    // so we avoid a texture binding --> better performance
    for (final Entity entity : gameObjectsForRender) {
        renderEntity(entity, alpha);
    }
    for (final Entity entity : charactersForRender) {
        renderEntity(entity, alpha);
    }
    spriteBatch.end();

    // draw lights
    b2dCombinedMatrix.set(spriteBatch.getProjectionMatrix());
    b2dCombinedMatrix.translate(0, RENDER_OFFSET_Y, 0);
    rayHandler.setCombinedMatrix(b2dCombinedMatrix, gameCamera.position.x, gameCamera.position.y, gameCamera.viewportWidth, gameCamera.viewportHeight);
    rayHandler.updateAndRender();

    if (DEBUG) {
        b2dRenderer.render(world, b2dCombinedMatrix);
        Gdx.app.debug(TAG, "Last number of render calls: " + spriteBatch.renderCalls);
    }
}
And this is the resize method which moves the viewport up by 4 world units:
public void resize(final int width, final int height) {
    viewport.update(width, height, false);

    // offset viewport by y-axis (get distance from viewport to viewport with offset)
    renderOffsetVector.set(gameCamera.position.x - gameCamera.viewportWidth * 0.5f, RENDER_OFFSET_Y + gameCamera.position.y - gameCamera.viewportHeight * 0.5f, 0);
    gameCamera.project(renderOffsetVector, viewport.getScreenX(), viewport.getScreenY(), viewport.getScreenWidth(), viewport.getScreenHeight());
    viewport.setScreenY((int) renderOffsetVector.y);
}
After hours of fiddling around with the matrix I finally got it to work, and there is actually a very easy solution to my problem :D
Basically, the RayHandler's render method was messing up my matrix calculations the whole time, and the reason is that I did not tell it to use a custom viewport.
So adjusting the resize method to this
public void resize(final int width, final int height) {
    viewport.update(width, height, false);

    // offset viewport by y-axis (get distance from viewport to viewport with offset)
    renderOffsetVector.set(gameCamera.position.x - gameCamera.viewportWidth * 0.5f, RENDER_OFFSET_Y + gameCamera.position.y - gameCamera.viewportHeight * 0.5f, 0);
    gameCamera.project(renderOffsetVector, viewport.getScreenX(), viewport.getScreenY(), viewport.getScreenWidth(), viewport.getScreenHeight());
    viewport.setScreenY((int) renderOffsetVector.y);

    rayHandler.useCustomViewport(viewport.getScreenX(), viewport.getScreenY(), viewport.getScreenWidth(), viewport.getScreenHeight());
}
and simplifying the render method to
// draw lights
rayHandler.setCombinedMatrix(gameCamera);
rayHandler.updateAndRender();

if (DEBUG) {
    b2dRenderer.render(world, b2dCombinedMatrix);
    Gdx.app.debug(TAG, "Last number of render calls: " + spriteBatch.renderCalls);
}
solved my problem.
Maybe I am stupid, but I did not find the useCustomViewport method in any of the documentation.
Anyway, solved!
I have used the Animate() method to animate my view with scaling and rotation. With the rotation around the Y axis, the default height and width of my view change: the rotating rectangle is transformed into a parallelogram.
myview.Animate()
    .RotationY(rotationangle)
    .X(xposition)
    .SetDuration(mduration)
    .WithLayer()
    .SetInterpolator(interpolate)
    .Start();
My requirement:
I just want to rotate my view; I don't need to change its projection. How can I keep the rectangle from being transformed into a parallelogram while it rotates around the Y axis?
For more reference, please check the attached sample; the view now looks like this:
[sample image]
Please share your ideas.
Thanks in advance.
Note: when using PivotX and PivotY there is no parallelogram shape, but I don't know their exact usage.
Regards,
Hemalatha Marikumar
Is that not what you are looking for? It may work if you put this code in your current activity:
Android: Temporarily disable orientation changes in an Activity
Do you want to create a 2D rotation effect?
You could try to use a ScaleAnimation to simulate the rotation of the view. If you want a full 360-degree rotation, you could use an AnimationListener to chain animations.
For example:
Button myview = (Button)FindViewById(Resource.Id.button2);

ScaleAnimation scaleAnimation = new ScaleAnimation(1, 0, 1, 1,
    Android.Views.Animations.Dimension.RelativeToParent, 0.5f,
    Android.Views.Animations.Dimension.RelativeToParent, 0.5f);
ScaleAnimation scaleAnimation2 = new ScaleAnimation(0, 1, 1, 1,
    Android.Views.Animations.Dimension.RelativeToParent, 0.5f,
    Android.Views.Animations.Dimension.RelativeToParent, 0.5f);

scaleAnimation.Duration = 4000;
scaleAnimation.SetAnimationListener(new AnimationListener(myview, scaleAnimation2));
scaleAnimation2.Duration = 4000;

myview.StartAnimation(scaleAnimation);
The Listener:
public class AnimationListener : Java.Lang.Object, IAnimationListener
{
    View view;
    Animation animation2;

    public AnimationListener(View view, Animation animation)
    {
        this.view = view;
        this.animation2 = animation;
    }

    public void OnAnimationEnd(Animation animation)
    {
        // chain the second half of the animation once the first half finishes
        view.StartAnimation(animation2);
    }

    public void OnAnimationRepeat(Animation animation)
    {
    }

    public void OnAnimationStart(Animation animation)
    {
    }
}
I am using Processing 3 and trying to implement an interactive map via the giCentre GeoMap library. I have the U.S. map showing and the hover feature working, i.e. highlighting the hovered state. I am wondering whether there is any way to zoom into a state with this GeoMap library, maybe triggered by a mouseClick or mouseMove event. I am not sure how to redraw the map to make it zoom into the selected state. Here is my starting code:
import org.gicentre.geomap.*;

GeoMap geoMap;
int id = -1;

void setup()
{
  size(800, 400);
  geoMap = new GeoMap(this);        // Create the geoMap object.
  geoMap.readFile("usContinental"); // Read shapefile.
}

void draw()
{
  background(202, 226, 245);        // Ocean colour
  stroke(0, 40);                    // Boundary colour
  fill(206, 173, 146);              // Land colour

  //if (id == -1) {
  geoMap.draw();                    // Draw the entire map.
  //} else {
  //  geoMap.draw(id);
  //}

  // Find the country at mouse position and draw in different color.
  id = geoMap.getID(mouseX, mouseY);
  if (id != -1)
  {
    fill(180, 120, 120);            // Highlighted land colour.
    geoMap.draw(id);
  }
}
Any idea? Thanks!
Questions like these are best answered by looking at the docs for the library. Here are the docs for the giCentre geoMap library.
According to those, this library basically just shows shapefiles, without any fancy logic for zooming the map. You could implement this yourself using the scale() function or the camera functions; a rough sketch of that idea follows. But you might be best off just finding a library that supports zooming out of the box.
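A rough sketch of the scale() approach, reusing the geoMap object from the question (the zoom factor, toggle behaviour, and variable names are assumptions, not part of the GeoMap API):
float zoom = 1;      // current zoom factor, toggled between 1 and 3 here
float zoomX, zoomY;  // screen point to zoom towards

void mousePressed()
{
  if (geoMap.getID(mouseX, mouseY) != -1)  // clicked on a state
  {
    zoom = (zoom == 1) ? 3 : 1;            // toggle zoom in/out
    zoomX = mouseX;
    zoomY = mouseY;
  }
}

void draw()
{
  background(202, 226, 245);
  pushMatrix();
  translate(zoomX, zoomY);
  scale(zoom);                             // scale the whole drawing around the clicked point
  translate(-zoomX, -zoomY);
  stroke(0, 40);
  fill(206, 173, 146);
  geoMap.draw();
  popMatrix();
  // Note: while zoomed, geoMap.getID(mouseX, mouseY) still works in unzoomed screen
  // coordinates, so hit-testing would need the inverse of this transform applied first.
}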
I am working on an app in which images are flying around on the screen.
I need to implement:
Hold onto any of the flying images on tap.
Drag the image to a position of the user's choice by letting the user hold it.
Here is another easy way to do dragging.
Just draw your image (Texture2D) with respect to a Rectangle instead of a Vector2.
Your image variables should look like this:
Texture2D image;
Rectangle imageRect;
Draw your image with respect to imageRect in the Draw() method:
spriteBatch.Draw(image, imageRect, Color.White);
Now, in the Update() method, handle your image with single-touch input:
// Move your image with your logic
TouchCollection touchLocations = TouchPanel.GetState();
foreach (TouchLocation touchLocation in touchLocations)
{
    Rectangle touchRect = new Rectangle(
        (int)touchLocation.Position.X, (int)touchLocation.Position.Y, 10, 10);

    if (touchLocation.State == TouchLocationState.Moved
        && imageRect.Intersects(touchRect))
    {
        imageRect.X = touchRect.X;
        imageRect.Y = touchRect.Y;
    }
}
// You can make this nicer by centring imageRect on the touch instead of using its
// top-left corner: subtract half the width and height from X and Y respectively.
There's a drag-and-drag example in XNA here: http://geekswithblogs.net/mikebmcl/archive/2011/03/27/drag-and-drop-in-a-windows-xna-game.aspx
When you load your image in, you'll need a BoundingBox or Rectangle object to control where it is.
So, in the XNA app on your phone, you should have a couple of objects declared for your texture.
Texture2D texture;
BoundingBox bBox;
Vector2 position;
bool selected;
Then after you load your image content, keep your bounding box updated with the position of your image.
bBox.Min = new Vector3(position, 0f);
bBox.Max = new Vector3(position.X + texture.Width, position.Y + texture.Height, 0f);
Then, also in your Update method, you should have a touch collection initialized to handle input from the screen. Get the positions from the touch collection, loop through them, and see if they intersect your bounding box.
// assumes touchPositions holds the current touch positions, e.g. gathered
// from TouchPanel.GetState() each frame
foreach (Vector2 pos in touchPositions)
{
    BoundingBox bb = new BoundingBox();
    bb.Min = new Vector3(pos, 0f);
    bb.Max = new Vector3(pos, 0f);

    if (bb.Intersects(bBox))
    {
        if (selected)
        {
            // do something (e.g. move the texture to the touch position)
        }
        else
        {
            selected = true;
        }
    }
}
From there, you know whether your object is selected or not. Then just use the gesture events to determine what you want to do with your texture object.
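As a rough sketch of that gesture step (FreeDrag is just one possible gesture; the position and selected fields come from the snippets above):
// enable the drag gesture once, e.g. in Initialize()
TouchPanel.EnabledGestures = GestureType.FreeDrag;

// in Update(): move the selected texture by the drag delta
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.FreeDrag && selected)
    {
        position += gesture.Delta;  // Delta is the movement since the last gesture sample
    }
}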