Detect touch on GUI Textures

I have two GUI textures, positioned according to the screen width and height: one for a joystick and one for a shooter button.
Now, when I touch the shooter, the joystick moves to that specific portion.
I used rect.Contains.
void Start () {
    xx = Screen.width - Screen.width/12;
    yy = Screen.height - Screen.height/8;
    lb = Screen.width/10;
    rect = new Rect(-xx/2, -yy/2, lb, lb);
    shooter.pixelInset = rect;
    shooter.enabled = false;
}
void OnGUI(){
    if(characterScript.playbool){
        shooter.enabled = true;
    }
    if (rect.Contains(Event.current.mousePosition)){
        shootBool = true;
        print("shoot");
        alert.text = "shoot";
    }
}
This is not working properly for me. I think the screen-space coordinates are different from the GUI coordinates. How can I fix this problem? Can anyone suggest another good method?

You can try HitTest.
function Update()
{
    for (var touch : Touch in Input.touches)
    {
        if (rect.Contains(touch.position))
        {
            // we are now in the guitexture 'rect'
            Debug.Log("rect touched");
            break;
        }
    }
}
The above code is used with touches, as you described in your question. However, since you tagged mouse, I don't know for sure if you use the mouse or a touch.
So, if you use the mouse to click on the object, you can use:
if (rect.Contains(Input.mousePosition) && Input.GetMouseButtonDown(0))
{
    Debug.Log("rect clicked");
    return;
}
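For completeness, since HitTest is mentioned above but not shown, here is a minimal C# sketch of that approach (assuming shooter is the GUITexture field from the question; this is a rough illustration, not a drop-in fix):
// GUITexture (legacy UI) exposes HitTest, which already works in screen coordinates,
// so there is no need to build the Rect by hand.
void Update()
{
    foreach (Touch touch in Input.touches)
    {
        if (shooter.HitTest(touch.position))
        {
            Debug.Log("shooter touched");
        }
    }
    // Mouse variant:
    if (Input.GetMouseButtonDown(0) && shooter.HitTest(Input.mousePosition))
    {
        Debug.Log("shooter clicked");
    }
}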

Some debugging of your code brings out a few problems.
Say the screen resolution is 800x600px:
xx = 800 - 800 / 12
= 733.3333333333333
yy = 600 - 600 / 8
= 525
lb = 800 / 10
= 80
rect = (-366.665, -262.5, 80, 80)
Why is lb determined by the screen width divided by ten?
Now the heart of the problem is that Event.current.mousePosition is measured from the top left of the screen (with Y increasing downwards), while GUITexture.pixelInset is measured from the center of the screen (with Y increasing upwards).
You will have to either adjust Event.current.mousePosition to use the center of the screen as its origin, or adjust the rect variable so that it is expressed with the top left of the screen as the origin.
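As a rough sketch of the second option (assuming the GUITexture's transform position is (0.5, 0.5), so that pixelInset really is measured from the screen center, and reusing the shooter field from the question):
// Convert a center-based pixelInset rect (Y up) into GUI/event space (top-left origin, Y down).
Rect PixelInsetToGuiRect(Rect inset)
{
    float x = Screen.width / 2f + inset.x;                 // shift the origin from the center to the left edge
    float y = Screen.height / 2f - inset.y - inset.height; // flip Y and measure from the top edge
    return new Rect(x, y, inset.width, inset.height);
}

void OnGUI()
{
    Rect guiRect = PixelInsetToGuiRect(shooter.pixelInset);
    if (guiRect.Contains(Event.current.mousePosition))
    {
        Debug.Log("shoot");
    }
}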


macOS: how to resize a window across screens?

I'm trying to programmatically resize macOS windows, similar to Rectangle.
I have the basic resizing code working (for example, moving the window to the right half of the screen), and when there is only one screen it works fine. However, when I try to resize with two screens in a vertical layout, the math does not work:
public func moveRight() {
    guard let frontmostWindowElement = AccessibilityElement.frontmostWindow()
    else {
        NSSound.beep()
        return
    }
    let screens = screenDetector.detectScreens(using: frontmostWindowElement)
    guard let usableScreens = screens else {
        NSSound.beep()
        print("Unable to obtain usable screens")
        return
    }
    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame
    print("Visible frame of current screen \(usableScreens.visibleFrameOfCurrentScreen)")
    let halfPosition = CGPoint(x: screenFrame.origin.x + screenFrame.width / 2, y: -screenFrame.origin.y)
    let halfSize = CGSize(width: screenFrame.width / 2, height: screenFrame.height)
    frontmostWindowElement.set(size: halfSize)
    frontmostWindowElement.set(position: halfPosition)
    frontmostWindowElement.set(size: halfSize)
    print("movedWindowRect \(frontmostWindowElement.rectOfElement())")
}
If my window is on the main screen then the resizing works correctly; however, if it is on a screen below (#3 in the diagram below), the Y coordinate ends up in a top monitor (#2 or #1, depending on the X coordinate) instead of the original one.
The output of the code:
Visible frame of current screen (679.0, -800.0, 1280.0, 775.0)
Raw Frame (679.0, -800.0, 1280.0, 800.0)
movedWindowRect (1319.0, 25.0, 640.0, 775.0)
As far as I can see, the problem lies in how screens and windows are positioned.
I'm trying to understand how I should position the window so that it remains on the correct screen (#3), but I've had no luck so far; there doesn't seem to be any method to get the absolute screen dimensions needed to place the window at the correct origin.
Any idea how can this be solved?
I figured it out; I had completely missed one of the functions used in the AccessibilityElement class:
static func normalizeCoordinatesOf(_ rect: CGRect) -> CGRect {
    var normalizedRect = rect
    let frameOfScreenWithMenuBar = NSScreen.screens[0].frame as CGRect
    normalizedRect.origin.y = frameOfScreenWithMenuBar.height - rect.maxY
    return normalizedRect
}
Basically, since everything is calculated relative to the main screen, there is no option but to take that screen's coordinates and offset from them to get the real position of the element.

Saving click data to a table (Processing)

I've tried looking up the question, but sadly I wasn't able to find an answer to my question in other threads. :(
My problem is the following:
I've got my hands on some code which turns click data into a heatmap.
Now what I would need is a way to transfer said click data into a table which documents the coordinates.
Here is the (hopefully) relevant part of the code:
void mouseReleased()
{
    if (mouseX >= 0 && mouseX < backgroundImage.width && mouseY >= 0 && mouseY < backgroundImage.height)
    {
        // blit the clickmapBrush onto the (offscreen) clickmap:
        clickmap.blend(clickmapBrush, 0, 0, clickmapBrush.width, clickmapBrush.height, mouseX-clickmapBrush.width/2, mouseY-clickmapBrush.height/2, clickmapBrush.width, clickmapBrush.height, BLEND);
        // blit the clickmapBrush onto the background image in the upper left corner:
        image(clickmapBrush, mouseX-clickmapBrush.width/2, mouseY-clickmapBrush.height/2);
        // render the heatmapBrush into the gradientMap:
        drawToGradient(mouseX, mouseY);
The code is used for the software "Processing".
I hope my question is specific enough.
Thanks in advance! =)
Here is a sample Processing sketch that will record a list of coordinates for each mouse click:
// We will store a list of coordinates,
// each representing a single mouse click's position
ArrayList<PVector> clickData;

void setup() {
    clickData = new ArrayList<PVector>();
    // Do any additional setup you need here
}

void draw() {
    // Do any drawing here
}

// Called after each press and release of the mouse
void mouseClicked() {
    // Add the mouse's position to our list of mouse click positions
    clickData.add(new PVector(mouseX, mouseY));
}
If you want to get the x value of the first recorded mouse click, for example, you can access it through a call to clickData.get(0).x, which first grabs the PVector at position 0 from clickData, then gets the x value associated with that PVector object.
Hope that helps!
You can read about the PVector class here

Unity3D: How do I improve my Idle animation (Y-axis)

Okay, I am almost finished with my 2D RPG click-to-move game. If you look at this video you will be able to see that when I click forward, my player, once it gets to its position, faces the wrong direction rather than facing straight towards the player. To make myself clearer, this is an image of the sprite sheet I am currently using; as you can see, it has 8 directions. When you click here (in the game), my player walks down and faces this direction or this direction, rather than facing this direction (the normal/preferred direction). This also happens when I click here (in the game): my player walks up and faces this direction or this direction, rather than facing this direction. How can I make sure that my player faces the right direction once it has reached its destination? Again, this only occurs along the Y-axis; walking along the X-axis is fine.
private Animator anim;
public float speed = 15f;
private Vector3 target;
private bool touched;
private bool playerMovementRef;

void Start () {
    target = transform.position;
    anim = GetComponent<Animator> ();
}

void Update () {
    if (Input.GetMouseButtonDown (0)) {
        Vector3 mousePosition = Input.mousePosition;
        mousePosition.z = 10; // distance from the camera
        target = Camera.main.ScreenToWorldPoint (mousePosition);
        target.z = transform.position.z;
        var movementDirection = (target - transform.position).normalized;
        Vector3 animDirection = Vector3.zero;
        if (movementDirection.sqrMagnitude > 0)
        {
            // Use >= to default to horizontal on both being equal
            if (movementDirection.x > movementDirection.y)
                animDirection.x = 1;
            else
                animDirection.y = 1;
            anim.SetBool ("walking", true);
            anim.SetFloat ("SpeedX", movementDirection.x);
            anim.SetFloat ("SpeedY", movementDirection.y);
            if (movementDirection.x < 0) {
                anim.SetFloat ("LastMoveX", -1f);
            } else if (movementDirection.x > 0) {
                anim.SetFloat ("LastMoveX", 1f);
            } else {
                anim.SetFloat ("LastMoveX", 0f);
            }
            if (movementDirection.y > 0) {
                anim.SetFloat ("LastMoveY", 1f);
            } else if (movementDirection.y < 0) {
                anim.SetFloat ("LastMoveY", -1f);
            } else {
                anim.SetFloat ("LastMoveY", 0f);
            }
        }
    } else {
        if (Mathf.Approximately (transform.position.x, target.x) && Mathf.Approximately (transform.position.y, target.y)) {
            touched = false;
            anim.SetBool ("walking", false);
        } else {
            transform.position = Vector3.MoveTowards (transform.position, target, speed * Time.deltaTime);
        }
    }
}
I'm just starting out with Unity, but hope I can help.
From the video you've posted, I've noticed that the sprite is only 'wrong' when the oldPosition and newPosition clicks differ in the X component; e.g. when you click straight down, it shows the desired behavior and sprite.
Have you tried printing out the x and y values that your code is using to calculate which sprite it's setting?
At first I thought that maybe it's because the values you set in the Blend Tree were at -1, 1 etc, and due to normalizing you sometimes wound up with 0.9 for a certain value.
Can you maybe try debugging it with the animator window open, like you did at the end? Writing down the values and comparing them between Desired and Undesired behavior might tell you something more.
Sorry I don't have a concrete solution to your problem, but I hope this helps.
Edit for clarification:
Basically, what I'm recommending is to:
1) Print out the values you are getting when the behavior happens. An example of how to print these values is LogFormat (a fuller sketch follows after this list):
Debug.LogFormat("X: {0}, Y: {1}", xPosition, yPosition);
Now you will get the values printed out when it finishes moving.
2) Write down the values for when the movement finishes with the 'wrong' sprite, and keep clicking to move until the 'right' sprite shows up. Write down the values again.
3) Compare the values and find the differences. Then deduce why the differences are as they are, using the values in conjunction with your blend trees.
4) Once you know why, go back through your code/blend trees and rewrite/fix it to work as you intended.
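As a rough sketch of step 1 (hypothetical placement, reusing the anim and target fields from the question's script and the parameter names from its blend tree), the logging could go where Update() detects that the player has arrived:
// Hypothetical: log the blend tree inputs at the moment movement stops,
// so the 'right' and 'wrong' cases can be compared side by side.
if (Mathf.Approximately (transform.position.x, target.x) && Mathf.Approximately (transform.position.y, target.y)) {
    Debug.LogFormat ("Arrived. SpeedX: {0}, SpeedY: {1}, LastMoveX: {2}, LastMoveY: {3}",
        anim.GetFloat ("SpeedX"), anim.GetFloat ("SpeedY"),
        anim.GetFloat ("LastMoveX"), anim.GetFloat ("LastMoveY"));
    anim.SetBool ("walking", false);
}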

Physics body as an SKTexture does not rotate

I am making a game where the main character rotates; however, I want to use an SKTexture to set custom boundaries for the character's sprite node. When I do this, my character does not rotate. When I use a regular circleOfRadius (because my sprite is round, but not perfectly round) it rotates, but collisions are not accurate. Here is my code:
var mainSpriteTexture = SKTexture(imageNamed: "Main")
mainSprite = SKSpriteNode(texture: mainSpriteTexture)
mainSprite.setScale(0.2)
mainSprite.position = CGPoint(x: self.frame.width / 2, y: 100)
mainSprite.physicsBody = SKPhysicsBody(texture: mainSpriteTexture, size: mainSprite.size)
mainSprite.physicsBody?.categoryBitMask = PhysicsCatagory.player
mainSprite.physicsBody?.collisionBitMask = PhysicsCatagory.platform | PhysicsCatagory.ground
mainSprite.physicsBody?.contactTestBitMask = PhysicsCatagory.platform | PhysicsCatagory.ground | PhysicsCatagory.score
mainSprite.physicsBody?.affectedByGravity = true
mainSprite.physicsBody?.dynamic = true
mainSprite.physicsBody?.allowsRotation = true
self.addChild(mainSprite)
override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    updateSpritePosition()
    if gameStarted == true {
        if died == false {
            mainSprite.physicsBody?.velocity = CGVectorMake(direction, (mainSprite.physicsBody?.velocity.dy)!)
        }
        else if died == true {
            mainSprite.physicsBody?.velocity = CGVectorMake(0, (mainSprite.physicsBody?.velocity.dy)!)
        }
    }
}
Here is the shape I am using: http://imgur.com/JCeEAbv
I'm not really sure how you rotate or move your sprite, but if I give it a non-zero default rotation and drop it onto a surface, it rotates for me as it should (I used the picture you provided). That way, it will not land perfectly flat on the ground (zRotation 0.0).
Try to use this:
mainSprite.zRotation = 0.1 //note that these are radians
and drop it on a flat surface.
You could probably achieve the same thing by applying a small force or impulse, which should affect the sprite's zRotation property. Which one to use depends on how you actually move your node: using physics, or by changing its position property directly (see my comment about mixing physics and manual node movement for important details).

Set SWT Check/Radio Button Foreground color in Windows

This is not a duplicate of How to set SWT button foreground color?. It's more of a follow-up. I wrote follow-up questions as comments but did not get any responses, so I thought I'd try to put it up as a question, and hopefully some expert will see it.
As is pointed out in the referenced question, Windows native button widgets do not support setting the foreground color (in fact, after further research, more like experiments, it turns out that setForeground() works under the Classic theme, but not under others).
The answer/suggestion given in the referenced question is a good one (i.e. providing a paint listener and drawing over the text with the correct color). I gave it a whirl but ran into a world of problems trying to decide the coordinate at which to draw the text:
It appears that, in addition to SWT attributes like alignment etc., Windows has some rather hard-to-figure-out rules for deciding the location of the text. What makes it worse is that the location appears to depend on the Windows theme in effect. Since I need to draw the text exactly over the natively drawn Windows text in order to override the color, this is a huge problem.
Please, can someone provide some much-needed help here? It'd be greatly appreciated!
Thank you!
On the same PaintListener you use to paint the coloured background, you have to calculate the position and draw the text. Here's how we do it here:
public void paintControl( PaintEvent event ) {
    // Is the button enabled?
    if ( !isEnabled() ) {
        return;
    }
    // Get button bounds.
    Button button = (Button)event.widget;
    int buttonWidth = button.getSize().x;
    int buttonHeight = button.getSize().y;
    // Get text bounds.
    int textWidth = event.gc.textExtent( getText() ).x;
    int textHeight = event.gc.textExtent( getText() ).y;
    // Calculate text coordinates.
    int textX = ( ( buttonWidth - textWidth ) / 2 );
    int textY = ( ( buttonHeight - textHeight ) / 2 );
    /*
     * If the mouse is clicked and is over the button, i.e. the button is 'down', the text must be
     * moved a bit down and right.
     * To control this, we add a MouseListener and a MouseMoveListener on our button.
     * On the MouseListener, we change the mouseDown flag on the mouseDown and mouseUp methods.
     * On the MouseMoveListener, we change the mouseOver flag on the mouseMove method.
     */
    if ( mouseDown && mouseOver ) {
        textX++;
        textY++;
    }
    // Draw the new text.
    event.gc.drawText( getText(), textX, textY );
    // If the button has focus, draw the dotted border on it.
    if ( isFocusControl() ) {
        int[] dashes = { 1, 1 };
        event.gc.setLineDash( dashes );
        event.gc.drawRectangle( 3, 3, buttonWidth - 8, buttonHeight - 8 );
    }
}
In the end, I decided to implement it as a custom Composite with a checkbox/radio button and a label. Not ideal, but I'll have to make do.
