Unity 2D touch event blocked by another object

I have my project set up like this:

Main Camera
    Physics 2D Raycaster
        Event Mask: Everything
Canvas
    Graphic Raycaster
        Blocking Objects: None
        Blocking Mask: Nothing

Two objects are set up:

GameObject
    Sprite Renderer
    Rigidbody 2D
    Circle Collider 2D
    (my GO script)
UI
    Image
    Button
    (my UI script)
In both my GO and UI scripts I add an OnPointerEnter handler, and each works fine on its own: I receive the OnPointerEnter event.
But when I use a joint to drag a GO and move it on top of the UI object, the UI object's OnPointerEnter is blocked by the GO, and I never receive the UI OnPointerEnter event.
Searching the web, everyone asks how to block raycasts from the GO to the UI. I need the reverse: I want both the GO and the UI to receive OnPointerEnter whether or not they overlap. Any hints?
P.S. It is something like this, in a 2D version: the GameObject blocks the UI object, but I still want to receive the UI object's OnPointerEnter.

I finally got what I wanted. I now have two solutions:
1. Use OnTriggerEnter2D.
2. Turn off the GO layer in the Physics2DRaycaster's event mask while dragging the GO.
1. Using OnTriggerEnter2D
While dragging the GO, send an event to the UI telling it which GO is being dragged.
Add a Rigidbody2D (isKinematic, with position and rotation frozen) and a 2D collider (isTrigger) as components on the UI object.
In OnTriggerEnter2D, receive the GO's collider and check whether it is the GO being dragged (I do this because I want the UI to react only to the one GO I am dragging).
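A minimal sketch of the UI-side check, assuming the drag code exposes the currently dragged object somewhere reachable (the static DragManager.CurrentlyDragged below is a made-up name for that):

```csharp
using UnityEngine;

// Sits on the UI object, alongside its kinematic Rigidbody2D
// and its trigger Collider2D.
public class UITriggerReceiver : MonoBehaviour
{
    void OnTriggerEnter2D(Collider2D other)
    {
        // React only to the GO that is currently being dragged.
        if (other.gameObject == DragManager.CurrentlyDragged)
        {
            Debug.Log("The dragged GO entered the UI object");
        }
    }
}
```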
2. Turning off the GO layer in the Physics2DRaycaster's event mask while dragging
Use this code:
using UnityEngine;
using UnityEngine.EventSystems;

public void turnOffLayerMask(string layerMaskName)
{
    // GetComponent needs its type argument; without it this won't compile.
    Physics2DRaycaster p2drc = Camera.main.GetComponent<Physics2DRaycaster>();
    LayerMask layerMask = p2drc.eventMask;
    LayerMask disableLayerMask = 1 << LayerMask.NameToLayer(layerMaskName);
    p2drc.eventMask = layerMask & ~disableLayerMask;
}

public void turnOnLayerMask(string layerMaskName)
{
    Physics2DRaycaster p2drc = Camera.main.GetComponent<Physics2DRaycaster>();
    LayerMask layerMask = p2drc.eventMask;
    LayerMask enableLayerMask = 1 << LayerMask.NameToLayer(layerMaskName);
    p2drc.eventMask = layerMask | enableLayerMask;
}
Turn off the GO layer mask when the drag starts and turn it back on when the drag ends. The raycast then passes through the GO to the UI, and the UI receives the OnPointerXXX events.
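The toggling can be driven from the drag callbacks on the GO. In this sketch, turnOffLayerMask/turnOnLayerMask are the methods shown above (assumed to live on this same script), and "GO" as the layer name is an assumption:

```csharp
using UnityEngine.EventSystems;

// Implement IBeginDragHandler/IEndDragHandler on the draggable GO's script.
public void OnBeginDrag(PointerEventData eventData)
{
    // Hide the GO layer from the Physics2DRaycaster so the pointer
    // raycast reaches the UI underneath while dragging.
    turnOffLayerMask("GO");
}

public void OnEndDrag(PointerEventData eventData)
{
    turnOnLayerMask("GO");
}
```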
I think the EventSystem automatically chooses EITHER the physics raycast OR the graphic raycast to detect GO/UI objects, so you can only receive one set of events (non-UI or UI). Is that correct?
It also seems, from searching the web, that many people use OnMouseXXX (or other methods) instead of the event system (i.e. OnPointerXXX), so they can "touch through" a GO to the UI.

Related

Bullets not visible in the game view when I move left

I am very new to Unity, and I used this tutorial to create a shooting mechanic for my prototype. Moving right and shooting projectiles works fine, but when I move left the projectiles are not visible in the game view. I noticed they do appear in the scene view when I run the game. I believe this has nothing to do with my code, since when I press the button the bullet shoots and is destroyed; it's just not visible when I am moving left. Is this a visual issue?
I have screenshots here to provide a better example of what I'm experiencing, and links to the movement and projectile code for reference. If anyone knows an easy fix it would be much appreciated. If you need to see my code I will include a follow-up.
Movement: https://www.youtube.com/watch?v=n4N9VEA2GFo&t=20s
Projectile: https://www.youtube.com/watch?v=8TqY6p-PRcs&t=264s
@Nyssa Wootton Welcome to Stack Overflow. Try the bullet-spawning approach: you don't need to attach the bullet to the player. Just assign a bullet prefab in the Inspector and it will spawn a bullet whenever the user presses the fire button. Have a look at this code:
public GameObject bulletPrefab;
public Transform spawnPos;
public float bulletSpeed = 100f;

void Update()
{
    if (Input.GetMouseButtonDown(0)) // on fire button press
    {
        // Instantiate takes a position, so use the spawn point's position
        var bullet = Instantiate(bulletPrefab, spawnPos.position, Quaternion.identity);
        // make sure your prefab has a Rigidbody on it
        bullet.GetComponent<Rigidbody>().AddForce(transform.forward * bulletSpeed, ForceMode.Impulse);
        Destroy(bullet, 2f); // destroys the bullet after 2 seconds
    }
}

How can I open a door with an animation?

I have this script:
Ray ray = new Ray (cam.transform.position, cam.transform.forward);
RaycastHit hit;
Debug.DrawRay (transform.position, ray.direction * 50f);
if (Input.GetKeyDown (KeyCode.E)) {
    if (Physics.Raycast (ray, out hit, 50.0f)) {
        if (hit.collider.gameObject.tag == "Door") {
            Debug.Log ("YEAH");
        }
    }
}
How can I start the door-opening animation?
There are a few things that you need to know before you can animate that door.
There are multiple ways to animate an object in Unity. For something simple like a door you could just rotate the object, but you will need a coroutine, or Mathf.MoveTowards or Mathf.Lerp, to avoid the door snapping open instantly when you do everything in Update.
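As a sketch of that rotate-it-yourself option (assuming the script is on the door and it pivots around its Y axis; the 90-degree angle and the speed are made-up values to tune):

```csharp
using System.Collections;
using UnityEngine;

public class DoorRotator : MonoBehaviour
{
    // Hypothetical values: tune the angle and speed for your door.
    public float openAngle = 90f;
    public float degreesPerSecond = 120f;

    public void Open()
    {
        StartCoroutine(RotateOpen());
    }

    private IEnumerator RotateOpen()
    {
        Quaternion target = transform.rotation * Quaternion.Euler(0f, openAngle, 0f);
        // RotateTowards advances a fixed number of degrees per frame,
        // so the door swings smoothly instead of snapping open.
        while (Quaternion.Angle(transform.rotation, target) > 0.1f)
        {
            transform.rotation = Quaternion.RotateTowards(
                transform.rotation, target, degreesPerSecond * Time.deltaTime);
            yield return null;
        }
    }
}
```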
But you can also use an animation made by someone else in applications like Blender.
Or finally create an animation with the Unity in game editor and create an Animator to animate your door.
I think that you should use this way.
First follow this official manual:
https://docs.unity3d.com/Manual/animeditor-CreatingANewAnimationClip.html
to create the clip to open your door.
It's really intuitive and you don't even need to code.
After that you should create an Animator Controller:
https://docs.unity3d.com/Manual/class-AnimatorController.html
You will then create a new state with your animation and a transition from the initial state to your "Open door" animation.
After that you just create a simple bool (in your Animator Controller).
And you will add
this.GetComponent<Animator>().SetBool("nameOfYourBoolInTheAnimatorController", true);
to your script. (Of course this is only valid if the Animator Controller is on the same object your script is attached to; if not, create an Animator variable and assign it, for example by making it public and assigning it in the editor.)
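Tying this back to the raycast from the question, the call site could look like this (assuming the Animator Controller's bool parameter is named "Open" and the Animator component is on the door object; both names are placeholders):

```csharp
if (Input.GetKeyDown(KeyCode.E)) {
    if (Physics.Raycast(ray, out hit, 50.0f)) {
        if (hit.collider.gameObject.tag == "Door") {
            // Flip the bool parameter; the transition you created in the
            // Animator Controller then plays the door-opening clip.
            Animator doorAnimator = hit.collider.GetComponent<Animator>();
            doorAnimator.SetBool("Open", true);
        }
    }
}
```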

Unity UI Button not calling the Onclick method

I have a button under a canvas that is supposed to restart the scene, but it is not calling the intended function of the attached script.
On the Unity forums I found suggestions such as moving it higher in the canvas hierarchy, making sure the canvas has a Graphic Raycaster, and so on.
It still isn't working, although its OnClick array does list the method it is supposed to call.
Scene Editor with Canvas Selected:
Scene Editor with Button Selected:
Remove the canvas component from your button.
Make sure that there is a GraphicRaycaster on your canvas, and that there is an EventSystem object somewhere in the hierarchy.
(Both should have been added when you first added the canvas, but things get lost)
The screenshot that shows your button selected also shows that you picked a GameObject for the OnClick event, but you didn't pick a function from the dropdown next to that field; it says "No Function".
Instead of selecting the C# class, select the GameObject that
btnSceneSelect is attached to.
If btnSceneSelect is not attached to a GameObject, attach it to one
(other than the button).
Taken from this site
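If the Inspector wiring keeps breaking, you can also register the handler in code, which removes any dependency on the OnClick array. A sketch (RestartScene and the field name are assumptions based on the question):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

public class BtnSceneSelect : MonoBehaviour
{
    public Button restartButton; // assign the button in the Inspector

    void Start()
    {
        // Register the click handler in code instead of the OnClick array.
        restartButton.onClick.AddListener(RestartScene);
    }

    void RestartScene()
    {
        SceneManager.LoadScene(SceneManager.GetActiveScene().name);
    }
}
```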

Need help setting up an interface where rotation is fixed for most elements, but alert and sharing boxes auto-rotate with the device

I'm working with Xcode 7 and Swift 2. I am working on an interface with a camera preview layer and controls that display in a manner similar to the native iOS camera app. The controls all stay in place as you turn the device, but the icons "pivot" in place to orient properly for the device orientation. (I hope I explained that in a way that makes sense. If not, open the native camera app on your iPhone and turn the device around a few times to see what I'm talking about.)
I have the basic interface working already by fixing the overall interface orientation using:
override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
    return UIInterfaceOrientationMask.LandscapeRight
}
Then I use transform to rotate each button for the device orientation.
The problem is: I need to be able to present alert messages (UIAlertController) and a sharing interface (UIActivityViewController) in this same interface. How do I get those items to rotate to the correct orientation while still keeping the rest of the interface static?
As I see it, there are two possible approaches here -- I just don't know how to make either one work:
Set the interface to auto rotate and support all orientations, but disable auto-rotation for the views that I need to keep locked in place.
Set the interface to only allow .landscapeLeft (which is currently how it's set up) and find a way to rotate the alert messages and sharing dialog box.
Got it working. I needed to access presentedViewController.view to rotate the alert and share views.
//used to periodically trigger a check of orientation
var updateTimer: NSTimer?

//Checks if the device has rotated and, if so, rotates the controls. Also prompts the user that portrait orientation is bad.
func checkOrientation(timer: NSTimer) {
    //Array of the views that need to be rotated as the device rotates
    var viewsToRotate = [oneView, anotherView]

    //This adds the alert or sharing view to the list, if there is one
    if let presentedVC = presentedViewController?.view {
        viewsToRotate.append(presentedVC)
    }

    //Rotate all of the views identified above
    for viewToRotate in viewsToRotate {
        switch UIDevice.currentDevice().orientation {
        case UIDeviceOrientation.Portrait:
            viewToRotate.transform = CGAffineTransformMakeRotation(CGFloat(-M_PI_2))
        case UIDeviceOrientation.PortraitUpsideDown:
            viewToRotate.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
        case UIDeviceOrientation.LandscapeRight:
            viewToRotate.transform = CGAffineTransformMakeRotation(CGFloat(2 * M_PI_2))
        default:
            viewToRotate.transform = CGAffineTransformMakeRotation(CGFloat(0))
        }
    }
}

using drag events to switch rectangles in a grid in windows phone 7.5

I've run into a bit of a pickle with my puzzle game for Windows Phone.
I want to swap two adjacent rectangles, both on the same grid.
The tap event was easily implemented, but implementing drag seems to be a really big pain.
I'm also using a custom user control to put the rectangles on the grid, so I need to create custom delegates before attaching events to my rectangle matrix.
I am currently using the ManipulationStarted and ManipulationCompleted events to implement the drag gesture, but there are a couple of problems:
1) I have to tell the difference between a tap and an actual drag, both of which are covered by the ManipulationCompleted event. This is the way I do it right now:

if (e.TotalManipulation.Translation.X == 0 && e.TotalManipulation.Translation.Y == 0)
{
    // tap
}
else
{
    // do drag stuff here
}

However, the "do drag stuff here" part does not seem to work even when the translations are different from 0; it always executes the tap branch.
I am currently stuck using manipulation events because, as I said, I am using a custom control as an object prototype for my rectangle matrix, I need custom delegates for that, and apparently the GestureListener has no constructors for its event classes.
So, any suggestion on how to do this?
I figured out the answer just after posting this question.
You can attach a GestureListener to a custom control and create custom delegates by forwarding the drag gesture event parameter from the GestureListener's drag event to the delegate you create, and it works.
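As a sketch of that forwarding (assuming the Silverlight for Windows Phone Toolkit is referenced; TileControl, TileDragged, and AttachGestures are made-up names):

```csharp
using System;
using System.Windows;
using Microsoft.Phone.Controls; // Silverlight for Windows Phone Toolkit

public partial class TileControl
{
    // Custom delegate the grid code subscribes to; the toolkit's
    // event args are passed through unchanged.
    public event EventHandler<DragDeltaGestureEventArgs> TileDragged;

    private void AttachGestures(UIElement rectangle)
    {
        var listener = GestureService.GetGestureListener(rectangle);
        // Forward the toolkit's drag event to our own delegate.
        listener.DragDelta += (sender, e) =>
        {
            if (TileDragged != null)
                TileDragged(this, e);
        };
    }
}
```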
