Not able to run Unity 3D app properly - Windows

I have just started to learn Unity 3D and I am using this as my first lesson. I created a terrain, added a texture, plants and grass, then added a first person controller and a directional light. When I press Play, the scene automatically starts moving along the Y direction. I want the scene to move only when I give input using the keyboard or mouse. How can I achieve this?

The first person controller has to be placed completely above the terrain; if it starts intersecting or below the surface, the character controller simply falls under gravity, which is the constant Y movement you are seeing. After repositioning it above the terrain I get the desired effect.
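If you want to guard against this from a script as well, here is a minimal sketch, assuming a single active Terrain in the scene; the SnapAboveTerrain component name and the clearance value are illustrative, not part of the original project:

using UnityEngine;

// Illustrative helper: attach to the First Person Controller so it starts
// slightly above the terrain surface instead of inside or below it.
public class SnapAboveTerrain : MonoBehaviour
{
    public float clearance = 1.0f; // extra height above the terrain surface

    void Start()
    {
        Terrain terrain = Terrain.activeTerrain;
        if (terrain == null)
            return;

        Vector3 pos = transform.position;
        // SampleHeight gives the terrain height under this position, relative to the terrain's own Y.
        float groundY = terrain.SampleHeight(pos) + terrain.transform.position.y;
        if (pos.y < groundY + clearance)
        {
            transform.position = new Vector3(pos.x, groundY + clearance, pos.z);
        }
    }
}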

Related

How to fix Blue Screen appearing after GameObject is removed in Unity 2D Project

I have been successfully building and running a Unity 2D game, but I started receiving a blue screen during one of my operations. Specifically, when I close a popup and remove all of its child GameObjects, the entire game screen turns dark blue (the default background color of the main camera). The music for the game still plays and clicks are still registered if you click in the right area (I can still press the back button, I just can't see it).
If I remove one GameObject, the problem doesn't come up, but once I remove two GameObjects, the entire screen turns blue.
This is the function I use to remove a GameObject, in case it helps. It works perfectly as far as actually removing the GameObjects goes (the objects being removed are instantiated from prefabs). I think the problem may be with the camera for some reason, but I have no clue why it happens in this function.
public void Remove(int index)
{
    // Width of a single pagination toggle, used to re-centre the pagination afterwards.
    float toggleWidth = toggle.GetComponent<RectTransform>().sizeDelta.x;
    // Remove the pagination toggle that belongs to the last panel.
    DestroyImmediate(scrollSnap.pagination.transform.GetChild(scrollSnap.NumberOfPanels - 1).gameObject);
    // Remove the panel itself from the scroll snap.
    scrollSnap.Remove(index);
    // Shift the pagination so it stays centred after losing one toggle.
    scrollSnap.pagination.transform.position += new Vector3(toggleWidth / 2f, 0, 0);
}
I don't receive any errors or warnings in the console, just a blue screen once more than one GameObject is removed.
EDIT:
It turns out my main Canvas's planeDistance was being changed from 100 to 3200. I still have no clue why this change occurred, but for anyone else seeing a similar dark blue screen in the middle or at the start of their game, please check the values of your Canvas and Camera in the Inspector. Simply controlling the planeDistance did the trick for me!
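As a sanity check while hunting down the cause, a minimal sketch of a watchdog that logs and resets the plane distance is shown below; it assumes the Canvas uses the Screen Space - Camera render mode, and the component name, field names and expected value of 100 are illustrative:

using UnityEngine;

// Illustrative watchdog: logs and resets the Canvas plane distance if something
// unexpectedly changes it (only meaningful for a Screen Space - Camera canvas).
public class PlaneDistanceWatchdog : MonoBehaviour
{
    public Canvas mainCanvas;             // assign the main Canvas in the Inspector
    public float expectedDistance = 100f;

    void LateUpdate()
    {
        if (mainCanvas == null || mainCanvas.renderMode != RenderMode.ScreenSpaceCamera)
            return;

        if (!Mathf.Approximately(mainCanvas.planeDistance, expectedDistance))
        {
            Debug.LogWarning("Canvas planeDistance changed to " + mainCanvas.planeDistance + ", resetting to " + expectedDistance);
            mainCanvas.planeDistance = expectedDistance;
        }
    }
}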
I made a new scene next to the sample scene when I started my sprite learning game and forgot all about it.
When I had to publish it, I had the wrong scene selected in the Build Settings, so it published an empty scene instead of my game.
I solved it by putting the game into 3D mode and realizing that the camera was positioned very far from the sprites; just change the Z axis and go back to 2D mode.
Just move your camera along the Z axis: it could be too far from the object, or behind it. You can also check this by switching to 3D mode, inspecting the object's position, and trying both positive and negative values.
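For reference, a minimal sketch of nudging the camera back from a script; the -10 Z value is just the conventional default position of a Unity 2D camera, not something taken from the question:

using UnityEngine;

// Illustrative only: move the main camera back to a typical 2D position so that
// sprites sitting at z = 0 end up in front of it.
public class ResetCameraDepth : MonoBehaviour
{
    void Start()
    {
        Vector3 p = Camera.main.transform.position;
        Camera.main.transform.position = new Vector3(p.x, p.y, -10f);
    }
}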

HoloLens - UI/Slider and Cursor do not intersect during gaze

I'm trying to use a UI/Slider in a Unity app for HoloLens.
I used the steps described here - Unity UI on the HoloLens
So as a result I have the following structure:
MainCamera properties:
SliderCanvas is using MainCamera:
Slider properties:
InteractiveMeshCursor is taken from HoloToolkit.
As a result I'm getting this picture:
When I move my head the cursor behaves correctly: it stays in the middle of the view. If I add other 3D objects to the scene it also changes its state correctly, so GazeManager seems to be working.
However, I cannot gaze at the Slider, because it also moves with the camera and stays in the bottom/center of the view, which is where I want it to be. So in my case there is no way for them to intersect.
How can I fix this? Do I need to add another camera for the SliderCanvas, and if so, how would I control both cameras? I am definitely missing something and would appreciate your help.
As expected, the solution turned out to be simple (I had missed one step from the tutorial). For UI objects you need to set the Canvas's Render Mode property to World Space and then adjust the Position and Scale of the Slider. Now the gaze is working.
SliderCanvas properties:
Slider properties:
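If you ever need to apply the same fix from a script instead of the Inspector, here is a minimal sketch; the position and scale values are illustrative placeholders, not the ones from the screenshots above:

using UnityEngine;

// Illustrative: configure a canvas for world-space UI so the HoloLens gaze
// cursor can actually intersect it. The values are placeholders.
public class WorldSpaceCanvasSetup : MonoBehaviour
{
    public Canvas sliderCanvas;

    void Start()
    {
        sliderCanvas.renderMode = RenderMode.WorldSpace;
        sliderCanvas.transform.position = new Vector3(0f, 0f, 2f);   // roughly 2 m in front of the user at startup
        sliderCanvas.transform.localScale = Vector3.one * 0.001f;    // shrink pixel-sized UI down to metre scale
    }
}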

Group of objects on top but added to a camera

To be precise, I have already seen threads about this but didn't find a way to achieve exactly what I need.
Basically I have a board of objects that needs to remain always on top of everything while also being attached to the camera.
I first tried adding the group to the camera, and as hoped it stayed in the viewport. But in this configuration the group of objects is still part of the scene, so while zooming towards regular objects in the "editor" the board ends up going into/among those objects.
My second attempt was based on this thread and worked wonderfully for getting all of the board objects rendered above everything. But in that configuration, when rotating around the axis (with orbit controls) both scenes rotate. So I tried to update the foreground scene with the coordinates of the camera, but the update was not immediate and the scene flickers (I suppose that while rotating, the update function is not called immediately).
What I would really like is to "attach" the foreground scene to the camera so that it stays on top and stuck to the screen/viewport, but I don't even know whether that is possible or how to do it (as only groups of objects seem to be attachable to the camera).
I am really stuck on this point. Thanks for any help!
If this is what you need, just set object.material.depthTest = false; and object.renderOrder = 1000; for every object that needs to stay always on top and attached to the camera.

Unity 3D Kudan "Place Markerless Object" whenever mesh leaves screen?

I'm kind of new to Unity 3D and C#, and I'm not exactly sure how Kudan's arbitrary tracking solution works in detail. I'm currently using the Unity Kudan SDK to build, or at least attempt, a VR positional tracking solution. My plan is:
Whenever the mesh is leaving the screen, I want to freeze its position and find new feature points (the "place markerless object" button does exactly this: find new feature points and place a mesh).
Once it has found new feature points (which should be a matter of milliseconds), it unfreezes the position of the mesh and uses the new feature points to further update its position.
The "find new feature points" step is necessary because whenever the mesh and the old feature points leave the screen, tracking becomes very inaccurate.
I already tried this in SampleApp.cs:
bool VRSignal;

public void Start()
{
    // Get the tracking state from the "KudanTracker" on the Kudan Camera.
    GameObject g = GameObject.Find("Kudan Camera");
    KudanTracker bScript = g.GetComponent<KudanTracker>();
    VRSignal = bScript.ArbiTrackIsTracking();
}

public void Update()
{
    if (VRSignal == false)
    {
        // From the floor placer.
        Vector3 floorPosition;       // The current position in 3D space of the floor
        Quaternion floorOrientation; // The current orientation of the floor in 3D space, relative to the device

        _kudanTracker.FloorPlaceGetPose(out floorPosition, out floorOrientation); // Gets the position and orientation of the floor and assigns them to the referenced Vector3 and Quaternion
        _kudanTracker.ArbiTrackStart(floorPosition, floorOrientation);            // Starts markerless tracking based upon the given floor position and orientation
    }
}
But now it won't track properly anymore. Also, I'm pretty sure ArbiTrackIsTracking() isn't the solution here, because it doesn't report a loss of tracking when the mesh leaves the screen.
Do you have any idea how to solve this problem?
If I understand correctly, you want to change the position of the 3D model with a trigger as soon as the 3D model disappears from the screen.
And you are right: ArbiTrackIsTracking() remains true even if the 3D model goes off screen, because if you move the device back towards the 3D model it will still be tracked. Only if you move the phone too far does the tracking actually stop.
My idea for your issue is to use the position of your 3D model's markerless transform driver, because the 3D model moves according to the position and orientation of the device while tracking. Take the position at the moment your 3D model starts being tracked, then define a threshold corresponding to the allowed difference between that first position and the latest position. Once this difference is reached, stop the tracking with ArbiTrack stop (and, following your plan, start it again on a new floor pose); see the sketch below.
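A minimal sketch of that idea follows. It assumes the Kudan Unity SDK: KudanTracker, FloorPlaceGetPose and ArbiTrackStart come from the question's code, ArbiTrackStop is the "ArbiTrack stop" call mentioned above, and the markerlessDriver reference and drift threshold are illustrative:

using UnityEngine;

// Illustrative: re-place the markerless object once the mesh has drifted too far
// from where tracking started, following the idea described above.
public class RetrackOnDrift : MonoBehaviour
{
    public KudanTracker _kudanTracker;   // the tracker on the Kudan Camera
    public Transform markerlessDriver;   // the markerless transform driver moving the mesh
    public float maxDrift = 2.0f;        // how far the mesh may move before re-placing it

    private Vector3 _startPosition;
    private bool _hasStartPosition;

    void Update()
    {
        if (!_kudanTracker.ArbiTrackIsTracking())
        {
            _hasStartPosition = false;
            return;
        }

        if (!_hasStartPosition)
        {
            // Remember where the mesh was when tracking (re)started.
            _startPosition = markerlessDriver.position;
            _hasStartPosition = true;
            return;
        }

        // Difference between the first saved position and the current one.
        float drift = Vector3.Distance(_startPosition, markerlessDriver.position);
        if (drift > maxDrift)
        {
            // Stop arbitrary tracking, then re-place the mesh on the current floor pose.
            _kudanTracker.ArbiTrackStop();

            Vector3 floorPosition;
            Quaternion floorOrientation;
            _kudanTracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
            _kudanTracker.ArbiTrackStart(floorPosition, floorOrientation);

            _hasStartPosition = false; // take a fresh start position next frame
        }
    }
}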
If you have another question, you can ask me on my Twitter account #ModeLolito, where I can answer faster.
You can also watch my YouTube channel to see my work on Kudan:
https://www.youtube.com/user/modelisationLolito

Threejs Rotating object with device orientation control

I am trying to achieve an effect similar to one of the examples Google has put out with their Cardboard app, called 'Exhibit'. I have a 3D object that I want to rotate using device orientation controls. Right now, with just the device orientation controls, I can view the 3D object, but when I turn around the camera rotates (it seems), causing the object to fall out of view until I turn all the way back to where I started. In other words, the camera seems to rotate on its own axis as I look around. What I want is for the object itself to rotate as I turn around.
Kind of like this example: http://threejs.org/examples/#misc_controls_orbit, except I want to rotate using device orientation controls.
Any idea how I can incorporate this feature?
Thank you for your assistance.
The answer to my own question, for future seekers, is to replace camera in
controls = new THREE.DeviceOrientationControls(camera, true);
with the 3D object you are trying to rotate.
