I am a total beginner in Unity, but I'm getting familiar with the interface and the way scripts work with GameObjects. Some days ago, I came across an article about a 3D LED matrix controlled by Unity, and since then I've been trying to make it work in my project.
Original article: http://philippseifried.com/blog/2014/10/29/3d-led-matrix-with-unity/
Basically, once the script is attached to an orthographic camera (or at least that's what I understood from the article), the camera "slices" the scene into layers, transforms each slice into a pixel matrix, and paints the result onto some dynamically generated preview layers.
I have managed to attach the script to the camera and get the preview layers to show up. However, I'm unable to get the final result the article shows: the preview layers display absolutely nothing. I think it has to do with the author using some kind of transparent planes that I have been unable to replicate.
It would be great if someone could read the article, watch the last Vine (it shows his Unity screen with the transparent layers up and running), and guide me a bit toward getting the same result.
The script was looking at the background color to decide whether a pixel had to be painted or not.
Changing the camera background to transparent (RGBA) was enough to see the final result.
I have been trying to figure out how to solve this problem for a week now.
I am fairly new to Unity, and I'm trying to curve an image so it wraps around a round shape.
In my example, the player imports an image from their hard drive, and this image is then curved to cover a bottle I created in Blender. The image should cover the girthier part of the bottle, i.e. the bottom part.
Obviously, the image's texture type is set to Sprite (2D and UI).
I have tried several things, such as downloading a script that curves a UI Image, but I later realized the image can't be a UI element, because it should appear in the scene and be manipulated later by the player. I would also like to add text that "covers" the bottle in a curved way, but that is a separate problem.
So if anyone knows how I could curve an image in Unity, thank you in advance.
P.S. If I don't post any code, it's because I haven't done anything in the project yet other than the camera movement, so there is nothing interesting to show. I am just trying to figure out how to do this before continuing with anything else.
I am using the Project Tango C API. I subscribed to the depth and color image callbacks (TangoService_connectOnPointCloudAvailable and TangoService_connectOnFrameAvailable), so I have a TangoPointCloud and a matching TangoImageBuffer. I've rendered them separately and they are valid. So I understand that I can now essentially loop through each 3D point in the TangoPointCloud and do whatever I want with it. The trouble is that I also want the corresponding color for each 3D point.
The standard Tango examples cover plenty of cases, such as drawing depth and color separately or texturing the depth image with OpenGL, but they don't include a simple sample that maps a 3D point to its color.
I tried TangoSupport_projectCameraPointToDistortedPixel, but it gives weird results. I also tried the TangoXYZij approach, but it is obsolete.
Please, if you have achieved this, help; I have wasted two days going back and forth on this.
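To make the goal concrete, my understanding is that the mapping reduces to standard pinhole projection with polynomial radial distortion, roughly like the sketch below (TypeScript just for illustration, not the Tango C API; the intrinsic values are placeholders, and the point is assumed to already be in the color camera frame):

    // Placeholder intrinsics; on the device these come from
    // TangoService_getCameraIntrinsics for TANGO_CAMERA_COLOR.
    const fx = 1042.0, fy = 1042.0;   // focal lengths (pixels)
    const cx = 637.0, cy = 362.0;     // principal point (pixels)
    const k = [0.23, -0.65, 0.55];    // polynomial radial distortion

    // Project a 3D point (meters, color camera frame, z > 0) to a
    // distorted pixel; that pixel in the TangoImageBuffer would then
    // hold the point's color.
    function pointToPixel(x: number, y: number, z: number): [number, number] | null {
      if (z <= 0) return null;            // behind the camera
      const xn = x / z, yn = y / z;       // normalized image coordinates
      const r2 = xn * xn + yn * yn;
      const d = 1 + k[0] * r2 + k[1] * r2 * r2 + k[2] * r2 * r2 * r2;
      return [fx * d * xn + cx, fy * d * yn + cy];
    }

If that is right, the part I'm presumably missing is the transform from the depth frame at the point cloud's timestamp into the color camera frame at the image's timestamp.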
I'm developing a game with THREE.js and webvr-boilerplate. I'm struggling a bit with how to properly render a HUD (score, distance, power-ups, etc.) that always stays at the top of the scene. I've tried a plane (with a texture drawn from a hidden canvas element), but positioning it in space proves difficult, since I can't match the right depth.
Any clues please? :)
Well, you shouldn't have a classic HUD; VR doesn't work like that.
You're looking for something called diegetic or spatial UI. With spatial UI, the scores and other icons are rendered as geometry in scene space, at a fixed position or distance. For best results, draw the information on a game object that mimics a real display, for example a fuel gauge on the dashboard of a car or the visible remaining bullets on a gun (this is diegetic UI).
Unity has made a nice page describing these concepts.
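In THREE.js terms (matching the question's setup), the simplest spatial-UI variant is to parent the HUD plane to the camera at a fixed distance, so it follows the view without any per-frame depth matching. A minimal sketch, assuming a canvas you redraw the score into (names and sizes are illustrative):

    import * as THREE from 'three';

    // Draw the HUD contents into an offscreen canvas.
    const hudCanvas = document.createElement('canvas');
    hudCanvas.width = 512;
    hudCanvas.height = 128;
    const ctx = hudCanvas.getContext('2d')!;
    ctx.font = '48px sans-serif';
    ctx.fillStyle = 'white';
    ctx.fillText('Score: 0', 10, 60);

    const hudTexture = new THREE.CanvasTexture(hudCanvas);
    const hud = new THREE.Mesh(
      new THREE.PlaneGeometry(0.5, 0.125),
      new THREE.MeshBasicMaterial({
        map: hudTexture,
        transparent: true,
        depthTest: false, // draw on top of scene geometry
      })
    );

    const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
    // Fixed position in view space: ~2 m ahead, slightly below center.
    hud.position.set(0, -0.3, -2);
    camera.add(hud);

    const scene = new THREE.Scene();
    scene.add(camera); // the camera must be in the scene for its children to render

    // After redrawing the canvas on score changes:
    // hudTexture.needsUpdate = true;

Placing the panel a couple of meters out matters in VR: anything much closer than arm's length is uncomfortable to converge on.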
I created a performance statistics HUD specifically for WebVR & THREE.js Projects.
https://github.com/Sean-Bradley/StatsVR
While the default setup shows specific information, you can modify it to show custom graphics and other data.
And if you don't believe me, just check out the StatsVR video tutorial.
I created a THREE.PlaneGeometry with height data and placed a THREE.PointLight at the highest point, but the light illuminates areas that are not visible from that point.
Why?
I want the light to illuminate only the areas that are visible from its position.
By default, the appearance of any given point on a surface is calculated from the lights, their properties, and of course the material properties; it does not take the rest of the scene into account, as that would be very computationally expensive. Various ray-tracing renderers do this, but they are really slow, and that's not how WebGL and Three.js work.
What you want is shadows. Three.js is capable of rendering shadows using the shadow map method. There are various examples of using shadow maps, both on the net and in the Three.js examples folder.
A word of warning, though: getting shadows to work well can be hard if you don't have the basics down, so you may need to do some studying. Shadows can slow your application down (especially with many lights) and look ugly if not properly configured and fine-tuned. Also, I think shadow maps are only supported for SpotLight and DirectionalLight; PointLights are trickier.
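To make that concrete, a basic shadow map setup in recent THREE.js revisions looks roughly like this (a SpotLight stands in for the PointLight, per the caveat above; names, sizes, and positions are illustrative):

    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer();
    renderer.shadowMap.enabled = true; // turn shadow mapping on globally

    // A SpotLight at the "highest point"; spot and directional lights
    // are the safest shadow casters, as noted above.
    const light = new THREE.SpotLight(0xffffff, 1);
    light.position.set(0, 10, 0);
    light.castShadow = true;
    light.shadow.mapSize.set(1024, 1024); // higher = sharper but slower

    const scene = new THREE.Scene();
    scene.add(light);

    // The height-mapped plane both casts and receives shadows, so
    // ridges can darken the valleys behind them.
    const terrain = new THREE.Mesh(
      new THREE.PlaneGeometry(100, 100, 64, 64), // displace vertices for heights
      new THREE.MeshLambertMaterial({ color: 0x88aa66 })
    );
    terrain.rotation.x = -Math.PI / 2;
    terrain.castShadow = true;
    terrain.receiveShadow = true;
    scene.add(terrain);

If the terrain shadows itself, expect to tune light.shadow.bias to avoid shadow acne or peter-panning.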
I would like to put a photorealistic virtual scene on a tablet, so that when the user rotates the tablet, it looks as if the tablet were a window into a virtual world.
Pre-rendered scenes can be photorealistic, while real-time rendering tends to have a "computer-made" look. Given that, for a single scene, the point of view can be rotated but not translated in space, could a pre-rendered panoramic scene give an immersive impression?
I doubt that this is easy, since rotating the viewpoint will cause some sort of distortion. This kind of distortion is easy to handle for apps like Starwalk, but difficult for photos. Can anyone point me in a direction?
I know that this would be tremendously easy if motion were restricted to a single axis, but I would like the user to have a full 3D experience.
You need to either warp the photographs before applying them as textures to your "sky dome" or use non-uniform texture coordinates. Done right, this will even out most of the distortion, giving a more realistic appearance.
Another alternative is to use more photographs so that you are only actually using the central area of each one.
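For the sky-dome variant, here is a THREE.js sketch (one option among many; 'panorama.jpg' is a placeholder for a pre-stitched equirectangular image). A sphere's default UVs effectively give you the non-uniform mapping mentioned above:

    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);

    // Inward-facing sphere textured with the stitched panorama.
    const dome = new THREE.Mesh(
      new THREE.SphereGeometry(50, 64, 48),
      new THREE.MeshBasicMaterial({
        map: new THREE.TextureLoader().load('panorama.jpg'),
        side: THREE.BackSide, // render the inside faces of the sphere
      })
    );
    scene.add(dome);

    // Rotate (never translate) the camera from the tablet's orientation
    // sensors; a panorama has no parallax, so translation would break
    // the illusion. A simple yaw stands in for real sensor input here.
    let yaw = 0;
    function render(renderer: THREE.WebGLRenderer) {
      camera.rotation.y = yaw; // replace with device-orientation input
      renderer.render(scene, camera);
    }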
I've found that http://code.google.com/p/panoramagl/ can render cubic, spherical, and cylindrical panoramic images, so the problem becomes how to render a panorama, which can be solved by stitching. I will still leave this question open to see if anyone else has better answers.