How to make a slideshow of a texture image in Unity 5 - unityscript

I'm working on a visual city project and I've got a question that's been running around in my head. Basically I have made a simple advertisement board and I added a texture to it, which is a poster. My question is: how can I make this board change its texture to another one after a period of time? I mean the board would work like a slideshow, cycling through textures, so it would not be static anymore. Here's the image of the board:
So if you know how to do this, please let me know and give me some guidelines. Thanks

You could prepare a List<Texture2D> of the textures you want in your slideshow and then attach a script to your advertisement board that assigns a new Texture2D from that List to the board's Material when a specific amount of time has passed.
For counting time you can, for example, accumulate Time.deltaTime in your Update function.
For setting the texture of the material you can use Material.SetTexture (https://docs.unity3d.com/ScriptReference/Material.SetTexture.html) if I am not mistaken.
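A minimal sketch of what such a script could look like, written in C# (the same logic ports directly to UnityScript). The class name, the textures list and the interval field are placeholders you would set up in the Inspector, not anything from the original question:

using UnityEngine;
using System.Collections.Generic;

// Hypothetical example: cycles the board's material through a list of textures.
public class SlideshowBoard : MonoBehaviour
{
    public List<Texture2D> textures = new List<Texture2D>(); // fill in the Inspector
    public float interval = 5f;                              // seconds per poster

    private float timer;
    private int index;
    private Material boardMaterial;

    void Start()
    {
        // Use the renderer's instance material so other objects sharing
        // the same material are not affected.
        boardMaterial = GetComponent<Renderer>().material;
        if (textures.Count > 0)
            boardMaterial.SetTexture("_MainTex", textures[0]);
    }

    void Update()
    {
        if (textures.Count == 0) return;

        // Accumulate elapsed time and swap to the next texture when the interval passes.
        timer += Time.deltaTime;
        if (timer >= interval)
        {
            timer = 0f;
            index = (index + 1) % textures.Count;
            boardMaterial.SetTexture("_MainTex", textures[index]);
        }
    }
}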

Related

How to use texture masks in GameMaker?

First off, I'm not totally sure if "texture masks" is the correct term to use here, so if someone knows what it is then please let me know.
So, the real question: I want to have an object in GameMaker: Studio whose texture changes as it moves around, depending on its position, by pulling from a larger static image behind it. I've made a quick gif of what it might look like.
It can be found here
Another image that might help explain this is the "source-in" section of this image.
This is a reply to the same question posted on the Steam GML forum by MrDave:
The feature you are looking for is draw_set_blend_mode(bm_subtract)
Basically you will have to draw everything onto a surface and then, using the code above, switch the draw mode to bm_subtract. What this will do is, rather than drawing images to the screen, remove them. So you now draw blocks over the background and this will remove that area. Then you can draw everything you just put on the surface onto the screen.
(Remember to reset the draw mode and the surface target after. )
It's hard to get your head around the first time, but actually it isn't all that complex once you get used to it.

Efficiently rendering tiled map using SpriteKit

As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the amount of draws needed, however, swapping textures of nodes that need to be animated was still very slow and had a huge detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be an efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit which are animated by changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which then forwards it to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages which is a wrapper around a Texture Atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but it may help.
SpriteKit will likely be able to handle what you need, but you should look at different optimizations for SpriteKit and, even more so, for your game logic.
SpriteKit. Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures for as long as you need them and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead keep reusing the same texture and store it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but you then have to manage your own zPosition on all the sprites.
Game logic. Updating every tile every loop is going to be very expensive. You will want to look at a better way to do that, such as keeping smaller arrays or maybe doing logic (model) updates on a background thread.
I currently have a project you can look into if you want, called Old Frank. I have a map that is 75 x 75 with 32px by 32px tiles that may be stacked 2 tall. I have both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. I'm not saying there isn't optimization work to be done (it is a work in progress), but I feel it might at least help point you in the right direction.
Hope that helps.

Augmented reality: Rendering function

My question builds on this thread: Computer Vision / Augmented Reality: how to overlay 3D objects over vision? and on its first answer. I want to build an application that projects, in real time, the position of a fictional 3D object onto a video feed, but the first step I have to take is: how can I do this over a single image?
What I am going for at the moment is some kind of function that, given a picture, its 6D pose (position + orientation), a 3D object (in FBX, 3DS, or something easily convertible to or from other formats), and that object's own position and orientation, returns the projection of the 3D object onto the image. Once I have that, I should be able to apply it over every frame of the video feed (how I will get the 6D information of the camera is a problem I'll deal with later).
My problem is that I am unsure where to find such a function, if it even exists. It should be offered as some kind of script or API so an external program can make use of it. Where should I look? Unity? Some kind of OpenCL functionality? So far my reading has not given me any conclusive answers, and as I am a novice in the topic, I'm sure a steep learning curve is ahead and I'd rather put my efforts in the right direction. Thank you
Indeed there's an API for that.
https://developer.vuforia.com
Read the Get Started page.
On this site there is a "Target Manager", where you'll want to upload your target images. Those will allow you to display the 3D object that you want.
On the same "page" you can have several target images.
Example: one that displays your 3D object when visible, one that makes it rotate when hidden, etc.
For the real-time video projection part, I will make the assumption that, in Unity, you can have a movie texture running on a plane in the background and sort your layers so that your 3D object is above it.
Please update the topic whenever you find a way.
Bye

Animate 3D object disassembly in Unity3D

My question is just about choosing the right approach, because I'm not sure about the solution.
I have a 3D model in my project, and at some point I want to show an animated disassembly; the object is made of something like 200 pieces.
So animating it keyframe by keyframe, one piece at a time, is time-consuming.
The animation I'm looking for is like an explosion from the center of the object, so the parts just move outward from its center.
Example image:
What would you do?
What is the best way to manage such a task?
I would code it. Maybe I am biased because I am a programmer, but animating it would be a pain.
So I would import the model into Unity3D. Then I would grab all the parts and store them in a list. Once I have the 200 parts, I can do anything I want with them.
I would then proceed to attach rigidbodies and box colliders to them all -- this can be done programmatically. Then you can initiate the explosion by adding a velocity to each part. If you want to be fairly realistic and have something that looks fairly random, you can give each object a mass and then use the equation F = ma for the explosion. That is, each part will get a different acceleration depending on the mass it has.
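A rough sketch of that idea in Unity C#; the class and field names are made up for illustration, and Explode() would be called from whatever triggers the disassembly:

using UnityEngine;
using System.Collections.Generic;

// Hypothetical sketch: collect every child part of the model, give it physics
// components programmatically, then push it away from the model's center.
public class DisassemblyExploder : MonoBehaviour
{
    public float explosionForce = 10f;   // impulse applied to each part

    private readonly List<Rigidbody> parts = new List<Rigidbody>();

    void Start()
    {
        // Attach rigidbodies and box colliders to all child parts.
        foreach (Transform part in GetComponentsInChildren<Transform>())
        {
            if (part == transform) continue;   // skip the root object itself

            part.gameObject.AddComponent<BoxCollider>();
            Rigidbody rb = part.gameObject.AddComponent<Rigidbody>();
            rb.isKinematic = true;             // keep the model assembled until triggered
            parts.Add(rb);
        }
    }

    public void Explode()
    {
        Vector3 center = transform.position;
        foreach (Rigidbody rb in parts)
        {
            rb.isKinematic = false;
            // With the same impulse, heavier parts end up slower (F = m*a),
            // which gives a slightly uneven, more natural-looking scatter.
            Vector3 direction = (rb.transform.position - center).normalized;
            rb.AddForce(direction * explosionForce, ForceMode.Impulse);
        }
    }
}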

3d model construction using multiple images from multiple points (kinect)

Is it possible to construct a 3D model of a still object if various images, along with depth data, were gathered from various angles? What I was thinking of was a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this should be possible. The model would also have to be recreated with the textures.
What I would like to know is whether there are any similar projects/software already available; any links would be appreciated.
Whether this is possible within perhaps 6 months.
How I would proceed to do this, e.g. any similar algorithm you could point me to, and so on.
Thanks,
MilindaD
It is definitely possible, and there are a lot of 3D scanners out there that work on more or less the same principle of stereoscopy.
You probably know this, but just to contextualize: the idea is to get two images of the same point from different viewpoints and to use triangulation to compute the 3D coordinates of that point in your scene. Although this part is quite easy, the big issue is to find the correspondence between the points in your two images, and this is where you need good software to extract and recognize matching points.
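As a minimal sketch of the idealized, rectified two-camera case (real scanners also have to handle calibration and outliers): with focal length f, baseline B between the cameras, disparity d = x_L - x_R of a matched point, and principal point (c_x, c_y), the triangulated coordinates are

Z = \frac{f\,B}{d}, \qquad X = \frac{(x_L - c_x)\,Z}{f}, \qquad Y = \frac{(y_L - c_y)\,Z}{f}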
There is an open-source project called Meshlab for 3D vision, which includes 3D reconstruction algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones, I will try to find them and add them here:
Insight3d
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview which is a code sample for the Kinect for Windows SDK that does specifically this. Currently it uses the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in the PLY format, which can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository will nevertheless get you started with what you want to achieve.
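If it helps to picture the first stage described above, here is a rough, hypothetical sketch (not the code from the linked repository): back-projecting one Kinect depth frame through a simple pinhole model and writing an ASCII PLY file that MeshLab can open. The focal length is an assumed approximation for the 640x480 Kinect v1 depth stream.

using System.Globalization;
using System.IO;
using System.Text;

// Hypothetical sketch: convert one depth frame (millimetres per pixel) into an
// ASCII PLY point cloud. Not the code from the repository linked above.
static class DepthToPly
{
    const float FocalLengthPx = 571.26f; // assumed approximation for the 640x480 depth stream

    public static void Write(string path, ushort[] depthMm, int width, int height)
    {
        var points = new StringBuilder();
        int count = 0;

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                ushort d = depthMm[y * width + x];
                if (d == 0) continue;                 // no depth reading at this pixel

                // Back-project the pixel through a simple pinhole model (metres).
                float z = d / 1000f;
                float px = (x - width / 2f) * z / FocalLengthPx;
                float py = (height / 2f - y) * z / FocalLengthPx;

                points.AppendLine(string.Format(CultureInfo.InvariantCulture,
                    "{0} {1} {2}", px, py, z));
                count++;
            }
        }

        string header = "ply\nformat ascii 1.0\n" +
                        "element vertex " + count + "\n" +
                        "property float x\nproperty float y\nproperty float z\n" +
                        "end_header\n";
        File.WriteAllText(path, header + points);
    }
}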
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly". It is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are more interested in the theoretical part of this problem (I mean, if you want to know how it works),
here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image; finding correspondences between them across images; building feature tracks; and estimating camera matrices and thereby the 3D coordinates of the feature points.
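The "estimating camera matrices and thereby the 3D coordinates" step rests on the standard pinhole projection relation; as a minimal sketch, each observed feature x (a pixel in homogeneous coordinates) of an unknown 3D point X seen by a camera with intrinsics K, rotation R and translation t satisfies

x \simeq K \, [\,R \mid t\,] \, X

and Structure from Motion solves for the R, t and X that best reproduce all the observed x across the images.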
