How can I work programmatically with spell effects in a game? [closed] - animation

How can I work programmatically with spell effects in a game?
I have an effect that looks like this: spell effect
I want to know how I can change the scale of the spell,
like changing its color with distance,
or making the effect turn transparent while you run and fade out after 10 seconds,
or
look at the spell and weapon in this pic: when you have a small weapon and you add an animation effect to it (like a fire or ice effect),
how do you change the animation based on the size of the weapon?
I have no idea how to implement that.
Thanks in advance.
[Edit by Spektre] According to the comments, I would change the question text to something like this:
I need/want to program a spell effect visualization for a game or similar ...
What are the common/usual approaches to do this (algorithms, graphics techniques)?
I want to implement Ice/Fire/??? effects
in the form of rays/waves/fields/cones...
ideally with a single configurable effect routine (I guess)
Also, what are the exact names for some of these effects, so I can search for them myself?
I want to use the Unity3D environment.
In short, I want to know how I can make a dynamic spell effect.

Well, there are many approaches for this. I am no expert in the field and not a Unity3D user, so I will stick to the basics:
particle system
It is an engine which visualizes particles (many small moving objects). It is used for many effects, like rocket exhaust, fire, lightning, changing glows, and many more. The trick is the use of blending, so each particle is usually a semi-transparent ball: more transparent on the outside and more solid on the inside. When you draw many particles close together, they blend into the desired continuous effect.
This can be done with a single textured QUAD or TRIANGLE per particle. The texture can be white with transparency coded in the alpha channel, so the color of the effect can be set in code without changing the texture. Color, size and movement patterns can differ between particles and also change over time. These three parameters define the effect's look. For example, say you want to cast an electric ray from caster to target, so the movement pattern is a LINE. Now distort that line a little with some random numbers so the LINE becomes a POLYLINE, and at the vertices of this POLYLINE you can occasionally release a particle in a random direction with a limited duration, so it looks like sparks (do not forget to shrink them over time so they dissipate). You also have to experiment with the speed, size and color of the main particle stream along the POLYLINE until it looks right. Some effects need to combine a few different particle streams.
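To make this concrete, here is a minimal CPU-side sketch in C# (plain System.Numerics, not Unity-specific; the class names, lifetimes and shrink/fade factors are illustrative assumptions, not a definitive implementation):

using System;
using System.Collections.Generic;
using System.Numerics;

// One semi-transparent billboard particle.
class Particle
{
    public Vector3 Position, Velocity;
    public float Life;    // remaining lifetime in seconds
    public float Size, Alpha;
}

class RayEffect
{
    static readonly Random Rng = new Random();
    public List<Particle> Particles = new List<Particle>();

    // Emit particles along the caster->target LINE, displaced by random
    // jitter so the LINE becomes a POLYLINE, each one a short-lived spark.
    public void Emit(Vector3 caster, Vector3 target, int count, float jitter)
    {
        for (int i = 0; i < count; i++)
        {
            float t = count > 1 ? (float)i / (count - 1) : 0f; // 0..1 along the ray
            Particles.Add(new Particle
            {
                Position = Vector3.Lerp(caster, target, t) + RandomOffset(jitter),
                Velocity = RandomOffset(0.5f),                  // drift off the ray
                Life = 0.3f + (float)Rng.NextDouble() * 0.4f,   // random short lifetime
                Size = 0.2f,
                Alpha = 1f
            });
        }
    }

    // Age, move, shrink and fade every particle; remove the dead ones.
    public void Update(float dt)
    {
        for (int i = Particles.Count - 1; i >= 0; i--)
        {
            Particle p = Particles[i];
            p.Life -= dt;
            if (p.Life <= 0f) { Particles.RemoveAt(i); continue; }
            p.Position += p.Velocity * dt;
            p.Size *= 0.95f;    // sparks dissipate: shrink...
            p.Alpha *= 0.92f;   // ...and fade, matching the blending trick above
        }
    }

    static Vector3 RandomOffset(float scale) => new Vector3(
        ((float)Rng.NextDouble() - 0.5f) * scale,
        ((float)Rng.NextDouble() - 0.5f) * scale,
        ((float)Rng.NextDouble() - 0.5f) * scale);
}

Each live particle would then be drawn as a camera-facing textured QUAD, with Size and Alpha driving the blending.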
Search keywords: particle system, RGBA texture, blending, interpolation
This picture is taken from the link posted in the question. It is a nice example of two particle systems: a yellow straight LINE particle stream and a green POLYLINE particle stream. Some green sparks around the main stream are also present.
texture animation
You can have a little cyclic movie (image by image) in an array of textures: you draw the texture on a plane (usually a single QUAD or TRIANGLE) and switch to the next texture after some time, usually wrapping back to the first after the last one. You can also use BLENDing or STENCIL techniques to draw only the effect area. If the textures are white (colorless), then the color can be modulated in code. This is mostly used for explosions, fire, ...
This is a simple explosion movie example. It is not a cyclic animation, so the effect stops after the last frame (an explosion has a finite duration).
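As a sketch of how this could look in Unity (the field names and frame rate are my assumptions, and the frames array must be filled with your own images):

using UnityEngine;

// Flip-book texture animation: cycle an array of frames on the material.
public class TextureAnimator : MonoBehaviour
{
    public Texture2D[] frames;          // the movie, one image per frame
    public float framesPerSecond = 12f;
    public bool loop = true;            // false = stop on last frame (explosion)

    Renderer rend;
    float elapsed;

    void Start()
    {
        rend = GetComponent<Renderer>();
        // White (colorless) frames can be tinted from code, as noted above:
        // rend.material.color = Color.cyan;
    }

    void Update()
    {
        elapsed += Time.deltaTime;
        int index = (int)(elapsed * framesPerSecond);
        if (!loop && index >= frames.Length)
            index = frames.Length - 1;  // finite effect: freeze on the last frame
        rend.material.mainTexture = frames[index % frames.Length];
    }
}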

Related

Dimension changing of fbx model in unity 3D [duplicate]

Is it possible to change the width of an FBX model in 3D without ruining its realistic look, so that after changing its dimensions the model is not stretched?
If two objects are placed beside each other, I need to increase the size of one object and change the position of the other object relative to the first.
Thanks in advance.
This breaks down into two problems. If you want to scale an object in just one dimension, it will always stretch; take your table, for example:
While the board looks fine, the legs will get stretched and look unrealistic.
Now the question is what can you do?
It depends on your model.
First of all, does your model have only one mesh, or does every component have its own mesh?
Preferably you want each component to be an independent mesh object. For your table it would be something like this:
This way you can scale only the board and then adjust the positions of the legs so that they fit the new board size.
If you have only one mesh, there is not a lot you can do in Unity. You would need to go into Blender or another 3D modeling tool and split the components manually.
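Assuming the components are separate meshes, a minimal sketch of the idea (the hierarchy, field names and the X-axis choice are assumptions about your model):

using UnityEngine;

// Scale only the board, then push the legs outward so they stay under its edges.
public class TableResizer : MonoBehaviour
{
    public Transform board;   // the board mesh, a child of the table root
    public Transform[] legs;  // the leg meshes

    public void WidenBoard(float factor)
    {
        // Stretch the board along its local X axis only.
        Vector3 s = board.localScale;
        board.localScale = new Vector3(s.x * factor, s.y, s.z);

        // Move each leg outward by the same factor so it stays under the edge.
        foreach (Transform leg in legs)
        {
            Vector3 p = leg.localPosition;
            leg.localPosition = new Vector3(p.x * factor, p.y, p.z);
        }
    }
}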
Now, if you stretch just the board and your model has a texture, you will notice that the texture looks stretched.
What can you do about that?
Go to your texture and first of all check the wrap mode.
In this case we want it set to Repeat. After that we need to change the material settings.
Since we stretched the geometry, we need to change the tiling: before, it was at y = 1, but we scaled the y dimension, so now we need to adapt this number as well and make the texture repeat. For a table this is doable; if we work with more complex textures that have specific parts, this will not work and you will have to change the texture manually.
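In code this amounts to something like the following sketch (assuming the renderer and the stretch factor are already at hand, and the texture's wrap mode is set to Repeat):

using UnityEngine;

// After stretching the geometry by 'factor' along Y, repeat the texture the
// same number of times so the texels keep their original proportions.
public static class TilingFixer
{
    public static void MatchTiling(Renderer rend, float factor)
    {
        Material mat = rend.material;
        mat.mainTextureScale = new Vector2(mat.mainTextureScale.x,
                                           mat.mainTextureScale.y * factor);
    }
}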
Now the texture looks better, but you will probably have abrupt color changes where the texture repeats; I "circled" it in the picture. To fix this you have to edit the texture in an image editing program and make it seamless.
I hope this helped a bit. I know this is only the basics; to get a perfect texture and image you have to put in a bit more work, and for that I would highly recommend reading a tutorial.

Is there any way to implement this beautiful image effect?

Recently I found an amazing app called Photo Lab, and I'm curious about one effect called Paper Rose. In the pictures below, one is the original picture and the other is the processed picture. My question is: what kind of algorithm can produce this effect? It would be even better if you could show me some code or a demo. Thanks in advance!
I am afraid that this is not just an algorithm, but a complex piece of software.
The most difficult part is to model the shape of the rose. The petals are probably a meshed surface. It is not so difficult to give them a curved shape, but the hard issue is to group them in such a way that they do not intersect.
It is not quite impossible that this could be achieved by first putting them in a flat geometry where you can manage intersections, then wrapping that around an axis with a kind of polar transform. But I don't really believe in that. I rather think that they have a collision-avoiding geometric modeller.
The next steps, which are more classical, are to texture-map the pictures onto the petals and to perform the realistic rendering of the whole scene.
But there's another option, which I'll call the "poor man's rendering".
You can start from a real picture of a paper rose where the petals have an empty, thick black frame. Then, on the picture, you detect (either in some automated way or just by hand) points that correspond to a regular grid on the flattened paper.
As the petals are not wholly visible, the hidden parts must be clipped out from the mesh, possibly by using a polygonal fence.
Now you can take any picture, fit it over the undistorted mesh, clip out the hidden areas and warp to the distorted position. Then by compositing tricks, you will give it a natural shaded appearance on the rose.
Note: the process is eased by drawing a complete grid inside the frame. Anyway, you will need to somehow erase it before doing the compositing, in order to retrieve just the shading information.
I would tend to believe that the second approach was used here, as I see a few mapping anomalies along some edges, which would not arise in a fully synthetic scene.
In any case, hard work.

Collision detection algorithm - Image inside a cylinder [closed]

With a camera inside a cylinder I capture an image. I want to detect whether there is some deformation due to a collision outside. I also want to detect on which side the collision occurs. The image inside the cylinder has a lot of dots which form a grid. What is the best way to do this?
A simple way to detect the collision is to subtract the image without collision from the real image. If the result isn't "zero", something changed and probably a collision occurred. But this doesn't tell me on which side the cylinder deformed.
I already tried to do a projection of the points onto a plane, but I couldn't get it to work.
In this link you can find a question post by me with the problem of the projection: Projection of a image from inside a cylinder to a plane 2D [Matlab]
In that link you can see all the information about this problem.
An idea is to use regionprops on the image and see which part of the image deformed, but I want to do something a little more complex. I want to measure the deformation, to get an idea of how much it deformed during the impact. This is the reason why I thought about doing some projection onto the plane and measuring the distance that the points moved. Do you have any idea how to do this in a simpler way? How can I do it?
Can someone help me, please?
Here's a little code/pseudo-code to try to help. In words:
I would subtract the before and after images and take the absolute value of the difference image. Then I would apply some threshold to decide whether a difference is just due to noise rather than a real change. Next, I find the center of mass (weighted by the magnitude of the difference), which can be done easily with the Image Processing Toolbox (regionprops). The center of mass of the variation would be a good estimate of where a "collision" occurred, i.e. a deformation in the cylinder.
So that would be something along the lines of:
% Cast to double before subtracting so negative differences are not clipped
diffIm = abs(double(originalIm) - double(afterIm));
% Threshold for whether a difference is real change or just noise
threshold = someNumber;
% Zero out sub-threshold pixels (keeps the image shape, unlike logical indexing)
diffIm(diffIm <= threshold) = 0;
% Tell regionprops that the whole image is one region by passing it an array
% of ones the size of the image, and diffIm as the measurement image
props = regionprops(ones(size(diffIm)), diffIm, 'WeightedCentroid');
% WeightedCentroid is the center of mass, weighted by the grayscale image diffIm
You now have the location of the centroid of deformation in your image space, and all you would need is a map to convert that to cylinder space (if you needed that), otherwise you could just plot the centroid over the original image for a visual output of where the code expects the collision occurred.
On another note, if you have control of your experimental setup, I would expect that a checkerboard pattern would give you better results than the dots (because the dots are very spaced out, and if the collision only affects the white space between them you might not be able to detect it at all). A checkerboard means you have more edges that can be displaced, which is the brunt of what would be detected anyway. A checkerboard may also be easier to map to a plane if you were still trying to do that, because you would know that all the edges are either parallel or intersecting at right angles, and also evenly spaced.

Do elements drawn outside the clip plane affect OpenGL performance?

OpenGL question: I have something to ask about the clip space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. Given that, do the elements outside the clip space affect performance or not? Since they will not be drawn, it seems they shouldn't.
Assuming that it does affect performance, in the case of a 2D game like Super Mario I am thinking about not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only a certain amount of knowledge about your scene and will clip very late in the pipeline. It can't apply a broad-phase test. Assuming you can, you should.
Supposing you had a model with 30,000 triangles, OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the bounding sphere for the model it's possible you could see that the whole thing is completely outside of the frustum in a single test and save almost 30,000 extra bits of effort.
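A sketch of such a broad-phase test (plain C# with System.Numerics; the plane representation, with normals pointing into the frustum, is an assumption of this sketch):

using System.Numerics;

// Test a model's bounding sphere against the six frustum planes before
// submitting its triangles. Each plane satisfies Dot(Normal, p) + D >= 0
// for points inside the frustum.
struct FrustumPlane
{
    public Vector3 Normal;
    public float D;
}

static class Culling
{
    public static bool SphereOutsideFrustum(Vector3 center, float radius,
                                            FrustumPlane[] frustum)
    {
        foreach (FrustumPlane plane in frustum)
        {
            // Signed distance from the sphere center to the plane.
            float dist = Vector3.Dot(plane.Normal, center) + plane.D;
            if (dist < -radius)
                return true;   // wholly behind one plane: skip all 30,000 triangles
        }
        return false;          // potentially visible: let OpenGL clip precisely
    }
}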
In a 2d game like Mario what this usually means is using the scroll position to index into the map and to generate geometry only for potentially visible tiles and sprites that are within the visible area.
For the map, that will generally just mean figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, so it means discarding the vast majority of the geometry with zero processing.
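A sketch of that corner-plus-screen-size computation (the map layout, tile size and draw callback are assumptions):

using System;

static class TileCulling
{
    // Generate draw calls only for tiles inside the visible window.
    public static void DrawVisibleTiles(int[,] map, int scrollX, int scrollY,
                                        int screenW, int screenH, int tileSize,
                                        Action<int, int, int> drawTile)
    {
        int firstX = scrollX / tileSize;      // map (x, y) of the visible corner
        int firstY = scrollY / tileSize;
        int across = screenW / tileSize + 2;  // +2 covers partially visible tiles
        int down   = screenH / tileSize + 2;

        for (int y = firstY; y < firstY + down && y < map.GetLength(0); y++)
            for (int x = firstX; x < firstX + across && x < map.GetLength(1); x++)
                // drawTile(tileId, screenX, screenY)
                drawTile(map[y, x], x * tileSize - scrollX, y * tileSize - scrollY);
    }
}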
For the sprites, this is generally why in those sort of games you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.

Google Maps-style quad-tree of materials on a single plane in Three.js – 1x1, 2x2, 4x4 and 8x8

I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face but you aren't able to update those after the first render so that wouldn't work. I've tried looking into UV coordinates but I don't understand how it would work in this situation, nor how to actually implement that in Three.js – there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
I'm not 100% sure what you are trying to do, whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally mipmapping is automatic based on the filtering you use - however, it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level in manually, rather than having them automatically generated.
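The zoom levels in the question map directly onto mip levels: the 8x8 sheet is level 0 and each coarser sheet halves the resolution. A language-agnostic sketch of picking the level (shown here in C#; the base size and the one-texel-per-pixel target are assumptions of this sketch):

using System;

static class TileMips
{
    // baseTexels: width of the level-0 (8x8-tile) texture in texels.
    // screenPixels: how wide the plane currently appears on screen.
    public static int MipLevel(int baseTexels, float screenPixels)
    {
        // Aim for roughly one texel per screen pixel; each doubling of
        // texels-per-pixel steps one level down to a coarser texture.
        double texelsPerPixel = baseTexels / Math.Max(screenPixels, 1f);
        return (int)Math.Floor(Math.Log(Math.Max(texelsPerPixel, 1.0), 2.0));
    }
}

Clamp the result to the number of levels you actually provide (here 0..3 for the 8x8 down to 1x1 sheets).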
HTH
