I am currently trying to get a particle effect to spawn when the character hits a surface; depending on the surface, a different particle effect is chosen. I have included a picture of my BP for my character animation. Currently the particle effect spawns, but only in one place, and does not follow the character as it moves. So every time I step on a surface, the particle effect happens at the spawn location. I have followed and looked at many threads and videos already to get to this point; any help would be greatly appreciated. I am using UE4 4.9.2, thank you.
I believe you would achieve the desired results by plugging the actor location into the 'Location' input of the 'Spawn Emitter At Location' node.
What you are doing right now is tracing from (0, 0, 0) to the actor location, seeing if there is a collision, and then using the location of that collision to spawn the emitter.
I'm surprised that it even works quite honestly.
Take heart though! The use of the trace test for the Surface Type check is completely accurate.
What I would modify in the trace test is to plug the actor location into the Start input, and for the End input use that same location offset in the negative Z direction by roughly your actor's height (this will take some trial and error). What this does is trace from the actor down to the ground beneath it.
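To make the trace endpoints concrete, here is the idea as a small C# sketch (plain code rather than Blueprint; Z is treated as up, as in UE4, and every name here is a placeholder):

// Build a downward trace that starts at the character and ends just below its feet,
// instead of starting at the world origin (0, 0, 0).
static (float X, float Y, float Z) TraceEnd((float X, float Y, float Z) actorLocation,
                                            float capsuleHalfHeight, float margin = 10f)
{
    // Start the trace at actorLocation; end it a little below the capsule bottom.
    return (actorLocation.X, actorLocation.Y, actorLocation.Z - (capsuleHalfHeight + margin));
}

Use the hit's Surface Type to pick the particle system, and spawn it at the hit location (or simply at the actor's feet) so it follows the character.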
However, if the spawning works correctly as it is right now, perhaps the above change is not necessary.
TL;DR:
Simply use the Actor Location node to provide the location for the emitter to spawn, perhaps with an offset to make sure it spawns at the feet location, and you will be golden!
You may want to have the start of your line be somewhere other than 0,0,0. For example, have the trace start at the player and shoot it downwards to check the surface type. Message back if you need further help!
I am working on a line-follower bot that travels on a map consisting of nodes, but my confusion is how to let the bot know which node it is standing on. In other words, what approach should be taken to feed the map to the bot so that it knows every node of the map and also knows which node it is on at the present time?
I searched over the internet a lot, but nothing I found seems to be worthwhile.
Line followers usually do not have any map. Instead they usually have a pair of front sensors pointing downwards (usually IR photodiodes and LEDs) which detect the line crossing from the left and right side, and the robot just turns toward the line.
It's usually done by controlling the speed of the left and right motors with the brightness of the light detected by the right and left sensors (usually without any MCU or CPU; the analog version uses just 2 comparators and a power amplifier to drive the motors, which results in much smoother movement instead of a zig-zag-like pattern).
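If you do go the MCU route, the follow loop itself is tiny. Here is a hedged sketch (written in C# purely for readability; the sensor and motor values are placeholders for your hardware I/O, and a real bot would do this in C or purely in analog as described above):

// lineLeft / lineRight are 0..1: how strongly each sensor currently sees the line.
// Slowing the motor on the side where the line is detected turns the robot back toward it.
static void FollowStep(float lineLeft, float lineRight, float baseSpeed,
                       out float leftMotor, out float rightMotor)
{
    leftMotor  = baseSpeed * (1f - lineLeft);
    rightMotor = baseSpeed * (1f - lineRight);
}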
Better bots also have built-in algorithms to search for the line if it has gaps (that usually requires a CPU or MCU).
If you insist on having a map, then you need an interface to copy it in (ISP, for example). However, to detect where it is, the robot needs to actually follow the line while remembering its trajectory, and compare that against the map until the detected trajectory corresponds to only one location and orientation in the map. You will just end up with a more complex and less reliable robot that has more or less the same or worse properties than a simple line follower.
Another option is to use a positioning system: either there is a positioning system built into the maze or map (markers, transponders, or whatever), or you place your robot at a predetermined position and orientation and hit a reset button, or you use accelerometers and gyros to integrate the position over time. However, as mentioned, I see no benefit in any of this for a line follower. This kind of thing is better suited to unknown-maze-solver robots (they usually use sonar, or IR photodiode + LED sensors oriented forward and to the sides instead of downwards).
I have a game that requires the player to roll two dice. As this is a multiplayer game, the way I currently do this is to have 6 animations (1 for each die outcome). When the player clicks a button, it sends a request to my server code. My server code determines each die's outcome and sends the results to the client. The client then plays the corresponding animations.
This works ok, but has some issues. For instance, if the server sends back two of the same values (two 6's, for example) then the animations don't work correctly. As both animations are the same, they overlay each other, and it looks like only one die was rolled.
Is there a better way to do this? Instead of animations, using "real" dice? If that's the case, I always need to be sure to "pre-determine" the outcome of the dice roll, on the server. I also need to make sure the dice don't fall off the table or jostle any of the other player pieces on the board.
Thanks for any ideas.
The server only needs to care about the value result, not running physics calculations.
Set up 12 different rolling animations:
Six for the first die
Six for the second die
Each one should always end with the same modeled face pointing upwards (the starting position isn't relevant, only the ending position). For the latter steps you'll probably want to adjust the model's UV coordinates to use a very tall or very wide texture (or just a slice of a square one). So not the usual cross-shaped die unwrap, but rather all six faces in a line: 1-2-3-4-5-6.
The next step is picking a random animation to play. You've already got code to run a given animation; just set it to pick randomly instead of based on the die-roll value from the server:
int animNum = Random.Range(0, 6); // UnityEngine.Random.Range with int arguments excludes the max, so this yields 0-5
Finally, the fun bit: adjusting the texture so that the desired face shows when the animation is done. I'm going to assume that you arrange your faces along the top edge of your square texture and shift them with Material.SetTextureOffset():
int showFace = Random.Range(0, 6); // in your game this value should come from the server
die.renderer.material.SetTextureOffset("_MainTex", new Vector2(1f / 6f * showFace, 0f));
This will set the texture offset such that the desired face will show on top. You'll even be able to see it changing in the inspector. It works because the UVs are arranged so that each face uses the next chunk over, and because textures wrap around when the offset runs past the edge (unless the texture is set to Clamp in its import settings; you don't want that here).
Note that this will cause a new material instance to be instantiated (which is not very performant). If you want to avoid this, you'll have to use a material property block instead.
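Putting the two pieces together, a minimal sketch of the client side (assuming a Unity die object with a legacy Animation component; the clip names and the "_MainTex" face strip are placeholder assumptions, not your actual asset names):

using UnityEngine;

public class DieRoll : MonoBehaviour
{
    // One tumbling clip per face; which clip plays is purely cosmetic.
    [SerializeField] private string[] rollClips = { "Roll0", "Roll1", "Roll2", "Roll3", "Roll4", "Roll5" };

    // Call this with the face the server decided on (0-5).
    public void Roll(int showFace)
    {
        // Pick a random animation; the visual tumble is unrelated to the result.
        int animNum = Random.Range(0, rollClips.Length);
        GetComponent<Animation>().Play(rollClips[animNum]);

        // Shift the face strip so the requested face ends up on top.
        GetComponent<Renderer>().material.SetTextureOffset("_MainTex", new Vector2(showFace / 6f, 0f));
    }
}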
You could simulate the physics on the server, keep track of the positions and the orientations of the dice for the duration of the animation, and then send the data over to the client. I understand it's a lot of data for something so simple, but that's one way you can get the rolls to appear realistic and synced between all clients.
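If you go that route, the recording itself can be very simple; a hedged sketch of what the server might store per die (these type and field names are mine, not any particular networking API):

using System.Collections.Generic;
using UnityEngine;

// One sample per physics step while the dice settle.
public struct DieSample
{
    public float Time;          // seconds since the roll started
    public Vector3 Position;
    public Quaternion Rotation;
}

// The server fills this while simulating, then sends it to the clients,
// which replay it by interpolating between neighbouring samples.
public class DieRecording
{
    public List<DieSample> Samples = new List<DieSample>();
}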
If only Unity's physics were deterministic, that would be a whole lot easier.
I'm using the third-person Blueprint template and I've added custom sprint and crouch functionality to it. When crouching I trigger the crouching animations according to the character speed and set the max walk speed to a low value. I can interrupt the crouch by sprinting and vice versa, and I can stand up from the crouch by pressing the crouch key again or attempting to jump.
It all worked quite well, until I attempted to manipulate the capsule collider's half-height according to the character's speed whenever crouch, jump, or sprint is pressed. I can see the collider working as expected; however, when I try to crouch the character's feet sink into the ground, and when I try to stand up again the character falls through the floor.
Any help would be greatly appreciated...
The problem is that just shrinking the half-height is probably not what you want when your character is crouching, because your collision capsule is shrinking from the top and the bottom.
So the feet of your character start to sink into the ground, and when you grow your capsule again it expands down through your level and the character falls due to gravity.
You have two possibilities to fix this:
Use two capsules on your character, one for crouching and one for standing, and only activate the one you are using.
Move the capsule down at the same time as you are shrinking it.
The bottom of the capsule needs to stay at the same point, so move the capsule's centre lower by the amount you removed from the half-height.
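The arithmetic behind the second option, as a plain C# sketch (not UE4 API; all of these names are mine):

// The capsule bottom sits at centerZ - halfHeight, so to keep the bottom fixed,
// lower the centre by exactly the amount removed from the half-height
// (and reverse it when standing back up, after checking for headroom).
static class CrouchMath
{
    public static (float halfHeight, float centerZ) Crouch(
        float standingHalfHeight, float crouchHalfHeight, float standingCenterZ)
    {
        float delta = standingHalfHeight - crouchHalfHeight;
        return (crouchHalfHeight, standingCenterZ - delta);
    }
}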
Since there is no accumulation buffer in OpenGL ES, what should I do to achieve a trail? If I use a framebuffer to simulate it, will it cost too much to make the tail look smooth?
There is usually not enough data to create a trail from a single state. Even including the speed will make it look very poor if the ball changes its direction of movement, so some kind of information about the previous object states is necessary.
It would be possible to use a separate channel to hold the previous states, such as the stencil buffer or even the alpha channel, on which you could create a decay system. That means you would draw the ball into this channel every frame, but before drawing it you would reduce the whole channel by some value so the "older" parts slowly fade out. This separate ball drawing would need to be something like a radial gradient so you get a relatively smooth trail, but it will be far from perfect, and for relatively fast movement some additional post-processing will be mandatory unless the result is incidentally the desired effect.
A more suitable approach is to keep a trace of the object's positions on the CPU. Simply keep pushing the current position onto a list and removing entries that are too old (for instance, keep the 20 latest positions). Then use these positions to create a shape representing the ball's tail. At this point the possibilities are limitless. For instance, you may design the tail as an image and then create a rectangle-like strip from the positions, which produces an awesome tail effect if done properly.
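A hedged C# sketch of that CPU-side history (the types are generic; in an OpenGL ES app this would live wherever you update the ball):

using System.Collections.Generic;

public class TrailHistory
{
    private readonly Queue<(float x, float y)> points = new Queue<(float x, float y)>();
    private readonly int maxPoints;

    public TrailHistory(int maxPoints = 20) { this.maxPoints = maxPoints; }

    // Call once per frame with the ball's current position.
    public void Push(float x, float y)
    {
        points.Enqueue((x, y));
        while (points.Count > maxPoints)
            points.Dequeue();           // drop positions that are too old
    }

    // Oldest-to-newest positions; build the tail geometry from these.
    public IEnumerable<(float x, float y)> Points => points;
}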
I am making a game and I have come across a hard part to implement in code. My game is a tile-based platformer with lots of enemies chasing you. Basically, I want my enemies to be able to, every frame/second/2 seconds, find a realistic, shortest path to my player. I originally thought of A* as a solution, but it leads the enemies onto paths that defy gravity, which is not good. Also, multiple enemies will be using it every second to get the latest path and then walk the first few tiles of it; they will be discarding the rest of the path every second and just following the first few tiles. I know this seems like a lot, calculating a new path every second, all at the same time if there is more than one enemy, but I don't know any other way to achieve what I want.
This is a picture of what I want:
Explanation: the green figure is the player, the red one is an enemy. The grey tiles are regular, open, nothing-there tiles, and the brown tiles are ones that you can stand on. Finally, the highlighted yellow tiles represent the path that I want my enemy to be able to find in order to realistically get to the player.
So, the question is: what realistic pathfinding algorithm can I use to achieve this, while keeping it fast?
EDIT:
I updated the picture to represent the most complicated map there could be. This map represents what the player of my game actually sees; they just use WASD, can move around, and see themselves move through this 2D platformer view. There will be different types of enemies, all with different speeds and jump heights, but all will have enough jump height and speed to make the jumps in this map and maneuver through it. The maps are generated by simply reading an XML file that has the level data in it. The data is then parsed and different types of tiles are placed in the tile-holding sprite, according to what the XML says. For example, for an XML node like (type="reg" graphic="grass2" x="5" y="7"), the x and y are multiplied by the constant gridSize (like 30 or something) and the tile is placed down accordingly. The enemies get their frame-by-frame instruction from an AI class attached to them. This class is responsible for producing this path and returning the first direction to the enemy; this should only happen every second or so, so that the enemies don't follow an old, wrong path. Please let me know if you understand my concept, and whether you have some thoughts/ideas or maybe even the answer that I'm looking for.
ALSO: the physics in this game is separate from the pathfinding; it works just fine, using an AABB vs. AABB concept (the player and enemies also being AABBs).
The trick with using A* here is how you link tiles together to form available paths. Take for example the first gap the red enemy would need to cross. The 'link' to the next platform (a.k.a. the brown tile to the left) is actually a jump action, not a move action. Additionally, it's up to you to determine how the nodes connect together; just for starters, I'd add a heavy penalty when moving from a gray tile over a brown tile to a gray tile with nothing underneath (without discouraging jumps that open a shortcut).
There are two routes I see personally: either run a quick prediction of how far and where the enemy can jump and adjust how the algorithm determines node adjacency, or accept the path as-is, detect when parts of it "hang" in the air (no brown tile immediately below), and animate the enemy 'jumping' to the next part of the path. The trick is handling things when the enemy may pass through brown tiles in the event the path isn't a parabola.
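For the first route, the change is mostly in how neighbours are generated for A*. A hedged sketch (Tile, GetTile, IsOpen, IsSolid and TilesWithinJumpReach are placeholders for your own tile map, with Y increasing downwards):

using System.Collections.Generic;

// Yields neighbouring tiles and the cost of moving to them:
// cheap walk edges along solid ground, plus heavier jump edges across gaps.
IEnumerable<(Tile to, float cost)> Neighbours(Tile from)
{
    // Normal walking: left/right onto open tiles that have ground beneath them.
    foreach (int dx in new[] { -1, +1 })
    {
        Tile side = GetTile(from.X + dx, from.Y);
        if (IsOpen(side) && IsSolid(GetTile(side.X, side.Y + 1)))
            yield return (side, 1f);
    }

    // Jump links: open tiles within the enemy's jump reach that also have ground.
    foreach (Tile landing in TilesWithinJumpReach(from))
        if (IsOpen(landing) && IsSolid(GetTile(landing.X, landing.Y + 1)))
            yield return (landing, 3f);   // heavier cost so plain walking is preferred
}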
I am not versed in either solution; just something I've thought about.
You need to give us the most complicated case of map, player and enemy behaviour (including jump height and horizontal speed) that you are going to create, either automatically or manually, so we can give relevant advice. The given map is so simple that you could put it in a 2-dimensional array, store the initial player location as an element of that map, and then, to move an enemy, first test whether the lower-numbered column on the same row is occupied by brown; if not, put the enemy there, and repeat until that fails, then do the same with the higher-numbered column on the same row, and so on.
Update: from my reading, the stage generation is something you create yourself, not semi-random.
My suggestion is that the enemy creates invisible clones of itself with the same AI, and each clone starts going in a different direction (jump up, left, right, jump diagonally right/left); every time a clone succeeds, it creates a new clone, basically a genetic algorithm. From the map it seems an enemy never needs to evaluate one path over another: one way simply fails to get closer to the player's initial position and another doesn't.