Unity 2D Simple directional blend: one direction ignored - animation

I'm working on a project in Unity that involves a Paper Mario-esque movement system, which entails four poses: facing right from the side, facing right from the back, and the left-facing versions of both. This means there is no dedicated animation for facing down.
What I've done is created a Blend Tree with a 2D Simple Directional type, transitioned to from an "Idle" default state. However, although I've set things up fairly simply (1,0; 1,1; -1,0; -1,1), one of my directions (specifically (1, 0)) doesn't work. After some experimentation I've discovered that it's "clipping" the (1, 0) motion off just above y = 0: (-1, 0) works as it should, and (1, 0.1) works, but (1, 0) does not.
This is a screenshot of the issue. What exactly is going wrong?
Is this related to what I assume is an error about motions being separated by 180 degrees?
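For reference, here is a minimal sketch of how such a blend tree is typically driven from code (the parameter names "MoveX" and "MoveY" are assumptions for illustration, not taken from the question); clamping Y to non-negative values matches the "no down animation" setup:

using UnityEngine;

// Hypothetical driver for a 2D Simple Directional blend tree.
public class FacingBlendDriver : MonoBehaviour
{
    [SerializeField] private Animator animator;

    void Update()
    {
        float x = Input.GetAxisRaw("Horizontal");
        float y = Input.GetAxisRaw("Vertical");
        animator.SetFloat("MoveX", x);
        // There is no "down" animation, so never feed negative Y.
        animator.SetFloat("MoveY", Mathf.Max(0f, y));
    }
}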

Related

3D surface reconstruction from a sparse point cloud

I have a piece of equipment that performs a radial scan. It scans an object along the green lines shown in the image (three lines in the image, but the equipment can scan more).
This gives me a point cloud containing points from the upper and lower surfaces of the object, and I want to perform surface reconstruction on it.
I load the point cloud file (.txt format, with xyz coordinates for each point) into MeshLab, and it displays like this (the yellow points are the point cloud):
I then followed a blog covering basic MeshLab usage and applied Filters -> Normals, Curvatures and Orientation -> Smooths Normals on a Point Set, then Filters -> Remeshing, Simplification and Reconstruction -> Surface Reconstruction: Ball Pivoting (both with default settings).
However, the result was not what I want:
It connects points within a scanned image, but the surface should be reconstructed by connecting points between adjacent scanned images.
I can think of two possible reasons: (1) I did not choose the right settings in MeshLab; if so, which parameters will reconstruct this point cloud correctly? (2) My point cloud is too sparse and I need to interpolate it to add more points; if so, which interpolation method should I use?
EDIT
The normals in this image were computed using Neighbour num 10 and Smooth Iteration 8.
The normals in this image were computed using Neighbour num 60 and Smooth Iteration 8. Whenever Neighbour num is greater than 20, the normals look similar to the image below.
It is possible that you are having problems due to unequal directional sampling of the surface: you have a high density of points in one direction, then a big gap in the other direction. This is a problem because the 'Compute normals for point sets' filter takes the number of neighbours of each point as a parameter, so the computed normals will be biased: a point is unlikely to find neighbours outside its own scan line.
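To see why, consider the standard PCA formulation behind this kind of normal estimation (a general sketch, not necessarily MeshLab's exact implementation). Given the $k$ nearest neighbours $p_1, \dots, p_k$ of a point, with centroid $\bar p = \frac{1}{k}\sum_{i=1}^{k} p_i$, the covariance matrix is

$$C = \frac{1}{k}\sum_{i=1}^{k} (p_i - \bar p)(p_i - \bar p)^{\top},$$

and the estimated normal is the eigenvector of $C$ with the smallest eigenvalue. If all $k$ neighbours lie along a single scan line, $C$ is close to rank one and its two smallest eigenvalues are nearly equal, so the normal is poorly constrained and can swing anywhere in the plane perpendicular to that line.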
So my proposal is as follows (I will reuse parts from this tutorial):
Point Cloud Simplification and Normals Computation
Start by increasing the number of orientations in the scan. You want to fill those gaps.
If you need to reduce the number of point samples in the center of the object to reduce noise, go to Filters -> Point Set -> Point Cloud Simplification. Make sure Best Sample Heuristic is checked.
After point cloud simplification, make sure to select the Simplified point cloud in the Show Layer Dialog on the right-hand side. If it is not visible, it can be opened via View -> Show Layer Dialog. Now we need to compute normals for the point set.
Go to Filters -> Point Set -> Compute normals for point sets. Enter a Neighbour num between 10 and 100. Try 10 first and see if you get a usable mesh; later, check whether it improves with a larger neighbour count. For Smooth Iteration, start with 0 and later try values between 5 and 10; I mostly use 8.
Verify that your normals are properly computed by going to Render -> Show Normal.
Meshing / Poisson Surface Reconstruction
Next we are going to use Poisson Surface reconstruction to do meshing.
So go to Filters -> Remeshing, Simplification and Reconstruction -> Screened Poisson Surface Reconstruction. Try the default parameters first; later you can play around with the reconstruction depth, number of samples, and interpolation weight.
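As background (this is the standard Poisson formulation, independent of MeshLab's implementation): the filter solves for an indicator function $\chi$ whose gradient matches the vector field $\vec V$ built from the oriented point normals,

$$\Delta \chi = \nabla \cdot \vec V,$$

and then extracts the surface as an iso-contour of $\chi$. This is why the quality of the normals computed above directly determines the quality of the mesh.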
This will create another mesh layer called Poisson in the Show Layer Dialog, and this layer now has surfaces. Make sure to select it before performing further operations.
You may notice that it has also created some extra surfaces. To remove them, go to Filters -> Selection -> Select Faces with edges longer than .... By default the value is computed automatically; just click Apply. Then click the delete-face button (a triangle face and three vertices with a cross over it). This will remove the extra surfaces.
After this operation, some noisy faces may still be visible. To remove them, go to Filters -> Cleaning and Repairing -> Remove isolated pieces (wrt Face Num.). Use the default value and make sure Remove unreferenced vertices is checked. This will remove some of the noisy faces.
If noisy faces remain even after the above, go to Filters -> Selection -> Select non Manifold Vertices and click Apply, then click the delete-face button (a triangle and three vertices with a cross over it). This will remove the remaining extra faces.

Ceres Solver: Bundle adjustment

I have a calibrated camera. I took video with it in circular motion.
I want to find the camera extrinsic for each frame with bundle adjustment.
The matching uses cv::findHomography with RANSAC to remove the outliers, and the result is almost perfect.
After finding the matching points, I use Google's Ceres Solver to do bundle adjustment.
If there are only two frames, the result is good; I've reprojected the point cloud and it looks correct.
However, I failed to do BA on multiple frames. I've tried several strategies:
(I must keep the scale factor constant.)
Initialize all points at (0, 0, 100) in world coordinates and all cameras at (0, 0, 0) with zero rotation, fix the first view's extrinsics, then start BA.
Run BA several times: the first pass on frames 1 and 2, the second on frames 2 and 3, and so on. In each pass I initialize newly observed points at (0, 0, 100) in camera space (the reprojection function can't handle points behind the camera).
The first approach gives bad results.
The second one is better, but it does not allow me to keep the scale factor constant.
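For context, the objective being minimized and the gauge problem behind the scale issue can be written out explicitly (standard bundle adjustment notation, not code from the question). With calibrated intrinsics $K$, camera poses $(R_j, t_j)$, 3D points $X_i$, and observations $x_{ij}$:

$$\min_{\{R_j, t_j\},\, \{X_i\}} \sum_{i,j} \left\| \pi\!\left(K\,(R_j X_i + t_j)\right) - x_{ij} \right\|^2, \qquad \pi\!\left([u, v, w]^{\top}\right) = (u/w,\; v/w).$$

This objective is invariant under a 7-degree-of-freedom similarity transform of the whole scene (3 rotation, 3 translation, 1 scale). Fixing the first camera's extrinsics removes only 6 of those degrees of freedom; to keep the scale factor constant you also need to pin one more quantity, e.g. the length of the baseline between the first two cameras or the depth of one point. In Ceres, fixing the first camera's pose corresponds to calling SetParameterBlockConstant on that parameter block.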

Game Maker - Touch Event

I'm making my first game in Game Maker.
In the game I need the user to draw a figure, for example a rectangle, and the game has to recognize the figure. How can I do this?
Thanks!
Well, that is a pretty complex task. To simplify it, you could ask the user to place a succession of points, using the mouse coordinates in the click event, and automatically connect them with lines. If you store every point in the same ds_list structure, you will be able to check conditions on angles, distances, etc., and determine the shape that way (a sketch of such an angle check follows below). May I ask why you want to do this?
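As an illustration of the angle check (sketched in C# rather than GML, and purely hypothetical since no code was posted):

using System;

static class ShapeCheck
{
    // Returns true if the four stored points form (roughly) a rectangle:
    // every interior corner angle must be within tolDeg of 90 degrees.
    public static bool IsRoughlyRectangle((double X, double Y)[] p, double tolDeg = 15)
    {
        if (p.Length != 4) return false;
        for (int i = 0; i < 4; i++)
        {
            var a = p[(i + 3) % 4]; // previous corner
            var b = p[i];           // current corner
            var c = p[(i + 1) % 4]; // next corner
            double ux = a.X - b.X, uy = a.Y - b.Y;
            double vx = c.X - b.X, vy = c.Y - b.Y;
            double cos = (ux * vx + uy * vy) /
                         (Math.Sqrt(ux * ux + uy * uy) * Math.Sqrt(vx * vx + vy * vy));
            double angle = Math.Acos(cos) * 180.0 / Math.PI;
            if (Math.Abs(angle - 90.0) > tolDeg) return false;
        }
        return true;
    }
}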
The way I would solve this problem is pretty simple. I would create a variable for each point; when the player clicks on a point, it is set to true, and the game waits for the player to click the next point. When the player clicks the next point, I would draw a sprite as a line, using image_angle to line both points up, and then wait for the next click.
Next I would have a step event checking whether all points have been clicked, and once they have, either draw a triangle at those coordinates or place a sprite at the correct coordinates to fill in the triangle.
Another way would be to decide in advance where those points are and check mouse_x and mouse_y against them to see whether a point was clicked, then proceed as above. There are many ways to solve this problem; just keep trying and you will find one that works for your skill level and for what you want to do.
You can use the draw_rectangle(x1, y1, x2, y2, outline) function for drawing. As for recognizing the figure, use point_in_rectangle(px, py, x1, y1, x2, y2).
I'm just playing around with ideas since I can't code right now, but I think this could work.
We suppose that the user must keep their finger on the touchscreen; otherwise an event is triggered and all data from the touch gesture is cleared.
I assume that in future you could need to recognize other simple geometrical figures too.
1: Define a fixed amount of movement in pixels, dependent on the viewport dimensions (I'll call this constant MOV from now on). For every MOV of movement, store the coordinates of the finger's current position in a buffer (pointsBuf).
2: Every time a point is stored, calculate the running average of the X and Y coordinates over all stored points (keep the previous average and a counter to reduce time complexity). Comparing the averages tells you the direction and orientation of the current line. Store these in a 2D buffer (dirVerBuf).
3: If a new point deviates drastically from the running averages of the X and Y coordinates, you can assume the finger has changed direction. This is where the tuning of MOV becomes critical, because now an angle must be computed. Since only a very unsteady hand would draw severely distorted lines, it is fairly safe to take as the vertex the second point that left the running average unchanged, and to use the last point together with the second point before the vertex to compute the angle. Test the user's error margin to decide whether the angle is close to 90, 60, 45, etc. degrees. Store it in a new buffer (angBuf).
4: Delete the values from pointsBuf and repeat steps 2 and 3 until the user's finger leaves the screen.
5: If four of the angles are about 90 degrees, the four orientations (and two of the directions) differ, the last point ends up somewhat near the first stored corner (how near depends on MOV), and the sides come in two roughly equal pairs of different lengths, then you can connect the four corners, snapping to the best values near the stored coordinates, to form a perfect rectangular shape.
It's late and I could have forgotten something, but with this method I think you could even recognize a triangle, a circle, etc., with just some editing and comparison.
EDIT: If you are really lazy, you could instead use a strategy that is much heavier in space complexity. Just create a grid of rectangles (or even triangles) of a fixed size and check which cells the finger has touched, then connect their centers once you've figured out the shape, obviously ignoring the cells touched by mistake. This would make it extremely easy to detect even circles using the native functions. Gg.
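A minimal sketch of that grid idea (again in C# for illustration; the cell size and names are made up):

using System;
using System.Collections.Generic;

static class TouchGrid
{
    const int CellSize = 32; // pixels per grid cell; tune to the viewport

    // Cells the finger has passed through during the current gesture.
    public static readonly HashSet<(int X, int Y)> Touched = new();

    public static void OnTouch(float fingerX, float fingerY)
    {
        Touched.Add(((int)(fingerX / CellSize), (int)(fingerY / CellSize)));
    }
}

Once the finger is lifted, the set of touched cells (minus stray outliers) approximates the drawn outline, and their centers can be connected or matched against shape templates.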

having trouble setting up simple rectangular collisions in chipmunk

Recently I've been trying to create something I've always wanted, but never had the skill and time to do: a computer game. To be more precise, a homage / clone of one of many of my favourite games. To start with something simple, I've decided to create a classic 2D platformer based on the Castlevania series.
Being a Ruby programmer I've decided to use Gosu. Then I decided I don't want to reinvent the wheel so I'm going to use Chipmunk.
After a few days, I've ended up having inexplicable collision detection problems. I've added boundary-box drawing functions just to see what the hell is going on.
As you can see, Belmont collides with blocks of walls he's not remotely close to touching. Since the demo game included with the gosu gem works fine, there must be something I'm doing wrong. I probably don't really understand how a polygon Shape is defined and added to the space; I'm pretty sure it's not really where I draw it.
There's a public repo with the game, so you can see how the walls (Brush < Entity) and the player (Player < Entity) are defined, and that they do have simple, rectangular polygon shapes. The walls are not added to the space (they are rogue bodies); only the player is. I've tried debugging the game to see where the body positions are, but everything looked fine.
https://github.com/ellmo/castellvania
The player falls down slowly, but you can control him with the up / left / right arrows. The tilde key (~) shows the bounding boxes, and the collision boxes are supposed to be always visible.
I need some help trying to understand what I am doing wrong.
I probably don't really udnerstand how a polygon Shape is defined and added to the space. I'm pretty sure it's not really where I draw it.
That's it. Shape coordinates are added to the body position, not subtracted from it.
In your Entity.boundaries replace the line
verts << CP::Vec2.new(@shape.body.p.x - @shape[vert].x, @shape.body.p.y - @shape[vert].y)
with
verts << CP::Vec2.new(@shape.body.p.x + @shape[vert].x, @shape.body.p.y + @shape[vert].y)
and you will get the correct picture. (The drawing will still be broken, but the bounding boxes will be correct.)
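More generally (a sketch of the usual local-to-world rule, written in C# purely for illustration; Chipmunk handles this internally, and for the unrotated bodies above it reduces to the simple addition): the world position of a poly vertex is the body position plus the local vertex rotated by the body angle.

using System;

static class ChipmunkMath
{
    // World position of a local poly vertex: body position + rotated vertex.
    public static (double X, double Y) VertexToWorld(
        double bodyX, double bodyY, double bodyAngle,
        double vertX, double vertY)
    {
        double cos = Math.Cos(bodyAngle), sin = Math.Sin(bodyAngle);
        return (bodyX + vertX * cos - vertY * sin,
                bodyY + vertX * sin + vertY * cos);
    }
}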

MonoGame: some VertexPositionColor vertices disappear while drawing user primitives (DrawUserPrimitives)

I am a complete beginner in XNA/MonoGame development. I started my own project using MonoGame with XAML for WinRT, hoping it will reach the Windows App Store one day. I encountered a serious issue; see the video. I used wireframe rendering so the missing vertices can be easily seen. Only the explosions created by user input are flawless. All of them use the same logic.
I am making a game with ball collisions, pretty simple indeed. Under certain conditions these balls explode and start to expand following some rules. When an explosion of the same type is initiated by user input, the explosions that follow do not render well at all. Some of the vertices of the primitives disappear, and the shapes come out strange, not circles at all. I tried disabling CullMode (setting it to None), the depth buffer (setting it to false), and StencilEnable (setting it to false). None of this helped. All of these primitives are in the same z-plane (z = 0). Does anyone have any suggestions? Your help is highly appreciated, thank you a lot. Below you can find the code which gives more detail on the situation.
During the update I go through all the objects consecutively, do the necessary updates, and in the same order I call for each of them:
this.graphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, circleVertices, 0, primitiveCount);
This is the BasicEffect that I apply:
basicEffect.Projection = Matrix.CreateOrthographicOffCenter(
    0, graphics.GraphicsDevice.Viewport.Width,  // left, right
    graphics.GraphicsDevice.Viewport.Height, 0, // bottom, top
    0, 1);                                      // near, far plane
This will be hard to answer without seeing more code. It appears from the video that there must be an issue when you are generating the circleVertices for an explosion that starts from user input. Would it be possible to post your code somewhere?
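For reference, here is one way such circleVertices could be built as a triangle strip (a hypothetical sketch, since the actual generation code isn't shown; an expanding ring with an inner and outer radius is assumed):

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class ExplosionRing
{
    // Builds a closed ring as a TriangleStrip: two vertices per segment
    // (inner and outer radius), with the first pair repeated at the end
    // to close the loop. primitiveCount is then segments * 2.
    public static VertexPositionColor[] Build(
        Vector2 center, float innerR, float outerR, int segments, Color color)
    {
        var verts = new VertexPositionColor[(segments + 1) * 2];
        for (int i = 0; i <= segments; i++)
        {
            float a = MathHelper.TwoPi * i / segments;
            var dir = new Vector2((float)Math.Cos(a), (float)Math.Sin(a));
            verts[2 * i]     = new VertexPositionColor(new Vector3(center + dir * innerR, 0f), color);
            verts[2 * i + 1] = new VertexPositionColor(new Vector3(center + dir * outerR, 0f), color);
        }
        return verts;
    }
}

If a strip like this misrenders only for chained explosions, the usual suspects are a primitiveCount that doesn't match the vertex array, or stale vertices left over from a previous frame.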
