The popular app Draw Something records your entire drawing trace and sends it to your friend so that it conveys the guess-word for them to guess.
How does this app record the drawing trace? In what kind of data structure?
If I were to develop a similar whiteboard app for real-time communication, how should I design the data model so that two or more participants can interact more efficiently? (That is, how should the drawing trace be recorded and sent to the other participants?)
How they do it, I don't know. How I'd do it is with an array of points, where each point is either a break (which would include a colour value for the next line) or an (X, Y, T) tuple (coordinates and timing). This is just for freehand lines; if you need anything else, it would obviously need to be extended.
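For a real-time whiteboard, a minimal sketch of that data model might look like the following TypeScript (the names, and the idea of broadcasting small stroke events instead of the whole drawing, are my own assumptions, not how Draw Something actually works):

// A freehand stroke: pen settings plus timed points. Starting a new stroke is the "break".
interface StrokePoint { x: number; y: number; t: number }   // (X, Y, T): coordinates and timing

interface Stroke {
  id: string;
  color: string;        // colour for this line
  thickness: number;
  points: StrokePoint[];
}

// The whole drawing is an ordered list of strokes; every participant keeps a copy.
const drawing: Stroke[] = [];

// For interactivity, broadcast small deltas as the user draws rather than the finished stroke:
type WhiteboardEvent =
  | { type: "strokeStart"; strokeId: string; color: string; thickness: number }
  | { type: "strokePoints"; strokeId: string; points: StrokePoint[] }   // a batch of new points
  | { type: "strokeEnd"; strokeId: string };

Each participant applies events in the order they arrive, so everyone replays the same trace; a late joiner can be sent the full drawing array once and then the event stream from that point on.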
I have a game that requires the player to roll two dice. As this is a multiplayer game, the way I currently do this is to have 6 animations (one for each die outcome). When the player clicks a button, it sends a request to my server code. My server code determines the dice outcome and sends the results to the client. The client then plays the corresponding animations.
This works OK, but has some issues. For instance, if the server sends back two of the same value (two 6's, for example), then the animations don't work correctly: as both animations are the same, they overlay each other, and it looks like only one die was rolled.
Is there a better way to do this? Instead of animations, using "real" dice? If that's the case, I always need to be sure to "pre-determine" the outcome of the dice roll, on the server. I also need to make sure the dice don't fall off the table or jostle any of the other player pieces on the board.
Thanks for any ideas.
The server only needs to care about the value result, not running physics calculations.
Set up 12 different rolling animations:
Six for the first die
Six for the second die
Each one should always end with the same modeled face pointing upwards (the starting position isn't relevant, only the ending position). For the later steps you'll probably want to adjust the model's UV coordinates to use a very tall or very wide texture (or just a slice of a square one): so not the usual unwrapped-cube layout, but all six faces in a single row, 1-2-3-4-5-6.
The next step is picking a random animation to play. You've already got code to run a given animation, just set it to pick randomly instead of based on the die-roll-value from the server:
int animNum = Random.Range(0, 6); // Unity's int overload returns 0..5 inclusive
Finally, the fun bit: adjusting the texture so that the desired face shows when the animation is done. I'm going to assume that you arrange your faces along the top edge of your square texture; then all you need is Material.SetTextureOffset().
int showFace = Random.Range(0, 6); // in the real game, this value should come from the server
die.GetComponent<Renderer>().material.SetTextureOffset("_MainTex", new Vector2(showFace / 6f, 0f));
This will set the texture offset such that the desired face shows on top; you'll even be able to see it changing in the inspector. It works because the UVs are arranged so that each face uses the next chunk over, and because textures wrap around when they reach the edge (unless the texture is set to Clamp in its import settings, which you don't want here).
Note that this will cause a new material instance to be instantiated (which is not very performant). If you want to avoid this, you'll have to use a material property block instead.
You could simulate the physics on the server, keep track of the positions and the orientations of the dice for the duration of the animation, and then send the data over to the client. I understand it's a lot of data for something so simple, but that's one way you can get the rolls to appear realistic and synced between all clients.
If only Unity's physics were deterministic, that would be a whole lot easier.
Since there is no accumulation buffer in OpenGL ES, what should I do to achieve a trail? If I use a framebuffer to simulate it, will it cost too much to make the trail look smooth?
There is usually not enough data to create a trail from a single state. Even including the speed will make it look very poor if the ball changes its direction of movement, so some information about the object's previous states is necessary.
It would be possible to use a separate channel to hold the previous states, such as the stencil buffer or even the alpha channel, and build a decay system on it. That means you would draw the ball onto this channel every frame, but before drawing it you would reduce the whole channel by some value so the "older" parts slowly fade out. This separate ball drawing would need to be something like a radial gradient so you get a relatively smooth trail, but it will be far from perfect, and for relatively fast movement some additional post-processing will be mandatory unless the result happens to be exactly the effect you want.
A more suitable approach is to keep a trace of the object's positions on the CPU. Simply keep pushing the current position onto a list and removing entries that are too old (for instance, keep the 20 latest positions). Then use these positions to create a shape representing the ball's tail. At this point the possibilities are limitless; for instance, you can design the tail as an image and build a rectangle-like strip from the positions, which produces a great tail effect if done properly.
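A sketch of that CPU-side position trace in TypeScript (the constants and helper names are illustrative; the actual tail geometry would still be built and drawn by your OpenGL ES code):

const MAX_TRAIL = 20;                  // keep the 20 latest positions, as suggested above

interface Vec2 { x: number; y: number }

const trail: Vec2[] = [];

// Call once per frame with the ball's current position.
function pushPosition(p: Vec2): void {
  trail.push({ x: p.x, y: p.y });
  if (trail.length > MAX_TRAIL) trail.shift();   // drop the oldest sample
}

// Per-point width and alpha so the tail tapers and fades towards the oldest sample.
function tailSamples(baseWidth: number): { pos: Vec2; width: number; alpha: number }[] {
  return trail.map((pos, i) => {
    const k = (i + 1) / trail.length;            // ~0 at the oldest point, 1 at the ball
    return { pos, width: baseWidth * k, alpha: k };
  });
}

Those samples can then be expanded into a triangle strip along the path, which is the rectangle-like shape mentioned above.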
Hi!
I am working with objects that have huge vertex counts. I can display lots of models because I have split them into smaller parts (under 65K vertices each), and I am using three.js cameras. I want to increase performance by using a priority queue: while the user is moving the camera, show only the top 10 models, and once the movement stops, show the rest. That part is not that hard, but I don't want to render models when they are hidden behind another object. Maybe I could cast some rays from the camera's point of view (checking for bounding-box hits) and build the priority queue from the hit list.
What do you think?
Also, how can I detect on the fly whether I can load the next model or not?
Option A: Occlusion culling; you will need to find a library for this.
Option B: Use an AABB/plane test between the camera's frustum planes and the object's bounding box; this will tell you whether an object is in the camera's field of view (not necessarily whether it is visible or hidden behind another object, as an exact test for that is impossible; this kind of culling is most likely already done to a degree by WebGL).
Implementation:
Google it; three.js probably supports this out of the box (see THREE.Frustum).
Option C: Use a maximum object render limit, prioritized based on distance from the camera and the size of the object. E.g. calculate which objects are visible (Option B), then prioritize the closest and biggest ones and disable the rest (see the sketch at the end of this answer).
pseudo-code:
if (object is in frustum) {
    var priority = (bounding.max - bounding.min) / distanceToCamera
}
Make sure your shaders are only doing one pass, as a second pass will roughly double the calculation time (depending on the situation).
Option D: Raycast to the eight corners of the bounding box; if all the rays miss, don't render the object. This is pretty accurate but by no means perfect.
Option A will be the best for sure. Option C is great if you don't mind that small, far-away objects don't get rendered. Option D works well with objects that have a lot of verts; you may want to raycast more points of the object depending on the situation. Option B probably won't be useful on its own for your scenario, but it's a part of C and of other optimization methods. Overall, there has never been an extremely reliable and optimal way to tell whether something is behind something else.
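As a rough sketch of Options B and C together with three.js in TypeScript (assuming a recent three.js where Frustum.setFromProjectionMatrix exists, that each mesh has a precomputed geometry.boundingBox, and that names such as maxVisible and updateVisibility are mine, not part of any library):

import * as THREE from "three";

const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();
const maxVisible = 10;                           // render budget while the camera is moving

function updateVisibility(camera: THREE.PerspectiveCamera, meshes: THREE.Mesh[]): void {
  camera.updateMatrixWorld();
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(projScreenMatrix);

  const worldBox = new THREE.Box3();
  const size = new THREE.Vector3();
  const center = new THREE.Vector3();
  const scored: { mesh: THREE.Mesh; priority: number }[] = [];

  for (const mesh of meshes) {
    worldBox.copy(mesh.geometry.boundingBox!).applyMatrix4(mesh.matrixWorld);
    if (!frustum.intersectsBox(worldBox)) {      // Option B: cheap frustum rejection
      mesh.visible = false;
      continue;
    }
    worldBox.getSize(size);
    const distance = worldBox.getCenter(center).distanceTo(camera.position);
    scored.push({ mesh, priority: size.length() / Math.max(distance, 1e-6) }); // Option C: big and close first
  }

  scored.sort((a, b) => b.priority - a.priority);
  scored.forEach(({ mesh }, i) => { mesh.visible = i < maxVisible; });
}

Call updateVisibility whenever the camera moves; once movement stops, you can raise maxVisible (or skip the cap entirely) to bring the remaining models back in.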
I'm not going to turn the images into files yet (and I don't know if I'm ever going to do this).
The drawings are made by a custom-made drawing program (where the user draws). When I resize the application, the drawing disappears, because it's not being redrawn. And that's because the image is not being memorized in any way. I need an algorithm for memorizing the drawing, so it can be redrawn after the whole application refreshes.
One algorithm I thought of is to memorize the location and color of every pixel, but I don't think this is a good idea.
I'm currently using Java, but I need a language-agnostic algorithm. Still, I would accept a solution explained with code.
What algorithm should I use for memorizing the whole drawing?
You could memorize the user's actions: for example, if they draw a line, memorize the starting and ending points. If they draw something freehand, then you memorize the individual pixels (you have to!).
This allows you to resize, rotate, etc. any drawing just by manipulating the coordinates.
The "drawing" becomes then a list of actions:
{
LINE_DRAWING,
x1, y1, x2, y2,
pen, color, thickness...
}
{
...
}
To redraw, just scan the same list and call the appropriate subroutines again. Depending on the language, you can represent the list as an array, a linked list, or a doubly linked list, and implement things such as element deletion.
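As a language-agnostic sketch, here is that action list in TypeScript (the type and function names are mine; the drawing calls use the standard HTML canvas 2D context purely as an example):

interface LineAction {
  kind: "line";
  x1: number; y1: number; x2: number; y2: number;
  color: string;
  thickness: number;
}

interface StrokeAction {                          // freehand: store the individual points
  kind: "stroke";
  points: { x: number; y: number }[];
  color: string;
  thickness: number;
}

type DrawAction = LineAction | StrokeAction;

const actions: DrawAction[] = [];                 // the whole drawing is just this list

// Replaying the list reproduces the drawing after a resize or refresh.
function replay(ctx: CanvasRenderingContext2D): void {
  for (const a of actions) {
    ctx.strokeStyle = a.color;
    ctx.lineWidth = a.thickness;
    ctx.beginPath();
    if (a.kind === "line") {
      ctx.moveTo(a.x1, a.y1);
      ctx.lineTo(a.x2, a.y2);
    } else {
      a.points.forEach((p, i) => i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y));
    }
    ctx.stroke();
  }
}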
On file, I would suggest some sort of tagged format:
two bytes - element type
four bytes - this element's length
variable-size data depending on element type
Again, to "load" the drawing you just scan the file sequentially and populate the memory structures.
You can google 'vector drawing' for more details and hints.
There are lots of options. One is, as you say, to remember the image pixels. You can also simply remember all the user actions that generated the drawing and replay them when you need to reconstruct the drawing.
Another approach, depending on the tools that the drawing program offers the user, would be to build a more compact representation of the image. For instance, if the drawing program only offered the possibility of drawing lines, you could remember the set of line endpoints (and colors, line thicknesses, and whatever other line data was relevant). This generalizes in an obvious way to a larger set of geometric primitives.
For free-hand drawing, you can remember the stroke paths along with whatever stroke settings were set at the time. Depending on the complexity of the stroke tools your program offers, this may end up being more data than simply remembering the drawing pixels. However, it does allow, for instance, scaling the drawing if the canvas expands.
I am working on a drawing program and am trying to figure out the best way to imitate the 'magnet' behavior found in applications such as Omnigraffle. The idea is that as a line is drawn from one object towards another (visual objects on screen, not OOP objects), a 'magnet' or 'node' on the second object, or the second object itself, highlights as the line approaches it.
I was looking to keep all of the on-screen objects in an array and using notifications to send that array the position of the end of the line as it moves. This way, I could have each object do its own comparison and say "Hey, I have a node near the line, I think I'll light it up".
I was also wondering if it would be the same approach if I wanted to have two objects, say boxes, that would snap together, side by side, when they came into proximity with each other. This way, it would be possible to line up the boxes on the same X or Y coordinate.
I'm not concerned about the highlighting or having the line snap to the position of a node, I'm just wondering about the best way to implement the 'edge proximity detection' part of this problem.
If you are using CGRect types, I'd suggest you use the two functions CGRectInset() and CGRectIntersectsRect().
Use CGRectInset() to expand one or both rects, and then use CGRectIntersectsRect() to see if you have a match. You could also, at the same time, use CGRectIntersectsRect() on the original rects to check that they are only close to each other and not actually overlapping.
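The same inset-then-intersect idea, sketched language-agnostically in TypeScript (the Rect type and helper names are mine; with CGRect you would pass negative insets to CGRectInset to grow the rect):

interface Rect { x: number; y: number; width: number; height: number }

// Grow a rect outward by `margin` on every side.
function grow(r: Rect, margin: number): Rect {
  return { x: r.x - margin, y: r.y - margin, width: r.width + 2 * margin, height: r.height + 2 * margin };
}

function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width && b.x < a.x + a.width &&
         a.y < b.y + b.height && b.y < a.y + a.height;
}

// "Near but not overlapping": the grown rects intersect while the originals do not.
function isNear(a: Rect, b: Rect, margin: number): boolean {
  return intersects(grow(a, margin), b) && !intersects(a, b);
}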