What kind of game maps can I use with SpriteKit? - xcode

I am building an OS X game using Swift and the SpriteKit framework. I don't know what kind of game map I should use in the project. I started with a .sks file and it might work, but it seems too time-consuming.
Screenshot of my crude .sks map:
I need to have access to the position of the solar system nodes so I can center the map on any of them. I need to hide parts of the map. Also, I need to be able to get the angle between solar systems.
On a small scale I could make this work with the .sks file, but I am going to add somewhere around a hundred solar systems, so individually placing solar system nodes and struggling with shape nodes to form the lines between the solar systems is simply ridiculous.
This is my first game map, so I am completely oblivious to the different kinds of file types and applications I could use to make the map.

I would go with a database (NSArray? NSDictionary?) of solar systems. Each entry in the database would hold all the information for that solar system (name, coordinates, planets, moons, etc.). Warping between solar systems would simply change which entry is used to redraw the screen, and so on. (Note: NSArray and NSDictionary objects can easily be loaded from and saved to (text-editable!) XML property list files.)
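If it helps, here is a minimal Swift/SpriteKit sketch of that data-driven approach; the plist keys, the "star" texture name, and the MapScene class are illustrative assumptions, not a fixed schema.

import Foundation
import SpriteKit

// Sketch only: each solar system is a small, plist-friendly dictionary entry.
struct SolarSystem {
    let name: String
    let position: CGPoint

    init?(plistEntry: [String: Any]) {
        guard let name = plistEntry["name"] as? String,
              let x = plistEntry["x"] as? Double,
              let y = plistEntry["y"] as? Double else { return nil }
        self.name = name
        self.position = CGPoint(x: x, y: y)
    }
}

final class MapScene: SKScene {
    private var systems: [SolarSystem] = []
    private let mapNode = SKNode()   // all system sprites hang off this single node

    func loadSystems(from url: URL) {
        // The XML file mentioned above: a plist containing an array of dictionaries.
        if let data = try? Data(contentsOf: url),
           let plist = try? PropertyListSerialization.propertyList(from: data, format: nil),
           let entries = plist as? [[String: Any]] {
            systems = entries.compactMap(SolarSystem.init)
        }
        for system in systems {
            let sprite = SKSpriteNode(imageNamed: "star")   // placeholder texture name
            sprite.name = system.name
            sprite.position = system.position
            mapNode.addChild(sprite)
        }
        addChild(mapNode)
    }

    // Centering the map on a system is just a matter of moving the parent node.
    func center(on systemName: String) {
        guard let system = systems.first(where: { $0.name == systemName }) else { return }
        mapNode.position = CGPoint(x: size.width / 2 - system.position.x,
                                   y: size.height / 2 - system.position.y)
    }

    // Angle between two systems, e.g. for drawing the connecting lines.
    func angle(from a: SolarSystem, to b: SolarSystem) -> CGFloat {
        return CGFloat(atan2(Double(b.position.y - a.position.y),
                             Double(b.position.x - a.position.x)))
    }
}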

Related

How to manage 2D data in a procedural game world

I'm building a Starflight-inspired 2D space exploration game with a procedural world. The gameplay is divided into different "scenes" (to use Godot terminology) to manage the different "depths" of the game. For example, interstellar flight is a scene where the star systems are simply represented by star sprites. When the player gets in range, the view moves to the solar system scene, where the player flies their ship inside the actual solar system.
So far so good: I generate the universe (the solar systems) from a hard-coded array of coordinates and seeds. Now I also want to make the universe generation procedural, but I'm guessing that loading a whole universe (there is no real limit to the number of solar systems once it becomes procedural) into memory won't be efficient.
I'm thinking of generating the universe on the first run and saving the data to a file, but I'm wondering how to load the relevant data efficiently so that I only load a certain "radius" of data around the player's ship. That seems like the way to go if I use my generation algorithms that produce "realistic" galaxy shapes, since they involve many steps of data processing (different cluster shapes are generated, then arms, blobs, etc., and then the stars are spun around the center to simulate the galaxy's rotation) that would probably take too long to compute in real time.
I'm wondering which approach I should take to this problem. It's not really language- or engine-dependent, so references to generic articles and algorithms on the subject would suffice.
I also read a bit about quadtrees and I think I'm getting somewhere with them, but I'm not exactly sure how to use one together with a file on disk.
Thanks in advance for your help!
I have some suggestions:
Do not generate the whole universe on the first run; generate only the areas that are somehow visible. Then, instead of loading the whole universe from disk, you just generate an area whenever your spaceship (or whatever) comes within view distance of it. This makes game initialization much faster and allows an (almost) infinite universe.
If you want the universe to be modifiable, store only the 'edits' that a player makes. When you need to show a part of the universe, generate that part from your seed and then overlay the stored edits. This keeps storage much smaller.
For storage on disk, have a look at R-trees, especially the R*-tree and R+-tree; they are designed for storing spatial data in disk pages.
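A minimal Swift sketch of the first two suggestions; the chunk layout, the mixing constants, and the type names are assumptions made purely for illustration.

// Chunks are generated deterministically from (galaxySeed, chunkX, chunkY),
// so nothing has to be stored unless the player edits it.
struct Star {
    var x: Double
    var y: Double
}

struct Chunk {
    var stars: [Star]
}

// Tiny deterministic generator so the sketch is self-contained.
struct SplitMix64 {
    private var state: UInt64
    init(seed: UInt64) { state = seed }
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

final class Universe {
    let galaxySeed: UInt64
    private var edits: [String: [Star]] = [:]   // only player edits are ever persisted

    init(galaxySeed: UInt64) { self.galaxySeed = galaxySeed }

    func chunk(x: Int, y: Int) -> Chunk {
        let key = "\(x),\(y)"
        if let edited = edits[key] {            // overlay stored edits, if any
            return Chunk(stars: edited)
        }
        // Same seed + same coordinates always yield the same stars, so nothing
        // needs to be generated up front or written to disk.
        var rng = SplitMix64(seed: galaxySeed
            ^ (UInt64(bitPattern: Int64(x)) &* 0x9E3779B97F4A7C15)
            ^ (UInt64(bitPattern: Int64(y)) &* 0xC2B2AE3D27D4EB4F))
        let count = Int(rng.next() % 8)         // 0-7 stars per chunk, purely illustrative
        let stars = (0..<count).map { _ in
            Star(x: Double(rng.next() % 1000) / 1000.0,
                 y: Double(rng.next() % 1000) / 1000.0)
        }
        return Chunk(stars: stars)
    }

    func record(editedStars: [Star], forChunkX x: Int, y: Int) {
        edits["\(x),\(y)"] = editedStars        // this small dictionary is all that hits the disk
    }
}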
As TilmannZ suggested, you should not generate the whole dataset for the galaxy when you start the game, because there is likely no need for it (unless the player needs to see/interact with all the data at once, e.g. all stars). If that is the case, for example for a star map, then you may be better off generating all the data once and saving the result to an image file.
Instead, you should only generate the data as needed around the player. The most obvious way to do this is to construct a grid around the player and keep this grid centered on the player as they move around. As the player moves, you only need to update the conceptual galaxy coordinates of each cell (not the rendered coordinates). Then, for each cell, you can use its coordinates as the input to a value or gradient noise generator such as Perlin noise to determine what features should spawn at that location.
As for 'shaping' the galaxy or universe, one effective way is to sample the pixel data of a greyscale image of a galaxy that has the shape you want. You could load the image's RGB data at run time and, as you generate the stars, use your grid coordinates to look up the RGB value, which you can use as a density factor for star generation: the whiter the pixel, the higher the star density at that location, and vice versa for black pixels. This method lets you effectively draw the shape of the galaxy in a paint program.
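A rough Swift sketch of the grid idea; densityAt is a placeholder for a Perlin sample or a greyscale-image lookup, and the cell size and star counts are arbitrary assumptions.

import CoreGraphics

// Cells keep their conceptual galaxy coordinates; a density value in 0...1 decides
// how many stars to spawn in each cell. Nothing here is engine-specific.
struct StarfieldGrid {
    let cellSize: Double = 100.0
    let radiusInCells = 8                         // how far around the ship we generate

    // The set of cells that should currently exist, centered on the ship.
    func visibleCells(around ship: CGPoint) -> [(x: Int, y: Int)] {
        let cx = Int((Double(ship.x) / cellSize).rounded(.down))
        let cy = Int((Double(ship.y) / cellSize).rounded(.down))
        var cells: [(x: Int, y: Int)] = []
        for dy in -radiusInCells...radiusInCells {
            for dx in -radiusInCells...radiusInCells {
                cells.append((x: cx + dx, y: cy + dy))
            }
        }
        return cells
    }

    // densityAt stands in for a noise sample or the greyscale pixel value read
    // from a galaxy-shaped image, normalised to 0...1.
    func starCount(inCellX x: Int, y: Int, densityAt: (Int, Int) -> Double) -> Int {
        let density = max(0, min(1, densityAt(x, y)))
        return Int((density * 5.0).rounded())     // whiter pixel -> more stars
    }
}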
Maybe think about different layers of abstraction. Each layer uses the parent layer, designer input, events, and procedural generation algorithms to generate the data it needs.
The universe layer contains user-placed or randomly placed galaxy polygons and their types.
The galaxy layer can add more detail (the number and density of spiral arms) or a density map.
A cluster layer groups solar systems.
The solar system layer adds the stars and planets.
And only create the details for the elements that are currently needed.

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL 3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite matched what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc., and I'm willing to take on the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I think I do not really want to go that way. A lot of things about working with the Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at the player, and I could use a sine function to move legs and arms (or wings, in my case), and that worked, because Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, the actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever), and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Has anyone tried something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor; the render and setRotationAngles methods set the scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port of it in Java using JOGL here, but you can easily port it to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being that you do not want to load animations for your bones but want to compute/generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It sounds like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometry on the GPU in the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, combining the parent bone's transform with your local transform relative to the parent.
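A small sketch of that accumulation in Swift with simd; the Bone type and the sine-driven wing are assumptions, and the same loop ports directly to Java with a matrix library such as JOML.

import Foundation
import simd

// Each node stores only its local transform; world transforms are produced by
// walking the hierarchy and multiplying parent * local on the way down.
final class Bone {
    var localTransform = matrix_identity_float4x4
    var children: [Bone] = []

    func collectWorldTransforms(parent: simd_float4x4 = matrix_identity_float4x4,
                                into result: inout [simd_float4x4]) {
        let world = parent * localTransform
        result.append(world)                 // this is the matrix you upload per draw call
        for child in children {
            child.collectWorldTransforms(parent: world, into: &result)
        }
    }
}

// Helper: rotation about the z axis.
func rotationZ(_ angle: Double) -> simd_float4x4 {
    let c = Float(cos(angle))
    let s = Float(sin(angle))
    return simd_float4x4(rows: [
        SIMD4<Float>(c, -s, 0, 0),
        SIMD4<Float>(s,  c, 0, 0),
        SIMD4<Float>(0,  0, 1, 0),
        SIMD4<Float>(0,  0, 0, 1)
    ])
}

// "On the fly" animation: recompute local transforms from game state every frame
// (here a wing flapping on a sine curve), then re-accumulate and upload the matrices.
func animateWing(_ wing: Bone, time: Double) {
    wing.localTransform = rotationZ(sin(time * 4) * 0.5)
}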
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to (100, 100, 10) (where the player is), and then the "head" subcomponent would have an additional translation of (0, 0, 5) (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation, and scaling) and use glPushMatrix and glPopMatrix to push and pop a matrix on a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like this:
glPushMatrix();
glMultMatrixf(myTransform); // you can also just use glTranslatef, glRotatef, or anything else
// draw this node's mesh here
for (Child child : children) { child.draw(); }
glPopMatrix();
This gives you a hierarchical setup, so that objects move with their parent. Alternatively, you can keep the stack in main memory and do the multiplications yourself (use a math library). The OpenGL matrix stack has an implementation-dependent depth limit, but if you handle the stack yourself the only limit is the amount of RAM you can use. (Note that glPushMatrix/glPopMatrix belong to the legacy fixed-function pipeline; in a modern 3.3+ core profile you do exactly this yourself, computing the matrices on the CPU and passing them to your shaders as uniforms.) Once all the matrices are multiplied, rendering takes the same amount of time; that is, it doesn't matter for performance how deep a mesh sits in the hierarchy.
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
Anything more complicated (e.g. deforming the mesh based on the bone links, i.e. skinning) you would need to implement yourself as well.
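As a rough sketch of that time-based interpolation (the Pose type and the fixed duration are assumptions):

import simd

// Blend a start and an end pose over a fixed duration: linear interpolation for the
// translation, spherical interpolation (slerp) for the rotation quaternion.
struct Pose {
    var translation: SIMD3<Float>
    var rotation: simd_quatf
}

func interpolate(from start: Pose, to end: Pose, elapsed: Float, duration: Float) -> Pose {
    let t = max(0, min(1, elapsed / duration))          // clamp progress to [0, 1]
    return Pose(
        translation: simd_mix(start.translation, end.translation, SIMD3<Float>(repeating: t)),
        rotation: simd_slerp(start.rotation, end.rotation, t)
    )
}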

How game engine rotates models?

If I make a human model and import it into a game engine, does the game engine know all the point coordinates on the model and rotate each one? Models consist of millions of points, so if I rotate a model 90 degrees, does the game engine calculate the new location of millions of points and rotate them? How does it work? Thanks
This is a bit of a vague question since each game engine will work differently, but in general the game engine will not touch the model coordinates.
Models are usually loaded with model-space (or local-space) coordinates; this simply means that each vertex is defined by a location relative to the origin of that model. The origin is defined as (0, 0, 0) and is the point around which rotations take place.
The game engine loads the model and keeps it in this coordinate space. You then provide your transformations (such as translation and rotation matrices) to place the model somewhere in your "world" (i.e. the global coordinate space shared by all objects). You also describe how you want to view this world with various other transforms, such as the projection and view matrices.
The game engine then takes all of these transformations and passes them to the GPU (or to a software renderer, in some cases); it will also set up other state such as textures. These are usually set once per frame (or once per object per frame).
Finally, it passes each vertex that needs to be processed to the renderer. Each vertex is then transformed by the renderer, using all the transformations specified, into a final vertex position, first in world space and then in screen space, which it can use to render pixels based on various other information (such as textures and lighting).
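As a small illustration of that per-vertex math (the matrices are assumed to come from your engine/camera code):

import simd

// The model's vertices never change; one combined model-view-projection matrix
// carries each vertex from model space through world and view space to clip space.
func clipSpacePosition(of vertex: SIMD3<Float>,
                       model: simd_float4x4,
                       view: simd_float4x4,
                       projection: simd_float4x4) -> SIMD4<Float> {
    let mvp = projection * view * model
    return mvp * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)   // w = 1 for a position
}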
So the point is that, in most cases, the engine really has nothing to do with the rotation of the model/vertices itself. It simply manages the model and the various settings that apply to it.
Of course, the engine can rotate the model and modify its vertices, but this is usually done only during loading, for example if the model needs to be converted between different coordinate spaces.
There is a lot more going on, and this is a very basic description of what actually happens. There are many many sources that describe this process in great detail, so I won't even try to duplicate it. Hopefully this gives you enough detail to understand the basics.

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method of creating a movie of the response of the structure. What I want is a very lightweight solution where I can just bring in the block model that I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4,320 vertices). The location and the three unit vectors describing each block's axes are produced by the program, and I can write them out in whatever format I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact, I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing I would like to be able to do is distribute the result, maybe as a 3D PDF file, but I'm not sure whether that can be done with 3D PDF.
Any insight or guidance is greatly appreciated.
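If it helps, here is a tiny Swift sketch of how one time step of the data described above (a block's centre plus its three unit axis vectors) maps onto a rigid-body transform that a renderer or animation tool could consume; the field names are assumptions about the solver output.

import simd

// Build a 4x4 rigid-body transform from a block's centre and its three principal
// axes (assumed to be unit vectors): the axes become the rotation columns and the
// centre becomes the translation column.
struct BlockFrame {
    var centre: SIMD3<Float>
    var axis1: SIMD3<Float>
    var axis2: SIMD3<Float>
    var axis3: SIMD3<Float>
}

func transform(for frame: BlockFrame) -> simd_float4x4 {
    func column(_ v: SIMD3<Float>, _ w: Float) -> SIMD4<Float> {
        return SIMD4<Float>(v.x, v.y, v.z, w)
    }
    return simd_float4x4(columns: (
        column(frame.axis1, 0),
        column(frame.axis2, 0),
        column(frame.axis3, 0),
        column(frame.centre, 1)
    ))
}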
Don't let the name fool you: BluffTitler DX9, a commercial piece of software, may be what you're looking for.
Its simple interface gives it a fast learning curve, and there are many quick tutorials to either watch or dissect. Depending on how fast your GPU is, the real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that lets you manipulate objects, i.e. you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. I'm not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tools in Acrobat Pro, i.e. it is not specific to 3D.

Add animation to head and arm (mesh) which is acquired from 3D scanner Kinect

I used ReconstructMe to scan half of my body (arm and head). The result I got is a 3D mesh, which I opened in 3ds Max. What I need to do now is add animation/motion to the 3D arm and head.
I think ReconstructMe created a mesh. Do I need to convert that mesh to a 3D object before adding animation? If so, how do I do it?
Do I need to separate the head and arm to give them different animations? How do I do that?
I am a beginner in 3ds max. I am using 3ds max 2012, student edition.
Typically you would set up bones, link the mesh to the bones with a Skin or Physique modifier, and then animate the bones as needed.
You can have one mesh or separate meshes, depending on your needs.
For setting up the rigging, it would be good to follow a tutorial like this one:
http://www.digitaltutors.com/11/training.php?pid=332
I find Digital Tutors to be very concise and detailed, so anybody can grasp the concepts if they are patient enough. Depending on the motion you would like, some parts of the skeleton will require FK (forward kinematics), IK (inverse kinematics), or a mixture of FK/IK control, in areas like the elbows of the arms, etc.
Certain other parts of the character would also benefit from CAT controls. Throughout the whole rigging process, the most important thing to maintain is the hierarchy, parenting and linking the controls correctly.
Your mesh's topology also needs to be correct. When scanning from an outside source you will get either (a) a lot of triangles or (b) bad edge flow, so before the rigging process make sure to take the time to get your scan's topology into the state it should be in.
