I would like to implement the effect of a short line segment revolving around a square (I don't know the exact name of the effect; it's like the indicator in war3 for an auto-cast spell, or in fishingjoy for the equipped weapon). Any advice/hint is welcome. Thanks!
You have several options. The first, and the easiest, is to create a frame animation of the desired effect and run it on an empty CCSprite instance placed over your weapon icon. I think 5 or 6 frames of animation will be enough. The big plus: you can create any effect you want for these frames in Photoshop, and it is easy to add existing frames to your project as an animation. The minus: it will take extra space in your texture cache and sprite frame cache, and it will increase the size of your app. This is a good solution if your square is quite small, because if your square has a large contentSize, it will waste a lot of memory. For example, 6 frames of such an animation at full screen size (640x960 pixels on a Retina screen) will take about 16 MB of additional memory.
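The answer is about cocos2d, but for illustration, here is the same idea, a looping frame animation on an overlay sprite placed over an icon, sketched in Swift with SpriteKit (the atlas name, frame names and frame count are made up; in cocos2d the equivalents would be CCAnimation and CCAnimate):

```swift
import SpriteKit

// Placeholder icon node; in a real scene this would be your existing weapon icon.
let weaponIcon = SKSpriteNode(imageNamed: "weapon_icon")

// Hypothetical atlas "autocast" containing frames autocast_0 ... autocast_5.
let atlas = SKTextureAtlas(named: "autocast")
let frames = (0..<6).map { atlas.textureNamed("autocast_\($0)") }

// An overlay sprite that only shows the effect, centered on the weapon icon.
let overlay = SKSpriteNode(texture: frames[0])
overlay.position = .zero        // centered on the icon
overlay.zPosition = 1           // draw on top of the icon
weaponIcon.addChild(overlay)

// Loop through the frames forever at ~10 fps.
overlay.run(.repeatForever(.animate(with: frames, timePerFrame: 0.1)))
```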
The second option is, IMHO, much more interesting, and it can help to save memory: implement the animation with OpenGL directly. But it also seems to be much more complicated.
Since most devices today have a CPU and a GPU, the usual advice for programmers wishing to do animated vector graphics (like making a circle grow or move around) is to define the graphical item once and then use linear transformations to animate it. This way, (on most platforms and frameworks) the GPU can do the animation work, because rasterization with linear transformations can be done very fast on a GPU. If the programmer chooses to draw each frame on the CPU, it would most likely be much slower and consume more energy.
I understand that the Watch is not a device you want to overload with complex animations, but at least the Home Screen certainly seems to use exactly this kind of animated linear transformation.
Also, most Watch Faces are animated in some way, e.g. the moving second and minute hands.
However, the WatchKit controls do not have a .transform property, and I could not find much in the documentation - the words "animation" and "graphics" are not even mentioned there.
So, the only way I currently see is to draw the vector graphics into a CGContext and then put the result as a UIImage into an image control, as described here. But this does not really seem energy-efficient. It is exactly the kind of "CPU pixel drawing" that we usually want to avoid if possible. I think it is not energy-efficient because if I draw into a 100x100 pixel image buffer, the image then has to be scaled to the actual Watch screen size, so we have two actual drawing passes per frame.
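For reference, the kind of per-frame CPU drawing I mean looks roughly like this (just a sketch; `ring` stands in for a WKInterfaceImage outlet and the 100x100 size is arbitrary):

```swift
import WatchKit
import UIKit

// Redraws one frame of a growing arc on the CPU and pushes it to a WKInterfaceImage.
func render(progress: CGFloat, into ring: WKInterfaceImage) {
    let size = CGSize(width: 100, height: 100)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }

    if let ctx = UIGraphicsGetCurrentContext() {
        ctx.setStrokeColor(UIColor.green.cgColor)
        ctx.setLineWidth(4)
        ctx.addArc(center: CGPoint(x: 50, y: 50), radius: 40,
                   startAngle: 0, endAngle: progress * 2 * .pi, clockwise: false)
        ctx.strokePath()
    }
    ring.setImage(UIGraphicsGetImageFromCurrentImageContext())
}
```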
Is there an officially recommended, energy-efficient way to do animations on the Apple Watch?
Or, in other words, can we animate things like they are animated on the Home Screen or Watch Faces?
It seems SpriteKit is the answer. You can create an SKScene and node objects and then display them in a WKInterfaceSKScene.
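A minimal sketch (assuming a WKInterfaceSKScene has been added to the storyboard and connected as an outlet; the scene size and the pulsing circle are just examples):

```swift
import WatchKit
import SpriteKit

class AnimationController: WKInterfaceController {
    // Assumed: a WKInterfaceSKScene placed in the storyboard and connected here.
    @IBOutlet weak var sceneInterface: WKInterfaceSKScene!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)

        let scene = SKScene(size: CGSize(width: 100, height: 100))
        scene.scaleMode = .aspectFit

        // A vector shape animated by the node graph, not by per-frame CPU drawing.
        let circle = SKShapeNode(circleOfRadius: 10)
        circle.position = CGPoint(x: 50, y: 50)
        scene.addChild(circle)
        circle.run(.repeatForever(.sequence([
            .scale(to: 2.0, duration: 0.5),
            .scale(to: 1.0, duration: 0.5)
        ])))

        sceneInterface.presentScene(scene)
    }
}
```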
Just a straightforward question. I'm trying to make the best possible choice here, and there is too much information for a "semi-beginner" like me.
Well, at this point I'm trying screen size values for my layout (activity_main.xml (normal, large, small)) and different densities (xhdpi, xxhdpi, mhdpi) and, if I may say so myself, it is a mess. Do I have to create every possible option to support all screen sizes and densities? Or am I doing something really wrong here? What is the best approach for this?
My layouts are now named like activity_main (normal_land_xxhdpi), and I have serious doubts about that.
I'm using the latest version of Android Studio, of course. My app is a single activity with buttons, TextViews and other widgets. It does not have any fragments or intents whatsoever, and for that reason I think this should be an easy task, but it isn't for me.
Hope you guys can help. I don't think I need to put any code here, but if needed, I can add it.
If you want to make a responsive UI for every device, you need to learn about some things first:
-Difference between px and dp:
https://developer.android.com/training/multiscreen/screendensities
Here you can understand that dp is a standard unit which Android uses to calculate how many pixels something, let's say a line, should occupy in order to keep proportions consistent between screens of different sizes and densities.
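For example, a 48dp button is 48 px on an mdpi screen (160 dpi, the baseline), 96 px on xhdpi (320 dpi) and 144 px on xxhdpi (480 dpi), because px = dp * (dpi / 160); physically it occupies the same size on all three screens.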
-Resolution, Density and Ratio:
The resolution is how many pixels a screen has across its height and width. Those pixels can be physically smaller or bigger, so for instance, if screen A is 10 x 10 px and its pixels are two times smaller than those of screen B, which is also 10 x 10 px, then A is physically two times smaller than B even though both are 10 x 10 px.
For that reason the concept of density exists, which is how many pixels your screen has for every inch, so you can measure the quality of a screen, where more pixels per inch (ppi) is better.
Ratio tells you how many pixels there are of height versus width. For example, the ratio of a screen of 1000 x 2000 px is 1:2, and a Full HD screen of 1920 x 1080 is 16:9 (16 pixels of height for every 9 pixels of width). A 1:1 ratio is a square screen.
-Standard device resolutions
You can find the most common measurements on...
https://material.io/resources/devices/
When making a UI, you use the dp measurements. You will notice that even when the pixel resolutions of two screens are different, the dp can be the same because they have different densities.
Now, the right way is to go with ConstraintLayout, using dp measurements to place your views on screen; with correct constraints the content will adapt to other screen sizes.
Anyway, you will need to make additional XML files for some cases:
-Different orientation
-Different ratio
-Different DP resolution (not px)
For every activity, you need to provide a portrait and a landscape design. If another device has a different ratio, you may need to adjust heights or widths because the proportions of the screens are not the same. Finally, even if the ratio is the same, the dp resolution could be different: maybe you designed an activity for 640x360dp and another device has 853x480dp, which means you will have more vertical space.
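For example (using the standard resource qualifiers): res/layout/activity_main.xml for the default portrait design, res/layout-land/activity_main.xml for landscape, and res/layout-sw600dp/activity_main.xml if you want a separate design for devices whose smallest width is at least 600dp (typically tablets).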
You can read more here:
https://developer.android.com/training/multiscreen/screensizes
And learn how to use constraintLayout correctly:
https://developer.android.com/training/constraint-layout?hl=es-419
Note:
It may seem like a lot of work for every activity, but you make the first design once, and then you just copy it to other XML files with the appropriate qualifiers and change the dp values to adjust the views as you want (without starting from scratch), which is much faster.
I was shocked to find that a game I had just created takes up a whopping 330 megabytes. According to the Editor Log, my textures are to blame:
From the list I started at the top with the Chieftain Walk animation spritesheet. The file was huge, so I opened it in Photoshop and decreased the image resolution dramatically.
However, even after saving in Photoshop, the Editor Log claims that the texture takes up the same amount of memory. What am I doing wrong, and also, when does the Editor Log update? Is it upon building the game? Many thanks.
First of all, you don't need to reduce the resolution of the actual PNG file. When Unity builds the player, it stores the imported, uncompressed texture in its Data folder next to the executable. The size of that texture is whatever your importer settings say; by default the maximum is 2048x2048, if I remember correctly. If you change the importer settings for your texture, the PNG file (which lives in the editor) remains the same, but the texture object (which is used in the actual standalone build) becomes much smaller.
Also, is there any particular reason why you didn't make it square, like 512x512? Always make it square and a power of 2. If not, Unity will be unable to apply certain optimizations to your sprites.
EDIT:
Look at the texture import settings: set Max Size lower and your game will take less memory (both on disk and in RAM/GPU memory when the game is running). You can also add a compression level; it will take even less space, but will take longer to load in game. Once loaded, it takes the same amount of RAM/GPU memory as uncompressed. A win on app size, a loss on load performance. (Test it out and choose what is better for you.)
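To get a feel for the numbers (rough arithmetic, assuming an uncompressed 32-bit RGBA texture): 2048 x 2048 x 4 bytes is about 16 MB for a single texture, dropping Max Size to 1024 cuts that to about 4 MB, and 512 to about 1 MB, before any compression.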
Why power of 2 and square, well:
By ensuring the texture dimensions are a power of two, the graphics pipeline can take advantage of optimizations related to working with powers of two. For example, it can be faster to divide and multiply by powers of two. It will also be easier for Unity to create mip-maps (they might take more memory if the texture is not square). There are many sources on the internet about mip-mapping.
I am working on a game and I need to have two characters talking to each other. I know that XNA does not allow me to play a movie other than fullscreen, so I need to actually "play" the animation inside my game app in a different manner. The characters have animated environments around them, so the animations are not simple head movements, and as such, animating the characters via keyframing in a 3D model is not an option. The dialogue between the two characters is a cut-scene between levels, so it is not part of the gameplay itself.
I am not sure what the best approach to this would be so if you have any ideas, please let me know.
This is what I thought of so far:
1. Create all the individual frames for the characters as images. Load these images into a spritesheet and step through the frames at my desired framerate.
The problem with this approach is that the maximum spritesheet texture of 2048x2048 would not allow for many frames, as the characters are around 300x200 each. The other problem is that I have two characters, so the minimum scenario would require two 2048x2048 spritesheets in memory... and I'd like to keep the memory requirements low (see the rough numbers after this list).
2. Load a batch of frames (images), play them, then de-allocate them and load the next set. I know that in general it is not a good idea to load lots of small textures and switch between them in drawing calls (performance wise) but it seems as though I have no other choice in this case.
I am afraid that unloading stuff from memory and loading other stuff in while inside the Update-Draw loop would slow down the entire scene... so I'm not sure if this is a sane approach.
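To put rough numbers on option 1 (assuming 32-bit color): a 2048x2048 sheet is 2048 x 2048 x 4 bytes, about 16 MB, and at 300x200 per frame it only fits about 6 x 10 = 60 frames, so the two characters would already cost about 32 MB for roughly 60 frames each.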
The other idea is to make an mp4/wmv with the whole thing [char animation, subtitles of the dialogue, etc] but the interface that hosts these characters would not be as "smooth" as when rendered directly, etc...
Thank you for all your suggestions,
Marius
EDIT 1:
I have tested scenario number 2 and it seems that the performance is OK.
I have used scenario 2. It works for my particular case but I am sure it won't work for all cases.
I'm writing an iPad cocos2d game with animations.
The designer gave me frames for each animated character as PNGs. I'm using TexturePacker to pack my textures. But one of the characters is very big (600x600 pixels), and there are 200 frames of animation, so it would take a very large amount of memory if I packed it with TexturePacker into atlases. But really, not all 600x600 pixels change; the character only moves its hands and legs.
I think I should cut the static part out of the frames and keep only the dynamic parts of each frame to decrease memory usage. Is there some existing tool for this? Or is there some better way to handle my situation?
AFAIK, there is no tool for such a task. And 200 frames of 600x600 pixels... at 4 bytes per pixel that is roughly 1.4 MB per frame, or about 275 MB for all 200 frames, before any atlas padding. I am sure that you will not be able to fit all these frames in memory together with your other textures such as backgrounds, effects, etc. It is too much for a mobile device, even for an iPad. You should ask your artist to reduce the number and size of the frames as much as possible.
For example, a few months ago I got an animation with 200x300 pixel frames, where the actual content was only about 100x100 pixels; all the rest of each frame was filled with a glow. After the glow was removed, it did not look as cool as before, but it was still good, and it reduced the memory problems.
For others with the same problem:
In the end I gave up on cocos2d and wrote the game using video. The huge animations were prerendered into video files, and the small animations I overlaid using imageView.animationImages.
You can change the video's playbackTime to add interactions.
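A minimal sketch of the animationImages part (the frame names, frame count and sizes below are placeholders):

```swift
import UIKit

// Hypothetical frame assets; replace with your real images.
let frames: [UIImage] = (0..<10).compactMap { UIImage(named: "overlay_\($0)") }

let overlayView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
overlayView.animationImages = frames        // frames to cycle through
overlayView.animationDuration = 1.0         // one full cycle per second
overlayView.animationRepeatCount = 0        // 0 = repeat forever
overlayView.startAnimating()

// Then add overlayView as a subview on top of the video player's view.
```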