I need to draw 60,000 little circles in different positions, 60 times per second. What's the fastest way to do it?
Win2D's FillCircle and SpriteBatch are not fast enough.
Win2D is a managed layer on top of DirectX. If you need to get closer to the metal than that, you'll have to use DirectX directly. You can get an overview of DirectX on UWP here.
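Whatever API you end up on, the key change for 60,000 circles is to stop issuing one draw call per circle and instead build all of the geometry into a single vertex buffer drawn with one call (or use instancing). A rough, API-agnostic sketch of the CPU side (the names and segment count are illustrative, not from any particular library):

```cpp
#include <cmath>
#include <vector>

struct Vertex { float x, y; };

// Append one circle as a triangle fan, flattened into a triangle list,
// to a shared vertex array; the whole array is then uploaded and drawn
// with a single draw call instead of 60,000 individual ones.
void appendCircle(std::vector<Vertex>& out, float cx, float cy,
                  float radius, int segments = 12) {
    const float step = 6.2831853f / segments;   // 2*pi / segments
    for (int i = 0; i < segments; ++i) {
        float a0 = i * step, a1 = (i + 1) * step;
        out.push_back({ cx, cy });   // fan center
        out.push_back({ cx + radius * std::cos(a0), cy + radius * std::sin(a0) });
        out.push_back({ cx + radius * std::cos(a1), cy + radius * std::sin(a1) });
    }
}
```

At 12 segments that is 36 vertices per circle, roughly 2.2 million vertices for 60,000 circles, which is well within what a GPU can draw in a single call at 60 FPS.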
I have a platformer game, which is drawn using vector art. That is, I do not use any bitmaps of arbitrary size, but draw everything using draw.rectangle('fill', ...) and draw.polygon('fill', ...) (mainly for triangles).
However, I have run into performance issues. When I have about 80 blocks, platforms and spikes, the framerate drops to 35 FPS, which is rather unpleasant to play at. When I don't render them, my FPS is about 110.
My blocks generally don't move, so I thought about using something like SFML's VertexArray, but Love2d doesn't have anything like that. I found love.graphics.SpriteBatch, but it doesn't seem to support rectangles and triangles without a texture.
In summary, how can I quickly draw lots of simple, static shapes in Love2d?
Well, it turns out it is just called a Mesh in Love2d, not a vertex array.
Anyway, thanks for all your attention (4 views; that was sarcasm).
I need to visualize a design in a Windows application, and therefore need to draw diagonal lines very fast. I tried GDI+ (because I need transparency), and drawing diagonal lines is about 10 times slower than drawing vertical/horizontal lines. I sometimes need about 400 ms to draw 2000 diagonals that cross the screen.
After this I tested Direct2D, which was about 2x faster than GDI+, but still not nearly fast enough. Now I am starting to look at OpenGL for the 2D graphics; there I would look at the scene from above and use an orthographic projection.
Can anybody tell me what the right way is to draw diagonals at high speed?
Regards, Pete
You could manually rasterize the lines to a pixel buffer (simply a 2D integer array), preferably in a faster language (e.g. an external DLL written in C), then blit the buffer as a bitmap to your window (GDI or DirectDraw is sufficient).
A good procedure for this would be Bresenham's line algorithm: https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm.
If you need anti-aliasing: https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm.
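A minimal sketch of the buffer-plus-Bresenham approach (untested; the 32-bit ARGB format and the bounds-checked set() are my assumptions, and the final blit to the window is left out):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// A plain pixel buffer that can later be blitted to the window
// as a bitmap (e.g. via GDI's StretchDIBits).
struct PixelBuffer {
    int width, height;
    std::vector<uint32_t> pixels;   // row-major, ARGB
    PixelBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    void set(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;
    }
};

// Classic integer-only Bresenham; handles all octants.
void drawLine(PixelBuffer& buf, int x0, int y0, int x1, int y1, uint32_t color) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        buf.set(x0, y0, color);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

Rasterizing 2000 screen-crossing lines this way is only a few million integer pixel writes, so the blit to the window will likely dominate the frame time.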
I used this method for a software renderer project and achieved a 2x performance increase (as you did with Direct2D, but while still using GDI+) when I migrated from VB.NET to C. I was able to get 40+ FPS at 800x600 on an old 1.7 GHz Pentium machine, and ~25 even with GDI+, so I'm not sure why performance is inadequate in your case.
For the highest performance, use OpenGL or Direct3D. You'll be able to take advantage of hardware-accelerated drawing at the expense of more setup code. You should easily be able to draw tens of thousands of lines in under 16 ms.
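For example, in modern OpenGL you can upload every endpoint into one vertex buffer and draw the whole set with a single call. An untested sketch (it assumes a GL 3+ context and a trivial shader program with an orthographic projection are already set up; the helper names are mine):

```cpp
#include <GL/glew.h>   // or any other GL function loader
#include <vector>

// One line endpoint in screen space; two endpoints per line.
struct LineVertex { float x, y; };

// Upload all endpoints into a single VBO, described by one VAO.
GLuint createLineBatch(const std::vector<LineVertex>& verts, GLuint& vboOut) {
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(LineVertex),
                 verts.data(), GL_DYNAMIC_DRAW);   // DYNAMIC: re-filled per frame
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(LineVertex), nullptr);
    vboOut = vbo;
    return vao;
}

// The entire set of lines is then one draw call per frame; the vertex
// shader applies the orthographic projection.
void drawLineBatch(GLuint vao, GLsizei vertexCount) {
    glBindVertexArray(vao);
    glDrawArrays(GL_LINES, 0, vertexCount);   // 2 vertices per line
}
```

The point is the batching: one glDrawArrays for all 2000 lines instead of 2000 individual draw requests, which is where the GPU path beats GDI+ by orders of magnitude.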
At work, I work with very large images.
I currently do my rendering via SDL2.
The max texture size on the graphics card my machine uses is 8192x8192.
Because my data sets are larger than what will fit in a single texture, I split my image into multiple textures after it is loaded, and tile them.
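Roughly what my splitting code looks like, simplified (identifiers are illustrative and error handling is omitted; SDL_CreateRGBSurfaceWithFormat needs SDL 2.0.5+):

```cpp
#include <SDL.h>
#include <vector>

// One tile of the large image, with the source region it covers.
struct Tile {
    SDL_Texture* tex;
    SDL_Rect     src;   // placement of this tile in image space
};

// Split a loaded surface into tiles no larger than the GPU texture limit.
std::vector<Tile> makeTiles(SDL_Renderer* ren, SDL_Surface* img, int maxSize) {
    std::vector<Tile> tiles;
    for (int y = 0; y < img->h; y += maxSize) {
        for (int x = 0; x < img->w; x += maxSize) {
            SDL_Rect r { x, y,
                         SDL_min(maxSize, img->w - x),
                         SDL_min(maxSize, img->h - y) };
            SDL_Surface* part = SDL_CreateRGBSurfaceWithFormat(
                0, r.w, r.h, 32, img->format->format);
            SDL_BlitSurface(img, &r, part, nullptr);
            tiles.push_back({ SDL_CreateTextureFromSurface(ren, part), r });
            SDL_FreeSurface(part);
        }
    }
    return tiles;
}
```

Each tile is then drawn with its own SDL_RenderCopy at the appropriate offset and scale.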
However, I have found that this comes at a very steep cost. Rendering only 4 textures around 5K by 5K (pixels) each completely tanks the framerate!
Conventional wisdom tells me that the fewer texture swaps the better, but with such large images I've found myself between a rock and a hard place.
One thing I've considered is that perhaps if I were to chunk the images up into many small textures, I could take advantage of culling, which would hopefully be a net win. But there's a big problem with that approach: I need to be able to zoom out.
Another option would be to downscale the images. This seems promising, as the analysis I am doing on the images does not require the high resolution that they provide.
I know that OpenGL has mipmapping, but I am inexperienced with OpenGL and am wary of diving into it for a work project. I am not aware of a good way to downscale the images within the confines of SDL2, and for reasons specific to the work I am doing, scaling the images down offline (before I load them) is not appealing.
What is the best approach for me to get the highest framerate in this situation?
So, I've been working on a project for a while now in DirectX 11, but some people have been suggesting that I should be doing it in Direct2D. So I've been playing with the idea in my project. What I've ended up with is HORRIFIC performance. Is Direct2D intended for use with hundreds of thousands of vertices? Because that's what I'm using.
Direct2D gives you a simple-to-use API to draw high-quality 2D graphics, but it comes at a performance cost compared to fine-tuned DX11 rendering.
Here is a per-draw cost for each primitive, for reference:
FillRoundedRectangle (1 pixel corner): 96
DrawRoundedRectangle: 264
FillRectangle: 6
DrawRectangle: 190
FillEllipse: 204
DrawEllipse: 204
Line (regardless of orientation/length): 46
Bezier: from 312 to 620
Direct2D is built on a feature level 10 device and builds vertices in an internal buffer (the only case where it uses instancing is text drawing).
So if you need to batch primitives, an instanced drawer can yield a pretty hefty performance gain (as a personal example, my timeline keyframe rendering went from 15 ms down to 2 ms after swapping the draws from D2D to a custom DX11 instanced shader).
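To make that concrete, here is a rough, untested sketch of the instancing setup (it assumes you already have an ID3D11Device/ID3D11DeviceContext and a shader whose input layout marks slot 1 as D3D11_INPUT_PER_INSTANCE_DATA; the Instance struct and function names are illustrative):

```cpp
#include <d3d11.h>

// Per-instance data, one entry per primitive to draw.
struct Instance { float x, y, width, height; float rgba[4]; };

// A dynamic buffer holding all instances, re-filled each frame via Map.
// (HRESULT checking omitted for brevity.)
ID3D11Buffer* createInstanceBuffer(ID3D11Device* dev, UINT maxInstances) {
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = maxInstances * (UINT)sizeof(Instance);
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    ID3D11Buffer* buf = nullptr;
    dev->CreateBuffer(&desc, nullptr, &buf);
    return buf;
}

// One draw call renders every instance; the vertex shader reads the
// per-instance slot and positions each copy of the base quad.
void drawInstances(ID3D11DeviceContext* ctx,
                   ID3D11Buffer* quadVB, ID3D11Buffer* instanceVB,
                   UINT instanceCount) {
    ID3D11Buffer* bufs[2]    = { quadVB, instanceVB };
    UINT          strides[2] = { sizeof(float) * 2, sizeof(Instance) };
    UINT          offsets[2] = { 0, 0 };
    ctx->IASetVertexBuffers(0, 2, bufs, strides, offsets);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    ctx->DrawInstanced(4, instanceCount, 0, 0);   // 4 verts per quad
}
```

The win comes from replacing thousands of individual Draw* calls with one buffer upload and a single DrawInstanced.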
If you are on Windows 8, you can easily mix Direct2D and Direct3D in the same "viewport", so it can be worth a look (in my use case I use DX11 with structured-buffer-based instancing for all the heavy parts, and swap to a Direct2D context for text and other small bits).
If you need to draw custom geometry (especially with a reasonably high polygon count), it's best to stick to Direct3D 11, since it's designed exactly for that.
I'm developing a card game in Android using SurfaceView and canvas to draw the UI.
I've tried to optimize everything as much as possible but I still have two questions:
During the game I'll need to draw 40 bitmaps (the 40 cards of the Italian deck). Is it better to create all the bitmaps in the onCreate method of my customized SurfaceView (storing them in an array), or to create them as needed (every time the user gets a new card, for example)?
I'm able to get over 90 fps on an old Samsung I5500 (528 MHz, with a QVGA screen), 60 fps on an Optimus Life (800 MHz and HVGA screen) and 60 fps with a Nexus One/Motorola Razr (1 GHz and dual-core 1 GHz with WVGA and qHD screens), but when I run the game on an Android tablet (Motorola Xoom, dual-core 1 GHz and 1 GB of RAM) I get only 30/40 fps... how is it possible that a 528 MHz CPU with 256 MB of RAM can handle 90+ fps while a dual-core processor can't handle 60 fps? I'm not seeing any GC calls at runtime...
EDIT: Just to clarify, I've tried both ARGB_8888 and RGB_565 without any change in performance...
Any suggestions?
Thanks
Some points for you to consider:
It is recommended not to create new objects while your game is running; otherwise you may trigger unexpected garbage collections.
Your FPS numbers don't sound right; you may have measurement errors. However, my guess is that you are resizing the images to fit the screen size, and that affects the memory usage of your game and may cause slow rendering times on tablets.
You can use profiling tools to confirm: TraceView
OpenGL would be much faster
One last tip: don't draw overlapping cards if you can; draw only the visible ones.
Good luck!
OK, so it's better to create the bitmaps in the onCreate method; that's what I'm doing right now...
They are OK; I believe the 60 fps cap on some devices is just a restriction imposed by the manufacturers, since there is no advantage in going above 60 fps (I'm making this assumption because it makes no difference whether I render 1 card, 10 cards or no cards... the onDraw method is called 60 times per second, but if I add, say, 50-100 cards it drops accordingly). I don't resize any card because I use the proper folder (mdpi, hdpi, etc.) for each device, so I get the exact size of the image without resizing it...
I've tried to look at it, but from what I understand, all of the execution time is spent drawing the bitmaps, not resizing them or updating their positions. Here it is:
I know, but it would add complexity to the development, and I believe that using a canvas for 7 cards on the screen should be just fine...
I don't draw every card of the deck... I just swap bitmaps as needed :)
UPDATE: I've tried to run the game on a Xoom 2, Galaxy Tab 7 Plus and Asus Transformer Prime and it runs just fine at 60 fps... could it just be a problem with Tegra 2 devices?