Drawing atop a scrollable, zoomable image in Qt

I'm sorry if my question is somewhat vague. It's been a few years since I did anything with Qt, and back then I never did any fancy image stuff. What I'm asking for below is just some general suggestions on which classes to consider using. I'm trying to avoid barking up the wrong tree from the very start.
The situation: I'm writing a Qt-based program in which I need to display a somewhat large (let's say 5000x5000) raster image. The user should be able to zoom (quickly) in and out, and pan around the image in a way similar to, for example, Google Maps. So far, this is not very different from the Qt ImageViewer example, except perhaps for the requirement that zooming happens quickly.

However, I also need to draw on the order of 50k simple geometric shapes (let's say circles) on top of the image, and be able to add and remove some of these in a simple way. The circles should have the same size no matter the zoom level, and should thus either be redrawn whenever the user zooms, or be drawn with vector graphics. Think of the circles as map annotations: they should look the same at any zoom level, and also behave nicely with respect to panning.
I guess my question is twofold:
Can Qt draw vector graphics on top of a raster image?
In general, which classes should I consider for the above?
Thanks in advance. I don't like answering vague questions myself, but maybe someone with experience with Qt's graphics capabilities has an answer.

I suggest you use QGraphicsView and friends for this. It handles all the view/world transformations for you, and the vector items can be implemented with various QGraphicsItem subclasses.
You can change the sizes of the items whenever the zoom level changes to maintain constant apparent sizes.
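For illustration, here is a minimal C++ sketch of that setup (the image file name, circle position, and sizes are placeholders). Note that the QGraphicsItem::ItemIgnoresTransformations flag gives you the constant on-screen size for free, without manually resizing items on every zoom change:

#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGraphicsEllipseItem>
#include <QGraphicsPixmapItem>
#include <QPixmap>
#include <QPen>
#include <QBrush>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QGraphicsScene scene;
    // The raster image is just an ordinary item at the bottom of the scene.
    QGraphicsPixmapItem *background = scene.addPixmap(QPixmap("map.png")); // placeholder file
    background->setZValue(-1); // keep it under the annotations

    // One circle annotation; ItemIgnoresTransformations keeps its on-screen
    // size constant no matter how far the view is zoomed in or out.
    QGraphicsEllipseItem *circle = scene.addEllipse(-5, -5, 10, 10, QPen(Qt::red), QBrush(Qt::red));
    circle->setPos(2500, 2500); // position in image coordinates
    circle->setFlag(QGraphicsItem::ItemIgnoresTransformations);

    QGraphicsView view(&scene);
    view.setDragMode(QGraphicsView::ScrollHandDrag); // map-style panning
    view.show();
    return app.exec();
}

Zooming is then just a call to view.scale(factor, factor) from a wheel-event handler, and QGraphicsScene's built-in BSP-tree item index keeps hit-testing and redraws of 50k small items manageable.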

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low-level and I should use an engine and so on... indeed, I already tried some engines, and they never quite match what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I think I do not really want to go that way. A lot of things about working with Minecraft's code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code, I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case). That worked, since Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever), and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Has anyone tried something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor; the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a Java port using JOGL here, but you can easily port it to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being that you do not want to load animations for your bones but want to compute/generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometries on the GPU the standard way.
As Sorin said, to compute each transform you simply have to iterate over your hierarchy and accumulate transforms, given the transform of the parent bone and your local transform w.r.t. the parent.
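As a rough C++ sketch of that accumulation pass (the Bone struct and function names are illustrative; matrices are stored column-major, as OpenGL expects):

#include <vector>
#include <array>

// Column-major 4x4 matrix, matching OpenGL's memory layout.
using Mat4 = std::array<float, 16>;

Mat4 multiply(const Mat4 &a, const Mat4 &b) {
    Mat4 r{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return r;
}

struct Bone {
    Mat4 localTransform;        // recomputed each frame for animated bones
    std::vector<Bone> children;
};

// Walk the hierarchy, accumulating parent * local at each node; call with
// the identity matrix as parentWorld for the root. The resulting world
// matrices are what you upload as uniforms before drawing each sub-shape.
void computeWorldTransforms(const Bone &bone, const Mat4 &parentWorld,
                            std::vector<Mat4> &out) {
    Mat4 world = multiply(parentWorld, bone.localTransform);
    out.push_back(world);
    for (const Bone &child : bone.children)
        computeWorldTransforms(child, world, out);
}

Each frame you recompute localTransform for the animated bones (for example, driving a wing's rotation from a sine of the elapsed time), walk the tree once, and upload the resulting world matrices as shader uniforms. Nothing here puts a hard limit on the number of sub-shapes.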
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to (100, 100, 10) (where the player is), and then the "head" subcomponent would have an additional translation of (0, 0, 5) (just a bit higher on the z-axis).
You can store these as matrices (they can encode translation, rotation, and scaling) and use glPushMatrix and glPopMatrix to add and remove a matrix from a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like:
glPushMatrix();
glMultMatrixf(my_transform); // or individual glTranslatef/glRotatef/glScalef calls
// Draw my mesh
for (auto &child : children) { child.draw(); } // children inherit the current matrix
glPopMatrix();
This gives you a hierarchical setup so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (or use a library). I think the OpenGL stack may have a depth limit (implementation-dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time, i.e., it doesn't matter for performance how deep a mesh sits in the hierarchy.
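A minimal sketch of such a do-it-yourself stack, reusing the Mat4 and multiply helpers from the sketch further up:

#include <stack>

// A matrix stack in main memory, replacing the fixed-function GL one.
std::stack<Mat4> matrixStack;

Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

void pushTransform(const Mat4 &local) {
    Mat4 top = matrixStack.empty() ? identity() : matrixStack.top();
    matrixStack.push(multiply(top, local));
    // Upload matrixStack.top() as the model-matrix uniform, then draw.
}

void popTransform() { matrixStack.pop(); }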
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations, and you still have to implement it yourself.
Anything more complicated (i.e. modify the mesh based on the bone links) you would need to implement yourself.
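For reference, a tiny sketch of the time-based linear interpolation mentioned above (names are illustrative):

// Linearly interpolate between a start and end value.
float lerp(float a, float b, float t) { return a + (b - a) * t; }

// t goes from 0 to 1 over the animation's duration (in seconds).
float animate(float start, float end, float elapsed, float duration) {
    float t = elapsed / duration;
    if (t > 1.0f) t = 1.0f; // clamp so we stop at the end pose
    return lerp(start, end, t);
}

Applied per joint angle or translation, this is enough for simple crouch-style transitions.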

Efficiently rendering tiled map using SpriteKit

As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the amount of draws needed, however, swapping textures of nodes that need to be animated was still very slow and had a huge detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be an efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit that are animated by changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and an SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which then forwards it to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages, which is a wrapper around a texture atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but may help.
SpriteKit will likely be able to handle what you need but you need to look at different optimizations for SpriteKit and more so your game logic.
SpriteKit: Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures as long as you need them, and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead keep reusing a texture stored in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but then you have to manage your own zPosition on all the sprites.
Game logic: Updating every tile on every loop is going to be very expensive. You will want to look for a better way to do that, such as keeping smaller arrays or doing logic (model) updates on a background thread.
I currently have a project you can look into if you want, called Old Frank. I have a map that is 75x75 with 32px by 32px tiles that may be stacked 2 tall. I have both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. Not saying there isn't optimization work to be done (it is a work in progress), but I feel it might help point you in the right direction at least.
Hope that helps.

Best low level canvas library for making interactive animations?

I'm evaluating canvas libraries, and my needs are:
- I want to make it easy to build nice-looking buttons that move around and on which I can easily capture events. Button-drawing helpers would be cool.
- I'll be building a system for others to use to create animated scenes combining moving text, images, and sound. I won't ever be drawing complex shapes myself; the most I might be drawing is buttons around some text.
- I do not want to be totally insulated from the low-level machinery of the per-frame drawing callback. Helped along, sure, but I'm going to be syncing with Web Audio API stuff and want to keep access to super-tight timing control.
- I'm comfortable with pretty low-level scripting of animation, and would rather not have it be something that changes Canvas into some totally different paradigm, but I'm not sure on this point.
- It needs to work well for touch on iOS.
- I'd ideally like to be using one with good docs and a high truck number. The state of canvas libs reminds me of the state of JS libs 10 years ago, and I'd rather not invest in something that doesn't have an actual "team" behind it. Truck number == 1 worries me.
You flagged KineticJS, so I can say a little bit about how that would work.
1) It's a great tool for tracking shapes on a canvas, capturing clicks, and moving them around. It's easy to place an image on any shape, but I would use another program to make those images.
2) Even if you don't do a lot beyond buttons, KineticJS provides some nice features for manipulating the canvas, and I'm sure you'd use a lot of them in making tools for others.
3) KineticJS provides an animation object that repeatedly calls the draw() method for you. You define your draw method in order to create animations.
4) It's more of a wrapper around canvas. You work with a Stage and Layers, but there is still a lot of transparency to the canvas itself, and you can always do direct manipulation as well.
5) You can capture a broad range of events including "touch", "click", etc. It's easy to treat them the same when appropriate or differently if you need to. Furthermore, you can simply mark shapes as "draggable" and it handles all that appropriately.
6) Kinetic has had spectacular documentation and examples, but looking now, the tutorials seem to be missing from http://kineticjs.com/ and I can't find them elsewhere. That's mildly worrisome, but the docs are still there, and my guess is that they'll be back up soon, since KineticJS is still under active development.
I'll weigh in on #1:
Nice looking buttons:
Hands-down... use Adobe Illustrator to create a set of button vector images (.svg).
If you need low-level control over the button design at run-time, convert the Illustrator images to canvas drawing commands with this great plugin from Mike Swanson:
http://blog.mikeswanson.com/post/29634279264/ai2canvas
The key here is that canvas will scale the vector button for you so you're always getting a professional, polished look both on a small mobile screen and a large desktop screen.
You could use canvas to build each part of a button from scratch, but don't reinvent the wheel.
A good animation library is GreenSock. It also helps you build timelines (kind of like Flash timelines).
http://www.greensock.com/gsap-js/
As to canvas libraries, check out Stack Overflow's sister site that offers software recommendations:
http://softwarerecs.stackexchange.com
Good luck with your project!

SDL accelerated rendering

I am trying to understand the whole 2D accelerated rendering process using SDL 2.0.
So my question is: which would be the most efficient way to draw circles on the screen, and why?
Some ways would be:
First, create a software surface, draw the necessary pixels on it, create a texture out of that surface, and lastly copy that texture to the rendering target.
Another implementation would be to draw a circle using SDL_RenderDrawLine multiple times, and I think this is the way it is implemented in SDL2_gfx.
Or is there a more efficient way to do all of this?
Take this question more generally: it is about manually drawing other shapes that probably couldn't be rendered easily with the 2D rendering API that SDL provides (using draw-line or draw-rectangle).
With the example of circles, this is a fairly complicated question; it is based more on the visual quality you wish to achieve, which will drive performance. Drawing lots of short lines will vary vastly based on how close to a circle you wish to get. If you are happy to use, say, 60 lines, which will look nearly seamless on small shapes but will begin to appear not to be a circle when scaled up, the performance will likely be better (depending on the user's hardware). Note also that SDL_RenderDrawLines will be much, much faster for many lines, as it avoids lots of context switches for rendering calls.
However if you need a very accurate circle with thousands of lines to get a good approximation it will be faster to simply use a bitmap and scale and blit it. This will also give you a 'smoother' feel to the circle.
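As a rough sketch of that bitmap approach (SDL2, error handling omitted, and the function name is made up): render the filled circle into a surface once, upload it as a texture, then scale and blit it each frame:

#include <SDL.h>

// Render a filled circle into a surface once, then reuse it as a texture.
SDL_Texture *makeCircleTexture(SDL_Renderer *ren, int radius, Uint32 rgba) {
    int d = radius * 2;
    SDL_Surface *surf =
        SDL_CreateRGBSurfaceWithFormat(0, d, d, 32, SDL_PIXELFORMAT_RGBA32);
    for (int y = 0; y < d; ++y) {
        Uint32 *row = (Uint32 *)((Uint8 *)surf->pixels + y * surf->pitch);
        for (int x = 0; x < d; ++x) {
            int dx = x - radius, dy = y - radius;
            // Inside the circle: opaque color; outside: transparent.
            row[x] = (dx * dx + dy * dy <= radius * radius) ? rgba : 0;
        }
    }
    SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, surf);
    SDL_FreeSurface(surf);
    return tex;
}

SDL_RenderCopy with a destination rectangle then scales it on the GPU, so one prerendered circle can serve many sizes, at the cost of some softness when scaled far up.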
In my personal opinion I do not think the hardware accelerated render API has much use outside of some special uses such as graph rendering and perhaps very simple GUI drawing. For anything more complex I would usually use bitmap based drawing.
With regard to the second part, it again depends on the accuracy of any arcs you need to draw. If you can easily approximate the shape with a few tens of lines, it will be fast; otherwise, the pixel method is better.

What's the best way to "smudge" an image programmatically?

I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
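As a rough illustration of that idea (in C++ for concreteness, though the question is Python-flavored; grayscale only, naive brute-force loops, and the Image struct is a hypothetical stand-in for however you hold pixel data):

#include <cmath>
#include <vector>

// Grayscale image as a flat row-major buffer.
struct Image {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const { return px[y * w + x]; }
};

// Apply a Gaussian blur only inside a circle of `radius` around
// the control point (cx, cy); pixels outside are left untouched.
Image blurRegion(const Image &src, int cx, int cy, int radius, float sigma) {
    Image out = src;
    int k = (int)std::ceil(3 * sigma); // kernel half-width
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy > radius * radius) continue;
            float sum = 0, wsum = 0;
            for (int j = -k; j <= k; ++j)
                for (int i = -k; i <= k; ++i) {
                    int sx = x + i, sy = y + j;
                    if (sx < 0 || sy < 0 || sx >= src.w || sy >= src.h)
                        continue;
                    float wgt = std::exp(-(i * i + j * j) / (2 * sigma * sigma));
                    sum += wgt * src.at(sx, sy);
                    wsum += wgt;
                }
            out.px[y * src.w + x] = sum / wsum;
        }
    return out;
}

Sweeping the control point along the smudge direction and re-applying a small blur (or blending in slightly offset copies) at each step gives the directional smear.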
Another method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent blur result.
James
