EDIT: The original question is still contained below, but I decided to re-title it to a form that will be more useful to developers in a variety of cases, some of which are described in my answer below, since the solution to the original problem turned out to have a much wider area of application.
I have a set of greyscale icons for an application, and a requirement that the icon color can be changed by the user.
So, the obvious solution is to use the stock Colorize element from QtGraphicalEffects.
The effect itself has a cached property, which caches the result of that particular effect so that it is not continuously recalculated. However, this only applies to that particular instance of the effect: if there are multiple icon instances, each with its own Colorize effect, the cache will not be shared between them.
Obviously, since all icons are the same size and color, a single cache would be enough - the data already in VRAM could be reused, saving both VRAM and GPU time.
So the big question is how to reuse that single cache of that single effect and display it multiple times without any overhead.
Also, the question above concerns the current course I've taken with colorizing icons; however, there might be another approach I am missing.
Naturally, efficiency is key, but simplicity is also desired. I can think of several low-level ways to do this very efficiently, but they all require more complex low-level implementations and are not possible to do in QML.
The solution turned out to be unexpectedly simple.
In the case specific to the OP - colorizing icons - the most efficient way is to simply use a custom ShaderEffect with a trivial fragment shader: set gl_FragColor from the desired color, passed in as a vec4, and the alpha value sampled from the source image. There is really no need to cache anything, as the shader is about as simple and fast as it gets.
There is only one thing to consider - the QML scene graph may allocate the original image in a texture atlas, in which case the default ShaderEffect implementation will copy the texture out of the atlas into a separate texture. We do not want this, as it defeats the purpose: VRAM usage will rise, since the copy is made for every "instance", and the newly allocated textures may also end up larger than they need to be, because some platforms impose a minimum texture size - and since we are talking about icons, they won't be all that big.
The solution is to explicitly set supportsAtlasTextures to true. This means the shader must also account for the texture's offset within the atlas - still very little overhead. This ensures efficiency: textures from atlases will not be duplicated in memory, and furthermore, the render engine can batch different shader effects that use different textures from the same atlas into a single draw call.
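For illustration, here is a minimal sketch of that idea using Qt 5 style inline GLSL - the icon id, the file path and the tintColor property are made-up names for this example, not part of the original code:

Image {
    id: icon
    source: "qrc:/icons/myicon.png"
    visible: false                  // only the tinted copy below is shown
}

ShaderEffect {
    property variant source: icon
    property color tintColor: "orange"
    supportsAtlasTextures: true     // sample straight from the atlas, no private copy
    width: icon.width
    height: icon.height
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;
        uniform sampler2D source;
        uniform lowp vec4 tintColor;
        uniform lowp float qt_Opacity;
        void main() {
            // keep only the alpha of the greyscale icon, output premultiplied color
            lowp float a = texture2D(source, qt_TexCoord0).a;
            gl_FragColor = vec4(tintColor.rgb, 1.0) * a * qt_Opacity;
        }"
}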
A similar approach can be used to cache pretty much anything and use that cache to display an "image" - use a ShaderEffectSource to "capture" the desired image, and then a ShaderEffect with an even more trivial fragment shader that simply outputs the data from the source sampler. Several extremely useful use-cases immediately come to mind (a minimal sketch follows the list below):
it can be used to "instantiate" the results of computationally intensive shaders as images; keep in mind that ShaderEffectSources and ShaderEffects can be chained in arbitrary order
it can be used to instantiate procedurally generated images, once again using shaders; these can serve as tiling textures and can even be animated very efficiently
it can be used together with a QML Canvas to use a complex canvas drawing as a cache and source for multiple "images"
it can be used as an image produced by the composition of complex QML Items - those are actually quite heavy on RAM. Imagine a scenario where you have 1000 objects, each made out of 20 different QML items - rectangles, text, images, god forbid animations - that's 20000 objects in memory, which is about 500 MB of RAM usage based on my tests. But if they are identical, a single object can be used to provide the cache, and every other object only needs a single shader effect to display it. This has implications for CPU time as well - say your design is bound to changing values, a very common scenario: with 20000 objects in memory, that's 20000 bindings to evaluate, and even for trivial expressions this may take several seconds on a mobile device, freezing the screen for that duration. Caching reduces the freeze time by a factor of 1000, to practically nothing.
it can also be used to cache and instantiate animations, significantly reducing the needed CPU time, and it works with video as well
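As a rough sketch of that caching pattern (the item names, sizes and model count are made up for the example; the default ShaderEffect shaders simply output the source sampler):

Item {
    id: expensiveItem
    width: 64; height: 64
    // ... rectangles, text, Canvas content, whatever is costly to compose ...
}

ShaderEffectSource {
    id: cache
    sourceItem: expensiveItem
    hideSource: true        // show only the cached copies
    live: false             // re-render the cache only when scheduleUpdate() is called
}

Grid {
    columns: 40
    Repeater {
        model: 1000
        ShaderEffect {
            // no custom shaders: each delegate just displays the shared cache texture
            property variant source: cache
            width: 64; height: 64
        }
    }
}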
What I did to optimize QtGraphicalEffects was to use Item.grabToImage() :)
This function returns a QQuickItemGrabResult, which has a url() function. That function returns a QUrl which can be set as the source of an Image object.
So what you need to do is create one Image with Colorize applied to it. When it is ready, call grabToImage(), and after a successful grab save the QUrl somewhere safe and destroy the source object.
I suppose you will need to change the color of the icons from time to time while the application is running. If so, keep in mind that once you change the source of the Image objects so that nothing uses the grabbed image URL anymore, it will be released from memory - not instantly, but when the memory is needed.
Because of some incompatibility, my applications only manage the memory correctly if I use QGuiApplication::setAttribute(Qt::AA_UseOpenGLES); in the main.cpp file.
Also here is an important fact:
Images are cached and shared internally, so if several Image items have the same source, only one copy of the image will be loaded. (source)
Here is a working example. The source object is a Rectangle but it can easily be changed to an Image.
import QtQuick 2.3
import QtQuick.Window 2.2
import QtGraphicalEffects 1.0

Window {
    visible: true
    width: 500
    height: 500

    property string urlOfMyIcon: ""

    Grid {
        columns: 1
        spacing: 10

        Image {
            width: 100
            height: 100
            source: urlOfMyIcon
        }
        Image {
            width: 100
            height: 100
            source: urlOfMyIcon
        }
        Image {
            width: 100
            height: 100
            source: urlOfMyIcon
        }
    }

    Component.onCompleted: {
        component.createObject(this)
    }

    Component {
        id: component

        Item {
            id: yourImageWithLoadedIconContainer

            Rectangle {
                id: yourImageWithLoadedIcon
                width: 80
                height: 80
                color: "white"
                visible: false

                // needed because I used Rectangle instead of Image
                Component.onCompleted: {
                    colorizeEffect.grabToImage(function(result) {
                        urlOfMyIcon = result.url;
                        yourImageWithLoadedIconContainer.destroy()
                    }, Qt.size(width, height));
                }

                // use this when using Image instead of Rectangle
                // onStatusChanged: {
                //     if (status === Image.Ready)
                //         colorizeEffect.grabToImage(function(result) {
                //             urlOfMyIcon = result.url;
                //             yourImageWithLoadedIconContainer.destroy()
                //         }, yourImageWithLoadedIcon.sourceSize);
                // }
            }

            Colorize {
                id: colorizeEffect
                anchors.fill: yourImageWithLoadedIcon
                source: yourImageWithLoadedIcon
                hue: 0.8
                saturation: 0.5
                lightness: -0.2
                visible: false
            }
        }
    }
}
Related
ECharts custom series are really flexible and convenient.
But unfortunately the performance is really slow (compared to the "native" series).
e.g. here is a jsfiddle example (using echarts 4.8.0) that draws 5 custom series (with only 500 points each).
To test with more data, you can simply adjust the variables at the top of the jsfiddle code:
/**
 * the number of data-samples
 */
var dataCount = 2000;

/**
 * we create one custom series for each item in the csColors array
 */
var csColors = [
    '#FF9800', '#9C27B0', '#512DA8', '#4CAF50', '#448AFF'
    // , '#d32f2f', '#F1C40F', '#8bc6ff', '#00bc91', '#992f1c'
];
Even with this little data, zooming (using the mouse wheel) or using the brush is already really slow.
For our application we would need up to 10 charts with ~5 series and 1k data samples each. And with that many samples the custom series is just not usable, because rendering takes way too long.
Any ideas how we could improve the performance?
For example, when we use the brush it seems that almost every mouse-move redraws the whole series. I guess it may be related to the emphasis settings - is there a way to deactivate this?
Or is there maybe another way we could get fast custom series (i.e. directly draw on the canvas, ..)?
In Windows World, a dedicated render thread would loop something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();

        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen renderbuffer (i.e., a MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = layer.nextDrawable;
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
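As a rough sketch of those steps in code - assuming a command queue (commandQueue) and your drawing commands exist elsewhere; the clear color is purely illustrative:

// Optionally clear the attachment before drawing.
desc.colorAttachments[0].loadAction = MTLLoadActionClear;
desc.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);
desc.colorAttachments[0].storeAction = MTLStoreActionStore;

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... set pipeline state, buffers, and encode draw calls here ...
[encoder endEncoding];

[commandBuffer presentDrawable:drawable];
[commandBuffer commit];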
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
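For example, a common way to pace such a loop (a sketch with illustrative names, not part of the original answer) is a counting semaphore that is signalled when each command buffer finishes on the GPU:

layer.maximumDrawableCount = 2;                      // double buffering instead of the default 3
dispatch_semaphore_t frameSemaphore = dispatch_semaphore_create(2);

while (!quit) {
    dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);

    id<CAMetalDrawable> drawable = [layer nextDrawable];
    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
    // ... encode rendering into drawable.texture as shown above ...

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
        dispatch_semaphore_signal(frameSemaphore);   // this frame's GPU work is done
    }];
    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];
}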
How can I set height and width in scaling, and can I depend on the image generated (in terms of quality and professional scale generation)?
how can I set height and width in scaling
You can't. Specify a maxSize for each scaling.sizes entry and Fine Uploader will proportionally scale the image.
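For example, a minimal configuration might look like this sketch (the size names, pixel limits, and endpoint are arbitrary):

var uploader = new qq.FineUploader({
    element: document.getElementById("uploader"),
    request: { endpoint: "/uploads" },
    scaling: {
        sizes: [
            { name: "thumb", maxSize: 100 },  // longest edge capped at 100px
            { name: "large", maxSize: 600 }
        ]
    }
});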
can I depend on the image generated (quality
Quality will be limited if you rely on the browser only. There is an entire section in the documentation that explains how you can generate higher-quality resizes by integrating a third-party resize library. I also discuss why you may or may not want to do this. From the documentation:
Fine Uploader's internal image resize code delegates to the drawImage method on the browser's native CanvasRenderingContext2D object. This object is used to manipulate a canvas element, which represents a submitted image File or Blob. Most browsers use linear interpolation when resizing images. This can lead to extreme aliasing and moire patterns which is a deal breaker for anyone resizing images for art/photo galleries, albums, etc. These kinds of artifacts are impossible to remove after the fact.
If speed is most important, and precise scaled image generation is not paramount, you should continue to use Fine Uploader's internal scaling implementation. However, if you want to generate higher quality scaled images for upload, you should instead use a third-party library to resize submitted image files, such as pica or limby-resize. As of version 5.10 of Fine Uploader, it is extremely easy to integrate such a plug-in into this library. In fact, Fine Uploader will continue to properly orient the submitted image file and then pass a properly sized canvas to the image scaling library of your choice to receive the resized image file, along with the original full-sized image file drawn onto a canvas for reference. The only caveat is that, due to issues with scaling larger images in iOS, you may need to continue to use Fine Uploader's internal scaling algorithm for that particular OS, as other third-party scaling libraries most likely do not contain logic to handle this complex case. Luckily, that is easy to account for as well.
If you'd like to, for example, use pica to generate higher-quality scaled images, simply pull pica into your project, and contribute a scaling.customResizer function, like so:
scaling: {
    customResizer: !qq.ios() && function(resizeInfo) {
        return new Promise(function(resolve, reject) {
            pica.resizeCanvas(resizeInfo.sourceCanvas, resizeInfo.targetCanvas, {}, resolve)
        })
    },
    ...
}
I'm implementing a Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one on the Y axis) into a specific texture (let's call it tex_1) and then use the information contained in tex_1 as the input of a second render pass (for the X axis) that fills another texture (let's call it tex_2) containing the final Gaussian blur result.
A good practice would be to create 2 framebuffers (FBOs), each with a texture attached to GL_COLOR_ATTACHMENT0 (for example). But I just wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
So I would have to enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture for the correct render pass, as follows:
Pseudo-code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); // tex_1
    {
        // BIND the external texture to blur
        // DRAW code (Y axis blur pass) here...
        // -> write the result into COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); // tex_2
    {
        // BIND the first texture (tex_1) filled above in the first render pass
        // DRAW code (X axis blur pass) here...
        // -> use this texture in the FS to compute the final result
        //    within COLOR_ATTACHMENT1 (tex_2) -> the final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem: for each render pass I need to bind an external texture as an input to my fragment shader. Consequently, the first binding of the texture (the color attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options to do this. The 3rd one will actually not work in OpenGL ES, but I'll explain it anyway because you might be tempted to try it otherwise, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
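In actual GL calls (valid in both desktop OpenGL and ES 2.0), approach 2 would look roughly like the following sketch; fbo1, tex1, tex2 and externalTex are assumed to have been created already, and the draw calls are elided:

glBindFramebuffer(GL_FRAMEBUFFER, fbo1);

/* pass 1: render the first blur into tex1 */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
glBindTexture(GL_TEXTURE_2D, externalTex);   /* input of pass 1 */
/* ... draw full-screen quad with the first blur shader ... */

/* pass 2: re-attach tex2 and read from tex1 */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);
glBindTexture(GL_TEXTURE_2D, tex1);          /* input of pass 2 */
/* ... draw full-screen quad with the second blur shader ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);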
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: They do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup, and then only need glBindFramebuffer() calls to switch between them. Binding a different object is generally considered cheaper than modifying an existing object, which you need for the second approach.
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then, it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind this texture to some texture unit. You might get some undefined results if you create a feedback loop (rendering to a texture you are reading from).
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment to this one), you can't simply render to an arbitrary single color attachment in unextended GLES1/2 - only GL_COLOR_ATTACHMENT0 is available - and you need some tricks in GLES3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.
Up until now, when a user has uploaded an image, I have been saving several different versions of it for use throughout my site. As the site has grown, so have the numbers of sizes needed.
At the moment each uploaded image is sized in to about 6 new images and saved on the server.
The downside is that every time I need to create a new size (right now, for instance, I'm making a new size for an image gallery), I have to cycle through all the thousands of images and re-cut a new size for each.
Whereas, when I started, it was a nice quick way to avoid resizing images on the fly, now it's starting to turn into a nightmare.
Is it better to continue saving different sizes, and just deal with the overhead, or is it better at this point to get maybe 3 general sizes, and resize them on the fly as needed?
"Resizing" images using html/css (e.g., specifying height & width) is generally not what you want to do - it results in poorly scaled images with artifacts from the resize, and is inefficient as the user is potentially downloading a much larger file than they actually need.
Rather, having some kind of server-side solution to allow for on-the-fly resizing is probably what you want. I'd recommend using ImageMagick - combined with the implementation for your favorite language and some web-server voodoo (e.g., using .htaccess for Apache), you can easily have /path/to/yourimage.png?50x50 fire a call to a resize script that resizes the image, saves it in a cache folder, and outputs the resized file to the browser. This is better all around - you get proper resizing, your user only downloads the exact file they need, and the end-result is cached so your resize action only occurs once. Check out Image::Magick::Thumbnail for an example (in perl)
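To illustrate the idea independently of any particular framework, a resize-and-cache handler might look roughly like this Python sketch, shelling out to ImageMagick's convert; the directory layout and the "WIDTHxHEIGHT" size convention are assumptions for the example:

import os
import re
import subprocess

SOURCE_DIR = "/var/www/images"         # originals (assumed location)
CACHE_DIR = "/var/www/images/cache"    # resized copies

def resized_path(filename, size):
    """Return the path of a cached resize, creating it on first request."""
    if not re.fullmatch(r"\d+x\d+", size):
        raise ValueError("size must look like '50x50'")
    cached = os.path.join(CACHE_DIR, size + "_" + filename)
    if not os.path.exists(cached):
        os.makedirs(CACHE_DIR, exist_ok=True)
        subprocess.run(
            ["convert", os.path.join(SOURCE_DIR, filename), "-resize", size, cached],
            check=True)
    return cached   # serve this file to the browser

# e.g. a request for /path/to/yourimage.png?50x50 would map to:
# resized_path("yourimage.png", "50x50")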
Edit - if you respond to this with what server-side language/framework you are using, I would be happy to point you in the direction of a thumbnail/resizing implementation of ImageMagick or something else for your platform.
Multiple versions.
Some browsers simply don't scale these things well and you end up with choppy, nasty artifacts in the image, bad pixelation, etc...
The exception could be if you know all the images are photographic. Then have versions for your larger sizes, but shrinking could be ok. But if these have illustration or text, the effect will be noticeable.
.resize {
    width: 200px;
    height: auto;
}

.resize {
    width: auto;
    height: 300px;
}