Description
I noticed a significant difference in OxyPlot's performance when a LineSignal of a high-frequency sine is plotted versus a LineSignal of a low-frequency sine. (It doesn't matter whether it is a sine or any other signal; I just used a sine.)
When I use a low frequency (where you can clearly see the sine wave), rendering, panning and zooming are very fast. However, when I use a high frequency (where you cannot identify a curve because the whole plot area is covered by the signal), rendering, panning and zooming are significantly slower.
Apparently the number of points is not the root of the problem, since I use 100_000 points in each scenario. It looks like the area covered by color is what matters: the more of the area is covered, the slower the response.
Using the ScreenPoint Decimator helps a bit, but the difference is still huge.
I was not able to test the behavior with OxyPlot.SkiaSharp, since mouse operations do not seem to work there.
How to reproduce
Check out the GitHub project at https://github.com/chriglburri/OxyPlotRenderingPerformanceDemo
Read the README.MD for how to proceed.
As written in the README, comment out the lines that render a signal with high or low frequency to compare the performance of the application.
Does anyone have an idea how I can improve the rendering performance?
(This question is basically a duplicate of a github issue for which I got no response: https://github.com/oxyplot/oxyplot/issues/1895)
Using the XamlRenderContext improves the performance significantly. You can create your own PlotView class like this:
public class XamlPlotView : PlotView
{
    protected override IRenderContext CreateRenderContext()
    {
        return new XamlRenderContext(this.Canvas);
    }
}
Then you use it in the xaml file of your view like this:
<local:XamlPlotView x:Name="Plot">
</local:XamlPlotView>
Related
I'm developing in J2ME and using a Canvas to draw some images.
Now, my question is: what is the difference in drawing speed between the code samples below?
Drawing after clipping to the area rectangle:
g.clipRect(x, y, myImage.getWidth(), myImage.getHeight());
g.drawImage(myImage, x , y, Graphics.TOP | Graphics.LEFT);
g.setClip(0, 0, screenWidth, screenHeight);
Drawing without clipping:
g.drawImage(myImage, x, y, Graphics.TOP | Graphics.LEFT);
Is the first one faster? I'm drawing to the screen a lot.
Well, the direct answer to your question would be Mu, I'm afraid, because you appear to approach the issue from the wrong direction.
The thing is, the clipping API is not intended for performance considerations or optimizations. You can find full coverage of its purpose in the API documentation (available online); it does not say anything about performance impact:
Clipping
The clip is the set of pixels in the destination of the Graphics object that may be modified by graphics rendering operations.
There is a single clip per Graphics object. The only pixels modified by graphics operations are those that lie within the clip. Pixels outside the clip are not modified by any graphics operations.
Operations are provided for intersecting the current clip with a given rectangle and for setting the current clip outright...
Attempting to use the clipping API for imaginary performance considerations will make your code a nightmare for future maintainers to understand. Note that this future maintainer may be you yourself, just a few weeks, months, or years later. I, for one, have had my nose broken by my own code written some time ago without a clearly understandable intent; trust me, it hurts just as much as messing with poor code written by anyone else.
Don't get me wrong - there is a chance that clipping may have a substantial performance impact in some particular case on a specific device; why not, everything is possible given the variety of MIDP implementations. And you know what, there is even a chance of it having the opposite impact on some other device.
If (if) that happens, and if (if) you somehow get a clear, solid, tested and proven justification of a specific performance impact - then (then) go ahead and implement whatever tricks are necessary to reach the required performance, no matter how perverse they may be (BTDTGTTS). Until then, though, drop any baseless assumptions that just happen to come to your mind.
Until then... Just. Drop. It.
Developers love to optimize code and with good reason. It is so satisfying and fun. But knowing when to optimize is far more important. Unfortunately, developers generally have horrible intuition about where the performance problems in an application will actually be... Most performance tuning reminds me of the old joke about the guy who's looking for his keys in the kitchen even though he lost them in the street, because the light's better in the kitchen... (Brian Goetz)
This will almost certainly vary between platforms, and will depend on how much you're actually drawing.
I suggest you measure performance yourself by logging the number of paints per second, or the average duration of a paint method, and painting this on screen.
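For illustration, here is a minimal, language-agnostic sketch of that measurement logic (shown here in Python; in MIDP you would do the same arithmetic with System.currentTimeMillis() inside paint() and draw the result with drawString). The class name and window size are invented for the example.

import time

class PaintTimer:
    """Rolling average of paint duration and paints per second (hypothetical helper)."""

    def __init__(self, window=60):
        self.window = window          # number of recent paints to average over
        self.durations = []           # seconds spent inside paint()
        self.last_start = None

    def begin_paint(self):
        self.last_start = time.perf_counter()

    def end_paint(self):
        self.durations.append(time.perf_counter() - self.last_start)
        if len(self.durations) > self.window:
            self.durations.pop(0)

    def stats(self):
        if not self.durations:
            return 0.0, 0.0
        avg = sum(self.durations) / len(self.durations)
        return avg * 1000.0, 1.0 / avg   # (avg ms per paint, paints per second)

# usage: call begin_paint()/end_paint() around the body of your paint method, then
# render stats() as text on screen to compare the clipped vs. unclipped version.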
Drawing without clip should be faster on any platform for the simple reason that you are not calling two clip methods. But I might ask, why are you using clip to begin with?
You usually use clipping when you have an animation sprite or icon variations in the same file. In that case you could instead create a file for each frame/icon. It will increase your JAR size and use more heap space to hold these images in memory, but they will be drawn faster.
I have a web cam that takes a picture every N seconds. This gives me a collection of images of the same scene over time. I want to process that collection of images as they are created to identify events like someone entering into the frame, or something else large happening. I will be comparing images that are adjacent in time and fixed in space - the same scene at different moments of time.
I want a reasonably sophisticated approach. For example, naive approaches fail for outdoor applications. If you count the number of pixels that change, for example, or the percentage of the picture that has a different color or grayscale value, that will give false positive reports every time the sun goes behind a cloud or the wind shakes a tree.
I want to be able to positively detect a truck parking in the scene, for example, while ignoring lighting changes from sun/cloud transitions, etc.
I've done a number of searches and found a few survey papers (Radke et al., for example), but nothing that actually gives algorithms I can put into a program I can write.
Use color spectrum analysis without luminance: when the sun goes down for a while you will get a similar result, because the colors do not change (too much).
Don't look for big changes, but for quick changes. If the luminance of the image drops by 10% over 10 minutes, that is the usual evening effect. But when it changes by -5%, 0, +5% within seconds, it's a quick change.
Don't forget to adjust the reference values.
Split the image into smaller regions (see the sketch after this list). When all the regions change the same way, you know it's a global change, like an eclipse, but if only one region's parameters change, then something is happening there.
Use masks to create smart regions. If you're watching a street, filter out the sky, the trees (blown by the wind), etc. You may set up different trigger values for different regions. The regions should overlap.
A special case of a region is a line. A line (a narrow region) contains fewer and more homogeneous pixels than a flat area. Mark, say, a green fence; it's easy to detect whether someone crosses it, because it makes a bigger change in the line than in a flat area.
If you can, change the real world. Repaint the fence in a strange color to create a color spectrum that can be identified more easily. Paint tags on the floor and walls that can be OCRed by the program, so you can detect whether something hides them.
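A minimal NumPy sketch of the split-into-regions idea from this list, assuming grayscale frames of equal size; the grid size, threshold and optional mask are arbitrary illustration values, not something from the answer.

import numpy as np

def region_changes(prev, curr, grid=(8, 8), threshold=12.0, mask=None):
    """Mean absolute difference per grid cell; returns a boolean grid of 'changed' cells."""
    prev = prev.astype(np.float32)
    curr = curr.astype(np.float32)
    diff = np.abs(curr - prev)
    if mask is not None:                      # 0/1 mask: ignore sky, trees, ...
        diff = diff * mask
    h, w = diff.shape
    gh, gw = grid
    cells = diff[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    cell_means = cells.mean(axis=(1, 3))      # average change per region
    changed = cell_means > threshold
    # If (almost) every region changed, treat it as a global event (lighting);
    # a localized change is a candidate detection.
    is_global = changed.mean() > 0.8
    return changed, is_global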
I believe you are looking for Template Matching.
I would also suggest you look into OpenCV.
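A short OpenCV sketch of template matching, under the assumption that you already have a reference image of the object (e.g. the truck) you want to detect; the filenames and the confidence threshold are placeholders.

import cv2

scene = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)            # current webcam frame
template = cv2.imread("truck_template.png", cv2.IMREAD_GRAYSCALE) # what you want to find

# Normalized cross-correlation is fairly tolerant of global brightness changes.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                      # arbitrary confidence threshold
    h, w = template.shape
    print("match at", max_loc, "score", max_val)
    cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)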
We had to contend with many of these issues in our interactive installations. It's tough to avoid false positives without being able to control some of your environment (it sounds like you will have some degree of control). In the end we looked at combining several techniques, and we created an open piece of software named OpenTSPS (Open Toolkit for Sensing People in Spaces - http://www.opentsps.com). You can look at the C++ source on GitHub (https://github.com/labatrockwell/openTSPS/).
We use 'progressive background relearning' to adjust to the changing background over time. Progressive relearning is particularly useful in variable lighting conditions, e.g. if the lighting in a space changes from day to night. This, in combination with blob detection, works pretty well, and the only way we have found to improve on it is to use 3D cameras like the Kinect, which cast out IR and measure it.
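This is not the OpenTSPS code itself, just a hedged OpenCV sketch of the general idea (an adaptive background model that keeps relearning, plus blob detection); the learning rate and minimum blob area are made-up values.

import cv2

cap = cv2.VideoCapture(0)                         # webcam; replace with your source
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # A small positive learning rate keeps "relearning" the background slowly,
    # so gradual lighting changes are absorbed into the model.
    fg_mask = bg.apply(frame, learningRate=0.001)
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg_mask = cv2.medianBlur(fg_mask, 5)                               # knock out speckle noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 500]          # ignore tiny blobs
    if blobs:
        print(f"{len(blobs)} moving blob(s) detected")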
There are other algorithms that might be relevant, like SURF (http://achuwilson.wordpress.com/2011/08/05/object-detection-using-surf-in-opencv-part-1/ and http://en.wikipedia.org/wiki/SURF) but I don't think it will help in your situation unless you know exactly the type of thing you are looking for in the image.
Sounds like a fun project. Best of luck.
The problem you are trying to solve is very interesting indeed!
I think that you would need to attack it in parts:
As you already pointed out, a sudden change in illumination can be problematic. This is an indicator that you probably need to achieve some sort of illumination-invariant representation of the images you are trying to analyze.
There are plenty of techniques lying around, one I have found very useful for illumination invariance (applied to face recognition) is DoG filtering (Difference of Gaussians)
The idea is that you first convert the image to grayscale. Then you generate two blurred versions of this image by applying a Gaussian filter, one a little more blurred than the other (you could use sigmas of 1.0 and 2.0 in the Gaussian filter, respectively). Then you subtract the pixel intensities of the more-blurred image from the less-blurred image. This operation enhances edges and produces a similar image regardless of strong variations in illumination intensity. These steps can be performed very easily using OpenCV (as others have stated). This technique has been applied and documented here.
That paper adds an extra step involving contrast equalization. In my experience this is only needed if you want to obtain "visible" images from the DoG operation (pixel values tend to be very low after the DoG filter and are viewed as black rectangles on screen), and performing a histogram equalization is an acceptable substitute if you want to be able to see the effect of the DoG filter.
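A minimal OpenCV/NumPy sketch of the DoG step described above, using the 1.0 and 2.0 sigmas mentioned; the normalization at the end is only there so the result is viewable.

import cv2
import numpy as np

def dog_filter(bgr_image, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians: enhances edges, suppresses slow illumination gradients."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    blur_small = cv2.GaussianBlur(gray, (0, 0), sigma1)   # less blurry
    blur_large = cv2.GaussianBlur(gray, (0, 0), sigma2)   # more blurry
    dog = blur_small - blur_large
    # Stretch to 0..255 only so the (normally very dark) result can be displayed.
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)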
Once you have illumination-invariant images you can focus on the detection part. If your problem can afford a static camera that can be trained for a certain amount of time, then you could use a strategy similar to alarm motion detectors. Most of them work with an average thermal image: they record the average temperature of the "pixels" of a room view and trigger an alarm when the heat signature varies greatly from one "frame" to the next. Here you wouldn't be working with temperatures, but with average, light-normalized pixel values. This would allow you to build up over time which areas of the image tend to have movement (e.g. the leaves of a tree in a windy environment) and which areas are fairly stable. You could then trigger an alarm when a large number of pixels already flagged as stable show a strong variation from one frame to the next.
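A rough NumPy sketch of that alarm-style idea, assuming the frames have already been light-normalized (for example by the DoG step above); the smoothing factors and thresholds are invented for illustration.

import numpy as np

class StablePixelAlarm:
    def __init__(self, alpha=0.02, change_thresh=25.0, alarm_fraction=0.05):
        self.alpha = alpha                    # how fast the running average adapts
        self.mean = None                      # running average image
        self.variability = None               # running average of |frame - mean|
        self.change_thresh = change_thresh
        self.alarm_fraction = alarm_fraction

    def update(self, frame):
        frame = frame.astype(np.float32)
        if self.mean is None:
            self.mean = frame.copy()
            self.variability = np.zeros_like(frame)
            return False
        deviation = np.abs(frame - self.mean)
        # "Stable" pixels are those that historically move very little (not leaves in the wind).
        stable = self.variability < 10.0
        changed = deviation > self.change_thresh
        alarm = (stable & changed).mean() > self.alarm_fraction
        # Keep learning the scene: update running mean and per-pixel variability.
        self.mean += self.alpha * (frame - self.mean)
        self.variability += self.alpha * (deviation - self.variability)
        return alarm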
If you can't afford training your camera view, then I would suggest you take a look at the TLD tracker by Zdenek Kalal. His research is focused on object tracking with a single frame as training. You could probably use the semi-static view of the camera (with no foreign objects present) as a starting point for the tracker and flag a detection when the TLD tracker (a grid of points where local motion flow is estimated using the Lucas-Kanade algorithm) fails to track a large number of grid points from one frame to the next. This scenario would probably allow even a panning camera to work, as the algorithm is very resilient to motion disturbances.
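This is not the TLD tracker itself, just a small OpenCV sketch of the "grid of points tracked with Lucas-Kanade, flag when too many are lost" idea; the grid spacing and loss threshold are arbitrary.

import cv2
import numpy as np

def make_grid(shape, step=20):
    h, w = shape
    ys, xs = np.mgrid[step // 2 : h : step, step // 2 : w : step]
    return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

def lost_fraction(prev_gray, curr_gray, step=20):
    """Fraction of grid points that Lucas-Kanade optical flow failed to track."""
    pts = make_grid(prev_gray.shape, step)
    _, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    return 1.0 - float(status.mean())

# e.g. flag a detection when lost_fraction(prev, curr) > 0.3 between consecutive frames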
Hope these pointers are of some help. Good luck and enjoy the journey! =D
Use one of the standard measures, like Mean Squared Error, to find the difference between two consecutive images. If the MSE is beyond a certain threshold, you know that there is some motion.
Also read about Motion Estimation.
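For example, a short NumPy version of the MSE check (the threshold is something you would tune for your scene):

import numpy as np

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale frames."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    return float(np.mean((a - b) ** 2))

# motion = mse(previous_frame, current_frame) > THRESHOLD   # tune THRESHOLD empirically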
If you know that the image will remain relatively static, I would recommend:
1) Look into neural networks. You can use them to learn what defines someone within the image versus what is not something of interest.
2) Look into motion detection algorithms; they are used all over the place.
3) Is your camera capable of thermal imaging? If so, it may be worthwhile to look for hotspots in the images. There may be existing algorithms to turn your webcam into a thermal imager.
I have about 5000 markers I need to render on a Google Map. I'm currently using the API (v3) and there are performance issues on slower machines, especially in IE. I have already done the following to help speed things up:
Used a simple marker class that extends OverlayView and renders a single DIV element per marker
Implemented the MarkerClusterer library to cluster the markers at different levels
Render GIFs for IE, instead of alpha PNGs
Are there faster clustering classes? Any other tips? I'm trying to avoid server-side clustering unless this is the only option left to squeeze performance out of the system.
Thanks
I used a method that loads all the markers onto the page, and then listens for the map to finish panning.
When the map has finished panning, I first check the zoom level: if the map is zoomed out too far, I don't display anything. If it's at an acceptable level, I then loop through the markers I have stored and check whether they fall into the bounding box of the map. If they do, they get added. A second loop then removes any that have moved out of the view.
The highest number I've used is about 30,000 markers with this method, although I have it so you must be zoomed in quite far to see them. In areas of higher concentration of markers it's obviously a little slower but it's useable.
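A language-agnostic sketch of that pan-handler logic (written as Python pseudocode; the real thing would hang off the map's idle/pan event and use the Maps API's own bounds test). All names here, including the marker methods, are invented for illustration.

from dataclasses import dataclass

@dataclass
class Bounds:
    south: float
    west: float
    north: float
    east: float

    def contains(self, lat: float, lng: float) -> bool:
        return self.south <= lat <= self.north and self.west <= lng <= self.east

def on_map_idle(zoom, bounds, all_markers, visible):
    """Called when the map stops panning: add markers that entered the view, remove those that left."""
    if zoom < 12:                        # zoomed out too far: show nothing (or clusters)
        for m in list(visible):
            m.remove_from_map()
            visible.discard(m)
        return
    for m in all_markers:                # add markers now inside the viewport
        if m not in visible and bounds.contains(m.lat, m.lng):
            m.add_to_map()
            visible.add(m)
    for m in list(visible):              # drop markers that moved out of view
        if not bounds.contains(m.lat, m.lng):
            m.remove_from_map()
            visible.discard(m)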
The solution mentioned above works for a much higher number of markers. We use it for millions of GPS points on the backend (including polygons, etc.). The only problem is the logic behind it, such as proper caching of spatial queries, or fetching new results only if the user moves the map by more than X meters. There is a lot of work to get it done, but for viewing a really high number of points there is nothing better.
Marker clusterers usually work on the browser side, so there is still a need to load all the points at once, and this makes that method unusable for large numbers.
You can check it out live at http://www.tixik.com/london-2354567.htm (just click "plan a trip" and start planning). Try to move the map, zoom in or out, and all points will show/hide as the map is zoomed/dragged.
What is the principle behind creating a rain effect or a water-drop effect, regardless of the particular language? I've seen a few impressive rain and water effects done in Flash, but how do they actually work?
Rain Effect Example
Rain Drop Water Effect Example
You are asking the question as if the two examples were related, but they are actually two different effects:
1) simulating drops of rain as seen in the air (drop trails; simple, but the realism depends very much on lighting)
For this you need to simulate the following events:
for each time step:
create new drops
move existing drops vertically down
remove (or/and animate) the drops hitting the ground
As pointed out in other answers, new drops (size and position) can be created with various algorithms.
As for speed, the drops can simply move at a constant speed.
Finally, to show the trails you need to look at simple projections. (A sketch of this loop follows below.)
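A tiny Python sketch of that per-time-step loop (no rendering, just the bookkeeping); the spawn rate, speeds and screen size are made up.

import random

SCREEN_WIDTH, SCREEN_HEIGHT = 640, 480
drops = []                                   # each drop: dict with position and speed

def step(dt=1.0 / 60.0, spawn_per_step=5):
    # 1. create new drops at random x positions at the top of the screen
    for _ in range(spawn_per_step):
        drops.append({"x": random.uniform(0, SCREEN_WIDTH),
                      "y": 0.0,
                      "speed": random.uniform(400, 600),   # px/s, roughly constant per drop
                      "length": random.uniform(8, 16)})    # trail length for drawing
    # 2. move existing drops vertically down
    for d in drops:
        d["y"] += d["speed"] * dt
    # 3. remove (or hand off to a splash animation) the drops that hit the ground
    drops[:] = [d for d in drops if d["y"] < SCREEN_HEIGHT]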
2) simulating splash waves (water simulation, and in the example a reflective surface is shown)
For this you only need to know where the drops fall and how big they are; the rest is wave propagation. However, that is only really visible if there is a reflection, and that can be a bit tricky.
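For the splash-wave part, a common cheap trick (not necessarily what that Flash example uses) is the classic two-buffer ripple algorithm. Here is a NumPy sketch with an arbitrary damping value.

import numpy as np

def ripple_step(current, previous, damping=0.985):
    """One step of the two-buffer water ripple: average the four neighbours,
    subtract the previous height field, and damp so waves die out."""
    neighbours = (np.roll(current, 1, axis=0) + np.roll(current, -1, axis=0) +
                  np.roll(current, 1, axis=1) + np.roll(current, -1, axis=1)) / 2.0
    new = (neighbours - previous) * damping
    return new, current                       # becomes (current, previous) next frame

height = np.zeros((240, 320), dtype=np.float32)
prev = np.zeros_like(height)
height[120, 160] = 100.0                      # a drop lands in the middle
for _ in range(100):
    height, prev = ripple_step(height, prev)
# 'height' can then be used to offset texture lookups to fake refraction/reflection.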
NOTES:
There are many things that determine realism, but mostly it boils down to detail.
For example, rain is usually seen clearly only in particular lighting conditions - close to lamps or against a high-contrast background. Otherwise it is quite bleak.
There are also details in the interaction - splashing on the surfaces it hits, which can leave bubbles (if close enough to notice) or create waves.
Another example: if you look at this tutorial, which is not really realistic but does illustrate one point, you will see that even though the rain looks more like snow, it exposes the 'flatness' of your first example (which has absolutely no depth).
So, it is all about detail.
Try to model what you have in terms of the events you have to simulate, and then solve each one separately - for example, using fractals for seeding rain might be overkill, but if you model your work nicely you can start with random seeding and later substitute more accurate/complex methods.
Here's a paper by Mandelbrot and Lovejoy which is one of the most cited works on developing fractal models to represent rain.
The second one (Rain Drop Water Effect Example) is probably done with a wave equation simulator
They probably use particle effects mostly.
An old-school way that is dirt cheap is palette cycling. Basically, you set up a ramp of colors and move one color into the next at fixed intervals. The moving colors give the illusion of motion. I've worked on games where rain, wind, snow, waterfalls, fire, etc. have all been animated using palette cycling. It's a dying art, but it still works. :)
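A toy sketch of the palette-cycling idea, just to make the mechanism concrete; the indexed image and the color ramp here are invented.

# The indexed image never changes; only the palette is rotated,
# which makes the colors appear to flow (e.g. down a waterfall or rain streak).
palette = [(0, 0, 40), (0, 0, 80), (0, 0, 120), (0, 0, 160), (0, 0, 200), (0, 0, 240)]
indexed_image = [[(x + y) % len(palette) for x in range(8)] for y in range(8)]  # static indices

def render(frame):
    shift = frame % len(palette)
    rotated = palette[shift:] + palette[:shift]        # cycle the ramp by one step per frame
    return [[rotated[i] for i in row] for row in indexed_image]

# calling render(0), render(1), ... yields the same image with colors marching along the ramp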
Modern UIs are starting to give their UI elements nice inertia when moving. Tabs slide in, pages transition, and even some list boxes and scroll elements have nice inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulas for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?
You're talking about two different things here.
One is momentum - giving things residual motion when you release them from a drag. This is simply about remembering the velocity of a thing when the user releases it, then applying that velocity to the object every frame and also reducing the velocity every frame by some amount. How you reduce velocity every frame is what you experiment with to get the feel right.
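A minimal sketch of that momentum loop; the friction constant is something you tune by feel, exactly as described above.

def fling(position, velocity, friction=0.95, dt=1.0 / 60.0, min_speed=1.0):
    """Carry on moving after the user lets go, bleeding off velocity each frame."""
    while abs(velocity) > min_speed:          # stop once the motion is imperceptible
        position += velocity * dt
        velocity *= friction                  # the "how you reduce velocity" knob
        yield position

# usage: for p in fling(start_pos, release_velocity): draw the element at p each frame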
The other thing is ease-in and ease-out animation. This is about smoothly accelerating/decelerating objects when you move them between two positions, instead of just linearly interpolating. You do this by simply feeding your 'time' value through a sigmoid function before you use it to interpolate an object between two positions. One such function is
smoothstep(t) = 3*t*t - 2*t*t*t [0 <= t <= 1]
This gives you both ease-in and ease-out behaviour. However, you'll more commonly see only ease-out used in GUIs. That is, objects start moving snappily, then slow to a halt at their final position. To achieve that you just use the right half of the curve, ie.
smoothstep_eo(t) = 2*smoothstep((t+1)/2) - 1
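The two formulas above, written out as runnable functions together with the interpolation step they feed into:

def smoothstep(t):
    """Ease-in/ease-out: slow start, fast middle, slow stop (t in 0..1)."""
    return 3 * t * t - 2 * t * t * t

def smoothstep_eo(t):
    """Ease-out only: start snappily, slow to a halt (right half of the curve)."""
    return 2 * smoothstep((t + 1) / 2) - 1

def animate(start, end, t, ease=smoothstep_eo):
    """Interpolate a position using the eased time value instead of raw t."""
    return start + (end - start) * ease(t)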
Mike F's got it: you apply a time-position function to calculate the position of an object with respect to time (don't muck around with velocity; it's only useful when you're trying to figure out what algorithm you want to use.)
Robert Penner's easing equations and demo are superb; like the jQuery demo, they demonstrate visually what the easing looks like, but they also give you a position time graph to give you an idea of the equation behind it.
What you are looking for is interpolation. Roughly speaking, there are functions that vary from 0 to 1 and, when scaled and translated, create nice-looking movement. This is used quite often in Flash, and there are tons of examples. (Note: in Flash, interpolation has picked up the name "tweening", and the most popular type of interpolation is known as "easing".)
Have a look at this to get an intuitive feel for the movement types:
SparkTable: Visualize Easing Equations.
When applied to movement, scaling, rotation and other animations, these equations can give a sense of momentum, friction, bouncing or elasticity. For an example applied to animation, have a look at Robert Penner's easing demo. He is the author of the most popular series of animation functions (I believe Adobe's built-in ones are based on his). This type of transition works equally well on alphas (for fade-ins).
There is a bit of method to the usage. easeInOut starts slow, speeds up and then slows down. easeOut starts fast and slows down (like friction), and easeIn starts slow and speeds up (like momentum). Depending on the feel you want, you choose the appropriate one. Then you choose between Sine, Expo, Quad and so on for the strength of the effect. The others are easy to work out from their names (e.g. Bounce bounces, Back goes a little further and then comes back, like elastic).
Here is a link to the equations from the popular Tweener library for AS3. You should be able to rewrite these in JavaScript (or any other language) with little to no trouble.
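Here, for example, are a few of the common easing shapes in normalized form (t runs from 0 to 1 and the output runs from 0 to 1). These are not Penner's original four-argument signatures, just the same curves, and they are straightforward to port to JavaScript or any other language:

import math

def ease_in_quad(t):        # starts slow, speeds up (momentum feel)
    return t * t

def ease_out_quad(t):       # starts fast, slows down (friction feel)
    return 1 - (1 - t) * (1 - t)

def ease_in_out_sine(t):    # slow, fast, slow
    return 0.5 * (1 - math.cos(math.pi * t))

def ease_out_back(t, s=1.70158):   # overshoots slightly, then settles ("Back")
    t -= 1
    return t * t * ((s + 1) * t + s) + 1

# apply to any property: value = start + (end - start) * ease_out_quad(elapsed / duration)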
It's playing with the numbers.. What feels good is good.
I've tried to develop magic formulas myself for years. In the end the ugly hack always felt best. Just make sure you somehow time your animations properly and don't rely on some kind of redraw/refresh rate. These tend to change based on the OS.
I'm no expert on this either, but I believe they are done with quadratic formulas that, when given the correct parameters, start fast or slow and dramatically increase or decrease towards the end until a certain point is reached.