I have an issue with the performance of d3.js when displaying points on a map.
I am using this beautiful piece of code that I found here as a starting point: https://gist.github.com/bobuss/1985684
Basically, what the code at that link does is draw points on a map and draw curves to connect them.
However, when I tried to add more data points (around 300), it either crashes my browser or lags a LOT.
So I was wondering if there's any way to actually improve the performance in this case.
Thanks!
I considered using d3 to show some genomic data (23k points on a scatter plot).
It just couldn't work: most browsers will crash when you add 23k DOM nodes, and if you try to add some interactivity (hover, click) you end up with too many event listeners and everything dies.
I love d3, I've been using it since the Protovis days, but in my experience it becomes unusable after a few thousand elements, and every time I had to create a chart like that I ended up building it from scratch and implementing it on canvas. And there you end up with an entirely new set of problems: implementing your own hit tests, events, simulating hover...
It's a nightmare.
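For illustration, here's roughly what the canvas route looks like (a minimal sketch, not the gist's code; the element id, point count and hit radius are made up):

```javascript
// Minimal sketch: draw many points on a single <canvas> instead of creating
// one SVG/DOM node per point. Assumes a <canvas id="chart"> element exists.
var canvas = document.getElementById('chart');
var ctx = canvas.getContext('2d');

var points = [];
for (var i = 0; i < 25000; i++) {
  points.push({ x: Math.random() * canvas.width, y: Math.random() * canvas.height });
}

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'steelblue';
  for (var i = 0; i < points.length; i++) {
    ctx.beginPath();
    ctx.arc(points[i].x, points[i].y, 2, 0, 2 * Math.PI);
    ctx.fill();
  }
}

// Hit testing is now your job: one listener on the canvas, then search the
// data for a nearby point (a quadtree would make this much faster).
canvas.addEventListener('mousemove', function (e) {
  var rect = canvas.getBoundingClientRect();
  var mx = e.clientX - rect.left, my = e.clientY - rect.top;
  for (var i = 0; i < points.length; i++) {
    var dx = points[i].x - mx, dy = points[i].y - my;
    if (dx * dx + dy * dy < 16) { // within 4px of a point
      // show tooltip, highlight, etc.
      break;
    }
  }
});

draw();
```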
There is no good solution to "big data" charting in JS, at least not to my knowledge.
And that is a shame, to be honest. Seeing my MacBook Pro spin its fan at max speed and browsers become unresponsive because I'm trying to plot 25k points, on an i7 machine with 8 GB of RAM, is nonsense.
But that is what we get when we try to use the browser for something it's not meant for.
And yes, 2D GPU acceleration helped, but not that much.
It's obvious that using CSS3 shadows, transparency, effects and other features can dramatically lower overall page performance on some phones/tablets, but I can't find a single way to determine whether a device's performance is enough to handle "heavy" CSS3 features. I'd create two versions of the CSS: a lite one for budget phones/tablets and a full one for PCs and so on; but how should I choose? Using media queries doesn't seem like an adequate option.
This is quite a sophisticated topic and hard to answer properly without writing an epic. I'll try to keep it short and simple:
Tooling
Chrome DevTools offers many convenient features for profiling JS and CSS. As a first step I'd recommend the Timeline: activate the capturing of 'JS Profile', Memory and Paint, then analyse the largest coloured blocks (where a lot of calculation/layouting/painting needed to be done). More detailed information: https://developer.chrome.com/devtools/docs/timeline
How to (CSS performance focused)
A quite popular misconception is the rumour that "long selectors and a high number of selectors" cause performance issues. From what I've seen, debugged and read from great performance pros like Paul Lewis, this is usually NOT the bottleneck. Computing and applying CSS values is extremely fast. Instead, painting and layouting, especially when using animations, are costly. There are tricks like using translate instead of left/top positioning, using the will-change property, and many more. You can find an overview of what triggers what here: https://csstriggers.com/
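As a rough illustration of the translate/will-change point (my own sketch, not from any of the sources above; the element id and distances are made up): animating transform stays on the compositing path, while animating left would force layout on every frame.

```javascript
// Assumes an absolutely-positioned element <div id="box"> on the page.
var box = document.getElementById('box');

// Hint to the browser that this property will change, so it can promote the
// element to its own layer up front (use sparingly).
box.style.willChange = 'transform';

var start = null;
function step(ts) {
  if (start === null) start = ts;
  var x = Math.min((ts - start) / 10, 300); // move 300px over ~3s

  // Cheap: transform is handled at the composite step, no layout/reflow.
  box.style.transform = 'translateX(' + x + 'px)';

  // Expensive alternative (kept as a comment): changing `left` triggers
  // layout and paint on every frame.
  // box.style.left = x + 'px';

  if (x < 300) requestAnimationFrame(step);
}
requestAnimationFrame(step);
```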
Conclusion: media queries might not be a perfect solution, but they are not bad either, as the large number of selectors probably won't matter.
Maybe try to choose by screen resolution? Budget devices almost always have bad screens :)
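If you want to act on that, something along these lines could work (a sketch with made-up thresholds and file names like lite.css/full.css): pick the stylesheet at load time based on what the device reports.

```javascript
// Rough heuristic: treat small, low-density screens as "budget" devices.
// Thresholds and file names are assumptions for this sketch.
var isLowEnd =
  window.matchMedia('(max-width: 480px)').matches ||
  (window.devicePixelRatio || 1) < 1.5 ||
  (navigator.hardwareConcurrency || 2) <= 2; // few CPU cores, where reported

var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = isLowEnd ? 'lite.css' : 'full.css';
document.head.appendChild(link);
```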
I have a question that I must answer within 2 days:
Considering the stages of the rendering pipeline, if you have a low FPS, which stage comes to mind first as the most probable origin of your problem?
Could anyone help me understand, or give me a clue?
Thanks,
Things to consider:
Draw calls: do you use glBegin/glEnd and call glVertex several thousand times? Or maybe you issue too many glDrawArrays calls? Maybe you send too much data from system memory to the GPU each frame?
Vertex shader: do you have a simple or a complex vertex shader? Change it to a simple one and check the fps... is it better, or still too low?
Fragment shader: number of texture reads, if statements, instruction complexity. Change the resolution of the window and check the fps.
Buffer usage: do you keep buffers on the GPU, or do you transfer everything from system memory to GPU memory each frame? Try using 1x1 textures to check the performance.
Tools: use tools to perform measurements: gDEBugger, GLIntercept, etc...
There are other things (and I probably forgot to list even more), like geometry shaders and tessellation, but check the above list first.
https://gamedev.stackexchange.com/questions/1068/opengl-optimization-tips
GPU Gems, Chapter 28: Graphics Pipeline Performance
In general: measure, measure, measure :)
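If you happen to be working in a browser/WebGL context (an assumption on my part; the answer above is about desktop OpenGL), a trivial way to follow the "measure" advice is an FPS counter you can watch while you simplify each stage:

```javascript
// Minimal FPS meter: call tick() once per rendered frame and it logs
// frames-per-second roughly every second.
var frames = 0;
var last = performance.now();

function tick() {
  frames++;
  var now = performance.now();
  if (now - last >= 1000) {
    console.log('fps:', Math.round(frames * 1000 / (now - last)));
    frames = 0;
    last = now;
  }
}

// Example render loop: swap in a simpler shader, shrink the canvas, or cut
// draw calls, then compare the logged fps to find the expensive stage.
function frame() {
  // drawScene();  // your rendering goes here
  tick();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```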
This is a general question asked out of general interest, not to solve a specific problem.
I was wondering what the available ways are to implement a physics engine where objects interact with each other and with outside forces. As an example we can look at Angry Birds, or games like TIM, where objects "fly" through the air, collide and interact with each other, and are affected by the potential of the environment, like gravity, wind and other "forces".
The model I have thought about is that each thing has an object (an instance of some class) and a thread associated with it. Each time slot the thread gets, it "advances" the object in space by some small dt. In this setup you could have an "environment" object that takes a position in space and gives you the equivalent force applied there by the environment's potential. What I can't quite figure out is how the objects would interact with each other.
Also, am I close with this direction? Are there other solutions and models for these problems, and are they better? What am I missing (I must be missing something)?
The implementation is typically nothing like you describe, which would be far too expensive. Instead, everything is reduced to matrix transformations: points are lists of coordinates that are operated on by matrices that update them to the next time interval. The matrices are themselves calculated from the physics (they are a linear solution of the forces at that moment, more or less).
Things get more complicated when you have very large differences in scale (say, when simulating stars in a galaxy). Then you may use a more hierarchical approach, so that critical points (e.g. fast-moving or, more accurately, strongly accelerating ones) are updated more often than non-critical points. But even then the in-memory representation is very abstract and nothing like as direct as "one object per thing".
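To make the "lists of coordinates updated each time interval" idea concrete, here is a minimal sketch (plain semi-implicit Euler with gravity only, no matrices or collisions, and made-up names): a single loop advances every object, with no thread per object.

```javascript
// Data-oriented state: parallel arrays of positions and velocities,
// not one object + one thread per thing.
var n = 1000;
var px = new Float32Array(n), py = new Float32Array(n);
var vx = new Float32Array(n), vy = new Float32Array(n);

var GRAVITY = -9.81;  // the environment "potential" reduced to a force per unit mass
var DT = 1 / 60;      // fixed time step

function step() {
  for (var i = 0; i < n; i++) {
    // Semi-implicit Euler: update velocity from forces, then position.
    vy[i] += GRAVITY * DT;
    px[i] += vx[i] * DT;
    py[i] += vy[i] * DT;
    // Collision detection/response and object-object forces would go here,
    // typically after broad-phase culling rather than checking every pair.
  }
}

// The game loop calls step() at a fixed rate, independently of rendering.
setInterval(step, DT * 1000);
```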
4 years ago I built a webapp which is still used by some friends. The problem with that app is that it now has a huge database and it loads very slowly. I know that is my fault: MySQL queries are mixed all over the place (even in the layout generation code).
At the moment I know some OO. I'd like to use this knowledge in my old app, but I don't know how to do it without rewriting everything from the beginning. Using MVC for my app is very difficult at this moment.
If you were in my place, or if you had the task of improving the speed of my old app, how would you do it? Do you have any tips for me? Any working scenarios?
It all depends on context. The best would be to change the entire application, introducing best practices and standards at once. But perhaps it would be better to adopt an evolutionary approach:
1 - Identify the major bottlenecks in the application using a profiling tool or a load test.
2 - Estimate the effort required to refactor each item.
3 - Identify the pages whose performance matters most to the end user.
4 - Based on this information, create a task list and set the priority of each item.
Attack one problem at a time, making small increments. Always try to spend 80% of your time solving the 20% most critical problems.
Hard to give specific advice without a specific question, but here are some general optimization/organization techniques:
Profile to find hot spots in your code
You mention MySQL queries being slow to load; try to optimize them
Possibly move database access to stored procedures to help modularize your code
Look for repeated code and try to move it into objects one piece at a time
I have always thought that the way they zoom in and enhance on TV and in movies was strictly impossible, essentially because you cannot create more information than there is to begin with.
There were ways to get better-looking or clearer images, like re-sampling, but never to the extent seen on film.
Now, it seems that is not true.
I was reading this article, and it seems they have a way to do that now?
Or is this just a better version of what was already possible? Do you still need a fairly clear image to start with? Otherwise, what are the limits of this technique?
There is something called super-resolution. Some companies claim to use fractal theory to enhance images when they are upscaled. But what you see in most movies is just fiction.
Image enhancement always involves pixel interpolation (a.k.a. prediction) in one way or another. Interpolation can be good, bad or whatever, but it will never out-perform a real pixel that was recorded by the imaging device at a greater resolution.
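To show what interpolation means here, a small sketch (assuming a grayscale image stored as a flat row-major array; not from the answer above): bilinear upscaling only blends pixels that already exist, so it cannot recover detail that was never recorded.

```javascript
// Bilinear upscale of a grayscale image stored row-major in `src` (w x h).
// Every output pixel is a weighted blend of its four nearest source pixels.
function upscaleBilinear(src, w, h, scale) {
  var ow = Math.round(w * scale), oh = Math.round(h * scale);
  var out = new Float32Array(ow * oh);
  for (var y = 0; y < oh; y++) {
    for (var x = 0; x < ow; x++) {
      var sx = x / scale, sy = y / scale;            // position in the source
      var x0 = Math.min(Math.floor(sx), w - 1), x1 = Math.min(x0 + 1, w - 1);
      var y0 = Math.min(Math.floor(sy), h - 1), y1 = Math.min(y0 + 1, h - 1);
      var fx = sx - x0, fy = sy - y0;                // blend weights
      var top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
      var bot = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
      out[y * ow + x] = top * (1 - fy) + bot * fy;
    }
  }
  return out; // smoother, but contains no information that wasn't in `src`
}
```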