I have a question and must answer it within 2 days:
Considering the stages of the rendering pipeline, if you have a low FPS, which stage would come to mind first as the most probable origin of your problem?
Could anyone help me understand, or give me a clue? Thanks.
Things to consider:
Draw calls: do you use glBegin/glEnd and call glVertex several thousand times? Or maybe you issue too many glDrawArrays calls? Maybe you send too much data from system memory to the GPU each frame? (See the sketch after this list.)
Vertex shader: do you have a simple vertex shader or a complex one? Swap in a trivial one and check the FPS: is it better, or still too low?
Fragment shader: number of texture reads, if statements, instruction complexity. Change the resolution of the window and check the FPS.
Buffer usage: do you keep buffers on the GPU, or do you transfer everything from system memory to GPU memory each frame? Try using 1x1 textures to check the performance.
Tools: use tools to perform measurements: gDEBugger, glIntercept, etc.
There are other things to check as well (geometry shaders, tessellation, and probably more that I forgot to list), but start with the list above.
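To make the draw-call point concrete, here is a minimal, hypothetical C++ sketch (all names are illustrative, and it assumes a current OpenGL 3.3+ context set up via something like GLEW/GLFW): upload the vertex data to a GPU buffer once at load time, then issue a single glDrawArrays per frame instead of thousands of glVertex calls.

```cpp
#include <GL/glew.h>
#include <vector>

GLuint vao = 0, vbo = 0;
GLsizei vertexCount = 0;

// Called once at load time: the data now lives in GPU memory,
// so nothing is re-sent from system memory each frame.
void initMesh(const std::vector<float>& xyz) {
    vertexCount = static_cast<GLsizei>(xyz.size() / 3);
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(float),
                 xyz.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
}

// Called every frame: one draw call instead of thousands of glVertex calls.
void drawMesh() {
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```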
https://gamedev.stackexchange.com/questions/1068/opengl-optimization-tips
GPU Gems, Chapter 28: Graphics Pipeline Performance
In general: measure, measure, measure :)
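As one concrete way to measure, here is a hedged sketch using an OpenGL timer query (core since GL 3.3, ARB_timer_query) to see how much time the GPU actually spends on your draw calls; compare it with your CPU frame time to tell which side is the bottleneck.

```cpp
#include <GL/glew.h>
#include <cstdio>

// Assumes a current OpenGL 3.3+ context.
void timedDraw() {
    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TIME_ELAPSED, query);

    // ... issue the draw calls you want to profile here ...

    glEndQuery(GL_TIME_ELAPSED);
    GLuint64 ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns); // blocks until ready
    printf("GPU time: %.3f ms\n", ns / 1.0e6);
    glDeleteQueries(1, &query);
}
```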
It seems like for special tasks a GPU can be 10x or more powerful than the CPU.
Can we make this power more accessible and utilise it for common programming?
Like having a cheap server easily handle millions of connections? Or on-the-fly database analytics? Map/reduce/Hadoop/Storm-like stuff with 10x the throughput? And so on?
Is there any movement in that direction? Any new programming languages or programming paradigms that will utilise it?
CUDA and OpenCL are good entry points for GPU programming.
GPU programming uses shaders to process input buffers and almost instantly generate result buffers. Shaders are small algorithmic units, mostly working with float values, each carrying its own data context (input buffers and constants) used to produce results. Each shader is isolated from the others during a task, but you can chain them if required.
GPU programming won't be good at handling HTTP requests, since that is mostly a complex sequential process, but it will be amazing for processing, for example, a photo or a neural network.
As soon as you can chunk your data into tiny parallel units, then yes, it can help. The CPU will remain better for complex sequential tasks.
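To illustrate that chunking, here is a minimal OpenCL host-program sketch in C++ (a hedged example, not production code: error handling and resource cleanup are omitted). Each array element becomes one isolated work-item, which is exactly the shape of task a GPU handles well.

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The "shader" (kernel): one isolated work-item per array element.
static const char* kSrc =
    "__kernel void twice(__global float* data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] = 2.0f * data[i];\n"
    "}\n";

int main() {
    std::vector<float> host(1024, 1.5f);

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Copy the input to GPU memory, build the kernel, set its argument.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                host.size() * sizeof(float), host.data(), nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "twice", nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

    // Launch one work-item per element, then read the results back.
    size_t global = host.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                        host.data(), 0, nullptr, nullptr);
    printf("host[0] = %f\n", host[0]); // 3.0
    return 0;
}
```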
Colonel Thirty Two links to a long and interesting answer about this if you want more information: https://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
Which of these algorithms are output-sensitive (in their base form)?
ray tracing
gpu rendering
splatting
How can we use acceleration methods to make them (more) output-sensitive?
I think ray tracing and GPU rendering are not output-sensitive.
http://en.wikipedia.org/wiki/Output-sensitive_algorithm
For the folks who didn't understand the question, in computer science, an output-sensitive algorithm is an algorithm whose running time depends on the size of the output, instead of or in addition to the size of the input.
Ray tracing is output sensitive; in fact, many ray tracing programs can generate smaller images or movies in less time.
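To see why, consider the outer loops of a toy ray tracer (a hypothetical sketch; trace_ray stands in for the real intersection-and-shading work). The total work is the number of output pixels times the cost per ray, so halving the output resolution roughly quarters the running time.

```cpp
#include <vector>

struct Color { float r, g, b; };

// Stand-in for the real per-ray work (intersection tests, shading, ...).
Color trace_ray(int x, int y, int w, int h) {
    return Color{float(x) / w, float(y) / h, 0.0f};
}

std::vector<Color> render(int w, int h) {
    std::vector<Color> image(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            image[static_cast<size_t>(y) * w + x] = trace_ray(x, y, w, h);
    return image; // work done scales with the number of output pixels
}
```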
GPU rendering is output sensitive too: the GPU's ability to parallelise the task speeds things up, but far fewer computations are required to render a smaller image than a bigger one.
Texture splatting is also output sensitive: since textures are typically repeated, you can generate a huge image by joining many of them, which requires more CPU power (and memory).
I have an issue regarding the performance of d3.js when displaying points on a map.
I am using this beautiful piece of code that I found as a starting point: https://gist.github.com/bobuss/1985684
Basically, the code in the link draws points on a map and draws curves to connect them.
However, when I try to add more data points (around 300), it either crashes my browser or lags a lot.
So I was wondering if there's any way to improve the performance in this case.
Thanks!
I considered using d3 to show some genomic data (23k points on a scatter plot).
It just couldn't work: most browsers will crash when you add 23k DOM nodes, and if you try to add some interactivity (hover, click) you end up with too many event listeners and everything dies.
I love d3 and have been using it since the Protovis days, but in my experience it becomes unusable beyond a few thousand elements. Every time I had to create a chart like that, I ended up building it from scratch and implementing it on canvas. And there you end up with an entirely new set of problems: implementing your own hit tests, events, simulated hover...
It's a nightmare.
There is no good solution to "big data" charting in JS, at least not to my knowledge.
And that is a shame, to be honest. Seeing my MacBook Pro spin its fan at max speed and browsers become unresponsive because I'm trying to plot 25k points, on an i7 machine with 8 GB of RAM, is nonsense.
But that is what we get when we try to use the browser for something it's not meant for.
And yes, 2D GPU acceleration helped, but not that much.
This is a general question arising from an issue that interests me for general knowledge, not from a specific problem.
I was wondering what the available ways are to implement a physics engine where objects interact with each other and with outside forces. As examples, consider Angry Birds, or games like TIM, where objects "fly" through the air, collide and interact with each other, and are affected by the potential of the environment, like gravity, wind and other "forces".
The model I have thought about is that each physical thing is represented by an object (an instance of some class) with a thread attached to it. In each time slice the thread gets, it "advances" the object through space by some small dt. You could then have an "environment" object that takes a position in space and returns the equivalent force applied by the environment's potential at that point. What I can't quite work out is how the objects would interact with each other.
Also, am I close in my direction? Are there other solutions and models for these problems, and are they better? What am I missing (I must be missing something)?
The implementation is typically nothing like what you describe, which would be far too expensive. Instead, everything is reduced to matrix transformations: points are lists of coordinates that are operated on by matrices updating them to the next time interval. The matrices are themselves calculated from the physics (they are a linear solution of the forces at that moment, more or less).
Things get more complicated when you have very large differences in scale (say, when simulating stars in a galaxy). Then you may use a more hierarchical approach, so that critical points (fast-moving or, more accurately, strongly accelerating ones) are updated more often than non-critical points. But even then the in-memory representation is very abstract, nothing like as direct as "one object (and thread) per thing".
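As a hedged sketch of that idea (leaving out the matrix machinery), the typical shape is one flat array of state advanced by a fixed time step every frame, with no thread per object; all names here are illustrative.

```cpp
#include <vector>

struct Body {
    float x, y;    // position
    float vx, vy;  // velocity
};

// Semi-implicit Euler step: update velocities from forces, then positions
// from the new velocities.
void step(std::vector<Body>& bodies, float dt) {
    const float gx = 0.0f, gy = -9.81f; // the environment "potential": gravity
    for (Body& b : bodies) {
        b.vx += gx * dt;              // add wind, springs, etc. here
        b.vy += gy * dt;
        b.x  += b.vx * dt;
        b.y  += b.vy * dt;
    }
    // Object-object interaction is a separate collision pass: a broad phase
    // finds candidate pairs, a narrow phase resolves contacts by applying
    // impulses to both bodies' velocities.
}
```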
Does anyone have an algorithm for creating infinite terrain/landscape/surfaces?
Constraints:
1. The algorithm should start from a random seed.
2. The algorithm should be deterministic (the same seed gives the same result).
3. Other input parameters are allowed as long as constraint 2 is fulfilled.
4. The algorithm may output a 2D map.
5. It should create only a surface with varying height (mountains), not trees, oceans, etc.
6. I'm looking for an algorithm, not a piece of software.
7. It should be fast.
None of the other related questions here answer this question.
If anything is unclear, please let me know!
I would suggest something like Perlin noise. I've used it before for something like what you're describing, and it fits the bill. Check out this example to see the sort of output you would expect from the noise generator. Here is a link to the algorithm's pseudocode too:
http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
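If you want something self-contained to start from, here is a hedged C++ sketch of seeded 2D value noise (a simpler cousin of Perlin noise) with a few fBm octaves. It is deterministic for a given seed, defined everywhere (so the terrain is effectively infinite), and fast; the hash constants are illustrative.

```cpp
#include <cmath>
#include <cstdint>

// Integer hash of a lattice point -> [0, 1); mixing in the seed keeps
// the whole terrain reproducible from that one number.
float lattice(uint32_t seed, int32_t x, int32_t y) {
    uint32_t h = seed;
    h ^= static_cast<uint32_t>(x) * 0x9E3779B1u;
    h ^= static_cast<uint32_t>(y) * 0x85EBCA77u;
    h ^= h >> 16; h *= 0x7FEB352Du; h ^= h >> 15;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

float fade(float t) { return t * t * (3.0f - 2.0f * t); } // smoothstep

// Height contribution at any (x, y): smooth bilinear blend of the four
// surrounding lattice values.
float noise2d(uint32_t seed, float x, float y) {
    int32_t xi = static_cast<int32_t>(std::floor(x));
    int32_t yi = static_cast<int32_t>(std::floor(y));
    float tx = fade(x - xi), ty = fade(y - yi);
    float a = lattice(seed, xi,     yi    ), b = lattice(seed, xi + 1, yi    );
    float c = lattice(seed, xi,     yi + 1), d = lattice(seed, xi + 1, yi + 1);
    return (a + (b - a) * tx) * (1 - ty) + (c + (d - c) * tx) * ty;
}

// Sum several octaves (fractal Brownian motion) for mountain-like detail.
float terrainHeight(uint32_t seed, float x, float y) {
    float sum = 0.0f, amp = 1.0f, freq = 1.0f, norm = 0.0f;
    for (int o = 0; o < 5; ++o) {
        sum  += amp * noise2d(seed + o, x * freq, y * freq);
        norm += amp;
        amp  *= 0.5f; freq *= 2.0f;
    }
    return sum / norm; // in [0, 1); scale to your height range
}
```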
As others have already said, Perlin noise is a possibility. GPU Gems 3 has a nice chapter about procedural generation using (IIRC, it has been some time since I read it) 3D Perlin noise.
Of course there are other methods too; e.g., Vterrain.org might be worth a look.