I have a 3D model of a road, with textures and without.
When I load the road without textures, everything works fine at ~60 fps. But when I load the road with textures, there are two cases:
1) If the 3D model is not big, it loads and works, but at a very low frame rate (~10-20 fps).
2) If the 3D model is big, it loads without any errors or warnings, but after that I get this error:
WebGL: CONTEXT_LOST_WEBGL: loseContext: context lost
THREE.WebGLShader: gl.getShaderInfoLog() null
The error occurs here:
animate = function() {
    renderer.render(scene, camera); // <--- error occurs here
    stats.update();
    controls.update(clock.getDelta());
    return requestAnimationFrame(animate);
};
I've read that this error means the 'browser or the OS decides to reset the GPU to get control back', but how can I solve this problem?
1. The reason
It is exactly what you already said.
Your browser hangs while rendering the WebGL scene and loses its context because it has effectively lost control of your GPU.
Either you have a huge memory leak in your application, or your machine does not have enough power to render your textured model in a WebGL scene. Squeezing too much out of it by trying to render a heavy object with a very high texture resolution can lead to a context loss.
2. Diagnosis
If 3D model is not big then it loads and works but with very low fps ~ 10-20
makes me think your machine actually can't handle 3D in a browser very well.
3. Troubleshooting
The first piece of advice is to decrease the resolution of your scene; you can do this by dividing the setSize of your renderer object by 2 or 3.
For performance-intensive games, you can give setSize smaller values, like window.innerWidth/2 and window.innerHeight/2, for half the resolution. This does not mean the game will only fill half the window; rather, it will look a bit blurry and scaled up.
This means you need to add this to your renderer:
renderer.setSize( window.innerWidth/2, window.innerHeight/2 );
Tweaking the far rendering distance of your camera also helps to gain some performance. Generally these are the most used values: new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );. If you bring the far value down to 800 or 700, you can squeeze extra FPS out of your scene (at the price of rendering distance, of course).
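Putting both tweaks together, a minimal sketch might look like this (stretching the half-resolution canvas back to full window size with CSS is an assumption about the setup, not code taken from the question):

var renderer = new THREE.WebGLRenderer();
// Render at half resolution; the third argument `false` leaves the canvas
// CSS size untouched so it can still be stretched to fill the window
// (it will just look a bit blurrier).
renderer.setSize( window.innerWidth / 2, window.innerHeight / 2, false );
renderer.domElement.style.width = '100%';
renderer.domElement.style.height = '100%';
document.body.appendChild( renderer.domElement );

// Far plane brought down from 1000 to 700 to reduce rendering distance.
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 700 );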
If your application starts running fine after these tweaks, then you were indeed facing a resource-starvation problem, which means either your computer is not fit to run WebGL at full load or you have a huge memory leak.
You can also test your application on another, better computer and see how it performs and how smooth it runs.
If your computer is a cutting-edge, high-end gaming machine, then the only other thing I can suggest is to look at the resolution of your textures and scale them down a bit.
I'll also leave this link: WebGL - HandlingContextLost (you have probably already seen it), which provides some troubleshooting steps and ways to recover a crashed WebGL instance.
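As a rough illustration of the recovery pattern described there, a minimal sketch (the animationId variable and the animate function name are assumptions, not from your code):

var animationId;
var canvas = renderer.domElement;

// Stop the render loop when the context is lost; preventDefault tells the
// browser we intend to handle restoration ourselves.
canvas.addEventListener('webglcontextlost', function (event) {
    event.preventDefault();
    cancelAnimationFrame(animationId);
}, false);

// When the context comes back, re-create GPU resources (textures, buffers,
// shader programs) and restart the loop.
canvas.addEventListener('webglcontextrestored', function () {
    animationId = requestAnimationFrame(animate);
}, false);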
4. Solution (to this specific question)
After a quick chat with Eugene, it turned out that the root of the problem in his project was RAM usage: his 3D model used so much RAM that Chrome was taking up to 1.6 GB.
The blue bumps in his memory graph are when his WebGL app is running.
After flagging this to him he came back with his solution:
I’ve solved my problem. I’ve concatenated roads to one file and changed renderer size
Related
function render(time, scene) {
    // Optionally render into an offscreen framebuffer instead of the canvas.
    if (useFramebuffer) {
        gl.bindFramebuffer(gl.FRAMEBUFFER, scene.fb);
    }
    gl.viewport(0, 0, canvas.width, canvas.height);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.enable(gl.DEPTH_TEST);
    renderScene(scene);
    gl.disable(gl.DEPTH_TEST);
    if (useFramebuffer) {
        // Switch back to the default framebuffer and copy the result over.
        gl.bindFramebuffer(gl.FRAMEBUFFER, null);
        copyFBtoBackBuffer(scene.fb);
    }
    window.requestAnimationFrame(function(time) {
        render(time, scene);
    });
}
I'm not able to share the exact code I use, but a mockup will illustrate my point.
I'm rendering a fairly complex scene and am also doing some ray tracing in WebGL. I've noticed two very strange performance issues.
1) Inconsistent frame rate between runs.
Sometimes when the page starts, the first ~100 frames render in 25 ms each; then the frame time suddenly jumps to 45 ms, without any user input or changes to the scene. I'm not updating any buffer or texture data during a frame, only shader uniforms. When this happens, GPU memory usage stays constant.
2) Rendering to the default framebuffer is slower than using an extra pass.
If I render to a framebuffer I created and then blit that to the HTML canvas (the default framebuffer), I get a 10% performance increase. So in the code snippet, performance is gained when useFramebuffer == true, which seems very counter-intuitive.
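For reference, the copyFBtoBackBuffer step in the mockup could be implemented roughly like this (a sketch assuming a WebGL2 context and a colour-only copy, not the actual code):

function copyFBtoBackBuffer(fb) {
    // Read from the offscreen framebuffer, draw into the default one.
    gl.bindFramebuffer(gl.READ_FRAMEBUFFER, fb);
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
    gl.blitFramebuffer(
        0, 0, canvas.width, canvas.height,
        0, 0, canvas.width, canvas.height,
        gl.COLOR_BUFFER_BIT, gl.NEAREST
    );
}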
Edit 1:
Due to changes in requirements, the scene will always be rendered to a framebuffer and then copied to the canvas. This makes question 2) a non-issue.
Edit 2:
System specs of the PC this was tested on:
OS: Win 10
CPU: Intel i7-7700
GPU: Nvidia GTX 1080
RAM: 16 GB
Edit 3:
I profiled the scene using chrome://tracing. The first ~100-200 frames render in 16.6 ms each.
Then it starts dropping frames.
I'll try profiling everything with timer queries, but I'm afraid that each render actually takes the same amount of time and the buffer swap randomly takes twice as long.
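For reference, a sketch of per-frame GPU timing with the EXT_disjoint_timer_query_webgl2 extension (this assumes a WebGL2 context; drawFrame is a placeholder for the actual rendering):

var ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');

function timedFrame() {
    var query = gl.createQuery();
    gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
    drawFrame();
    gl.endQuery(ext.TIME_ELAPSED_EXT);

    // Results arrive a few frames later, so poll for them.
    function poll() {
        var available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
        var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
            var gpuTimeMs = gl.getQueryParameter(query, gl.QUERY_RESULT) / 1e6;
            console.log('GPU time: ' + gpuTimeMs.toFixed(2) + ' ms');
        } else if (!available) {
            requestAnimationFrame(poll);
        }
    }
    requestAnimationFrame(poll);
}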
Another thing I noticed is that this only starts happening after I have used Chrome for a while. When the problems start, clearing the browser cache or killing the Chrome process doesn't help; only a system reboot does.
Is it possible that Chrome is throttling the GPU on a whim?
P.S.
The frame times changed because of some optimizations, but the core problem persists.
I'm working with Cocos2d-x to port my PC game to Android.
For the sprites, I wanted to optimize the rendering process, so I decided to dynamically create sprite sheets containing the frames for all the sprites.
Unfortunately, this makes the rendering process about 10-15 times slower than using small textures containing only the frames for the current sprite (on a mobile device; on Windows everything runs smoothly).
I initially thought it could be related to switching between the sheets (big textures, e.g. 4096x4096) when the rendering process displays one sprite from one sheet, then another sprite from another sheet, and so on, causing a lot of switches between huge textures.
So I sorted the sprites before putting their frames into the sprite sheets, and I can confirm that the switches are now non-existent.
After a long investigation, profiling, tests, etc., I finally found that a single OpenGL function takes all the time:
glBufferData(GL_ARRAY_BUFFER, sizeof(_quadVerts[0]) * _numberQuads * 4, _quadVerts, GL_DYNAMIC_DRAW);
Calling this function takes a long time (the profiler says more than 20 ms per call) when I use the big textures, but it is quite fast when I use small ones (about 2 ms).
I don't really know OpenGL; I'm only using it because Cocos2d-x does, and I'm not comfortable trying to debug/optimize the engine myself because I really think its authors are far better at that than I am :)
I might be misunderstanding something; I've been stuck on this for several days and have no idea what to do now.
Any clues?
Note: I'm talking about glBufferData, but I have the same issue with glBindFramebuffer, which is very slow with big textures. I assume it's all the same underlying problem.
Thanks
glBufferData is normally a costly call, since it involves a CPU-to-GPU transfer.
The logic behind Renderer::drawBatchedQuads is to flush the quads that have been buffered into a temporary array. The more quads you have to render, the more data has to be transferred.
Since the quads' properties (positions, texture coordinates, colors) are likely to change each frame, a CPU-to-GPU transfer is required every frame, as hinted by the GL_DYNAMIC_DRAW flag.
According to specs:
GL_DYNAMIC_DRAW: The data store contents will be modified repeatedly and used many times as the source for GL drawing commands.
There are possible alternatives to glBufferData, such as glMapBuffer or glBufferSubData, that could be tried for comparison.
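As a rough illustration of the glBufferSubData route, a sketch in WebGL-style JavaScript (used here to keep the examples in one language; the GL ES call Cocos2d-x uses is analogous but also takes a size argument, and all the names and sizes below are placeholders):

var MAX_QUADS = 4096;                 // placeholder capacity
var FLOATS_PER_QUAD = 4 * 9;          // placeholder layout: 4 verts x 9 floats
var quadBuffer = gl.createBuffer();

// Allocate the buffer store once, at its maximum size...
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.bufferData(gl.ARRAY_BUFFER, MAX_QUADS * FLOATS_PER_QUAD * 4, gl.DYNAMIC_DRAW);

// ...then each frame overwrite only the range that actually changed,
// instead of re-allocating the whole store with bufferData.
// quadVerts is assumed to be a Float32Array.
function uploadQuads(quadVerts, numberQuads) {
    gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, quadVerts.subarray(0, numberQuads * FLOATS_PER_QUAD));
}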
I'm working on the visualizations for an interactive installation, as seen here: http://vimeo.com/78977964. But I'm running into some issues with the smoothness of the animation. While it reports a steady 30 or 60 fps, the actual image is not smooth at all; imagine a 15 fps animation with an unsteady clock. Can you give me some pointers on where to look when optimizing my sketch?
What I'm doing is receiving relative coordinates (0.-1. on the x and y axes) through oscP5. These go through a data handler that checks that there hasn't been input in that area for x amount of time. If all is OK, a new Wave object is created, which draws an expanding (modulated) circle at its location. As the installation had to be very flexible, all visual parameters are adjustable through a controlP5 GUI.
All of this is running on a computer with an i7 3770 @ 3.4 GHz, 8 GB RAM, and two Radeon HD 7700s driving 4 to 10 Panasonic EX600 XGA projectors over VGA (simply drawing a 3072x1536 window). The CPU and GPU load are reasonable (http://imgur.com/a/usNVC), but the performance is not what we want it to be.
We tried a number of solutions, including: changing the rendering mode; trying a different GPU; different drawing methods; changing the process priority; exporting to an application; etc. But nothing made a noticeable improvement. So now I'm guessing it's either Processing/Java not being able to run smoothly over multiple monitors, or something in my code causing this...
How I draw the waves within the Wave class (this is called from the main draw loop for every wave object):
public void draw(){
    this.diameter = map(this.frequency, lowLimitFrequency, highLimitFrequency, speedLowFreq, speedHighFreq) * (millis()-date)/5f;
    strokeWeight(map(this.frequency, lowLimitFrequency, highLimitFrequency, lineThicknessLowFreq, lineThicknessHighFreq)*map(this.diameter, 0, this.maxDiameter, 1., 0.1)*50);
    stroke(255,255,255, constrain((int)map(this.diameter, 0, this.maxDiameter, 255, 0),0,255));

    pushMatrix();
    beginShape();
    translate(h*this.x*width, v*this.y*height);
    //this draws a circle from line segments, modulated by a sine wave
    for (int i = 0; i < segments; i++) {
        vertex(
            (this.distortion*sin(map(i, 0, segments, 0, this.periods*TWO_PI))+1)* this.diameter*sin(i*TWO_PI/segments),
            (this.distortion*sin(map(i, 0, segments, 0, this.periods*TWO_PI))+1)* this.diameter*cos(i*TWO_PI/segments)
        );
    }
    //repeat the first vertex to close the circle
    vertex(
        (this.distortion*sin(map(0, 0, segments, 0, this.periods*TWO_PI))+1)* this.diameter*sin(0*TWO_PI/segments),
        (this.distortion*sin(map(0, 0, segments, 0, this.periods*TWO_PI))+1)* this.diameter*cos(0*TWO_PI/segments)
    );
    endShape();
    popMatrix();
}
I hope I've provided enough information to grasp what's going wrong!
My colleagues and I have had similar issues here running a PowerWall (6x3 monitors) from one PC using an Eyefinity setup. The short version is that, as you've discovered, there are a lot of problems running Processing sketches across multiple cards.
We've tended to work around it with a different approach: multiple copies of the application, each spanning one monitor only, rendering a subsection and syncing themselves up. This is the approach people tend to use when driving large displays from multiple machines, but it seems to sidestep these framerate problems as well.
For Processing, there are a couple of libraries that support this: Dan Shiffman's Most Pixels Ever and the Massive Pixel Environment from the Texas Advanced Computing Center. They both have reasonable examples that should help you through the setup phase.
One proviso, though: we kept encountering crashes from JOGL when we tried this with OpenGL rendering. That was about 6 months ago, so maybe it's fixed now. Your draw loop looks like it will be OK using Java2D, so hopefully that won't be an issue for you.
When I change the texture of my meshes, on some computers the application freezes for about half a second. I do this on 100 different meshes. In the Chrome profiler I can see that the Three.js method setTexture is at the top of the CPU usage.
The way I apply the next texture is the simplest possible:
this.materials.map = this.nextTexture;
This works, but I have no idea how to optimize it.
If I used a particle system instead, would that improve anything?
Thanks a lot
Are you really using 100 different textures?
Try sorting your objects according to texture, to minimize texture swapping.
Changing textures is one of the more expensive GPU operations.
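A minimal Three.js sketch of that idea, sharing one material per texture and grouping draws so each texture is bound as few times as possible (the meshes array, the userData.textureUrl field, and the material type are assumptions, not from your code; renderOrder needs a reasonably recent Three.js):

var loader = new THREE.TextureLoader();
var materialCache = {};

// One material (and therefore one GPU texture) per image, shared by every
// mesh that uses it, so nothing is uploaded or duplicated more than once.
function materialFor(url) {
    if (!materialCache[url]) {
        materialCache[url] = new THREE.MeshBasicMaterial({ map: loader.load(url) });
    }
    return materialCache[url];
}

var urls = [];
meshes.forEach(function (mesh) {
    var url = mesh.userData.textureUrl;   // placeholder: however you track the image
    mesh.material = materialFor(url);
    if (urls.indexOf(url) === -1) urls.push(url);
    // Meshes that share a texture get the same renderOrder, so they are
    // drawn consecutively and the texture is swapped as rarely as possible.
    mesh.renderOrder = urls.indexOf(url);
});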
I'm trying to create a platformer game, and I am taking various sprite blocks and piecing them together to draw the level. This requires drawing a large number of sprites on the screen every single frame. A good computer has no problem drawing all the sprites, but it starts to impact performance on older computers. Since this is NOT a big game, I want it to be able to run on almost any computer. Right now, I am using the following DirectX function to draw my sprites:
D3DXVECTOR3 center(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 position(static_cast<float>(x), static_cast<float>(y), z);
(my LPD3DXSPRITE object)->Draw((sprite texture pointer), NULL, &center, &position, D3DCOLOR_ARGB(a, r, g, b));
Is there a more efficient way to draw these pictures on the screen? Is there a way I can use less complex picture files (I'm using regular PNGs right now) to speed things up?
To sum it up: what is the most performance-friendly way to draw sprites in DirectX? Thanks!
The ID3DXSPRITE interface you are using is already pretty efficient. Make sure all your sprite draw calls happen in one batch if possible between the sprite begin and end calls. This allows the sprite interface to arrange the draws in the most efficient way.
For extra performance you can load multiple smaller textures into one larger texture and use texture coordinates to pull them back out. This means textures don't have to be swapped as frequently. See:
http://nexe.gamedev.net/directknowledge/default.asp?p=ID3DXSprite
The file type you are using does not matter, as long as the images are preloaded into textures. Make sure you load them all into textures once when the game/level is loading. Once they have been loaded into textures, it does not matter what format they were originally in.
If you still are not getting the performance you want, try using PIX to profile your application and find where the bottlenecks really are.
Edit:
This is too long to fit in a comment, so I will edit this post.
When I say swapping textures I mean binding them to a texture stage with SetTexture. Each time SetTexture is called there is a small performance hit as it changes the state of the texture stage. Normally this delay is fairly small, but can be bad if DirectX has to pull the texture from system memory to video memory.
ID3DXSprite will reorder the draws that fall between the Begin and End calls for you. This means SetTexture will typically only be called once for each texture, regardless of the order you draw them in.
It is often worth loading small textures into a large one. For example, if it were possible to fit all the small textures into one large one, then the texture stage could just stay bound to that texture for all draws. Normally this gives a noticeable improvement, but testing is the only way to know for sure how much it will help. It would look terrible, but you could just throw in any large texture and pretend it is the combined one to test what performance difference there would be.
I agree with dschaeffer, but would like to add that if you are using a large number of different textures, it may be better to pack them together into a single (or a few) larger textures and adjust the texture coordinates of the different sprites accordingly. Texture state changes cost a lot, and this may speed things up on older systems.