We have used the Google Maps API v3 in our application and seem to have found a strange problem, which is not reproducible on all machines.
The following is the configuration of the machine where we could reproduce it:
OS: Windows 7
Graphics card: 128 MB video memory (I assume this should be enough, but I suspect the issue is related to these parameters.)
When zooming out from Google Maps in our application, Firefox 19 crashes completely with the following error:
AdapterVendorID: 0x8086, AdapterDeviceID: 0x166
GL Context? GL Context+ GL Layers? GL Layers+
xpcom_runtime_abort(###!!! ABORT: Framebuffer not complete – error 0x8cd6, mFBOTextureTarget 0xde1, aRect.width 4736, aRect.height 1967: file /builds/slave/rel-m-rel-osx64_bld-0000000000/build/gfx/layers/opengl/LayerManagerOGL.cpp, line 1446)
ProductID: {ec8030f7-c20a-464f-9b0e-13a3a9e97384}
After searching Google for similar issues, we found that turning off hardware acceleration stops the maps from crashing.
However, this is not a viable solution in our case: we run a website, and we cannot ask every user to turn hardware acceleration off.
Can anybody think of possible reasons why Firefox crashes while zooming out from Google Maps in our application? What exactly does hardware acceleration try to do here?
Please let me know if any further information is required.
Any suggestions would be highly appreciated.
It's just started crashing for me too, but it seems to happen only when I'm using OpenGL. If I turn it off, it works fine ... so far!
Foreword
Since this appears to be a bug in the D3D9 emulation on the Windows side, it would probably be best addressed to Microsoft. If you know how I could get in contact with the DirectX team, please tell me.
For the time being, I'm assuming that the only real chance we have is working around the bug.
What
We're investigating unresponsiveness found in the game Test Drive Unlimited 2.
It occurs only when opening the map, and only on recent cards (roughly "RTX"-class; the most precise common factor we found is GDDR6, because AMD cards also seem affected).
After long debugging, we found that it's not a simple fault of the game: Present returns D3DERR_DEVICELOST even when the game is in windowed mode.
When the game is in fullscreen mode, it properly does the required roundtrip through TestCooperativeLevel and Reset, but after the next frame is rendered, Present has lost the device again, causing the hang.
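For reference, the roundtrip described above normally looks like the following. This is a minimal sketch with the D3D9 device and return codes mocked as plain C++ so the control flow is self-contained; in the real API these are IDirect3DDevice9 methods returning HRESULTs:

```cpp
#include <cassert>

// Mock stand-ins for the real IDirect3DDevice9 return codes, so the
// control flow can be shown in self-contained form.
enum Hr { D3D_OK, D3DERR_DEVICELOST, D3DERR_DEVICENOTRESET };

struct Device {
    bool lost = true;  // start in the lost state the question describes
    Hr Present()              { return lost ? D3DERR_DEVICELOST : D3D_OK; }
    Hr TestCooperativeLevel() { return lost ? D3DERR_DEVICENOTRESET : D3D_OK; }
    Hr Reset()                { lost = false; return D3D_OK; }
};

// The per-frame roundtrip: if Present fails, poll TestCooperativeLevel
// and call Reset once the driver reports D3DERR_DEVICENOTRESET (real
// code must also release and recreate D3DPOOL_DEFAULT resources
// around the Reset call).
bool presentFrame(Device& dev) {
    if (dev.Present() == D3D_OK)
        return true;                 // frame shown normally
    if (dev.TestCooperativeLevel() == D3DERR_DEVICENOTRESET)
        dev.Reset();
    return false;                    // skip rendering this frame
}
```

In the bug at hand, the real device re-enters the lost state one frame after a successful Reset, which is what makes the loop spin forever.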
Now I'm looking for pointers on how to tackle this. While it's probably some internal state corruption of some sort, it's definitely triggered by an API call that is only issued when rendering the map in that game.
We will try to dig into d3d9.dll, but my suspicion is that the error code comes from some kernel or driver call, which is where our knowledge and tooling end.
Ideally I'd like to fuzzy-find the draw call by hooking everything and omitting random API calls, but I guess it's not that easy and would itself cause more errors under most conditions.
Also note that an apitrace we captured showed D3D_OK for every single call, including EndScene, right up until Present, so it's not as simple as checking return codes.
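Since the per-call return codes stay clean until Present, a deterministic bisection over the recorded call stream may work better than omitting random calls: judge each replay only by whether Present ends up with D3DERR_DEVICELOST. A sketch of the search itself, where replayFails is a hypothetical callback that replays only the first n recorded calls and reports whether the device loss reproduces:

```cpp
#include <cassert>
#include <functional>

// Binary-search for the first recorded API call that introduces the bad
// state, assuming the failure is monotone in the prefix length: once the
// offending call is included, every longer replay also fails.
int firstBadCall(int totalCalls, const std::function<bool(int)>& replayFails) {
    int lo = 1, hi = totalCalls;          // candidate range, 1-based
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (replayFails(mid))
            hi = mid;                     // bug already present in this prefix
        else
            lo = mid + 1;                 // prefix is clean, look later
    }
    return lo;                            // first failing prefix length
}
```

Roughly log2(N) replays instead of N, which matters if each replay needs a fullscreen mode switch to reproduce the loss.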
Running Direct3D 9 in debug/diagnostic mode also no longer seems possible on Windows 10, even with the June 2010 DirectX SDK installed.
Thanks in advance for any ideas, and for any addresses I could direct this problem to.
I have a problem with my game: after I build it and install it on my device, it lags and has a low frame rate (I tested on different devices and it's the same everywhere). I tried Unity's built-in profiler, which shows that everything is fine and always displays 100 (or more) FPS. So I think profiling the game after installation could help, but I can't find a proper profiler to use. Can someone give me a suggestion?
Thanks in advance
Unity's profiler is still a valid tool. You can find the slow parts in your scripts.
The main difference on mobile devices is the much weaker graphics hardware.
To have good performance on those cards you need to bring down the polygon count and number of draw calls.
You find those infos in the stats window of the Game tab.
Mobile shaders and baked lighting also help.
Find more hints in Unity's Mobile Optimization Guide.
Have a look at the Graphy plugin. Maybe it will be useful in your case.
https://assetstore.unity.com/packages/tools/gui/graphy-ultimate-fps-counter-stats-monitor-debugger-105778
So I ran into the dreaded 'Unfortunately, ... has stopped working' issue, where ART loads 2 classes and the debugger promptly tanks out - see this
So, in utter desperation, I switched from ART to Dalvik, half dreading a long session with ADB if the tablet got sour about the switch. It seemed to work. Tango works, albeit with a whole new set of head-scratchers (it's flakier about getting XyzIj; the flash is running; surface binding is working - hell, I can see the camera flashes in the surface showing the camera view - and if I try again and again, I do get Tango point data :-)
Can I assume all the tango issues are of my own doing and keep using Dalvik, or must I switch back to ART and try to do all of my debugging through logcat ?
The answer to the question in the title, "Can we use Dalvik with Tango?":
You should always use ART instead of Dalvik on Tango. Dalvik will work, but it is NOT stable on the Tango device and may cause issues like the depth out-of-sync you experienced.
Same problem here.
What helps is switching to Dalvik for debugging non-Tango-related issues, but this really slows the development process down, as all apps have to be re-optimized on every switch between a debugging and a testing session.
I'm currently writing an OpenGL renderer on my 2011 13" MacBook Pro with a Sandy Bridge graphics chip.
I'm encountering a lot of kernel panics and reboots while developing OpenGL code. Frequently, whenever I have an error, my system simply reboots rather than giving me a chance to catch the error and retrieve an error code.
I know it is related to the graphics driver, as the problem-reporting app displayed after the reboot identifies it as the component that crashed.
The specific issue seems closely related to texture creation. Clearly there is some bug in my code, but regardless, this really shouldn't be rebooting the OS under a high-level API like OpenGL.
Does OS X have any kind of debug-mode functionality I could enable, similar to D3D's, so that I can catch the error earlier rather than resort to Russian-roulette debugging?
(I'm aware of the OpenGL profiler, Driver Monitor and so on, yet have had little success with using these tools to catch these sorts of problems)
As you mention, OpenGL Profiler is the tool to use. You should at least check the boxes marked "Break on VAR error" and "Break on thread error". If you have trouble with it, let me know and I might be able to help. (I'm no expert, but I have had some luck with it.)
Beyond that, the crashes you're seeing are probably related to you giving a pointer to OpenGL, and it attempting to read or write memory through that pointer while the pointer is bad (or the length of the data is wrong). If it's texture related, then perhaps you're attempting to upload or download a texture and passing the wrong width and height, or have the wrong format. I've seen this happen when passing an incorrect number of elements to glDrawElements(). I was confused about whether an "element" was a vertex or an actual primitive (like a quad or a triangle) when it happened to me. The VAR error reporting helped me find that issue.
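Since the underlying failure is usually the driver reading past a buffer, a cheap defensive check before glDrawElements() can catch the wrong-count case in your own code before it reaches the driver. A sketch of the validation logic only (the GL call itself is elided; names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Verify every index fits inside the vertex buffer before the index
// array is handed to the driver. Also remember: the "count" argument of
// glDrawElements() is the number of INDICES (3 per triangle), not the
// number of primitives -- exactly the confusion described above.
bool indicesInRange(const std::vector<std::uint16_t>& indices,
                    std::size_t vertexCount) {
    for (std::uint16_t i : indices)
        if (i >= vertexCount)
            return false;  // this index would read past the buffer
    return true;
}
```

An assert on this before each draw costs little in debug builds and turns a kernel panic into a clean, debuggable failure at the call site.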
Just to come back to this for anyone looking: it turns out that the problem was entirely down to failing to set the current context as different threads began issuing OpenGL commands.
So, each thread needed to lock a mutex, set the OpenGL context, and then begin its work. It would then release the context and then the lock, guaranteeing non-simultaneous access to the one OpenGL context.
So, no deeply unknown behaviour here really, just an inexperienced newbie not fully implementing the guidelines out there. :-)
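For illustration, the locking scheme described above can be sketched like this. makeContextCurrent() and clearContext() are hypothetical stand-ins for the real platform calls (e.g. CGLSetCurrentContext on OS X), with a counter added only to demonstrate that access stays exclusive:

```cpp
#include <cassert>
#include <mutex>

std::mutex gContextMutex;   // guards the single shared GL context
int gActiveUsers = 0;       // threads currently "inside" the context

// Stand-ins for the real context calls (hypothetical names; on OS X
// the real call would be CGLSetCurrentContext or the NSOpenGLContext
// makeCurrentContext method).
void makeContextCurrent() { ++gActiveUsers; }
void clearContext()       { --gActiveUsers; }

// Each worker thread wraps its GL work in this: take the lock, make the
// context current, do the work, release the context, release the lock.
void renderJob(int& framesDone) {
    std::lock_guard<std::mutex> lock(gContextMutex);
    makeContextCurrent();
    assert(gActiveUsers == 1);  // never two threads in the context at once
    ++framesDone;               // ...issue GL commands here...
    clearContext();
}
```

Release order matters: clear the context before dropping the lock, or another thread can make the context current while this one still holds it.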
Others have responded with potential workarounds. But note that your application should never be able to cause the machine to panic (which these days simply reboots the machine and presents a dialog to submit the report to Apple).
At a minimum, you should send the report to Apple. Additionally, you should file a bug report at http://bugreport.apple.com including the panic log, a system profiler report, and any details you can provide about how to reproduce (ideally, a sample app binary and/or source code). Filing your own bug report will help in many ways -- prioritizing your bug (dupes bump priority), providing reproduction steps in case the problem & fix aren't obvious from the backtrace in the panic report, and opening a channel between you and Apple in case they need more information from you to track it down.
I'm currently porting a 3D C++ game from iOS to Android using NDK. The rendering is done with GLES2. When I finished rewriting all the platform specific stuff and ran the full game for the first time I noticed rendering bugs - sometimes only parts of the geometry would render, sometimes huge triangles would flicker across the screen, and so on and so on...
I tested it on a Galaxy Nexus running Android 4.1.2. glGetError() reported no errors. Also, the game ran beautifully on all iOS devices. I started suspecting a driver bug, and after many hours of hunting I found that using VAOs (GL_OES_vertex_array_object) caused the trouble. The same renderer worked fine without VAOs and produced rubbish with them.
I found this bug report at Google Code. Also I saw the same report at IMG forums and a staff member confirmed that it's indeed a driver bug.
All this made me think: how do I handle cases of confirmed driver bugs? I see two options:
Not using VAOs on Android devices.
Blacklisting specific devices and driver revisions, and not using VAOs on these devices.
I don't like either option.
Option 1 punishes all users who have a good driver. VAOs really do boost performance, and ignoring them because one device has a bug seems like a bad trade.
Option 2 is hard to do right. I can't test every Android device for broken drivers, and I expect the list to change constantly, making it hard to keep up.
Any suggestions? Is there perhaps a way to detect such driver bugs at runtime without testing every device manually?
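For what it's worth, the mechanics of option 2 are simple even if curating the list is the hard part; a sketch matching the GL_RENDERER string against known-bad substrings (the entries below are hypothetical examples, not a verified list):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical blacklist: substrings of the GL_RENDERER string for
// devices believed (from testing or bug reports) to ship a broken
// GL_OES_vertex_array_object implementation. Not a verified list.
static const std::vector<std::string> kBrokenVaoRenderers = {
    "PowerVR SGX 540",   // example entry (Galaxy Nexus class hardware)
};

// The renderer string would come from glGetString(GL_RENDERER) once at
// startup; the result decides which code path the renderer takes.
bool vaoSupportTrusted(const std::string& renderer) {
    for (const std::string& bad : kBrokenVaoRenderers)
        if (renderer.find(bad) != std::string::npos)
            return false;  // fall back to plain VBO binding on this device
    return true;
}
```

Pairing the substring with a driver-version check (GL_VERSION) lets you re-enable VAOs once a fixed driver ships.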
Bugs in OpenGL ES drivers on Android are a well-known thing, so it is entirely possible that you have hit one, especially if you are using advanced (not-so-well-tested) features like GL extensions.
In a large Android project we usually fight these issues using the following checklist:
Test and debug our own code thoroughly and check it against the OpenGL specification to make sure we are not misusing the API.
Google for the problem (!!!)
Contact the chipset vendor (usually they have a form on their website for developers to submit bugs, and once you have successfully submitted 2-3 real bugs you will know the direct emails of people who can help) and show them your code. Sometimes they find bugs in the driver, sometimes they find API misuse.
If the feature doesn't work on a couple of devices, create a workaround or fall back to a traditional rendering path.
If the feature is not supported by the majority of top-notch devices, just don't use it; you can add it later once the market is ready for it.
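One way to keep the fallback in item 4 manageable is to hide the choice behind a small interface selected once at startup, so the rest of the renderer never branches on it. A sketch with hypothetical names (the GL calls are elided as comments):

```cpp
#include <cassert>
#include <cstring>
#include <memory>

// Two interchangeable geometry-binding strategies behind one interface.
struct GeometryBinder {
    virtual ~GeometryBinder() = default;
    virtual const char* name() const = 0;
    virtual void bind() = 0;
};

struct VaoBinder : GeometryBinder {
    const char* name() const override { return "vao"; }
    void bind() override { /* glBindVertexArrayOES(vao); */ }
};

struct VboBinder : GeometryBinder {
    const char* name() const override { return "vbo"; }
    void bind() override { /* glBindBuffer + glVertexAttribPointer calls */ }
};

// Decide once at startup, e.g. from the extension string plus a
// driver blacklist; everything downstream just calls bind().
std::unique_ptr<GeometryBinder> makeBinder(bool vaoTrusted) {
    if (vaoTrusted)
        return std::make_unique<VaoBinder>();
    return std::make_unique<VboBinder>();
}
```

The per-draw cost is one virtual call, and adding a third path later (e.g. core-profile VAOs) does not touch the draw loop.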