DirectX 11: handling device removed - directx-11

I get the DXGI_ERROR_DEVICE_REMOVED error on some machines. According to this MSDN article (https://msdn.microsoft.com/en-us/windows/uwp/gaming/handling-device-lost-scenarios), this can happen and should be handled by your application.
I've managed to recreate the device, but I'm unsure how to handle all the content. It seems I have to create all vertex buffers and textures again, which essentially means I have to reload almost the entire scene. Is this really the correct way?

Yes, the resources are really gone. You have a few options (a minimal detection sketch follows the list):
Recreate your GPU device and hot reload everything from storage.
Keep CPU-side copies of all GPU resources and restore from those rather than from storage.
Save the application state and restart your whole process; this is the cleanest option.
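For the detection step, a minimal sketch of the render-loop check might look like this, assuming swapChain and device are your IDXGISwapChain and ID3D11Device, and RecreateDeviceAndResources is a hypothetical stand-in for whichever recovery option above you choose:

HRESULT hr = swapChain->Present(1, 0);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // Optionally inspect why the device vanished (driver crash, TDR, driver upgrade).
    HRESULT reason = device->GetDeviceRemovedReason();
    (void)reason; // e.g. log it
    // Every resource created from the old device is now invalid: release all
    // ID3D11* objects, create a fresh device and swap chain, then reload or
    // restore vertex buffers, textures and shaders.
    RecreateDeviceAndResources();
}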

Related

Xamarin high memory usage

I'm developing an app in Xamarin, but when I show a map on a page it doubles the memory usage.
When I pop the page and open it again, it increases even more.
Now it seems to be fixed when I call the garbage collector: GC.Collect();
I was wondering when it's the best moment to call it.
Personally I was thinking about putting it in onAppearing().
But I'm not sure if it can cause problems (like collecting things I still need) or if this is the right way to solve the high memory usage problem.
Calling GC.Collect() manually is something a developer normally shouldn't do, but with Xamarin it is sometimes necessary. Rather than calling it in OnAppearing, though, I would recommend calling it in OnDisappearing.
If you are in the mood, I would suggest using OnDisappearing to clean up any references or instances you no longer need for that page. Then compare your memory usage and see if that helps too. The reason I prefer that over calling GC.Collect is that a blocking GC.Collect pauses every thread your app runs. The more work the collection has to do, the longer your app pauses, and it may feel unresponsive and slow to the user.

Restore application state using ReadProcessMemory/WriteProcessMemory

Before spending several hours only to hit a dead end, I thought I should consult the experts here. I want to have a background app in Win32 which can capture the memory of other apps and dump it to a text file, then have the background app restore the app to its exact state.
It seems like ReadProcessMemory/WriteProcessMemory are functions that were created specifically for that. I understand new memory addresses will have to be created, but is there any reason why using WriteProcessMemory with new addresses could not do this? If you completely replace the process's memory, it should bring it back, right? Even if the app was closed and reopened using CreateProcess, you should get the exact same state when you completely rewrite the process's memory with WriteProcessMemory. Would the newly created addresses know how to point to each other, or is there any way to "make them know"?
Please shed some light on this for me. Thanks to all.

Out of Memory Message box

I have an MFC application developed with VS2003.
It works fine on XP, Vista, etc.
But when I run it on Windows 8 and use it for some time,
eventually no window is displayed. Instead, a message box with the message 'Out of Memory' appears, and the message box has the caption of my application.
This issue occurs rarely on Windows 7 too.
I have tried watching the handle counts using tools like Process Explorer, and they are not increasing.
Also, many forums say this is caused by an increase in unclosed handles or resources.
Can anyone suggest how I can find where the issue is, or provide a possible reason for it?
I can't set up devenv on the machine causing the issue, and I am unsure how to diagnose it by running a test build there.
Please provide your findings.
Thanks in advance.
You clearly have a memory leak somewhere. It's hard to be any more specific without seeing the code.
A debugger is really the best way to solve this problem. If you can reproduce the problem on your development machine, that would be the easiest case. If not, you can attach a debugger to the running process on another machine, either locally or remotely.
The MFC libraries also support some basic memory leak detection, turned on by default for Debug builds and controllable for other builds using the AfxEnableMemoryTracking function. You can use this feature to obtain information about which blocks of memory were allocated but not properly deallocated (i.e. were leaked).
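For example, in a Debug build you can bracket a suspect code path with CMemoryState checkpoints to dump blocks that were allocated but never freed. A minimal sketch, where RunSuspectCodePath is a hypothetical stand-in for the code under test:

#ifdef _DEBUG
CMemoryState before, after, diff;
before.Checkpoint();

RunSuspectCodePath();     // hypothetical: exercise the code you want to check

after.Checkpoint();
if (diff.Difference(before, after))
{
    TRACE("Memory leaked on this path:\n");
    diff.DumpStatistics();    // statistics appear in the debugger output window
}
#endif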
Like you mentioned, Process Explorer is another good way to track down resource leaks. Are you sure that the handle counts are remaining constant rather than trending upwards over time? If the values in those columns truly never change, as the question suggests, then something is off with the measurement: your application has to create objects in order to do its job. The point is to make sure that it disposes of them when it is finished.
If you can't reproduce the problem with the running application and have only the source code available, you'll need to go through the code and make sure that every use of new has a corresponding use of delete (and that new[] matches up with delete[]). And in general in C++, you should avoid explicit dynamic memory allocation wherever possible. Instead, use the container classes that are provided either by MFC or the standard library. For example, don't allocate arrays manually, use std::vector to do it for you. These container classes ensure that the memory is automatically deallocated in the destructor when the object goes out of scope.
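As a small sketch of that last point (ProcessSamples is a made-up example function):

#include <cstddef>
#include <vector>

void ProcessSamples(std::size_t count)
{
    // Instead of: int* samples = new int[count]; ... delete[] samples;
    std::vector<int> samples(count, 0);   // zero-initialized buffer
    // ... use samples.data() wherever a raw int* is required ...
}   // the buffer is released here automatically, even if an exception is thrown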

Flash game lagging in Chrome on Mac

I have created a simple Flash game similar to shooting balloons. You can find one example on this site. My game works fine on Windows in all browsers (locally and on a server), and it also works fine on Mac in Safari and Firefox, but the game is completely unplayable in Chrome on Mac.
Is this because of memory leaks in my game, or is this a problem with Chrome on Mac?
And how can I trace memory leaks in my game? It is coded in AS3 using Flash CS5.
Thanks
Indeed, it's not playable with Mac/Chrome (ver 15.0.874.120), Flash (ver 11.1.102.55).
You can try Monster Debugger:
http://demonsterdebugger.com/
It's quite easy to use.
But first, my guess is that you have something in your main game loop (onEnterFrame, timer). You can try removing things from your game loop piece by piece until you catch the problem.
EDIT:
I tried your game again, and I believe the problem is somewhere in how you handle the hearts; perhaps you forgot to call removeChild or remove some listeners when hearts are dropped and go off the screen.
Flash uses a garbage collector for managing memory. That means classic memory leaks do not exist by definition: everything that is still in memory is being referenced somewhere. There's no way for an external tool or application to know whether or not an object in memory should still be referenced.
What you can do is run your game through a profiler (Flash Builder has one) that will track your application's memory usage; with that you might figure out where you're forgetting to clear a reference (var = null) or remove a display object (mc.removeChild(var)).
It seems like you're not clearing or recycling bitmap data, or some other object, when it's no longer useful. That results in adding more and more data to memory (often in an Enter Frame event handler) and causes major performance issues like those we see here.

Comparing cold-start to warm start

Our application takes significantly more time to launch after a reboot (cold start) than if it was already opened once (warm start).
Most (if not all) of the difference seems to come from loading DLLs; when the DLLs are in cached memory pages they load much faster. We tried using ClearMem to simulate a reboot (since it's much less time-consuming than actually rebooting) and got mixed results: on some machines it seemed to simulate a reboot very consistently, and on some it did not.
To sum up my questions are:
Have you experienced differences in launch time between cold and warm starts?
How have you dealt with such differences?
Do you know of a way to dependably simulate a reboot?
Edit:
Clarifications for comments:
The application is mostly native C++ with some .NET (the first .NET assembly that's loaded pays for the CLR).
We're looking to improve load time; obviously we did our share of profiling and improved the hotspots in our code.
Something I forgot to mention was that we got some improvement by re-basing all our binaries so the loader doesn't have to do it at load time.
As for simulating reboots, have you considered running your app from a virtual PC? Using virtualization you can conveniently replicate a set of conditions over and over again.
I would also consider some type of profiling app to spot the bit of code causing the time lag, and then making the judgement call about how much of that code is really necessary, or if it could be achieved in a different way.
It would be hard to truly simulate a reboot in software. When you reboot, all devices in your machine get their reset bit asserted, which should cause all memory system-wide to be lost.
In a modern machine you've got memory and caches everywhere: there's the VM subsystem which is storing pages of memory for the program, then you've got the OS caching the contents of files in memory, then you've got the on-disk buffer of sectors on the hard drive itself. You can probably get the OS caches to be reset, but the on-disk buffer on the drive? I don't know of a way.
How did you profile your code? Not all profiling methods are equal and some find hotspots better than others. Are you loading lots of files? If so, disk fragmentation and seek time might come into play.
Maybe even sticking basic timing information into the code, writing out to a log file and examining the files on cold/warm start will help identify where the app is spending time.
Without more information, I would lean towards filesystem/disk cache as the likely difference between the two environments. If that's the case, then you either need to spend less time loading files upfront, or find faster ways to load files.
Example: if you are loading lots of binary data files, speed up loading by combining them into a single file, then do a slurp of the whole file into memory in one read and parse the contents from there. Fewer disk seeks and less time spent reading off of disk. Again, maybe that doesn't apply.
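A sketch of that slurp-and-parse idea, with error handling kept minimal:

#include <cstddef>
#include <cstdio>
#include <vector>

// Read an entire packed data file into memory in one go; parsing then walks
// the in-memory buffer instead of issuing many small reads and seeks.
std::vector<char> SlurpFile(const char* path)
{
    std::vector<char> data;
    if (std::FILE* f = std::fopen(path, "rb"))
    {
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        if (size > 0)
        {
            data.resize(static_cast<std::size_t>(size));
            std::fread(data.data(), 1, data.size(), f);
        }
        std::fclose(f);
    }
    return data;
}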
I don't know offhand of any tools to clear the disk/filesystem cache, but you could write a quick application to read a bunch of unrelated files off of disk to cause the filesystem/disk cache to be loaded with different info.
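A minimal sketch of such a tool; the file paths are hypothetical placeholders and should point at large files unrelated to your application:

#include <cstdio>

// Read large, unrelated files with ordinary buffered I/O so their pages
// displace whatever the OS file cache was holding for the app under test.
void EvictFileCache(const char* const paths[], int count)
{
    static char buf[64 * 1024];
    for (int i = 0; i < count; ++i)
    {
        if (std::FILE* f = std::fopen(paths[i], "rb"))
        {
            while (std::fread(buf, 1, sizeof(buf), f) == sizeof(buf))
            {
                // nothing to do; we only want the read's side effect on the cache
            }
            std::fclose(f);
        }
    }
}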
@Morten Christiansen said:
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
That makes the customer pay for initializing our app at every boot even when it isn't used, I really don't like that option (neither does Raymond).
One successful way to speed up application startup is to switch DLLs to delay-load. This is a low-cost change (some fiddling with project settings) but can make startup significantly faster. Afterwards, run depends.exe in profiling mode to figure out which DLLs load during startup anyway, and revert the delay-load on those. Remember that you may also delay-load most Windows DLLs you need.
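As a sketch of the mechanics (heavy.dll is a hypothetical dependency; in Visual Studio this lives under the Linker > Input > Delay Loaded DLLs project setting):

link /DELAYLOAD:heavy.dll delayimp.lib main.obj other.lib

delayimp.lib supplies the helper that resolves the import the first time a function from heavy.dll is actually called, instead of at process startup.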
A very effective technique for improving application cold launch time is optimizing function link ordering.
The Visual Studio linker lets you pass in a file listing all the functions in the module being linked (or just some of them - it doesn't have to be all of them), and the linker will place those functions next to each other in memory.
When your application is starting up, there are typically calls to init functions throughout your application. Many of these calls will be to a page that isn't in memory yet, resulting in a page fault and a disk seek. That's where slow startup comes from.
Optimizing your application so all these functions are together can be a big win.
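As a sketch, the MSVC linker takes the list through the /ORDER option (the function names here are hypothetical, and C++ functions must appear by their decorated names):

link /ORDER:@startup_order.txt main.obj other.obj

where startup_order.txt contains one function name per line, for example:

?InitGraphics@@YAXXZ
?LoadConfig@@YAXXZ
?InitAudio@@YAXXZ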
Check out Profile Guided Optimization in Visual Studio 2005 or later. One of the things that PGO does for you is function link ordering.
It's a bit difficult to work into a build process, because with PGO you need to link, run your application, and then re-link with the output from the profile run. This means your build process needs a runtime environment and has to deal with cleaning up after bad builds and all that, but the payoff is typically a cold launch that's 10% or more faster, with no code changes.
There's some more info on PGO here:
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
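A sketch of that flow with the VS2005-era command-line tools (app.cpp is a stand-in for your project):

rem 1. Compile with whole-program optimization and link an instrumented build:
cl /O2 /GL app.cpp /link /LTCG:PGINSTRUMENT
rem 2. Exercise a typical cold-start scenario so profile (.pgc) data is recorded:
app.exe
rem 3. Re-link using the recorded profile:
link /LTCG:PGOPTIMIZE app.obj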
As an alternative to a function order list, just group the code that will be called during startup into the same sections:
#pragma code_seg(".startUp")
//...
#pragma code_seg
#pragma data_seg(".startUp")
//...
#pragma data_seg
It should be easy to maintain as your code changes, but has the same benefit as the function order list.
I am not sure whether the function order list can specify global variables as well, but using #pragma data_seg simply works.
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
Another note, is that .NET 3.5SP1 supposedly has much improved cold-start speed, though how much, I cannot say.
It could be the NICs (LAN cards): your app may depend on certain other services that require the network to come up. Profiling your application alone may not tell you this, so you should examine your application's dependencies as well.
If your application is not very complicated, you can just copy all the executables to another directory; running from there should be similar to a reboot. (Cut and paste doesn't seem to work: Windows is smart enough to know that files moved to another folder are already cached in memory.)
