I develop in Visual Studio (with MonoGame for Windows Phone 8.1). When I launch my app with "Run without debugging" it starts pretty fast, but under the debugger it launches very slowly (about 5 minutes, not counting build time!). The problem I see (beyond the slow loading of external symbols) is that my app loads many graphics files, and before loading a picture it searches for its HD version, its HD-and-localized version, and its localized-only version. Most files don't have HD versions; some are localized, some are not. So in the log I see many messages like:
A first chance exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.ni.dll
A first chance exception of type 'System.IO.FileNotFoundException' occurred in MonoGame.Framework.DLL
Of course, when starting without debugging, none of that debug machinery is active and the app launches fast.
The only way to check whether a file is in the Content folder is to try to open it (TitleContainer.OpenStream) and catch the exception, so I can't avoid generating those exceptions. How can I speed up the debug launch by somehow disabling this painfully slow FileNotFoundException handling?
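For reference, the existence check currently looks roughly like this (a simplified sketch; the real code also walks through the HD and localized variants):
private static bool openStreamExists(string assetName)
{
    // The only option available: try to open the stream and treat
    // FileNotFoundException as "the file is not there". Every miss
    // raises a first-chance exception, which is exactly what makes
    // the debug launch so slow.
    try
    {
        using (var s = Microsoft.Xna.Framework.TitleContainer.OpenStream(assetName))
        {
            return s != null;
        }
    }
    catch (System.IO.FileNotFoundException)
    {
        return false;
    }
}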
In my case I solved the problem with the annoying exception handling by preloading the file names recursively and then searching that string list:
private static List<string> mContentFilenames = new List<string>();
private static void preloadContentFilenamesRecursive(StorageFolder sf)
{
var files = sf.GetFilesAsync().AsTask().ConfigureAwait(false).GetAwaiter().GetResult();
if (files != null)
{
foreach (var f in files)
{
mContentFilenames.Add(f.Path.Replace('\\','/'));
}
}
var folders = sf.GetFoldersAsync().AsTask().ConfigureAwait(false).GetAwaiter().GetResult();
if (folders != null)
{
foreach (var f in folders)
{
preloadContentFilenamesRecursive(f);
}
}
}
private static void preloadContentFilenames()
{
if (mContentFilenames.Count > 0)
return;
var installed_loc = Windows.ApplicationModel.Package.Current.InstalledLocation;
var content_folder = installed_loc.GetFolderAsync("Content").AsTask().ConfigureAwait(false).GetAwaiter().GetResult();
if (content_folder != null)
preloadContentFilenamesRecursive(content_folder);
}
private static bool searchContentFilename(string name)
{
var v = from val in mContentFilenames where val.EndsWith(name.Replace('\\', '/')) select val;
return v.Any();
}
Update
But use this code only when a debugger is attached. I underestimated Microsoft - they are happy to turn your debugging into hell. Their lame error handling and the lack of a file-exists function force you to recursively enumerate files to implement an exception-free file-exists check yourself, but - surprise - if a debugger is not attached, this code makes the app silently exit without any exception. And before the app crashes it checks a random number of files, so the problem is not in a particular method - it just "can't check many files", and how many varies from run to run. So I only use the preloaded list when a debugger is attached, roughly as in the sketch below.
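A minimal sketch of that gating, assuming the preload methods above (openStreamExists is the try/catch check sketched in the question):
private static bool contentFileExists(string name)
{
    if (System.Diagnostics.Debugger.IsAttached)
    {
        // Debugger attached: consult the preloaded file list so that
        // no first-chance FileNotFoundException is ever raised.
        preloadContentFilenames();
        return searchContentFilename(name);
    }
    // No debugger attached: the exception-based check is fast here,
    // and the StorageFolder enumeration is what silently kills the app,
    // so fall back to the plain open-and-catch path.
    return openStreamExists(name);
}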
Considering that VS crashes in about 50% of attempts to launch any app (small or big) on a real WP8.1 device, and that there is no access to the device file system, imagine how difficult it was to find the root cause of the crashes. I spent almost a day on it!
Btw, after twenty years of working with Windows, Windows Store and Windows Phone apps, I'd really like to meet someone from Microsoft just to look them in the eye. To see what it is - stupidity or sadism? Do they really hate us developers? Why?
Related
Situation:
We have an MFC/C++ Visual Studio (2005) application consisting of a lot of executables and DLLs, all using MFC and interconnected via DCOM. Some run on a controller (W2012) which controls slave computers running WES2009.
The problem:
For diagnostic purposes we embed minidump generation in all of our processes. This mechanism works fine in all processes except one: the GUI exe. All processes including the GUI create dmp files, BUT the dmp file contents of the GUI seem to be different/wrong. When I intentionally crash our application with e.g. a null pointer dereference, the dmp files of all processes/dlls (except the GUI) point to the cause (the null pointer dereference)! The dmp file of the GUI process is created and can be opened in Visual Studio, but none of the threads point to the cause (the null pointer dereference). WinDbg does not find the cause either! The strange thing is that when we manually use WriteStackDetails() to dump the call stack, it returns the correct problematic line! So why can't MiniDumpWriteDump() do the same for only this one process? What could be the discriminating factor? Anyone any idea?
What we tried:
We tried crashes in all other processes and dlls and they all seem to work OK, except the GUI process! Unicode / non-Unicode does not seem to matter. A separate test application works well, also when I link in our production code library which contains the UnhandledExceptionFilter() and MiniDumpWriteDump() code. Crashes in sub(-sub) dlls do not seem to matter. The project settings with respect to exception handling appear to be all the same. Anyone any idea?
Some more info and remarks:
Our production code (controller and slaves) is running in separate virtual boxes for development purposes.
Yes, we understand that the minidump should ideally be created from another process (is there an example somewhere, regarding process querying and security?), but doing it in-process seems to work 'always OK' for now. So we accept, for now, the risk that it might hang in rare situations.
What I mean by the dmp file contents being different/wrong is the following:
For our non-GUI exe we get the following OK thread / callstack information:
0xC0000005: Access violation reading location 0x00000000.
Studio automatically opens the correct source and the "breakpoint" is set to the faulty line of code.
In the Call stack tab I see my own functions in my own dll which has caused the crash: my null pointer dereference.
In the Threads tab I also see my active thread, and its Location also points to the faulty function that crashed.
So all is fine and usable in this situation! Super handy functionality!
For our GUI exe, which links to the same production library code for MiniDumpWriteDump() and the exception filter, we get the following NOK thread / callstack information:
Unhandled exception at 0x77d66e29 (ntdll.dll) in our_exe_pid_2816_tid_2820_crash.dmp: 0xC0150010: The activation context being deactivated is not active for the current thread of execution
Visual Studio 2005 does not show my faulty code as being the cause!
In the Call stack tab I don't see my own faulty function.
The Call stack tab shows that the problem is in ntdll.dll!RtlDeactivateActivationContextUnsafeFast()
The topmost function call of our code that is shown is in a totally different GUI helper dll, which is not related to my intentionally introduced crash!
The Threads tab also shows the same.
For both situations I use the same Visual Studio 2005 (running on W7) with the same symbol path settings! Visual Studio 2017 also cannot analyze the 'wrong' dmp files. Between the two tests above there is no rebuild, so no mismatch occurs between the exe/dlls and the pdbs. In one situation it works fine and in the other it doesn't!?
The code, stripped down to its essentials, is shown below:
typedef BOOL (_stdcall *tMiniDumpWriteDump)(HANDLE hProcess, DWORD dwPid, HANDLE hFile,
MINIDUMP_TYPE DumpType, CONST PMINIDUMP_EXCEPTION_INFORMATION ExceptionParam,
CONST PMINIDUMP_USER_STREAM_INFORMATION UserStreamParam,
CONST PMINIDUMP_CALLBACK_INFORMATION CallbackParam);
TCHAR CCrashReporter::s_szLogFileNameDmp[MAX_PATH];
CRITICAL_SECTION CCrashReporter::s_csGuard;
LPTOP_LEVEL_EXCEPTION_FILTER CCrashReporter::s_previousFilter = 0;
HMODULE CCrashReporter::s_hDbgHelp = 0;
tMiniDumpWriteDump CCrashReporter::s_fpMiniDumpWriteDump = 0;
CCrashReporter::CCrashReporter()
{
LoadDBGHELP();
s_previousFilter = ::SetUnhandledExceptionFilter(UnhandledExceptionFilter);
::InitializeCriticalSection(&s_csGuard);
}
CCrashReporter::~CCrashReporter()
{
::SetUnhandledExceptionFilter(s_previousFilter);
...
if (0 != s_hDbgHelp)
{
FreeLibrary(s_hDbgHelp);
}
::DeleteCriticalSection(&s_csGuard);
}
LONG WINAPI CCrashReporter::UnhandledExceptionFilter(PEXCEPTION_POINTERS pExceptionInfo)
{
::EnterCriticalSection(&s_csGuard);
...
GenerateMinidump(pExceptionInfo, s_szLogFileNameDmp);
::LeaveCriticalSection(&s_csGuard);
return EXCEPTION_EXECUTE_HANDLER;
}
void CCrashReporter::LoadDBGHELP()
{
/* ... search for dbghelp.dll code ... */
s_hDbgHelp = ::LoadLibrary(strDBGHELP_FILENAME);
if (0 == s_hDbgHelp)
{
/* ... report error ... */
}
if (0 != s_hDbgHelp)
{
...
s_fpMiniDumpWriteDump = (tMiniDumpWriteDump)GetProcAddress(s_hDbgHelp, "MiniDumpWriteDump");
if (!s_fpMiniDumpWriteDump)
{
FreeLibrary(s_hDbgHelp);
}
else
{
/* ... log ok ... */
}
}
}
void CCrashReporter::GenerateMinidump(const PEXCEPTION_POINTERS pExceptionInfo,
LPCTSTR pszLogFileNameDmp)
{
HANDLE hReportFileDmp(::CreateFile(pszLogFileNameDmp, GENERIC_WRITE, 0, 0,
CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH, 0));
if (INVALID_HANDLE_VALUE != hReportFileDmp)
{
MINIDUMP_EXCEPTION_INFORMATION stMDEI;
stMDEI.ThreadId = ::GetCurrentThreadId();
stMDEI.ExceptionPointers = pExceptionInfo;
stMDEI.ClientPointers = TRUE;
if(!s_fpMiniDumpWriteDump(::GetCurrentProcess(), ::GetCurrentProcessId(),
hReportFileDmp, MiniDumpWithIndirectlyReferencedMemory,
&stMDEI, 0, 0))
{
/* ... report error ...*/
}
else
{
/* ... report ok ... */
}
::CloseHandle(hReportFileDmp);
}
else
{
/* ... report error ...*/
}
}
I'm facing a problem I can't seem to fix and I need your help.
I'm generating a list of PDFs that I write to the hard drive. Everything works fine for a small number of files, but when I start to generate more files (via a for loop), the creation stops and the remaining PDF files aren't created.
I'm using the Play Framework with the PDF module, which relies on ITextRenderer to generate the PDFs.
I localized the problem (well, I believe it's there) by adding outputs to see where it stops, and the problem occurs when I call .createPDF(os);
At first, I was only able to create 16 files before it would stop, so I created a singleton that creates the renderer once and reuses the same instance (in order to avoid adding the fonts and settings every time); that got me to 61 files created, but no more.
I thought about a memory leak blocking the process, but I can't see where it would be, nor how to track it down.
Here's the relevant part of my code:
List<MyModel> models; // a list of MyModel from a db query; MyModel contains a path to a file
List<InputStream> files = new ArrayList<InputStream>();
for (MyModel model : models) {
if (!model.getFile().exists()) {
model.generatePdf();
}
files.add(new FileInputStream(model.getFile()));
}
// The generatePDF :
public void generatePdf() {
byte[] bytes = PDF.toBytes(views.html.invoices.pdf.invoice.render(this, due));
FileOutputStream output;
try {
File file = getFile();
if (!file.getParentFile().exists()) {
file.getParentFile().mkdirs();
}
if (file.exists()) {
file.delete();
}
output = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(output);
bos.write(bytes);
bos.flush();
bos.close();
output.flush();
output.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
As you can see, I do my best to avoid memory leaks, but it isn't enough.
In order to locate the problem, I replaced PDF.toBytes and all subsequent calls from that class with a copy/paste version inside my own class, and added outputs. That's how I found that the thread hangs at the createPDF line.
Update 1:
I have two (identical) Play Framework applications running with these parameters:
-Xms1024m -Xmx1024m -XX:PermSize=512m -XX:MaxPermSize=512m
I tried to stop one instance and re-run the PDF generation, but it didn't affect the number of files generated; it stops at the same number of files.
I also tried to increase the allocated memory:
-Xms1536m -Xmx1536m -XX:PermSize=1024m -XX:MaxPermSize=1024m
No change at all either.
For information, the server has 16 GB of RAM.
cat /proc/cpuinfo :
model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
cpu MHz : 3101.000
cpu cores : 4
cache size : 6144 KB
Hope this helps.
Well, I'm really surprised: the bug has absolutely nothing to do with memory, memory leaks or the amount of memory available.
I'm astonished.
It's related to an image that was loaded via a URL from the same server (locally) and was taking too long to load. Removing that image fixed the issue.
I will switch to a base64-encoded image instead, and that should fix the issue.
I still can't believe it!
The module is developed by Jörg Viola, so I think it's safe to assume everything is fine on that side. The same goes for the iText library.
The bottleneck, as you guessed, was in your code. The interesting part is that it wasn't improperly managed memory, but a network request that made the PDF rendering slower and slower each time, until it ultimately failed.
It's nice that you finally got it working.
Dear programmers, I wrote a program which targets the Windows Mobile platform (.NET CF 3.5).
My program has an answer-checking method which shows dynamically created picture boxes, text boxes and images in a new form. Here is the method logic:
private void ShowAnswer()
{
PictureBox = new PictureBox();
PictureBox.BackColor = Color.Red;
PictureBox.Location = new Point(x,y);
PictureBox.Name = "Name";
PictureBox.Size = new Size(w, h);
PictureBox.Image = new Bitmap("\\Image01.jpg");
}
My problem is a memory leak or something like it. If the user works with the program for approximately 30 minutes and runs the ShowAnswer() method several times, an OutOfMemoryException appears. I know that the reason may be the memory allocated for the bitmaps, but I even handle the ShowAnswers form closing event, manually trying to release all control resources and force a garbage collection:
foreach(Control cntrl in this.Controls)
{
cntrl.Dispose();
GC.Collect();
}
It seems like everything is collected and disposed properly: every time I check the task manager on my Windows Mobile device during the tests, I see that memory was released and the child form was closed properly. But after each ShowAnswer() call and close I see a different amount of memory in the device task manager (sometimes it uses 7.5 MB, sometimes 11.5, sometimes 9.5); it's different every time, and it seems like sometimes, when the method starts to run, the usual memory is not allocated and the Out of memory exception appears. Please advise me how to solve my problem. Maybe I should use other Dispose methods, or set the bitmap another way. Thank you in advance!
Depending on how you're handling the form generation, you might need to dispose of the old Image before loading a new one.
private void ShowAnswer()
{
PictureBox = new PictureBox();
PictureBox.BackColor = Color.Red;
PictureBox.Location = new Point(x,y);
PictureBox.Name = "Name";
PictureBox.Size = new Size(w, h);
if(PictureBox.Image != null) //depending on how you construct the form
PictureBox.Image.Dispose();
PictureBox.Image = new Bitmap("\\Image01.jpg");
}
However, you should also check before you load the image that it's not so obscenely large that it munches up all of your device's memory.
Edit: I don't just mean the size of the compressed image file - I also mean the physical dimensions of the image (height and width). The Bitmap will create an uncompressed image that takes up much, much more memory than the file occupies in storage (height*width*4 bytes). For a more in-depth explanation, check out the following SO question:
OutOfMemoryException loading big image to Bitmap object with the Compact Framework
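A rough sketch of the kind of up-front check I mean (the dimensions and the 8 MB budget below are made-up values for illustration; the real numbers depend on your device):
// Estimate the decoded (uncompressed) size of an image before
// creating a Bitmap from it on a memory-constrained device.
static long EstimateDecodedBytes(int widthPx, int heightPx)
{
    return (long)widthPx * heightPx * 4; // 32 bits per pixel once decoded
}

static bool IsSafeToLoad(int widthPx, int heightPx)
{
    const long budgetBytes = 8L * 1024 * 1024; // assumed budget, tune per device
    return EstimateDecodedBytes(widthPx, heightPx) <= budgetBytes;
}

// Example: a 1600x1200 photo decodes to roughly 7.3 MB,
// no matter how small the JPEG file is on storage.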
We are upgrading from VC8 to VC10 and have found a number of memory leaks that seem to be CDialog related. The simplest example of this is demonstrated with the following code using a CDialog that just has a number of buttons. In VC10 this leaks, but in VC8 it doesn't:
for (int i = 0; i < 5000; ++i) {
CDialog* dialog = new CDialog;
dialog->Create(IDD_LEAKER, 0);
dialog->DestroyWindow();
delete dialog;
}
Memory usage keeps rising, and the example dialog we have with about 30 buttons leaks tens of MB.
Note that the above is a test example where we have stripped out all of our dialog-handling code; in our real code we have a derived class and use PostNcDestroy().
Oddly, neither of the following code examples leaks in either VC8 or VC10:
CDialog* dialog = new CDialog;
for (int i = 0; i < 5000; ++i) {
dialog->Create(IDD_LEAKER, 0);
dialog->DestroyWindow();
}
delete dialog;
for (int i = 0; i < 5000; ++i) {
CDialog* dialog = new CDialog;
delete dialog;
}
What are we missing here?
This appears to be down to the way that MFC manages its handle maps:
What is the lifetime of a CWnd obtained from CWnd::FromHandle?
If you wait long enough for your application to become idle, you do get your memory back, i.e. it's not really a leak. However, as you have observed, while Visual C++ 2010 continues to consume more and more memory - until the maps are tidied in OnIdle() - this doesn't appear to happen in Visual C++ 2008.
Debugging an application containing your code does show that there are a lot more objects in the HWND temporary map in the VC 10 version than there are in the VC 9 version.
The handle map code (winhand.cpp) doesn't appear to have changed between the two versions but there's lots of code in MFC that uses it!
Anyway, assuming that you really want to run your program like this - I guess you're running in some kind of automated mode? - then you'll want to force the garbage collection at appropriate intervals. Have a look at this entry on MSDN:
http://msdn.microsoft.com/en-us/library/xt4cxa4e(v=VS.100).aspx
CWinThread::OnIdle() actually calls this to tidy things up:
AfxLockTempMaps();
AfxUnlockTempMaps(/*TRUE*/);
My Win32 application performs numerous disk operations in a designated temporary folder while functioning, and seriously redesigning it is out of the question.
Some clients have antivirus software that scans the same temporary directory (it simply scans everything). We tried to talk them into disabling it; that didn't work, so that's out of the question as well.
Every once in a while (something like once per thousand file operations) my application tries to perform an operation on a file which is at that very moment opened by the antivirus scanner and is therefore locked by the operating system. A sharing violation occurs and causes an error in my application. This happens about once every three minutes on average.
The temporary folder can contain up to 100k files in the most typical scenarios, so I don't like the idea of keeping them all open at all times, because that could cause the application to run out of resources in some edge cases.
Is there some reasonable strategy for my application to react to situations when a needed file is locked? Maybe something like this?
for( int i = 0; i < ReasonableNumber; i++ ) {
try {
performOperation(); // do useful stuff here
break;
} catch( ... ) {
if( i == ReasonableNumber - 1 ) {
throw; //not to hide errors if unlock never happens
}
}
Sleep( ReasonableInterval );
}
Is this a viable strategy? If so, how many times and how often should my application retry? What are better ideas if any?
A virus scanner that locks files while it's scanning them is quite bad. Clients who have virus scanners this bad need to have their brains replaced... ;-)
Okay, enough ranting. If a file is locked by some other process then you can use a "try again" strategy like you suggest. OTOH, do you really need to close and then re-open those files? Can't you keep them open until your process is done?
One tip: add a delay (sleep) before you try to re-open the file. About 100 ms should be enough. If the virus scanner keeps the file open that long then it's a really bad scanner, and clients with scanners that bad deserve the exception message that they'll see.
Typically, try up to three times... -> Open, on failure try again, on second failure try again, on third failure just crash.
Remember to crash in a user-friendly way.
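Something along these lines (a sketch only; OpenTheFile and the message text are placeholders, not your actual code):
// Sketch: three attempts with a short pause between them, then fail
// with a message the user can actually understand.
const int MaxAttempts = 3;
for (int attempt = 1; attempt <= MaxAttempts; attempt++)
{
    try
    {
        OpenTheFile();   // placeholder for the actual file operation
        break;           // success, stop retrying
    }
    catch (System.IO.IOException)
    {
        if (attempt == MaxAttempts)
        {
            // "Crash" in a user-friendly way instead of an opaque stack trace.
            throw new System.ApplicationException(
                "The file is in use by another program (possibly the antivirus scanner). Please try again later.");
        }
        System.Threading.Thread.Sleep(100); // ~100 ms, as suggested above
    }
}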
I've had experience with antivirus software made by both Symantec and AVG which resulted in files being unavailable for opening.
A common problem we experienced back in the 2002 time frame with Symantec was with MSDev6 when a file was updated in this sequence:
a file is opened
contents are modified in memory
application needs to commit changes
application creates new tmp file with new copy of file + changes
application deletes old file
application copies tmp file to old file name
application deletes the tmp file
The problem would occur between step 5 and step 6. Symantec would do something to slow down the delete, preventing the creation of a file with the same name (CreateFile returned ERROR_DELETE_PENDING). MSDev6 would fail to notice that - meaning step 6 failed. Step 7 still happened though. The delete of the original would eventually finish, so the file no longer existed on disk!
With AVG, we've been experiencing intermittent problems opening files that have just been modified.
Our resolution was a try/catch in a reasonable loop as in the question. Our loop count is 5.
If there is the possibility that some other process - be it the antivirus software, a backup utility or even the user themselves - can open the file, then you must code for that possibility.
Your solution, while perhaps not the most elegant, will certainly work as long as ReasonableNumber is sufficiently large - in the past I've used 10 as the reasonable number. I certainly wouldn't go any higher and you could get away with a lower value such as 5.
The value of sleep? 100ms or 200ms at most
Bear in mind that most of the time your application will get the file first time anyway.
Depends on how big your files are, but for tens to hundreds of KB I find that 5 tries with 100 ms (0.1 seconds) between them is sufficient. If you still hit the error once in a while, double the wait, but YMMV.
If you have a few places in the code which need to do this, may I suggest taking a functional approach:
using System;
namespace Retry
{
class Program
{
static void Main(string[] args)
{
int i = 0;
Utils.Retry(() =>
{
i = i + 1;
if (i < 3)
throw new ArgumentOutOfRangeException();
});
Console.WriteLine(i);
Console.Write("Press any key...");
Console.ReadKey();
}
}
class Utils
{
public delegate void Retryable();
static int RETRIES = 5;
static int WAIT = 100; /*ms*/
static public void Retry( Retryable retryable )
{
int retrys = RETRIES;
int wait = WAIT;
Exception err;
do
{
try
{
err = null;
retryable();
}
catch (Exception e)
{
err = e;
if (retrys != 1)
{
System.Threading.Thread.Sleep(wait);
wait *= 2;
}
}
} while( --retrys > 0 && err != null );
if (err != null)
throw err;
}
}
}
Could you change your application so that it doesn't release the file handle? If you hold a lock on the file yourself, the antivirus application will not be able to scan it.
Otherwise a strategy such as yours will help a bit, but it only reduces the probability; it doesn't solve the problem.
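A minimal sketch of the hold-the-handle idea in C# (the path is illustrative only; in Win32 the equivalent is CreateFile with a share mode of 0):
// Sketch: open the temp file with no sharing, so no other process
// (including the antivirus scanner) can open it while the handle is held.
using (var fs = new System.IO.FileStream(
    @"C:\Temp\work\item0001.dat",          // illustrative path
    System.IO.FileMode.OpenOrCreate,
    System.IO.FileAccess.ReadWrite,
    System.IO.FileShare.None))             // deny read/write/delete to everyone else
{
    // ... perform all operations on this file through fs ...
}   // only after Dispose can the scanner (or anyone else) touch the file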
Tough problem. Most ideas I have go in a direction that you don't want (e.g. redesign).
I don't know how many files you have in your directory, but if it's not that many, you may be able to work around the problem by keeping all the files open and locked while your program runs.
That way the virus scanner will have no chance to interrupt your file accesses anymore.