I am suddenly getting what appears to be a Unicode issue in Visual Studio after a Windows 7 restart. Does anyone have an idea how to resolve this? I've been messing around with virus scanners and .csproj files (where the errors are located) for the last few hours, to no avail.
Error 1 The build stopped unexpectedly because of an internal failure.
System.Text.EncoderFallbackException: Unable to translate Unicode character \uD97C at index 1321 to specified code page.
at System.Text.EncoderExceptionFallbackBuffer.Fallback(Char charUnknown, Int32 index)
at System.Text.EncoderFallbackBuffer.InternalFallback(Char ch, Char*& chars)
at System.Text.UTF8Encoding.GetByteCount(Char* chars, Int32 count, EncoderNLS baseEncoder)
at System.Text.UTF8Encoding.GetByteCount(String chars)
at System.IO.BinaryWriter.Write(String value)
at Microsoft.Build.BackEnd.NodePacketTranslator.NodePacketWriteTranslator.TranslateDictionary(Dictionary`2& dictionary, IEqualityComparer`1 comparer)
at Microsoft.Build.Execution.BuildParameters.Microsoft.Build.BackEnd.INodePacketTranslatable.Translate(INodePacketTranslator translator)
at Microsoft.Build.BackEnd.NodePacketTranslator.NodePacketWriteTranslator.Translate[T](T& value, NodePacketValueFactory`1 factory)
at Microsoft.Build.BackEnd.NodeConfiguration.Translate(INodePacketTranslator translator)
at Microsoft.Build.BackEnd.NodeProviderOutOfProcBase.NodeContext.SendData(INodePacket packet)
at Microsoft.Build.BackEnd.NodeProviderOutOfProc.CreateNode(Int32 nodeId, INodePacketFactory factory, NodeConfiguration configuration)
at Microsoft.Build.BackEnd.NodeManager.AttemptCreateNode(INodeProvider nodeProvider, NodeConfiguration nodeConfiguration)
at Microsoft.Build.BackEnd.NodeManager.CreateNode(NodeConfiguration configuration, NodeAffinity nodeAffinity)
at Microsoft.Build.Execution.BuildManager.PerformSchedulingActions(IEnumerable`1 responses)
at Microsoft.Build.Execution.BuildManager.HandleNewRequest(Int32 node, BuildRequestBlocker blocker)
at Microsoft.Build.Execution.BuildManager.IssueRequestToScheduler(BuildSubmission submission, Boolean allowMainThreadBuild, BuildRequestBlocker blocker)
AND THE ANSWER IS:
http://www.hanselman.com/blog/CSIVisualStudioUnableToTranslateUnicodeCharacterAtIndexXToSpecifiedCodePage.aspx
NB. Hans was the closest to working out what had happened... so I awarded the points to him
Well, the message is pretty accurate. \uD97C is a UTF-16 high surrogate; surrogates must always appear in pairs to encode a character whose value is larger than \uFFFF. The exception message says that the second (low) surrogate of the pair does not occur in the string.
Seeing this occur in a build is very bad news; such characters should never appear in project files. You don't write them in an ancient dead Middle-Eastern language or an obscure Native American language that only a couple of thousand people still speak :). The only reasonable explanation is that the file(s) on your disk are scrambled all to hell. You'll need to get your machine fixed; replacing the disk should be high on your list of priorities right now.
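If you want to hunt for the offending character yourself, here is a minimal sketch (not the fix from the linked post) of scanning UTF-16 text for an unpaired surrogate like \uD97C. The sample string is built numerically because a lone surrogate cannot be written as a \u escape in C++ source, and the text you feed it (project file contents, environment variables, etc.) is up to you:

#include <cstdio>
#include <string>

// Report the index of every surrogate that is not part of a valid high/low pair.
void reportLoneSurrogates(const std::u16string& s)
{
    for (size_t i = 0; i < s.size(); ++i) {
        char16_t c = s[i];
        bool high = (c >= 0xD800 && c <= 0xDBFF);   // leading half of a pair
        bool low  = (c >= 0xDC00 && c <= 0xDFFF);   // trailing half of a pair
        if (high) {
            if (i + 1 < s.size() && s[i + 1] >= 0xDC00 && s[i + 1] <= 0xDFFF)
                ++i;                                 // valid pair, skip the low half
            else
                std::printf("lone high surrogate U+%04X at index %zu\n", (unsigned)c, i);
        } else if (low) {
            std::printf("lone low surrogate U+%04X at index %zu\n", (unsigned)c, i);
        }
    }
}

int main()
{
    std::u16string sample = u"ok so far";
    sample += char16_t(0xD97C);      // inject the unpaired surrogate from the error
    reportLoneSurrogates(sample);    // prints: lone high surrogate U+D97C at index 9
    return 0;
}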
I'm running a 64-bit version of Ghostscript (9.50) on a 64-bit processor with 16 GB of RAM under Windows 7.
Ghostscript returns a random-ish error message (it tells me I have a typecheck error in the array command) when I try to allocate one array too many, pushing total memory use past 2 GB.
To be clear, I am watching the memory usage grow in Windows Task Manager, not from within Ghostscript.
I'd like to know why this is so.
More importantly, I'd like to know if I can override this behavior.
Edit: This code produces the error --
/TL 25000 def                                       % number of arrays to allocate
/TL- TL 1 sub def                                   % last loop index (TL - 1)
/G TL array def                                     % parent array that will hold the others
0 1 TL- { dup == flush G exch TL array put } for    % print the index, then store a fresh TL-element array in each slot
The error looks like this (here's the last bit of the messages I get):
5335
5336
5337
5338
5339
5340
5341
5342
5343
5344
5345
Unrecoverable error: typecheck in array
Operand stack: --nostringval--
--- Begin offending input ---
/TL 25000 def /TL- TL 1 sub def /G TL array def 0 1 TL- { dup == flush G exch TL array put }for
--- End offending input ---
file offset = 0
gsapi_run_string_continue returns -20
The amount of RAM is almost certainly not the limiting factor, but it would help if you were to post the actual error message. It may be 'random-ish' to you, but it's meaningful to people who program in PostScript.
More than likely you've tripped over some other internal limit, for example the operand stack size, but without seeing the PostScript program or the error message I cannot say any more than that. I can say that (64-bit) Ghostscript will happily address more than 2 GB of RAM; I was running a file last week which had Ghostscript using 8.1 GB.
Note that PostScript itself is basically a 32-bit language; while Ghostscript has extended many of the architectural limits documented in the PostScript Language Reference Manual (such as the maximum of 64K elements in arrays and strings), moving beyond 32-bit limits is essentially unspecified.
As to whether you can change the behaviour, that depends on exactly what the problem is, and I can't tell from what's here.
Edit
Here's a screenshot of Ghostscript running the test file to completion, along with the Task Manager display showing the amount of memory the process is using. Not shown is the vmstatus which I ran from the PostScript environment afterwards; this showed that Ghostscript thinks it's using 10,010,729,850 bytes from a maximum of 10,012,037,312. My calculator says that 9,562.8 MB comes out at 10,027,322,572.4 bytes, so a pretty close match.
To answer the points in the comments: this is (as you can probably tell) a 64-bit Windows 10 installation with quite a lot of memory.
The difference is, almost certainly, something which has been fixed since the release of 9.52. The 9.52 64-bit binary does exit with a VMerror after (for me) 5360 iterations. Obviously trying to use vast amounts of PostScript memory (as opposed to, say, canvas memory) is not a common occurrence, not least because many PostScript interpreters simply won't allow it, so this doesn't get exercised much.
The Ghostscript Git repository is here if you want to go through the commits and try to figure out which one caused the change. You only have to go back to March this year, anything before about the 19th March would have been in 9.52.
Beyond simple curiosity, is there a reason to try and use up loads of memory in PostScript?
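As an aside, the gsapi_run_string_continue return of -20 in the question's output corresponds to the typecheck error in Ghostscript's error table, matching the "typecheck in array" message. Here is a hedged sketch (not from either post; the option list and error handling are assumptions) of driving the same snippet through the Ghostscript C API:

#include <cstdio>
#include <cstring>
#include "iapi.h"   // Ghostscript API header (ships with the Ghostscript sources)

int main()
{
    void* inst = nullptr;
    if (gsapi_new_instance(&inst, nullptr) < 0)
        return 1;

    // No input file and no display device; we feed PostScript in as a string.
    const char* argv[] = { "gs", "-dNODISPLAY" };
    gsapi_init_with_args(inst, 2, const_cast<char**>(argv));

    const char* ps =
        "/TL 25000 def /TL- TL 1 sub def /G TL array def "
        "0 1 TL- { dup == flush G exch TL array put } for";

    int exit_code = 0;
    gsapi_run_string_begin(inst, 0, &exit_code);
    int rc = gsapi_run_string_continue(inst, ps, (unsigned int)std::strlen(ps), 0, &exit_code);
    gsapi_run_string_end(inst, 0, &exit_code);
    std::printf("gsapi_run_string_continue returned %d\n", rc);   // -20 (typecheck) in the failing case

    gsapi_exit(inst);
    gsapi_delete_instance(inst);
    return 0;
}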
An OS X app crashes when I try to close a socket handle. It worked fine on all previous OS versions, but it appears to crash in Yosemite.
The line where it crashes is:
-(void)stopPacketReceiver
{
close(sd);
}
In Xcode it pauses all the threads and shows an EXC_GUARD exception. What kind of exception is this? Any ideas?
Thanks,
Ahmed
EDIT:
Here are the exception codes that I get:
Exception Type: EXC_GUARD
Exception Codes: 0x4000000100000000, 0x08fd4dbfade2dead
From a post in Apple's old developer forums from Quinn "The Eskimo" (Apple Developer Relations, Developer Technical Support, Core OS/Hardware), edited by me to remove things that were specific to that particular case:
EXC_GUARD is a change in 10.9 designed to help you detect file
descriptor problems. Specifically, the system can now flag specific
file descriptors as being guarded, after which normal operations on
those descriptors will trigger an EXC_GUARD crash (when it wants to
operate on these file descriptors, the system uses special 'guarded'
private APIs).
We added this to the system because we found a lot of apps were
crashing mysteriously after accidentally closing a file descriptor
that had been opened by a system library. For example, if an app
closes the file descriptor used to access the SQLite file backing a
Core Data store, Core Data would then crash mysteriously much later
on. The guard exception gets these problems noticed sooner, and thus
makes them easier to debug.
For an EXC_GUARD crash, the exception codes break down as follows:
o The first exception code … contains three bit
fields:
The top three bits … indicate [the type of guard].
The remainder of the top 32 bits … indicate [which operation was disallowed].
The bottom 32 bits indicate the descriptor in question ….
o The second exception code is a magic number associated with the
guard. …
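Applied to the codes in this crash, that breakdown comes down to a few shifts and masks. A small illustrative sketch (the constant values mirror the XNU sources and the decode discussed below; treat them as illustrative rather than an official API):

#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t code = 0x4000000100000000ULL;   // first exception code from the crash

    uint32_t guardType = (uint32_t)(code >> 61);                 // top 3 bits: type of guard
    uint32_t flavor    = (uint32_t)((code >> 32) & 0x1FFFFFFF);  // rest of the top 32 bits: disallowed operation
    uint32_t fd        = (uint32_t)(code & 0xFFFFFFFF);          // bottom 32 bits: the descriptor

    std::printf("guard type = %u (2 == GUARD_TYPE_FD)\n", guardType);
    std::printf("operation  = 0x%x (1 == kGUARD_EXC_CLOSE)\n", flavor);
    std::printf("descriptor = %u (0 == STDIN_FILENO)\n", fd);
    return 0;
}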
Your code is closing a socket it doesn't own. Maybe sd contains the descriptor number for a descriptor that you once owned but is now a dangling reference, because you already closed your descriptor and that number has now been reused for somebody else's descriptor. Or maybe sd just has a junk value somehow.
We can decode some more information from the exception codes, but most likely you just have to trace exactly what you're doing with sd over its life.
Update:
From the edited question, I see that you've posted the exception codes. Using the constants from the kernel source, the type of guard is GUARD_TYPE_FD, the operation that was disallowed was kGUARD_EXC_CLOSE (i.e. close()), and the descriptor was 0 (STDIN_FILENO).
So, in all probability, your stopPacketReceiver was called when the sd instance variable was uninitialized and had the default 0 value that all instance variables get when an object is first allocated.
The magic value is 0x08fd4dbfade2dead, which according to the original developer forums post, "indicates that the guard was applied by SQLite". That seems strange. Descriptor 0 would normally be open from process launch (perhaps referencing /dev/null). So, SQLite should not own that.
I suspect what has happened is that your code has actually closed descriptor 0 twice. The first time it was not guarded; it's legal to close STDIN_FILENO. Programs sometimes do it to reopen that descriptor to reference something else (such as /dev/null) if they don't want or need the original standard input. In your case, it would have been an accident, but it would not have raised an exception. Once it was closed, the descriptor would have been available to be reallocated to the next thing which opened a descriptor. I guess that was SQLite. At that time, SQLite put a guard on the descriptor. Then, your code tried to close it again and got the EXC_GUARD exception.
If I'm right, then it's somewhat random that your code got the exception (although it was always doing something bad). The fact that file descriptor 0 got assigned to a subsystem that applied a guard to it could be a race condition or it could be a change in order of operations between versions of the OS.
You need to be more careful to not close descriptors that you didn't open. You should initialize any instance variable meant to hold a file descriptor to -1, not 0. Likewise, if you close a descriptor that you did own, you should set the instance variable back to -1.
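A minimal sketch of that defensive pattern (plain C++ with made-up names rather than the question's Objective-C):

#include <unistd.h>

struct PacketReceiver {
    int sd = -1;                     // -1 means "no descriptor owned"; never leave it at 0

    void stopReceiver() {
        if (sd >= 0) {               // only close descriptors we actually own
            close(sd);
            sd = -1;                 // prevents a later double-close of someone else's fd
        }
    }
};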
Firstly, that sounds awesome - it sounds like it caught what would have been EXC_BAD_ACCESS (but this is a guess).
My guess is that sd isn't a valid descriptor. It's possible an API changed in Yosemite that's causing the call where you create the descriptor to fail, or it's possible a change in the event timeline in Yosemite causes it to have already been cleaned up.
Debugging tip here: trace back sd all the way to its creation.
We have a massive older C++ application that we have been converting to support Unicode as well as 64-bit builds. The following strange thing has been happening:
Calls to registry functions and window creation functions, like the following, have been failing:
hWnd = CreateSysWindowExW( ExStyle, ClassNameW.StringW(), Label2.StringW(), Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
ClassNameW and Label2 are instances of our own Text class which essentially uses malloc to allocate the memory used to store the string.
Anyway, when the functions fail and I call GetLastError, it returns the error code for "invalid memory access" (though I can inspect and see the string arguments fine in the debugger). Yet if I change the code as follows, then it works perfectly fine:
BSTR Label2S = SysAllocString(Label2.StringW());
BSTR ClassNameWS = SysAllocString(ClassNameW.StringW());
hWnd = CreateSysWindowExW( ExStyle, ClassNameWS, Label2S, Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
SysFreeString(ClassNameWS); ClassNameWS = 0;
SysFreeString(Label2S); Label2S = 0;
So what gives? Why would the original functions work fine with the arguments in local memory, but when used with Unicode the registry functions require SysAllocString, and when used in 64-bit the window creation functions also require SysAllocString'd string arguments? Our window procedure functions have all been converted to be Unicode, always, and yes, we use SetWindowLongW, call the correct default Unicode DefWindowProcW, etc. That all seems to work fine and handles and draws Unicode properly.
The documentation at http://msdn.microsoft.com/en-us/library/ms632679%28v=vs.85%29.aspx does not say anything about this. While our application is massive, we do use debug heaps and tools like Purify to check for and clean up any memory corruption. Also, at the time of this failure there is still only one main system thread, so it is not a threading issue.
So what is going on? I have read that if string arguments are marshalled anywhere or passed across process boundaries, then you have to use SysAllocString/BSTR, yet we call lots of API functions and there is plenty of code out there that calls these functions with plain local strings.
What am I missing? I have tried Googling this, as someone else must have run into this, but with little luck.
Edit 1: Our StringW function does not create any temporary objects which might go out of scope before the actual API call. The function is as follows:
class Text {
public:
    const wchar_t* StringW() const
    {
        return TextStartW;
    }

    wchar_t* TextStartW;   // pointer to current start of text in DataArea
};
I have been running our application with the debug heap and memory checking and other diagnostic tools, and found no source of memory corruption, and looking at the assembly, there is no sign of temporary objects or invalid memory access.
BUT I finally figured it out:
We compile our code with /Zp1, which packs structures on byte boundaries, so our string storage is only byte-aligned. SysAllocString (in 64-bit builds) always returns a pointer that is aligned on an 8-byte boundary. Presumably a 32-bit ANSI application goes through a conversion layer to the underlying Unicode Windows DLLs, which would also align the pointer for you.
But if you use Unicode directly, you do not get the incidental pointer alignment that the conversion layer gives you, and if you use 64 bits, of course, the situation gets even worse.
I added a method to our Text class which shifts the string pointer so that it is aligned on an eight-byte boundary, and voilà, everything runs fine!
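For illustration, a hedged sketch of that kind of alignment shim (the helper name is made up and the real Text class is not shown):

#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <cwchar>

// Returns an 8-byte-aligned copy of 'src'; '*raw' receives the block to free() later.
wchar_t* AlignedCopyW(const wchar_t* src, void** raw)
{
    size_t bytes = (std::wcslen(src) + 1) * sizeof(wchar_t);
    *raw = std::malloc(bytes + 8);                     // slack so we can round up
    uintptr_t p = reinterpret_cast<uintptr_t>(*raw);
    p = (p + 7) & ~static_cast<uintptr_t>(7);          // round up to a multiple of 8
    wchar_t* aligned = reinterpret_cast<wchar_t*>(p);
    std::memcpy(aligned, src, bytes);
    return aligned;
}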
Of course the Microsoft people say it must be memory corruption and that I am jumping to the wrong conclusion, but there is evidence that this is not the case.
Also, if you use /Zp1 and include windows.h in a 64-bit application, the debugger will tell you sizeof(BITMAP)==28, but calling GetObject on a bitmap will fail and tell you it needs a 32-byte structure. So I suspect that some of Microsoft's APIs inherently depend on aligned pointers, and I also know that some optimized assembly (I have seen some from Fortran compilers) takes advantage of alignment and crashes badly if you ever give it unaligned pointers.
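To illustrate the BITMAP point, a small sketch (for a 64-bit Windows build with default packing) that mimics what /Zp1-style packing does to the structure size; the packed copy is illustrative, not a real GDI type:

#include <windows.h>
#include <cstdio>

#pragma pack(push, 1)
struct PackedBitmap {              // same members as the GDI BITMAP struct
    LONG   bmType;
    LONG   bmWidth;
    LONG   bmHeight;
    LONG   bmWidthBytes;
    WORD   bmPlanes;
    WORD   bmBitsPixel;
    LPVOID bmBits;
};
#pragma pack(pop)

int main() {
    // With default packing on x64 the pointer member is padded to an 8-byte
    // boundary, giving 32 bytes; with 1-byte packing the size drops to 28.
    std::printf("sizeof(BITMAP) = %zu, packed copy = %zu\n",
                sizeof(BITMAP), sizeof(PackedBitmap));
    return 0;
}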
So the moral of all of this is: don't use "funky" compiler arguments like /Zp1. In our case we have to for historical reasons, but the number of times this has bitten us...
Someone please give me a "this is useful" tick on my answer?
Using a bit of psychic debugging, I'm going to guess that the strings in your application are pooled in a read-only section.
It's possible that CreateSysWindowExW is attempting to write to the memory passed in for the window class or title. That would explain why the calls work when the strings are allocated on the heap (SysAllocString) but not when they are used as constants.
The easiest way to investigate this is to use a low level debugger like windbg - it should break into the debugger at the point where the access violation occurs which should help figure out the problem. Don't use Visual Studio, it has a nasty habit of being helpful and hiding first chance exceptions.
Another thing to try is to enable appverifier on your application - it's possible that it may show something.
Calling a Windows API function does not cross the process boundary, since the various Windows DLLs are loaded into your process.
It sounds like whatever pointer StringW() is returning isn't valid when Windows tries to access it. I would look there: is it possible that the pointer returned goes out of scope and is deleted shortly after the call?
If you share some more details about your string class, that could help diagnose the problem here.
I've been getting a lot of blue screens on my XP box at work recently. So many, in fact, that I downloaded Debugging Tools for Windows (x86) and have been analyzing the crash dumps. So many, in fact, that I've changed the dumps to mini-only, or else I would probably end up tanking half a work day each week just waiting for the blue screen to finish recording the detailed crash log.
Almost without exception, every dump tells me that the cause of the blue screen is some kind of memory misallocation or misreference, and that "the memory at 0x%08lx referenced 0x%08lx and could not be %s".
Out of idle curiosity I put "0x%08lx" into Google and found that quite a few crash dumps include this bizarre message. Am I to take it that 0x%08lx is a placeholder for something that should be meaningful? "%s", which is part of the concluding sentence "The memory could not be %s", definitely looks like it's missing a variable or something.
Does anyone know the provenance of this message? Is it actually supposed to be useful and what is it supposed to look like?
It's not a major thing; I have always worked around it. It's just strange that so many people see this in so many crash dumps and nobody ever says: "Oh, the crash dump didn't complete that message properly; it's supposed to read..."
I'm just curious as to whether anyone knows the purpose of this strange error message artefact.
0x%08lx and %s are almost certainly format specifiers for the C function sprintf. But it looks like the driver developers did as good a job on their error-handling code as they did on the critical code: you should never see these specifiers in the GUI; they should be replaced with meaningful values.
0x%08lx should turn into something like "0xE001D4AB", a hexadecimal 32-bit pointer value.
%s should be replaced by another string, in this case a description. Something like
the memory at 0xE001D4AB referenced
0xE005123F and could not be read.
Note that I made up the values. Basically, a kernel mode access violation occurred. Hopefully in the mini dumps you can see which module caused it and uninstall / update / whatever it.
I believe it is just the placeholder for the memory address. 0x is a literal prefix that tells the reader the value is hexadecimal, while %08lx is the actual placeholder for a long int (l) printed in hexadecimal (x) and zero-padded to 8 digits (08).
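For illustration, a tiny sketch (reusing the made-up values from the answer above) of how those specifiers are meant to get filled in:

#include <cstdio>

int main() {
    unsigned long faultAddr = 0xE001D4ABUL;   // faulting address (example value)
    unsigned long refAddr   = 0xE005123FUL;   // memory it referenced (example value)
    const char* operation   = "read";          // what could not be done: "read" or "written"

    std::printf("the memory at 0x%08lx referenced 0x%08lx and could not be %s\n",
                faultAddr, refAddr, operation);
    return 0;
}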
I am trying to figure out a crash in my application.
WinDbg tells me the following:
LAST_CONTROL_TRANSFER: from 005f5c7e to 6e697474
DEFAULT_BUCKET_ID: BAD_IP
BUGCHECK_STR: ACCESS_VIOLATION
It is obvious to me that 6e697474 is NOT a valid address.
I have three questions:
1) Does the "BAD_IP" bucket ID mean "Bad Instruction Pointer?"
2) This is a multi-threaded application so one consideration was that the object whose function I was attempting to call went out of scope. Does anyone know if that would lead to the same error message?
3) What else might cause an error like this? One of my co-workers suggested that it might be a stack overflow issue, but WinDBG in the past has proven rather reliable at detecting and pointing these out. (not that I'm sure about the voodoo it does in the background to diagnose that).
BAD_IP is Bad Instruction Pointer. From the description of your problem, I would assume it is stack corruption rather than a stack overflow.
I can think of the following things that could cause a jump to an invalid address, in decreasing order of likelihood:
calling a member function on a deallocated object. (as you suspect)
calling a member function of a corrupted object.
calling a member function of an object with a corrupted vtable.
a rogue pointer overwriting code space.
I'd start debugging by finding the code at 005f5c7e and looking at what objects are being accessed around there.
It may be helpful to ask: what could have written the string 'ttin' to this location? Often when you have bytes in the 0x41-0x5A, 0x61-0x7A ([a-zA-Z]) range, it indicates a string buffer overflow.
As to what was actually overwritten, it could be the return address, some other function pointer you're using, or occasionally that a virtual function table pointer (vfptr) in an object got overwritten to point to the middle of a string.
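For what it's worth, a small sketch of decoding the bogus instruction pointer as ASCII, which is how you spot this kind of overwrite on a little-endian x86:

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t badIp = 0x6e697474;   // the "to" address from LAST_CONTROL_TRANSFER

    // On little-endian x86 the low byte is the first byte that was written over
    // the pointer, so print bytes from least to most significant.
    char text[5];
    for (int i = 0; i < 4; ++i)
        text[i] = (char)((badIp >> (8 * i)) & 0xFF);
    text[4] = '\0';

    std::printf("0x%08x as ASCII bytes: \"%s\"\n", badIp, text);  // prints "ttin"
    return 0;
}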