I would like to know the maximum length of a string when saving with the classic preferences system:
var prefs = Components.classes["@mozilla.org/preferences-service;1"]
            .getService(Components.interfaces.nsIPrefBranch);
prefs.setCharPref("com.example.namespace.preference", potentiallyLongString);
I couldn't find it in the official documentation.
Note: I tried typing in more than 255 characters and it works on Firefox 3.6, but I'm looking for a documented answer that would certify that length L works since version V.
Since there was no documentation, I just thought of trying it anyway (at worst it would have crashed my browser). I tried to the extent of 151,276 characters, and it worked perfectly fine. Trust me, I even matched the characters to test the reliability. :) Of course, that doesn't mean you should regularly use it to those extents. ;) Just to give you an idea, that's around 30,000 words of English, and more characters than the whole script of the movie The Matrix (Part 1).
I didn't try more, as Notepad++ had started becoming slow at copying and pasting that many characters.
(Test Environment: Firefox 24, Windows 7 64 bit)
Edit: While I was trying this, I also noticed that even though it works for enormously large values, beyond some 4,000 characters Firefox starts giving a performance warning in the Error Console. So take from this what you will.
There's no documentation on that. You shouldn't store unreasonably large strings in prefs -- if you're not sure if it will be reasonable, it's likely not a pref.
In a C# console application for Windows, I'm using the Windows Console API WriteConsoleOutput (via P/Invoke) to write an entire buffer in a single operation to prevent flickering. This works fine.
Microsoft recommends using virtual terminal sequences to interact with the console. These sequences are great, as they offer much better output, such as colors, etc.
But, as I understand it, WriteConsoleOutput cannot be used with escape sequences (see CHAR_INFO).
My question is,
How can I use virtual terminal sequences to write to the console flicker-free?
I'd like to update different parts of the screen with different characters and colors. Doing this by chaining a lot of Console.Write() and Console.SetCursorPosition() calls will cause a lot of flickering and reduce the frame rate.
What is the virtual terminal equivalent of writing an entire buffer?
I hate to answer my own question, but I have found a solution after a couple of days' experimenting.
The answer to this question:
What is the virtual terminal equivalent of writing an entire buffer?
There is none.
Not exactly.
I haven't found anything similar to WriteConsoleOutput, which renders a pre-populated buffer of an arbitrary size.
However, virtual terminals have the concept of alternate screen buffers, which can be created by CreateConsoleScreenBuffer and switched with SetConsoleActiveScreenBuffer.
Using that, I came up with a solution for my first question:
How can I use virtual terminal sequences to write to the console flicker-free?
Basically, what I do is I create a new buffer and then use WriteConsole to write VT escape sequences to the back buffer before I switch the buffers.
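In outline, it looks something like this (a minimal C# sketch, assuming the standard kernel32 P/Invoke signatures; error handling is omitted, so treat it as an illustration rather than production code):

using System;
using System.Runtime.InteropServices;
using System.Text;

static class VtBackBufferDemo
{
    const uint GENERIC_READ = 0x80000000;
    const uint GENERIC_WRITE = 0x40000000;
    const uint CONSOLE_TEXTMODE_BUFFER = 1;
    const uint ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateConsoleScreenBuffer(uint access, uint share,
        IntPtr security, uint flags, IntPtr data);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetConsoleActiveScreenBuffer(IntPtr handle);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetConsoleMode(IntPtr handle, out uint mode);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetConsoleMode(IntPtr handle, uint mode);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool WriteConsole(IntPtr handle, string buffer,
        uint charsToWrite, out uint charsWritten, IntPtr reserved);

    static void Main()
    {
        // Create an off-screen ("back") buffer and enable VT processing on it.
        IntPtr back = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE,
            0, IntPtr.Zero, CONSOLE_TEXTMODE_BUFFER, IntPtr.Zero);
        GetConsoleMode(back, out uint mode);
        SetConsoleMode(back, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);

        // Build one long pre-escaped frame: home the cursor, write colored text.
        var frame = new StringBuilder();
        frame.Append("\x1b[H");         // cursor to top-left
        frame.Append("\x1b[38;5;208m"); // 256-color foreground
        frame.Append("Hello from the back buffer");
        frame.Append("\x1b[0m");        // reset attributes

        // Write the whole frame to the back buffer, then flip in one step.
        WriteConsole(back, frame.ToString(), (uint)frame.Length, out _, IntPtr.Zero);
        SetConsoleActiveScreenBuffer(back);
        Console.ReadKey(true);          // keep the flipped buffer visible
    }
}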
The thing I don't like about this solution is the calls to WriteConsole: there will either be a lot of them (writing characters/sequences one by one) or only a few of them (writing long pre-escaped strings).
To test flickering, I created a single string of 120 x 30 characters, with each character given a 256-color escape sequence. This produced a string of over 41,000 characters, which was used as input to WriteConsole.
This actually seems to work pretty well!
This is the best solution I have found so far. If you find a better one, please write your own answer here!
Never having thought about it, I just found out that I am used to Shift-Space doing the same thing as Space. But now it doesn't any more: my typing started looking like this a couple of days ago:
a[x +1]= b
where I wanted to write
a[x + 1] = b
I suppose I let go of Shift too slowly. It never used to be a problem, until I installed Yosemite, and now Shift-Space does not register at all any more. I tried :imap <S-Space> <Space>, but that too does not trigger.
It does not happen in Terminal Vim, nor does it happen in any other application I own.
EDIT: I am on a Japanese keyboard, using Kotoeri (Apple's Japanese IME).
This seems to be an incompatibility introduced in Yosemite's Kotoeri (Japanese IME). For the time being, switching to Google Japanese IME as an alternative (or not using Japanese input in the first place, if that is an option) circumvents the issue. See this discussion for details.
Just had an interesting case.
My software reported back a failure caused by a path being longer than MAX_PATH.
The path was just a plain old document in My Documents, e.g.:
C:\Documents and Settings\Bill\Some Stupid Folder Name\A really ridiculously long file thats really very very very..........very long.pdf
Total length 269 characters (MAX_PATH==260).
The user wasn't using an external hard drive or anything like that. This was a file on a Windows-managed drive.
So my question is this. Should I care?
I'm not asking whether I can deal with long paths; I'm asking whether I should. Yes, I'm aware of the "\\?\" Unicode hack on some Win32 APIs, but it seems this hack is not without risk (as it changes the way the APIs parse paths) and also isn't supported by all APIs.
So anyway, let me just state my position/assertions:
First, presumably the only way the user was able to break this limit is if the app she used uses the special Unicode hack. It's a PDF file, so maybe the PDF tool she used uses this hack.
I tried to reproduce this (using the Unicode hack) and experimented. What I found was that although the file appears in Explorer, I can do nothing with it. I can't open it, and I can't choose "Properties" (Windows 7). Other common apps can't open the file either (e.g. IE, Firefox, Notepad). Explorer will also not let me create files/dirs which are too long; it just refuses. Ditto for the command-line tool cmd.exe.
So basically, one could look at it this way: a rogue tool has allowed the user to create a file which is not accessible to a lot of Windows (e.g. Explorer). I could take the view that I shouldn't have to deal with this.
(As an aside, this isn't a vote of approval for a short max path length: I think 260 chars is a joke. I'm just saying that if the Windows shell and some APIs can't handle > 260, then why should I?)
So, is this a fair view? Should I say "Not my problem"?
UPDATE: Just had another user with the same problem. This time an mp3 file. Am I missing something? How can these users be creating files that violate the MAX_PATH rule?
It's not a real problem. NTFS supports filenames up to 32K (32,767 wide characters). You need only use the correct API and the correct filename syntax. The basic rule is: the filename should start with '\\?\' (see http://msdn.microsoft.com/en-us/library/aa365247(v=VS.85).aspx), like \\?\C:\Temp. You can use the same syntax with UNC paths: \\?\UNC\Server\share\Path. It is important to understand that you can use only a small subset of API functions this way. For example, look at the MSDN descriptions of functions such as
CreateFile
CreateDirectory
MoveFile
and so on. In each you will find text like:
In the ANSI version of this function, the name is limited to MAX_PATH characters. To extend this limit to 32,767 wide characters, call the Unicode version of the function and prepend "\\?\" to the path. For more information, see Naming a File.
These functions you can safely use. If you have a file handle from CreateFile, you can use all the other functions that take the hFile (ReadFile, WriteFile, etc.) without any restriction.
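For example, here is a minimal C# sketch of opening a long path this way (the path itself is made up; error handling is reduced to a single check):

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class LongPathOpenDemo
{
    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_READ = 1;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(string fileName, uint access,
        uint share, IntPtr security, uint disposition, uint flags, IntPtr template);

    static void Main()
    {
        // The "\\?\" prefix lifts the MAX_PATH limit for the Unicode (W)
        // version of CreateFile. The path below is purely illustrative.
        string longPath = @"\\?\C:\Temp\" + new string('a', 300) + @"\file.txt";

        using (SafeFileHandle h = CreateFile(longPath, GENERIC_READ,
            FILE_SHARE_READ, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (h.IsInvalid)
                Console.WriteLine("CreateFile failed: " + Marshal.GetLastWin32Error());
            else
                Console.WriteLine("Opened; ReadFile/WriteFile work on this handle as usual.");
        }
    }
}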
If you write a program like a virus scanner, backup software, or other good software running on a server, you should write your program so that all file operations support filenames up to 32K characters, not just MAX_PATH characters.
This limitation is baked into a lot of software written in C or C++, including MSFT code, although they've been chipping away at it. It is only partly a Win32 limitation: Win32 still has a hard upper limit on the length of a file name (not path), through WIN32_FIND_DATA for example. That is one reason even .NET has length restrictions. This is not going away any time soon; Win32 is still going strong, and the stone-age C string won't disappear.
Your customer will have little sympathy with it, no doubt, probably until you can show them another program that fails the same way. Do, however, make sure that your code can reliably detect the potential string buffer overflow and produce a reasonable diagnostic. No sympathy for programs bombing on heap corruption.
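For example, a defensive check along these lines (a sketch; the constant and the exception type are the obvious .NET choices, nothing mandated):

using System.IO;

static class PathGuard
{
    const int MAX_PATH = 260; // the classic Win32 limit, including the terminating NUL

    // Fail early with a clear diagnostic rather than risk overflowing some
    // MAX_PATH-sized buffer further down the call chain.
    public static void EnsurePathFits(string fullPath)
    {
        if (fullPath.Length >= MAX_PATH)
            throw new PathTooLongException(
                $"Path is {fullPath.Length} characters (limit {MAX_PATH}): {fullPath}");
    }
}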
As you mentioned, many of the Windows shell functions only work on paths up to MAX_PATH. Windows XP and, I believe, Vista both have problems in Explorer when nesting directories with long file names. I've not checked Windows 7; perhaps they have fixed that. This unfortunately means that users have a hard time browsing these files.
If you really wish to support long paths, you'll need to check that any functions you are using in Shell32.dll that take paths actually support them. For those that don't, you'll have to write them yourself using Kernel32 functions.
If you decide to use Shell32 and accept the MAX_PATH limit, writing your code to support long file paths would still be advisable. If Microsoft later changes Shell32 (or creates an alternative), you will be better positioned to add support for it.
Just to add another couple of dimensions to the problem: remember that filenames are UTF-16, and you may encounter non-NTFS or non-FAT filesystems that may be case-sensitive!
Your own APIs should not hard-code a fixed limit on the path length (or any other hard limits); however, you shouldn't violate the preconditions of the system APIs in order to accomplish some task. IMHO, the fact that Windows limits the length of path names is absurd and should be considered a bug. That said, I would suggest you not attempt to use the various system APIs other than as documented, even if that results in certain undesirable behavior such as limits on the maximum path length. So, in short, your view is completely fair: if the OS doesn't support it, then the OS doesn't support it. That said, you may want to make it clear to users that this is a limitation of Windows and not of your own code.
One easy way these files with long paths can be created, even by software that does not support paths longer than MAX_PATH: through a file share.
Example:
"C:\My veeeeeeeeeeeeeeeeeeeeery looooooooooooooooooong folder" could be shared as "data". Users could then access that folder through the UNC path \\computer\data or (even shorter) through a drive letter (M:\) assuming that M: is mapped to \\computer\data.
This often happens on file servers.
Paths can often be longer than 260; one example would be when symlinks get nested and repeat over and over, sometimes even on purpose. I think programmers should think about whether they want their program to handle these insanely large paths or not. IMO, 260 is PLENTY of space, but that's just me. My answer to this is:
if you have to ask yourself so deeply about breaking the 260-char limit, then that's probably what you should do. We often look for confirmation when we are about to do something that we are unsure about...
I think the maximum path anywhere in the API is about 32k long, but that's up to you. Back in the day that was a pretty big chunk of change (half of an entire memory segment!! sheesh!), but nowadays, in the segment-transparent addressing environment in which we live, where all memory is heaped together on the flat, 32k is nothin'... AFAIK paths wouldn't need to be that long unless you are using some fancy Unicode language that requires lots of other characters, etc., etc. We could blab about this all day, but you get the idea. I hope this helps..... or hurts?
I am doing some C programming and I was searching for a way to get the maximum length of a given filename. After a search for MAX_PATH I stumbled onto this thread, and after some thought on the matter and after reading the comments here, I have come to the following conclusion.
So I understand that NTFS supports filenames up to 32,767 characters in length. However, to my knowledge FAT16 only supports 11-character filenames (8 + 3), so in reality operating systems should have a function which our program can call to determine the maximum filename size, simply because all filesystems have different limitations, including the length of the filename.
So the end conclusion must be that since we, the developers, don't know anything about which filesystem the data is going to be stored on, the only solution must be a trial-and-error method.
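As it happens, Win32 does provide a query along these lines for a mounted volume: GetVolumeInformation reports the maximum filename-component length of the filesystem backing it. A minimal sketch (shown in C#; the underlying C call takes the same parameters; the drive letter is arbitrary):

using System;
using System.Runtime.InteropServices;
using System.Text;

static class VolumeInfoDemo
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool GetVolumeInformation(string rootPathName,
        StringBuilder volumeName, int volumeNameSize,
        out uint serialNumber, out uint maxComponentLength,
        out uint fileSystemFlags,
        StringBuilder fileSystemName, int fileSystemNameSize);

    static void Main()
    {
        var fsName = new StringBuilder(261);
        if (GetVolumeInformation(@"C:\", null, 0, out _,
            out uint maxComponent, out _, fsName, fsName.Capacity))
        {
            // Prints e.g. "NTFS: max filename component = 255".
            Console.WriteLine($"{fsName}: max filename component = {maxComponent}");
        }
    }
}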
Not strictly an answer to your specific question, but it might help those who do need to handle long file names.
The Delimon library is a .NET Framework 4 based library on Microsoft TechNet for overcoming the long filenames problem:
Delimon.Win32.IO Library (V4.0).
It has its own versions of key methods from System.IO. For example, you would replace:
System.IO.Directory.GetFiles
with
Delimon.Win32.IO.Directory.GetFiles
which will let you handle long files and folders.
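For instance (a hypothetical call; the path is made up, but the method mirrors its System.IO namesake):

// Tolerates paths beyond MAX_PATH, unlike System.IO.Directory.GetFiles.
string[] files = Delimon.Win32.IO.Directory.GetFiles(
    @"\\server\share\some\very\deeply\nested\folder", "*.pdf");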
From the website:
Delimon.Win32.IO replaces basic file functions of System.IO and supports File & Folder names up to 32,767 characters. This library is written on .NET Framework 4.0 and can be used either on x86 & x64 systems. The File & Folder limitations of the standard System.IO namespace can work with files that have 260 characters in a filename and 240 characters in a folder name (MAX_PATH is usually configured as 260 characters). Typically you run into the System.IO.PathTooLongException error with the standard .NET library.
I am using VB6 SP6
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It also works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
' the file is on the other computer on the network
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
' (MyAcFile is a UDT that is less than 90 bytes long)
' .......... other code that does not reference file or RecNum - then
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with OpportunisticLocking so we turn that off at install - along with some other keys that cause errors in data in Windows networks.
However we have had no problems like this for years, so I think MS have some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea, prone to occasional failure unless a very long delay is used and sapping performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF() if necessary after you first open the file.
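To illustrate the idea (a sketch in C# terms, since the principle is language-agnostic: derive the record number from the file length once at open, then maintain it yourself):

using System.IO;

class RecordWriter
{
    const int RecordSize = 90;          // matches Len = 90 in the VB6 code
    readonly FileStream stream;
    long nextRecord;                    // 1-based, like VB6 random-access records

    public RecordWriter(string path)
    {
        stream = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
        nextRecord = stream.Length / RecordSize + 1; // consult the length once, at open
    }

    public void Put(byte[] record)      // record must be exactly RecordSize bytes
    {
        stream.Seek((nextRecord - 1) * RecordSize, SeekOrigin.Begin);
        stream.Write(record, 0, RecordSize);
        nextRecord++;                   // our own counter, not a fresh LOF() query
    }
}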
Hmmm... is file (as in the Open statement at the top of the code sample) a UNC filename, or something like x:\ where x is a mapped drive? Are you incrementing RecNum? Judging by the code, RecNum is unchanged and hence appears to overwrite the first record... Sorry for sounding, ummm (no pun intended), basic... It would help to show some more code here...
Hope this helps,
Best regards,
Tom.
It can be just a timing issue. In some runs your LOF() call returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size will not be reflected immediately.
In short: your code has exposed an old bug, which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: you may decide to add a delay (it may need to be a significant delay, say 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.
I am creating a utility that will store data on flat file in a specific binary format.
I want the filename extension to be specific to my application. Is there any reason other than the old 8.3 filename limit for restricting the extension to 3 characters, and if not, what is the limit? Can I have myfilename.MyExtensionSoHandsOffEverybodyElse ?
This is a holdover from the old Windows 3.x/MS-DOS days. Today, there are plenty of file names with extensions longer than 3 characters.
If I remember correctly, Windows XP had a maximum character limit for path names (including the file name) of 260 characters (MAX_PATH).
In my experience, having seen a few non-3-character extensions I'd say that it's a matter of tradition, and you're perfectly welcome to use myfilename.MyExtensionSoHandsOffEverybodyElse.
The only good reason for doing this is if you plan to support Windows 9x. If you're only targeting XP and later, as most projects do nowadays, the 8.3 thing is irrelevant.
In fact, Windows itself stores things in long-extension filenames in Vista and later, for example, .search-ms for saved searches.
No, there isn't a good reason to limit the extension to 3 characters. However, a shorter, descriptive name is better if a user has to remember it. For example, most people know what a .html or .doc file would contain.
As long as you make a reasonable attempt to avoid naming collisions with major software, there shouldn't be an issue. A corollary: unless you create some insanely long extension that will only ever be unique to your software (and even then, it's not guaranteed), the extension you choose will always be subject to name collisions when other people choose their program's data file extension, just as you are doing here.