I am not sure whether this is the right place for this question.
I want to know if Windows 7 or XP has any limit on the number of files within a particular folder.
There's no practical limit on the combined sizes of all the files in a folder, though there may be limits on the number of files in a folder. More importantly, there are limits on individual file size that depend on what filesystem you're using on your hard disk. (The "filesystem" is nothing more than the specification of exactly how files are stored on disk.)
Let's break this down by file system:
FAT aka FAT16
FAT, for File Allocation Table, is the successor to the original FAT12 filesystem that shipped with MS-DOS many, many years ago.
Maximum disk size: 4 gigabytes
Maximum file size: 4 gigabytes
Maximum number of files on disk: 65,517
Maximum number of files in a single folder: 512 (if I recall correctly, the root folder "/" had a lower limit of 128).
FAT32
"There's no practical limit on the combined sizes of all the files in a folder, though there may be limits on the number of files in a folder."FAT32 was introduced to overcome some of the limitations of FAT16.
Maximum disk size: 2 terabytes
Maximum file size: 4 gigabytes
Maximum number of files on disk: 268,435,437
Maximum number of files in a single folder: 65,534
NTFS
NTFS, or "New Technology File System" introduced with Windows NT, is a completely redesigned file system.
Maximum disk size: 256 terabytes
Maximum file size: 256 terabytes
Maximum number of files on disk: 4,294,967,295
Maximum number of files in a single folder: 4,294,967,295
Note that when I say "disk" above, I'm really talking about "logical" disks, not necessarily physical. No one makes a 256 terabyte disk drive, but using NTFS you can treat an array of disk drives as a single logical disk. Presumably if you have enough of them, you can build a huge logical drive.
Also note that the NTFS's 256 terabyte limitation may well simply be an implementation restriction - I've read that the NTFS format can support disks up to 16 exabytes (16 times 1,152,921,504,606,846,976 bytes).
Source: http://ask-leo.com/is_there_a_limit_to_what_a_single_folder_or_directory_can_hold.html
According to this source, there's no limit per folder. Well, the limit is the same as the number of files you can store in the volume (NTFS).
Edit: Microsoft link, as pointed out on Server Fault.
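Since the limits above depend on the filesystem rather than on the Windows version, it can help to check programmatically which filesystem a given volume uses before worrying about a particular limit. A minimal sketch using the standard Win32 call GetVolumeInformationA (the drive letter and the lack of error handling are just assumptions for illustration):

#include <windows.h>
#include <stdio.h>

int main ()
{
    char fsName [MAX_PATH + 1] = { 0 };
    // Ask Windows which filesystem backs C:\ -- e.g. "NTFS", "FAT32", "exFAT"
    if (GetVolumeInformationA ("C:\\", NULL, 0, NULL, NULL, NULL, fsName, sizeof fsName))
        printf ("C: is formatted as %s\n", fsName);
    return 0;
}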
I have read that the Intel chips support up to 1 GB virtual memory page sizes. Using VirtualAlloc with MEM_LARGE_PAGES gets you 2MB pages. Is there any way to get a different page size? We are currently using Server 2008 R2, but are planning to upgrade to Server 2012.
Doesn't look like it, the Large Page Support docs provide no mechanism for defining the size of the large pages. You're just required to make allocations that have a size (and alignment if explicitly requested) that are multiples of the minimum large page size.
I suppose it's theoretically possible that Windows could implement multiple large page sizes internally (the API function only tells you the minimum size), but they don't expose it at the API level. In practice, I'd expect diminishing returns for larger and larger pages; the overhead of TLB cache misses just won't matter as much when you're already reducing the TLB usage by several orders of magnitude.
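For reference, the conventional large-page path looks roughly like the sketch below: GetLargePageMinimum reports the single supported large-page size, and the allocation size must be a multiple of it. This is only an illustrative sketch, and it assumes the process has been granted the "Lock pages in memory" (SeLockMemoryPrivilege) right, without which the allocation fails.

#include <windows.h>
#include <stdio.h>

int main ()
{
    SIZE_T large = GetLargePageMinimum ();        // typically 2 MB on x64; 0 if large pages are unsupported
    if (large == 0) return 1;

    // The size must be a multiple of the large-page minimum; MEM_RESERVE and
    // MEM_COMMIT have to be specified together for large-page allocations.
    void * p = VirtualAlloc (NULL, 64 * large,
                             MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                             PAGE_READWRITE);
    if (p) printf ("got %zu bytes backed by large pages\n", (size_t) (64 * large));
    else   printf ("allocation failed (privilege missing or memory too fragmented)\n");
    return p ? 0 : 1;
}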
In recent versions of Windows 10 (or 11 and later) it is finally possible to choose 1GB (as opposed to 2MB) pages to satisfy large allocations.
This is done by calling VirtualAlloc2 with a specific set of flags (you will need a recent SDK for the constants):
MEM_EXTENDED_PARAMETER extended {};
extended.Type = MemExtendedParameterAttributeFlags;
extended.ULong64 = MEM_EXTENDED_PARAMETER_NONPAGED_HUGE;   // request non-paged huge (1 GB) pages
// size must be a multiple of the large-page granularity; the process needs the "Lock pages in memory" privilege
void * p = VirtualAlloc2 (GetCurrentProcess (), NULL, size,
                          MEM_LARGE_PAGES | MEM_RESERVE | MEM_COMMIT,
                          PAGE_READWRITE, &extended, 1);
If the 1GB page(s) cannot be allocated, the function fails.
It might not be necessary to explicitly request 1GB pages if your software already uses 2MB ones.
Quoting Windows Internals, Part 1, 7th Edition:
On Windows 10 version 1607 x64 and Server 2016 systems, large pages may also be mapped with huge pages, which are 1 GB in size. This is done automatically if the allocation size requested is larger than 1 GB, but it does not have to be a multiple of 1 GB. For example, an allocation of 1040 MB would result in using one huge page (1024 MB) plus 8 “normal” large pages (16 MB divided by 2 MB).
Side note:
Unfortunately the flags above only work for VirtualAlloc2, not for creating shared sections (CreateFileMapping2), where a new SEC_HUGE_PAGES flag also exists but always returns ERROR_INVALID_PARAMETER. But again, given the quote, Windows might be using 1GB pages transparently where appropriate anyway.
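Since the huge-page request fails outright rather than silently degrading, one possible pattern (just a sketch building on the snippet above, with a hypothetical helper name) is to retry without the extended parameter so the allocation falls back to ordinary 2 MB large pages:

#include <windows.h>

// Hypothetical helper: try huge (1 GB) pages first, fall back to ordinary large pages on failure.
void * AllocLargeOrHuge (SIZE_T size, MEM_EXTENDED_PARAMETER * huge)
{
    void * p = VirtualAlloc2 (GetCurrentProcess (), NULL, size,
                              MEM_LARGE_PAGES | MEM_RESERVE | MEM_COMMIT,
                              PAGE_READWRITE, huge, 1);
    if (p == NULL)                                 // 1 GB pages unavailable: retry without the extended parameter
        p = VirtualAlloc2 (GetCurrentProcess (), NULL, size,
                           MEM_LARGE_PAGES | MEM_RESERVE | MEM_COMMIT,
                           PAGE_READWRITE, NULL, 0);
    return p;
}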
I am profiling binary data which has an increasing Unix block size (the one you get from stat > Blocks) when the number of events is increased, as in the following figure, but the byte distance between events stays constant. I have noticed some changes in other fields of the file which may explain the increasing Unix block size.
The Unix block size appears to be a dynamic measure. I am interested in why it increases with bigger memory units on some systems; I had assumed it should be constant.
I used different environments to provide the stat output:
Debian Linux 8.1 with its default stat
OSX 10.8.5 with Xcode 6 and its default stat
Greybeard's comment may have the answer to the blocks behaviour:
The stat (1) command used to be a thin CLI to the stat (2) system
call, which used to transfer relevant parts of a file's inode. Pretty
early on, the meaning of the st_blksize member of the C struct
returned by stat (2) was changed to "preferred" blocksize for
efficient file system I/O, which carries well to file systems with
mixed block sizes or non-block oriented allocation.
How can you measure the block size in case (1) and (2) separately?
Why can the Unix block size increase with bigger memory size?
"Stat blocks" is not a block size. It is number of blocks the file consists of. It is obvious that number of blocks is proportional to size. Size of block is constant for most file systems (if not all).
When viewing details of a file using Finder, different values are shown for how much space the file occupies. For example, a file's size is reported as 28.8 KB, but it takes up 33 KB on disk. Does anyone know the explanation?
Disk space is allocated in blocks. Meaning, in multiples of a "block size".
For example, on my system a 1 byte file is 4096 bytes on disk.
That's 1 byte of content & 4095 bytes of unused space.
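The numbers in the question are consistent with this, assuming a 4,096-byte allocation block (the actual block size depends on the volume): roughly 28,800 bytes of content needs 8 blocks, i.e. 32,768 bytes, which Finder displays as about 33 KB. The rounding is just a ceiling division, sketched here:

#include <stdio.h>

// Space occupied on disk: round the logical size up to a whole number of blocks.
long long on_disk (long long size, long long block)
{
    return ((size + block - 1) / block) * block;
}

int main ()
{
    printf ("%lld\n", on_disk (28800, 4096));   // prints 32768, shown by Finder as roughly 33 KB
    return 0;
}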
I've got a 3TB drive partitioned like so:
TimeMachine 800,000,000,000 Bytes
TELUS 2,199,975,890,944 Bytes
I bought an identical drive so that I could mirror the above in case of failure.
Using Disk Utility, partitioning gives the new drive sizes that differ from the above by several hundred thousand bytes, so when I try to add them to the RAID set, it tells me the drive is too small.
I figured I could use terminal to specify the exact precise sizes I needed so that both partitions would be the right size and I could RAID hassle-free...
I used the following command:
sudo diskutil partitionDisk disk3 "jhfs+" TimeMachine 800000000000b "jhfs+" TELUS 2199975886848b
But the result is TimeMachine being 799,865,798,656 Bytes and TELUS being 2,200,110,092,288 Bytes. The names are identical to the originals and I'm also formatting them in Mac OS Extended (Journaled), like the originals. I can't understand why I'm not getting the same exact sizes when I'm being so specific with Terminal.
Edit for additional info: Playing around with the numbers, regardless of what I do I am always off by a minimum of 16,384 bytes. I can't seem to get the first partition, TimeMachine to land on 800000000000b on the nose.
So here's how I eventually got the exact sizes I needed:
Partitioned the Drive using Disk Utility, stating I wanted to split it 800 GB and 2.2 TB respectively. This yielded something like 800.2GB and 2.2TB (but the 2.2 TB was smaller than the 2,199,975,890,944 Bytes required, of course).
Using Disk Utility, I edited the size of the first partition to 800 GB (from 800.2GB), which brought it down to 800,000,000,000 bytes on the nose, as required.
I booted into GParted Live so that I could edit the second partition with more accuracy than Terminal and Disk Utility and move it around as necessary.
In GParted, I looked at the original drive for reference, noting how much space it had between partitions for the Apple_Boot partitions that Disk Utility adds when you add a partition to a RAID array (I think it was 128 MB in GParted).
I deleted the second partition and recreated it leaving 128 MB before and after the partition and used the original drive's second partition for size reference.
I rebooted into OS X.
At this point I couldn't add the second partition to the RAID because I think it ended up slightly larger than the 2,199,975,890,944 bytes required (i.e., it didn't leave enough space after it for that Apple_Boot partition); I got an error when attempting it in Disk Utility.
I reformatted the partition using Disk Utility just so that it would be Mac OS Extended (Journaled) rather than plain HFS+, to be safe (matching the original).
I used Terminal's diskutil resizeVolume [drive's name] 2199975895040b command to get it to land on the required 2,199,975,890,944 Bytes (notice how I had to play around with the resize size, making it bigger than my target size to get it to land where I wanted).
Added both partitions to their respective RAID arrays using Disk Utility and rebuilt them successfully.
... Finally.
How can I calculate storage when FTPing to a mainframe? I was told LRECL will always remain 80. I am not sure how I can calculate PRI and SEC dynamically based on the file size...
QUOTE SITE LRECL=80 RECFM=FB CY PRI=100 SEC=100
If the site has SMS, you shouldn't need to. But if you do need to calculate it: the number of tracks is the size of the file in bytes divided by 56,664, and the number of cylinders is the size of the file in bytes divided by 849,960. In either case, round up.
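As a worked example of that arithmetic (the 56,664 and 849,960 figures are the per-track and per-cylinder capacities of a 3390 device quoted above, and the file size is just an assumed input):

#include <stdio.h>

// Round-up division for primary space estimates on a 3390 device.
long long round_up_div (long long a, long long b) { return (a + b - 1) / b; }

int main ()
{
    long long fileBytes = 50LL * 1024 * 1024;                        // example: a 50 MiB file
    printf ("tracks   : %lld\n", round_up_div (fileBytes, 56664));   // bytes per track
    printf ("cylinders: %lld\n", round_up_div (fileBytes, 849960));  // bytes per cylinder (15 tracks)
    return 0;
}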
Unfortunately IBM's FTP server does not support the newer space allocation specifications in number of records (the JCL parameter AVGREC=U/M/K plus the record length as the first specification in the SPACE parameter).
However, there is an alternative, and that is to fall back on one of the lesser-used SPACE parameters - the blocksize specification. I will assume 3390 disk types for simplicity, and standard data sets.
For fixed-length records, you want to calculate the largest block size that will fit in half a track (27994 bytes), because z/OS only supports block sizes up to 32760. Since you are dealing with 80-byte records, that number is 27920. Divide your file size by that number and that will give you the number of blocks. Then in a SITE server command, specify
SITE BLKSIZE=27920 LRECL=80 RECFM=FB BLOCKS=27920 PRI=calculated# SEC=a_little_extra
This is equivalent to SPACE=(27920,(calculated#,a_little_extra)).
z/OS space allocation calculates the number of tracks required and rounds up to the nearest track boundary.
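Putting the fixed-length case together: a 27,920-byte block holds 27,920 / 80 = 349 records, so the primary quantity is simply the file size divided by 27,920, rounded up. A sketch of that calculation (the file size is an assumed example value):

#include <stdio.h>

int main ()
{
    long long fileBytes = 10000000LL;                              // example input file size in bytes
    long long blkSize   = 27920LL;                                 // largest multiple of 80 fitting in half a 3390 track
    long long blocks    = (fileBytes + blkSize - 1) / blkSize;     // round up to whole blocks
    printf ("PRI=%lld blocks of %lld bytes\n", blocks, blkSize);
    return 0;
}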
For variable-length records, if your reading application can handle it, always use BLKSIZE=27994. The reason I have the warning about the reading application is that even today there are applications from ISVs that still have strange hard-coded maximum variable length blocks such as 12K.
If you are dealing with PDSEs, always use BLKSIZE=32760 for variable-length and the closest-to-32760 for fixed-length in your specification (32720 for FB/80), but calculate requirements based on BLKSIZE=4096. PDSEs are strange in their underlying layout; the physical records are 4096 bytes, which is because there is some linear data set VSAM code that handles the physical I/O.