Get file size on disk from file size - Windows

I have a file on a Windows machine with this size, and I need to calculate the file size on disk from the file size:
Size
3,06 MB (3.216.171 bytes)
Size on disk
3,07 MB (3.219.456 bytes)
I have a file system with 512 bytes/sector.
How do I calculate how many sectors are needed to store the file, given the file size?
I understand that 3219456 / 512 = 6288, but how do I calculate the size on disk from the file size?
Is there a way to get the size on disk from the file size?
Am I missing something?

Your file length is 0x31132B.
The required storage (rounded up to the nearest cluster) is 0x312000. Your clusters are either 4kB (0x1000) or 8kB (0x2000).
This can be computed as:
clusterSize * ceil(fileSize * 1.0 / clusterSize)
(The 1.0 prevents integer division.) In integer math, it is:
clusterSize * (1 + (fileSize - 1) / clusterSize)
You get the cluster size from GetDiskFreeSpace, which you'll need to call anyway to figure out if your file will fit. See this existing answer:
Getting the cluster size of a hard drive (through code)
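For illustration, a minimal C sketch putting the two together; the drive root "C:\\" and the file size are placeholders from the question, and error handling is minimal:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

        /* Ask the volume that will hold the file for its cluster geometry. */
        if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                               &freeClusters, &totalClusters)) {
            fprintf(stderr, "GetDiskFreeSpace failed: %lu\n", GetLastError());
            return 1;
        }

        ULONGLONG clusterSize = (ULONGLONG)sectorsPerCluster * bytesPerSector;
        ULONGLONG fileSize    = 3216171;   /* logical size from the question */

        /* Round up to a whole number of clusters (integer math, fileSize >= 1). */
        ULONGLONG sizeOnDisk = clusterSize * (1 + (fileSize - 1) / clusterSize);

        printf("cluster size: %llu bytes\n", clusterSize);
        printf("size on disk: %llu bytes\n", sizeOnDisk); /* 3219456 with 4 kB clusters */
        return 0;
    }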
Of course, other things can affect the true storage used by a file: if a directory doesn't have enough space in its cluster for the new entry, if you are storing metadata with it that doesn't fit in the directory, or if you have compression enabled. But for an "ordinary" file system, the above calculations will be correct.

Related

WinAPI: set file size only and not physical size

When creating a file with zero size, I would like to set the logical size of the file to be bigger than zero.
OneDrive, for example, shows dehydrated files with a Size on disk of zero bytes, while the Size shows a value bigger than zero.
Can this behavior be achieved through Windows API functions?
Thanks.
Sample OneDrive properties window for a file:

Has curlftpfs a maximum size of mounted space and how can I skip it?

I mounted a ftpserver into my local OS:
curlftpfs user:pass@ftp.server.com /var/test/
I noticed using pydf that there is maximal size of this volume at about 7.5GB:
Filesystem Size Used Avail Use% Mounted on
curlftpfs#ftp://user:pass@ftp.server.com 7629G 0 7629G 0.0 [.........] /var/test
Then I tried to fill the disk space using dd with an 8 GB file, but this also failed at the given size:
dd if=/dev/zero of=upload_test bs=8000000000 count=1
dd: memory exhausted by input buffer of size 8000000000 bytes (7.5 GiB)
The FTP user has unlimited traffic and disk space at remote server.
So my question is: Why is there a limit at 7.5GB and how can I skip it?
Looking at the source code of curlftpfs 0.9.2, which is the last released version, this 7629G appears to be a hardcoded default.
In other words, curlftpfs doesn't check the actual size of the remote filesystem and uses a predefined static value instead. Moreover, such a check can't be implemented, because the FTP protocol doesn't provide information about free space.
This means that the failure of your file transfer at 7.5 GB is not caused by the reported free space; the two differ by three orders of magnitude.
Details
The function ftpfs_statfs, which implements the statfs FUSE operation, defines the number of filesystem blocks as follows:
buf->f_blocks = 999999999 * 2;
And the size of filesystem block as:
buf->f_bsize = ftpfs.blksize;
Which is defined elsewhere as:
ftpfs.blksize = 4096;
So putting this all together gives me 999999999 * 2 * 4096 / 2^30 GB ~= 7629.3945236206055 GB, which matches the number in your pydf output: 7629G.
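Just to sanity-check that arithmetic, a trivial C snippet using the two hardcoded values quoted above:

    #include <stdio.h>

    int main(void)
    {
        /* The values hardcoded in ftpfs_statfs in curlftpfs 0.9.2. */
        unsigned long long blocks = 999999999ULL * 2;
        unsigned long long bsize  = 4096;

        printf("%.10f GiB\n", (double)(blocks * bsize) / (1ULL << 30));
        return 0;
    }

which prints 7629.3945236206, matching the pydf figure.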
It's an old question; however, for completeness' sake:
dd's bs ("block size") option makes it buffer the specified amount of data in memory before writing the chunk to the output. With a massive block size like your 8 GB, it's entirely possible your system simply did not have the free memory (or even the memory capacity!) to hold the entire buffer at once. Retrying with a smaller block size and an appropriately higher count for the same total size should work as expected:
dd if=/dev/zero of=upload_test bs=8000000 count=1000

NtQueryInformationFile returns incorrect allocation size

I use NtQueryInformationFile with the FILE_STANDARD_INFORMATION struct to retrieve the allocation size of a file. But for small files it returns an incorrect¹ result. For example, a text file with a size of 1 byte returns an allocation size of 8 bytes instead of 4096 bytes. Where is the problem?
¹ I'm assuming that this value is incorrect because Explorer (on Windows XP Checked Build in my case) reports a higher figure for the size on disk (4096 bytes for a file of size 1).
The file size is in the EndOfFile member. AllocationSize is how much disk space is allocated for the file:
Usually, this value is a multiple of the sector or cluster size of the
underlying physical device.
The 8 bytes you see for a 1-byte file is likely because NTFS stores very small files resident in the MFT record, so less than a full cluster is allocated.
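For what it's worth, the documented GetFileInformationByHandleEx API fills a FILE_STANDARD_INFO with the same two fields; a minimal sketch (the file path is a placeholder):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "test.txt" is just an example path. */
        HANDLE h = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        FILE_STANDARD_INFO info;
        if (GetFileInformationByHandleEx(h, FileStandardInfo, &info, sizeof(info))) {
            printf("EndOfFile (size):      %lld\n", info.EndOfFile.QuadPart);
            printf("AllocationSize (disk): %lld\n", info.AllocationSize.QuadPart);
        }
        CloseHandle(h);
        return 0;
    }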

Files take up more space on the disk

When viewing the details of a file using Finder, different values are shown for the file's size and for how much space it occupies on disk. For example, a file of 28.8KB takes up 33KB on disk. Does anyone know the explanation?
Disk space is allocated in blocks, meaning in multiples of a "block size".
For example, on my system a 1 byte file is 4096 bytes on disk.
That's 1 byte of content & 4095 bytes of unused space.
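A minimal POSIX sketch showing both numbers (per POSIX, st_blocks counts 512-byte units, independent of the filesystem's block size):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }

        /* st_size is the logical length; st_blocks is what is actually allocated. */
        printf("size:         %lld bytes\n", (long long)st.st_size);
        printf("size on disk: %lld bytes\n", (long long)st.st_blocks * 512LL);
        return 0;
    }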

memory_limit=80M. What is the maximum image size for imagecreatefromjpeg()?

I have a web host that gives a maximum memory_limit of 80M (i.e. ini_set("memory_limit","80M");).
I'm using a photo upload that uses the function imagecreatefromjpeg();
When I upload large images it gives the error
"Fatal error: Allowed memory size of 83886080 bytes exhausted"
To what maximum size (in bytes) can I restrict the users' images?
Or does the memory_limit depend on some other factor?
The memory size of 83886080 bytes from the error message is exactly 80 Megabytes, so the 80M limit really is being exhausted. You may want to check whether you can still increase the value somewhere.
Other than that, the general rule for image manipulation is that it will take at least
image width x image height x 3
bytes of memory to load or create an image. (One byte for red, one byte for green, one byte for blue, possibly one more for alpha transparency)
By that rule, a 640 x 480 pixel image will need at least 0.9 Megabytes (921,600 bytes) - not including overhead and the space occupied by the script itself - while a 4000 x 3000 photo already needs at least 36 Megabytes.
It's impossible to determine a limit on the JPG file size in bytes because JPG is a compressed format with variable compression rates. You will need to go by image resolution and set a limit on that.
If you don't have that much memory available, you may want to look into more efficient methods of doing what you want, e.g. tiling (processing one part of an image at a time) or, if your provider allows it, using an external tool like ImageMagick (which consumes memory as well, but outside the PHP script's memory limit).
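As a rough illustration of that rule, a back-of-the-envelope check in C; the assumption that half the limit remains available for pixel data is a guess, not a measurement:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed: 80 MB memory_limit, 4 bytes per pixel (RGB + alpha),
           and half the limit reserved for the rest of the script. */
        const long long limitBytes  = 80LL * 1024 * 1024;
        const long long pixelBudget = (limitBytes / 2) / 4;

        printf("pixel budget: %lld (~%.1f megapixels)\n",
               pixelBudget, pixelBudget / 1e6);
        return 0;
    }

So with these assumptions, images up to roughly 10 megapixels would be a safe cut-off.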
Probably your script uses more memory than just the image itself. Try debugging your memory consumption.
One quick-and-dirty way is to call memory_get_usage and memory_get_peak_usage at certain points in your code, and especially in a custom error handler and shutdown function. This can let you know which exact operations cause the memory exhaustion.
