"The lseek command repositions the offset of the descriptor fildes to the argument offset according to the directive whence and is mainly used in file system implementation for Indexed Disk Allocation"
I was reading my professor's PowerPoint and came upon this statement. What I don't understand is how the lseek command ties into indexed disk allocation, as compared to linked and contiguous allocation. Could someone explain why it says indexed disk allocation is easier to implement than contiguous or linked?
From what I read from another source: "The lseek() command will degrade into O(N) time because we will need to scan the file allocation table sequentially to access file data."
Wouldn't this relate more to contiguous disk allocation, since it allocates blocks of data sequentially?
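For concreteness, here is the kind of call I understand the slide to be describing (the file name and offset are just examples):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);     /* hypothetical file */

    /* Jump straight to byte 1048576. With indexed allocation the
     * filesystem can resolve this in one step by looking up entry
     * offset / block_size in the file's index block; with linked
     * allocation it has to follow that many block pointers first. */
    off_t pos = lseek(fd, 1048576, SEEK_SET);

    (void)pos;
    close(fd);
    return 0;
}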
Is it possible in Win32 to get a writeable (or write-only) range of "garbage" virtual address space (i.e., via VirtualAlloc, VirtualAlloc2, VirtualAllocEx, or other) that never needs to be persisted, and thus ideally is never backed by physical memory or pagefile?
This would be a "hole" in memory.
The scenario is for simulating a dry run of a sequential memory-writing operation, just in order to obtain the size that it actually consumes. You would be able to use the exact same code used for actual writing, but instead pass in an un-backed "garbage" address range that essentially ignores or discards anything that's written to it. In this example, the size of the "void" address range could be 2^64 bytes ≈ 18.4 EB (why not? it's nothing, after all), and all you're interested in is the final value of an advancing pointer.
[edit:] See the comments section for the cleverest answer: map a single 4K page multiple times in sequence, tiling the entire "empty" range.
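A rough POSIX sketch of that tiling idea (the Win32 version would instead map a pagefile-backed section repeatedly; all names and sizes below are illustrative, and per-process mapping limits cap how far this scales):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t page  = 4096;
    const size_t tiles = 16384;              /* a 64 MB "void", for example */

    int fd = shm_open("/void-page", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, (off_t)page);              /* one real backing page */

    /* Reserve a contiguous region, then tile the same page across it. */
    char *base = mmap(NULL, page * tiles, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    for (size_t i = 0; i < tiles; i++)
        mmap(base + i * page, page, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);

    /* Writes anywhere in [base, base + page*tiles) land in the single
     * backing page: the data is garbage, but the writes always succeed. */
    shm_unlink("/void-page");
    return 0;
}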
This isn't possible. If you have code that attempts to write to memory, then the virtual memory needs to be backed by something.
However, if you modified your code to use the stream pattern, then you could provide a stream implementation that ignores the writes and just tracks the size.
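A minimal sketch of what that could look like (the names here are made up for illustration):

#include <stddef.h>

typedef struct {
    size_t written;                      /* total bytes "written" so far */
} counting_sink;

/* Dry-run sink: discards the payload and only advances the counter. */
static void sink_write(counting_sink *s, const void *data, size_t len)
{
    (void)data;                          /* ignore the actual bytes */
    s->written += len;                   /* track only the size */
}

/* Usage: run the exact same writing code against this sink, then read
 * s.written to get the size the real operation would consume. */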
When we talk about memory-mapped files, it is generally mentioned that a portion of a file can be mapped into a process's address space and that we can do random access on it using pointers. I have also read in many places that I should have sufficient memory to accommodate the whole file in memory. These two statements confuse me: if we need sufficient memory for the complete file, then what is the advantage? I know about the benefits of avoiding the extra kernel-space copy of the contents, and the speed gains since data is not read block-by-block or byte-by-byte as with streams.
You don't need to have memory for the entire file - mmap loads lazily, so the benefit is that you can modify a large file without using a lot of RAM. Another neat trick is iterating over it backwards without having to chunk it.
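A rough illustration of both points, assuming a large file named big.dat:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.dat", O_RDONLY);      /* hypothetical large file */
    struct stat st;
    fstat(fd, &st);

    /* Pages are faulted in on demand, so the whole file need not fit in RAM. */
    const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

    /* Walk the file backwards with plain pointer arithmetic - no manual
     * chunking or seeking required. */
    long newlines = 0;
    for (off_t i = st.st_size - 1; i >= 0; i--)
        if (p[i] == '\n')
            newlines++;

    printf("%ld newlines\n", newlines);
    munmap((void *)p, (size_t)st.st_size);
    close(fd);
    return 0;
}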
Assume I save a text file to HDD storage (assume the disk is new, and so not fragmented) and the file is named A, with a size of, say, 10 MB.
I presume file A occupies some space on the disk as shown, where x is unoccupied space on the disk:
AAAAAAAAAAAAAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Now, I create and save another file B of some size. So B will be saved as:
AAAAAAAAAAAAABBBBBBBBBBBBBBBBBxxxxxxxxxxxxxxxxxxxxxxxxxxx
As the disk is not fragmented, I assume the storage will be contiguous.
Here, what if I edit file A and reduce its size to 2 MB? Can you say how the space will be allocated now?
Some options I could think of are:
AAAAAAxxxxxxxxxBBBBBBBBBBBBBBBBxxxxxxxxxxxxxxxxxxxxxxxxxxxx
or
AAxxxAAxxxAxAxxBBBBBBBBBBBBBBBBxxxxxxxxxxxxxxxxxxxxxxxxxxxx
or
a totally new location freeing up the bigger chunk for other files.
xxxxxxxxxxxxxxxBBBBBBBBBBBBBBBBAAAAAAxxxxxxxxxxxxxxxxxxxxxx
or is it done some other way, based on some algorithm or data structure?
A lot of this would depend upon what type of filesystem you are using (and also how the OS interacts with it). The behavior of an NTFS filesystem in Windows may be nothing like the behavior of an ext3 filesystem in Ubuntu for the same set of logical operations.
Generally speaking, however, most modern filesystems define a file as a series of pointers to blocks on the disk. There is a minimum block size that describes the smallest allocatable block (typically ranging from 512 bytes to 4 KBytes), so files that are less than this size or not some exact multiple of this size will have some amount of extra space allocated to them.
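You can actually observe that slack space with stat(): st_size is the logical length, while st_blocks reports what was allocated. A quick sketch (the file name stands in for your example, and the 512-byte unit for st_blocks is the traditional convention on most systems):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("A", &st) == 0)                     /* "A" = your 10 MB file */
        printf("logical size: %lld bytes, allocated: %lld bytes\n",
               (long long)st.st_size,
               (long long)st.st_blocks * 512);   /* st_blocks: 512-byte units */
    return 0;
}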
So what happens when you allocate a 10 MB file 'A'? The filesystem reserves 10 MB worth of blocks for the file contents (perhaps even allowing a few extra blocks at the end to accommodate minor edits to the file or its metadata). Ideally these blocks will be contiguous, as in your example. When you edit 'A' and make it smaller, the filesystem will release some or all of the blocks allocated to 'A', and update its references to include any new blocks that were allocated, if necessary. Most likely it releases all of them: in most cases, editing 'A' involves writing out the entire contents of 'A' to disk again, so there is little reason for the filesystem to prefer keeping 'A' in the same physical location over writing the data to a new location elsewhere on the disk.
With that said, in the typical case and using a modern filesystem and OS, I would expect your example to produce the following final state on disk ('b' and 'a' represent extra bytes allocated to 'B' and 'A' that do not contain any meaningful data):
xxxxxxxxxxxxxxxBBBBBBBBBBBBBBBBbbAAAAAAaaxxxxxxxxxxxxxxxxxxxxxx
But real-world results will of course vary by filesystem, OS, and potentially other factors. For instance, when using an SSD, fragmentation becomes irrelevant, because any section of the disk can be accessed at very low latency and with no seek penalty. At the same time, it becomes important to minimize write cycles so that the device doesn't wear out prematurely, so in that case the OS may favor leaving 'A' in place as much as possible in order to minimize the number of sectors that need to be overwritten.
So the short answer is, "it depends".
How allocation is done depends entirely on the file system type (e.g. FAT32, NTFS, jfs, reiser, etc.) and the driver software. Your assumption that the file will be stored contiguously is not necessarily true - it may be more performant to store it in a different pattern, depending on the hardware. For example, on a disk with 16 heads and a block size of 512 bytes, it could be most efficient to store 8 KB of data as one block under each of the 16 heads, all on the same cylinder, so it can be accessed without seeking.
OTOH, with recent hardware that does not involve rotating mechanical parts, the story changes dramatically - a concept like "fragmentation" suddenly becomes meaningless, because the access time for each block is the same, no matter in which order the blocks are accessed.
No, it's like this:
First you create file A (here capital A stands for data actually used by A, 'a' for space reserved for A, and x for free space):
AAAAAAAAAAAAAaaaaaaaxxxxxxxxxxxxxx
Then B is added:
AAAAAAAAAAAAAaaaaaaaBBBBbbbbbbbbbb
Then C is added, but there is no unreserved space left:
AAAAAAAAAAAAAaaaaaaaBBBBbbbbCCCccc
If A is truncated, this is what will happen:
AAAAAaaaaaaaxxxxxxxxBBBBbbbbCCCccc
If B is now expanded, this will happen:
AAAAAaaaaaaaBBBBxxxxBBBBBBBBCCCccc
You see that the data for B is no longer all in one place; this is called fragmentation. When you run a defragmentation tool, the data is placed close together again.
I was wondering whether there is any reasonably portable way to take existing, unshared heap memory and convert it into shared memory. The use case is a block of memory which is too large for me to want to copy unnecessarily (i.e. into a new shared memory segment) and which is allocated by routines out of my control. The primary target is *nix/POSIX, but I would also be interested to know if it can be done on Windows.
Many *nixes have Plan 9's procfs, which allows reading a process's memory by inspecting /proc/{pid}/mem:
http://en.wikipedia.org/wiki/Procfs
You tell the other process your pid, the size, and the base address, and it can simply read the data (or mmap the region into its own address space).
EDIT: Apparently you can't open /proc/{pid}/mem without a prior ptrace(), so this is basically worthless.
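For reference, a minimal sketch of the /proc/{pid}/mem read (bearing in mind the ptrace() requirement above; the function name is made up):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Read len bytes at address addr from process pid. The caller must
 * already be ptrace()-attached, per the edit above. */
ssize_t read_remote(pid_t pid, uintptr_t addr, void *buf, size_t len)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%d/mem", (int)pid);

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    ssize_t n = pread(fd, buf, len, (off_t)addr);  /* base address = offset */
    close(fd);
    return n;
}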
Under most *nixes, ptrace(2) allows you to attach to a process and read its memory.
The ptrace method does not work on OS X; you need more magic there:
http://www.voidzone.org/readprocessmemory-and-writeprocessmemory-on-os-x/
What you need under Windows is the function ReadProcessMemory.
Googling "What is ReadProcessMemory for $OSNAME" seems to return comprehensive result sets.
You can try using mmap() with the MAP_FIXED flag and the address of your unshared memory (allocated from the heap). However, if you hand mmap() your own pointer, it is constrained to be aligned and sized according to the memory page size (which can be queried with sysconf()), since mapping is only possible in whole pages.
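To illustrate the constraint, a sketch that computes the page-aligned window inside an existing heap block (ptr and len are hypothetical inputs describing that block):

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* mmap() with MAP_FIXED only accepts page-aligned, page-sized ranges,
 * so only the aligned window inside the heap block can be remapped. */
static size_t aligned_window(void *ptr, size_t len, void **start)
{
    uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t lo = ((uintptr_t)ptr + page - 1) & ~(page - 1);  /* round up   */
    uintptr_t hi = ((uintptr_t)ptr + len) & ~(page - 1);       /* round down */

    *start = (void *)lo;
    return hi > lo ? (size_t)(hi - lo) : 0;  /* bytes that can be remapped */
}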
I am looking to optimize my disk IO and am trying to find out what the disk cache size is. system_profiler is not telling me; where else can I look?
edit: my program is processing entire volumes: I'm doing a secure wipe, so I loop through all of the blocks on the volume, reading, randomizing the data, and writing. If I read/write 4k blocks per IO operation, the entire job is significantly faster than reading/writing a single block per operation. So my question stems from my search for the ideal size of a r/w operation (ideal in terms of performance, i.e. speed). Please do not point out that a wipe program doesn't need the read operation; just assume that I do. Thanks.
Mac OS X uses a Unified Buffer Cache. What that means is that, in the kernel, VM objects and files are at some level the same thing, and the amount of memory available for caching depends entirely on the VM pressure in the rest of the system. It also means read and write caching is unified: if an item in the read cache is written to, it just gets marked dirty and will then be written to disk when changes are committed.
So the disk cache may be very small or gigabytes large, and it changes dynamically as the system is used. Because of this, trying to determine the cache size and optimize based on it is a losing fight. You are much better off doing things that inform the cache how to operate better, like checking what the underlying device's optimal IO size is, or identifying data that should not be cached and using F_NOCACHE.
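For example, on Mac OS X you can turn caching off on the descriptor you use for the wipe (the device path here is hypothetical):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/rdisk2", O_RDWR);    /* hypothetical raw device */
    if (fd >= 0) {
        fcntl(fd, F_NOCACHE, 1);             /* non-zero argument turns
                                                caching off for this fd */
        /* ... read, randomize, write loop goes here ... */
        close(fd);
    }
    return 0;
}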