How to set offset for LockFileEx like LockFile - winapi

I need to lock a file for read and write operations, and preferably I would like to lock only a region. Since I need to use LockFileEx instead of LockFile, I don't understand how to specify the lock region: there are only the nNumberOfBytesToLockLow/High parameters, with no dwFileOffsetLow like LockFile has.
Does it only allow locking from the beginning of the file?
Thanks

The file offset is part of the lpOverlapped parameter: set the Offset and OffsetHigh members of the OVERLAPPED structure to the low and high 32 bits of the region's starting byte offset.
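As a minimal sketch (assuming hFile is a handle you already opened with CreateFile; the offset and length values here are just for illustration), the region's starting offset goes into the OVERLAPPED structure and its length into the two nNumberOfBytesToLock parameters:

OVERLAPPED ov = {0};
ov.Offset     = 4096;   // low 32 bits of the region's starting offset
ov.OffsetHigh = 0;      // high 32 bits of the starting offset

// exclusive lock on 100 bytes beginning at that offset
BOOL ok = LockFileEx(hFile,
                     LOCKFILE_EXCLUSIVE_LOCK,
                     0,     // dwReserved, must be zero
                     100,   // nNumberOfBytesToLockLow
                     0,     // nNumberOfBytesToLockHigh
                     &ov);

Unlock later with UnlockFileEx, passing the same offset and length.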

Related

Confusion about file offset in Win32 LockFileEx API

According to the LockFileEx() documentation, a file offset is specified in lpOverlapped->Offset/OffsetHigh. But when debugging winword.exe to analyze its file system behavior, I see it call LockFileEx() on a 122-byte file with Offset=0xfffffffb and OffsetHigh=0xffffffff, and the call completes successfully. Apparently this is not a valid offset; what does this mean?
From MSDN:
Locking a region that goes beyond the current end-of-file position is not an error.
They could be using the lock as some kind of flag or for synchronization.
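As a hedged illustration of that pattern (hFile is an assumed, already-open handle): an exclusive lock on a region far past end-of-file never collides with real data, so it can serve as a cross-process mutex:

// use a 1-byte lock far past EOF purely as a synchronization flag
OVERLAPPED ov = {0};
ov.Offset     = 0xFFFFFFFB;  // the same out-of-range offset seen above
ov.OffsetHigh = 0xFFFFFFFF;

if (LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &ov)) {
    // ... work guarded by the "flag" lock ...
    UnlockFileEx(hFile, 0, 1, 0, &ov);  // release the flag
}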

Is writing to a unix file through shell script is synchronized?

I have a requirement where many threads will call the same shell script to perform some work and then write their output (a single line of text) to a common text file.
Since many threads will try to write data to the same file, my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
Performing a short single write to a file opened for append is mostly atomic; you can get away with it most of the time (depending on your filesystem). But if you want a guarantee that your writes won't interrupt each other, or to write arbitrarily long strings, or to perform multiple writes, or to perform a block of writes and be assured that their contents will be next to each other in the resulting file, then you'll want to lock.
While not part of POSIX (unlike the C library call for which it's named), the flock tool provides the ability to perform advisory locking ("advisory" -- as opposed to "mandatory" -- meaning that other potential writers need to voluntarily participate):
(
flock -x 99 || exit # lock the file descriptor
echo "content" >&99 # write content to that locked FD
) 99>>/path/to/shared-file
The use of file descriptor #99 is completely arbitrary -- any unused FD number can be chosen. Similarly, one can safely put the lock on a different file than the one to which content is written while the lock is held.
The advantage of this approach over several conventional mechanisms (such as using exclusive creation of a file or directory) is automatic unlock: If the subshell holding the file descriptor on which the lock is held exits for any reason, including a power failure or unexpected reboot, the lock will be automatically released.
my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
In general, no. At least not something that's guaranteed to work. But there are other ways to solve your problem, such as lockfile, if you have it available:
Examples
Suppose you want to make sure that access to the file "important" is
serialised, i.e., no more than one program or shell script should be
allowed to access it. For simplicity's sake, let's suppose that it is
a shell script. In this case you could solve it like this:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
Now if all the scripts that access "important" follow this guideline,
you will be assured that at most one script will be executing between
the 'lockfile' and the 'rm' commands.
But there's actually a better way, if you can use C or C++: use the low-level open call to open the file in append mode, and call write() to write your data; no locking is necessary. Per the write() man page:
If the O_APPEND flag of the file status flags is set, the file offset
shall be set to the end of the file prior to each write and no
intervening file modification operation shall occur between changing
the file offset and the write operation.
Like this:
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// process-wide global file descriptor
static int outputFD = -1;

// open the shared file once at startup (O_CREAT makes the 0600 mode meaningful)
void openOutput( const char *fileName )
{
    outputFD = open( fileName, O_WRONLY | O_APPEND | O_CREAT, 0600 );
}

// write a string to the file
ssize_t writeToFile( const char *data )
{
    return write( outputFD, data, strlen( data ) );
}
In practice, you can write anything to the file - it doesn't have to be a NUL-terminated character string.
That's supposed to be atomic for writes up to PIPE_BUF bytes, which is usually something like 512, 4096, or 5120. Some Linux filesystems apparently don't implement that properly, so in practice you may be limited to about 1K on those filesystems.

How to access kernel parameters in kernel space

This is one of my lab assignments: I have to create a proc entry under /proc/sys/kernel/ and write a system call that manipulates a user-space variable depending on the value of that proc entry. For example, say the user-space variable is 1 and the proc entry is 0 or 1. The system call should increment the user-space variable by 1 (if the proc entry is 0/off) or multiply it by two (if the proc entry is 1/on).
I did the following to add the proc entry: I created an entry xxx by adding a struct to the kernel ctl table section in kernel/sysctl.c. I compiled the kernel, and the system boots fine with it. The entry also shows up as /proc/sys/kernel/xxx.
I am now able to read and write it from user space, using cat to read and echo to write.
I did the following in the system call: I wrote a system call that reads the user-space variable. I completed and tested access_ok, copy_from_user, copy_to_user and all that, and the system call currently manipulates the user-space variable (always incrementing it, for now).
The problem I am facing: now I have to add an if condition that checks the "xxx" value to decide whether to increment or multiply the user-space variable. This is where I am stuck: not in writing the system call, but in reading the proc entry "xxx".
Can I use file handling?
If so, should I use the open() system call inside my system call? Will it work?
When I checked, there was a sysctl system call, but it seems to be deprecated now. This IBM tutorial talks about reading proc entries, but create_proc_entry does not apply to parameters inside the /proc/sys/kernel directory, right? If so, how can I ever use a read-proc-entry function?
"But, now I have to write a system call to read the value of xxx."
I suspect that the term "system call" is being used in a formal sense and that you are being asked to add a new system call to the kernel (similar to open, read, mmap, signal etc) that returns your value.
See Adding a new system call in Linux kernel 3.3
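Also note that a ctl_table entry in kernel/sysctl.c just points its .data field at a kernel global, so your system call can read that global directly rather than opening the proc file. A minimal sketch, assuming the entry is backed by a global int named xxx_enabled (a hypothetical name; use whatever variable your ctl_table entry actually references):

#include <linux/syscalls.h>
#include <linux/uaccess.h>

int xxx_enabled;  /* the variable exposed as /proc/sys/kernel/xxx */

SYSCALL_DEFINE1(xxx_manipulate, int __user *, uvar)
{
    int val;

    if (copy_from_user(&val, uvar, sizeof(val)))
        return -EFAULT;

    if (xxx_enabled)    /* proc entry is 1/on: multiply by two */
        val *= 2;
    else                /* proc entry is 0/off: increment by one */
        val += 1;

    if (copy_to_user(uvar, &val, sizeof(val)))
        return -EFAULT;
    return 0;
}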

Read a chunk of a file using WINAPI's ReadFile or something similar?

Well, I'm working on a project in which I'm handling potentially big files that I can't load into RAM all at once, so I'm going to treat them like a CHS hard drive and grab the data one 0x800-byte chunk at a time.
My problem is, I cannot find any function in the WINAPI that lets me read data from a file I've opened with CreateFile starting at a given offset.
And yes, it must be a WINAPI function, and no, I do not want to map the whole file into memory.
Thanks much, Bradley.
Use ReadFile with SetFilePointer: SetFilePointer moves the handle's file pointer to the desired offset, and the ReadFile that follows reads from there.
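A short sketch of that combination; it uses SetFilePointerEx, the 64-bit-capable variant, so large files work (hFile is the handle from CreateFile, and chunkIndex picks which 0x800-byte chunk to fetch):

BOOL ReadChunk(HANDLE hFile, LONGLONG chunkIndex, BYTE *buffer, DWORD *bytesRead)
{
    LARGE_INTEGER offset;
    offset.QuadPart = chunkIndex * 0x800;  // byte offset of the chunk

    // move the file pointer to the start of the chunk...
    if (!SetFilePointerEx(hFile, offset, NULL, FILE_BEGIN))
        return FALSE;

    // ...then read up to 0x800 bytes from that position
    return ReadFile(hFile, buffer, 0x800, bytesRead, NULL);
}

ReadFile also accepts an OVERLAPPED structure whose Offset/OffsetHigh members specify where to read, which avoids the separate SetFilePointer call entirely.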

file_operations Question, how do i know if a process that opened a file for writing has decided to close it?

I'm currently writing a simple "multicaster" module.
Only one process can open a proc filesystem file for writing, and the rest can open it for reading.
To do this I use the inode_operations .permission callback: I check the operation, and when I detect that someone is opening the file for writing I set a flag ON.
I need a way to detect when the process that opened the file for writing has closed it, so I can set the flag OFF and someone else can open it for writing.
Currently, when someone opens the file for writing, I save that process's current->pid, and when the .close callback is called I check whether the closing process is the one I saved earlier.
Is there a better way to do this? Perhaps without saving the pid, by checking which files the current process has open and their permissions...
Thanks!
No, it's not safe. Consider a few scenarios:
Process A opens the file for writing, and then fork()s, creating process B. Now both A and B have the file open for writing. When Process A closes it, you set the flag to 0 but process B still has it open for writing.
Process A has multiple threads. Thread X opens the file for writing, but Thread Y closes it. Now the flag is stuck at 1. (Remember that ->pid in kernel space is actually the userspace thread ID).
Rather than doing things at the inode level, you should be doing things in the .open and .release methods of your file_operations struct.
Your inode's private data should contain a struct file *current_writer;, initialised to NULL. In the file_operations.open method, if it's being opened for write then check the current_writer; if it's NULL, set it to the struct file * being opened, otherwise fail the open with EPERM. In the file_operations.release method, check if the struct file * being released is equal to the inode's current_writer - if so, set current_writer back to NULL.
PS: Bandan is also correct that you need locking, but using the inode's existing i_mutex should suffice to protect the current_writer.
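A rough sketch of that scheme, on kernels where struct inode still exposes i_mutex (the mc_ names are hypothetical, and for brevity current_writer is a file-scope static rather than living in the inode's private data as suggested above):

#include <linux/fs.h>

static struct file *current_writer;  /* NULL while no one holds the file for write */

static int mc_open(struct inode *inode, struct file *filp)
{
    int ret = 0;

    if (filp->f_mode & FMODE_WRITE) {
        mutex_lock(&inode->i_mutex);
        if (current_writer)
            ret = -EPERM;           /* a writer already exists */
        else
            current_writer = filp;  /* record this writer's struct file */
        mutex_unlock(&inode->i_mutex);
    }
    return ret;
}

static int mc_release(struct inode *inode, struct file *filp)
{
    mutex_lock(&inode->i_mutex);
    if (current_writer == filp)     /* the writer is going away */
        current_writer = NULL;      /* allow a new writer */
    mutex_unlock(&inode->i_mutex);
    return 0;
}

Because release is called once per struct file, when the last reference to it is dropped, this handles the fork() and multi-thread cases that break the pid-based approach.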
I hope I understood your question correctly: when someone wants to write to your proc file, you set a variable called flag to 1 and also save current->pid in a global variable. Then, when any close() entry point is called, you compare the current->pid of the close() instance with the saved value, and if they match you turn flag off. Right?
Consider this situation: process A wants to write to your proc resource, so the permission callback runs. It sees that flag is 0 and is about to set it to 1 for process A. But at that moment the scheduler decides process A has used up its time slice and picks a different process to run (flag is still 0!). After some time, process B comes along wanting to write to your proc resource too, checks that flag is 0, sets it to 1, and goes about writing to the file. Unfortunately, at that moment process A gets scheduled again; it still thinks flag is 0 (remember, before the scheduler preempted it, flag was 0), so it sets flag to 1 and also writes to the file. End result: the data in your proc resource gets corrupted.
You should use a proper locking mechanism provided by the kernel for this kind of operation; based on your requirement, I think RCU is the best fit: have a look at the RCU locking mechanism.
