Where in the Linux kernel does the closing of a socket's file descriptor occur? I know that for a regular file, the file descriptor is closed by sys_close() in fs/open.c. For a socket file descriptor, is it the same location or somewhere else?
Also, do sockets use alloc_fd() from fs/file.c to allocate the file descriptor, or do they use some other function?
Yes, sys_close() is the entry point for closing all file descriptors, including sockets.
sys_close() calls filp_close(), which calls fput() on the struct file object. When the last reference to the struct file has been put, fput() calls the file object's .release() method, which for sockets is the sock_close() function in net/socket.c.
The socket code uses get_unused_fd() and put_unused_fd() to acquire and release file descriptors.
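For reference, here is a simplified sketch (not a verbatim copy; details vary by kernel version) of how net/socket.c wires this up. The socket's struct file uses a dedicated file_operations table whose .release member points at sock_close, so the generic fput() path lands in socket code:

static const struct file_operations socket_file_ops = {
    .owner   = THIS_MODULE,
    ....
    .release = sock_close,   /* invoked by fput() when the last reference is dropped */
    ....
};

static int sock_close(struct inode *inode, struct file *filp)
{
    sock_release(SOCKET_I(inode));   /* tears down the underlying struct socket */
    return 0;
}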
I am trying to understand the usb_new_device() function.
I have a question about the usb_enumerate_device() function that is called inside usb_new_device().
According to the comment above the function, it reads configs/intfs/otg, but I don't see anything in the function definition that reads the interface descriptor.
It seems the function reads the config descriptor and does something for OTG devices, but I cannot find the code that reads the interface descriptor. Am I missing something here?
I'm developing a character device driver for Linux.
I want to implement a file-descriptor-targeted read() operation that behaves a bit differently each time the device is opened.
It is possible to identify the process that read() was called from (using the kernel's current macro), but there can be several file descriptors associated with my device in that process.
I know that a file descriptor is mapped to its struct file object just before the system call is dispatched, but can I get that mapping back?
Welcome to Stack Overflow!
To achieve the goal you specified in the comments, there are two methods:
ioctl and read:
Here you have a separate read buffer for each consumer (distinct from the write buffer). Immediately after opening the device, each consumer fires an ioctl that allocates a new buffer and generates a token for it (something like "this token number means this buffer"). That token number is passed back to the consumer concerned.
Each consumer then fires the ioctl before every read call, passing its token number; this switches the current read buffer to the one associated with that token. A rough sketch follows below.
This method adds overhead, and you need to add locking too. Also, no more than one consumer at a time can read from the device.
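Here is what that ioctl pair could look like; MYDEV_IOC_NEWBUF, MYDEV_IOC_SETBUF and the mydev_* helpers are hypothetical names for illustration, not from any existing driver:

#include <linux/ioctl.h>

#define MYDEV_IOC_MAGIC  'M'
#define MYDEV_IOC_NEWBUF _IOR(MYDEV_IOC_MAGIC, 1, int)  /* allocate buffer, return token */
#define MYDEV_IOC_SETBUF _IOW(MYDEV_IOC_MAGIC, 2, int)  /* select buffer for next read */

static long mydev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case MYDEV_IOC_NEWBUF: {
        int token = mydev_alloc_buffer();            /* hypothetical helper */
        if (token < 0)
            return token;
        return put_user(token, (int __user *)arg);   /* hand the token back */
    }
    case MYDEV_IOC_SETBUF:
        return mydev_select_buffer(filp, (int)arg);  /* hypothetical helper */
    default:
        return -ENOTTY;
    }
}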
ioctl and mmap:
You can mmap a read buffer for each consumer and let each consumer read from its buffer at its own pace, using ioctl to request new data, etc.
This allows multiple consumers to read at the same time.
Or, you can allocate a new data buffer on each open call and store a pointer to that buffer in the private_data field of the file structure.
Whenever a read is called this way, you can just look at the private_data field of the file structure passed with the call and see which buffer is being talked about.
You can also embed a whole structure containing the buffer pointer, size, etc. in the private_data field.
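A minimal sketch of that last approach; the mydev_* names and struct mydev_ctx are made up for illustration:

#include <linux/fs.h>
#include <linux/slab.h>

/* Per-open context: one is allocated for every open(), so duplicated
 * descriptors (dup/fork) share it, but independent opens do not. */
struct mydev_ctx {
    char   *buf;
    size_t  size;
};

static int mydev_open(struct inode *inode, struct file *filp)
{
    struct mydev_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
    if (!ctx)
        return -ENOMEM;
    ctx->buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
    if (!ctx->buf) {
        kfree(ctx);
        return -ENOMEM;
    }
    ctx->size = PAGE_SIZE;
    filp->private_data = ctx;       /* stash the buffer for later calls */
    return 0;
}

static ssize_t mydev_read(struct file *filp, char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    struct mydev_ctx *ctx = filp->private_data;  /* which buffer is meant */
    return simple_read_from_buffer(ubuf, count, ppos, ctx->buf, ctx->size);
}

static int mydev_release(struct inode *inode, struct file *filp)
{
    struct mydev_ctx *ctx = filp->private_data;
    kfree(ctx->buf);
    kfree(ctx);
    return 0;
}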
I am writing a character device driver. The sample code I found on the internet mentions that we need to attach some file operations to the character device. Among those file_operations there is one function named open, but in that open call nothing significant is done.
However, if we want to use the character device, we first need to open it, and only then can we read from or write to it. So I want to know how the open() call works exactly. Here is the link I am referring to for the character device driver:
http://appusajeev.wordpress.com/2011/06/18/writing-a-linux-character-device-driver/
The sequence for open() on the user side is very straightforward: it invokes sys_open() on the kernel side, which does some path resolution and permission checking, then passes everything it has got to dev_open() (and does not do anything else).
dev_open() gets the parameters you passed to it through the open() system call (plus quite a lot of information specific to the kernel VFS subsystem, but this is rarely of concern).
Notice that you get a struct file parameter passed in. It has several useful fields:
struct file {
    ....
    struct path f_path;          // path of the file passed to open()
    ....
    unsigned int f_flags;        // 'flags' as passed to open() (includes the access mode bits)
    fmode_t f_mode;              // mode as set by the kernel (FMODE_READ/FMODE_WRITE)
    loff_t f_pos;                // position in the file, used by llseek
    struct fown_struct f_owner;  // owner for SIGIO delivery (fcntl(F_SETOWN)); holds uid and euid
    ....
}
The rest you can dig out yourself by checking out examples in the source.
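For context, here is a bare-bones sketch of how dev_open gets attached in the first place, following the common tutorial pattern (the "mydev" name is illustrative):

#include <linux/fs.h>
#include <linux/module.h>

static int dev_open(struct inode *inode, struct file *filp)
{
    /* Often empty in tutorials: by this point the VFS has already resolved
     * the path, checked permissions, and built the struct file. */
    return 0;
}

static ssize_t dev_read(struct file *filp, char __user *buf,
                        size_t len, loff_t *off)
{
    return 0;   /* EOF; a real driver would copy data to user space here */
}

static const struct file_operations fops = {
    .owner = THIS_MODULE,
    .open  = dev_open,
    .read  = dev_read,
};

static int __init mydev_init(void)
{
    /* Major number 0 asks the kernel to pick one. */
    int major = register_chrdev(0, "mydev", &fops);
    return major < 0 ? major : 0;
}
module_init(mydev_init);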
I was reading through one of Go's implementations of memory-mapped files, https://github.com/edsrzf/mmap-go/. First, the author describes the several access modes:
// RDONLY maps the memory read-only.
// Attempts to write to the MMap object will result in undefined behavior.
RDONLY = 0
// RDWR maps the memory as read-write. Writes to the MMap object will update the
// underlying file.
RDWR = 1 << iota
// COPY maps the memory as copy-on-write. Writes to the MMap object will affect
// memory, but the underlying file will remain unchanged.
COPY
But in the gommap test file I see this:
func TestReadWrite(t *testing.T) {
mmap, err := Map(f, RDWR, 0)
... omitted for brevity...
mmap[9] = 'X'
mmap.Flush()
So why does he need to call Flush to make sure the contents are written to the file, if the access mode is RDWR?
Or is the OS managing this, so that it only writes when it thinks it should?
If it's the latter, could you please explain it in a little more detail? What I read is that when the OS is low on memory, it writes the changes to the file and frees the memory. Is this correct, and does it apply only to RDWR, or also to COPY?
Thanks
The program maps a region of memory using mmap. It then modifies the mapped region. The system isn't required to write those modifications back to the underlying file immediately, so a read call on that file (in ioutil.ReadAll) could return the prior contents of the file.
The system will write the changes back at some point after you make them. It is allowed to write them to the file any time after the changes are made, but by default it makes no guarantees about when it does so. All you know is that (unless the system crashes), the changes will be written at some point in the future.
If you need to guarantee that the changes have been written to the file at some point in time, then you must call msync.
The mmap.Flush function calls msync with the MS_SYNC flag. When that system call returns, the system has written the modifications to the underlying file, so that any subsequent call to read will read the modified file.
The COPY option sets the mapping to MAP_PRIVATE, so your changes will never be written back to the file, even if you use msync (via the Flush function).
Read the POSIX documentation about mmap and msync for full details.
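To make the mechanics concrete, here is a minimal POSIX sketch in C of what Map and Flush do under the hood: map a file MAP_SHARED (the equivalent of RDWR), modify it, then msync(MS_SYNC) so that a subsequent read() is guaranteed to see the change. "data.bin" is a placeholder filename and is assumed to be at least 10 bytes long:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* MAP_SHARED: stores are carried through to the underlying file.
     * MAP_PRIVATE (the COPY mode) would keep them in memory only. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[9] = 'X';   /* modify the mapped region */

    /* Without this the kernel may defer write-back indefinitely;
     * MS_SYNC blocks until the dirty pages have been written. */
    if (msync(p, st.st_size, MS_SYNC) < 0) { perror("msync"); return 1; }

    munmap(p, st.st_size);
    close(fd);
    return 0;
}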
The use case is that one application generates an event and sends out a signal that any application that cares to listen for it will receive. E.g. an application updates the contents of a file and signals this. On Linux this could be done by the waiters calling inotify on the file. One portable way would be for listeners to register with a well-known server, but I would prefer something simpler if possible. "As portable as possible" ideally means using only POSIX features which are also widely available.
Option using lock files
You can do this by locking a file.
Signal emitter initial setup:
Create a file with a well-known name and lock it for writing (fcntl(F_SETLK) with F_WRLCK, or flock(LOCK_EX)).
Signal receiver procedure:
Open the file using the well-known filename and request a blocking read lock on it (fcntl(F_SETLKW) with F_RDLCK, or flock(LOCK_SH)).
Receiver blocks because the emitter is holding a conflicting write lock.
Signal emission:
Signal emitter creates a new temporary file
Signal emitter obtains a write lock on the new temporary file
Signal emitter renames the new temporary file to the well-known filename. This clobbers the old lock file but the waiting receivers all retain references to it.
Signal emitter closes the old lock file. This also releases the lock.
Signal receivers all wake up because now they can obtain their read locks.
Signal receivers should close the file they've just obtained a lock on. It won't be used again. If they want to wait for the condition to happen again they should reopen the file.
In the signal emitter, the temporary lock file which has been renamed over top of the original lock file now becomes the new current lock file.
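A sketch of the receiver side in C; "/tmp/event.lock" stands in for the well-known filename. The blocking F_SETLKW request is what turns the emitter's unlock into a wakeup:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        int fd = open("/tmp/event.lock", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {
            .l_type   = F_RDLCK,    /* shared read lock */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,          /* lock the whole file */
        };

        /* Blocks while the emitter holds its exclusive write lock. */
        if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl"); return 1; }

        printf("event received\n");

        /* This file won't be signaled again; reopen for the next event. */
        close(fd);
    }
}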
Option using network multicast
Have the receivers join a multicast group and wait for packets. Have the signal emitter send UDP packets to that multicast group.
You can bind both the sending and receiving UDP sockets to the loopback interface if you want it to use only host-local communication.
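A minimal sketch of the receiver side; the group 239.255.0.1 and port 5000 are placeholder conventions that the emitter and receivers would agree on:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    /* Join the multicast group; INADDR_ANY lets the kernel pick the interface. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.255.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt"); return 1;
    }

    char buf[64];
    recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);  /* blocks until an event arrives */
    printf("event received\n");
    close(s);
    return 0;
}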
In the end I used a bound unix domain socket. The owner keeps an array of client FDs and sends each a message when there is an event.
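A sketch of that approach on the emitter side; "/tmp/event.sock" is a placeholder path, and a real emitter would multiplex accept() with its event source using poll() or select():

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define MAX_CLIENTS 64

int main(void)
{
    int clients[MAX_CLIENTS];
    int nclients = 0;

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/event.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);   /* clear a stale socket from a previous run */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 16) < 0) {
        perror("bind/listen"); return 1;
    }

    /* Collect a listener, then broadcast one event to every client. */
    int c = accept(srv, NULL, NULL);
    if (c >= 0 && nclients < MAX_CLIENTS)
        clients[nclients++] = c;

    for (int i = 0; i < nclients; i++)
        write(clients[i], "E", 1);   /* one byte per event */

    close(srv);
    return 0;
}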