Why should ioctl command numbers be unique across the system? - linux-kernel

I read the instructions for choosing ioctl command numbers (from O'Reilly's Linux Device Drivers):
The ioctl command numbers should be unique across the system in order to prevent
errors caused by issuing the right command to the wrong device.
One of the arguments of the ioctl function (from user space) is the file descriptor, so the call already targets a specific device.
So if I call ioctl on a specific device, why do the command numbers need to be unique across the whole system?

An ioctl command number does not strictly have to be unique across the system; it only has to be unique for the particular device node. Keeping command numbers unique system-wide is a common practice, though, precisely to avoid errors caused by issuing the right command to the wrong device.
If you pass a command intended for device-1 to the wrong device, device-2, and device-2 happens to treat that number as a valid command, the call succeeds and you get back invalid data instead of an error. Unique command numbers across the system avoid this scenario.
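The kernel's _IO/_IOR/_IOW macros are what enforce this convention: each driver picks a "magic" type byte, and the command number encodes that magic alongside the sequence number, direction, and argument size. Here is a rough sketch of the packing (the field layout mirrors asm-generic/ioctl.h; the magics 'k' and 'q' are made-up examples, not real driver magics):

```python
# Sketch of how Linux packs an ioctl command number into 32 bits.
# Field widths reproduce the common asm-generic layout by hand.
_IOC_NRBITS, _IOC_TYPEBITS, _IOC_SIZEBITS = 8, 8, 14
_IOC_NRSHIFT = 0
_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS      # 8
_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS  # 16
_IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS   # 30
_IOC_WRITE = 1

def ioc(direction, magic, nr, size):
    """Combine direction, magic ('type') byte, command nr, and arg size."""
    return (direction << _IOC_DIRSHIFT) | (ord(magic) << _IOC_TYPESHIFT) \
         | (nr << _IOC_NRSHIFT) | (size << _IOC_SIZESHIFT)

# Two drivers that choose different magic bytes can reuse the same
# command nr without ever producing the same ioctl number.
cmd_a = ioc(_IOC_WRITE, 'k', 1, 4)
cmd_b = ioc(_IOC_WRITE, 'q', 1, 4)
print(hex(cmd_a), hex(cmd_b))
```

Since the magic byte sits in bits 8-15, a command meant for the 'k' driver can never equal one meant for the 'q' driver, which is exactly how the uniqueness convention catches right-command-wrong-device mistakes.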

Related

write_room Full - tty Driver

I am writing a new tty serial driver. The driver keeps a count of the number of bytes passed to the write function, and the count is reduced after a successful write. When the write room is full, the application waits, and when write room becomes available again it tries to write the next set of data. At this point the tty layer tries to write the same previous data character by character, and the tty_put_char function in tty_io.c is called. I am unable to resolve this issue and would appreciate inputs here.
OK, so I tried the below implementation changes:
1. I disabled echo in the serial application.
2. In the tty driver, after I get the response for a successful write, I call tty_wakeup.
Doing the above partially resolved my issues, but it does not work consistently. I would appreciate input on this approach.

Get Volume GUID from Partition number in windows

I would appreciate some help with the query below.
I am trying to write (using WriteFile()) to a windows disk partition within a Windows PE environment by opening a disk handle and seeking to the partition starting offset.
WriteFile() is returning error code 5 (ACCESS DENIED).
I believe it is because the application has not locked the volume before writing to the volume.
My application has only the partition number as input. The ioctl FSCTL_LOCK_VOLUME needs a volume handle, which is returned by CreateFile(), and CreateFile() needs a volume GUID as a parameter.
So how do I get the volume GUID via the partition number?
Regards.

Spawned process limit in MacOS X

I am running into an issue spawning a large number of processes (200) under MacOS X Mountain Lion (though I'm sure this issue is not version specific). What I am doing is launching processes (in my test it is /bin/cat) which have their STDIN, STDOUT, and STDERR connected to pipes -- the other end of which is the spawning (parent) process.
The parent process writes data into the STDIN of the processes, which is piped to the [/bin/cat] child processes, which in turn spit the data back out of STDOUT and is read by the parent process. /bin/cat is just used for testing.
I am actually using kqueue to be notified when there is space available in the STDIN pipe. When kqueue notifies you with a EVFILT_WRITE event that space is available, the event also tells you exactly how many bytes can be written without blocking.
This all works well, and I can easily spawn 100 child (/bin/cat) processes, and everything flows through the pipes without blocking (all day long). However, when I crank up the number of processes to 200, everything grinds to a halt when the single kqueue service thread blocks on a write() call to one of the STDIN pipes. The event says that there are 16384 bytes available (basically an empty pipe), but when the program tries to write exactly 16384 bytes into the pipe, the write() blocks anyway.
Initially I thought I was running into a max. open files issue, but I've jacked up the ulimit for open files to 8192, so that is not the issue. What I have discovered from some googling is that on OS X, STDIN/STDOUT/STDERR are not in fact "files" (or "pipes") but are actually devices. When the process is hung, running lsof on the command-line also hangs with a warning about not being able to stat() the file system:
lsof: WARNING: can't stat() hfs file system /
Output information may be incomplete.
assuming "dev=1000008" from mount table
As soon as I kill the process, lsof completes. This reinforces the notion that STDIN/OUT/ERR are in fact devices and I'm running into some kind of limit.
Does anyone have an idea of what limit I am running into? For example, is there a limit on the number of "devices" that can be created? Can this be increased somehow?
Just to answer my own question here. This appears to be related to how MacOS X will dynamically expand a pipe from 16K to 32K to 64K (and so on). My basic workaround was to prevent the pipe from expanding. It appears that whenever you fill the pipe completely the OS will expand it. So, when the kqueue triggers that I can write into the pipe, and indicates that I have 16384 bytes available to write, I simply write 16384 - 1 bytes. Basically, whatever it tells me I have available, I write at most (available - 1) bytes. This prevents the pipe from expanding, and is preventing my code from encountering the condition where a write() to the pipe would block (even though the pipe is non-blocking).
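A minimal sketch of the clamping logic that answer describes (the 16384 figure comes from the kqueue event in the question; the function name and everything else here are illustrative):

```python
# Workaround sketch: never completely fill the pipe, so the kernel
# never triggers its dynamic 16K -> 32K -> 64K pipe expansion.
# `available` is what kqueue's EVFILT_WRITE event reported.

def bytes_to_write(available, pending):
    """Return the slice of `pending` to write: at most available - 1 bytes."""
    if available <= 1:
        return b""              # always leave the last byte of the pipe free
    return pending[:available - 1]

# An "empty pipe" event reports 16384 bytes of room; we write one less.
chunk = bytes_to_write(16384, b"x" * 20000)
print(len(chunk))
```

The remaining bytes simply stay queued until the next EVFILT_WRITE event fires, at which point the same clamp is applied again.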

How does the Linux kernel implement shared memory between 2 processes?

How does the Linux kernel implement the shared memory mechanism between different processes?
To elaborate further, each process has its own address space. For example, an address of 0x1000 in Process A is a different location when compared to an address of 0x1000 in Process B.
So how does the kernel ensure that a piece of memory is shared between different processes, each having a different address space?
Thanks in advance.
Interprocess Communication Mechanisms
Processes communicate with each other and with the kernel to coordinate their activities. Linux supports a number of Inter-Process Communication (IPC) mechanisms. Signals and pipes are two of them, but Linux also supports the System V IPC mechanisms named after the Unix™ release in which they first appeared.
Signals
Signals are one of the oldest inter-process communication methods used by Unix™ systems. They are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or an error condition such as the process attempting to access a non-existent location in its virtual memory. Signals are also used by the shells to signal job control commands to their child processes.
There are a set of defined signals that the kernel can generate or that can be generated by other processes in the system, provided that they have the correct privileges. You can list a system's set of signals using the kill command (kill -l).
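For example, a process can install a handler for one of those signals and deliver the signal to itself. This is sketched here in Python, whose signal module wraps the same kernel mechanism:

```python
import os
import signal

caught = []

def handler(signum, frame):
    """Record which signal was delivered; runs asynchronously to main flow."""
    caught.append(signum)

# Install a handler for SIGUSR1, then send that signal to ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
print(caught)
```

The kill(2) system call underneath is the same privilege-checked delivery path the quoted text describes: it succeeds here only because a process is always allowed to signal itself.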
Pipes
The common Linux shells all allow redirection. For example
$ ls | pr | lpr
pipes the output from the ls command listing the directory's files into the standard input of the pr command which paginates them. Finally the standard output from the pr command is piped into the standard input of the lpr command which prints the results on the default printer. Pipes then are unidirectional byte streams which connect the standard output from one process into the standard input of another process. Neither process is aware of this redirection and behaves just as it would normally. It is the shell which sets up these temporary pipes between the processes.
In Linux, a pipe is implemented using two file data structures which both point at the same temporary VFS inode, which itself points at a physical page within memory. The figure shows that each file data structure contains pointers to different file operation routine vectors: one for writing to the pipe, the other for reading from the pipe.
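The shell's plumbing can be reproduced by hand: connect the standard output of one process to the standard input of the next. A small sketch of a two-stage pipeline (using printf and sort as stand-ins for ls and pr):

```python
import subprocess

# Recreate a two-stage shell pipeline (like `ls | pr`) by hand:
# the stdout of the first process becomes the stdin of the second.
p1 = subprocess.Popen(["printf", "b\na\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()          # so p1 gets SIGPIPE if p2 exits early
out = p2.communicate()[0]
print(out.decode())
```

Just as the quoted text says, neither printf nor sort is aware of the redirection; the parent sets up the pipe and each child simply reads or writes its standard streams.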
Sockets
Message Queues: Message queues allow one or more processes to write messages, which will be read by one or more reading processes. Linux maintains a list of message queues, the msgque vector; each element of which points to a msqid_ds data structure that fully describes the message queue. When message queues are created a new msqid_ds data structure is allocated from system memory and inserted into the vector.
System V IPC Mechanisms: Linux supports three types of interprocess communication mechanisms that first appeared in Unix™ System V (1983). These are message queues, semaphores and shared memory. These System V IPC mechanisms all share common authentication methods. Processes may access these resources only by passing a unique reference identifier to the kernel via system calls. Access to these System V IPC objects is checked using access permissions, much like accesses to files are checked. The access rights to a System V IPC object are set by the creator of the object via system calls. The object's reference identifier is used by each mechanism as an index into a table of resources. It is not a straightforward index but requires some manipulation to generate.
Semaphores: In its simplest form a semaphore is a location in memory whose value can be tested and set by more than one process. The test and set operation is, so far as each process is concerned, uninterruptible or atomic; once started nothing can stop it. The result of the test and set operation is the addition of the current value of the semaphore and the set value, which can be positive or negative. Depending on the result of the test and set operation one process may have to sleep until the semaphore's value is changed by another process. Semaphores can be used to implement critical regions, areas of critical code that only one process at a time should be executing.
Say you had many cooperating processes reading records from and writing records to a single data file. You would want that file access to be strictly coordinated. You could use a semaphore with an initial value of 1 and, around the file operating code, put two semaphore operations, the first to test and decrement the semaphore's value and the second to test and increment it. The first process to access the file would try to decrement the semaphore's value and it would succeed, the semaphore's value now being 0. This process can now go ahead and use the data file but if another process wishing to use it now tries to decrement the semaphore's value it would fail as the result would be -1. That process will be suspended until the first process has finished with the data file. When the first process has finished with the data file it will increment the semaphore's value, making it 1 again. Now the waiting process can be woken and this time its attempt to increment the semaphore will succeed.
Shared Memory: Shared memory allows one or more processes to communicate via memory that appears in all of their virtual address spaces. The pages of the virtual memory are referenced by page table entries in each of the sharing processes' page tables. It does not have to be at the same address in all of the processes' virtual memory. As with all System V IPC objects, access to shared memory areas is controlled via keys and access rights checking. Once the memory is being shared, there are no checks on how the processes are using it. They must rely on other mechanisms, for example System V semaphores, to synchronize access to the memory.
Quoted from tldp.org.
There are two kinds of shared memory in Linux.
If A and B are a parent process and its child, each of them uses its own page table entries (PTEs) to access the shared memory; the memory is shared through the fork mechanism. So everything is good, right? (For more details, look at the kernel function copy_one_pte() and related functions.)
If A and B are not parent and child, they use a public key to access the shared memory.
Let's assume A creates a shared memory segment through the System V shmget() call with a key. Correspondingly, the kernel creates a file (the file name is "SYSTEMV+key") for process A in shmem/tmpfs, an internal RAM-based filesystem mounted by the kernel (check shmem_init()). The shared memory region is handled by shmem/tmpfs; basically, it is populated by the page fault mechanism when process A accesses the shared memory region.
If process B wants to access the shared memory region created by process A, it should call shmget() with the same key that process A used. Process B can then find the file ("SYSTEMV+key") and map it into its own address space.

How to stop a process and begin other process without returning to the initial Process in Arduino

I want to build a hexapod that uses an Arduino and is remotely controlled via Bluetooth. At present I am writing the code for its walking (the Arduino part), but I do not know how to proceed. The problem is as follows:
When a new command is received from the remote device, I want the legs to stop what they are doing and carry out the received command. If I implement this with interrupts, then after the command has been completed the previous process starts again, which is undesired for me. What can be done?
Thanks in advance for your answers.
The Arduino doesn't really have separate processes - or even an OS.
You should think in terms of "states". Have a global (sorry) int representing the current state (use an enum). When you get a new command, set the state to the new command and return; then have a main loop which checks the state and performs whatever function is needed.
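A language-agnostic sketch of that suggestion, shown in Python for brevity (on the Arduino it would be the same enum-plus-loop() structure in C++; all names here are invented):

```python
import enum

# State-machine sketch: a command handler only records the new state;
# the main loop always acts on the current state, so there is no old
# "process" to return to after a command completes.

class Gait(enum.Enum):
    IDLE = 0
    WALK_FORWARD = 1
    TURN_LEFT = 2

state = Gait.IDLE
log = []                       # records what each loop pass did

def on_bluetooth_command(cmd):
    """Stand-in for the serial/interrupt handler: set state and return."""
    global state
    state = cmd

def loop_once():
    """One pass of the main loop: dispatch on whatever state is current."""
    log.append(state.name)

loop_once()                          # still idling
on_bluetooth_command(Gait.WALK_FORWARD)
loop_once()                          # next pass simply does the new thing
print(log)
```

Because the handler returns immediately after updating the state, the interrupted gait never resumes; the next loop pass just dispatches on the new state.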
