I read some material on major and minor numbers and have some doubts about it.
What I understood:
A driver has one major number associated with it via register_chrdev_region().
mknod /dev/hello1 c 123 32 will create a device file with major number 123 and minor number 32, and when an application opens /dev/hello1, the kernel looks for the driver whose major number matches that of /dev/hello1.
Multiple device files can be associated with one driver, and no two files in /dev will share the same major/minor pair.
Now, some modern operating systems allow multiple drivers to share the same major number. In this case, how does the mapping work?
When you have multiple drivers associated with the same major number, you can differentiate between them through different minor number ranges under the individual drivers. You can use the minor number as an index into a local array to access individual devices.
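As a rough sketch of that indexing scheme (hello_dev, HELLO_MAX_DEVS and hello_open are made-up names for illustration, not anything mandated by the API), the driver's open() could look something like this:

    #include <linux/fs.h>
    #include <linux/cdev.h>

    #define HELLO_MAX_DEVS 8                 /* hypothetical device count */

    struct hello_dev {                       /* hypothetical per-device state */
        struct cdev cdev;
        char buf[64];
    };

    static struct hello_dev hello_devs[HELLO_MAX_DEVS];

    static int hello_open(struct inode *inode, struct file *filp)
    {
        unsigned int minor = iminor(inode);      /* minor of the opened node */

        if (minor >= HELLO_MAX_DEVS)
            return -ENODEV;

        filp->private_data = &hello_devs[minor]; /* stash per-device state */
        return 0;
    }

The other file operations can then pick the right device back out of filp->private_data.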
Also, it is advisable to use alloc_chrdev_region() to obtain the major number from the kernel dynamically, rather than hardcoding a number that happens to be currently free through register_chrdev_region().
Hope this helps!
When open() is called and the file entry contains a major/minor pair, open() finds the device driver that registered a device with that same major/minor pair. The major number alone is not enough to open a device.
Modern drivers should have their major number dynamically allocated by the kernel, by calling alloc_chrdev_region(&dev_num, 0, <number of contiguous minors to reserve>, DEVICE_NAME). If the call succeeds, MAJOR(dev_num) is the dynamically allocated major device number (and MINOR(dev_num) is the first minor device number).
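For concreteness, here is a minimal init/exit sketch (DEVICE_NAME, MINOR_COUNT and the hello_* names are assumptions for illustration):

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/kdev_t.h>

    #define DEVICE_NAME "hello"        /* assumed driver name */
    #define MINOR_COUNT 8              /* contiguous minors to reserve */

    static dev_t dev_num;

    static int __init hello_init(void)
    {
        /* Ask the kernel for a free major plus MINOR_COUNT minors from 0. */
        int ret = alloc_chrdev_region(&dev_num, 0, MINOR_COUNT, DEVICE_NAME);
        if (ret < 0)
            return ret;

        pr_info("hello: major %d, first minor %d\n",
                MAJOR(dev_num), MINOR(dev_num));
        return 0;
    }

    static void __exit hello_exit(void)
    {
        unregister_chrdev_region(dev_num, MINOR_COUNT);
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");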
Related
I'm learning about Linux kernel drivers and I am confused about major/minor device number allocation, especially in the case where I don't know ahead of time how many devices I would like to manage.
Indeed, I know that we may allocate multiple devices at once dynamically with
    alloc_chrdev_region(&dev, ALLOC_FIRST_MINOR, ALLOC_COUNT, DEV_NAME);
But I wonder about situations in which ALLOC_COUNT might not be obvious ahead of time.
Question
How do I resize an allocated character device region? Is it common practice to do so?
Concerns
Large allocation drawbacks
Is there any drawback to reserving a large/safe number of minors and calling cdev_init() on only a few of them?
Small allocation instead?
If so, should I allocate a small number of device numbers first and then allocate more later if necessary? Is that something seen in the wild? The last argument makes me think that I would be doing something wrong by doing that.
What I Have Tried
Multiple calls to alloc_chrdev_region
I quickly run out of major numbers, so I conclude this is not a very good solution.
Multiple calls to register_chrdev_region
I thought of making an initial call to alloc_chrdev_region, which would give me a sentinel with a dynamically allocated major, and then calling register_chrdev_region as needed to allocate more device numbers some time later. While much better than its counterpart, it doesn't fix any of the concerns.
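For illustration, a minimal sketch of that second scheme (INITIAL_COUNT, "mydev" and the helper names are made up; note that growing can fail if another driver already holds those minors under a shared major):

    #include <linux/fs.h>
    #include <linux/kdev_t.h>

    #define INITIAL_COUNT 4            /* assumed initial reservation */

    static dev_t base;                 /* first allocated region */
    static unsigned int next_minor;    /* next free minor under our major */

    static int init_region(void)
    {
        int ret = alloc_chrdev_region(&base, 0, INITIAL_COUNT, "mydev");
        if (ret < 0)
            return ret;
        next_minor = INITIAL_COUNT;
        return 0;
    }

    /* Later, try to grow the region under the same major. */
    static int grow_region(unsigned int extra)
    {
        dev_t from = MKDEV(MAJOR(base), next_minor);
        int ret = register_chrdev_region(from, extra, "mydev");
        if (ret < 0)
            return ret;                /* those minors may already be taken */
        next_minor += extra;
        return 0;
    }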
According to this answer Linux sysfs entries are limited to a page, which is 4 KiB on most architectures.
I’m currently working on a net/sched/ module (a fork of fq_codel with slightly changed behaviour to test something) and need to expose largish statistics to userspace. One of the KPIs is about 800 bytes, the other is expected to be 10–50 KiB or so (back-of-the-envelope, not fixed yet). The latter obviously won’t fit into sysfs.
The information is generated during operation but stored into a preallocated array, and userspace is expected to fetch it (and thus empty the array) about twice per second. Raising that interval might be possible, but would increase system load by quite some amount.
What other ways of exposing such information to userspace exist that would fit my scenario?
How about the relay interface (Documentation/filesystems/relay.txt) for logging large quantities of data from the kernel to userspace?
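A hedged sketch of what hooking that up can look like (the channel name, sub-buffer sizing and the stats hook are assumptions for illustration; the debugfs boilerplate is what relay requires to expose its per-CPU buffer files):

    #include <linux/module.h>
    #include <linux/relay.h>
    #include <linux/debugfs.h>

    static struct rchan *stats_chan;

    /* relay needs callbacks to create/remove the backing files (debugfs here) */
    static struct dentry *create_buf_file(const char *filename,
                                          struct dentry *parent, umode_t mode,
                                          struct rchan_buf *buf, int *is_global)
    {
        return debugfs_create_file(filename, mode, parent, buf,
                                   &relay_file_operations);
    }

    static int remove_buf_file(struct dentry *dentry)
    {
        debugfs_remove(dentry);
        return 0;
    }

    static struct rchan_callbacks stats_relay_cbs = {
        .create_buf_file = create_buf_file,
        .remove_buf_file = remove_buf_file,
    };

    static int __init stats_relay_init(void)
    {
        /* 4 sub-buffers of 64 KiB each: comfortably above the 10-50 KiB KPI. */
        stats_chan = relay_open("fq_stats", NULL, 64 * 1024, 4,
                                &stats_relay_cbs, NULL);
        return stats_chan ? 0 : -ENOMEM;
    }

    /* In the stats path, push the preallocated array out to userspace: */
    /*     relay_write(stats_chan, stats_buf, stats_len);               */

Userspace then reads the per-CPU files from debugfs (e.g. /sys/kernel/debug/fq_stats0).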
I want to version all the boards on which I put a version of my FPGA.
Each board shall have a different serial number stored in an internal ROM. It's basically a 10-digit number (e.g. 0123456789).
After generating the binary file, how can I modify it to increment the number without damaging the FPGA and its behaviour?
Has anyone done this before?
Which FPGA are you using? For Xilinx devices you can use the USR_ACCESS register, which can be set when creating the bitstream file. It is limited to 32 bits of data.
https://www.xilinx.com/support/documentation/application_notes/xapp497_usr_access.pdf
I have really tried to understand the von Neumann architecture, but there is one thing I can't understand: how can the user know whether a number in the computer's memory is a command or data?
I know there is a 'stored-program concept', but I still understand nothing...
Can someone explain it to me in two sentences?
Thanks!
Put simply, the user cannot look at a memory address and determine if it is a command or data. It can be both.
It's all in the interpretation: if the program counter points to a memory address, its contents will be interpreted as a command; if the address is referenced by a read instruction, it is data.
The point of this is flexibility. A program can write (or re-write) programs into memory, that can then be executed by setting the program counter to the start address.
Modern operating systems limit this behaviour with data execution prevention, which keeps parts of memory from being interpreted as commands.
The basic idea of the stored-program concept is storing data and instructions together in main memory.
NOTE: This is a vastly oversimplified answer. I intentionally left a lot of things out for the sake of making the point
Remember that all computer memory is, for all intents and purposes on modern machines, a long list of bytes. The numbers are meaningless unless the thing that put them there has a specific purpose for them.
I could put the number 5 at address 0. It could represent the 5th instruction in my CPU's instruction-set manual. It could represent the number of hours of sleep I had last week. It's meaningless unless it's assigned some meaning.
So how do computers know what to actually "do" with the numbers?
It's a large combination of standards and specifications, which are documents or code that specify which data should go where, what each piece of data means, what acceptable values for the data are, etc. Such standards are (usually) agreed upon by the masses.
Standards exist everywhere. Your BIOS has specifications as to where to look for the main operating system entry point on the boot media (your hard disk, a live CD, a bootable USB stick, etc.).
From there, the operating system adheres to standards that dictate where in memory the VGA buffer exists (0xb8000 on x86 machines, for example) in order to output all of that boot up text you see when you start your machine.
So on and so forth.
A portable executable (windows) or an ELF image (linux) or a Mach-O image (MacOS) are just files that also follow a specification, usually mandated by the operating system manufacturer, that put pieces of code at specific positions in the file. Then that file is simply loaded into memory, given a specific virtual address in user space, and then the operating system knows exactly where the entry point for your program is.
From there, it sets up the instruction pointer (IP) to point to the current instruction byte. On most CPUs, the current byte pointed to by the IP activates specific circuits in the CPU to perform some action.
For example, on x86 CPUs, byte 0x04 is the ADD instruction that takes the next byte (so IP + 1), reads it as an unsigned 8 bit number, and adds it to the al register. This is mandated by the x86 specification, which all x86 CPUs have agreed to implement.
That means when the IP register is pointing to a byte with the value of 0x04, it will perform the add and increase the IP by 2 - the first is to skip the ADD instruction itself, and the second is to skip the "argument" (operand) to the ADD instruction.
The IP advances as fast as the CPU (and the operating system's scheduler) will allow it to - which amounts to a "running" program.
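To make that ADD walkthrough concrete, here is a toy fetch-and-execute loop in C (a deliberately tiny, made-up two-instruction machine, not real x86; the single mem[] array is the whole point: it holds instructions and data side by side):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy von Neumann machine. Hypothetical 2-opcode ISA, loosely
     * echoing the x86 example above:
     *   0x04 <imm>  ADD  -> acc += imm   (like x86 "ADD AL, imm8")
     *   0x00        HALT
     */
    int main(void)
    {
        uint8_t mem[16] = { 0x04, 0x05,   /* ADD 5                        */
                            0x04, 0x02,   /* ADD 2                        */
                            0x00 };       /* HALT. The 0x05 and 0x02 are
                                             "data" only because the IP
                                             never points at them directly */
        uint8_t acc = 0;
        size_t ip = 0;                    /* instruction pointer */

        for (;;) {
            uint8_t opcode = mem[ip];     /* byte under the IP = command  */
            if (opcode == 0x04) {
                acc += mem[ip + 1];       /* byte after the IP = operand  */
                ip += 2;                  /* skip opcode plus operand     */
            } else {
                break;                    /* HALT */
            }
        }
        printf("acc = %u\n", acc);        /* prints: acc = 7 */
        return 0;
    }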
What the data mean is defined entirely by what's creating the data and what's using it. In the best of circumstances, the two parties agree, usually via a standard or specification of some sort.
This image gives a good picture of the virtual address space, but it only tells half of the story: it only gives the complete picture of the user address space, i.e. the lower 50% (or 75% in some cases).
What about the remaining 50% (or 25%), which is occupied by the kernel? I know the kernel also contains many different things, like kernel modules, device drivers, and the core kernel itself. There must be some kind of layout, right?
What is its layout? If you say it's operating-system dependent, I would answer that there are two major operating systems, Windows and Linux; please give an answer for either one of these.
[Image: virtual address space diagram] http://img690.imageshack.us/img690/2543/virtualadressspace.gif
I've got even worse news for you: there is also a feature where the OS explicitly randomizes the kernel address layout as a security measure (kernel address space layout randomization, KASLR). This is on by default in most recent versions of Windows and in OpenBSD, and is available as an option for Linux.
As others have said here, your picture is incomplete; it looks like something specific to a single-threaded OS. In particular, there may be hundreds of threads within a process (hence sharing the same address space), each with its own stack.
Also, I believe the actual picture of the address space may vary considerably depending on the OS version and other subtle details.
It's not completely clear from your question or the image, but by 'System Address Space' you probably mean the area between 2 GB and 4 GB. This indeed takes up half of the theoretical 4 GB space, but there is a valid reason for it.
Normally with 32 bits you can address 4 GB of memory (2^32 = 4294967296), so it would seem logical to have 4 GB of address space, not 2 GB. The reason for this is the following:
Suppose you have 2 pointers, like this in C/C++:
char *ptr1;
char *ptr2;
I now want to know what the difference is between the two pointers, like this:
offset = ptr2 - ptr1;
What should be the data type of 'offset'?
If we don't know whether ptr1 comes before ptr2 or vice versa, the offset can be positive or negative. Now, if both ptr1 and ptr2 are within the range 0 to 2 GB, the offset is always between -2147483648 and +2147483647, which fits exactly in a 4-byte signed integer.
However, if ptr1 and ptr2 could address the full 4 GB address space, the offset would be between -4294967295 and +4294967295, which no longer fits in a 4-byte signed integer.
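A small demonstration of that arithmetic (simulated with fixed-width types so it runs anywhere; the addresses are made up, and the cast of an out-of-range value is shown purely to illustrate the wraparound most compilers produce):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t addr1 = 0x00000000u;   /* hypothetical low address  */
        uint32_t addr2 = 0xC0000000u;   /* hypothetical 3 GB address */

        uint32_t diff = addr2 - addr1;  /* true distance: 3221225472 */
        int32_t offset = (int32_t)diff; /* 3 GB doesn't fit in 31 bits */

        printf("true difference: %u bytes\n", diff);
        printf("as a 4-byte signed offset: %d\n", offset); /* -1073741824 */
        return 0;
    }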
If you are sure that you never do this kind of calculation in your application, or that two pointers you subtract will never be more than 2 GB apart (i.e. your vectors are always smaller than 2 GB), you can tell the linker (Windows, Visual Studio) that your application is LARGEADDRESSAWARE. This linker flag sets a bit in the executable, and if 32-bit Windows is booted correctly (on XP you had to boot with the /3GB flag), Windows gives you 3 GB instead of 2 GB (only for LARGEADDRESSAWARE executables).
The remaining 1GB is still used for operating system data structures (but I have no details about them).
If you are running a 64-bit Windows, then things get even more interesting, because LARGEADDRESSAWARE executables will then get 4GB of memory. Apparently, the operating system data structures are now stored somewhere in the 64-bit address space, outside the 4GB used by the application.
Hope this clarifies a bit.
Memory Layout of Windows Kernel. Picture taken from Reversing: Secrets of Reverse Engineering
[Image: Windows kernel memory layout] http://img821.imageshack.us/img821/1525/windowskernelmemorylayo.jpg