Do block drivers have major and minor numbers? - linux-kernel

Can anyone please tell me whether a block driver has major and minor numbers? If a block driver doesn't have major and minor numbers, then how does it identify the different devices connected to the computer?

Related

Generate multiple binary files on ISE with different serial numbers

I want to version all the boards on which I put a version of my FPGA.
Each board shall have a different serial number stored in an internal ROM. It's basically a 10-digit number (e.g. 0123456789).
After generating the binary file, how can I modify it to increment the number without damaging the FPGA and its behavior?
Has anyone already done this before?
Which FPGA are you using? For Xilinx devices you can use the USR_ACCESS register, which can be set when creating the bitstream file. It is limited to 32 bits of data.
https://www.xilinx.com/support/documentation/application_notes/xapp497_usr_access.pdf

major and minor numbers in device drivers

I read some material on major and minor numbers and have some doubts about it.
What I understood:
A driver has one major number associated with it because of register_chrdev_region().
mknod /dev/hello1 c 123 32 will create a device file with major number 123 and minor number 32; when an application opens /dev/hello1, the kernel looks for the driver whose major number matches that of /dev/hello1.
Multiple device files can be associated with one driver, and no two files in /dev will share the same pair of major and minor numbers.
Now, some modern operating systems allow drivers with the same major number. In this case, how does the mapping work?
When you have multiple drivers associated with the same major number, you can differentiate between them through the different minor-number ranges assigned to the individual drivers. You can use the minor number as an index into a local array to access individual devices (see the sketch below).
Also, it is advisable to use alloc_chrdev_region() to get the major number from the kernel dynamically rather than hardcoding a currently free number through register_chrdev_region().
Hope this helps!
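A minimal, hypothetical sketch of the "minor number as an array index" idea (the names hello_dev, hello_open, hello_fops and MAX_DEVICES are made up for illustration): the open() handler reads the minor number of the node being opened with iminor() and uses it to pick one entry from a per-driver device table.

    #include <linux/fs.h>
    #include <linux/cdev.h>
    #include <linux/module.h>

    #define MAX_DEVICES 4

    struct hello_dev {
        struct cdev cdev;
        /* per-device state would go here */
    };

    static struct hello_dev devices[MAX_DEVICES];

    static int hello_open(struct inode *inode, struct file *filp)
    {
        unsigned int minor = iminor(inode);   /* minor number of the /dev node */

        if (minor >= MAX_DEVICES)
            return -ENODEV;

        /* Stash the per-device structure for later read/write/ioctl calls. */
        filp->private_data = &devices[minor];
        return 0;
    }

    /* hello_fops would be registered with cdev_init()/cdev_add() at module init. */
    static const struct file_operations hello_fops = {
        .owner = THIS_MODULE,
        .open  = hello_open,
    };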
When open() is called on a device node, the kernel uses the node's major/minor pair to find the driver that registered a device covering that exact pair. The major number alone is not enough to open a device.
Modern drivers should have their major number dynamically allocated by the kernel by calling alloc_chrdev_region(&dev_num, 0, <number of contiguous minors to reserve>, DEVICE_NAME); the second argument is the first minor number to use. If the call succeeds, MAJOR(dev_num) holds the dynamically allocated major device number (and MINOR(dev_num) the first minor device number).
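A minimal sketch of that dynamic allocation, assuming a hypothetical module named "hello" that reserves four minors:

    #include <linux/fs.h>
    #include <linux/kdev_t.h>
    #include <linux/module.h>

    #define DEVICE_NAME  "hello"
    #define MINOR_COUNT  4            /* contiguous minors to reserve */

    static dev_t dev_num;             /* filled in by the kernel on success */

    static int __init hello_init(void)
    {
        int ret = alloc_chrdev_region(&dev_num, 0, MINOR_COUNT, DEVICE_NAME);
        if (ret < 0)
            return ret;

        pr_info("%s: got major %d, first minor %d\n",
                DEVICE_NAME, MAJOR(dev_num), MINOR(dev_num));
        return 0;
    }

    static void __exit hello_exit(void)
    {
        unregister_chrdev_region(dev_num, MINOR_COUNT);
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");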

Role of FIFO Buffer for COM Port in Windows

Can anyone here please explain the role of the FIFO Buffer checkbox (in the advanced COM port settings in Device Manager) in Windows?
How does checking/unchecking the FIFO buffer affect reading data from the COM port?
Many thanks in advance for helpful replies!
The original UART chip used in IBM-PC designs was the 8250. It could store just one received byte while the receiver was busy receiving the next byte. That puts a high demand on the responsiveness of the operating system's serial port driver, responding to the "data received" interrupt. It must be quick enough to read that byte before the receiver overwrites it. Not being quick enough causes an overrun error and irretrievable data loss. High interrupt rates are also detrimental.
That design was improved upon by the 16550 UART chip. It got a larger buffer, the FIFO, giving the OS more time to empty the buffer before an overrun could occur. The serial port driver can program it to generate an interrupt at a particular fill level, thus reducing the interrupt rate as well (a register-level sketch follows after this answer).
But chip designs have the same kind of problem that software has: the original 16550 had a bug in its FIFO implementation. It was fixed in the 16550A, version 1.1 in software speak.
The problem was that the driver could not tell whether the machine had the buggy version of the 16550 or a good one; simple chips like that don't have a GetVersion() equivalent. So the driver provides a property page that lets the user turn FIFO support off, thus bypassing the bug.
The odds that today you'll have the buggy version are zero. Turning the FIFO off is no longer necessary.
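For context, here is a rough sketch of what "program it to generate an interrupt at a particular fill level" looks like on a 16550-compatible UART. The register layout comes from the 16550 data sheet; the conventional COM1 I/O base address 0x3F8, the 14-byte trigger level, and the use of Linux's outb() for port access are all illustrative assumptions (running this from user space would additionally require ioperm() and root):

    #include <sys/io.h>   /* outb() on Linux/x86 */

    #define COM1_BASE   0x3F8
    #define UART_FCR    (COM1_BASE + 2)   /* FIFO Control Register (write-only) */

    #define FCR_ENABLE_FIFO   0x01        /* bit 0: turn the FIFOs on */
    #define FCR_CLEAR_RX      0x02        /* bit 1: reset the receive FIFO */
    #define FCR_CLEAR_TX      0x04        /* bit 2: reset the transmit FIFO */
    #define FCR_TRIGGER_14    0xC0        /* bits 6-7: interrupt at 14 bytes */

    static void enable_uart_fifo(void)
    {
        /* Enable the FIFOs, flush both, and raise the "data received"
         * interrupt only once 14 bytes are queued; this is roughly what
         * the Device Manager checkbox toggles on or off. */
        outb(FCR_ENABLE_FIFO | FCR_CLEAR_RX | FCR_CLEAR_TX | FCR_TRIGGER_14,
             UART_FCR);
    }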

Is it possible to cause 2.4 GHz co-channel interference if no clients are transmitting?

I installed some APs at a facility. This facility is now complaining they are having issues with their 2.4 GHz phone system.
The APs that I installed (different SSID) are running but no clients are associated or transmitting data.
Is it possible to cause co-channel interference without data being transmitted?
Thanks
Yes, 802.11 access points are chatty, users or no. You can expect every access point to transmit beacon frames on the order of 5-15 times per second.
These frames are transmitted very quickly and 2.4 GHz is generally very noisy, so I have difficulty believing that a 2.4 GHz phone system would fail in this scenario -- at least, assuming you didn't install an AP right on top of the phone system. Any device transmitting +20 dBm a few inches away from a device listening for -90 dBm signals could easily cause problems. Similarly, 2.4 GHz devices don't actually operate at 2.4 GHz along the entire signal path; the signal is generally shifted down towards baseband at something like 100 MHz, and sometimes (particularly with high-power APs) this section is poorly shielded, so the leakage can cause issues even outside the target frequency band.
That said, none of that really matters for troubleshooting. The line of questions I would pursue is: does the problem go away if you shut off all your devices? If so, does it go away if you shut off one in particular? If so, what makes that one special?

CUDA fallback to CPU?

I have a CUDA application that works fine on one computer (with a GTX 275) and runs about 100 times slower on another, with a GeForce 8400. My suspicion is that there is some kind of fallback that makes the code actually run on the CPU rather than on the GPU.
Is there a way to actually make sure that the code is running on the GPU?
Is this fallback documented somewhere?
What conditions may trigger it?
EDIT: The code is compiled for compute capability 1.1, which is what the 8400 has.
Couldn't it just be that the gap in performance is that large? This link indicates that the 8400 operates at 22-62 GFLOPS and this link indicates that the GTX 275 operates at 1010.88 GFLOPS.
There are a number of possible reasons for this.
Presumably you're not using the emulation device. Can you run the device query sample from the SDK? That will show if you have the toolkit and driver installed correctly.
You can also query the device properties from within your app to check what device you are attached to.
The 8400 is much lower performance than the GTX275, so it could be real, but see the next point.
One of the major changes in going from compute capability 1.1 to 1.2 and beyond is the way memory accesses are handled. In 1.1 you have to be very careful not only to coalesce your memory accesses but also to make sure that each half-warp is aligned, otherwise each thread will issue its own 32-byte transaction. In 1.2 and beyond the alignment is not such an issue as it degrades gracefully to minimise transactions.
This, combined with the lower performance of the 8400, could also account for what you are seeing.
If I remember correctly, you can list all available devices (and choose which device to use for your kernel) from the host code. You could try to determine whether the available device is software emulation and issue a warning.
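A minimal host-side sketch of that enumeration using the CUDA runtime API. The check for compute capability 9999.9999 is what the old CPU emulation device reported in early toolkits; treat it as an assumption if your toolkit no longer ships emulation mode:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable device found\n");
            return 1;
        }

        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s (compute capability %d.%d)\n",
                   i, prop.name, prop.major, prop.minor);

            /* Old toolkits exposed a CPU emulation "device" with this marker. */
            if (prop.major == 9999 && prop.minor == 9999)
                fprintf(stderr, "Warning: device %d is emulation only\n", i);
        }
        return 0;
    }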
