Programmable interrupt controller (PIC) can hold a maximum of 64 interrupt vectors using the master-slave protocol. What about the other ones? - x86-16

I was reading about the programmable interrupt controller (PIC). In the master-slave protocol, the PICs can handle a maximum of 64 interrupt vectors, but there are 256 interrupt vectors. So what about the other interrupt vectors? I searched Google for an answer but didn't find one. Can anyone help?

Related

PIC SPI configuration questions

I have some questions related to the SPIxCON registers of the SPI module. I use a PIC18F26K83.
1) There is SPIxTCNTH: SPI TRANSFER COUNTER MSB REGISTER. I can set its first 3 bits, which count the bits to be transmitted, and according to the datasheet they are writable. If it counts the bits that will be transmitted, why is it writable? Do I need to write it according to the bits that I will send, or is it there to inform the user?
2) There is SPIxTWIDTH: SPI TRANSFER WIDTH REGISTER. In case of BMODE=1, it is
Size (in bits) of each transfer counted by the transfer counter
I will be sending values such as 1.1 or 2.3 to a DAC. In this case, what should I set it to? Is there a standard value for this register?
3) I couldn't get what the FIFO registers are for; according to the datasheet we cannot control them by software. Isn't it like a buffer? So if I write to the transmit register faster than the transmission speed, the transmit data will be put into the FIFO and transmitted one by one. Am I correct? Do I need to do anything other than write to the transmit buffer?
4) I read about the polarity bits in SPIxCON1 but could not understand them. Is it okay if I do not touch these bits in the control register? I do not want to mess things up.
5) How can I select slaves? There is an SSET (slave select enable) bit in the SPIxCON2 register. I can set it to 1, but then how can I select the slave?
Thank you for your answers. I am a newbie, so sorry for the simple and maybe nonsensical questions. I could simply show my configuration code instead, but I believe it would be harder to analyse.
1) The transfer counter (when in use) is written with the number of bytes, or partial bytes, to send or receive (depending on the mode). So, if you are using it (BMODE=0 or TXR=0), you'd set it to the number of bytes you are expecting to send or receive (sketched in code after this list).
2) You'd need to look at the binary representation of those numbers to see how many bits you'd be sending in each case. The standard value is a full byte.
3) The FIFOs are hidden elements: writing to the SPIxTXB register or reading from the SPIxRXB register accesses the respective FIFO. The FIFOs are only two bytes deep, so you'd still need to check for overrun (the TXWE bit, iirc) if you are sending fast. If you have lots of data to transfer quickly, I'd recommend using DMA; then you just set the transfer up, let it go, and can do other things until it is finished.
4) I think the polarity bits just set the line level during the idle state to either high or low. They should be set the same for everyone (masters and slaves).
5) If you only have one slave, you can tie that line to the slave's enable line. If you have more than one slave, you'll need to set up a GPIO line for each, OR each of those signals with the SSET output, and attach the OR output to that slave's enable (if it is active low, which it usually is). Make sure only one slave is active at a time. A daisy chain of slaves can be done as well, though I haven't worked with that kind of setup.
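A minimal transmit sketch tying answers 1), 3) and 5) together. Caveat: this assumes master mode, and the register/bit names (SPI1TCNTL/H, SPI1TXB, TXBE in SPI1STATUS, BUSY in SPI1CON2) are from my reading of the PIC18F26K83 data sheet; the chip select on RC2 is just an example pin. Check everything against your own configuration.

#include <xc.h>
#include <stdint.h>

void spi1_send(const uint8_t *buf, uint8_t len)
{
    SPI1TCNTH = 0;                   /* 1) load the transfer counter with  */
    SPI1TCNTL = len;                 /*    the number of bytes to move     */

    LATCbits.LATC2 = 0;              /* 5) assert a GPIO chip select       */

    for (uint8_t i = 0; i < len; i++) {
        while (!SPI1STATUSbits.TXBE) /* 3) the 2-deep TX FIFO is hidden;   */
            ;                        /*    just wait for buffer-empty      */
        SPI1TXB = buf[i];
    }

    while (SPI1CON2bits.BUSY)        /* wait for the shifter to drain      */
        ;
    LATCbits.LATC2 = 1;              /* release chip select                */
}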

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the max speed for this library is ~1 Mb/s.
I'm creating a project where I have 8 microphones connected to an ADC line (this part works fine). I need a 16-bit signal, so I'm increasing the accuracy by adding the first 16 readings from one line (the ADC gives only a 12-bit signal). In my project I need 96k 16-bit samples per line, so that's 0.768M samples for all 8 lines. This data needs 12,000 Kb of space, but the STM32 has only 128 KB of SRAM, so I decided to send about 120 packets of 100 Kb each in one second.
The conclusion is that I need ~11.72 Mb/s to send this.
The problem is that I'm unable to do that, because CDC USB limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need some hint or library.
Or maybe I should set up an "audio device" in CubeMX?
If the small b in your question means bytes, the answer is: it is not possible, as your micro has a full-speed (FS) USB peripheral whose maximum speed is 12 Mbit per second.
If it means bits, your 1 Mb (bit) speed assumption is wrong. But you will still not reach a 12 Mbit payload transfer rate.
You may try to write your own class (only if b means bit), but I'm afraid you will not find a ready-made library. You will also need to write the device driver on the host computer.
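For reference, the required bandwidth worked out from the question's own numbers, in consistent units (a plain C check, nothing STM32-specific):

#include <stdio.h>

int main(void)
{
    const double samples_per_line = 96000.0; /* per second, from the question */
    const double lines = 8.0;
    const double bits_per_sample = 16.0;

    double payload = samples_per_line * lines * bits_per_sample; /* bit/s */
    printf("required payload: %.3f Mbit/s\n", payload / 1e6);    /* 12.288 */
    printf("USB FS bus rate : 12.000 Mbit/s, before protocol overhead\n");
    return 0;
}

The raw requirement already exceeds the full-speed bus rate even before protocol overhead, which is the point the answer above makes.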

ACP and DMA: how do they work?

I'm using an ARM Cortex-A53 platform. It has an ACP component, and I'm trying to use DMA to transfer data through the ACP.
According to the ARM TRM, if I understand it correctly, the DMA transfer data size is limited to 64 bytes for each transfer when using the ACP.
If so, does this limitation make DMA unusable? It seems pointless to configure a DMA descriptor only to transfer 64 bytes at a time.
Or should the DMA automatically divide its transfer length into many ACP-size-limited (64-byte) packets, without any software intervention?
I need an expert to explain how the ACP and DMA work together.
Somewhere in the interface from the DMA to the ACP's AXI port, the transfer length should be automatically divided as needed into transfers of appropriate length. For the Cortex-A53 ACP, AXI transfers are limited to 64 B (perhaps intentionally 1x cache line).
From https://developer.arm.com/documentation/ddi0500/e/level-2-memory-system/acp/transfer-size-support :
An x-byte INCR request characterized by: (some list of limitations)
Note the use of INCR instead of FIXED. INCR will automatically increment the address according to the size of the transfer, while FIXED will not. This makes it simple for the peripheral to break a large transfer into a series of multiple INCR transfers.
However, do note that on the Cortex-A53 the transfer size (x in the quote) is fixed at 16- or 64-byte aligned transfers. If the DMA sends an inappropriately sized transfer (because it is misconfigured or the correct size is unsupported), the AXI port will emit a SLVERR. If the buffer is not appropriately aligned, I think this also causes a SLVERR.
Lastly, the on-chip network routing must support connecting the DMA to the ACP, and that is fixed at chip design time. In my experience this is more commonly done for network accelerators and FPGA fabric glue, but tends to be less often connected for low-speed peripherals like UART/SPI/I2C.
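To illustrate the splitting described above, here is a hedged sketch of a driver-side fallback that slices a buffer into the 64-byte aligned INCR bursts the A53 ACP accepts; dma_issue_burst() is a hypothetical stand-in for programming one burst on whatever DMA engine you have:

#include <stdint.h>
#include <stddef.h>

#define ACP_BURST 64u                 /* one Cortex-A53 cache line */

extern void dma_issue_burst(uintptr_t addr, size_t len);  /* hypothetical */

static int acp_dma_transfer(uintptr_t addr, size_t len)
{
    /* The ACP only accepts aligned 16- or 64-byte INCR bursts; anything
     * else comes back as an AXI SLVERR, so reject bad buffers up front. */
    if (addr % ACP_BURST || len % ACP_BURST)
        return -1;

    while (len) {
        dma_issue_burst(addr, ACP_BURST); /* INCR: address advances per beat */
        addr += ACP_BURST;
        len  -= ACP_BURST;
    }
    return 0;
}

If the DMA engine does this segmentation in hardware, none of it is needed; the point is only that someone, hardware or software, has to emit ACP-legal burst sizes.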

Linux block driver: merging bios

I have a block device driver which is working, after a fashion. It is for a PCIe device, and I am handling the bios directly with a make_request_fn rather than using a request queue, as the device has no seek time. However, it still has per-transaction overhead.
When I read consecutively from the device, I get bios with many segments (generally my maximum of 32), each consisting of 2 hardware sectors (so 2 * 2k), and this is then handled as one scatter-gather transaction to the device, saving a lot of signalling overhead. On a write, however, the bios each have just one segment of 2 sectors, and therefore the operations take a lot longer in total. What I would like is to somehow cause the incoming bios to consist of many segments, or to merge bios sensibly together myself. What is the right approach here?
The current content of the make_request_fn is something along the lines of:
Determine read/write of the bio
For each segment in the bio, make an entry in a scatterlist with sg_set_page
Map this scatterlist to PCI with pci_map_sg
For every segment in the scatterlist, add to a device-specific structure defining a multiple-segment DMA scatter-gather operation
Map that structure to DMA
Carry out transaction
Unmap structure and SG DMA
Call bio_endio with -EIO if failed and 0 if succeeded.
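A condensed sketch of those steps (3.x-era block API, matching the bio_endio(bio, err) style implied above; struct mydev and mydev_hw_submit() are hypothetical stand-ins for the real driver internals):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/pci.h>
#include <linux/scatterlist.h>

#define MYDEV_BLOCK_MAX_SEGS 32       /* as in the queue setup below */

struct mydev {                        /* hypothetical driver context */
    struct pci_dev *pdev;
    struct request_queue *queue;
};

/* Hypothetical: builds the device-specific SG descriptors and runs the
 * transaction; returns 0 on success. */
extern int mydev_hw_submit(struct mydev *dev, struct scatterlist *sgl,
                           int nents, int write);

static void mydev_make_req(struct request_queue *q, struct bio *bio)
{
    struct mydev *dev = q->queuedata;  /* assumes queuedata was set at init */
    struct scatterlist sgl[MYDEV_BLOCK_MAX_SEGS];
    struct bio_vec *bvec;
    int i, nents = 0, err;
    int dir = bio_data_dir(bio) == WRITE ? PCI_DMA_TODEVICE
                                         : PCI_DMA_FROMDEVICE;

    sg_init_table(sgl, MYDEV_BLOCK_MAX_SEGS);

    bio_for_each_segment(bvec, bio, i)          /* one sg entry per segment */
        sg_set_page(&sgl[nents++], bvec->bv_page,
                    bvec->bv_len, bvec->bv_offset);

    nents = pci_map_sg(dev->pdev, sgl, nents, dir);
    err = mydev_hw_submit(dev, sgl, nents, bio_data_dir(bio) == WRITE);
    pci_unmap_sg(dev->pdev, sgl, nents, dir);

    bio_endio(bio, err ? -EIO : 0);
}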
The request queue is set up like:
#define MYDEV_BLOCK_MAX_SEGS 32
#define MYDEV_SECTOR_SIZE 2048

blk_queue_make_request(mydev->queue, mydev_make_req);   /* bypass the elevator */
set_bit(QUEUE_FLAG_NONROT, &mydev->queue->queue_flags); /* no seek penalty */
blk_queue_max_segments(mydev->queue, MYDEV_BLOCK_MAX_SEGS);
blk_queue_physical_block_size(mydev->queue, MYDEV_SECTOR_SIZE);
blk_queue_logical_block_size(mydev->queue, MYDEV_SECTOR_SIZE);
blk_queue_flush(mydev->queue, 0);                       /* no flush/FUA support */
blk_queue_segment_boundary(mydev->queue, -1UL);         /* no boundary restriction */
blk_queue_max_segments(mydev->queue, MYDEV_BLOCK_MAX_SEGS);
blk_queue_dma_alignment(mydev->queue, 0x7);             /* 8-byte aligned buffers */

Non-standard COM port baud rates in Windows

Do the Windows built-in COM port drivers support non-standard baud rates? (Actually, does Windows have a built-in driver for COM1 & 2?)
The reason I ask is that I'm having trouble getting a reliable connection to a device that uses the unusual baud rate 5787. The device and PC talk briefly, then seem to lose the dialogue, and then get it back again. Once a long message is sent, it gets lost at the other end; a short time later the dialogue is back. This sounds to me like the classic baud rate mismatch: not quite close enough to be reliable, but close enough that some data gets through.
If I use an inexpensive PCI serial board, it works without problems. It's only computers that use on-board serial that I've found don't work properly.
Baud rates in a PC are controlled by a UART and a crystal. The crystal frequency determines what baud rates the serial port can generate; the baud rate is usually generated by a divide-by-16 counter. The crystal frequency for a standard PC is normally 1.8432 MHz, and dividing that by 16 gives you 115200, which is usually the maximum the COM port can do.
Inside the UART is a divisor latch (programmed via the DLAB bit). This further divides the clock. So essentially, to get 5787 baud you're talking about dividing 115200 by 5787, which gives you 19.906687...
Since that's close to 20, you'd load the divisor latch with 20, and 115200 / 20 gives you 5760. Therefore you're probably getting 5760 baud out of the PC COM port. That's probably enough of a difference to cause the issue that you're seeing.
No, the difference from 5760 to 5787 is nowhere near enough to explain any sort of problem. UARTs identify the start of a byte from the leading edge of the start bit, then sample the data in the middle of each bit. This means they are tolerant of baud rate errors up to the point where the predicted middle of a bit lands on an edge. That's a half-bit error over one full byte, because each byte ends with a stop bit, so there's a re-synchronisation event per byte. One half bit in ten bits (8 data, one start, one stop) is 5%. The difference from 5760 to 5787 is only about 0.5%, so miles inside the safe region.
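The arithmetic from both answers in one place (plain C; the ~5% figure is the half-bit-in-ten-bits tolerance derived above):

#include <stdio.h>

int main(void)
{
    const double uart_clk = 1843200.0 / 16.0;  /* 1.8432 MHz crystal / 16 = 115200 */
    const double target   = 5787.0;

    int divisor    = (int)(uart_clk / target + 0.5);      /* rounds to 20 */
    double actual  = uart_clk / divisor;                  /* 5760 baud    */
    double err_pct = (target - actual) / target * 100.0;  /* ~0.47%       */

    printf("divisor %d -> %.0f baud, %.2f%% error (tolerance ~5%%)\n",
           divisor, actual, err_pct);
    return 0;
}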
