I have some questions related to the SPIxCON registers of the SPI module. I am using a PIC18F26K83.
1) There is SPIxTCNTH, the SPI Transfer Counter MSB register. I can set its first 3 bits, which count the bits to be transferred, and according to the datasheet they are writable. If the register counts the bits that will be transmitted, why is it writable? Do I need to write it to match the number of bits I will send, or is it there only to inform the user?
2) There is SPIxTWIDTH, the SPI Transfer Width register. With BMODE=1 it holds the "size (in bits) of each transfer counted by the transfer counter". I will be sending values such as 1.1 or 2.3 to a DAC. In this case what should I set it to? Is there a standard value for this register?
3) I couldn't work out what the FIFO registers are for; according to the datasheet we cannot control them by software. Isn't a FIFO just a buffer? So if I write to the transmit register faster than the transmission speed, the transmit data will be put into the FIFO and transmitted one by one. Am I correct? In other words, I don't need to do anything other than write to the transmit buffer.
4) I read about the polarity bits in SPIxCON1 but could not understand them. Is it okay if I leave these bits untouched in the control register? I do not want to mess anything up.
5) How can I select slaves? There is an SSET (slave select enable) bit in the SPIxCON2 register. I can set it to 1, but then how do I select the slave?
Thank you for your answers. I am a newbie, so sorry for the simple and maybe nonsensical questions. I could also simply show my configuration code, but I believe that would be harder to analyse.
1) The transfer counter (when in use) is written with the number of bytes, or partial bytes, to send or receive (depending on mode). So if you are using it (BMODE=0 or TXR=0), you'd set it to the number of bytes you are expecting to send or receive.
2) You'd need to look at the binary representation of those numbers to see how many bits you'd be sending in each case. The standard value is a full byte.
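For example, a value like 1.1 only becomes a bit pattern once you pick an encoding for it. A minimal sketch, assuming a hypothetical 12-bit DAC with a 3.3 V reference (the VREF value and 12-bit width are illustrative assumptions, not from your datasheet):

#include <stdint.h>
#include <math.h>

#define VREF 3.3                  /* assumed reference voltage */

/* Convert a voltage to a right-justified 12-bit DAC code. */
uint16_t dac_code(double volts)
{
    double code = round(volts / VREF * 4095.0);  /* 4095 = 12-bit full scale */
    if (code < 0.0)    code = 0.0;               /* clamp to the valid range */
    if (code > 4095.0) code = 4095.0;
    return (uint16_t)code;                       /* e.g. 1.1 V -> 1365 */
}

For a 12-bit part you would then be sending 12 (or a zero-padded 16) bits per transfer, and would set the width register to match.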
3) The FIFOs are hidden elements; writing to SPIxTXB or reading from SPIxRXB accesses the respective FIFO. The FIFOs are only two bytes deep, so you'd still need to check for overrun if you are sending fast (the TXWE bit, iirc). If you have lots of data to transfer quickly, I'd recommend using DMA: you just set it up, let it go, and can do other things until it is finished.
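A minimal polled-transmit sketch for that case, assuming XC8 and the SPI1 instance; the TXBE (transmit buffer empty) flag name is from memory, so verify it against the K83 datasheet:

#include <xc.h>
#include <stddef.h>
#include <stdint.h>

/* Push bytes through the 2-deep TX FIFO without overrunning it. */
void spi1_send(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while (!SPI1STATUSbits.TXBE)   /* wait for room in the FIFO */
            ;
        SPI1TXB = buf[i];              /* queue the next byte */
    }
}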
4) I think the polarity bits just set the line level during the idle state to either high or low. They should be set the same for everyone on the bus (master and slaves).
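If you do decide to set them explicitly, it would look something like this (XC8 style; the CKP/CKE bit names are my recollection of the SPIxCON1 layout, so verify them in the datasheet):

SPI1CON1bits.CKP = 0;   /* clock line idles low (the common default, SPI mode 0) */
SPI1CON1bits.CKE = 1;   /* data changes on the active-to-idle clock edge */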
5) If you only have one slave you can tie that line to the slave's enable line. If you have more than one slave, you'll need to set up a GPIO line for each and (for each one) OR the signals together and attach the OR output to that slave's enable (if it is active low, which it usually is). Make sure only one slave is active at a time. A daisy chain of slaves can be done as well, but I haven't worked with that kind of setup.
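As a sketch of the per-slave GPIO approach (the pin assignment and transfer helper below are hypothetical placeholders for whatever your own code uses):

#include <xc.h>
#include <stdint.h>

#define CS_DAC LATCbits.LATC2          /* placeholder: active-low CS wired to RC2 */

uint8_t spi1_transfer(uint8_t b);      /* your existing single-byte transfer routine */

uint8_t dac_exchange(uint8_t cmd)
{
    CS_DAC = 0;                        /* assert: only this slave listens now */
    uint8_t r = spi1_transfer(cmd);
    CS_DAC = 1;                        /* release before addressing another slave */
    return r;
}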
I am a bit confused about how the start and stop bits are differentiated from the actual data bits. For example, say "data", whose binary is 01100100 01100001 01110100 01100001, is being sent from System A to System B as a single packet (because it's less than 64 kibibytes), bit by bit. Please let me know how the start and stop bits are added to these data bits. There were two related threads on Stack Overflow; the only answer was not accepted and is very confusing. Can someone explain it in simple terms please? Thank you.
When you want to send data over a serial line, you need to synchronize the transmitter and receiver. The start bit simply marks the beginning of the data chunk (typically one byte, with or without a parity bit), and the stop bit marks the end of the data chunk.
In the beginning, there's no data being transmitted - let's say the line idles at '1' for some time. The receiver is waiting for the start bit, which is always '0' (the stop bit is always '1'). When the start bit arrives, the receiver starts an internal timer and on every tick it reads the value from the line, until all data and parity bits are read. Then it waits for the stop bit, and then it goes back to waiting for a new start bit.
Without the start bit, the receiver would not know when to start reading data bits. Imagine sending a byte of all ones without parity: the line would just stay in the '1' (idle) state the whole time.
The stop bit is not strictly necessary; it's there to enhance reliability, since it guarantees the line returns to the idle level before the next start bit (and both receiver and transmitter must use the same frequency).
So, the start and stop bits don't need to be distinguished from data bits. Quite the opposite: they allow the receiver to properly identify the data bits.
When sending your data, you would take it byte by byte, and for each byte you would first send the start bit ('0'), then the individual bits, then maybe a parity bit, and then a '1' - the stop bit - everything at the given frequency. Then you would wait at least one timer tick.
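A minimal software (bit-banged) transmit sketch of exactly that framing, assuming hypothetical helpers line_set() and wait_bit_time() that drive the pin and delay one bit period:

void line_set(int level);              /* hypothetical: drive the TX line */
void wait_bit_time(void);              /* hypothetical: delay one bit period */

/* Frame one byte as start bit + 8 data bits (LSB first) + stop bit. */
void uart_send_byte(unsigned char b)
{
    line_set(0);                       /* start bit: pull the idle-high line low */
    wait_bit_time();
    for (int i = 0; i < 8; i++) {      /* data bits, least significant first */
        line_set((b >> i) & 1);
        wait_bit_time();
    }
    line_set(1);                       /* stop bit: line back to idle */
    wait_bit_time();
}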
Usually you don't need to do any of this yourself, because there are specialized chips for it on the board. You just provide your data in a buffer and wait until it's sent, or you wait for data to be received.
I'm using an STM32F411 with the USB CDC library, and the max speed for this library is ~1 Mb/s.
I'm creating a project where I have 8 microphones connected to an ADC line (this part works fine). I need a 16-bit signal, so I increase the effective resolution by summing 16 consecutive readings from one line (the ADC gives only a 12-bit signal). In my project I need 96k 16-bit samples per line, so that's 0.768M samples for all 8 lines. This data needs about 12,000 Kb of space, but the STM32 has only 128 KB of SRAM, so I decided to send it out as about 120 chunks of ~100 Kb each per second.
The conclusion is I need ~11.72 Mb/s to send this (96,000 samples/s × 16 bits × 8 lines = 12,288,000 bits per second, about 11.72 Mib/s).
The problem is that I'm unable to do that because CDC USB limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need some pointers or a library.
Or maybe I should set up an "audio device" in CubeMX?
If the small 'b' in your question means bytes, the answer is: it is not possible, as your micro has a full-speed (FS) USB peripheral whose maximum speed is 12 Mbit per second.
If it means bits, then your ~1 Mb (bit) speed assumption is wrong, but you will still not reach 12 Mbit of payload throughput.
You may try to write your own device class (only if 'b' means bit), but I'm afraid you will not find a ready-made library. You would also need to write the device driver for the host computer.
I was reading the datasheet for the Atmel ATmega16 microcontroller and I came across this phrase in the USART section:
The two Buffer Registers operate as a circular FIFO buffer. Therefore the UDR must only be read once for each incoming data! More important is the fact that the Error Flags (FE and DOR) and the 9th data bit (RXB8) are buffered with the data in the receive buffer. Therefore the status bits must always be read before the UDR Register is read. Otherwise the error status will be lost since the buffer state is lost.
I have no idea what buffering the Error Flags and RXB8 means. Any help will be appreciated.
The main caveat here is that UDR must only be read once per incoming byte, and that reading it wipes out the associated error flags. So, if you are interested in the error flags or the "9th bit" in RXB8, they must be read before reading UDR.
However, in ten years of AVR and serial communications designs I've never had to resort to using RXB8. Why? It's only needed when you're using 9 data bits for your communications. See page 155 of the datasheet for examples in C and assembler. Most data communications use 7 or (more commonly) 8 bits, so this extra bit is unnecessary most of the time. If you need it, simply follow the example on p. 155.
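A sketch along the lines of that datasheet example (ATmega16 register names; verify the flag names against your device header):

#include <avr/io.h>

/* Read the status flags and the 9th bit BEFORE UDR, since reading UDR
   advances the receive FIFO. Returns -1 on frame/overrun/parity error. */
int usart_receive_9bit(void)
{
    unsigned char status, resh, resl;
    while (!(UCSRA & (1 << RXC)))      /* wait until a frame has been received */
        ;
    status = UCSRA;                    /* FE, DOR (and PE) flags */
    resh   = UCSRB;                    /* RXB8 lives in UCSRB */
    resl   = UDR;                      /* now pop the data byte from the FIFO */
    if (status & ((1 << FE) | (1 << DOR) | (1 << PE)))
        return -1;
    return (((resh >> 1) & 0x01) << 8) | resl;
}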
More important is the fact that the Error Flags (FE and DOR) and the 9th data bit (RXB8) are buffered with the data in the receive buffer. Therefore the status bits must always be read before the UDR Register is read. Otherwise the error status will be lost since the buffer state is lost.
This just points out that the Error Flags and the 9th data bit are (obviously) coupled to the data in the UDR FIFO, and are lost as soon as you read UDR.
Example:
If you use 9 data bits, you have to read that 9th bit before reading UDR. Otherwise, the next byte in the FIFO (including its status bits) would overwrite the 9th-bit information belonging to the previous byte. The same applies to the error bits.
What is the best practice for accessing a changing 32-bit register (like a counter) through a 16-bit data bus?
I suppose I have to 'freeze' or copy the 32-bit value on a read of the LSB half until the MSB half is also read, and vice versa on a write, to avoid data corruption if the low word overflows into the high word between the two accesses.
Is there a standard approach to this?
As suggested in both the question and Morten's answer, a second register that holds the value at the time the first half is read is a common method. In some MCUs this shadow register is shared between multiple devices, meaning you need to either disable interrupts across the two accesses or ensure ISRs don't touch it. Writes are handled similarly, frequently in the opposite order: write the second word to temporary storage, then write the first word to the device, which triggers the device to take the second word simultaneously.
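A minimal sketch of the latched-copy read, assuming hypothetical CNT_LO/CNT_HI registers where reading CNT_LO latches the high half, plus platform-specific interrupt on/off stubs:

#include <stdint.h>

extern volatile uint16_t CNT_LO, CNT_HI;   /* hypothetical device registers */
void irq_disable(void);                    /* platform-specific stubs */
void irq_enable(void);

uint32_t read_counter32(void)
{
    irq_disable();                 /* keep ISRs away from the shared latch */
    uint16_t lo = CNT_LO;          /* reading the LSW latches the MSW */
    uint16_t hi = CNT_HI;          /* read the latched MSW */
    irq_enable();
    return ((uint32_t)hi << 16) | lo;
}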
There have also been cases where you just can't access the register atomically. In such cases, you might need to implement additional logic to figure out the true value. An example of such an algorithm, assuming the three reads take much less than 1<<15 counter ticks, might be:
uint16_t earlyMSB = highreg;  /* high word sampled before the low word */
uint16_t midLSB   = lowreg;   /* low word */
uint16_t lateMSB  = highreg;  /* high word sampled after the low word */
/* A small low word means any wrap happened before it was read, so the
   later high word is consistent; a large one means a wrap may still be
   coming, so the earlier high word is the safe choice. */
uint32_t fullword = ((uint32_t)(midLSB < 0x8000 ? lateMSB : earlyMSB) << 16) | midLSB;
Other variants might use an overflow flag to signal that the more significant word needs an increment (frequently used when that part of the counter is implemented in software).
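A sketch of that software-extended variant, assuming a hypothetical 16-bit hardware counter TMR_LO whose overflow interrupt increments a high word kept in RAM (the retry loop assumes the interrupt stays enabled while reading):

#include <stdint.h>

extern volatile uint16_t TMR_LO;   /* hypothetical 16-bit hardware counter */
volatile uint16_t tmr_hi;          /* upper word maintained in software */

void timer_overflow_isr(void)      /* hook this to the overflow interrupt */
{
    tmr_hi++;
}

uint32_t timer_read32(void)
{
    uint16_t hi1, lo, hi2;
    do {                           /* retry if an overflow slipped between reads */
        hi1 = tmr_hi;
        lo  = TMR_LO;
        hi2 = tmr_hi;
    } while (hi1 != hi2);
    return ((uint32_t)hi1 << 16) | lo;
}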
There is no standard way, but an often-used approach is to have a read of one address return the first 16 bits, while the remaining 16 bits are captured at the same time and read later at another address.
Possibly the most basic computer question of all time, but I have not been able to find a straightforward answer and it is driving me crazy. When a computer 'reads' a byte, does it read it as a sequential series of ones and zeros one after the other, or does it somehow read all 8 ones and zeros at once?
A computer system reads data in both ways, depending on the type of operation and how the digital system is designed. I'll explain this with the very simple example of a full-adder circuit.
A full adder adds binary numbers and accounts for values carried in as well as out (Wikipedia)
Example of Parallel Operation
Suppose in some task we need to add two 8-bit (1-byte) numbers such that all the bits are available at the time of the addition.
In that case we can design a digital system with 8 full adders (1 for each bit).
Example of Serial Operation
In some other task you observe that all 8 bits will not be available simultaneously.
Or you decide that having 8 separate adders is too costly, since you also need to implement other mathematical operations (like subtraction, multiplication and division). So instead of 8 separate units you have 1 unit that processes the bits individually. In this scenario we need three storage units (shift registers): two to hold the two 8-bit input numbers and one to hold the result. On each clock pulse, a single bit is shifted out of each input register into the full adder, which performs the addition and shifts its 1-bit result into the result register.
(The original answer included figures at this point: a shift-register diagram and a shift-register operations demo. They contain some additional detail that is not needed for this thread, but study digital logic design and computer architecture if you want to go deeper into this stuff.)
This is really kind of outside the scope of Stack Overflow, but it brings back such fond memories from college.
It depends. Sometimes a computer reads bits one at a time. For example, older Ethernet uses Manchester coding, in which bits arrive serially. However, over old parallel printer cables there were 8 pins, each signaling one bit, and an entire octet (byte) was sent at once.
In serial (one-bit-at-a-time) encodings, you're typically measuring transitions on the line, or levels against some well-defined clock source.
In parallel encodings, you're typically reading all the bits into a register at once and latching the register.
Look up flip-flops, registers, and logic gates for information on the low-level parts of this.
Bits are transmitted one at a time in serial transmission, and by a multiple number of bits in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
Wikipedia's article on Bit
The processor works with a defined register length: 8, 16, 32, 64... bits. Think about a register as a set of connections, one for each bit; that's the number of bits that will be processed at once in one processor core, one register at a time. The processor has different kinds of registers; examples are the private (internal) instruction register or the public data and address registers.
Think of it this way, at least at a physical level: in a transmission cable from point A to B (A and B can be anything: hard drive, CPU, RAM, USB, etc.), each wire can transmit one bit at a time. Both A and B have a clock pulsing at the same rate. On each pulse, the sender changes the amount of power going down each wire to signify the value of the new bit(s). So the number of wires in the cable equals the number of bits that can be transmitted each "pulse". (Note: this is a very simplified and theoretical explanation.)
At a software level, in the CPU, you can never address anything smaller than a byte, but you can "access" and manipulate specific bits within a byte by using the bitwise operators (& (AND), | (OR), << (left shift), >> (right shift), ^ (XOR)), as in the example below.
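A small self-contained example of those operators (plain C, nothing platform-specific):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t flags = 0;
    flags |= (1u << 3);               /* OR: set bit 3              -> 0x08 */
    flags ^= (1u << 0);               /* XOR: toggle bit 0          -> 0x09 */
    flags &= (uint8_t)~(1u << 3);     /* AND with mask: clear bit 3 -> 0x01 */
    int bit0 = (flags >> 0) & 1;      /* shift + AND: read bit 0    -> 1 */
    printf("flags=0x%02X bit0=%d\n", (unsigned)flags, bit0);
    return 0;
}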
In hardware, the number of bits sent on each pulse is entirely dependent on the hardware itself.