I have just started AVR MCU programming using gcc-avr, but when I look at sample programs I cannot make much sense of code like this:
DDRD |= (1 << PD7);
TCCR2 = (1 << WGM21) | (0 << WGM20);
TCCR2 |= (1 << COM20);
TCCR2 |= (6 << CS20);
I also do not see any declarations for DDRD, PD7, TCCR2, WGM21, WGM20, COM20, or CS20, yet they are used directly. How can I find out what all of these pre-defined identifiers are and how to use them? It is very difficult to understand the code without knowing this.
Thanks in advance.
That kind of code is very common when programming embedded systems, although you will need to look at the header files and the AVR documentation to learn what those specific identifiers mean. Be aware that it can be very complex if you're new to this, and you will need to understand how to work with raw binary and C-style bit shifts/operators. (There are lots of tutorials online if you need to learn more about that.)
I'll try to explain the basic principle though.
All of the identifiers you saw will be preprocessor constants (i.e. #define ...), rather than variables. DDRD and TCCR2 will specify memory locations. These locations will be mapped onto certain functionality, so that setting or clearing certain bits at those locations will change the behaviour of the device (e.g. enable a clock divider, or set a GPIO pin high or low, etc.).
PD7, WGM21, WGM20, COM20, and CS20 will all be fairly small numbers. They specify how far you need to offset certain bit patterns to achieve certain results. Bit-wise operators (such as | and &) and bit-shift operators (typically <<) are used to create the patterns which are written to the memory locations. The documentation will tell you what patterns to use.
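For example, the headers pulled in via <avr/io.h> contain definitions roughly along these lines (a simplified sketch with a made-up address; the real definitions for your particular part differ):

#define DDRD (*(volatile uint8_t *)0x31) /* data direction register for port D (address hypothetical) */
#define PD7  7                           /* bit position of pin D7 within the port D registers */

So DDRD |= (1 << PD7); reads that register, sets bit 7, and writes the result back, which configures pin D7 as an output.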
I'll use a simple fictional example to illustrate this. Let's say there is a register which controls the value of some output pins. We'll call the register OUTPUT1. Typically, each bit will correspond to the value of a specific pin. Turning on pin 4 (but leaving the other pins alone) might look like this:
OUTPUT1 |= (1 << PIN4);
This bitwise OR's the existing register with the pattern to turn pin 4 on. Turning that pin off again might look like this:
OUTPUT1 &= ~(1 << PIN4);
This bitwise AND's the existing register with everything except the pattern to turn pin 4 on (which results in clearing the bit). That's an entirely fictional example though, so don't actually try it!
The principle is basically the same for many different systems, so once you've learned it on AVR, you will hopefully be able to adapt to other devices as well.
Related
I'm learning how to use microcontrollers without a bunch of abstractions. I've read somewhere that it's better to use PUT32() and GET32() instead of volatile pointers and stuff. Why is that?
With a basic pin-wiggle "benchmark", GPIO->ODR = 0xFFFFFFFF seems to be about four times faster than PUT32(GPIO_ODR, 0xFFFFFFFF), as shown by the scope:
(The one with lower frequency is PUT32)
This is my code using PUT32
PUT32(0x40021034, 0x00000002); // RCC IOPENR B
PUT32(0x50000400, 0x00555555); // PB MODER
while (1) {
    PUT32(0x50000414, 0x0000FFFF); // PB ODR
    PUT32(0x50000414, 0x00000000);
}
This is my code using the arrow thing
*(volatile uint32_t *)0x40021034 = 0x00000002; // RCC IOPENR B
GPIOB->MODER = 0x00555555; // PB MODER
while (1) {
    GPIOB->ODR = 0x00000000; // PB ODR
    GPIOB->ODR = 0x0000FFFF;
}
I shamelessly adapted the assembly for PUT32 from somewhere
PUT32   PROC
        EXPORT PUT32
        STR R1, [R0]
        BX LR
        ENDP
My questions are:
Why is one method slower when it looks like they're doing the same thing?
What's the proper or best way to interact with GPIO? (Or rather what are the pros and cons of different methods?)
Additional information:
Chip is STM32G031G8Ux, using Keil uVision IDE.
I didn't configure the clock to go as fast as it can, but it should be consistent for the two tests.
Here's my hardware setup: (Scope probe connected to the LEDs. The extra wires should have no effect here)
Thank you for your time, sorry for any misunderstandings
PUT32 is a totally non-standard method that the poster in that other question made up. They have done this to avoid the complication and possible mistakes in defining the register access methods.
When you use the standard CMSIS header files and assign to the registers in the standard way, then all the complication has already been taken care of for you by someone who has specific knowledge of the target that you are using. They have designed it in a way that makes it hard for you to make the mistakes that the PUT32 is trying to avoid, and in a way that makes the final syntax look cleaner.
Writing to the register directly is quicker because the store itself can take as little as a single cycle of the processor clock, whereas calling a function, performing the same store, and returning takes about four times as long in the context of your experiment.
By using this generic access method you also risk introducing bugs that are not possible if you used the manufacturer provided header files: for example using a 32 bit access when the register is 16 or 8 bits.
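To make the overhead concrete, here is a hedged C sketch of what the two styles boil down to (put32 here is just a C stand-in for the assembly helper, and the exact instruction counts depend on your compiler and optimisation settings):

#include <stdint.h>

#define GPIOB_ODR (*(volatile uint32_t *)0x50000414) /* same ODR address used in the question */

/* Stand-in for the assembly PUT32: the store itself is one instruction, but the
   caller also has to load the address and value into registers, branch here,
   and branch back. */
static void put32(uint32_t addr, uint32_t value)
{
    *(volatile uint32_t *)addr = value;
}

int main(void)
{
    while (1) {
        put32(0x50000414, 0x0000FFFF); /* several instructions per toggle */
        GPIOB_ODR = 0x00000000;        /* typically a single store */
    }
}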
I am designing an FSM in SystemVerilog for synthesis through the QuartusII (14.1) tool to put on an Altera FPGA. I am using an enum declaration to make the code much more reasonable:
typedef enum logic [7:0] { CMD_INIT,
                           CMD_WAIT,
                           CMD_DECODE,
                           CMD_ILLEGAL,
                           CMD_CMD0,
                           ... } cmd_st;
...
cmd_st cs, ncs;
...
Whenever Quartus synthesizes this state machine, it seems to create a one-hot encoding despite the logic [7:0] part of the type. That is, when I go to add the states to SignalTap, I get each state as a single 1-bit variable (cs.CMD_INIT, cs.CMD_WAIT, etc.). While this is usually pretty useful, I need to see a bunch of these states and some other values at once, and I am running out of on-chip memory to hold them all (there are well over 8 states; more like 50+). Adding all of them to SignalTap takes a lot of this memory, but if I could just capture the 8-bit value of cs, I would have plenty of space for other things.
I can't figure out how to get Quartus NOT to use one-hot encoding for the FSM. I have tried changing the setting (Settings -> Compiler Settings -> Advanced Settings (Synthesis) -> State Machine Processing) to Minimal Bits, User Encoding and Sequential, and I have also added explicit values for a few of the states:
typedef enum logic [7:0] { CMD_INIT    = 8'd0,
                           CMD_WAIT    = 8'd1,
                           CMD_DECODE  = 8'd2,
                           CMD_ILLEGAL = 8'd3,
                           CMD_CMD0,
(Note: not all of them, as there are a bunch, and I might add even more in the middle.)
I'm not sure what else to do so that SignalTap sees only 8 bits for the states (which probably comes back to getting Quartus to synthesize this FSM with sequential rather than one-hot encoding).
You can use synthesis pragmas to guide Quartus to use a specific encoding scheme for the state variables. This page gives you details on how to encode state machines using "sequential" encoding thereby avoiding the default one-hot encoding.
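For example, a hedged sketch of what such a pragma can look like (the syn_encoding attribute and its "sequential" value are taken from the Intel/Altera documentation; double-check the exact syntax for your Quartus version):

(* syn_encoding = "sequential" *) cmd_st cs, ncs;  // ask the synthesizer for sequential state encoding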
I've got an STM32L-Discovery board, which has an STM32L152R8 microcontroller. I'm quite stuck trying to make basic things work.
I've looked at the examples given by ST (the current-consumption touch sensor and the temperature sensor), and I think they aren't user-friendly: so many libraries, sub-processes and interrupts that the code becomes really difficult to understand.
I've tried to turn on the blue LED (GPIO PB6), but I can't manage to do that.
My code compiles correctly but does nothing to the board. This is the code of "main.c".
RCC->AHBRSTR = 0x00000002;
RCC->AHBRSTR = 0x00000000;
RCC->AHBENR = 0x00000002;
GPIOB->MODER = 0x00001000;
GPIOB->OTYPER = 0x00000040;
GPIOB->OSPEEDR = 0x00001000;
GPIOB->PUPDR = 0x00000000;
GPIOB->ODR = 0x00000040;
while(1) {}
Am I missing something? Could I find really basic examples somewhere?
Thanks in advance!
The standard peripheral library that ST supplies on their website is a good starting point. It has examples on programming a GPIO. Note that their code is absolutely horrible, but at least it works and is something to start with.
What compiler/debugger are you using? If you are using IAR, then you can view the GPIO registers while stepping through the code. Please post the values of the GPIO registers in your question and maybe we can help.
RCC->AHBENR = 0x00000002;
Change to "RCC->AHBENR |= 0x00000002;"
This will ensure you enable GPIOB without disabling everything else. The existing code would disable important things like the flash memory controller and all the other GPIO ports.
GPIOB->MODER = 0x00001000;
// This will set pin 6 as output, and all other pins as input. Was this your intent?
Change to "GPIOB->MODER = (GPIOB->MODER & 0xFFFFDFFF ) | 0x00001000;"
This will set pin 6 as an output without changing the configuration of any other pins.
GPIOB->OTYPER = 0x00000040;
// This will set the output type as open drain, meaning you can only pull the line down.
Change to "GPIOB->OTYPER |= 0x00000040;"
This sets the output as push-pull instead of open drain. Your later code attempts to set this line high, which will not work with an open-drain output, since an open-drain output can only pull the line to ground or let it float. A push-pull output allows you to drive the line high or low.
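Putting those suggestions together, a minimal sketch of the whole sequence might look like this (assuming the CMSIS device header for the STM32L1 is included, as in your original code):

RCC->AHBENR   |= 0x00000002;                                // enable the GPIOB clock without touching the other enables
GPIOB->MODER   = (GPIOB->MODER & 0xFFFFDFFF) | 0x00001000;  // PB6 = general-purpose output
GPIOB->OTYPER &= ~0x00000040;                               // PB6 push-pull
GPIOB->ODR    |= 0x00000040;                                // drive PB6 high -> LED on

while (1) {}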
Well, the question says it all.
What I would like is that every time I power up the microcontroller, it takes some previously saved data and uses it. It should not use any external flash chip.
If possible, please give a code snippet that I can use in AVR Studio 4. For example, if I save 8 uint16_t values, it should load them into an array of uint16_t.
You have to burn the data into the program memory of the chip if you don't need to update it programmatically; if you want read-write support, you should use the built-in EEPROM.
Pgmem example:
#include <avr/pgmspace.h>

const uint16_t data[] PROGMEM = { 0, 1, 2, 3 }; // stored in flash rather than copied to RAM

int main(void)
{
    uint16_t x = pgm_read_word_near(data + 1); // read the 2nd element from program memory
}
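If you need read-write access from the running program, a hedged sketch of the EEPROM route with avr-libc (<avr/eeprom.h>) is below; the array name and the size of 8 elements are just placeholders matching the question:

#include <avr/eeprom.h>
#include <stdint.h>

uint16_t EEMEM saved[8]; // reserves 16 bytes in the device's EEPROM section

int main(void)
{
    uint16_t values[8];
    eeprom_read_block(values, saved, sizeof(values));   // load the saved data at power-up
    /* ... use or modify values ... */
    eeprom_update_block(values, saved, sizeof(values)); // write back, skipping unchanged bytes
}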
You need to get the datasheet for the part you are using. Microcontrollers like these typically contain at least one bank of flash, and sometimes multiple banks to allow for different bootloaders while making it easy to erase one whole flash without affecting another. Likewise, some have EEPROM. This is all internal, not external. Especially since you say you need to save programmatically, this should work (remember how easy it is to wear out a flash, so don't save unless you need to). Either the EEPROM or the flash will meet the requirement of having the information there when you power up (it is non-volatile), as well as being writable programmatically. Googling will find a number of examples of how to do this, in addition to the datasheet you apparently have not read, as well as the app notes that also contain this information (and that you should have read). If you are looking for some sort of one-time-programmable fuse-blowing scheme, there may be OTP versions of the AVR, and you will have to read the datasheets, programmer's references and app notes to learn how that memory is programmed and whether OTP parts can be written programmatically or are treated differently.
Reading the data is just the memory map in the datasheet: write code that reads those addresses. Writing is described in the datasheet (programmer's reference manual, user's guide, whatever Atmel calls it) as well, and there are many examples on the net.
I'm working with the NetShareEnum function in the Windows API. It can return the SHARE_INFO_2 structure, which contains the shi2_type member, defined as "a bitmask of flags that specify the type of the shared resource". The values of the bitmask are defined in LMShare.h:
#define STYPE_DISKTREE 0 // Disk drive.
#define STYPE_PRINTQ 1 // Print queue.
#define STYPE_DEVICE 2 // Communication device.
#define STYPE_IPC 3 // Interprocess communication (IPC).
I don't know how to interpret STYPE_DISKTREE. Since it is a bitmask of zero, I can't use a bitwise AND on the mask and compare it against the mask to see if it is set. That is,
(shi2_type & STYPE_DISKTREE) == STYPE_DISKTREE
is always true. Is this intended to mean that all shares are inherently disk shares? Or, should I make this a special case and use the following comparison to check if the share is a disk share,
shi2_type == STYPE_DISKTREE
which is to say that a disk share is exclusively a disk share and nothing else.
From the documentation:
A bitmask of flags that specify the type of the shared resource. Calls to the NetShareSetInfo function ignore this member.
One of the following flags may be specified.
STYPE_DISKTREE
STYPE_PRINTQ
STYPE_DEVICE
STYPE_IPC
In addition, one or both of the following flags may be specified.
STYPE_SPECIAL
STYPE_TEMPORARY
So the low part of shi2_type will be one of DISKTREE, PRINTQ, DEVICE, or IPC, and the high part may contain SPECIAL and/or TEMPORARY. Sadly the documentation is not explicit about the size of the parts, but since there are only four base types you can just take the low byte; equivalently, you can mask off the high byte, which is where the two flags are defined.
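For example, a hedged sketch of how you might test the type in C, where si is assumed to be a pointer to one of the SHARE_INFO_2 entries returned by NetShareEnum (newer SDK headers also define STYPE_MASK for the low byte, but check your copy of LMShare.h before relying on it):

DWORD baseType = si->shi2_type & 0x000000FF;           // or: si->shi2_type & STYPE_MASK
if (baseType == STYPE_DISKTREE) {
    // it is a disk share
}
BOOL isSpecial = (si->shi2_type & STYPE_SPECIAL) != 0; // e.g. administrative shares such as C$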