Mikrocontroller (PIC16F1827) ADC scrambled output with MCC in MPLAB - pic

I'm trying to build an AD converter from a potentiometer to an Arduino, and to learn MCC in MPLAB at the same time. So far I have generated code that fits my PIC (I think...). My problem is that the bit-represented output is incorrect. This is how my PIC16F1827 is configured (see picture):
RA0 = input, RB1 and RB2 = EUSART and RB0,RB3,RA7,RA6,RB7,RB6,RB5,RB4 = output.
My main file looks like this (see code). I get an output, but it is represented incorrectly and I can't figure out why...
char ADC_temp_in;

while (1) // infinite loop
{
    // Add your application code
    printf("pot_value =%d\r\n", ADC_GetConversion(channel_AN0_ADC));
    ADC_temp_in = ADC_GetConversion(channel_AN0_ADC); // temp
    PORTB = ADC_temp_in;       // write lower bits to PORTB
    PORTA = ADC_temp_in >> 6;  // write upper 2 bits to PORTA
    __delay_ms(100);           // delay
}
VREF+ = 5 V and is connected directly to VDD.
My goal is to have RB0 as the LSB and RA7 as the MSB, with the potentiometer covering the 0-5 V range.

Two things:
ADC_temp_in has to be a 16-bit variable to hold a value wider than 8 bits.
So try: uint16_t ADC_temp_in;
Of course your ADC_GetConversion function has to return a uint16_t value.
The other thing: to get the upper bits, you have to shift the value right by 8, not 6:
PORTA = ADC_temp_in >> 8;


ATTiny85 PWM for 4 LEDs

I need to control 4 individual LEDs via PWM on an ATTiny85. I have found lots of info on how to control 3 LEDs. But apparently to control 4 with PWM, you have to really twist the 85 into knots. Is there an easier way to handle 4 LEDs on the 85, or would it be better to step over to the 84? If I went with the 84, would I be likely to run into the same brick walls as with the 85?
I found this code for controlling 4 on the 85, but it's above my skill level. Anyone see any issues with it?
/* Four PWM Outputs */

// ATtiny85 outputs
const int Red = 0;
const int Green = 1;
const int Blue = 4;
const int White = 3;

volatile uint8_t* Port[] = {&OCR0A, &OCR0B, &OCR1A, &OCR1B};

void setup() {
  pinMode(Red, OUTPUT);
  pinMode(Green, OUTPUT);
  pinMode(Blue, OUTPUT);
  pinMode(White, OUTPUT);
  // Configure counter/timer0 for fast PWM on PB0 and PB1
  TCCR0A = 3<<COM0A0 | 3<<COM0B0 | 3<<WGM00;
  TCCR0B = 0<<WGM02 | 3<<CS00; // Optional; already set
  // Configure counter/timer1 for fast PWM on PB4
  TCCR1 = 1<<CTC1 | 1<<PWM1A | 3<<COM1A0 | 7<<CS10;
  GTCCR = 1<<PWM1B | 3<<COM1B0;
  // Interrupts on OC1A match and overflow
  TIMSK = TIMSK | 1<<OCIE1A | 1<<TOIE1;
}

ISR(TIMER1_COMPA_vect) {
  if (!bitRead(TIFR,TOV1)) bitSet(PORTB, White);
}

ISR(TIMER1_OVF_vect) {
  bitClear(PORTB, White);
}

// Sets colour Red=0 Green=1 Blue=2 White=3
// to specified intensity 0 (off) to 255 (max)
void SetColour (int colour, int intensity) {
  *Port[colour] = 255 - intensity;
}

void loop() {
  for (int i=-255; i <= 254; i++) {
    OCR0A = abs(i);
    OCR0B = 255-abs(i);
    OCR1A = abs(i);
    OCR1B = 255-abs(i);
    delay(10);
  }
}
If you want to save pins at the expense of a more complicated strategy, you can get away with only 3 pins by connecting the LEDs as two sets of two like this...
Instead of using the built-in PWM, you will need to do the PWM manually: set a timer, and each time it expires, change the INPUT/OUTPUT and ON/OFF state of each of the pins.
+-----+----------+----------+----------+
| LED | A | B | C |
+-----+----------+----------+----------+
| 1 | OUTPUT 1 | INPUT | OUTPUT 0 |
| 2 | OUTPUT 0 | INPUT | OUTPUT 1 |
| 3 | INPUT | OUTPUT 1 | OUTPUT 0 |
| 4 | INPUT | OUTPUT 0 | OUTPUT 1 |
+-----+----------+----------+----------+
Update or comment if you want more details on this strategy.
An easy strategy is to multiplex the 4 LEDs onto a single PWM pin. This lets you independently control the brightness of each LED on the ATtiny using 5 pins total.
So, for example, you could connect all 4 of the cathodes together and connect those to a single PWM pin. Then you connect each of the 4 anodes to a different IO pin.
At any given moment, only one of the anodes is in output mode; the others are left floating. This means that at most one LED is active, and its brightness is controlled by the PWM duty cycle.
You can then use the overflow ISR for the PWM timer to activate the next LED in the sequence after each PWM cycle. You also update the PWM match to reflect the brightness of the next LED.
If you rotate through the LEDs quickly (faster than, say, 60 times per second), then visually they all just look like they are on at the desired brightness. PWM, after all, is just blinking an LED too quickly to see, so we are just adding a second dimension to it.
One downside: since only a single LED is on at any moment, the maximum total brightness will theoretically be 1/4 of what it would be if you drove all the LEDs independently. In practice this is unlikely to be an issue, since the ATtiny is limited in how much current it can pass through all of its pins at once if you tried to light all the LEDs at the same time.
One hint: when setting up the PWM timer, make it so that the LED is off at the beginning of the cycle and turns on in the middle. This gives the ISR time to step to the next LED while all LEDs are off. This is better because it is easy to see an LED that is on when it should not be, but not so easy to see an LED that is off when it should be on.
One suggestion: I will get flamed for this, but you can leave out the current-limiting resistors when doing this, since each LED is only on for at most 1/4 of the time. This gives you more brightness and also lets you dial down the PWM duty cycle, so you have more off time at the beginning of each cycle to step to the next LED.
I have used this technique successfully many times, and have even been able to multiplex 6 RGB LEDs (three channels each) onto one chip; it works great.
Update the question if you have any questions about the details!

How to define Rb0 to Rb6 = hex (x%10); // no Rb7 pin

Using MikroC Pro for the PIC16F73 to multiplex a 7-segment display, the written program is:
PORTB = Hex(x % 10);
Here PORTB means RB0 to RB7, all 8 pins, but I want to use only 7 pins (RB0 to RB6) for the 7 segments, and the pin RB7 as a separate output, just 0 or 1.
Something like Rb0 to Rb6 = Hex(x % 10) and Rb7_bit = 0 or 1.
So how do I define the line Rb0 to Rb6 = Hex(x % 10);?
Try this:
uint8_t Pin_Value;
Pin_Value = Hex(x % 10) & 0x7F; // keep only the 7 segment bits
Pin_Value |= 0x80;              // use this line to set RB7 ...
//Pin_Value &= 0x7F;            // ... or this one to clear RB7
PORTB = Pin_Value;

How to set a port as input or output with PIC?

Problem: I cannot understand how to set port A and port B as input and output.
I'm using a book as a reference: page 19, introduction chapter, of The PIC Microcontroller: Your Personal Introductory Course by John Morton, third edition.
According to what I understood from the book, bit numbering goes from right to left, so I'm supposed to read it as ports DCBA, and that's why it is b'0010'.
However, this paragraph on page 18 is really confusing :
It moves the literal into the working register. Then the instruction
tris takes the number in the working register and uses it to select
which bits of the port are to act as inputs and which as outputs. A
binary 1 will correspond to an input and a 0 corresponds to an output.
Reading it again, I wonder if, for each port, there are 4 bits, and I can select which of them are inputs and which are outputs? But I thought a port could only be all input or all output...
Please, would someone clarify?
        __config _CP_OFF & _WDT_OFF & _XT_OSC
        list P = 16F57
        include "C:\Program Files (x86)\Microchip\MPLABX\v3.40\mpasmx\p16f57.inc"

portA   equ 05
portB   equ 06

        org 0           ; Starts at 0?
        goto Start

Init
        clrf  portA     ; Reset Port A and B states
        clrf  portB
        movlw b'0010'   ; Set port B as output
        tris  portA
        movlw b'0010'   ; Set Port A as input
                        ; 0010 should mean -> ABCD port states?
        tris  portB
        retlw 0         ; return

Start
        call Init

Main
        bsf   portA,0
        goto  Main

        END
An individual port corresponds to all of its associated pins. For example, on the PIC16F57, you have pins RA0,RA1,RA2 and RA3. These pins correspond to PORTA bits 0, 1, 2 and 3 respectively. So, this is what is actually happening.
clrf  portA
clrf  portB
movlw b'0010'   ; Set RA1 as input and RA0, RA2, RA3 as outputs
tris  portA
movlw b'0010'   ; Set RB1 as input and RB0, RB2, RB3 as outputs
tris  portB
Something to note is that all pins are initialized as inputs upon power up or reset and, while PORTA is only a 4 bit register, PORTB is 8 bits. In this case it may be better to explicitly declare all of the bits for that register.
movlw b'00000010' ;Set RB1 as input all others as output.
tris portB
You have to make sure that you read the datasheet to determine the width of your PORT registers and their corresponding pins.
TRISB = 0xFF;   // all of PORTB as inputs
TRISB = 0x00;   // all of PORTB as outputs

How to calculate g values from LIS3DH sensor?

I am using a LIS3DH sensor with an ATmega128 to read acceleration for motion detection. I went through the datasheet, but it seemed inadequate, so I decided to post here. From other posts I am convinced that the sensor resolution is 12 bits instead of 16. What I need to know: when computing the g value from the X-axis output registers, do we take the two's complement of the register contents only when the sign bit (the MSB of OUT_X_H, the high register) is 1, or every time, even when this bit is 0?
From my calculations I think we take the two's complement only when the MSB of OUT_X_H is 1.
But the datasheet says that we need to take the two's complement of both OUT_X_L and OUT_X_H every time.
Could anyone enlighten me on this?
Sample code
int main(void)
{
    stdout = &uart_str;
    UCSRB = 0x18;   // RXEN=1, TXEN=1
    UCSRC = 0x06;   // no parity, 1 stop bit, 8 data bits
    UBRRH = 0;
    UBRRL = 71;     // 9600 baud
    timer_init();
    TWBR = 216;     // 400HZ
    TWSR = 0x03;
    TWCR |= (1<<TWINT)|(1<<TWSTA)|(0<<TWSTO)|(1<<TWEN); // TWCR=0x04;
    printf("\r\nLIS3DH address: %x\r\n", twi_master_getchar(0x0F));
    twi_master_putchar(0x23, 0b000100000); // note: 9 binary digits -- this is 0x20, not 0x10
    printf("\r\nControl 4 register 0x23: %x", twi_master_getchar(0x23));
    printf("\r\nStatus register %x", twi_master_getchar(0x27));
    twi_master_putchar(0x20, 0x77);
    DDRB = 0xFF;
    PORTB = 0xFD;
    SREG = 0x80;    // sei();
    while (1)
    {
        process();
    }
}
void process(void)
{
    x_l = twi_master_getchar(0x28);
    x_h = twi_master_getchar(0x29);
    y_l = twi_master_getchar(0x2a);
    y_h = twi_master_getchar(0x2b);
    z_l = twi_master_getchar(0x2c);
    z_h = twi_master_getchar(0x2d);
    xvalue = (short int)(x_l + (x_h << 8));
    yvalue = (short int)(y_l + (y_h << 8));
    zvalue = (short int)(z_l + (z_h << 8));
    printf("\r\nxvalue: %dg", xvalue);
    printf("\r\nyvalue: %dg", yvalue);
    printf("\r\nzvalue: %dg", zvalue);
}
I wrote CTRL_REG4 as 0x10 (4 g), but when I read it back I got 0x20 (8 g). This seems a bit bizarre.
Do not compute the two's complement yourself; that has the effect of making the result the negative of what it was. Instead, the datasheet tells us the result is already a signed value. That is, 0 is not the lowest value; it is the middle of the scale (0xFFFF is just a little less than zero, not the highest value).
Also, the result is always 16 bits wide, but it is not meant to be taken as accurate to that many bits. You can set a control register to generate more accurate values at the expense of current consumption, but even then it is not guaranteed to be accurate to the last bit.
The datasheet does not say (at least in the register description in chapter 8.2) that you have to calculate the two's complement; it states that the contents of the two registers are in two's complement. So all you have to do is receive the two bytes and cast them to an int16_t to get the signed raw value.
uint8_t xl = 0x00;
uint8_t xh = 0xFC;
int16_t x = (int16_t)((((uint16_t)xh) << 8) | xl);
or
uint8_t xa[2] = {0x00, 0xFC}; // little endian: lower byte at the lower address
int16_t x = *((int16_t*)xa);
(I hope I did not mix anything up here.)
I have another approach, which may be easier to implement because the compiler does all of the work for you (probably efficiently, and with no bugs too).
Read the raw data into the raw field in:
typedef union
{
    struct
    {
        // low-power mode: 8 significant bits, left-justified
        int16_t reserved : 8;
        int16_t value    : 8;
    } lowPower;
    struct
    {
        // normal mode: 10 significant bits, left-justified
        int16_t reserved : 6;
        int16_t value    : 10;
    } normalPower;
    struct
    {
        // high-resolution mode: 12 significant bits, left-justified
        int16_t reserved : 4;
        int16_t value    : 12;
    } highPower;
    // the raw data as read from registers H and L
    uint16_t raw;
} LIS3DH_RAW_CONVERTER_T;
Then use the field appropriate to the power mode you are running in.
Note: in this example the bit-field structs are laid out big-endian. Check whether you need to reverse the order of 'value' and 'reserved' for your compiler.
The LISxDH sensors are two's complement, left-justified. They can be set to 12-bit, 10-bit, or 8-bit resolution. The result is read from the sensor as two 8-bit values (LSB, MSB) that need to be assembled together.
If you set the resolution to 8 bits, you can just cast the single significant byte to an int8, which is likely your processor's native 8-bit two's-complement representation. Likewise, if it were possible to set the sensor to 16-bit resolution, you could just cast the assembled value to an int16.
However, if the value is 10-bit left-justified, the sign bit is in the wrong place for an int16. Here is how to convert it to an int16 (16-bit two's complement).
1. Read LSB, MSB from the sensor:
[MMMM MMMM] [LL00 0000]
[1001 0101] [1100 0000] // example = [0x95] [0xC0] (note that the LSB comes before the MSB on the sensor)
2. Assemble the bytes, keeping in mind the LSB is left-justified.
//---As an example....
uint8_t byteMSB = 0x95; //[1001 0101]
uint8_t byteLSB = 0xC0; //[1100 0000]
//---Cast to U16 to make room, then combine the bytes---
assembledValue = ( (uint16_t)(byteMSB) << UINT8_LEN ) | (uint16_t)byteLSB;
/*[MMMM MMMM LL00 0000]
[1001 0101 1100 0000] = 0x95C0 */
//---Shift to right justify---
assembledValue >>= (INT16_LEN-numBits);
/*[0000 00MM MMMM MMLL]
[0000 0010 0101 0111] = 0x0257 */
3. Convert from 10-bit two's complement (now right-justified) to an int16 (which is just 16-bit two's complement on most platforms).
Approach #1: if the sign bit (in our example, the tenth bit) is 0, just cast to int16, since positive numbers are represented the same way in 10-bit and 16-bit two's complement.
If the sign bit is 1, invert the bits (keeping just the 10 bits), add 1 to the result, then multiply by -1 (per the definition of two's complement).
convertedValueI16 = ~assembledValue; //invert bits
convertedValueI16 &= ( 0xFFFF>>(16-numBits) ); //but keep just the 10-bits
convertedValueI16 += 1; //add 1
convertedValueI16 *=-1; //multiply by -1
/*Note that the last two lines could be replaced by convertedValueI16 = ~convertedValueI16;*/
//result = -425 = 0xFE57 = [1111 1110 0101 0111]
Approach #2: flip the sign bit (the tenth bit) and subtract half the range (1<<9).
//----Flip the sign bit (tenth bit)----
convertedValueI16 = (int16_t)( assembledValue ^ ( 0x0001<<(numBits-1) ) );
/* Result = 599 ^ 512 = 87 = 0x57 [0000 0000 0101 0111] */
//----Subtract out half the range----
convertedValueI16 -= ( (int16_t)(1)<<(numBits-1) );
/*   [0000 0000 0101 0111]
   - [0000 0010 0000 0000]
   = [1111 1110 0101 0111]
   Result = 87 - 512 = -425 = 0xFE57 */
Link to script to try out (not optimized): http://tpcg.io/NHmBRR

Why do x64 projects use a default packing alignment of 16?

If you compile the following code in an x64 project in VS2012 without any /Zp flags:
#pragma pack(show)
then the compiler will spit out:
value of pragma pack(show) == 16
If the project uses Win32, the compiler will spit out:
value of pragma pack(show) == 8
What I don't understand is that the largest natural alignment of any type (i.e., long long and pointers) in Win64 is 8. So why not just make the default alignment 8 for x64?
Somewhat related: why would anyone ever use /Zp16?
EDIT:
Here's an example to show what I'm talking about. Even though pointers have a natural alignment of 8 bytes on x64, Zp1 can force them to a 1-byte boundary.
struct A
{
    char  a;
    char* b;
};
// Zp16
//   Offset of a == 0
//   Offset of b == 8
// Zp1
//   Offset of a == 0
//   Offset of b == 1
Now if we take an example that uses SSE:
struct A
{
    char   a;
    char*  b;
    __m128 c; // uses __declspec(align(16)) in xmmintrinsic.h
};
// Zp16
//   Offset of a == 0
//   Offset of b == 8
//   Offset of c == 16
// Zp1
//   Offset of a == 0
//   Offset of b == 1
//   Offset of c == 16
If __m128 were truly a builtin type, then I'd expect the offset to be 9 with Zp1. But since it uses __declspec(align(16)) in its definition in xmmintrinsic.h, that trumps any Zp settings.
So here's my question worded a little differently: is there a type for 'c' that has a natural alignment of 16B but will have an offset of 9 in the previous example?
The MSDN page here includes the following relevant information about your question "why not make the default alignment 8 for x64?":
Writing applications that use the latest processor instructions introduces some new constraints and issues. In particular, many new instructions require that data must be aligned to 16-byte boundaries. Additionally, by aligning frequently used data to the cache line size of a specific processor, you improve cache performance. For example, if you define a structure whose size is less than 32 bytes, you may want to align it to 32 bytes to ensure that objects of that structure type are efficiently cached.
On x64, floating point is performed in the SSE unit. You state that the largest type has alignment 8, but that is not correct: some of the SSE intrinsic types, for example __m128, have an alignment of 16.
