This is mostly a Ruby question.
I'm stuck trying to parse some bytes from an I2C device into a float value in Ruby.
Long story:
I'm trying to read a float value from an I2C device with a Raspberry Pi and an ATtiny85 (the device). I'm able to read its value from the console through i2c-tools.
Example:
i2cset -y 0 0x25 0x00; sleep 1; i2cget -y 0 0x25 0x00; i2cget -y 0 0x25 0x00; i2cget -y 0 0x25 0x00; i2cget -y 0 0x25 0x00
Gives me:
0x3e
0x00
0x80
0x92
That means 0.12549046, a value in volts that I'm able to check with my multimeter, and it is correct. (The byte order is 0x3e008092.)
Now I need to get this float value from a Ruby script; I'm using the i2c gem.
A comment on this site suggests the following conversion method:
hex_string = '42880000'
float = [hex_string.to_i(16)].pack('L').unpack('F')[0]
# => 68.0
float = 66.2
hex_string = [float].pack('F').unpack('L')[0].to_s(16)
# => 42846666
But I haven't been able to build that string of hex values. This fragment of code:
require "i2c/i2c"
require "i2c/backends/i2c-dev"
#i2c = ::I2C.create("/dev/i2c-0")
sharp = 0x25
#i2c.write(sharp, 0)
sleep 1
puts #i2c.read(sharp, 4).inspect
prints
">\x00\x00P"
where the characters '>' and 'P' are the ASCII representations of the bytes in those positions, but I cannot figure out where/how to split the string and clean it up to at least try the method shown above.
I could write a C program that reads the value and printfs it to the console, and run that from Ruby, but I think that would be an awful solution.
Any ideas on how this can be done would be very helpful!
Greetings.
I came up with something:
bytes = []
4.times do
  bytes << @i2c.read_byte(sharp).unpack('C')[0].to_s(16).rjust(2, '0')
end
hex_string = bytes.join
float = [hex_string.to_i(16)].pack('L').unpack('F')[0]
puts float
Not sure this is the cleanest way to do it, though ('C' reads each byte as an unsigned 8-bit integer and rjust zero-pads single-digit hex values), but it works. If there's a better way I'd be glad to accept another answer.
Greetings!
You probably just need to use unpack with a format of g, or possibly e, depending on the endianness.
@i2c.read(sharp, 4).unpack('g')[0]
The example you are referring to takes a string of hex digits, first converts it to an integer (hex_string.to_i(16)), then packs that integer into a binary string (that's the pack('L') part; the L directive is for 32-bit unsigned integers) before unpacking it as a float. The data you have is already a binary string, so you just need to convert it directly with the appropriate directive for unpack.
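For example, a minimal sketch of the direct conversion (the literal byte string below just stands in for whatever @i2c.read(sharp, 4) returns):
data = "\x3e\x00\x80\x92".b    # the four bytes from the i2cget example, MSB first
puts data.unpack('g')[0]       # => ~0.12549 ('g' is big-endian single-precision)
# if the device sends the bytes LSB first, use unpack('e') instead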
Have a read of the documentation for unpack and pack.
Related
I got an Arduino Uno, which is driven by an ATmega328P, and I wanted to move away from its libraries and do everything at a lower level for learning purposes. However, I cannot get the UART working correctly: it currently only works when sending to the device. Receiving returns weird garbage which the terminal can't print.
#define BAUDRATE (((F_CPU / (BAUD * 16UL))) - 1)

void init_uart()
{
    UBRR0H = BAUDRATE >> 8;                          // set high byte of the baud rate divisor
    UBRR0L = BAUDRATE;                               // set low byte of the baud rate divisor
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);                // enable duplex
    UCSR0C = _BV(UCSZ00) | _BV(UCSZ01) | _BV(USBS0); // 8-N-1
}

void putchar_uart(char c, FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, UDRE0); // wait till prev char is read
    UDR0 = c;
}

char getchar_uart(FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, RXC0); // wait until there is data
    return UDR0;
}

// ^ the above actually lives in a separate file which gets linked
int main()
{
    DDRD |= PIN_LED;
    PORTD |= PIN_LED;

    stdout = &mystdout;
    stdin = &mystdin;

    char buf[0xFF];

    init_uart();

    while (1)
    {
        char c = getchar_uart(NULL);
        if (c == 'a')
        {
            PIND = PIN_LED;
            printf("%s\n", "Hallo");
        }
    }
}
I'm running Ubuntu 14.04 LTS and using minicom for the communication, which is set up as 115200 8N1 (with the correct serial device, of course).
It gets compiled as:
avr-gcc -Wall -Os -mmcu=atmega328p -DF_CPU=16000000UL -DBAUD=115200 -std=c99 -L/home/joel/avr-libs/lib -I/home/joel/avr-libs/inc -o firmware.o main.c -luart
So how do I know that one direction works? Because the LED only toggles when I type an 'a'. But the response consists of invalid characters. In hex:
c8 e1 ec ec ef 8a
By setting the USBS bit you are commanding a second stop bit.
This appears to lead your computer to mistakenly believe that the MSB (which is the last data bit) is set when it isn't, causing your received data to be ORed with 0x80.
While this will cause a framing error, it is probably not the cause of the wrong MSB. Your own answer about switching to 2x mode (and thus more accurately approximating the baud rate) is more key to the solution, though you should correct this too.
I fixed the problem. When Chris suggested printing out the configuration registers that the Arduino core uses, I noticed that it uses double-speed mode (U2X0). I couldn't configure that with minicom, or I missed the option; maybe it is the default. Anyway, it works now.
I also learned that avr-libc provides a header called util/setbaud.h which calculates the correct baud-rate register values automatically and exposes them as UBRRL_VALUE and UBRRH_VALUE (plus USE_2X to tell you whether double-speed mode is needed).
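For reference, a minimal sketch of what init_uart() can look like when built on util/setbaud.h (this assumes F_CPU and BAUD are defined on the compiler command line, as in the question; it is not my exact code):

#include <avr/io.h>
#include <util/setbaud.h>

void init_uart(void)
{
    UBRR0H = UBRRH_VALUE;
    UBRR0L = UBRRL_VALUE;
#if USE_2X
    UCSR0A |= _BV(U2X0);                // double-speed mode, closer baud-rate match
#else
    UCSR0A &= ~_BV(U2X0);
#endif
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);   // enable TX and RX
    UCSR0C = _BV(UCSZ01) | _BV(UCSZ00); // 8-N-1 (no USBS0, i.e. a single stop bit)
}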
My project is an audio spectrum analyzer, but I am stuck on displaying the ADC results, either on my LCD or on the Terminal of CodeVisionAVR.
The project uses an ATmega16A with a 7.37 MHz external oscillator. As an IDE I am using CodeVisionAVR.
The audio spectrum analyzer takes its input through a 3.5 mm audio jack cable; this signal is amplified and filtered in order to select the frequencies between 0 and 4 kHz, and the output of this circuit is connected to PA0, which is channel 0 of the microcontroller's ADC.
For testing, I have set the ADC to work on 8 bits (reading the most significant 8 bits), taking the internal 2.56 V reference as the voltage reference. I have decoupled the AREF pin with a 10 nF capacitor (I will change it to 100 nF for better noise reduction). The ADC is also in free-running mode.
I am stuck on displaying the ADC results, either on my LCD or on the Terminal of CodeVisionAVR (through the UART, configured using the wizard).
This is the function I used for the ADC:
// Voltage Reference: Int., cap. on AREF
#define ADC_VREF_TYPE ((1<<REFS1) | (1<<REFS0) | (1<<ADLAR))

// Read the 8 most significant bits
// of the AD conversion result
unsigned char read_adc(unsigned char adc_input)
{
    ADMUX = adc_input | ADC_VREF_TYPE;
    // Delay needed for the stabilization of the ADC input voltage
    delay_us(10);
    // Start the AD conversion
    ADCSRA |= (1<<ADSC);
    // Wait for the AD conversion to complete
    while ((ADCSRA & (1<<ADIF)) == 0);
    ADCSRA |= (1<<ADIF);
    return ADCH;
}
Main function of the code:
void main (void)
{
    Init_Controller();  // this must be the first "init" action/call!
    #asm("sei")         // enable interrupts

    lcd_init(16);
    lcd_gotoxy(0,1);
    lcd_putsf("AUDIO SPECTRUM");
    delay_ms(3000);
    lcd_clear();

    while (TRUE)
    {
        wdogtrig();

        TCNT1 = 0;          // usage of Timer1 with OCR1A
        TIFR |= 1<<OCF1A;

        for (i = 0; i < N; i++) {
            while ((TIFR & (1<<OCF1A)) == 0)
                putchar(read_adc());
            //adc_set[i] = adc_read(); // this is a second option
            TIFR |= 1<<OCF1A;
        }

        //for (i = 0; i < N; i++)
        //    printf("adc values: %d \n", adc_set[i]);
    } // end while loop
}
N is defined as 32, the number of samples taken in one acquisition.
The first error I see is using putchar() to write a number to the LCD.
The result of read_adc() is a number, not a string of ASCII characters. You need to use sprintf to write the ADC result as a string into a buffer, then use lcd_puts() to send that buffer to the LCD (lcd_putsf() is for constant strings stored in flash).
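A minimal sketch of the idea (reusing the lcd_gotoxy()/read_adc() calls already in your code; the buffer size and formatting are just an example):

char buf[17];                     // one 16-character LCD line + terminator
unsigned char adc = read_adc(0);  // channel 0
sprintf(buf, "ADC: %3u", (unsigned int) adc);
lcd_gotoxy(0, 0);
lcd_puts(buf);                    // lcd_puts() prints a string from SRAM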
I'm on the trail of why the contents of a TXT record in a Bonjour service discovery are sometimes being incompletely interpreted, and I've reached a point where it would be really useful to have a breakpoint print out the contents of an unsigned char * in a callback (I've tried NSLog, but using NSLog in a threaded callback can get really tricky).
The callback function is defined this way:
static void resolveCallback(DNSServiceRef sdRef, DNSServiceFlags flags, uint32_t interfaceIndex, DNSServiceErrorType errorCode,
const char* fullname, const char* hosttarget, uint16_t port, uint16_t txtLen,
const unsigned char* txtRecord, void* context) {
So I'm interested in the txtRecord.
Right now my breakpoint is using:
memory read --size 4 --format x --count 4 `txtRecord`
But that's only because that was an example on the lldb.llvm.org example page ;-) It's certainly showing data that I expect to be there, at least partially.
Do I have to apply informed knowledge of the length, or can the breakpoint be coded such that it uses the length that is present? I'm thinking that instead of "hard coding" the two 4s in the example, there ought to be a way to wrap other read instructions inside backticks like I did with the variable name.
Looking at http://lldb.llvm.org/varFormats.html I thought I'd try a format of C instead of x, but that prints out a series of dots, which must mean I picked the wrong format or something.
I just tried
memory read `txtRecord`
and that's almost exactly what I wanted to see as it gives:
0x1c5dd884: 10 65 6e 30 3d 31 39 32 2e 31 36 38 2e 31 2e 33 .en0=192.168.1.3
0x1c5dd894: 36 0a 70 6f 72 74 3d 35 30 32 37 38 00 00 00 00 6.port=50278....
This looks really close:
memory read `txtRecord` --format C
giving:
0x1d0c6974: .en0=192.168.1.36.port=50278....
If that's the best I can get, I guess I can deal with the length bytes in front of each of the two strings in that txtRecord.
I'm asking this question because I'd like to display the actual and correct values... the bug is that sometimes the IP address comes back wrong, losing the frontmost 1, other times the port comes back "short" (in network byte order) with non-numeric characters at the end, like "502¿" instead of "50278" (in this example run).
My initial response to this question, while informative, was not complete. I originally thought the problem being reported was just about printing a c-string array of type unsigned char * where the default formatters (char *) weren't being used. That answer comes first. Then comes the answer about how to print this (somewhat unique) array of pascal strings data that the program is actually dealing with.
First answer: lldb knows how to handle the char * well; it's the unsigned char * bit that is making it behave a little worse than usual. e.g. if txtRecord were a const char *,
(lldb) p txtRecord
(const char *) $0 = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
You can copy the type summary lldb has built in for char * for unsigned char *. type summary list lists all of the built in type summaries; copying lldb-179.5's summaries for char *:
(lldb) type summary add -p -C false -s ${var%s} 'unsigned char *'
(lldb) type summary add -p -C false -s ${var%s} 'const unsigned char *'
(lldb) fr va txtRecord
(const unsigned char *) txtRecord = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
(lldb) p txtRecord
(const unsigned char *) $2 = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
(lldb)
Of course you can put these in your ~/.lldbinit file and they'll be picked up by Xcode et al from now on.
Second answer: To print the array of pascal strings that this is actually using, you'll need to create a python function. It will take two arguments, the size of the pascal string buffer (txtLen) and the address of the start of the buffer (txtRecord). Create a python file like pstrarray.py (I like to put these in a directory I made, ~/lldb) and load it into your lldb via the ~/.lldbinit file so you have the command available:
command script import ~/lldb/pstrarray.py
The python script is a little long; I'm sure someone more familiar with python could express this more concisely. There's also a bunch of error handling, which adds bulk. But the main idea is to take two parameters: the size of the buffer and the pointer to the buffer. The user will express these with variable names like pstrarray txtLen txtRecord, in which case you could look up the variables in the current frame, but they might also want to use an actual expression like pstrarray sizeof(str) str. So we need to pass these parameters through the expression evaluation engine to get them down to an integer size and a pointer address. Then we read the memory out of the process and print the strings.
import lldb
import shlex
import optparse

def pstrarray(debugger, command, result, dict):
    command_args = shlex.split(command)
    parser = create_pstrarray_options()
    try:
        (options, args) = parser.parse_args(command_args)
    except:
        return
    if debugger and debugger.GetSelectedTarget() and debugger.GetSelectedTarget().GetProcess():
        process = debugger.GetSelectedTarget().GetProcess()
        if len(args) < 2:
            print "Usage: pstrarray size-of-buffer pointer-to-array-of-pascal-strings"
            return
        if process.GetSelectedThread() and process.GetSelectedThread().GetSelectedFrame():
            frame = process.GetSelectedThread().GetSelectedFrame()
            size_of_buffer_sbval = frame.EvaluateExpression (args[0])
            if not size_of_buffer_sbval.IsValid() or size_of_buffer_sbval.GetValueAsUnsigned (lldb.LLDB_INVALID_ADDRESS) == lldb.LLDB_INVALID_ADDRESS:
                print 'Could not evaluate "%s" down to an integral value' % args[0]
                return
            size_of_buffer = size_of_buffer_sbval.GetValueAsUnsigned ()
            address_of_buffer_sbval = frame.EvaluateExpression (args[1])
            if not address_of_buffer_sbval.IsValid():
                print 'could not evaluate "%s" down to a pointer value' % args[1]
                return
            address_of_buffer = address_of_buffer_sbval.GetValueAsUnsigned (lldb.LLDB_INVALID_ADDRESS)
            # If the expression eval didn't give us an integer value, try it again with an & prepended.
            if address_of_buffer == lldb.LLDB_INVALID_ADDRESS:
                address_of_buffer_sbval = frame.EvaluateExpression ('&%s' % args[1])
                if address_of_buffer_sbval.IsValid():
                    address_of_buffer = address_of_buffer_sbval.GetValueAsUnsigned (lldb.LLDB_INVALID_ADDRESS)
            if address_of_buffer == lldb.LLDB_INVALID_ADDRESS:
                print 'could not evaluate "%s" down to a pointer value' % args[1]
                return
            err = lldb.SBError()
            pascal_string_buffer = process.ReadMemory (address_of_buffer, size_of_buffer, err)
            if (err.Fail()):
                print 'Failed to read memory at address 0x%x' % address_of_buffer
                return
            pascal_string_array = bytearray(pascal_string_buffer, 'ascii')
            index = 0
            while index < size_of_buffer:
                length = ord(pascal_string_buffer[index])
                print "%s" % pascal_string_array[index+1:index+1+length]
                index = index + length + 1

def create_pstrarray_options():
    usage = "usage: %prog"
    description = '''print a buffer which has an array of pascal strings in it'''
    parser = optparse.OptionParser(description=description, prog='pstrarray', usage=usage)
    return parser

def __lldb_init_module (debugger, dict):
    parser = create_pstrarray_options()
    pstrarray.__doc__ = parser.format_help()
    debugger.HandleCommand('command script add -f %s.pstrarray pstrarray' % __name__)
and an example program to run this on:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
int main ()
{
    unsigned char str[] = {16,'e','n','0','=','1','9','2','.','1','6','8','.','1','.','3','6',
                           10,'p','o','r','t','=','5','1','6','8','7'};
    uint8_t *p = str;
    while (p < str + sizeof (str))
    {
        int len = *p++;
        char buf[len + 1];
        strlcpy (buf, (char*) p, len + 1);
        puts (buf);
        p += len;
    }
    puts ("done"); // break here
}
and in use:
(lldb) br s -p break
Breakpoint 1: where = a.out`main + 231 at a.c:17, address = 0x0000000100000ed7
(lldb) r
Process 74549 launched: '/private/tmp/a.out' (x86_64)
en0=192.168.1.36
port=51687
Process 74549 stopped
* thread #1: tid = 0x1c03, 0x0000000100000ed7 a.out`main + 231 at a.c:17, stop reason = breakpoint 1.1
#0: 0x0000000100000ed7 a.out`main + 231 at a.c:17
14 puts (buf);
15 p += len;
16 }
-> 17 puts ("done"); // break here
18 }
(lldb) pstrarray sizeof(str) str
en0=192.168.1.36
port=51687
(lldb)
While it's cool that it's possible to do this in lldb, it's not as smooth as we'd like to see. If the size of the buffer and the address of the buffer were contained in a single object, struct PStringArray {uint16_t size; uint8_t *addr;}, that would work much better. You could define a type summary formatter for all variables of type struct PStringArray and no special commands would be required. You'd still need to write a python function, but it could get all the information it needed out of the object directly so it would disappear into the lldb type format system. You could just write (lldb) p strs and the custom formatter function would be called on strs to print all the strings in there.
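For example, a sketch of such a summary provider might look like this (hypothetical struct and function names, mirroring the pascal-string walk in the command above):

import lldb

def pstringarray_summary(valobj, internal_dict):
    # assumes struct PStringArray { uint16_t size; uint8_t *addr; }
    size = valobj.GetChildMemberWithName('size').GetValueAsUnsigned(0)
    addr = valobj.GetChildMemberWithName('addr').GetValueAsUnsigned(0)
    err = lldb.SBError()
    buf = valobj.GetProcess().ReadMemory(addr, size, err)
    if err.Fail():
        return '<could not read memory>'
    strings = []
    index = 0
    while index < size:
        length = ord(buf[index])
        strings.append(buf[index+1:index+1+length])
        index = index + length + 1
    return ', '.join(strings)

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand('type summary add -F %s.pstringarray_summary "struct PStringArray"' % __name__)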
How do I convert a hex string to its 32-bit signed int equivalent in Ruby?
For example:
a = "fb6d8cf1" #hex string
[a].pack('H*').unpack('l') #from the documentation it unpacks to its 32 bit signed int
It converts to
-242455045
But the actual answer is
-76706575
Could you point me to what I am doing wrong?
Seems like you had an endian problem. This gives the desired result:
[a].pack("H*").unpack("l>")
# => [-76706575]
["038a67f90"].pack("H*").unpack("l>")
#=> [59402233]
You could flip the bytes yourself to get around the endian and sign issues:
>> ['fb6d8cf1'.scan(/[0-9a-f]{2}/i).reverse.join].pack('H*').unpack('l')
=> [-76706575]
Use:
class String
  def to_si(base, length = 32)
    mid = 2**(length - 1)
    max_unsigned = 2**length
    n = self.to_i base
    (n >= mid) ? n - max_unsigned : n
  end
end
"fb6d8cf1".to_si 16, 32
# => -76706575
I'm trying to send an integer over the serial port to my Arduino. The chip is then going to display the number in binary on the LEDs. However, I'm having lots of trouble trying to send the data as a byte over the serial port; as far as I can tell from debugging, the following code sends it as ASCII character values.
Can anyone point me in the right direction or spot the mistake? I'd really appreciate it. I've been pulling my hair out over this for a long time.
Ruby
require 'rubygems'
require 'serialport' # use Kernel::require on windows, works better.
#params for serial port
port_str = "/dev/tty.usbserial-A700dZt3" #may be different for you
baud_rate = 9600
data_bits = 8
stop_bits = 1
parity = SerialPort::NONE
sp = SerialPort.new(port_str, baud_rate, data_bits, stop_bits, parity)
i = 15
#just write forever
while true do
sp.write(i.to_s(2))
sleep 10
end
Arduino
int ledPin = 10;
int ledPin1 = 11;
int ledPin2 = 12;
int ledPin3 = 13;
byte incomingByte; // for incoming serial data
void setup() {
    pinMode(ledPin, OUTPUT);   // initialize the LED pin as an output:
    pinMode(ledPin1, OUTPUT);  // initialize the LED pin as an output:
    pinMode(ledPin2, OUTPUT);  // initialize the LED pin as an output:
    pinMode(ledPin3, OUTPUT);  // initialize the LED pin as an output:
    Serial.begin(9600);
    Serial.println("I am online");
}

void loop() {
    // send data only when you receive data:
    if (Serial.available() > 0) {
        incomingByte = Serial.read();
        Serial.println(incomingByte, DEC);
        int value = (incomingByte, DEC) % 16;
        digitalWrite(ledPin, (value >> 0) % 2);
        digitalWrite(ledPin1, (value >> 1) % 2);
        digitalWrite(ledPin2, (value >> 2) % 2);
        digitalWrite(ledPin3, (value >> 3) % 2); // MSB
    }
}
I'm guessing you are trying to write the value 15 in order to light all the LEDs at once. However, 15.to_s(2) is "1111". The ASCII value of the character '1' is 49, so instead of writing 15 once you are writing 49 four times in rapid succession.
The write command you are looking for is therefore probably sp.putc(i). This writes only one character with the given binary value (= machine-readable for Arduino) instead of an ASCII string representation of the value expressed in binary (= human-readable for you).
So keeping everything else the same, replace the while loop in your Ruby code with:
loop do
sp.putc(i)
puts 'Wrote: %d = %bb' % [ i, i ]
i = (i == 15) ? 0 : (i + 1)
sleep(10)
end
If you wish to read the responses from Arduino, you can use e.g. sp.gets to get one line of text, e.g. try placing puts 'Arduino replied: ' + sp.gets in the loop before sleep (and one puts sp.gets before the loop to read the "I am online" sent when the connection is first established).
Edit: I just spotted another problem in your code, on the Arduino side: value = (incomingByte, DEC) % 16; always results in the value 10 because (incomingByte, DEC) has the value DEC (which is 10). You should use value = incomingByte % 16; instead. Or do away with value altogether and modify incomingByte itself, e.g. incomingByte %= 16;.
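For clarity, the receiving part of loop() with that fix applied would look something like this (same pins and variables as in your sketch):

if (Serial.available() > 0) {
    incomingByte = Serial.read();
    Serial.println(incomingByte, DEC);
    int value = incomingByte % 16;           // keep only the low four bits
    digitalWrite(ledPin, (value >> 0) % 2);
    digitalWrite(ledPin1, (value >> 1) % 2);
    digitalWrite(ledPin2, (value >> 2) % 2);
    digitalWrite(ledPin3, (value >> 3) % 2); // MSB
}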
Your problems may be caused by buffering. To disable buffering, you can do one of the following (a short sketch follows the list):
Set sp to unbuffered after creating it (before writing): sp.sync = true
Call flush after the write
Use the unbuffered syswrite instead of write
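A quick sketch of those three options (using sp and i from your script, and i.chr per the fix above):

sp.sync = true       # 1. make the port unbuffered from now on
sp.write(i.chr)
sp.flush             # 2. or flush explicitly after each write
sp.syswrite(i.chr)   # 3. or bypass Ruby's buffering entirely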
It's been so long since I did anything with serial ports that I can't help there, but I do see one thing.
>> 15.to_s #=> "15"
and
>> 15.to_s(2) #=> "1111"
I think if you want the binary value to be sent you'll want "\xf" or "\u000F".
Change your code from:
while true do
sp.write(i.to_s(2)) # <-- this sends a multi-character ASCII representation of the "i" value, NOT the binary.
sleep 10
end
to:
while true do
sp.write(i.chr) # <-- this sends a single byte binary representation of the "i" value, NOT the ASCII.
sleep 10
end
To show the difference, here's the length of the strings being output:
>> 15.to_s(2).size #=> 4
>> 15.chr.size #=> 1
And the decimal values of the bytes comprising the strings:
>> 15.to_s(2).bytes.to_a #=> [49, 49, 49, 49]
>> 15.chr.bytes.to_a #=> [15]
I've had this Ruby code work before
while true do
printf("%c", sp.getc)
end
rather than using sp.write(i.to_s). It looks like you are explicitly converting it to a string, which may be the cause of your problems.
I found the original blog post I used:
http://www.arduino.cc/playground/Interfacing/Ruby