invalid operands of types 'const char [35]' and 'double' to binary 'operator+' - arduino-uno

I have a problem with a string in Arduino. I know I cannot concatenate different types like that, and I have tried several conversions but can't get it working.
The following line is where I get the message "invalid operands of types 'const char [35]' and 'double' to binary 'operator +'"
sendString("Time: " + (micros () - PingTimer) * 0.001, 3 + " ms");

Disclaimer: This question is pretty similar but on a different stack exchange site (and the answer is questionable).
The problem can be reduced to the following snippet:
void setup() {
    "hello" + 3.0;
}
It produces the following error message:
error: invalid operands of types 'const char [6]' and 'double' to binary 'operator+'
Many programming languages support "adding" character sequences together; C++ doesn't. This means you need a class that represents a character sequence and implements the + operator.
Luckily there is already such a class which you can use: String. Example:
void setup() {
    String("hello") + 3.0;
}
The expression is evaluated from left to right, which means the left-most operand has to be a String. In other words:
String("a") + 1 + 2 + 3
Is understood as:
((String("a") + 1) + 2) + 3
Where String("a") + 1 is a String, and therefore (String("a") + 1) + 2 is one too, and so on.


cannot assign a string element to a random function in Swift Xcode

I am trying to generate an automatic password, but I get an error with the code below: "Cannot convert value of type '[Any]' to specified type 'String'".
The debugger output is as follows:
expression failed to parse:
error: Loops + Function.playground:7:25: error: cannot convert value of type '[Any]' to specified type 'String'
var passString:String = []
^~
warning: Loops + Function.playground:9:5: warning: immutable value 'n' was never used; consider replacing with '_' or removing it
for n in 0...5 {
^
_
let alphabet = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]
var passString:String = []
for n in 0...5 {
    passString = passString + alphabet.randomElement()!
}
print(passString)

Where BigDecimal "/" is defined?

I thought '3.0'.to_d.div(2) was the same as '3.0'.to_d / 2, but the former returns 1 while the latter returns 1.5.
I searched for def / in BigDecimal's GitHub repository, but I couldn't find it.
https://github.com/ruby/bigdecimal/search?utf8=%E2%9C%93&q=def+%2F&type=Code
Where can I find the definition? And which method is equivalent to / in BigDecimal?
Float has an fdiv method. Is there a similar one in BigDecimal?
You can find it in the source code of the bigdecimal library, in the repository you linked to. On line 3403 of ext/bigdecimal/bigdecimal.c, BigDecimal#/ is bound to the function BigDecimal_div:
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
This function looks like this:
static VALUE
BigDecimal_div(VALUE self, VALUE r)
/* For c = self/r: with round operation */
{
    ENTER(5);
    Real *c=NULL, *res=NULL, *div = NULL;
    r = BigDecimal_divide(&c, &res, &div, self, r);
    if (!NIL_P(r)) return r; /* coerced by other */
    SAVE(c); SAVE(res); SAVE(div);
    /* a/b = c + r/b */
    /* c xxxxx
       r 00000yyyyy ==> (y/b)*BASE >= HALF_BASE
     */
    /* Round */
    if (VpHasVal(div)) { /* frac[0] must be zero for NaN,INF,Zero */
        VpInternalRound(c, 0, c->frac[c->Prec-1], (BDIGIT)(VpBaseVal() * (BDIGIT_DBL)res->frac[0] / div->frac[0]));
    }
    return ToValue(c);
}
This is because BigDecimal#div takes a second argument, precision, which defaults to 1.
irb(main):017:0> '3.0'.to_d.div(2, 2)
=> 0.15e1
However, when / is defined on BigDecimal,
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
The binding uses 1 for the number of arguments, rather than -1 (which means "variable number of arguments"). So BigDecimal#div accepts one required argument and one optional argument (the precision), whereas BigDecimal#/ takes exactly one required argument and the optional precision is ignored. Because that optional argument is never passed for /, it is not initialized properly and effectively ends up as 0.
This may be considered a bug. You should consider opening an issue with the ruby devs.

How to detect snprintf failure?

I am using snprintf to format string using user-defined format (also given as string). The code looks like this:
void DataPoint::valueReceived( QVariant value ) {
    // Get the formatting QVariant, which is only considered valid if it's a string
    QVariant format = this->property("format");
    if( format.isValid() && format.type()==QMetaType::QString && !format.isNull() ) {
        // Convert QString to std string
        const std::string formatStr = format.toString().toStdString();
        LOGMTRTTIINFO(pointName<<"="<<value.toString().toUtf8().constData()<<"=>"<<formatStr<<"["<<formatStr.length()<<'\n');
        // The attempt to catch exceptions caused by an invalid formatting string
        try {
            if( value.type() == QMetaType::QString ) {
                // Treat value as string (values are always ASCII)
                const std::string array = value.toString().toStdString();
                const char* data = (char*)array.c_str();
                // Assume no more than 10 characters are added during formatting.
                char* result = (char*)calloc(array.length()+10, sizeof(char));
                snprintf(result, array.length()+10, formatStr.c_str(), data);
                value = result;
            }
            // If not string, then it's a number.
            else {
                double data = value.toDouble();
                char* result = (char*)calloc(30, sizeof(char));
                // Even 15 characters is already longer than the largest number you can make any sense of
                snprintf(result, 30, formatStr.c_str(), data);
                LOGMTRTTIINFO(pointName<<"="<<data<<"=>"<<formatStr<<"["<<formatStr.length()<<"]=>"<<result<<'\n');
                value = result;
            }
        } catch(...) {
            LOGMTRTTIERR("Format error in "<<pointName<<'\n');
        }
    }
    ui->value->setText(value.toString());
}
As you can see, I assumed there would be some exception, but there is none: an invalid formatting string simply produces gibberish (for example, when a double is formatted using %s).
So is there a way to detect that an invalid formatting option was selected, such as formatting a number as a string or vice versa? And what if a totally invalid formatting string is given?
You ask if it's possible to detect format/argument mismatch at run-time, right? Then the short and only answer is no.
To expand on that "no": variadic functions (functions using the ellipsis ...) have no type-safety whatsoever. The compiler will promote some argument types (e.g. char or short are converted to int, float is converted to double), and if you use a literal string for the format, some compilers are able to parse the string and check the arguments you pass.
However, since you pass a variable string that can change at run-time, the compiler has no way to do any compile-time checking, and the function must trust that the format string uses the correct formatting for the arguments passed. If it doesn't, you have undefined behavior.
It should be noted that snprintf might not actually fail when being passed mismatching format specifier and argument value.
For example if using the %d format to print an int value, but then passing a double value, the snprintf would happily extract sizeof(int) bytes from the double value, and interpret it as an int value. The value printed will be quite unexpected, but there won't be a "failure" as such. Only undefined behavior (as mentioned above).
Thus it's not really possible to detect such errors or problems at all. At least not through the code. This is something that needs proper testing and code-review to catch.
What happens when snprintf itself fails? In that case, POSIX requires that errno be set:
If an output error was encountered, these functions shall return a negative value and set errno to indicate the error.

Using Swift's Repeat collection

In pre-Swift 2.0 sample code, I've come across something like:
var val = "hello" + Repeat(count: paddingAmount, repeatedValue: "-") + "."
In Xcode 7.0/Swift 2.0 Playground, this produces the error:
note: expected an argument list of type '(String, String)'
How would you use the Repeat collection and get the value that's held by the collection for use?
String has an initializer that returns a string of repeated characters; I would recommend using that in your case:
let padding = String(count: paddingAmount, repeatedValue: Character("-"))
var val = "hello" + padding + "."
The Repeat collection itself is now written Array(count: paddingAmount, repeatedValue: "-").

Parsing a peculiar unary minus sign using Spirit.Lex

I'm trying to parse a language where a unary minus is distinguished from a binary minus by the whitespaces existing around the sign. Below are some pseudo rules defining how the minus sign is interpreted in this language:
-x // unary
x - y // binary
x-y // binary
x -y // unary
x- y // binary
(- y ... // unary
Note: The open paren in the last rule can be replaced by any token in the language except 'identifier', 'number' and 'close_paren'.
Note: In the 4th case, x is an identifier. An identifier can constitute a statement of its own, and -y is a separate statement.
Since the minus sign type depends on whitespaces, I thought I'd have two different tokens returned from the lexer, one for unary minus and one for binary minus. Any ideas how can I do this?
Code: Here's some code that works for me, but I'm not quite sure if it's robust enough. I tried to make it simple by removing all the irrelevant lexer rules:
#ifndef LEXER_H
#define LEXER_H

#include <iostream>
#include <algorithm>
#include <string>

#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/phoenix_function.hpp>
#include <boost/spirit/include/phoenix_algorithm.hpp>
#include <boost/spirit/include/phoenix_operator.hpp>
#include <boost/spirit/include/phoenix_object.hpp>
#include <boost/spirit/include/phoenix_statement.hpp>

#define BOOST_SPIRIT_LEXERTL_DEBUG 1

using std::string;
using std::cerr;

namespace skill {

namespace lex = boost::spirit::lex;
namespace phoenix = boost::phoenix;

// base iterator type
typedef string::iterator BaseIteratorT;

// token type
typedef lex::lexertl::token<BaseIteratorT, boost::mpl::vector<int, string> > TokenT;

// lexer type
typedef lex::lexertl::actor_lexer<TokenT> LexerT;

template <typename LexerT>
struct Tokens: public lex::lexer<LexerT>
{
    Tokens(const string& input):
        lineNo_(1)
    {
        using lex::_start;
        using lex::_end;
        using lex::_pass;
        using lex::_state;
        using lex::_tokenid;
        using lex::_val;
        using lex::omit;
        using lex::pass_flags;
        using lex::token_def;
        using phoenix::ref;
        using phoenix::count;
        using phoenix::construct;

        // macros
        this->self.add_pattern
            ("EXP",     "(e|E)(\\+|-)?\\d+")
            ("SUFFIX",  "[yzafpnumkKMGTPEZY]")
            ("INTEGER", "-?\\d+")
            ("FLOAT",   "-?(((\\d+)|(\\d*\\.\\d+)|(\\d+\\.\\d*))({EXP}|{SUFFIX})?)")
            ("SYMBOL",  "[a-zA-Z_?#](\\w|\\?|#)*")
            ("STRING",  "\\\"([^\\\"]|\\\\\\\")*\\\"");

        // whitespaces and comments
        whitespaces_ = "\\s+";
        comments_    = "(;[^\\n]*\\n)|(\\/\\*[^*]*\\*+([^/*][^*]*\\*+)*\\/)";

        // literals
        float_   = "{FLOAT}";
        integer_ = "{INTEGER}";
        string_  = "{STRING}";
        symbol_  = "{SYMBOL}";

        // operators
        plus_       = '+';
        difference_ = '-';
        minus_      = "-({SYMBOL}|\\()";
        // ... more operators

        // whitespace
        this->self += whitespaces_
            [
                ref(lineNo_) += count(construct<string>(_start, _end), '\n'),
                _pass = pass_flags::pass_ignore
            ];

        // a minus between two identifiers, numbers or close-open parens
        // is a binary minus, so add spaces around it
        this->self += token_def<omit>("[)a-zA-Z?_0-9]-[(a-zA-Z?_0-9]")
            [
                unput(_start, _end, *_start + construct<string>(" ") + *(_start + 1) + " " + *(_start + 2)),
                _pass = pass_flags::pass_ignore
            ];

        // operators (except for close-brackets) cannot be followed by a binary minus
        this->self += token_def<omit>("['`.+*<>/!~&|({\\[=,:#](\\s+-\\s*|\\s*-\\s+)")
            [
                unput(_start, _end, *_start + construct<string>("-")),
                _pass = pass_flags::pass_ignore
            ];

        // a minus directly preceding a symbol or an open paren is a unary minus
        this->self += minus_
            [
                unput(_start, _end, construct<string>(_start + 1, _end)),
                _val = construct<string>("-")
            ];

        // literal rules
        this->self += float_ | integer_ | string_ | symbol_;
        // ... other rules
    }

    ~Tokens() {}

    size_t lineNo() { return lineNo_; }

    // ignored tokens
    token_def<omit> whitespaces_, comments_;

    // literal tokens
    token_def<int> integer_;
    token_def<string> float_, symbol_, string_;

    // operator tokens
    token_def<> plus_, difference_, minus_; // minus_ is a unary minus
    // ... other tokens

    // current line number
    size_t lineNo_;
};

} // namespace skill

#endif // LEXER_H
Basically, I defined a binary minus (called difference_ in the code) to be any minus sign that has whitespace on both sides, and used unput to enforce this rule. I also defined a unary minus as a minus sign that directly precedes a symbol or an open paren, and again used unput to ensure this rule is maintained (for numbers, the minus sign is part of the token).
