Integers converted to Strings do not work as expected in Processing

I'm having an issue comparing the result of str() to what I'd expect to be its String form.
Code:
int i = 2;
String r = str(i);
println(r);
if (r == "2") {
  println("String works");
} else println("String doesnt work");
if (i == 2) {
  println("Integer works");
} else println("Integer doesnt work");
Prints:
2
String doesnt work
Integer works
The second if statement is a copy-paste of the first with only the variable and value changed, so there's nothing wrong with my if statement.
The Processing documentation states (about str()):
Converts a value of a primitive data type (boolean, byte, char, int, or float) to its String
representation. For example, converting an integer with str(3) will return the String value of "3",
converting a float with str(-12.6) will return "-12.6", and converting a boolean with str(true) will
return "true".
It also doesn't work with str(2) == "2" or str(i) == "2".
How do I fix this and get it to work (without converting it back to an integer, because that would make my code a bit ugly)?

You should not compare String values using ==. Use the equals() function instead:
if (r.equals("2")) {
From the reference:
To compare the contents of two Strings, use the equals() method, as in if (a.equals(b)), instead of if (a == b). A String is an Object, so comparing them with the == operator only compares whether both Strings are stored in the same memory location. Using the equals() method will ensure that the actual contents are compared. (The troubleshooting reference has a longer explanation.)
More info here: How do I compare strings in Java?
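To see both behaviors side by side, here is a minimal Processing sketch (Processing is Java underneath, so Java's String rules apply):

int i = 2;
String r = str(i);
// == on Strings compares object identity, not contents
println(r == "2");        // false: str() returns a new String object
println(r.equals("2"));   // true: compares the actual characters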

Related

How to convert global enum values to string in Godot?

The "GlobalScope" class defines many fundamental enums like the Error enum.
I'm trying to produce meaningful logs when an error occurs. However, printing a value of type Error only prints the integer, which is not very helpful.
The Godot documentation on enums indicates that looking up the value should work in a dictionary-like fashion. However, trying to access Error[error_value] errors with:
The identifier "Error" isn't declared in the current scope.
How can I convert such enum values to string?
In the documentation you referenced, it explains that enums basically just create a bunch of constants:
enum {TILE_BRICK, TILE_FLOOR, TILE_SPIKE, TILE_TELEPORT}
# Is the same as:
const TILE_BRICK = 0
const TILE_FLOOR = 1
const TILE_SPIKE = 2
const TILE_TELEPORT = 3
However, the names of the identifiers of these constants only exist to make the code easier for humans to read. They are replaced at runtime with something the machine can use, and are inaccessible later. If I want to print an identifier's name, I have to do so manually:
# Manually print TILE_FLOOR's name as a string, then its value.
print("The value of TILE_FLOOR is ", TILE_FLOOR)
So if your goal is to have descriptive error output, you should do so in a similar way, perhaps like so:
if unexpected_bug_found:
    # Manually print the error description, then actually return the value.
    print("ERR_BUG: There was an unexpected bug!")
    return ERR_BUG
Now, the relationship with dictionaries is that dictionaries can be made to act like enumerations, not the other way around. Enumerations are limited to being a list of identifiers with integer assignments, which dictionaries can do too. But dictionaries can also do other cool things, like have identifiers that are strings, which I believe you may have been thinking of:
const MyDict = {
    NORMAL_KEY = 0,
    'STRING_KEY' : 1, # uses a colon instead of an equals sign
}

func _ready():
    print("MyDict.NORMAL_KEY is ", MyDict.NORMAL_KEY)       # valid
    print("MyDict.STRING_KEY is ", MyDict.STRING_KEY)       # valid
    print("MyDict[NORMAL_KEY] is ", MyDict[NORMAL_KEY])     # INVALID
    print("MyDict['STRING_KEY'] is ", MyDict['STRING_KEY']) # valid
    # Dictionary['KEY'] only works if the key is a string.
This is useful in its own way, but even in this scenario, we assume we already have the string matching the identifier name in hand, meaning we may as well print that string manually, as in the first example.
The naive approach I used, in a singleton (in fact, in a file that contains a lot of static funcs, referenced by a class_name):
static func get_error(global_error_constant:int) -> String:
    var info := Engine.get_version_info()
    var version := "%s.%s" % [info.major, info.minor]
    var default := ["OK","FAILED","ERR_UNAVAILABLE","ERR_UNCONFIGURED","ERR_UNAUTHORIZED","ERR_PARAMETER_RANGE_ERROR","ERR_OUT_OF_MEMORY","ERR_FILE_NOT_FOUND","ERR_FILE_BAD_DRIVE","ERR_FILE_BAD_PATH","ERR_FILE_NO_PERMISSION","ERR_FILE_ALREADY_IN_USE","ERR_FILE_CANT_OPEN","ERR_FILE_CANT_WRITE","ERR_FILE_CANT_READ","ERR_FILE_UNRECOGNIZED","ERR_FILE_CORRUPT","ERR_FILE_MISSING_DEPENDENCIES","ERR_FILE_EOF","ERR_CANT_OPEN","ERR_CANT_CREATE","ERR_QUERY_FAILED","ERR_ALREADY_IN_USE","ERR_LOCKED","ERR_TIMEOUT","ERR_CANT_CONNECT","ERR_CANT_RESOLVE","ERR_CONNECTION_ERROR","ERR_CANT_ACQUIRE_RESOURCE","ERR_CANT_FORK","ERR_INVALID_DATA","ERR_INVALID_PARAMETER","ERR_ALREADY_EXISTS","ERR_DOES_NOT_EXIST","ERR_DATABASE_CANT_READ","ERR_DATABASE_CANT_WRITE","ERR_COMPILATION_FAILED","ERR_METHOD_NOT_FOUND","ERR_LINK_FAILED","ERR_SCRIPT_FAILED","ERR_CYCLIC_LINK","ERR_INVALID_DECLARATION","ERR_DUPLICATE_SYMBOL","ERR_PARSE_ERROR","ERR_BUSY","ERR_SKIP","ERR_HELP","ERR_BUG","ERR_PRINTER_ON_FIRE"]
    match version:
        "3.4":
            return default[global_error_constant]
    # Regexp to use on the #GlobalScope documentation:
    # \s+=\s+.+ replace by nothing
    # (\w+)\s+ replace by "$1", (with quotes and comma)
    printerr("you must check and add %s version in get_error()" % version)
    return default[global_error_constant]
So print(MyClass.get_error(err)) or assert(!err, MyClass.get_error(err)) is handy.
For non-globals I made the following; though it was not your question, it is highly related.
It would be useful to be able to access #GlobalScope and #GDScript the same way; maybe they are left out due to a memory cost?
static func get_enum_flags(_class:String, _enum:String, flags:int) -> PoolStringArray:
    var ret := PoolStringArray()
    var enum_flags := ClassDB.class_get_enum_constants(_class, _enum)
    for i in enum_flags.size():
        if (1 << i) & flags:
            ret.append(enum_flags[i])
    return ret

static func get_constant_or_enum(_class:String, number:int, _enum:="") -> String:
    if _enum:
        return ClassDB.class_get_enum_constants(_class, _enum)[number]
    return ClassDB.class_get_integer_constant_list(_class)[number]
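The bit test in get_enum_flags is language-agnostic. Here is a minimal Java sketch of the same idea, under the assumption that names[i] holds the name of the flag with value 1 << i (the names array and the decode method are hypothetical, for illustration only):

import java.util.ArrayList;
import java.util.List;

class FlagNames {
    // Collect the names of all flags set in 'flags', assuming names[i] maps to bit i.
    static List<String> decode(String[] names, int flags) {
        List<String> set = new ArrayList<>();
        for (int i = 0; i < names.length; i++) {
            if (((1 << i) & flags) != 0) {
                set.add(names[i]);
            }
        }
        return set;
    }
}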

Avoid counting values of Ints with a for loop in Kotlin

I have a list of A class objects
data class A(
    val abc: Abc,
    val values: Int?
)

val list: List<A> = listOf(/* ... */)
If I want to count how many objects I have in the list, I use:
val count = list.count()
or
val count = list.count { /* some predicate */ }
How can I append all the values in the list of A objects while avoiding a for loop? Generally, I'm looking for proper Kotlin syntax that avoids the code below:
if (a != null) {
    for (i in list) {
        counter += i.values!!
    }
}
Either use sumBy or sum in case you have a list of non-nullable numbers already available, i.e.:
val counter = list.sumBy { it.values ?: 0 }
// or
val counter = extractedNonNullValues.sum()
The latter only makes sense if you already mapped your A.values before to a list of non-nullable values, e.g. something like:
val extractedNonNullValues = list.mapNotNull { it.values } // set somewhere else before, because you needed it...
If you do not need such an intermediate extractedNonNullValues-list then just go for the sumBy-variant.
I don't see you doing any appending to a list in the question. Based on your for loop, I believe what you meant was "How do I sum properties of objects in my list?". If that's the case, you can use sumBy, the extension function on List that takes a lambda ((T) -> Int) and returns an Int, like so:
val sum = list.sumBy { a -> a.values ?: 0 }
Also, calling an Int property values is pretty confusing; I think it should be called value. The plural indicates a list...
On another note, there is a possible NPE in your original for loop. Avoid using !! on nullable values: if the value is null, you will get an NPE. Instead, use the null coalescing (aka Elvis) operator ?: to fall back to a default value; this is perfectly acceptable in a sum function. If the iteration is not to do with summing, you may need to handle the null case differently.

Stop Rounding with NSExpression in Calculator [duplicate]

I want to calculate a string, which I'm doing like this:
NSExpression *expression = [NSExpression expressionWithFormat:calculationString];
float result = [[expression expressionValueWithObject:nil context:nil] floatValue];
NSLog(@"%f", result);
The problem is that when calculationString is 1/2, the result is 0. I tried replacing float with double and NSNumber, and the %f with %@, but I always just get 0. What do I have to change?
Also, if it matters: I am in Europe, so I have commas instead of points as decimal separators, but that shouldn't matter since I am logging with %f, which shows points. Just for information.
Basically, you just need to tell it that you are performing a floating point operation:
1.0/2
1.0/2.0
1/2.0
will all work.
Typing in NSExpression is much like in C: literals that look like integers (no decimal point/comma) are treated as integers and thus use integer division. (Under integer division, 1/2 is zero. If you want 0.5, you need floating point division.) This happens when the expression is parsed and evaluated, so attempting to change the type of the result or the formatting of the output has no effect -- those things happen after parsing and evaluation.
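The same literal rules apply in C and Java; a quick Java illustration of the division behavior (this only shows the shared rule, not NSExpression itself):

int a = 1 / 2;       // 0: both operands are integers, so the division truncates
double b = 1.0 / 2;  // 0.5: one floating-point operand promotes the whole division
System.out.println(a + " " + b); // prints "0 0.5"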
If your calculationString is entirely under your control, it's easy to make sure that you use floating point literals anywhere you want floating point division. (That is, use 1.0/2 instead of 1/2.) If not, you'll need to change it such that it does -- here it's probably better to decompose the parsed NSExpression and change an operand rather than munge the string.
Followup edit on the "decompose" bit: string munging in content that you know to have higher-order structure is generally problematic. And with NSExpression, you already have a parser (which is smarter than a simple regex) decomposing the string for you; that is, in fact, what NSExpression is all about.
So, if you're working with a user-provided string, don't try to change the expression by changing the string. Let NSExpression parse it, then use properties of the resulting object to pick it apart into its constituent expressions. If your string is simply "1/2", then your expression has an array of two arguments and the function "divide:by:" — you can replace it with an equivalent function where one of the arguments is explicitly a floating-point value:
extension NSExpression {
    var floatifiedForDivisionIfNeeded: NSExpression {
        if function == "divide:by:", let args = arguments, let last = args.last,
           let firstValue = args.first?.constantValue as? NSNumber {
            let newFirst = NSExpression(forConstantValue: firstValue.doubleValue)
            return NSExpression(forFunction: function, arguments: [newFirst, last])
        } else {
            return self
        }
    }
}
I think you need to use DDMathParser, which is best in this situation. I have used it in one of my projects, which faced the same problem you have.
DDMathEvaluator *eval = [DDMathEvaluator defaultMathEvaluator];
id value = [eval evaluateString:@"1/2" withSubstitutions:nil error:&error];
NSLog(@"Result %@", value);
Result 0.5
Rickster's solution worked, but had problems with expressions like 5*5/2, where the first argument (here 5*5) was not just a number.
I found a different solution here that works for me: https://stackoverflow.com/a/46554342/6385925
For people who still have this problem, I did a somewhat quick fix:
extension String {
    var mathExpression: String {
        var returnValue = ""
        // Split on spaces, pass operators through, and force every number to be a Double
        for value in self.components(separatedBy: " ") {
            if value.isOperator {
                returnValue += value
            } else {
                returnValue += "\(Double(value) ?? 0)"
            }
        }
        return returnValue
    }

    var isOperator: Bool {
        ["+", "-", "/", "x", "*"].contains(self)
    }
}

Incomparable types: int and Number in Java 8

Suppose I have the following code:
class proba {
    boolean fun(Number n) {
        return n == null || 0 == n;
    }
}
This compiles without problem using openjdk 7 (debian wheezy), but fails to compile when using openjdk 8, with the following error (even when using -source 7):
proba.java:3: error: incomparable types: int and Number
return n == null || 0 == n;
^
1 error
How do I get around this?
Is there a compiler option to make this construct continue working in Java 8?
Should I write lots of consecutive ifs with instanceof checks for all of Number's subclasses, casting and comparing one by one? That seems ugly...
Other suggestions?
This is actually a bugfix (see JDK-8013357): the Java-7 behavior contradicted the JLS §15.21:
The equality operators may be used to compare two operands that are convertible (§5.1.8) to numeric type, or two operands of type boolean or Boolean, or two operands that are each of either reference type or the null type. All other cases result in a compile-time error.
In your case one operand is of numeric type, while the other is of a reference type (Number is not convertible to a numeric type), so it should be a compile-time error, according to the specification.
This change is mentioned in Compatibility Guide for Java 8 (search for "primitive").
Note that while your code compiles in Java 7, it works somewhat strangely:
System.out.println(new proba().fun(0)); // compiles, prints true
System.out.println(new proba().fun(0.0)); // compiles, prints false
System.out.println(new proba().fun(new Integer(0))); // compiles, prints false
That's because Java 7 promotes 0 to an Integer object (via autoboxing), then compares the two objects by reference, which is unlikely to be what you want.
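A quick demonstration of why comparing boxed values by reference is treacherous: small values share the Integer cache, so some == comparisons are accidentally true (standard Java behavior, shown here for illustration):

Integer a = 127, b = 127;
System.out.println(a == b); // true: both come from the shared Integer cache
Integer c = 128, d = 128;
System.out.println(c == d); // false: distinct objects outside the cached range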
To fix your code, you may convert Number to some predefined primitive type like double:
boolean fun(Number n) {
    return n == null || 0 == n.doubleValue();
}
If you want to compare a Number and an int, call Number.intValue() and then compare.
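For example, the intValue() variant of the fix might look like this (note that intValue() truncates, so fun(0.5) would also report zero):

boolean fun(Number n) {
    // intValue() unboxes to a primitive int, so == is a numeric comparison again
    return n == null || 0 == n.intValue();
}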

Visual Studio C++ 2008 Manipulating Bytes?

I'm trying to write strictly binary data to files (no encoding). The problem is, when I hex dump the files, I notice rather weird behavior. Using either one of the methods below to construct a file results in the same behavior. I even tested with System::Text::Encoding::Default for the streams as well.
StreamWriter^ binWriter = gcnew StreamWriter(gcnew FileStream("test.bin",FileMode::Create));
(Also used this method)
FileStream^ tempBin = gcnew FileStream("test.bin",FileMode::Create);
BinaryWriter^ binWriter = gcnew BinaryWriter(tempBin);
binWriter->Write(0x80);
binWriter->Write(0x81);
.
.
binWriter->Write(0x8F);
binWriter->Write(0x90);
binWriter->Write(0x91);
.
.
binWriter->Write(0x9F);
Writing that sequence of bytes, I noticed the only bytes that weren't converted to 0x3F in the hex dump were 0x81, 0x8D, 0x90, 0x9D, ... and I have no idea why.
I also tried making character arrays, and a similar situation happens. i.e.,
array<wchar_t,1>^ OT_Random_Delta_Limits = {0x00,0x00,0x03,0x79,0x00,0x00,0x04,0x88};
binWriter->Write(OT_Random_Delta_Limits);
0x88 would be written as 0x3F.
If you want to stick to binary files, don't use StreamWriter. Just use a FileStream and Write/WriteByte. StreamWriters (and TextWriters in general) are expressly designed for text. Whether you want an encoding or not, one will be applied, because when you're calling StreamWriter.Write, that's writing a char, not a byte.
Don't create arrays of wchar_t values either - again, those are for characters, i.e. text.
BinaryWriter.Write should have worked for you unless it was promoting the values to char, in which case you'd have exactly the same problem.
By the way, without specifying any encoding, I'd expect you to get non-0x3F values, but instead the bytes representing the UTF-8 encoded values for those characters.
When you specified Encoding.Default, you'd have seen 0x3F for any Unicode values not in that encoding.
Anyway, the basic lesson is to stick to Stream when you want to deal with binary data rather than text.
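The same split exists outside .NET. For comparison, a minimal Java sketch of the byte-for-byte version of the original loop, using a raw byte stream so no encoder is involved:

import java.io.FileOutputStream;
import java.io.IOException;

public class RawBytes {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream out = new FileOutputStream("test.bin")) {
            for (int b = 0x80; b <= 0x9F; b++) {
                out.write(b); // writes the low 8 bits verbatim; no text encoding applied
            }
        }
    }
}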
EDIT: Okay, it would be something like:
public static void ConvertHex(TextReader input, Stream output)
{
    while (true)
    {
        int firstNybble = input.Read();
        if (firstNybble == -1)
        {
            return;
        }
        int secondNybble = input.Read();
        if (secondNybble == -1)
        {
            throw new IOException("Reader finished half way through a byte");
        }
        int value = (ParseNybble(firstNybble) << 4) + ParseNybble(secondNybble);
        output.WriteByte((byte) value);
    }
}

// value would actually be a char, but as we've got an int in the above code,
// it just makes things a bit easier
private static int ParseNybble(int value)
{
    if (value >= '0' && value <= '9') return value - '0';
    if (value >= 'A' && value <= 'F') return value - 'A' + 10;
    if (value >= 'a' && value <= 'f') return value - 'a' + 10;
    throw new ArgumentException("Invalid nybble: " + (char) value);
}
This is very inefficient in terms of buffering etc, but should get you started.
A BinaryWriter() class initialized with a stream will use a default encoding of UTF8 for any chars or strings that are written. I'm guessing that the
binWriter->Write(0x80);
binWriter->Write(0x81);
.
.
binWriter->Write(0x8F);
binWriter->Write(0x90);
binWriter->Write(0x91);
calls are binding to the Write(char) overload, so they're going through the character encoder. I'm not very familiar with C++/CLI, but it seems to me that these calls should bind to Write(Int32), which shouldn't have this problem. (Maybe your code is really calling Write() with a char variable that's set to the values in your example; that would account for this behavior.)
0x3F is commonly known as the ASCII character '?'; the characters that are mapping to it are control characters with no printable representation. As Jon points out, use a binary stream rather than a text-oriented output mechanism for raw binary data.
EDIT -- actually your results look like the inverse of what I would expect. In the default code page 1252, the non-printable characters (i.e. ones likely to map to '?') in that range are 0x81, 0x8D, 0x8F, 0x90 and 0x9D
