Mocking new enum value to test default case in switch - enums

I'm trying to test a default case in a switch that checks an enum. I've seen a few posts and got to this solution:
int nValues = EnumType.values().length;
try (MockedStatic<EnumType> mocked = mockStatic(EnumType.class)) {
    val UNSUPPORTED = mock(EnumType.class);
    doReturn(nValues).when(UNSUPPORTED).ordinal();
    doReturn(nValues).when(UNSUPPORTED).getValue();
    doReturn("UNSUPPORTED").when(UNSUPPORTED).name();
    mocked.when(EnumType::values).thenReturn(new EnumType[] {
        EnumType.A,
        EnumType.B,
        EnumType.C,
        EnumType.D,
        UNSUPPORTED});
    assertThatThrownBy(() -> mapper.callSwitch(UNSUPPORTED))
        .isInstanceOf(CustomException.class);
}
However, this gives me the following error on the switch statement:
java.lang.ArrayIndexOutOfBoundsException: Index 4 out of bounds for length 4
One of the comments on this answer https://stackoverflow.com/a/7233572/2696646 seems to describe a solution to my problem, but as far as I can see I did exactly that.
Here's my enum and switch:
public enum EnumType {
    A(0),
    B(1),
    C(2),
    D(3);

    private final int value;

    EnumType(int value) { this.value = value; }

    public int getValue() { return value; }
}
switch (status) {
    case A:
        return "aaaa";
    case B:
        return "bbbb";
    case C:
        return "cccc";
    case D:
        return "dddd";
    default:
        // throw some custom exception
}
Can anyone explain to me what I'm doing wrong?
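For reference, javac compiles a switch over an enum into a lookup on a synthetic int array indexed by ordinal(); if that array was built from the four real constants (for example because the synthetic holder class was initialized before the static mock took effect), then a mocked ordinal() of 4 runs off the end of a length-4 array, which matches the error above. A rough sketch of that lowering, with illustrative names rather than the actual generated code:

class EnumSwitchSketch {
    // sized from EnumType.values().length; with the real constants that is 4
    static final int[] SWITCH_MAP = new int[EnumType.values().length];
    static {
        SWITCH_MAP[EnumType.A.ordinal()] = 1;
        SWITCH_MAP[EnumType.B.ordinal()] = 2;
        SWITCH_MAP[EnumType.C.ordinal()] = 3;
        SWITCH_MAP[EnumType.D.ordinal()] = 4;
    }

    static String callSwitch(EnumType status) {
        // a mock whose ordinal() returns 4 indexes past the end of the array,
        // so ArrayIndexOutOfBoundsException is thrown before any case or the
        // default branch is reached
        switch (SWITCH_MAP[status.ordinal()]) {
            case 1: return "aaaa";
            case 2: return "bbbb";
            case 3: return "cccc";
            case 4: return "dddd";
            default: throw new IllegalStateException("unsupported: " + status.name());
        }
    }
}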

Related

Compiler error in Java 8, no error in Java 6, why?

I have this code:
import java.util.ArrayList;
import java.util.List;

import com.google.common.base.Function;
import com.google.common.collect.FluentIterable;

class B { }
class C extends B { }
class D { }

class Test {
    void test() {
        List<B> lb = new ArrayList<B>();
        List<C> lc = new ArrayList<C>();

        Iterable<B> it = lb == null ? lc : lb; // a

        FluentIterable.from(lb == null ? lc : lb).transform(new Function<B, D>() { // b
            @Override
            public D apply(B b) {
                return null;
            }
        });
    }
}
Under Java 8 line //b gives me these compiler errors:
Incompatible types. Found: 'java.util.List<C>', required: 'java.util.List<capture<? extends B>>'
Incompatible types. Found: 'java.util.List<B>', required: 'java.util.List<capture<? extends B>>'
Under Java 6 the same line compiles fine.
Line //a
Iterable<B> it = lb == null ? lc : lb;
produces compile error
Incompatible types. Found: 'java.util.List<C>', required: 'java.lang.Iterable<B>'
under both Java 6 and Java 8, which is correct.
But Guava's FluentIterable.from is just a wrapper around an Iterable. Why does it produce no error under Java 6 but errors under Java 8? How does it differ from what I have at line //a?
Thank you.
TL;DR: The language specification for type inference was changed in Java 8.
Line //a is a compile-time error because a List<C> is not assignable to a variable of type Iterable<B>. If it were, then you could compile the following unsound code:
List<C> listOfC = new ArrayList<C>();
Iterable<B> listOfB = listOfC; // if this were allowed it would be bad
listOfB.add(new B()); // ...because now my list of C's has a B in it
C reallyAB = listOfC.get(0); // ...and now I have an object of type B in a variable of type C
reallyAB.methodThatsDefinedOnAC(); // ...and crash
Detecting the unsound assignment is straightforward for the compiler, because the variable on line //a has an explicit type.
Line //b is more complicated because the method has a generic type parameter, so the compiler must infer the type for the method at the same time as it checks the arguments. The rules surrounding how type parameters are inferred changed significantly with the release of Java 8, which introduced lambdas into the language and aimed to improve type inference in most cases.
https://docs.oracle.com/javase/specs/jls/se8/html/jls-18.html
A detailed analysis of why ternary operators now fail to satisfy generic type constraints was provided here: Generics compilation error with ternary operator in Java 8, but not in Java 7
The incompatibility between the specifications is recorded in this JDK bug: https://bugs.openjdk.java.net/browse/JDK-8044053
You can overcome the limitation in the rules by providing an explicit type: either assign the expression's result to an explicitly typed variable and pass that variable into the invocation, or cast the expression (both variants are shown below).
void test() {
    List<B> lb = new ArrayList<B>();
    List<C> lc = new ArrayList<C>();

    FluentIterable.from((List<? extends B>) (lb == null ? lc : lb)).transform(new Function<B, D>() { // b
        @Override
        public D apply(B b) {
            return null;
        }
    });
}
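The local-variable variant is equivalent; a sketch of it (the wildcard type gives the conditional expression an explicit target that both branches satisfy, so no inference is needed at the call site):

void test() {
    List<B> lb = new ArrayList<B>();
    List<C> lc = new ArrayList<C>();

    // both List<B> and List<C> are assignable to Iterable<? extends B>
    Iterable<? extends B> source = lb == null ? lc : lb;

    FluentIterable.from(source).transform(new Function<B, D>() {
        @Override
        public D apply(B b) {
            return null;
        }
    });
}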

Decompiling a bin file of protobuf-net

I have a serialized bin file of protobufs, written mainly with protobuf-net.
I want to decompile it and see its structure.
I used some tools like:
https://protogen.marcgravell.com/decode
and I also used protoc:
protoc --decode_raw < ~/Downloads/file.bin
and this is part of the result I get:
1 {
  1: "4f81b7bb-d8bd-e911-9c1f-06ec640006bb"
  2: 0x404105b1663ef93a
  3: 0x4049c6158c593f36
  4: 0x40400000
  5 {
    1: "53f8afde-04c6-e811-910e-4622e9d1766e"
    2 {
      1: "e993fba0-8fc9-e811-9c15-06ec640006bb"
    }
    2 {
      1: "9a7c7210-3aca-e811-9c15-06ec640006bb"
      2: 1
    }
    2 {
      1: "2d7d12f1-2bc9-e811-9c15-06ec640006bb"
    }
    3: 18446744073709551615
  }
  6: 46
  7: 1571059279000
}
How can I decompile it? I want to understand the structure, change data in it, and make a new bin file.
Reverse-engineering a .proto file is mostly a case of looking at the output of tools such as the ones you've mentioned, and trying to write a .proto that looks similar. Unfortunately, a number of concepts are ambiguous if you don't know the schema, as multiple different data types and shapes share the same encoding details, but... we can make guesses.
Looking at your output:
1 {
...
}
tells us that our root message probably has a sub-message at field 1; so:
message Root {
    repeated Foo Foos = 1;
}
(I'm guessing at the repeated here; if the 1 only appears once, it could be single)
with everything at the next level being our Foo.
1: "4f81b7bb-d8bd-e911-9c1f-06ec640006bb"
2: 0x404105b1663ef93a
3: 0x4049c6158c593f36
4: 0x40400000
5: { ... }
6: 46,
7: 1571059279000
This looks like it could be:
message Foo {
    string A = 1;
    sfixed64 B = 2;
    sfixed64 C = 3;
    sfixed32 D = 4;
    repeated Bar E = 5; // again, might not be "repeated" - see how many times it occurs
    int64 F = 6;
    int64 G = 7;
}
However, those sfixed64 fields could be double or fixed64, and those sfixed32 could be fixed32 or float; likewise, the int64 could be sint64 or uint64 - or int32, sint32, uint32 or bool - and I wouldn't be able to tell (they are all just "varint"). Each option gives a different meaning to the value!
Our Bar definitely has some kind of repeated field, because of all the 2s:
1: "53f8afde-04c6-e811-910e-4622e9d1766e"
2 { ... }
2 { ... }
2 { ... }
3: 18446744073709551615
Let's guess at:
message Bar {
    string A = 1;
    repeated Blap B = 2;
    int64 C = 3;
}
And finally, looking at the 2s from the previous bit, we have:
1: "e993fba0-8fc9-e811-9c15-06ec640006bb"
and
1: "9a7c7210-3aca-e811-9c15-06ec640006bb"
2: 1
and
1: "2d7d12f1-2bc9-e811-9c15-06ec640006bb"
So combining those, we might guess:
message Blap {
    string A = 1;
    int64 B = 2;
}
Depending on whether you have more data, there may be additional fields, or you may be able to infer more context. For example, if an int64 value such as Blap.B is always 1 or omitted, it might actually be a bool. If one of the repeated elements always has at most one value, it might not be repeated.
The trick is to play with it until you can deserialize the data, re-serialize it, and get the exact same payload (i.e. round-trip).
Once you have that, you'll want to deserialize it, mutate the thing you want to change, and serialize again.
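The question is about protobuf-net, where the same round trip is done with Serializer.Deserialize and Serializer.Serialize, but to keep the examples in one language here is a minimal sketch using the protobuf Java runtime, assuming the guessed schema above has been compiled with protoc --java_out into a generated Root class (Root, Foos, and F below are all guesses taken from the schema sketched above):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class RoundTripCheck {
    public static void main(String[] args) throws Exception {
        byte[] original = Files.readAllBytes(Paths.get("file.bin"));

        // parse with the guessed schema
        Root parsed = Root.parseFrom(original);

        // re-serialize unchanged and compare: if the bytes match, the guessed
        // schema round-trips and is at least wire-compatible with the data
        byte[] reencoded = parsed.toByteArray();
        System.out.println("round-trips: " + Arrays.equals(original, reencoded));

        // once that works, mutate via the generated builders and write a new file
        Root.Builder builder = parsed.toBuilder();
        builder.getFoosBuilder(0).setF(47);
        Files.write(Paths.get("file-modified.bin"), builder.build().toByteArray());
    }
}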

How to prevent unwanted variable assignment inside condition statement?

A missing equals sign (typing = instead of ==) turns a comparison into an unwanted assignment inside a condition statement.
For example, consider the scenario below (this example is in C, but the question also applies to interpreted code).
CASE A:
int g = 1;
if ( g == 3 )
{
    printf("g is 3");
}
else
{
    printf("g is not 3");
}
// this prints: "g is not 3"
CASE B: (typo: missing = inside condition)
int g = 1;
if ( g = 3 )
{
    printf("g is 3");
}
else
{
    printf("g is not 3");
}
// this prints: "g is 3" because the assignment evaluates to 3, which is non-zero (true)
Both cases are formally correct, so the code compiles and runs, but not as we want, and the mistake can be hard to debug.
How can this situation be prevented? Is there a solution that also covers interpreted code (for example JavaScript), apart from static analyzers?
The thing is, using an assignment inside the condition of an if, while, or for is perfectly valid C and is very often used intentionally. For example, I often find myself using the following skeleton code to create a window when writing a Win32 API GUI:
if((hWnd = CreateWindowExW(...)) == NULL)
{
    MessageBoxW(NULL, L"Window creation failed", L"Error", MB_OK | MB_ICONSTOP);
    return GetLastError();
}
If the test is solely for equality and you want to avoid using the = operator accidentally, one thing you can do is get into the habit of putting the r-value on the left side of the operator, so that if you accidentally use =, it will produce a compilation error:
char *p = malloc(100000);
if(NULL == p)
{
    // handle null pointer
}
Obviously, this only works if at least one side of the comparison is an r-value or a const variable.

C++: std::bind -> std::function

I have several functions which receive the following type:
function<double(int,int,array2D<vector<double *>>*)>
Where array2D is a custom type. Further, I have a function with the following signature:
double ising_step_distribution(double temp,int i,int j,array2D<vector<double *>>* model)
Right now, in order to bind the first value, temp, and return a functor which has the correct signature, I am writing:
double temp = some_value;
function<double(int,int,array2D<vector<double *>>*)> step_func =
    [temp](int i, int j, array2D<vector<double *>>* model) {
        return ising_step_distribution(temp, i, j, model);
    };
And this works. However, the following breaks:
auto step_func =
    [temp](int i, int j, array2D<vector<double *>>* model) {
        return ising_step_distribution(temp, i, j, model);
    };
With the following error:
candidate template ignored:
could not match
'function<double (int, int, array2D<vector<type-parameter-0-0 *, allocator<type-parameter-0-0 *> > > *)>'
against
'(lambda at /Users/cdonlan/home/mcmc/main.cpp:200:25)'
void mix_2D_model(function<double(int,int,array2D<vector<T*>>*)> step_distribution_func,...
And so the code is ugly, obfuscated, and repetitive (because I am writing many of these).
I have been reading the documentation, and I understand that I should be able to write:
function<double(int,int,array2D<vector<double *>>*)> step_func =
    bind(ising_step_distribution, temp, _1, _2, _3);
However, the only examples I have seen are for functions of type function<void()>. This one fails with an error:
// cannot cast a bind of type
// double(&)(double,int,int,array2D<vector<double *>>*)
// as function<double(int,int,...)
How do I get a visually clean bind and cast?
How do I get a visually clean bind and cast?
One way is:
using F = function<double(int,int,array2D<vector<double *>>*)>;
auto step_func =
    [temp](int i, int j, array2D<vector<double *>>* model) {
        return ising_step_distribution(temp, i, j, model);
    };
And then:
auto step_func_2 = F(step_func);
mix_2D_model(step_func_2, ...);
Or:
mix_2D_model(F(step_func), ...);

Why does Newtonsoft Replace function only replace the value if it is changed?

Take the following code:
JProperty toke = new JProperty("value", new JValue(50)); //toke.Value is 50
toke.Value.Replace(new JValue(20)); //toke.Value is 20
This works as expected. Now examine the following code:
JValue val0 = new JValue(50);
JProperty toke = new JProperty("value", val0); //toke.Value is 50
JValue val1 = new JValue(20);
toke.Value.Replace(val1); //toke.Value is 20
This also works as expected, but there is an important detail: val0 is no longer part of toke's JSON tree, and val1 is; this means that val0 has no valid parent, while val1 does.
Now take this code:
JValue val0 = new JValue(50);
JProperty toke = new JProperty("value", val0); //toke.Value is 50
JValue val1 = new JValue(50);
toke.Value.Replace(val1); //toke.Value is 50
The behavior is different; val0 is still part of toke's JSON tree, and val1 is not. Now val0 has a valid parent, while val1 does not.
This is a critical distinction: if you are using Newtonsoft JSON trees to represent a structure and storing JTokens as references into the tree, the way the references are structured can change based on the value being replaced, which seems incorrect.
Is there any flaw in my reasoning? Or is the behavior incorrect, as I believe it is?
I think you have a valid point: Replace should replace the token instance and set the parent properly even if the tokens have the same values.
This works as you would expect if the property value is a JObject and you replace it with an identical JObject:
JObject obj1 = JObject.Parse(@"{ ""foo"" : 1 }");
JProperty prop = new JProperty("bar", obj1);
JObject obj2 = JObject.Parse(@"{ ""foo"" : 1 }");
prop.Value.Replace(obj2);

Console.WriteLine("obj1 parent is " +
    (ReferenceEquals(obj1.Parent, prop) ? "prop" : "not prop")); // "not prop"
Console.WriteLine("obj2 parent is " +
    (ReferenceEquals(obj2.Parent, prop) ? "prop" : "not prop")); // "prop"
However, the code seems to have been deliberately written to work differently for JValues. In the source code we see that JToken.Replace() calls JContainer.ReplaceItem(), which in turn calls SetItem(). In the JProperty class, SetItem() is implemented like this:
internal override void SetItem(int index, JToken item)
{
    if (index != 0)
    {
        throw new ArgumentOutOfRangeException();
    }

    if (IsTokenUnchanged(Value, item))
    {
        return;
    }

    if (Parent != null)
    {
        ((JObject)Parent).InternalPropertyChanging(this);
    }

    base.SetItem(0, item);

    if (Parent != null)
    {
        ((JObject)Parent).InternalPropertyChanged(this);
    }
}
You can see that it checks whether the value is "unchanged", and if so, it returns without doing anything. If we look at the implementation of IsTokenUnchanged() we see this:
internal static bool IsTokenUnchanged(JToken currentValue, JToken newValue)
{
    JValue v1 = currentValue as JValue;
    if (v1 != null)
    {
        // null will get turned into a JValue of type null
        if (v1.Type == JTokenType.Null && newValue == null)
        {
            return true;
        }

        return v1.Equals(newValue);
    }

    return false;
}
So, if the current token is a JValue, it checks whether it Equals the other token; otherwise the token is automatically considered to have changed. And Equals for a JValue is of course based on whether the underlying primitive values themselves are equal.
I cannot speak to the reasoning behind this implementation decision, but it seems to be worth reporting an issue to the author. The "correct" fix, I think, would be to make SetItem use ReferenceEquals(Value, item) instead of IsTokenUnchanged(Value, item).
