How to refactor cascading if statements - refactoring

I found this question on https://github.com/arialdomartini/Back-End-Developer-Interview-Questions#snippets and I am curious about your opinion: I just can't find a decent way to refactor this, or decide which pattern would apply in this very common case.
function()
{
    HRESULT error = S_OK;

    if (SUCCEEDED(Operation1()))
    {
        if (SUCCEEDED(Operation2()))
        {
            if (SUCCEEDED(Operation3()))
            {
                if (SUCCEEDED(Operation4()))
                {
                }
                else
                {
                    error = OPERATION4FAILED;
                }
            }
            else
            {
                error = OPERATION3FAILED;
            }
        }
        else
        {
            error = OPERATION2FAILED;
        }
    }
    else
    {
        error = OPERATION1FAILED;
    }

    return error;
}
Do you have any idea of how to refactor this?

Actually, I feel there is much more room for refactoring than what Sergio Tulentsev suggested.
The questions in the repo you linked are more about starting a conversation on code than closed-ended questions. So I think it is worth discussing the smells and design flaws of that code, to set up the refactoring goals.
Smells
I see these problems:
The code violates some of the SOLID principles. It surely violates the Open/Closed Principle, as it is not possible to extend it without changing its code; e.g., adding a new operation would require adding a new if/else branch;
It also violates the Single Responsibility Principle: it just does too much. It performs error checks, it is responsible for executing all 4 operations, it contains their implementations, and it is responsible for checking their results and chaining their execution in the right order;
It violates the Dependency Inversion Principle, because there are direct dependencies between high-level and low-level components;
It has a horrible cyclomatic complexity;
It exhibits high coupling and low cohesion, which is exactly the opposite of what is recommended;
It contains a lot of duplication: the call to SUCCEEDED() is repeated in each branch, the if/else structure is replicated over and over, and the assignment of error is duplicated;
It could have a purely functional nature, but it relies instead on state mutation, which makes reasoning about it harder;
There's an empty if body, which might be confusing.
Refactoring
Let's see what could be done.
Here I'm using a C# implementation, but similar steps can be performed in any language.
I renamed some of the elements, as I believe honoring a naming convention is part of the refactoring.
internal class SomeClass
{
    internal string FailWhen { get; set; }

    internal HResult SomeFunction()
    {
        var error = HResult.Ok;

        if (Succeeded(Operation1()))
        {
            if (Succeeded(Operation2()))
            {
                if (Succeeded(Operation3()))
                {
                    if (Succeeded(Operation4()))
                    {
                    }
                    else
                    {
                        error = HResult.Operation4Failed;
                    }
                }
                else
                {
                    error = HResult.Operation3Failed;
                }
            }
            else
            {
                error = HResult.Operation2Failed;
            }
        }
        else
        {
            error = HResult.Operation1Failed;
        }

        return error;
    }

    private string Operation1()
    {
        // some operations
        return "operation1 result";
    }

    private string Operation2()
    {
        // some operations
        return "operation2 result";
    }

    private string Operation3()
    {
        // some operations
        return "operation3 result";
    }

    private string Operation4()
    {
        // some operations
        return "operation4 result";
    }

    private bool Succeeded(string operationResult) =>
        operationResult != FailWhen;
}

internal enum HResult
{
    Ok,
    Operation1Failed,
    Operation2Failed,
    Operation3Failed,
    Operation4Failed,
}
For the sake of simplicity, I assumed each operation returns a string and that success or failure is based on a simple string comparison, but of course it could be anything. In the next steps, it would be nice if the code became independent of the result validation logic.
Step 1
It would be nice to start the refactoring with the support of some test harness.
public class TestCase
{
    [Theory]
    [InlineData("operation1 result", HResult.Operation1Failed)]
    [InlineData("operation2 result", HResult.Operation2Failed)]
    [InlineData("operation3 result", HResult.Operation3Failed)]
    [InlineData("operation4 result", HResult.Operation4Failed)]
    [InlineData("never", HResult.Ok)]
    public void acceptance_test(string failWhen, HResult expectedResult)
    {
        var sut = new SomeClass { FailWhen = failWhen };

        var result = sut.SomeFunction();

        result.Should().Be(expectedResult);
    }
}
Our case is a trivial one, but since the quiz is meant as a job interview question, I would not skip this step.
Step 2
The first refactoring could be getting rid of the mutable state: each if branch could just return the value, instead of mutating the variable error. Also, the name error is misleading, as it includes the success case. Let's just get rid of it:
HResult SomeFunction()
{
    if (Succeeded(Operation1()))
    {
        if (Succeeded(Operation2()))
        {
            if (Succeeded(Operation3()))
            {
                if (Succeeded(Operation4()))
                    return HResult.Ok;
                else
                    return HResult.Operation4Failed;
            }
            else
                return HResult.Operation3Failed;
        }
        else
            return HResult.Operation2Failed;
    }
    else
        return HResult.Operation1Failed;
}
We got rid of the empty if body and, in the process, made the code slightly easier to reason about.
Step 3
If we now invert each if statement (the step suggested by Sergio),
internal HResult SomeFunction()
{
    if (!Succeeded(Operation1()))
        return HResult.Operation1Failed;

    if (!Succeeded(Operation2()))
        return HResult.Operation2Failed;

    if (!Succeeded(Operation3()))
        return HResult.Operation3Failed;

    if (!Succeeded(Operation4()))
        return HResult.Operation4Failed;

    return HResult.Ok;
}
we make it apparent that the code performs a chain of executions: if an operation succeeds, the next operation is invoked; otherwise, the chain is interrupted, with an error. The GOF Chain of Responsibility Pattern comes to mind.
Step 4
We could move each operation to a separate class, and let our function receive a chain of operations to execute in a single shot. Each class would deal with its specific operation logic (honoring the Single Responsibility Principle).
internal HResult SomeFunction()
{
    var operations = new List<IOperation>
    {
        new Operation1(),
        new Operation2(),
        new Operation3(),
        new Operation4()
    };

    foreach (var operation in operations)
    {
        if (!_check.Succeeded(operation.DoJob()))
            return operation.ErrorCode;
    }

    return HResult.Ok;
}
We got rid of all the ifs but one.
Notice how:
The interface IOperation has been introduced (a sketch is shown below), which is a preliminary move to decouple the function from the operations, complying with the Dependency Inversion Principle;
The list of operations can easily be injected into the class, using Dependency Injection;
The result validation logic has been moved to a separate class, SimpleStringCheck, injected into the main class (Dependency Inversion and Single Responsibility are satisfied).
internal class SimpleStringCheck : IResultCheck
{
    private readonly string _failWhen;

    public SimpleStringCheck(string failWhen)
    {
        _failWhen = failWhen;
    }

    public bool Succeeded(string operationResult) =>
        operationResult != _failWhen;
}
We gained the ability to switch the check logic without modifying the main class (Open-Closed Principle).
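For example, a different check honoring the same IResultCheck contract could be plugged in without touching SomeFunction or the operations (the class below is a hypothetical example of mine, not part of the original answer):
internal class NeverFailingCheck : IResultCheck
{
    // Treats every operation result as a success; handy as a null object in tests.
    public bool Succeeded(string operationResult) => true;
}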
Each operation has been moved to a separate class, like:
internal class Operation1 : IOperation
{
    public string DoJob()
    {
        return "operation1 result";
    }

    public HResult ErrorCode => HResult.Operation1Failed;
}
Each operation knows its own error code. The function itself became independent from it.
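The snippets above reference IOperation and IResultCheck without showing them; a minimal sketch consistent with the code so far could be:
internal interface IOperation
{
    string DoJob();
    HResult ErrorCode { get; }
}

internal interface IResultCheck
{
    bool Succeeded(string operationResult);
}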
Step 5
There is something more to refactor in this code:
foreach (var operation in operations)
{
    if (!_check.Succeeded(operation.DoJob()))
        return operation.ErrorCode;
}

return HResult.Ok;
First, it's not clear why return HResult.Ok; is handled as a special case: the chain could simply end with a terminating operation that never fails and returns that value. This would allow us to get rid of that last special case.
Second, our function still has two responsibilities: visiting the chain and checking the results.
An idea could be to encapsulate the operations into a real chain, so our function could reduce to something like:
return operations.ChainTogether(_check).Execute();
We have 2 options:
Each operation knows the next operation, so starting from operation1 we could execute the whole chain with a single call;
Operations are kept unaware of being part of a chain; a separate, encapsulating structure adds to operations the ability to be executed in sequence.
I'm going with the latter, but that's absolutely debatable. I'm introducing a class modelling a ring in a chain, moving the code away from our class:
internal class OperationRing : IRing
{
    private readonly IResultCheck _check;
    private readonly IOperation _operation;

    internal IRing Next { private get; set; }

    public OperationRing(IResultCheck check, IOperation operation)
    {
        _check = check;
        _operation = operation;
    }

    public HResult Execute()
    {
        var operationResult = _operation.DoJob();

        if (_check.Succeeded(operationResult))
            return Next.Execute();

        return _operation.ErrorCode;
    }
}
This class is responsible for executing an operation and handing execution over to the next ring if it succeeds, or for interrupting the chain and returning the right error code if it fails.
The chain will be terminated by a never-failing element:
internal class AlwaysSucceeds : IRing
{
    public HResult Execute() => HResult.Ok;
}
Our original class reduces to:
internal class SomeClass
{
    private readonly IResultCheck _check;
    private readonly List<IOperation> _operations;

    public SomeClass(IResultCheck check, List<IOperation> operations)
    {
        _check = check;
        _operations = operations;
    }

    internal HResult SomeFunction()
    {
        return _operations.ChainTogether(_check).Execute();
    }
}
Here ChainTogether() is implemented as an extension method on List&lt;IOperation&gt;, as I don't believe the chaining logic is the responsibility of our class.
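Neither IRing nor ChainTogether() is shown above; assuming the classes sketched so far, a minimal implementation could look like this (the extension class name and the backwards loop are my own choices, not the original author's):
using System.Collections.Generic;

internal interface IRing
{
    HResult Execute();
}

internal static class OperationChainExtensions
{
    // Builds the chain back to front, so that each ring already knows its successor;
    // the chain is terminated by the never-failing ring.
    internal static IRing ChainTogether(this List<IOperation> operations, IResultCheck check)
    {
        IRing next = new AlwaysSucceeds();

        for (var i = operations.Count - 1; i >= 0; i--)
        {
            next = new OperationRing(check, operations[i]) { Next = next };
        }

        return next;
    }
}
Wiring everything together (for instance in a composition root, or in the acceptance test) might then look like:
var operations = new List<IOperation>
{
    new Operation1(),
    new Operation2(),
    new Operation3(),
    new Operation4()
};

var someClass = new SomeClass(new SimpleStringCheck("operation3 result"), operations);
var result = someClass.SomeFunction();   // HResult.Operation3Failed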
That's not the right answer
It's absolutely debatable whether the responsibilities have been separated into the most appropriate classes. For example:
is chaining the operations a task of our function, or should it directly receive the already chained structure?
why use an enum for the error codes? Type codes like this are a smell; Martin Fowler's "Refactoring: Improving the Design of Existing Code" suggests replacing type codes with polymorphic classes;
how much is too much? Is the resulting design too complex? Does the complexity of the whole application need this level of modularisation?
Therefore, I'm sure there are several other ways to refactor the original function. In a job interview, or in a pair programming session, I would expect a lot of discussion and evaluation to take place.

You could use early returns here.
function() {
    if(!SUCCEEDED(Operation1())) {
        return OPERATION1FAILED;
    }
    if(!SUCCEEDED(Operation2())) {
        return OPERATION2FAILED;
    }
    if(!SUCCEEDED(Operation3())) {
        return OPERATION3FAILED;
    }
    if(!SUCCEEDED(Operation4())) {
        return OPERATION4FAILED;
    }

    // everything succeeded, do your thing
    return S_OK;
}

Related

writing a typesafe visitor with labeled rules

I am migrating my prototype from a listener to a visitor pattern. In the prototype, I have a grammar fragment like this:
thingList: thing+ ;
thing
: A aSpec # aRule
| B bSpec # bRule
;
Moving to a visitor pattern, I am not sure how to write visitThingList. Every visitor returns a specialized subclass of "Node", and I would love to be able to write something like the following, say when a "thingList" somehow cares about the first thing in the list:
visitThingList(cx: ThingListContext): ast.ThingList {
    ...
    const firstThing = super.visit(cx.thing(0));
The problem with this is the typing. Each visit returns a specialized type which is a subclass of ast.Node. Because I am using super.visit, the return value will be the base class of my node tree. However, I know, because I am looking at the grammar and because I wrote both visitARule and visitBRule, that the result of the visit will be of type ast.Thing.
So we make visitThingList express its expectation with a cast:
visitThingList(cx: ThingListContext): ast.ThingList {
    const firstThing = super.visit(cx.thing(0));
    if (!(firstThing instanceof ast.Thing)) {
        throw "no matching visitor for thing";
    }
    // firstThing is now known to be of type ast.Thing
    ...
In much of my translator, type problems with ast nodes are a compile-time issue; I fix them in my editor. In this case, I am producing a more fragile walk, which will only reveal its fragility at runtime, and then only with certain inputs.
I think I could change my grammar to make it possible to encode the type expectations of visitThingList() by creating a visitThing() entry point:
thingList: thing+ ;
thing: aRule | bRule;
aRule: A aSpec;
bRule: B bSpec;
With visitThing() typed to match the expectation:
visitThing(cx: ThingContext): ast.Thing { }

visitThingList(cx: ThingListContext) {
    const firstThing: ast.Thing = this.visitThing(cx.thing(0));
Now visitThingList can call this.visitThing(), and the type enforcement of making sure every rule that a thing matches returns an ast.Thing belongs to visitThing(). If I do create a new rule for thing, the compiler will force me to change the return type of visitThing(), and if I make it return something which is NOT a Thing, visitThingList() will show type errors.
This also seems wrong though, because I don't feel like I should have to change my grammar in order to visit it.
I am new to ANTLR and wondering if there is a better pattern or approach to this.
When I was using the listener pattern, I wrote something like:
enterThing(cx: ThingContext) { }
enterARule(cx : ARuleContext) { }
enterBRule(cx : BRuleContext) { }
Not quite: for a rule with labeled alternatives like thing, the listener will not contain enterThing(...) and exitThing(...) methods. Only the enter... and exit... methods for the labels aRule and bRule (plus those for the rules aSpec and bSpec) will be created.
How would I write the visitor walk without changing the grammar?
I don't understand why you need to change the grammar. When you keep the grammar like you mentioned:
thingList: thing+ ;
thing
: A aSpec # aRule
| B bSpec # bRule
;
then the following visitor could be used (again, there is no visitThing(...) method!):
public class TestVisitor extends TBaseVisitor<Object> {

    @Override
    public Object visitThingList(TParser.ThingListContext ctx) {
        ...
    }

    @Override
    public Object visitARule(TParser.ARuleContext ctx) {
        ...
    }

    @Override
    public Object visitBRule(TParser.BRuleContext ctx) {
        ...
    }

    @Override
    public Object visitASpec(TParser.ASpecContext ctx) {
        ...
    }

    @Override
    public Object visitBSpec(TParser.BSpecContext ctx) {
        ...
    }
}
EDIT
I do not know how, as I iterate over that, to call the correct visitor for each element
You don't need to know. You can simply call the visitor's (super) visit(...) method and the correct method will be invoked:
class TestVisitor extends TBaseVisitor<Object> {

    @Override
    public Object visitThingList(TParser.ThingListContext ctx) {
        for (TParser.ThingContext child : ctx.thing()) {
            super.visit(child);
        }
        return null;
    }

    ...
}
And you don't even need to implement all the methods. The ones you don't implement get a default visitChildren(ctx) implementation, causing (as the name suggests) all child nodes under them to be traversed.
In your case, the following visitor will already cause visitASpec and visitBSpec to be invoked:
class TestVisitor extends TBaseVisitor<Object> {

    @Override
    public Object visitASpec(TParser.ASpecContext ctx) {
        System.out.println("visitASpec");
        return null;
    }

    @Override
    public Object visitBSpec(TParser.BSpecContext ctx) {
        System.out.println("visitBSpec");
        return null;
    }
}
You can test this (in Java) like this:
String source = "... your input here ...";
TLexer lexer = new TLexer(CharStreams.fromString(source));
TParser parser = new TParser(new CommonTokenStream(lexer));
TestVisitor visitor = new TestVisitor();
visitor.visit(parser.thingList());

Querying single database row using rxjava2

I am using rxjava2 for the first time on an Android project, and am doing SQL queries on a background thread.
However, I am having trouble figuring out the best way to do a simple SQL query and handle the case where the record may or may not exist. Here is the code I am using:
public Observable<Record> createRecordObservable(int id) {
    Callable<Record> callback = new Callable<Record>() {
        @Override
        public Record call() throws Exception {
            // do the actual sql stuff, e.g.
            // select * from Record where id = ?
            return record;
        }
    };
    return Observable.fromCallable(callback).subscribeOn(Schedulers.computation());
}
This works well when there is a record present. But when no record matches the id, it is treated like an error. Apparently this is because RxJava 2 doesn't allow the Callable to return null.
Obviously I don't really want this. An error should only mean the database failed or something similar, whereas an empty result is perfectly valid. I read somewhere that one possible solution is wrapping Record in a Java 8 Optional, but my project is not on Java 8, and anyway that solution seems a bit ugly.
This is surely such a common, everyday task that I'm sure there must be a simple and easy solution, but I couldn't find one so far. What is the recommended pattern to use here?
Your use case seems appropriate for the new RxJava 2 Observable type Maybe, which emits 1 or 0 items.
Maybe.fromCallable will treat a returned null as no item emitted.
You can see this discussion regarding nulls with RxJava 2; I guess there aren't many choices other than using something Optional-like in other cases where you need null/empty values.
Thanks to @yosriz, I have it working with Maybe. Since I can't put code in comments, I'll post a complete answer here:
Instead of Observable, use Maybe like this:
public Maybe<Record> lookupRecord(int id) {
    Callable<Record> callback = new Callable<Record>() {
        @Override
        public Record call() throws Exception {
            // do the actual sql stuff, e.g.
            // select * from Record where id = ?
            return record;
        }
    };
    return Maybe.fromCallable(callback).subscribeOn(Schedulers.computation());
}
The good thing is the returned record is allowed to be null. To detect which situation occurred in the subscriber, the code is like this:
lookupRecord(id)
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(new Consumer<Record>() {
        @Override
        public void accept(Record r) {
            // record was loaded OK
        }
    }, new Consumer<Throwable>() {
        @Override
        public void accept(Throwable throwable) {
            // there was an error
        }
    }, new Action() {
        @Override
        public void run() {
            // there was an empty result
        }
    });

Testing that an array is ordered randomly

I am testing my code with PHPUnit. My code has several ordering methods: by name, age, count, and random. Below are the implementation and test for sorting by count. These are pretty trivial.
class Cloud {
    //...
    public function sort($by_property) {
        usort($this->tags, array($this, "cb_sort_by_{$by_property}"));
        return $this;
    }

    private function cb_sort_by_name($a, $b) {
        $al = strtolower($a->get_name());
        $bl = strtolower($b->get_name());
        if ($al == $bl) {
            return 0;
        }
        return ($al > $bl) ? +1 : -1;
    }

    /**
     * Sort Callback. High to low.
     */
    private function cb_sort_by_count($a, $b) {
        $ac = $a->get_count();
        $bc = $b->get_count();
        if ($ac == $bc) {
            return 0;
        }
        return ($ac < $bc) ? +1 : -1;
    }
}
Tested with:
/**
 * Sort by count. Highest count first.
 */
public function testSortByCount() {
    // Jane->count: 200, Blackbeard->count: 100
    // jane and blackbeard are mocked "Tags".
    $this->tags = array($this->jane, $this->blackbeard);
    $expected_order = array("jane", "blackbeard");
    $given_order = array();

    $this->object->sort("count");

    foreach ($this->object->get_tags() as $tag) {
        $given_order[] = $tag->get_name();
    }

    $this->assertSame($expected_order, $given_order);
}
But now, I want to add "random ordering"
/**
 * Sort random.
 */
public function testSortRandom() {
    // what to test? That "shuffle" got called? That the resulting array
    // has "any" ordering?
}
The implementation could be anything from calling shuffle($this->tags) to a usort callback that returns 0, -1 or +1 randomly. Performance is an issue, but testability is more important.
How do I test that the array got ordered randomly? AFAIK it is very hard to stub global functions like shuffle().
Assuming you are using shuffle, your method should look like this:
public function sortRandom() {
    shuffle($this->tags);
    return $this->tags;
}
Well, you don't need to test whether the keys are shuffled, but whether an array is still returned.
function testSortRandom() {
    $this->assertTrue(is_array($this->object->sortRandom()));
}
You should test your code, not php core code.
This is actually not really possible in any meaningful sense. If you had a list with just a few items in it, then it'd be entirely possible that sorting randomly would look like it was sorted by any given field (and as it happens, the odds of ending up in the same order as a sort by any other field are pretty high if you don't have too many elements).
Unit testing a sort operation seems a bit daft to me if the operation doesn't actually manipulate the data in any predictable way. It feels like unit testing for the sake of it, rather than because it actually measures that something works as intended.
I decided to implement this with a global-wrapper:
class GlobalWrapper {
    public function shuffle(&$array) {
        shuffle($array);
    }
}
In the sort, I call shuffle through that wrapper:
public function sort($by_property) {
    if ($by_property == "random") {
        $this->global_wrapper()->shuffle($this->tags);
    }
    //...
}
Then, in the tests, I can mock that GlobalWrapper and provide stubs for the global functions that are of interest. In this case, all I am interested in is that the method gets called, not what it outputs[1].
public function testSortRandomUsesShuffle() {
    $global = $this->getMock("GlobalWrapper", array("shuffle"));
    $global->expects($this->once())
           ->method("shuffle");

    $this->object->set_global_wrapper($global);
    $this->object->sort("random");
}
[1] In reality I have unit tests for this wrapper too, testing the parameters and the fact that it does a call-by-reference. Also, this wrapper was already implemented (and called DrupalWrapper) to allow me to stub certain global functions provided by a third party (Drupal). That implementation allows me to pass in the wrapper using set_drupal() and fetch it using drupal(). In the examples above, I have called these set_global_wrapper() and global_wrapper().

How to call delegate only once / one time with moles?

How is it possible to call a delegated Method only once / one time with moles?
MyClass.AllInstances.ResultateGet = delegate { return new ResultatInfoCollection(); };
I want to call the Method "ResultateGet" only one time because the init is quite complex the first time without a delegate.
target.UpdateResultate(); //calls delegate "ResultateGet"
//Assert some stuff
target.Verify(); //needs original function "ResultateGet" so unit test is useful
I am generally interested in how to call a Moles delegate one time or a specific number of times before the original function is called instead of the delegate.
Update:
I found a way, but it seems a little bit cumbersome. Is there a better solution?
ResultatInfoCollection x = new ResultatInfoCollection();
MolesContext.ExecuteWithoutMoles(() => x = target.Resultate);
Also, see my answer to: How to assign/opt from multiple delegates for a 'moled' method? This provides an example of gating logic inside the anonymous method.
Ooh, good question! I have encountered this, myself. What you are looking for is called a "fallthrough" behavior (execution of the original code). The anonymous method to which Moles detours must contain a switching mechanism that falls through, after the first call. Unfortunately, I don't believe a fallthrough feature is included in Moles, at this time.
Your updated workaround is exactly what you need -- calling fallthrough would do the same thing. I suggest adding a sentinel value, doFallthrough, that gates the calls:
bool doFallthrough = false;
ResultatInfoCollection x = new ResultatInfoCollection();
MyClass.AllInstances.ResultateGet = delegate {
    if (!doFallthrough)
    {
        doFallthrough = true;
        return new ResultatInfoCollection();
    }

    MolesContext.ExecuteWithoutMoles(() => x = target.Resultate);
    return x;
};
Calling a specific number of times simply requires a change to the sentinel value type:
int doFallthrough = 0;
ResultatInfoCollection x = new ResultatInfoCollection();
MyClass.AllInstances.ResultateGet = delegate {
    if (++doFallthrough < 5)
        return new ResultatInfoCollection();

    MolesContext.ExecuteWithoutMoles(() => x = target.Resultate);
    return x;
};
Old question, but since I found it when I was searching, I'll answer it for the next person with my solution.
Using MolesContext.ExecuteWithoutMoles to call the original function works just fine in most cases. However, if you are moling any other functions or classes downstream from this call, they won't be moled either.
Given the following class:
public class TheClass
{
    public int TheFunction(int input)
    {
        return input + TheOtherFunction();
    }

    public int TheOtherFunction()
    {
        return DateTime.Now.Minute;
    }
}
If you use the MolesContext.ExecuteWithoutMoles approach:
MTheClass.AllInstances.TheOtherFunctionInt = (instance) => {
    return 5;
};

MTheClass.AllInstances.TheFunctionInt = (instance, input) =>
{
    // do your stuff here, for example:
    Debug.WriteLine(input.ToString());

    var result = MolesContext.ExecuteWithoutMoles<int>(() => instance.TheFunction(input));

    // do more stuff, if desired
    return result;
};
Your mole for TheOtherFunction will not be hit, because it was (indirectly) executed within the "without moles" scope.
However, you can add and remove mole delegates at any time, which allows you to do the following, as outlined in the Moles documentation (p. 24):
MTheClass.AllInstances.TheOtherFunctionInt = (instance) => {
    return 5;
};

MolesDelegates.Func<TheClass, int, int> molesDelegate = null;
molesDelegate = (instance, input) =>
{
    // do your stuff here, for example:
    Debug.WriteLine(input.ToString());

    int result = 0;
    try
    {
        MTheClass.AllInstances.TheFunctionInt = null;
        result = instance.TheFunction(input);
    }
    finally
    {
        MTheClass.AllInstances.TheFunctionInt = molesDelegate;
    }

    // do more stuff, if desired
    return result;
};
MTheClass.AllInstances.TheFunctionInt = molesDelegate;
The TheOtherFunction mole is still hit. With this method, you can remove the moling from just that specific method without impacting your other moles. I've used this, and it works. The only trouble I can see is that it won't work if you have a recursive function, or possibly in a multi-threaded situation.

SubSonic 3 ActiveRecord generated code with warnings

While using SubSonic 3 with ActiveRecord T4 templates, the generated code shows many warnings about CLS-compliance, unused items, and lack of GetHashCode() implementation.
In order to avoid them, I did the following modifications:
// Structs.tt
[CLSCompliant(false)] // added
public class <#=tbl.CleanName#>Table : DatabaseTable
{ ...

// ActiveRecord.tt
[CLSCompliant(false)] // added
public partial class <#=tbl.ClassName#> : IActiveRecord
{
    #region Built-in testing
    #pragma warning disable 0169 // added
    static IList<<#=tbl.ClassName#>> TestItems;
    #pragma warning restore 0169 // added
    ...

    public override Int32 GetHashCode() // added
    {
        return this.KeyValue().GetHashCode();
    }
    ...
Is there a better way to get rid of the warnings? Or a better GetHashCode() implementation?
Currently, the only way to get rid of the warnings is to update your T4 templates and submit a bug/fix to Rob, or wait until somebody else does.
As for the GetHashCode implementation, I don't think you're going to find a good way to do this through templates. Hash code generation is very dependent on what state your object contains. And people with lots of letters after their name work long and hard to come up with hash code algorithms that are fast and return results with low chances of collision. Doing this from within a template that may generate a class with millions of different permutations of the state it may hold is a tall order to fill.
Probably the best thing Rob could have done would be to provide a default implementation that calls out to a partial method, checks the result and returns it if found. Here's an example:
public partial class Foo
{
    public override int GetHashCode()
    {
        int? result = null;
        TryGetHashCode(ref result);
        if (result.HasValue)
            return result.Value;

        return new Random().Next();
    }

    partial void TryGetHashCode(ref int? result);
}

public partial class Foo
{
    partial void TryGetHashCode(ref int? result)
    {
        result = 5;
    }
}
If you compile this without the implementation of TryGetHashCode, the compiler completely omits the call to TryGetHashCode, and you go from the declaration of result straight to the check to see whether it has a value, which it never will, so the default hash code is returned.
I wanted a quick solution for this as well. The version that I am using does generate GetHashCode for tables whose primary key is a single int.
As our simple tables use text as their primary keys, this didn't work out of the box, so I made the following change to the template, near line 273 in ActiveRecord.tt:
<# if(tbl.PK.SysType == "int") { #>
    public override int GetHashCode() {
        return this.<#=tbl.PK.CleanName #>;
    }
<# } else { #>
    public override int GetHashCode() {
        throw new NotImplementedException();
    }
<# } #>
This way GetHashCode is generated for all the tables and the warnings stop, but it will throw an exception if called (which, in our case, it isn't).
We use this for a testing application, not a website or anything like that, and this approach may not be valid in many situations.
