Is it possible to check for enum.equals() usage? - sonarqube

We recently ran into a bug in our code where someone had used .equals() to compare enums. One of the fields had been changed to a different enum type, but we got no compiler error because .equals() was used instead of ==.
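For illustration, a minimal sketch of the kind of mistake described (the Color and Size enums are hypothetical):

enum Color { RED, GREEN }
enum Size  { SMALL, LARGE }

class EnumCompare {
    static boolean isRed(Size size) {
        // Compiles without complaint, but is always false now that the
        // field's type has changed from Color to Size:
        return Color.RED.equals(size);
        // With == the mistake is a compile-time error:
        //   return Color.RED == size;  // error: incomparable types: Color and Size
    }
}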

Can you look at this rule specification and tell whether it matches the problem you want to catch? (Assuming you are talking about Java.)
https://jira.sonarsource.com/browse/RSPEC-4551

Related

How can I defeat the Go optimizer in benchmarks?

In Rust, you can use the black_box function to force the compiler to:
- assume that the function's argument is used (forcing it not to optimize away the code that generates that value), and
- not be able to inspect how its return value is produced (preventing it from doing constant folding and other such optimizations).
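For reference, a minimal sketch of the Rust side, using the std::hint::black_box available in current stable Rust (benchmark harnesses such as criterion expose their own equivalent):

use std::hint::black_box;

fn sum_to(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    // black_box(1000) keeps the input opaque to the optimizer, and
    // black_box(result) forces the result to be treated as used,
    // so the loop in sum_to cannot be constant-folded away.
    let result = sum_to(black_box(1000));
    black_box(result);
}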
Is there a similar facility in Go (to accomplish either task)?
No.
If you want to use a result: Assign to an exported global.
I believe runtime.KeepAlive is recommended, as per the following Github issue. Unfortunately, it's unclear whether anything exists for function arguments, or whether KeepAlive is even guaranteed to work.
https://github.com/golang/go/issues/27400
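To make the two suggestions above concrete, here is a minimal sketch of a benchmark that assigns its result to an exported package-level variable and also calls runtime.KeepAlive on it (the expensive function and the Sink variable are placeholders, and, as noted, KeepAlive is not guaranteed to defeat dead-code elimination):

package foo_test

import (
    "runtime"
    "testing"
)

// Sink is an exported package-level variable; assigning the benchmark
// result to it keeps the compiler from treating the work as dead code.
var Sink int

// expensive stands in for whatever work you actually want to measure.
func expensive(n int) int {
    s := 0
    for i := 0; i < n; i++ {
        s += i * i
    }
    return s
}

func BenchmarkExpensive(b *testing.B) {
    r := 0
    for i := 0; i < b.N; i++ {
        r = expensive(1000)
    }
    Sink = r             // use the result via an exported global
    runtime.KeepAlive(r) // and/or keep it explicitly alive
}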

Is receive(:method).never completely equivalent to not_to receive(:method)

Using RSpec 3.7.0. I would like to know whether writing
expect(instance).not_to receive(:method)
is completely identical to writing
expect(instance).to receive(:method).never
or if there are any (even subtle) differences or side effects.
As per this rspec-mocks issue: https://github.com/rspec/rspec-mocks/issues/895
You can use either of these to cause an example to fail if the method is called.
The author, @myronmarston, also provided an example there.
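For illustration, a minimal sketch showing both forms (the logger double is hypothetical); either one makes the example fail if the message is received:

RSpec.describe "negative message expectations" do
  it "fails if the message is received" do
    logger = double("logger")
    expect(logger).not_to receive(:warn)
    # equivalently: expect(logger).to receive(:warn).never
    logger.warn("boom") # this call fails the example with either form
  end
end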

How do I get a callstack in Haskell?

I am trying to track down a non-exhaustive pattern in a library's code, specifically HDBC's MySQL implementation. I believe it is trying to match on types in my program and map them to MySQL's types. I can't seem to get a call stack for this error, and since there are a number of parameters to the SQL query, it is difficult to track down exactly which one is causing it.
Is it possible to get a call stack in Haskell so I would know which parameter is causing the error? I would also have thought the compiler should catch this, since it should be able to look at my types and the patterns and make sure there is a corresponding match.
You can use the GHCi debugger to identify where the exception is coming from.
I walk through a full example here.
You might also take a look at the Debug.Trace module.
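For example, a minimal sketch of a GHCi debugger session (assuming the program's entry point is main and the offending code is interpreted rather than pre-compiled):

ghci> :set -fbreak-on-exception
ghci> :trace main
-- execution stops where the non-exhaustive pattern exception is thrown; then:
ghci> :hist
-- :hist lists the evaluation steps that led to the exception;
-- :back steps backwards through that history so bindings can be inspected
ghci> :back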

Delphi, evaluate formula string

Duplicate
Best algorithm for evaluating a mathematical expression?
mathematical expression parser in Delphi?
I need a program in Delphi that gets a one-variable equation from Edit1, such as "F(x)=4*X+2*log(x)+4*power(X,2)", gets the value of X from Edit2, and then shows the result F(X) in Edit3. Please help me.
Thanks.
You probably need to have a look at this component - TbcParser.
http://www.bestcode.com/html/tbcparser.html
This component has source code included.
You can also check out JCL, which comes with an expression evaluator in the file JclExprEval.pas. It's free and open source.
Have a look at
http://www.efg2.com/Lab/Library/Delphi/MathFunctions/Parsers.htm
Also, if you have the JEDI and/or FastReport libraries installed, you can use their parsers. We use TParser10 from http://cc.embarcadero.com/item/15974 which is one of the fastest available, if not the fastest. It is freeware and works flawlessly up to D2007. I heard that it also works in D2009, but I have not tested that yet.
If you want to write your own implementation rather than use a ready-made library, it will take you some time. Just search for "formula parser". I would start with a tokenizer and then build a parse tree from the tokens.
It strongly depends on your decimal separator. Use StrToFloat or, in newer versions of Delphi, TryStrToFloat.

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, you're giving up code clarity, which for me is clearly a downside, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces that can only be resolved by referencing the classes explicitly, or unless I make an alias for it with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. More fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the DataGrid version is clearer to many.
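For instance, a small sketch of the collision case: System.Timers and System.Threading both define a Timer class, so with both namespaces imported the short name is ambiguous, and a fully qualified name or an alias resolves it:

using System.Timers;
using System.Threading;

// An alias gives System.Threading.Timer a short, unambiguous name:
using ThreadingTimer = System.Threading.Timer;

class Example
{
    static void Main()
    {
        // "new Timer(...)" alone would be an ambiguous reference here,
        // so the fully qualified name (or the alias) is used instead:
        var elapsedTimer = new System.Timers.Timer(1000);
        var callbackTimer = new ThreadingTimer(_ => { });
        elapsedTimer.Dispose();
        callbackTimer.Dispose();
    }
}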
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention on lifecycle and collection), eg
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason why its use is prevalent in the .NET world is because of auto-generated code, such as designer markup. In that case it would be better to fully-qualify names like class names because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you which of your fully-qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names would be a must (then again, why would you want to cut and paste all the time? It's a bad form of code reuse!).
I don't think there is really a downside, just readability vs. actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you are all right with typing System.Reflection 10 times, it's not a big deal, but if you plan on using a namespace a lot, then I would recommend an import.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC for example.
struct A {
    int A::b; // warning: extra qualification 'A::' on member 'b'
};
