We use SonarQube against our application. One of the SonarQube rules says:
Selectors of lower specificity should come before overriding selectors of higher specificity
The details are here. As my application has many violations, changing the order by hand isn't really feasible. I'm wondering if there's a way to use scss-lint, stylelint or something else in a "fix" mode that could change the order of the selectors. I looked but couldn't find anything in stylelint. Maybe it can't safely be done automatically, as changing the order could change which rules take precedence and therefore change the application's behaviour...
As far as I personally know, there is no linter that provides that (I am curious about it myself). But here are some thoughts about the need to follow that 'rule':
Indeed: writing Sass/CSS so that selectors with lower specificity come first is good practice. The CSS structure becomes more readable, and it is easier to build up your code because there is a clearer system in your head (and in the code).
But as far as the mechanics of CSS are concerned, there is really no need to do it this way. The code does not become any safer (or less safe) by following the rule, and pages don't load more slowly if you ignore it. That is exactly what the specificity mechanism is for: it is the specificity, not the order of the selectors, that decides which rule wins, so you are free to write your code in the order you need it. Only when the specificity is the same does the order count.
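A minimal illustration (the selectors are invented for the example): the rule with the higher specificity wins even when it comes first in the file, and source order only decides between rules of equal specificity.
#header .title { color: blue; }  /* specificity 0,1,1 */
.title { color: red; }           /* specificity 0,1,0: comes later, still loses */
.note { color: green; }
.note { color: orange; }         /* same specificity: here the later rule wins */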
So, maybe this rule leads to 'better' code. But not every rule needs to be fulfilled: neither all of the best-practice rules Google tries to establish with the tools they offer in their browser, nor every rule other analysis tools provide, has to be followed.
And if it doesn't happen in this project, since it takes resources to correct ... it could, but doesn't have to, be a target for the next project ;-)
(This question regards search/6.)
I was wondering if there is a way (other than manual tracing) to pause the execution of search/6 every time a value is found for a single variable?
I would like to accomplish this to further investigate what is happening during search in constrained models.
For example, if you are trying to solve the classic sudoku problem, and you have written a set of constraints and a print method for your board, it can be useful to print the board after setting up the constraints but before searching, in order to evaluate the strength of your constraints. However, once search is called to solve the sudoku, you don't really have an overview of the individual results being built up underneath unless you do a trace.
It would be very useful if something along these lines were possible:
(this is just an abstract example)
% Let's imagine this is a (very poorly) constrained sudoku board
?- problem(Sudoku),constraint(Sudoku),print(Sudoku).
[[1,3,_,2,_,_,7,4,_],
[_,2,5,_,1,_,_,_,_],
[4,8,_,_,6,_,_,5,_],
[_,_,_,7,8,_,2,1,_],
[5,_,_,_,9,_,3,7,_],
[9,_,_,_,3,_,_,_,5],
[_,4,_,_,_,6,8,9,_],
[_,5,3,_,_,1,4,_,_],
[6,_,_,_,_,_,_,_,_]]
Now for the search:
?- problem(Sudoku),constraint(Sudoku),search_pause(Sudoku,BT),print(Sudoku,BT).
[[1,3,6,2,_,_,7,4,_],
[_,2,5,_,1,_,_,_,_],
[4,8,_,_,6,_,_,5,_],
[_,_,_,7,8,_,2,1,_],
[5,_,_,_,9,_,3,7,_],
[9,_,_,_,3,_,_,_,5],
[_,4,_,_,_,6,8,9,_],
[_,5,3,_,_,1,4,_,_],
[6,_,_,_,_,_,_,_,_]]
Board[1,3] = 6
Backtracks = 1
more ;
Using existing Visualization tools
Have a look at the Visualization Tools Manual. You can get the kind of matrix display you want by adding a viewable_create/2 annotation to your code, and launching a Visualisation Client from TkECLiPSe's Tools-menu.
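As a rough sketch (assuming the viewable library described in that manual, and reusing the questioner's problem/1 and constraint/1 predicates), the annotation amounts to one extra goal between setting up the constraints and searching:
:- lib(ic), lib(ic_search), lib(viewable).

solve(Board) :-
    problem(Board),
    constraint(Board),
    viewable_create(sudoku, Board),    % make the board variables visible to a visualisation client
    term_variables(Board, Vars),
    search(Vars, 0, first_fail, indomain, complete, []).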
Using your own instrumented search routine
You can replace the indomain_xxx choice methods in search/6 with a user-defined one where you can print information before and/or after propagation.
If that is not enough, you can replace the whole built-in search/6 with your own, which is not too difficult, see e.g. the ECLiPSe Tutorial chapter on tree search or my answer to this question.
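For instance, a user-defined choice method can be a small wrapper around indomain that reports every labelling step (the name my_choice is made up here, and the sketch assumes lib(ic) and lib(ic_search)):
:- lib(ic), lib(ic_search).

% my_choice(X): behaves like the indomain choice method, but prints the
% variable before and after it is labelled, so the effect of the propagation
% triggered by each assignment becomes visible.
my_choice(X) :-
    printf("labelling %w%n", [X]),
    indomain(X),
    printf("  chose %w%n", [X]).

solve(Vars) :-
    search(Vars, 0, input_order, my_choice, complete, []).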
Tracing using data-driven facilities
Using ECLiPSe's data-driven control facilities, you can quite easily display information when certain things happen to your variables. In the simplest case you do something on variable instantiation:
?- suspend(printf("X was instantiated to %w%n",[X]), 1, X->inst),
writeln(start), X=3, writeln(end).
start
X was instantiated to 3
end
Based on this idea, you can write code that allows you to follow labeling and propagation steps even when they happen inside a black-box search routine. See the link for details.
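A small sketch of that idea (assuming lib(ic); trace_instantiations is a made-up helper): attach the printing goal to every variable before handing them to the black-box search, so each instantiation is reported as it happens.
:- lib(ic).

% attach a suspended printf to each variable; it wakes when that variable
% becomes instantiated, even deep inside a black-box search routine
trace_instantiations(Vars) :-
    ( foreach(X, Vars) do
        suspend(printf("variable instantiated to %w%n", [X]), 1, X->inst)
    ).

% e.g.  problem(S), constraint(S), term_variables(S, Vs),
%       trace_instantiations(Vs), search(Vs, 0, input_order, indomain, complete, []).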
I'm reading Petzold's free (.PDF) WP7 book, and he says that he always changes "EventArgs e" to "EventArgs args" in event handlers (which makes sense to me, as "e" sometimes conflicts with what I want to name the Exception object); but he also says he removes the accessibility modifiers that are automatically prepended sometimes.
I'm wondering why he does so, and:
1) Should I adopt the same practice?
2) If that's a better way (Petzold is no wet-behind-the-ears greenhorn), why doesn't MS create these methods that way by default?
(I'm assuming this is a matter of removing the private access modifier from a method. If it's removing (say) public then that's a semantic change, and a different matter.)
It's definitely a matter of personal preference. I used to favour removing the access modifier when it was the default, but these days I prefer to be explicit.
Benefits of leaving it implicit:
Less clutter (important when writing books, which may well be relevant here)
As the default is always "the most private you can express explicitly" it makes it more obvious which members have been "promoted" to have wider access
Benefits of making it explicit:
Readers who don't know the language as thoroughly are left in no doubt
It introduces one more mental reminder that you need to actively think about this. (I'm sure people often leave things in a default way without further thought otherwise - I see relatively few virtual methods in C# compared with Java, just because of the defaults.)
Assuming you follow mental reminders, it shows the reader that this was a deliberate decision.
If you want an "appeal to authority" you may be interested to know that Miguel de Icaza favours (vehemently) the former approach, and Eric Lippert favours the latter.
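For concreteness, the two styles look like this (class and handler names are invented for the example); both handlers end up private, and the only difference is whether that is spelled out:
public class MainPage
{
    // implicit: relies on private being the default for class members
    void OnLoaded(object sender, EventArgs args) { }

    // explicit: states the same accessibility in so many words
    private void OnClosed(object sender, EventArgs args) { }
}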
(Mathematica version: 8.0.4)
lst = Names["Internal`*"];
Length[lst]
Pick[lst, StringMatchQ[lst, "*Bag*"]]
gives
293
{"Internal`Bag", "Internal`BagLength", "Internal`BagPart", "Internal`StuffBag"}
The Mathematica GuideBook for Programming by Michael Trott, page 494, says of the Internal` context:
"But similar to Experimental` context, no guarantee exists that the behavior and syntax of the functions will still be available in later versions of Mathematica"
Also, here is a mention of Bag functions:
Implementing a Quadtree in Mathematica
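The basic pattern used there, as far as I can piece it together from such posts (there is no official documentation for it), is roughly:
bag = Internal`Bag[];                      (* create an empty bag *)
Do[Internal`StuffBag[bag, i^2], {i, 5}]    (* append elements one by one *)
Internal`BagLength[bag]                    (* 5 *)
Internal`BagPart[bag, All]                 (* {1, 4, 9, 16, 25} *)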
But since I've seen a number of Mathematica experts here suggest the Internal`Bag functions and use them themselves, I am assuming it is reasonably safe to use them in actual code. If so, I have the following question:
Where can I find a more official description of these functions (the API, etc.) like the one in the Documentation Center? There is nothing about them there now:
??Internal`Bag
Internal`Bag
Attributes[Internal`Bag]={Protected}
If I am to start using them, I find it hard to learn new functions just by looking at a few examples and using trial and error to see what they do. I wonder if someone here has a more complete and self-contained document on their use, describing the API and so on in more detail than what is out there already, or a link to such a place.
The Internal` context is exactly what its name says: meant for internal use by Wolfram developers.
This means, among other things, that the following holds for anything you might find in there:
You most likely won't be able to find any official documentation on it, as it's not meant to be used by the public.
It's not necessarily as robust about invalid arguments. (Crashing the kernel can easily happen on some of them.)
The API may change without notice.
The function may disappear completely without notice.
Now, in practice some of them may be reasonably stable, but I would strongly advise you to steer away from them. Using undocumented APIs can easily set you up for a lot of pain and a nasty surprise in the future.