If I have a method that hides a button, I would probably call it HideButton
If I have a method that shows a button, I would probably call it ShowButton
But what do you guys call a ShowIfThisHideIfThat style method?
Possible Choices:
TestForButtonVisibility (this kind of sounds like it will return true/false but not actually do the work)
TestButton
ShowHideButton (style I'm currently using)
Been at this a number of years and still don't have a style that I like for these types of methods. I'm working mostly in C# and a little Java, Ruby, and F#. What do you use for method names in this regard?
// example of the style of method
public void ShouldIShowOrHideButton()
{
    Button.Visible = ((chkSomeSetting.Checked) && (DateTime.Now.Day < 8));
}
How about updateButtonVisibility()?
It might be over-engineering, but the reason you may be having trouble is that the method is doing two things. To keep each function doing a single thing, how about a method that determines whether the button should be shown (takes parameters, returns bool), and then set the button's visibility directly:
Button.Visible = DetermineIfButtonShouldBeShown(...);
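A minimal C# sketch of that split, reusing the condition from the question (the method name and parameters are just illustrative):
private bool DetermineIfButtonShouldBeShown(bool settingChecked, DateTime now)
{
    // The rule from the question's example, now parameterised and UI-free.
    return settingChecked && now.Day < 8;
}

// At the call site, the UI assignment stays a one-liner:
// Button.Visible = DetermineIfButtonShouldBeShown(chkSomeSetting.Checked, DateTime.Now);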
My preference has been to keep toggle methods rather than separate methods for hide/show.
ToggleButtonVisibility()
This allows you to put your testing code in there and the expected result/output would be a properly visible/invisible button.
Edit: Toggle is a personal preference that stems from a partial background in working with binary gates, architecture, etc where a toggle may go through several separate gates before reaching an end state. The word itself could be changed to anything else such as Update, Determine, Finalize, or even Steve. It really boils down to what makes sense to you and what your standard is.
Edit: Now that your question has been edited to include the example:
// example of the style of method
public void ShouldIShowOrHideButton()
{
    Button.Visible = ((chkSomeSetting.Checked) && (DateTime.Now.Day < 8));
}
My answer is neither. I would do two things:
Move the Button.Visible part outside the function, so the function just computes the logic and returns bool.
Name the function according to its internal logic not according to whether it is for a button or not. So if your function checks for a wedding day it would be called IsWeddingDay, if it checks for a monthly meeting it would be IsMonthlyMeeting.
The code would be
Button.Visible = IsMonthlyMeeting()
and the logic can be subsequently used to control any other widgets if needed.
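For example (IsMonthlyMeeting, RefreshUi and the reminder label are illustrative names, not from the question):
private bool IsMonthlyMeeting()
{
    // Named after the business rule, not after the button it happens to drive.
    return chkSomeSetting.Checked && DateTime.Now.Day < 8;
}

private void RefreshUi()
{
    Button.Visible = IsMonthlyMeeting();
    // Any other widget that depends on the same rule can reuse the predicate,
    // e.g. lblReminder.Visible = IsMonthlyMeeting();
}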
Old Answer:
You probably need to explain more what ShowIfThisHideIfThat does.
If it depends on one condition, like:
if (condition)
    ShowButton()
else
    HideButton()
then I would use
Button.SetVisibility(condition)
as per Lazarenko's comment above, or if the language has properties:
Button.Visible = condition
If you have two conditions like what ShowIfThisHideIfThat seems to imply, equivalent to:
if (cond1)
    ShowButton()
else if (cond2)
    HideButton()
else
    LeaveButtonAsItIs()
then the logic in my opinion is complicated and I wouldn't use one function. Sure, the code is equivalent to
Button.Visible = cond1 || (!cond2 && Button.Visible)
but you lose the understandability.
I would use one of these:
setXVisibility
refreshX or refreshXStatus
How about using SetButtonVisibility()?
The confusion seems to stem from a mixing of the business logic and the UI logic. The test isn't whether the button should be shown. Code is going to use the test to decide if the button should be shown. It probably depends on whether some feature should be available. Consider:
if (IsFeatureEnabled()) {
    ShowButton();
} else {
    HideButton();
}
This is the code where business logic (IsFeatureEnabled()) meets UI (ShowButton()/HideButton()).
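A hedged sketch of keeping the two sides apart (FeatureState is an invented name; the rule body is borrowed from the question's example):
public class FeatureState
{
    public bool SomeSettingChecked { get; set; }

    // Business logic: is the feature available right now?
    public bool IsFeatureEnabled()
    {
        return SomeSettingChecked && DateTime.Now.Day < 8;
    }
}

// UI logic elsewhere only translates the business answer into widget state:
// Button.Visible = featureState.IsFeatureEnabled();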
I am creating my first project that uses ui-router.
My project has about 10 views, each with their own controller and state. I am trying to modularise/encapsulate/decouple as best as possible but I am having trouble working out where to put the onExit and onEnter state callbacks.
The first option is to put it in app.js which is currently defining all of my states, however I feel that this would not be a good place as it could cause this file to blow up and become hard to read as more states are introduced and the logic gets more complex.
The second option I looked into was to put it into a controller (I have one for each state); however, from my research that doesn't seem to be best practice.
The third option is to create a service that is resolved; however, with this option I would end up with either a giant service full of state-change functions for all of the states (not decoupled) or an additional service per state containing the state-change functionality, which I worry would increase project complexity.
What is the standard way to achieve this?
Are there any other options that I am missing?
Our strategy for this has been to disregard the onEnter and onExit on the state object, because as you are discovering, they feel like they are in the wrong place in terms of separation of concerns (app.js).
For onEnter: we handle setup in an activate() function in each controller, which we manually execute inside the controller. This happens to also match the callback that will get executed in Angular 2.0, which was not an accident ;).
function activate() {
    // your setup code here
}

// execute it; this line can be removed in Angular 2.0
activate();
For onExit: We rarely need an exit callback, but when we do, we listen for the $scope $destroy event.
$scope.$on("$destroy", function() {
if (timer) {
$timeout.cancel(timer);
}
});
Lua novice here, experimenting with a GUI via iup.GetParam, using Lua 5.1.
I have a simple use of iup.GetParam (which works fine with a simple callback function testing for OK & Cancel) and am trying to add some simple data validation for the parameters (e.g. testing a parameter for being alphanumeric), but am unsure of the correct approach.
I've searched the reference manual (and for code examples), but drawn a blank so far.
Using the string validation example, if I want to reject the
character entered by the user and display the old value of the
parameter, do I simply return 0 from the callback function, or, do
I also have to reset the value of the parameter to its previous
value before the return? Or is the right approach something
completely different?
In either case, do I have to refresh / update the GUI display with a
separate iup call, or does GetParam handle that for me?
Whatever combination I try, it doesn't appear to work (the parameter happily displays the non-alphanumerics). Debugging shows the validation test and return working as coded, so the advice I'm seeking is to get confirmation of the right approach. Sharing a simple working example would be great.
simply return 0
No - you don't need to reset the parameter or refresh the display yourself; IUP will do everything for you in this case.
Download "getparam.wlua" from the examples folder, then add this to its callback:
elseif (param_index == 1) then
    return 0
You will notice that the integer value is now read-only.
After reading the documentation about the second parameter of Knockout's applyBindings, I understand the reason behind it:
Optionally, you can pass a second parameter to define which part of
the document you want to search for data-bind attributes. For example,
ko.applyBindings(myViewModel,
document.getElementById('someElementId')). This restricts the
activation to the element with ID someElementId and its descendants,
which is useful if you want to have multiple view models and associate
each with a different region of the page.
But I have not seen anything about performance. My thoughts (which are based on nothing) are that it makes sense that ko bindings would work faster if you restrict the binding not to the whole document but to a small part of it.
So does it make sense to use ko.applyBindings(myViewModel, $('#someElementId')[0]) without having multiple view models, just for the sake of performance? (I've heard about Mr. Knuth, so it would be nice to refrain from the premature-optimization citation.)
applyBindings is a thin wrapper over applyBindingsToNodeAndDescendantsInternal (which is where ALL the magic happens), and if you don't supply a node it uses window.document.body.
There is no difference between calling applyBindings(viewModel) and applyBindings(viewModel, rootNode) unless you have a very large DOM and only want to bind a small fraction of it. In that case, use the second parameter and Knockout will have a lot less to scan during the initial setup.
Once the bindings are in place they react locally to observable changes, unlike a framework such as AngularJS, which can potentially scan the entire DOM for changes. The root node is irrelevant at that point.
ko.applyBindings = function (viewModel, rootNode) {
    if (rootNode && (rootNode.nodeType !== 1) && (rootNode.nodeType !== 8))
        throw new Error("ko.applyBindings: first parameter should be your view model; second parameter should be a DOM node");
    rootNode = rootNode || window.document.body; // Make "rootNode" parameter optional
    applyBindingsToNodeAndDescendantsInternal(viewModel, rootNode, true);
};
I have some data being loaded from a server, but there's no guarantee that I'll have it all when the UI starts to display it to the user. Every frame there's a tick function. When new data is received a flag is set so I know that it's time to load it into my data structure. Which of the following ways is a more sane way to decide when to actually run the function?
AddNewStuffToList()
{
    // Clear the list and reload it with new data
}

Foo_Tick()
{
    if (updated)
        AddNewStuffToList();

    // Rest of tick function
}
Versus:
AddNewStuffToList()
{
    if (updated)
    {
        // Clear the list and reload it with new data
    }
}

Foo_Tick()
{
    AddNewStuffToList();
    // Rest of tick function
}
I've omitted a lot of the irrelevant details for the sake of the example.
IMHO the first one. This version separates:
when to update the data (Foo_Tick)
FROM
how to load the data (AddNewStuffToList()).
The second option just mixes everything together.
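A small C# sketch of that separation, built on the question's example (DataItem and the flag-clearing line are assumptions of mine, not from the question):
class DataItem { /* whatever the server sends */ }

private bool updated;
private readonly List<DataItem> items = new List<DataItem>();

private void Foo_Tick()
{
    if (updated)
    {
        AddNewStuffToList();   // "how to load" lives entirely in here
        updated = false;       // assumption: clear the flag once the update is consumed
    }
    // Rest of tick function
}

private void AddNewStuffToList()
{
    items.Clear();
    // Reload the list from the newly received data
}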
You should probably not run the function until the data has been updated. That way, the function can be used for more purposes.
Let's say you have two calls that are both going to come in and put data into the list. With the setup that checks the variable inside the function, you can only check whether a single call has come in. Instead, if you check it in the calling function, you can have as many input sources as you want without having to change the loading function itself.
Functions should be precise about what they do, and should avoid depending on information created by another function unless it is passed in.
In the first version the simple "updated" flag is checked each time, and AddNewStuffToList is called only if it is true.
With the second version you call AddNewStuffToList every time, and it then checks "updated" internally.
In this particular instance, given that function calls are generally expensive compared to a variable check, I personally prefer the first version.
However, there are situations when a check inside the function would be better.
e.g.
void doSomething(Pointer *p) {
    p->doSomethingElse();
}

void FooTick() {
    Pointer *p = new Pointer();
    // do stuff ...
    // let's do something
    if (p) {
        doSomething(p);
    }
}
This is clumsy because every time you call doSomething you should really check that you're not passing in a bad pointer. What if this is forgotten? We could get an access violation.
In this case, the following is better: you write the check in only one place, and there is no extra overhead because we always want to ensure we're not passing in a bad pointer.
void doSomething(Pointer *p) {
    if (p) {
        p->doSomethingElse();
    }
}
So in general, it depends on the situation. There are no right and wrong answers, just pros and cons here.
I'm developing a SL3 application with Prism. I need to have support for validation (both field level (on the setter of the bound property) and before save (form level)), including a validation summary, shown when the save button is pressed.
But the samples I can find googling are either SL3 with a lot of code in code behind (very uncool and un-Prismy), or WPF related.
Does anyone know a reference application with some actual validation I can look into?
Cheers,
Ali
There aren't any from Microsoft at present, but I'll pass this one on to the PRISM team tomorrow to see if we can get a basic form-validation example into the next rev of PRISM.
That being said, you can put a validator on each form that essentially validates every field (semantic and/or syntactic validation) and returns a true/false state depending on whether they all pass.
A way I typically do this is to attach a "CanSave" method to my Commands, i.e.:
SaveOrderCommand = new DelegateCommand<object>(this.Save, this.CanSave);

private bool CanSave(object arg)
{
    return this.errors.Count == 0 && this.Quantity > 0;
}
Then in this.CanSave I either put the basic validation inside this codebase, or I call a bunch of other validators depending on the context - some would be shared across all modules (e.g. IsEmailValid would be a validator I place in my Infrastructure module as a singleton; I pass in my string and it returns true/false). Once they all pass, make sure CanSave returns true; if they fail, CanSave returns false.
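To make that concrete, here is a hedged sketch of the shape described above; OrderViewModel, IEmailValidator and the field names are illustrative, not from any Prism sample:
// Hypothetical shared validator, registered as a singleton in the Infrastructure module.
public interface IEmailValidator
{
    bool IsEmailValid(string email);
}

public class OrderViewModel
{
    private readonly List<string> errors = new List<string>();   // field-level errors collected by property setters
    private readonly IEmailValidator emailValidator;              // injected shared validator (assumed)

    public OrderViewModel(IEmailValidator emailValidator)
    {
        this.emailValidator = emailValidator;
        this.SaveOrderCommand = new DelegateCommand<object>(this.Save, this.CanSave);
    }

    public DelegateCommand<object> SaveOrderCommand { get; private set; }
    public int Quantity { get; set; }
    public string CustomerEmail { get; set; }

    private bool CanSave(object arg)
    {
        // Form-level gate: no field errors plus whatever shared validators apply.
        return this.errors.Count == 0
            && this.Quantity > 0
            && this.emailValidator.IsEmailValid(this.CustomerEmail);
    }

    private void Save(object arg)
    {
        // Persist the order here.
    }
}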
Now, if they fail and you want to trigger a friendly reminder to the user that validation has failed, there are a number of techniques you can use. I've typically flagged the offending control at validation time as "failed" (I wrote my own, mind you, so it's up to you which toolkit you use here - http://www.codeplex.com/SilverlightValidator is not a bad one).
Now, I typically like to do more with forms that have validation on them: not only highlight the offending control (red box, icon, etc.) but also explain to the user in more detail what's required of them - thus a custom approach is the solution I've opted for.
At the end of the day, you're going to have to do some of the heavy lifting to validate your particular form - but look into ways to re-use validators where they make sense (email, SSN etc are easy ones to re-use).
HTH?
Scott Barnes - Rich Platforms Product Manager - Microsoft.