Defining mutations in GraphQL via fields: Is this bad practice? - graphql

Suppose you have a user type, and a user has many posts. Then imagine you want to find a user, and delete all of their posts. One way to do this is to implement the following mutation field:
field :deleteAllPosts, types[Types::PostType] do
  argument :user_id, types.String
  resolve ->(obj, args, ctx) {
    posts = Post.where(user_id: args[:user_id])
    posts.each { |post| post.destroy }
  }
end
Then the query
mutation {
  deleteAllPosts(user_id: "1") {
    id
  }
}
will delete all the posts of the user with id 1.
Before I did this, I thought about doing it a different way, which I've not seen anyone else do. I wanted to check that this different way doesn't have any pitfalls, or reasons I shouldn't use it.
The idea is instead to put a deletePost field on PostType, and a findUser field on the mutation root (which would typically be a query field). Assuming it's roughly clear how those fields would be defined (a sketch follows the query below), I would then make the query
mutation {
  findUser(id: 1) {
    posts {
      deletePost {
        id
      }
    }
  }
}
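For concreteness, here is a rough sketch of how those fields might look, reusing the legacy define-style DSL from above (Types::UserType, the User model and the exact field names are assumptions of mine, not an existing schema):
# Rough sketch only; names and types are assumed, not taken from a real schema.
Types::PostType = GraphQL::ObjectType.define do
  name "Post"
  field :id, types.ID
  # A "mutation-ish" field living on an ordinary object type:
  field :deletePost, -> { Types::PostType } do
    resolve ->(post, args, ctx) {
      post.destroy
      post
    }
  end
end

Types::MutationType = GraphQL::ObjectType.define do
  name "Mutation"
  # An ordinary lookup field, defined on the mutation root only so that the
  # nested deletePost selections run inside a mutation operation:
  field :findUser, Types::UserType do
    argument :id, types.ID
    resolve ->(obj, args, ctx) { User.find_by(id: args[:id]) }
  end
end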
Is this a bad idea?
Edit in response to feedback: One thing I'm concerned about is the possibility that a user could, in principle, make the deletePost selection inside of a query. But I'm tempted to say that that's "their fault". I'd like to say "this selection can only be made if it is inside of a mutation query", but I don't think that's possible in GraphQL.
In order to avoid the XY problem, here is why I am keen to use this idea rather than the initial one. It feels more expressive (said differently, it feels less redundant). Suppose that, after a while, you decide that you want to delete all the posts for those users belonging to a particular group. Then in what I regard as the 'convention', you should create a whole new mutation field:
field :deleteAllPostsInGroup, types[Types::PostType] do
  argument :group_id, types.String
  resolve ->(obj, args, ctx) {
    posts = Group.find_by(id: args[:group_id]).users.map { |u| u.posts }.flatten
    posts.each { |post| post.destroy }
  }
end
whereas in my suggested convention you just define a trivial findGroup field (but you have to define it on mutation, where it doesn't belong), and make the query:
mutation {
  findGroup(id: 1) {
    users {
      posts {
        deletePost {
          id
        }
      }
    }
  }
}
I suppose that really what I'm trying to do is use a query to find some data, and then mutate the data I've found. I don't know how to do this in GraphQL.
Second Edit: It seems like there is a well-defined component of this question, which I have asked here. It may turn out that one of these questions answers the other, and can be closed, but I don't know which way round yet.

This is basically a code quality issue and is similar to asking about the point of the DRY principle or encapsulation.
A quote from https://graphql.org/learn/queries/ reads:
In REST, any request might end up causing some side-effects on the server, but by convention it's suggested that one doesn't use GET requests to modify data. GraphQL is similar - technically any query could be implemented to cause a data write. However, it's useful to establish a convention that any operations that cause writes should be sent explicitly via a mutation.
This is a good convention because it makes maintenance, testing and debugging easier. Side-effects, whether intentional or not, can be awfully difficult to track and understand, particularly if they live in GraphQL queries, which can be arbitrarily large and complex. There is nothing preventing you from querying and modifying the same object and its siblings at the same time, and doing this multiple times in one query by simple nesting. It is very easy to get this wrong.
Even if you pull it off, code readability and maintainability suffer. For example, if you know that only your mutations ever modify the data and that queries have no effect on it, you immediately know where to start looking for the implementation of a particular behaviour. It is also a lot easier to reason about how your program works in general.
If you only write small, properly named, granular mutations, you can reason about what they do more easily than you could if you had a complex query which updated different data at different points.
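For instance, a single-purpose mutation is easy to name, test and search for. A sketch, reusing the legacy define-style DSL from the question (the field name and model calls are mine):
# Illustration only; names are assumed.
field :deletePost, Types::PostType do
  argument :id, types.ID
  resolve ->(obj, args, ctx) {
    post = Post.find_by(id: args[:id])
    post.destroy if post
    post
  }
end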
Last but not necessarily least, sticking to conventions is useful if you ever need to transfer your work to someone else.
In short - it is all about making the lives of yourself and others easier in the future.
EDIT
OK, so I see where you are going with this - you want to give the flexibility of a GraphQL query to your mutations. Sure, this particular example would work. The reasons not to go this way are all about the future; there is no point in discussing it if deletePost is the only operation you will ever define.
If that's not the case, then what if you wanted to delete, say, 5 specific user posts? Would you give extra parameters to findGroup and then pass them down the tree? But then why does the findGroup method have to know what you will do with its results? That rather defeats the idea of a flexible query. What if you also wanted to perform mutations on users? More parameters for findGroup? What if users and posts can be queried in other ways, say users by domains, posts by categories, etc.? Do you define the same parameters there too?
How would you ensure that with every operation (especially if you do several of them at once) all the relational links are properly erased in your database? You would have to imagine every possible combination of queries and query-mutations and code for each of them. Since query size is unlimited, that could end up being very hard to do. And even if the purpose of an individual query-mutation (deletePost) is clear and easy to grasp, the overall query would not be.
Your queries would quickly become too complex to understand even for you, and you would probably start breaking them down into smaller ones which only do specific mutations. That way you'd be back at the original convention, only a more complex version of it. You would probably also end up defining some regular mutations anyway - how would you update or add posts, for example? That would spread your logic all over the place.
These questions would not occur if you were writing mutations. That's slightly more work in exchange for better maintainability.
These are all potential issues in the future (and there are probably more). If these don't concern you, then please go ahead with the implementation. I personally would run away from a project that did this, but if you are really clever, I don't see anything that would technically completely prevent you from achieving what you want :]

Related

2 JSON Schema Questions: Is the type keyword required, and what is the difference between Core and Validation

Okay I have been UP and DOWN the internet and I cannot find an answer that DEFINITIVELY answers the following question.
"Is the type keyword required?" If it is not then can some one, for all that is holy, please, in EXCRUCIATING detail, describe what should happen when it is not provided, validation-wise.
I have found this...
http://json-schema.org/draft/2020-12/json-schema-validation.html#rfc.section.6.1.1
But I have found so many other examples where a schema object can be defined and not have this keyword.
For example I have found this repo with testing examples.
https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/master/tests/draft7/additionalProperties.json
Here they have a schema at line 5. It does not have a type but does look like they are talking about an object. Also on lines 21 - 25 they describe a test where an array is valid.
Can someone please clarify this for me.
Also, for the second one: what is the difference between Core and Validation as defined here...
https://json-schema.org/specification.html
Thank you in advance
1. Is the type keyword required?
No. Keywords will respond to instances of the types they're designed for, otherwise they will be ignored (silently pass validation). So
{ "minimum": 5 }
will pass anything as long as it's not a number less than 5. Objects, strings, arrays, etc., all pass. But as soon as you introduce a number, this keyword becomes interested and it'll do its thing.
Every keyword has a type or set of types that it responds to. type is one of the ones that responds to all of them.
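If you want non-numbers to be rejected rather than ignored, you combine the two keywords. A minimal example (using nothing beyond what's discussed above):
{ "type": "number", "minimum": 5 }
Against this schema a string such as "hello" now fails the type check instead of slipping past minimum.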
2. What are the different specs for?
We (the spec authors) thought it would make things a little simpler if we split the specification into two parts: one for the schema construction keywords (e.g. $id, $schema, allOf, properties, etc.), and one for value validation and annotation (e.g. minimum, minLength, etc.). It does mean that you have to look into several documents in order to create a validator, however.
It also allows us to revise one of them without the other, though we've never done that.
This split was done several iterations ago, and we've just kept it as it seems to work well.

how to respect Post.CommentsAllowed if Post and Comment are separate aggregate roots?

In a classic example of 2 aggregate roots like:
class Post
{
    string AuthorId;
    string Title;
    string Content;
    bool AllowComments;
    ...
}

class Comment
{
    string AuthorId;
    string Content;
    DateTime Date;
    ...
}
When creating a new comment, how do we ensure that comments are added only to posts that have Post.AllowComments = true?
Bear in mind that when a user starts writing a comment, Post.AllowComments could very well be true, but in the meantime (while the comment is being written) the Post author might change it to false.
Or even at submission time: when we check Post.AreCommentsAllowed() it could return true, but by the time we do CommentRepository.Save(comment) it could be false.
Of course, one Post might have many Comments, so it might not be practical to have a single aggregate where Post holds a collection of Comments.
Is there any other solution to this?
PS.
I could do a DB transaction within which I'd check it, but I'm looking for a DDD purist solution.
I'm looking for a DDD purist solution.
Basic idea first: if our Comments logic needs information from our Post aggregate, then what we normally do is pass a copy of that information as an argument.
So our application code would, in this case, get a local copy of AllowComments, presumably by retrieving the handle to the Post aggregate root and invoking some query in its interface.
Within the comments aggregate, you would then be able to use that information as necessary (for instance, as an argument to some branching logic)
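A rough sketch of that shape (PostSummary and Comment.Create are invented names, illustrating "pass a copy of the Post's data as an argument"):
using System;

// Sketch only; names are illustrative, not from the question.
public sealed class PostSummary
{
    public string PostId { get; init; }
    public bool AllowComments { get; init; }
}

public sealed class Comment
{
    public string AuthorId { get; }
    public string Content { get; }
    public DateTime Date { get; }

    private Comment(string authorId, string content, DateTime date)
    {
        AuthorId = authorId;
        Content = content;
        Date = date;
    }

    // The branching logic lives with the Comment, but the Post data it
    // depends on arrives as a plain argument (a possibly stale copy).
    public static Comment Create(PostSummary post, string authorId, string content, DateTime now)
    {
        if (!post.AllowComments)
            throw new InvalidOperationException("Comments are disabled for this post.");

        return new Comment(authorId, content, now);
    }
}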
Race conditions are hard.
A microsecond difference in timing shouldn’t make a difference to core business behaviors. -- Udi Dahan
If data here must be consistent with data there, then the answer is that we have to lock both when we are making our change.
In an application where information is stored locally, that's pretty straightforward, at least on paper: you just need to acquire locks on both objects at the same time, so that the data doesn't change out from under you. There's a bit of care required to ensure that you don't get deadlocked (see the dining philosophers problem).
In a distributed system, this can get really nasty.
The usual answers in "modern purist DDD" is that you either relax the consistency requirement (the lock that you are reading is allowed to change while you are working) and you mitigate the inconsistencies elsewhere (see Memories, Guesses, and Apologies by Pat Helland) OR you change your aggregate design so that all of the information is enclosed within the same aggregate (here, that would mean making the comment entities parts of the post aggregate).
Also: creation patterns are weird; you expect that the entity you are intending to create doesn't yet exist in the repository (but off the happy path maybe it does), so that business logic doesn't fit as smoothly into the usual "get the handle from the repository" pattern.
So the conditional logic needs to sneak somewhere else -- maybe into the Post aggregate? Maybe you just leave it in the application code? Ultimately, somebody is going to have to tell the application code if anything is being saved in the repository.
As far as I can tell, there isn't a broad consensus on how to handle conditional create logic in DDD, just lots of different compromises that might be "good enough" in their local context.

Pointless getter checks when updating an object

Hopefully this will not come across as a silly or pedantic question, but I'm curious.
Occasionally I'll be in a situation where an existing object's properties may need to be updated with new variables, and I'll do it like this (in no particular language):
public void Update(date, somevar) {
    if (date > this.Date) {
        this.Var = somevar;
    }
}
The idea being that if the date passed to the function is more recent than the date in the current object, the variable is updated. Think of it as like a basic way of caching something.
Now, the interesting part is that I know somevar will never be "old" when compared to this.Var, but it may be the same. So as far as I can see, checking the date is pointless, and therefore a pointless operation for the program to perform.
So what this is really about is whether it's better - in whatever way - to perform a write to this.Var every time Update is called, or getting this.Date, comparing it, then possibly performing the write. And just to throw in something interesting here, what if Update were to be called multiple times?
If the example I've given makes no sense or has holes in it, I apologise; I can't think of another way of giving an example, but hopefully you can see the point I'm trying to make here.
Unless for some reason assignment is an expensive operation (e.g. it always triggers a database write), this isn't going to make your programme faster.
The point of putting checks in your setters is usually to enforce data integrity, i.e. to preserve programme invariants, and thus the correctness of your other code, which is rather more important.
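As a rough C#-ish illustration (the names are invented), the guard earns its keep by protecting the "newest data wins" invariant, not by saving a cheap write:
using System;

// Sketch only; names are invented.
public sealed class CachedValue
{
    public DateTime Date { get; private set; }
    public string Var { get; private set; }

    public void Update(DateTime date, string someVar)
    {
        // Without this check, a late-arriving call could overwrite newer
        // data with older data; the cost of the assignment is irrelevant.
        if (date > Date)
        {
            Date = date;   // also record the newer date (elided in the question's sketch)
            Var = someVar;
        }
    }
}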

Would you abstract your LINQ queries into extension methods

On my current project we set ourselves goals for the code metrics "Maintainability Index" and "Cyclomatic Complexity": the Maintainability Index should be 60 or higher and the Cyclomatic Complexity 25 or less. We know that a Maintainability Index of 60 or higher is a pretty high bar.
We also use a lot of LINQ to filter/group/select entities. I found that these LINQ queries don't score that high on the Maintainability Index.
Abstracting these queries into extension methods gives me a higher Maintainability Index, which is good. But in most cases the extension methods are no longer generic, because I use them with my own types instead of generic type parameters.
For example, the following LINQ query vs. extension method:
LINQ query:
List.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo)
Extension method:
public static IEnumerable<MyType> FilterBy(this IEnumerable<MyType> source, DateTime selectionFrom, DateTime selectionTo)
{
    return (IEnumerable<MyType>)source.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo);
}
List.FilterBy(selectionFrom, selectionTo);
The extension method gives me a Maintainability Index improvement of 6 points, and gives a nice fluent syntax.
On the other hand, I have to add a static class, and the method is not generic.
Any ideas on which approach you would favor? Or do you have different ideas about how to refactor the LINQ queries to improve the Maintainability Index?
You shouldn't add classes for the sake of metrics. Any metrics are meant to make your code better but following rules blindly, even the best rules, may in fact harm your code.
I don't think it's a good idea to stick to certain Maintainability and Complexity indexes. I believe they are useful for evaluating old code, i.e. when you inherited a project and need to estimate its complexity. However, it's absurd to extract a method because you haven't scored enough points.
Only refactor if such refactoring adds value to the code. Such value is a complex human metric inexpressible in numbers, and estimating it is exactly what programming experience is about—finding balance between optimization vs readability vs clean API vs cool code vs simple code vs fast shipping vs generalization vs specification, etc.
This is the only metric you should follow but it's not always the metric everyone agrees upon...
As for your example, if the same LINQ query is used over and over, it makes perfect sense to create an EnumerableExtensions class in an Extensions folder and extract it there. However, if it is used once or twice, or is subject to change, the verbose query is much better.
I also don't understand why you say they are not generic with a somewhat negative connotation. You don't need generics everywhere! In fact, when writing extension methods, you should consider the most specific types you can choose, so as not to pollute other classes' method sets. If you want your helper to only work with IEnumerable<MyType>, there is absolutely no shame in declaring an extension method exactly for IEnumerable<MyType>. By the way, there's a redundant cast in your example. Get rid of it.
And don't forget, tools are stupid. So are we, humans.
My advice to you would be ... don't be a slave to your metrics! They are machine generated and only intended to be used as guidance. They are never going to be a replacement for a skilled experienced programmer.
Which do you think is right for your application?
I for one agree with the extension method strategy. I've used it without a problem in a handful of real-world apps.
To me, it is not only about the metrics, but also about the reusability of the code. See the following pseudo-examples:
var x = _repository.Customers().WhichAreGoldCustomers();
var y = _repository.Customers().WhichAreBehindInPayments();
Having those two extension methods accomplishes your goal for metrics, and it also provides "one place for the definition of what it is to be a gold customer." You don't have different queries being created in different places by different developers when they need to work with "gold customers."
Additionally, they are composable:
var z = _repository.Customers().WhichAreGoldCustomers().WhichAreBehindInPayments();
IMHO this is a winning approach.
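A sketch of what those extension methods might look like (Customer and its properties are invented for illustration):
using System.Linq;

// Sketch only; Customer and its properties are invented.
public class Customer
{
    public bool IsGoldCustomer { get; set; }
    public decimal OutstandingBalance { get; set; }
}

public static class CustomerQueryExtensions
{
    // One place for the definition of "what it is to be a gold customer".
    public static IQueryable<Customer> WhichAreGoldCustomers(this IQueryable<Customer> source)
        => source.Where(c => c.IsGoldCustomer);

    public static IQueryable<Customer> WhichAreBehindInPayments(this IQueryable<Customer> source)
        => source.Where(c => c.OutstandingBalance > 0);
}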
The only problem we've faced is that there is a ReSharper bug that sometimes the Intellisense for the extension methods goes crazy. You type ".Whic" and it lets you pick the extension method you want, but when you "tab" on it, it puts something completely different into the code, not the extension method that you selected. I've considered switching from ReSharper for this, but... nah :)
NO: in this case I would ignore the cyclomatic complexity - what you had originally was better.
Ask yourself what is more explanatory. This:
List.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo)
or this:
List.FilterBy(selectionFrom, selectionTo);
The first clearly expresses what you want, whereas the second does not. The only way to know what "FilterBy" means is to go into the source code and look at its implementation.
Abstracting query fragments into extension methods makes sense with more complex scenarios, where it's not easy to judge at a glance what the query fragment is doing.
I have used this technique in places, for example a class Payment has a corresponding class PaymentLinqExtensions which provides domain specific extensions for Payments.
In the example you give I'd choose a more descriptive method name. There is also the question of whether the range is inclusive or exclusive. Otherwise it looks OK.
If you have multiple objects in your system for which the concept of having a date is common then consider an interface, maybe IHaveADate (or something better :-)
public static IQueryable<T> WithinDateRange<T>(this IQueryable<T> source, DateTime from, DateTime to) where T : IHaveADate
(IQueryable is interesting. I don't think IEnumerable can cast to it which is a shame. If you're working with database queries then it can allow your logic to appear in the final SQL that is sent to the server which is good. There is the potential gotcha with all LINQ that your code is not executed when you expect it to be)
If date ranges are an important concept in your application, and you need to be consistent about whether the range starts at midnight on the end of "EndDate" or midnight at the start of it, then a DateRange class may be useful. Then
public static IQueryable<T> WithinDateRange<T>(this IQueryable<T> source, DateRange range) where T : IHaveADate
You could also, if you feel like it, provide
public static IEnumerable<T> WithinDateRange<T>(this IEnumerable<T> source, DateRange range, Func<T, DateTime> getDate)
but this to me feels more something to do with DateRange. I don't know how much it would be used, though your situation may vary. I've found that getting too generic can make things hard to understand, and LINQ can be hard to debug.
var filtered = myThingCollection.WithinDateRange(myDateRange, x => x.Date)
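Putting those pieces together, a rough sketch (all names illustrative):
using System;
using System.Linq;

// Sketch of the IHaveADate / DateRange idea; names are invented.
public interface IHaveADate
{
    DateTime Date { get; }
}

public sealed class DateRange
{
    public DateRange(DateTime from, DateTime to)
    {
        From = from;
        To = to;
    }

    public DateTime From { get; }
    public DateTime To { get; }
}

public static class DateRangeQueryExtensions
{
    // The comparison is inlined (rather than calling a DateRange method) so
    // that a LINQ-to-SQL/EF provider can still translate it to SQL.
    public static IQueryable<T> WithinDateRange<T>(this IQueryable<T> source, DateRange range)
        where T : IHaveADate
        => source.Where(x => x.Date >= range.From && x.Date <= range.To);
}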

Is checking Perl function arguments worth it?

There's a lot of buzz about MooseX::Method::Signatures and even before that, modules such as Params::Validate that are designed to type check every argument to methods or functions. I'm considering using the former for all my future Perl code, both personal and at my place of work. But I'm not sure if it's worth the effort.
I'm thinking of all the Perl code I've seen (and written) before that performs no such checking. I very rarely see a module do this:
my ($a, $b) = @_;
defined $a or croak '$a must be defined!';
!ref $a or croak '$a must be a scalar!';
...
@_ == 2 or croak "Too many arguments!";
Perhaps because it's simply too much work without some kind of helper module, but perhaps because in practice we don't send excess arguments to functions, and we don't send arrayrefs to methods that expect scalars - or if we do, we have use warnings; and we quickly hear about it - a duck typing approach.
So is Perl type checking worth the performance hit, or are its strengths predominantly shown in compiled, strongly typed languages such as C or Java?
I'm interested in answers from anyone who has experience writing Perl that uses these modules and has seen benefits (or not) from their use; if your company/project has any policies relating to type checking; and any problems with type checking and performance.
UPDATE: I read an interesting article on the subject recently, called Strong Testing vs. Strong Typing. Ignoring the slight Python bias, it essentially states that type checking can be suffocating in some instances, and even if your program passes the type checks, it's no guarantee of correctness - proper tests are the only way to be sure.
If it's important for you to check that an argument is exactly what you need, it's worth it. Performance only matters when you already have correct functioning. It doesn't matter how fast you can get a wrong answer or a core dump. :)
Now, that sounds like a stupid thing to say, but consider some cases where it isn't. Do I really care what's in @_ here?
sub looks_like_a_number { $_[0] !~ /\D/ }
sub is_a_dog { eval { $_[0]->DOES( 'Dog' ) } }
In those two examples, if the argument isn't what you expect, you are still going to get the right answer because the invalid arguments won't pass the tests. Some people see that as ugly, and I can see their point, but I also think the alternative is ugly. Who wins?
However, there are going to be times that you need guard conditions because your situation isn't so simple. The next thing you have to pass your data to might expect them to be within certain ranges or of certain types and don't fail elegantly.
When I think about guard conditions, I think through what could happen if the inputs are bad and how much I care about the failure. I have to judge that by the demands of each situation. I know that sucks as an answer, but I tend to like it better than a bondage-and-discipline approach where you have to go through all the mess even when it doesn't matter.
I dread Params::Validate because its code is often longer than my subroutine. The Moose stuff is very attractive, but you have to realize that it's a way for you to declare what you want and you still get what you could build by hand (you just don't have to see it or do it). The biggest thing I hate about Perl is the lack of optional method signatures, and that's one of the most attractive features in Perl 6 as well as Moose.
I basically concur with brian. How much you need to worry about your method's inputs depends heavily on how much you are concerned that a) someone will input bad data, and b) bad data will corrupt the purpose of the method. I would also add that there is a difference between external and internal methods. You need to be more diligent about public methods because you're making a promise to consumers of your class; conversely you can be less diligent about internal methods as you have greater (theoretical) control over the code that accesses it, and have only yourself to blame if things go wrong.
MooseX::Method::Signatures is an elegant solution to adding a simple declarative way to explain the parameters of a method. Method::Signatures::Simple and Params::Validate are nice but lack one of the features I find most appealing about Moose: the Type system. I have used MooseX::Declare and by extension MooseX::Method::Signatures for several projects and I find that the bar to writing the extra checks is so minimal it's almost seductive.
Yes, it's worth it - defensive programming is one of those things that is always worth it.
The counterargument I've seen presented to this is that checking parameters on every single function call is redundant and a waste of CPU time. This argument's supporters favor a model in which all incoming data is rigorously checked when it first enters the system, but internal methods have no parameter checks because they should only be called by code which will pass them data which has already passed the checks at the system's border, so it is assumed to still be valid.
In theory, I really like the sound of that, but I can also see how easily it can fall like a house of cards if someone uses the system (or the system needs to grow to allow uses) in a way that was unforeseen when the initial validation border was established. All it takes is one external call to an internal function and all bets are off.
In practice, I'm using Moose at the moment and Moose doesn't really give you the option to bypass validation at the attribute level, plus MooseX::Declare handles and validates method parameters with less fuss than unrolling @_ by hand, so it's pretty much a moot point.
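For comparison, a rough sketch of the MooseX::Declare style just described (the class, attribute and method names are mine, loosely following the module's documented synopsis):
use MooseX::Declare;

class BankAccount {
    has balance => (is => 'rw', isa => 'Num', default => 0);

    # The signature declares and validates the parameter; no manual
    # unrolling of @_ and no hand-written type checks.
    method deposit (Num $amount) {
        $self->balance($self->balance + $amount);
    }
}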
I want to mention two points here.
The first is tests, the second is the performance question.
1) Tests
You mentioned that tests can do a lot, and that they are the only way to be sure your code is correct. In general I would say this is absolutely right, but tests only solve one problem. If you write a module, you have two problems, or let's say two different kinds of people who use your module: you as the developer, and the user who just uses it. Tests help with the first, ensuring your module is correct and does the right thing, but they don't help the user who just uses your module.
For the latter, I have an example. I had written a module using Moose and some other things, and my code always ended in a segmentation fault. I started debugging and spent around four hours finding the problem. In the end, the cause was that I had used Moose with the Array trait: I called the "map" function without providing a subroutine reference, passing a string or something else instead. Sure, it was an absolutely stupid mistake on my part, but I spent a long time debugging it. A check that the argument is a coderef would have cost the module's developer ten seconds, and it would have saved me, and probably others, a lot more time.
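Something like the following check (a sketch of mine, not the actual Moose internals) is all it would have taken:
use Carp qw(croak);

# Sketch only: the kind of ten-second argument check being described.
sub map_items {
    my ($self, $code) = @_;
    ref $code eq 'CODE'
        or croak 'map_items() expects a code reference';
    return map { $code->($_) } @{ $self->{items} };
}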
I know of other examples too. I had written a fully OO REST client for an interface, using Moose. You always got objects back; you could change their attributes, but of course it didn't call the REST API for every change you made. Instead you change your values and at the end call an update() method that transfers the data. Then I had a user who wrote:
$obj->update({ foo => 'bar' })
Sure enough, I got a report that update() does not work. But of course it didn't work: update() doesn't accept a hashref, it only synchronises the current state of the object with the online service. The correct code would be:
$obj->foo('bar');
$obj->update();
The first call only "works" because I never check the arguments, and I don't throw an error if someone passes more arguments than I expect. The method simply starts like this:
sub update {
    my ($self) = @_;
    ...
}
All my tests pass 100%, of course. But handling these errors that are not errors costs me time too, and it probably costs the user even more time.
So in the end: yes, tests are the only correct way to ensure that your code works correctly. But that doesn't mean type checking is meaningless. Type checking is there to help everyone who isn't developing your module to use it correctly, and it saves you and others time tracking down dumb errors.
2) Performance
The short version: you don't care about performance until you have to care.
That means that until your module is actually too slow, performance is fast enough and you don't need to worry about it. If your module really is too slow, you need to investigate further, and for that you should use a profiler like Devel::NYTProf to see what is slow.
And I would say that in 99% of cases the slowness is not because you do type checking; it's your algorithm. You do a lot of computation, call functions too often, and so on. Often it helps to take a completely different approach, use a better algorithm, add caching or something else, and the performance hit turns out not to be your type checking at all. But even if the checking is the bottleneck, then just remove it where it matters. There is no reason to strip out type checking where performance doesn't matter.
Do you think type checking matters in a case like the one above, where I wrote a REST client? 99% of the performance cost there is the number of requests that go to the web service and the time each request takes. Dropping type checking or MooseX::Declare would probably speed up absolutely nothing.
And even if you do see a performance disadvantage, sometimes it is acceptable, because the speed doesn't matter or because something gives you greater value in return. DBIx::Class is slower than plain SQL with DBI, but DBIx::Class gives you a lot in exchange.
Params::Validate works great, but of course checking args slows things down. Tests are mandatory (at least in the code I write).
Yes it's absolutely worth it, because it will help during development, maintenance, debugging, etc.
If a developer accidentally sends the wrong parameters to a method, a useful error message will be generated, instead of the error being propagated down to somewhere else.
I'm using Moose extensively for a fairly large OO project I'm working on. Moose's strict type checking has saved my bacon on a few occasions. Most importantly it has helped avoid situations where "undef" values are incorrectly being passed to a method. In just those instances alone it saved me hours of debugging time.
The performance hit is definitely there, but it can be managed. 2 hours of using NYTProf helped me find a few Moose Attributes that I was grinding too hard and I just refactored my code and got 4x performance improvement.
Use type checking. Defensive coding is worth it.
Patrick.
Sometimes. I generally do it whenever I'm passing options via hash or hashref. In these cases it's very easy to misremember or misspell an option name, and checking with Params::Check can save a lot of troubleshooting time.
For example:
use Params::Check qw(check);
use Carp qw(croak);

sub revise {
    my ($file, $options) = @_;

    my $tmpl = {
        test_mode       => { allow => [0, 1],    default => 0 },
        verbosity       => { allow => qr/^\d+$/, default => 1 },
        force_update    => { allow => [0, 1],    default => 0 },
        required_fields => { default => [] },
        create_backup   => { allow => [0, 1],    default => 1 },
    };

    my $args = check($tmpl, $options, 1)
        or croak "Could not parse arguments: " . Params::Check::last_error();
    ...
}
Prior to adding these checks, I'd forget whether the names used underscores or hyphens, pass require_backup instead of create_backup, etc. And this is for code I wrote myself--if other people are going to use it, you should definitely do some sort of idiot-proofing. Params::Check makes it fairly easy to do type checking, allowed value checking, default values, required options, storing option values to other variables and more.
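For illustration, a hypothetical call site (file name and option values invented):
# A misspelled key or a disallowed value here would be caught by check() above.
revise('notes.txt', { verbosity => 2, create_backup => 0 });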
