I have been using the BatchSqlUpdate class successfully for a while. The only annoyance in using it is that parameters need to be registered before running any query, via the declareParameter or setParameters methods, which means that the type of each parameter has to be declared as well. However, Spring also provides a NamedParameterJdbcTemplate class which has a very convenient batchUpdate method that takes named parameters as input (an array of maps or SqlParameterSource objects) without requiring them to be declared first. On top of that, this class can be reused easily, and I also believe it's thread-safe.
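To illustrate, here is roughly what the two usages look like (a minimal sketch; the table, columns, and values are just examples, and dataSource is an existing javax.sql.DataSource):

import java.sql.Types;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.core.namedparam.SqlParameterSource;
import org.springframework.jdbc.object.BatchSqlUpdate;

// BatchSqlUpdate: every parameter must be declared with its SQL type first.
BatchSqlUpdate update = new BatchSqlUpdate(dataSource,
        "UPDATE users SET name = ? WHERE id = ?");
update.declareParameter(new SqlParameter(Types.VARCHAR));
update.declareParameter(new SqlParameter(Types.INTEGER));
update.update("Alice", 1); // queued, not executed yet
update.flush();            // executes the accumulated batch

// NamedParameterJdbcTemplate: named parameters, no declaration needed.
NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(dataSource);
SqlParameterSource[] batch = {
    new MapSqlParameterSource().addValue("name", "Alice").addValue("id", 1),
    new MapSqlParameterSource().addValue("name", "Bob").addValue("id", 2)
};
template.batchUpdate("UPDATE users SET name = :name WHERE id = :id", batch);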
So I have a couple of questions about this:
What's the recommended way to perform (multiple) batch updates?
Why is this feature duplicated across two different classes that also behave differently?
Why does BatchSqlUpdate require declared parameters if NamedParameterJdbcTemplate does not?
Thanks for the thoughts!
Giovanni
After doing some research, I reached the following conclusions.
First of all, I realized that the NamedParameterJdbcTemplate class is the only one accepting named parameters for batch updates. The method batchUpdate(String sql, Map[] batchValues) was added in Spring 3 to achieve this.
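That overload can be used like this (a minimal sketch; the table and column names are invented, and template is an existing NamedParameterJdbcTemplate):

import java.util.HashMap;
import java.util.Map;

// Creating a generic array triggers an unchecked warning; fine for a sketch.
@SuppressWarnings("unchecked")
Map<String, Object>[] batchValues = new HashMap[2];
batchValues[0] = new HashMap<>();
batchValues[0].put("name", "Alice");
batchValues[0].put("id", 1);
batchValues[1] = new HashMap<>();
batchValues[1].put("name", "Bob");
batchValues[1].put("id", 2);
int[] updateCounts = template.batchUpdate(
        "UPDATE users SET name = :name WHERE id = :id", batchValues);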
The BatchSqlUpdate class contains an overridden update(Object... params) method that, as stated in the javadoc, adds the given statement parameters to the queue rather than executing them immediately. This means the statements are executed only when the flush() method is called or the batch size exceeds the maximum. This class doesn't support named parameters, although it does inherit an updateByNamedParam() method from SqlUpdate. This is unfortunate, since updateByNamedParam() would allow reuse of the same map for the named parameters, whereas the NamedParameterJdbcTemplate.batchUpdate() method requires an array of maps, with the associated overhead, code bloat, and complications in reusing the array of maps if the batch size is variable.
I think it would be useful to have an overridden version of updateByNamedParam(Map paramMap) in BatchSqlUpdate that behaves just like update(Object... params) but with added support for named parameters.
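To make the suggestion concrete, usage of such an override might look like the following. This is purely hypothetical; BatchSqlUpdate does not currently behave this way, and the Person/people names are invented:

// HYPOTHETICAL: the proposed queueing version of updateByNamedParam.
// "update" is a BatchSqlUpdate instance as in the question above.
Map<String, Object> params = new HashMap<>();
for (Person p : people) {
    params.put("name", p.getName()); // the same map instance is reused per row
    params.put("id", p.getId());
    update.updateByNamedParam(params); // would queue, like update(Object...)
}
update.flush(); // would execute the whole batch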
I'm new to Magento 2. In Magento 1, as you know, we could call methods from other classes easily, thanks to Mage::
In Magento 2, I notice that every time I want to use methods from another class, I have to inject the dependencies first, which can make the constructor look very long, with many injections. I have read that we can use the object manager, but that it's not preferred, and I'm not sure why.
The most obvious advantage of using dependency injection instead of the object manager is that you can use the injected dependency anywhere in your class. With the object manager you have to fetch the object in each function individually. That may seem the more practical approach at first, but with more complex code your functions will bloat, because you always have to go through the object manager instead of calling the method directly on the dependency. I'd rather have a "big block of construction" at the top than all those object manager instances in my functions.
Also, it can be quite tricky to use the object manager correctly. Maybe have a look at this:
https://magento.stackexchange.com/questions/117098/magento-2-to-use-or-not-to-use-the-objectmanager-directly
When developing large-scale command-line applications composed of multiple classes that need to use options passed from the command line, how do you structure the code so that those options are available where needed?
I write code like this:
class DatabaseHandler
  def initialize(cmd_options = {})
    @cmd_options = cmd_options
  end

  def some_method
    puts @cmd_options[:cmd_parameter]
  end
end
which seems tedious and unnecessary to me. What is the Ruby best practice for using command-line option parameters in your project's classes? Help appreciated.
Ruby classes are simply that: classes. Good OOD applies, even if it's a command-line application. If you need custom behaviour, use dependency injection or configure it via arguments. You may share the command-line options using global variables, but as always, that comes at a price: shadowing, increased complexity in understanding the collaborators of a class, difficulty finding the source of a piece of data, etc.
I'd suggest using factory methods to parse the input and return the correct configuration to be passed to a class, as sketched below. If you want nice examples of dependency injection, watch some Sandi Metz talks; she really knows her stuff.
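For instance, a factory can turn the raw argument array into the configuration a class needs (sketched here in Java to show the shape, since the idea is language-agnostic; all names are invented):

import java.util.HashMap;
import java.util.Map;

// Hypothetical factory: parse "--key=value" arguments into an options map,
// which is then injected into the collaborating class (all names invented).
public class OptionsFactory {
    public static Map<String, String> parse(String[] args) {
        Map<String, String> options = new HashMap<>();
        for (String arg : args) {
            if (arg.startsWith("--") && arg.contains("=")) {
                String[] pair = arg.substring(2).split("=", 2);
                options.put(pair[0], pair[1]);
            }
        }
        return options;
    }
}

A main entry point would then construct collaborators with something like new DatabaseHandler(OptionsFactory.parse(args)), so no class other than the factory ever touches the raw arguments.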
Command-line options are no different from any other kind of configuration; configuration is configuration, no matter where it came from. So, you deal with them the same way you deal with other configuration, e.g. using
a global singleton
Dependency Injection (which is what you are doing in your example)
the Reader Monad
…
For example, for request parameters (which is probably the closest analog to command-line arguments in a Web context), Rails uses a global method named params which returns a Hash-like object mapping parameter names to arguments. So, that would be an example of a global singleton.
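For illustration, the global-singleton flavour could look like this (a Java sketch of the pattern, not Rails' actual implementation; all names are invented):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical global singleton holding parsed command-line options:
// reachable from everywhere, at the cost of a hidden global collaborator.
public final class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();
    private final Map<String, String> options = new ConcurrentHashMap<>();

    private AppConfig() {}

    public static AppConfig instance() { return INSTANCE; }

    public void put(String key, String value) { options.put(key, value); }

    public String get(String key) { return options.get(key); }
}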
I know early versions of Grails used prototype scope for controllers because actions were all closures at that time.
I know that the current version documentation recommends singleton scoped controllers for controllers that use methods as actions.
From the following post it seems that methods and singleton scope are more desirable or recommended, but it's not clear why.
http://grails.1312388.n4.nabble.com/Default-scope-for-controllers-doc-td4657986.html
We have a large project that uses prototype-scoped controllers with actions as methods. Changing to the recommended controller scope involves risk and retesting, as well as removing any non-singleton-friendly state from existing controllers.
I'd like to know why Grails recommends singleton scope for controllers with methods as actions. Is it just because that's more common and similar to Spring MVC, and avoids confusion, or is there an opportunity for performance improvement, or something else? What do I gain if I make the switch to singleton controllers? What is the cost if I don't?
I haven't worked much with Rails, but (at least in the versions I played with; things may be different now) the controller is the model, containing the data to be rendered by the view. During request handling you store values in controller instance fields before handing off processing to the view renderers. So there it makes sense to create a new controller instance for each request, keeping requests distinct.
Grails was inspired by Rails and uses several of its conventions, most famously convention-over-configuration. But the ability to use the controller as the model was also added as a feature, although it wasn't well documented and I doubt many used it.
The typical way a controller action works when using a GSP to render the response (as opposed to forwarding or redirecting, or rendering directly in the controller, e.g. with render foo as JSON) is to return a Map with one or more key/value pairs from the action, and often the return keyword is omitted since it's optional in Groovy:
class MyController {
    def someAction() {
        def theUser = ...
        def theOtherObject = ...
        [user: theUser, other: theOtherObject]
    }
}
Here the model map has two entries, one keyed by user and the other keyed by other, and those will be the variable names used in the GSP to access the data.
But what most people don't know is that you could also do it like this:
class MyController {
    def user
    def other

    def someAction() {
        user = ...
        other = ...
    }
}
In this case no model map is returned from the action, so Grails populates the model from all properties of the controller class, and the same GSP works for both approaches, since the variable names in the second approach are the same as the map keys in the first.
The option to make controllers singletons was added in 2.0 (technically 1.4 before it was renamed to 2.0, see this JIRA issue) and we also added support for methods as actions in addition to retaining support for closures. The original assumption was that using closures would enable some interesting features, but that never happened. Using methods is more natural since you can override them in subclasses, unlike closures which are just class-scope fields.
As part of the 2.0 rework we removed that Rails-inspired feature, on the assumption that, since it was essentially undocumented, the impact on the few odd apps that used it wouldn't be significant. I don't recall anyone ever complaining about the loss of that feature.
Although controller classes are typically readily garbage-collectible and creating an instance per request doesn't cost much, it's rare to need per-request state in a controller, so singletons usually make more sense. The default prototype scope was kept for backwards compatibility, but it's easy to change the default with a Config.groovy property (and the file generated by the create-app script does this).
Each request does get a new request and response, and if sessions are used each user has their own, but those are not instance fields of the controllers. They look like they are, because we can access request, response, session, params, etc. inside actions, but those are actually the property-access forms of the getRequest(), getResponse(), getSession(), and getParams() methods that are mixed into all controllers during compilation by AST transforms. The methods access their objects not as class fields but via ThreadLocals, so there is per-request state, but it's not stored in the controller instance.
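As a rough illustration of that pattern (a simplified plain-Java sketch, not Grails' actual implementation; Spring's RequestContextHolder works along similar lines):

import javax.servlet.http.HttpServletRequest;

// Simplified sketch: per-request state kept in a ThreadLocal rather than in
// an instance field, so one singleton can safely serve concurrent requests.
public final class RequestHolder {
    private static final ThreadLocal<HttpServletRequest> CURRENT = new ThreadLocal<>();

    public static void set(HttpServletRequest request) { CURRENT.set(request); }

    public static HttpServletRequest get() { return CURRENT.get(); }

    // Must be called at the end of each request to avoid leaks on pooled threads.
    public static void clear() { CURRENT.remove(); }
}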
I don't remember if there was much in the way of benchmarking to compare using methods versus closures, but Lari Hotari probably did some. If there was a difference, it was probably not significant. You could test this in your own application by converting just one or a few controllers and running before-and-after tests. If the performance, scaling, and memory differences aren't significant between the two approaches, you'll probably be safe staying with prototype scope and/or closures. If there is a difference and you don't have instance fields in your controllers, then it'll probably be worth the effort to convert to singletons and methods.
If you do have instance fields they can probably be converted to request attributes - request.foo = 42 is a metaclass shortcut for request.setAttribute('foo', 42), so you could store per-request data safely there instead.
If I want to handle many parameters from, for example, a web request, and pass them between classes (layers), what is the preferred way?
I know it is easy to pass an optional number of parameters through the constructor as a map.
I could also pass a map directly; if the keys match the receiving object's property names, it works in a similar way.
Or I could just pass the map along and then instantiate, for example, domain classes from it.
Or I could use a special class as a data carrier with a fixed set of properties.
I have a domain class (not database domain but business domain) that needs data from the user interface.
What is the best way to pass data through the layers, and how do I know that all required data is being passed if I use a data structure, like a map, with key-value pairs? If I had a more static constructor with a fixed number of parameters, I would know that the parameters are being passed. But how do I ensure this when using a more dynamic approach? With unit tests?
Well, in Grails, command objects are an excellent choice. You can pass them up through the various layers without issues. They are pretty analogous to domain classes, only without the persistence functionality.
Otherwise I would recommend using plain old Groovy objects (POGOs). Groovy allows you to keep your code very short (compared to Java and many other languages) and offers very handy transforms for common design patterns you might need (e.g. @Canonical, @Immutable, @IndexedProperty, @DelegatesTo...).
Compared to command objects, POGOs do require you to write e.g. validation code yourself, but this can be as simple as:
boolean isValid() {
    name && lastName && countryCode in ['US', 'CA']
}
You can keep static factories in a POGO to help construct instances in the various circumstances (see the sketch below). Plus, you can define more than one class per file, so you can keep the POGO code wherever it makes the most sense. I would definitely prefer this approach to simple maps, because the code is better encapsulated, and POGOs can be unit tested and documented.
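For illustration, here is the same shape sketched in Java (all names invented; in Groovy most of this boilerplate disappears thanks to the transforms mentioned above):

import java.util.Map;

// Hypothetical data carrier with a static factory and simple validation,
// mirroring the POGO approach described above (all names invented).
public final class CustomerData {
    private final String name;
    private final String lastName;
    private final String countryCode;

    private CustomerData(String name, String lastName, String countryCode) {
        this.name = name;
        this.lastName = lastName;
        this.countryCode = countryCode;
    }

    // Static factory, e.g. built from a map of request parameters.
    public static CustomerData fromParams(Map<String, String> params) {
        return new CustomerData(params.get("name"),
                                params.get("lastName"),
                                params.get("countryCode"));
    }

    public boolean isValid() {
        return name != null && lastName != null
                && ("US".equals(countryCode) || "CA".equals(countryCode));
    }
}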
I want to dynamically create and query tables using DataMapper.
While DataMapper allows you to work with legacy tables and schemas, and in this way lets you set the table name used, this can only be done during initialisation, not while the application is running.
Is there an easy way to tell DataMapper to migrate/upgrade a model with an assigned table name from within the application, and to then tell it to query this table?
This should not be a problem.
All Ruby classes can be created and re-defined at run-time. Even initialization happens at run-time; it just happens to be executed first, before other code.
That is why monkey-patches work so easily: they are just additional code, run at initialization, that re-defines classes to add extra methods, variables, etc.
There is no Ruby code that is "special" in the sense that it only runs at compile time. Ruby is an interpreted language.
To dynamically create a class, see Dynamically creating class in Ruby.
Assuming you don't need to dynamically create classes from an array of strings, you can define additional methods with define_method, or call DataMapper methods at run-time to add properties.
To define new methods in a class:
Post.send :define_method, :new_method_name do
end
To define a new property using the DataMapper property method:
class Post
  include DataMapper::Resource

  property :title, String # the static way
end

Post.send :property, :title, String # add a property the dynamic way (at run-time)
Do note that any tables or properties you define at run-time will not be available if you restart your server, unless the code that dynamically generates them is re-executed.
To update your tables at runtime, you simply do the same thing as normal, that is, call:
DataMapper.auto_upgrade!
To upgrade only a single table, you can also do:
Post.auto_upgrade!
A second warning: if you have multiple processes, the dynamic code will need to be run in each process, or the additional table models and properties will not be available.
This is a problem if you have multiple worker processes, as might happen in production (e.g. Nginx with multiple Unicorn workers, or multiple Mongrel workers behind HAProxy).
If you have a single-process server, this is not a problem. However, if you have multiple worker processes, you must run the dynamic code that generates these extra classes and properties in EACH process to make them available.
This is actually the same as for initialization, because each process goes through initialization (or, if forked, inherits it).
The easiest way, without changing anything under the hood, is to use separate databases instead of tables (assuming that any relationships will also be stored in the separate database) and open a connection to an additional repository in a block:
DataMapper.setup(:external, "adapter://username:password@hostname/dbname")

DataMapper.repository(:external) do
  # ... queries against the external repository ...
end