How to declare Custom Marshaller as Singleton Bean or Prototype - spring

I understand that when an object does not change its state, it is wise to declare it as a singleton, for example a custom DateFormatter, because it will format the values passed in from different threads into a given date format.
Now I have a similar situation in a Spring-based application that spawns multiple threads at various stages and marshals different types of Java objects into JSON. I need to write a custom JSON marshaller with specific mapper settings and two methods, marshall() and unmarshall(). I am a bit confused about how the customJsonMarshaller() bean will behave if it is given the default (singleton) scope. Will that impact the application's throughput? Since there will be only a single instance of the custom marshaller, will the other threads wait until a given thread returns from marshall(object)? Or can multiple threads still work in parallel, since the object is different for each thread and method variables are stored on each thread's own stack?
Or would I be better off declaring it with prototype scope?
So the code below will be called from multiple threads at a time:
String result = customJSONMarshaller.marshall(object);
where the runtime type of object (<T>) will be different in each request.
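For reference, a singleton is safe here as long as the marshaller keeps no mutable per-call state. Below is a minimal sketch, assuming Jackson as the underlying mapper (the class name and the specific setting shown are illustrative, not from the question); Jackson's ObjectMapper is thread-safe once it has been configured:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.SerializationFeature
import org.springframework.stereotype.Component

@Component // default singleton scope
class CustomJsonMarshaller {

    private final ObjectMapper mapper = new ObjectMapper()

    CustomJsonMarshaller() {
        // apply the custom mapper settings once, at construction time
        mapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false)
    }

    String marshall(Object object) {
        // no instance state is mutated here, so concurrent calls do not contend
        mapper.writeValueAsString(object)
    }

    public <T> T unmarshall(String json, Class<T> type) {
        mapper.readValue(json, type)
    }
}

With a stateless design like this, nothing serializes calls to the singleton's methods, so threads marshal in parallel; prototype scope would only add per-request allocation cost without improving throughput.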

Related

Trying to identify if a data injection method has a name already

Let's say we have a class "Car" that has different pieces of data (maker, model, color, fabrication date, registration date, etc.). The class has no method to fetch data itself, but it knows to ask for it from another object (passed in via the constructor; let's call it DS for short), and the same applies when it needs to push updates.
A method getColor() would be implemented like this (pseudocode):
getColor() {
    if (!this->loaded('color')) {
        this->askDS('color') // this will do the necessary work to generate a request to DS
    }
    return this->information('color')
}
Nothing too fancy so far. Now comes the part I want to find out if it has a name, or if there are libraries/frameworks that do this already.
DS has a list of methods registered dynamically based on the class that needs data. For Car we have:
1. input: car serial number; output: the raw values extracted by reading the number
2. input: car raw color value; output: color code
3. input: car color code, manufacturer, year, model; output: human-readable color (for example navy blue)
Now, neither DS nor any method has a preconfigured, ordered list of commands to start from a serial number and end up with the color blue, but if DS can construct a chain of methods starting from one set of data, it can run them in order and get the desired data.
For our example above, DS runs 1, 2 and 3 in that order and injects the data produced by all the methods into the class object that needed it.
Now if the car needs registration info, we have a method (4) that gets it from the police database with an API request.
So, given:
- a type of model (class/object)
- a list of methods that take a fixed list of input(object properties) and give out a fixed list of output (object properties)
- a class DS that can glue the methods together and run the needed ones for a model to get from property A (serial) to property B (human-readable colour), without the model or DS having a preconfigured way to get this data, finding it as needed instead.
Does this have a name, or is it already implemented somewhere?
I've implemented a very basic prototype and it works very nicely, and I think this implementation method has useful features:
- if you have a set of methods that do SQL queries and your app then switches to using an API, you only need to change the methods and don't have to touch any other part of the application
- when looking for a chain of methods that resolves the 'need' the object has, you can find a method chain, run it, and if it fails keep looking for another list of methods based on the currently available data; so if you have multiple sources for a piece of data, it can try multiple versions
- starting from the above point, I could begin with an app that only has SQL queries for data retrieval; when I find that a part of the app overloads the SQL server, I could add a method to retrieve data from a cache with a lower cost than the one from the database (or multiple layered caches, each with different costs)
- I could probably add business logic into the mix the same way as caching, and present different data based on the user's location/options
- this requires less coding overall, and decouples the data source from the object, making each piece easier to mock/test
- what is needed to make this fast is a caching solution for the discovered method chains, since matching hundreds of thousands of methods per model type would be time-consuming. I don't think this is very hard to do: store all found chains in memory as you find them, along with some metadata so a search can be resumed from any point in time; when you update the methods, just clear the cache and take a performance hit for the first requests (the sketch below illustrates the idea)
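For concreteness, a minimal sketch of the chain discovery and caching just described (all names are hypothetical; the search here is simple greedy forward chaining, and a real implementation would need backtracking to try alternative data sources):

class DS {

    // each registered step declares the properties it needs and produces
    static class Step {
        Set<String> inputs
        Set<String> outputs
        Closure<Map> work
    }

    List<Step> steps = []
    private final Map<Object, List<Step>> chainCache = [:] // memoized discovered chains

    // run a chain that turns the available properties into the wanted one
    Map resolve(Map available, String wanted) {
        List<Step> chain = findChain(new HashSet<>(available.keySet()), wanted)
        Map data = new HashMap(available)
        chain.each { step -> data.putAll(step.work.call(data)) } // inject results as we go
        data
    }

    private List<Step> findChain(Set<String> have, String wanted) {
        def key = [have, wanted]
        List<Step> cached = chainCache[key]
        if (cached != null) return cached
        List<Step> chain = []
        Set<String> known = new HashSet<>(have)
        // forward chaining: run any step whose inputs are already satisfied
        while (!known.contains(wanted)) {
            Step next = steps.find { known.containsAll(it.inputs) && !known.containsAll(it.outputs) }
            if (next == null) throw new IllegalStateException("no chain from $have to $wanted")
            chain << next
            known.addAll(next.outputs)
        }
        chainCache[key] = chain
        chain
    }
}

Registering the question's three Car steps and calling resolve([serialNumber: 'ABC123'], 'humanReadableColor') would then run them in order 1, 2, 3.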
Thank you for your time
What you describe sounds like a somewhat roundabout way of doing Dependency Injection. Quote:
"Passing the service to the client, rather than allowing a client to
build or find the service, is the fundamental requirement of the
pattern."
Depending on what language you're using, there should be several Dependency Injection frameworks/libraries available.
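For illustration, a minimal sketch of that definition using hypothetical classes from the question's domain; the Car never builds or looks up its data source, it is handed one:

interface ColorSource {
    String colorFor(String serial)
}

class Car {
    final String serial
    private final ColorSource ds // the injected service (the question's "DS")

    // the service is passed to the client, not built or found by it
    Car(String serial, ColorSource ds) {
        this.serial = serial
        this.ds = ds
    }

    String getColor() {
        ds.colorFor(serial)
    }
}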

Why does Grails recommend singleton scope for controllers with actions as methods?

I know early versions of Grails used prototype scope for controllers because actions were all closures at that time.
I know that the current version documentation recommends singleton scoped controllers for controllers that use methods as actions.
From the following post it seems that methods and singleton scope are more desirable or recommended, but it's not clear why.
http://grails.1312388.n4.nabble.com/Default-scope-for-controllers-doc-td4657986.html
We have a large project that uses prototype scoped controllers with actions as methods. Changing to the recommended controller scope involves risk and retesting, and removing any non-singleton friendly state from existing controllers.
I'd like to know: why does Grails recommend singleton scope for controllers with methods as actions? Is it just because that's more common and similar to Spring MVC and avoids confusion, or is there an opportunity for performance improvement, or something else? What do I gain if I make the switch to singleton controllers? What is the cost if I don't?
I haven't worked much with Rails, but (at least in the versions I played with, things may be different now) the controller is the model, containing the data to be rendered by the view. During request handling you store values in controller instance fields before handing off processing to view renderers. So there it makes sense to create a new controller instance for each request so they're distinct.
Grails was inspired by Rails and uses several of its conventions, most famously convention-over-configuration. But the ability to use the controller as the model was also added as a feature, although it wasn't well documented and I doubt many used it.
The typical way a controller action works when using a GSP to render the response (as opposed to forwarding or redirecting, or rendering directly in the controller, e.g. with render foo as JSON) is to return a Map with one or more key/value pairs from the action, and often the return keyword is omitted since it's optional in Groovy:
class MyController {
    def someAction() {
        def theUser = ...
        def theOtherObject = ...
        [user: theUser, other: theOtherObject]
    }
}
Here the model map has two entries, one keyed by user and the other keyed by other, and those will be the variable names used in the GSP to access the data.
But what most people don't know is that you could also do it like this:
class MyController {
    def user
    def other

    def someAction() {
        user = ...
        other = ...
    }
}
In this case a model map isn't returned from the action, so Grails would populate the model from all properties of the controller class, and in this case the same GSP would work for both approaches since the variable names in the second approach are the same as the map keys in the first.
The option to make controllers singletons was added in 2.0 (technically 1.4 before it was renamed to 2.0, see this JIRA issue) and we also added support for methods as actions in addition to retaining support for closures. The original assumption was that using closures would enable some interesting features, but that never happened. Using methods is more natural since you can override them in subclasses, unlike closures which are just class-scope fields.
As part of the 2.0 rework we removed that Rails-inspired feature on the assumption that, since it was essentially undocumented, the impact on the few odd apps that used it wouldn't be significant. I don't recall anyone ever complaining about the loss of that feature.
Although controller classes are typically readily garbage-collected and creating an instance per request doesn't cost much, it's rare to need per-request state in a controller, so singletons usually make more sense. The default prototype scope was kept for backwards compatibility, but it's easy to change the default with a Config.groovy property (and the file generated by the create-app script does this).
Each request does get a new request and response, and if sessions are used each user will have their own, but those are not instance fields of the controllers. They look like they are because we can access request, response, session, params, etc. inside actions, but those are actually the property-access forms of the getRequest(), getResponse(), getSession(), and getParams() methods that are mixed into all controllers during compilation by AST transforms. The methods access their objects not as class fields but via ThreadLocals, so there is per-request state, but it's not stored in the controller instance.
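Conceptually (a simplification for illustration, not the actual Grails source), the mixed-in accessors behave something like this:

import javax.servlet.http.HttpServletRequest

class MixedInRequestAccess {
    // one value per thread (i.e. per in-flight request), never per controller instance
    private static final ThreadLocal<HttpServletRequest> CURRENT = new ThreadLocal<HttpServletRequest>()

    HttpServletRequest getRequest() {
        CURRENT.get() // 'request' inside an action resolves to a getter like this
    }
}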
I don't remember if there was much in the way of benchmarking to compare using methods versus closures, but Lari Hotari probably did some. If there was a difference it was probably not significant. You could test this in your own application by converting just one or a few controllers and doing before-and-after tests. If the performance, scaling, and memory differences aren't significant between the two approaches, you'll probably be safe staying with prototype scope and/or closures. If there is a difference and you don't have instance fields in your controllers, then it'll probably be worth the effort to convert to singletons and methods.
If you do have instance fields they can probably be converted to request attributes - request.foo = 42 is a metaclass shortcut for request.setAttribute('foo', 42), so you could store per-request data safely there instead.
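A minimal sketch of that conversion (controller and attribute names are hypothetical):

class ReportController {

    def build() {
        // per-request state lives on the request, not in a controller field,
        // so this is safe in a singleton-scoped controller
        request.report = buildReport() // shorthand for request.setAttribute('report', buildReport())
        [report: request.report]
    }

    private Map buildReport() {
        [rows: []] // hypothetical payload
    }
}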

Groovy pass request params between classes

If I want to handle many parameters from, for example, a web request and pass them between classes (layers), what is the preferred way?
- I know it is easy to pass an optional number of parameters through the constructor as a map.
- I can also pass a map directly, and if the keys match the receiving object's property names it should work in a similar way.
- Or I could just pass the map and then instantiate, for example, domain classes from it.
- I could use a special class as a data carrier with a given number of properties.
I have a domain class (not a database domain class, but a business domain class) that needs data from the user interface.
What is the best way to pass data through the layers, and how do I know that all required data is being passed if I use a data structure with key/value pairs, like a map? If I had a more static constructor with a given number of parameters, I would know that the parameters are being passed. But how do I ensure this when using a more dynamic approach? With unit tests?
Well in Grails command objects are an excellent choice. You can pass them up to various layers without issues. They are pretty analogous to domain classes, only without the whole persistence functionality.
Otherwise I would recommend using plain old Groovy classes (POGOs). Groovy allows you to keep your code very short (compared to Java and many other languages as well) and offers very handy transforms for common design patterns you might need (e.g. Canonical, Immutable, IndexedProperty, DelegatesTo...).
Compared to command objects, POGOs do require you to write e.g. validation code yourself, but this can be as simple as:
boolean isValid() {
    name && lastName && countryCode in ['US', 'CA']
}
You can keep static factories in a POGO to help you construct them in various circumstances (see the sketch below). Plus, you can define more than one class per file, so you can keep the POGO code wherever it makes the most sense. I would definitely prefer this approach to plain maps because the code is better encapsulated, and POGOs can be unit tested and documented.
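For illustration, a minimal POGO sketch (field names are hypothetical) combining the @Canonical transform, the validation idiom above, and a static factory that binds only the expected request params:

import groovy.transform.Canonical

@Canonical
class RegistrationData {
    String name
    String lastName
    String countryCode

    boolean isValid() {
        name && lastName && countryCode in ['US', 'CA']
    }

    // static factory: bind only the keys this carrier cares about
    static RegistrationData fromParams(Map params) {
        new RegistrationData(
            name: params.name,
            lastName: params.lastName,
            countryCode: params.countryCode)
    }
}

Used from a controller action:

def data = RegistrationData.fromParams(params)
if (data.isValid()) { registrationService.register(data) } // registrationService is hypothetical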

Spring BatchSqlUpdate vs NamedParameterJdbcTemplate using named parameters

I have been using the BatchSqlUpdate class successfully for a while. The only annoyance using it is that named parameters need to be registered before running any query using the declareParameter or setParameter methods. This means that the type of the parameters has to be declared as well. However, Spring also provides a NamedParameterJdbcTemplate class which has a very convenient batchUpdate method that takes named parameters as input (an array of maps or SqlParameterSource objects) without the need of declaring them first. On top of that, this class can be reused easily and I also believe it's thread-safe.
So I have a couple of questions about this:
What's the recommended way to perform (multiple) batch updates?
Why is this feature duplicated across two different classes that also behave differently?
Why does BatchSqlUpdate require declared parameters if NamedParameterJdbcTemplate does not?
Thanks for the thoughts!
Giovanni
After doing some research, I reached the following conclusions.
First of all, I realized that the NamedParameterJdbcTemplate class is the only one accepting named parameters for batch updates. The method batchUpdate(String sql, Map[] batchValues) was added in Spring 3 to achieve this.
The BatchSqlUpdate class contains an overridden update(Object... params) method that adds the given statement parameters to the queue rather than executing them immediately, as stated in the javadoc. This means the statements will be executed only when the flush() method is called or the batch size exceeds the maximum. This class doesn't support named parameters, although it contains an updateByNamedParam() method inherited from SqlUpdate. This is unfortunate, since that method allows reuse of the same map for the named parameters, whereas the NamedParameterJdbcTemplate.batchUpdate() method requires an array of maps, with the associated overhead, code bloat, and complications in reusing the array of maps if the batch size is variable.
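For reference, a minimal sketch of the batchUpdate() usage discussed here (table and column names are hypothetical); note that no parameter types have to be declared up front:

import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate
import javax.sql.DataSource

class UserBatchDao {

    private final NamedParameterJdbcTemplate template

    UserBatchDao(DataSource dataSource) {
        template = new NamedParameterJdbcTemplate(dataSource)
    }

    int[] insertUsers(List<Map<String, Object>> rows) {
        // one map of named parameters per statement in the batch
        template.batchUpdate(
            'INSERT INTO users (name, email) VALUES (:name, :email)',
            rows as Map[])
    }
}

Because the template keeps no per-call state, a single UserBatchDao (or the template itself) can be shared across threads, which matches the reuse and thread-safety observations above.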
I think it would be useful to have an overridden version of updateByNamedParam(Map paramMap) in BatchSqlUpdate to behave just like update(Object... params) but with added support for named parameters.

Specify table name mid application Ruby-Datamapper

I want to dynamically create and query tables using DataMapper.
While DataMapper allows you to work with legacy tables and schemas, and in this way lets you set the table name used, this is only possible during initialisation, not later from within the application.
Is there an easy way to tell DataMapper to migrate/upgrade a model with an assigned table name within the application, and to then tell it to query this table?
This should not be a problem.
All Ruby classes can be created and re-defined at run-time. Even initialization happens at run-time; it just happens to be executed first, before other code.
That is why monkey-patches work so easily: they are just additional code at initialization that re-defines classes to add extra methods, variables, etc.
There is no Ruby code that is "special" in the sense that it only runs at compile time. Ruby is an interpreted language.
To dynamically create a class, see Dynamically creating class in Ruby.
Assuming you don't need to dynamically create classes from an array of strings, you can define additional methods with define_method, or call DataMapper methods at runtime to add attributes.
To define new methods in a class:
Post.send :define_method, :new_method_name do
  # method body goes here
end
To define a new property using the Datamapper property:
class Post
  include DataMapper::Resource
  property :title, String # the static way
end

Post.send :property, :title, String # add a property the dynamic way (at run-time)
Do note that any tables or properties you define at run-time will not be available if you restart your server, unless the code that dynamically generates them is re-executed.
To update your tables at runtime, you simply do the same thing as normal, that is, call:
DataMapper.auto_upgrade!
To upgrade only a single table, you can also do:
Post.auto_upgrade!
A second warning: if you have multiple processes, the dynamic code will need to be run in each process, or the additional table models and properties will not be available.
This is a problem if you have multiple worker processes, as might happen in production (e.g. Nginx with multiple Unicorn workers, or multiple Mongrel workers behind HAProxy).
If you have a single-process server, this is not a problem. However, if you have multiple worker processes, you must run the dynamic code that generates these extra classes and properties in EACH process to make them available.
This is actually the same as for initialization, because each process goes through initialization (or, if forked, inherits any initialization).
The easiest way, without changing anything under the hood, is to use separate databases instead of tables (assuming that any relationships will also be stored in the separate database) and open a connection to an additional repository inside a block:
DataMapper.setup(:external, "adapter://username:password@hostname/dbname")

DataMapper.repository(:external) do
  # queries run inside this block go to the :external repository
end
