Hibernate 6 equivalent of SQLFunction render - spring-boot

I am struggling to find out how to migrate my code that uses the org.hibernate.dialect.function.SQLFunction.render method … to Hibernate 6:
SessionFactoryImplementor d = this.entityManagerFactory.unwrap(SessionFactoryImplementor.class);
SQLFunction fnc = d.getSqlFunctionRegistry()
        .findSQLFunction("fncName");
String render = fnc.render(null, expressions,
        this.entityManagerFactory.unwrap(SessionFactoryImplementor.class));
The first part, I guess, should be
SqmFunctionDescriptor fnc = d.getQueryEngine().getSqmFunctionRegistry().findFunctionDescriptor("fncName");
but I am stuck with the second part.

SessionFactoryImplementor is documented as an SPI, and so we don't really recommend using it in this way.
/**
 * Defines the internal contract between the {@link SessionFactory} and the internal
 * implementation of Hibernate.
 */
At least, if you do this sort of thing, you do it at your own risk.
As you've noticed, SqmFunctionDescriptor, which is the replacement for SQLFunction, is a much more technical API, which was a very necessary change, because Hibernate's handling of functions is now much more sophisticated.
I can't really imagine why you would want to be hand-rendering a function invocation, but I'm sure you must have your reasons. Whatever they are, I think it's not likely to work, at least not without an unreasonable amount of messiness.
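That said, the lookup half of the migration does look roughly like what you guessed. Here is a minimal sketch, assuming the Hibernate 6 SqmFunctionRegistry API and a jakarta.persistence EntityManagerFactory (verify the exact signatures against your 6.x version); note that there is no drop-in equivalent of the old render() call, since rendering now happens during SQL AST translation:
import jakarta.persistence.EntityManagerFactory;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.query.sqm.function.SqmFunctionDescriptor;

class FunctionLookup {
    // Looks up the descriptor that replaced SQLFunction in Hibernate 6.
    // There is no one-to-one replacement for SQLFunction.render(): rendering
    // is now done while translating the SQL AST, not as a free-standing String.
    static SqmFunctionDescriptor findDescriptor(EntityManagerFactory emf, String name) {
        SessionFactoryImplementor sf = emf.unwrap(SessionFactoryImplementor.class);
        return sf.getQueryEngine()
                .getSqmFunctionRegistry()
                .findFunctionDescriptor(name);
    }
}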
Sorry.

Related

Can I set the record limit of Laravel's paginate method based on user input?

I'm using the paginate method of the query builder and I would like to allow the user to choose the number of items per page.
$paginate = Model::paginate($request->input('per_page'));
By doing this, could I be opening the door to SQL injection, or is this value sanitized first?
You can use a validator to make sure $request->input('per_page') is an integer, or apply whatever other validation you want.
Documentation here: https://laravel.com/docs/9.x/validation
Such methods must be protected; that's what models are for.
But you are right, it's better to be safe than sorry and verify your premises. This is especially true with popular frameworks, because the creators sometimes prize simplicity above everything else, occasionally forgetting even security.
But it seems that in this case, Laravel's query builder casts the per-page value to an integer, making it immune to SQL injection:
protected function compileOffset(Builder $query, $offset)
{
    return 'offset '.(int) $offset;
}
Then I dug a bit into the history and found that this protection was added almost a decade ago, so you can be sure that with any supported version of Laravel this part is safe.
That said, validating user input is still a good idea. Even when you are protected from SQL injection, you don't want any unexpected behavior. I don't think 500000 or -100 are good values in any case. If you can see that the data is not valid, it's a good strategy to bail out right away instead of waiting for something bizarre to happen. So consider validating this input value just like any other input, like good programmers always do.

When to use Encapsulate Collection?

In the Data Class smell, as Martin Fowler describes it in Refactoring, he suggests that if I have a collection field in my class I should encapsulate it.
The Encapsulate Collection pattern (p. 208) says we should add the following methods:
get_unmodified_collection
add_item
remove_item
and remove these:
get_collection
set_collection
so that any change to the collection has to go through the class.
Should I refactor every class which has a collection field with this pattern? Or it depends on some other reasons like frequency of usage?
I use C++ in my project now.
Any suggestion would be helpful. Thanks.
These are well-formulated questions, and my answer is:
Should I refactor every class which has a collection field with this pattern?
No, you should not refactor every class which has a collection field. Every fundamentalism is a way to hell. Use common sense and do not make your design too good, just good enough.
Or it depends on some other reasons like frequency of usage?
The second question comes from a common mistake. The reason we refactor or use design patterns is not primarily the frequency of use. We do it to make the code clearer, more maintainable, more extensible, more understandable, and sometimes (but not always!) more efficient. Everything that adds to these goals is good; everything that does not is bad.
You might have expected a yes/no answer, but one is not possible here. As I said, use your common sense and measure your solution against the viewpoints mentioned above.
I generally like the idea of encapsulating collections, and also of encapsulating plain Strings into named business classes. I almost always do it when the classes are meaningful in the business domain.
I would always prefer
public class People {
    private final Collection<Man> people;
    ... // useful methods
}
over the plain Collection<Man> when Man is a business class (a domain object). Or I would sometimes do it in this way:
public class People implements List<Man> {
    private final List<Man> people;
    ... // delegate methods, such as
    @Override
    public int size() {
        return people.size();
    }
    @Override
    public Man get(int index) {
        // Here there might also be some manipulation of the returned data etc.
        return people.get(index);
    }
    @Override
    public boolean add(Man man) {
        // Decoration - added some validation
        if (!isAcceptable(man)) { // hypothetical domain-specific check
            return false;
        }
        return people.add(man);
    }
    ... // useful methods
}
Or similarly I prefer
public class StreetAddress {
    private final String value;
    public String getTextValue() { return value; }
    ...
    // later I may add more business logic, such as parsing the street address
    // to street name and house number etc.
}
over just using a plain String streetAddress - this keeps the door open to any future change of the underlying logic and to adding useful methods.
However, I try not to over-engineer my design when it is not needed, so I am just as happy with plain collections and plain Strings when they are better suited.
I think it depends on the language you are developing with, since some languages already have interfaces that do just that - C# and Java, for example. In C# we have ICollection, IEnumerable, and IList; in Java, Collection, List, etc.
If your language doesn't have an interface to refer to a collection regardless of its inner implementation, and you need your own abstraction of that class, then it's probably a good idea to do so. And yes, you should not let the collection be modified directly, since that completely defeats the purpose.
It would really help if you told us which language you are developing with. Granted, it is kind of a language-agnostic question, but people knowledgeable in that language might recommend the best practices for it and point out whether there's already a way to achieve what you need.
The motivation behind Encapsulate Collection is to reduce the coupling between the collection's owning class and its clients.
Every refactoring tries to improve the maintainability of the code, so future changes are easier. In this case, changing the collection class from vector to list, for example, would otherwise change all of the clients' uses of the class. If you encapsulate it with this refactoring, you can change the collection without any changes to clients. This follows one of the SOLID principles, the dependency inversion principle: depend upon abstractions, do not depend upon concretions.
You have to decide for your own code base whether this is relevant for you: if your code base is still being changed and has to be maintained, then yes, do it for every class; if not, then leave the code be.
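To make the decoupling concrete, here is a minimal Java sketch of the pattern from the question (class and method names are made up, mirroring get_unmodified_collection / add_item / remove_item; the question is about C++, but the idea translates directly): clients only ever see a read-only view, so the internal container can change without touching them.
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class Team {
    // The concrete container is an implementation detail; it could become a
    // LinkedList, a Set, etc. without any client code changing.
    private final List<String> members = new ArrayList<>();

    // Replacement for get_collection: expose only a read-only view.
    public Collection<String> getUnmodifiableMembers() {
        return Collections.unmodifiableList(members);
    }

    // Replacement for add_item: every change goes through the owning class,
    // which is the natural place for validation and invariants.
    public void addMember(String name) {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("member name must not be blank");
        }
        members.add(name);
    }

    // Replacement for remove_item.
    public boolean removeMember(String name) {
        return members.remove(name);
    }
}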

Magento, magic getters v getData

I have been using Magento for a while now and can never decide between using the magic getters and getData().
Can someone explain the main difference, apart from the slight performance overhead (and it must be very slight)?
I am thinking in terms of:
Future-proofing the code (I think Magento 2 will not be using magic getters)
Style
Performance
Stability
Any other reasons to use one over the other
There is no clear way to go based on the core code, as it uses a mixture of both.
There's no one answer to fit all situations and it's best to decide based on the model you are using and the particular use case.
Performance is quite poor for magic methods, on top of the extra overhead of converting from CamelCase to under_score on each accessor.
The magic methods are basically a wrapper for getData() anyway, with extra overhead.
There is one advantage to using magic methods, though. For example:
if you use getAttributeName() rather than getData('attribute_name'),
then at some point in the future the model may be updated to include a real, concrete getAttributeName() method, in which case your code will still work fine. However, if you have used getData(), you access the attribute directly and bypass the new method, which could include some important calculations.
In my opinion, the safest way is to always use getData($key). The magic getter uses the same method anyway, as you already pointed out.
The advantage is that you can find all references to getData in your code and change them appropriately in case the getData() method is ever refactored. Compare that with having to track down all the magic method calls, which are all named differently.
The second thing is that the magic getter can easily trip you up when you have a real method with the same name (I think getName() got me once and it took quite some time to debug).
So my vote is definitely for using getData().
As stated before, it's best to use getData() over the magic methods. I just wanted to add two quick points:
1) The performance overhead is not that slight, especially because of the implementation of _underscore in Varien_Object (as mentioned by Andrew).
2) The implementation of getData() has some logic that helps "prettify" code; although it is a little slower than a typical getData() call, it is still much faster than the magic methods.
If you have nested Varien_Objects, so that you would need to perform a call like:
$firstObject->getData('second_object')->getData('third_object')->getData('some_string');
you can also perform that call like this:
$firstObject->getData('second_object/third_object/some_string');

Is extracting method parameters as an object a good idea?

http://www.jetbrains.com/idea/webhelp/extract-parameter-object.html
I have always found extracting method parameters as an object a good idea, for methods which have a large number of parameters.
public void Method(A a, B b, C c, D d, E e);
becomes
public class Wrapper { public A A; public B B; public C C; public D D; public E E; }
public void Method(Wrapper wrapper);
This allows me to:
Have better readability in my code
Perform validation of these parameters in the Wrapper class and reuse it across layers/components if need be.
Provide less brittle method signatures.
Are there any other advantages/disadvantages you see to this that would help convince someone who is writing methods with a lot of parameters? I am coding in C# 4, if that makes a difference.
The only disadvantage I can think of is having an additional abstraction in your system, requiring you to extract (although trivially) the actual data before access. I'm not even sure whether that counts as a disadvantage.
The most important advantage of parameter encapsulation is having a robust, well-defined interface which can accommodate future changes.
A deeper advantage is that as you wrap the parameters in a new class, you realize that some behavior can be moved to the new class. This is because the bodies of the methods that modify the parameters are likely to manipulate the parameters similarly. Moving this common behavior into the new class allows you to remove much code duplication. Parameter validation is just one example of a behavior that can be moved into the new class.
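To make that concrete, here is a small sketch (in Java for illustration, with a made-up ShippingRequest class; the same shape works in C#): the parameter object validates once in its constructor, so every method that accepts it can assume the values are consistent, and shared behaviour has an obvious home.
// Hypothetical parameter object: validation and small helpers live with the data.
public final class ShippingRequest {
    private final String destination;
    private final double weightKg;

    public ShippingRequest(String destination, double weightKg) {
        // Validation happens once here instead of being repeated in every
        // method that used to take these values as separate parameters.
        if (destination == null || destination.isBlank()) {
            throw new IllegalArgumentException("destination is required");
        }
        if (weightKg <= 0) {
            throw new IllegalArgumentException("weight must be positive");
        }
        this.destination = destination;
        this.weightKg = weightKg;
    }

    public String getDestination() { return destination; }
    public double getWeightKg() { return weightKg; }

    // Behaviour that callers tended to duplicate can move in here over time.
    public boolean isHeavy() { return weightKg > 30.0; }
}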

Is validation a cross cutting concern?

Some of my coworkers consider validation as an example of a cross cutting concern and think Aspect Oriented Programming is a good way to handle validation concerns. To use PostSharp notation they think something like this is a good idea:
[InRange(20.0, 80.0)]
public double Weight
{
    get { return this.weight; }
    set { this.weight = value; }
}
My opinion is that validation is an inherent part of an algorithm and there is no need to push it behind the scenes using AOP. However, this is more of a gut feeling, and I do not have a very clear justification for it.
When do you think it is a good idea to handle validation with AOP, and when is it better to handle it inline with your main code?
Looks a lot like Microsoft DataAnnotations as used by MVC.
You aren't really "pushing it behind the scenes", since all the relevant information is right there in the attribute constructor. You're just pushing the noisy boilerplate behind the scenes and into its own class. You also no longer need an explicit backing field, and you can just use auto-getters and setters. So now, 8 lines (or more) of code can be reduced to 1 line and 1 attribute.
I think that if your alternative is to put some validation code inside the setter, and you do that multiple times in your project to the point where it becomes repetitive boilerplate, then yes, it's a valid cross-cutting concern and appropriate for PostSharp. Otherwise, I don't think one or two places justify bringing in a third-party tool just yet.
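For comparison, this is roughly the hand-written boilerplate such an attribute factors out, sketched in Java with a made-up Person class (the C# property-with-backing-field version looks much the same); the same range check gets repeated in every setter that needs it.
public class Person {
    private double weight;

    public double getWeight() {
        return weight;
    }

    // The repetitive part: each validated property repeats a few lines of
    // range checking, which is exactly what the aspect/attribute pulls out.
    public void setWeight(double value) {
        if (value < 20.0 || value > 80.0) {
            throw new IllegalArgumentException("weight must be between 20.0 and 80.0");
        }
        this.weight = value;
    }
}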
I'd say that it is a cross-cutting concern, although I have never implemented it with AOP specifically.
That said, there are many different validation scenarios and I doubt they can all be exactly black or white.
Microsoft Patterns and Practices - Cross-cutting concerns
