I'm new to Quarkus and I'm analysing the viability of migrating a fairly complex "Spring Boot" application.
One of the biggest challenges we have is database access, which is performed with tons of native queries using "JdbcTemplate", like the example below:
return getJdbcTemplate().query( sql, rs -> {
    Map<String, String> result = new HashMap<>();
    while ( rs.next() ) {
        result.put( rs.getString( "COLUMN_1" ), rs.getString( "COLUMN_2" ) );
    }
    return result;
} );
Sometimes the result is cast to a basic class (like "String.class"), but in the vast majority of cases the result is produced by a "ResultSetExtractor".
After some research I found indications that Panache accepts native query execution (as described here) and, following that post, I found this one with a very nice alternative using "entityManager.createNativeQuery()".
For the sake of simplicity (and my sanity), I've decided to use the "entityManager.createNativeQuery()" approach, converting the implemented "ResultSetExtractor"s into "ResultTransformer"s, given that this won't demand, among other things, the "entitization" of some POJOs that are currently used only for data retrieval.
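To illustrate the direction, here is a minimal sketch (table and column names are placeholders, not taken from our real code) of the extractor above rewritten with "entityManager.createNativeQuery()":
@SuppressWarnings("unchecked")
public Map<String, String> loadPairs() {
    // A multi-column native query returns a List of Object[] rows.
    List<Object[]> rows = entityManager
            .createNativeQuery("SELECT COLUMN_1, COLUMN_2 FROM SOME_TABLE")
            .getResultList();
    Map<String, String> result = new HashMap<>();
    for (Object[] row : rows) {
        result.put((String) row[0], (String) row[1]);
    }
    return result;
}
Scalar results (the "String.class" case) can likewise be read with (String) entityManager.createNativeQuery(sql).getSingleResult().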
The main objective of this compatibility change is to end up with an application that can run on both "Spring Boot" and "Quarkus" (at least as far as the persistence layer is concerned).
One question I have is regarding the "entityManager" declaration below:
@PersistenceContext
private EntityManager entityManager;
Considering that, at this first moment, we will use the "quarkus-spring-di" compatibility extension (which means the class that holds the "entityManager" is annotated with "@Repository" and not with "@ApplicationScoped"), will the "entityManager" be initialized?
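For clarity, the setup in question looks roughly like this (the class name is just a placeholder):
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;

@Repository
public class LegacyQueryRepository {

    // Will Quarkus inject this when the bean is declared via quarkus-spring-di?
    @PersistenceContext
    private EntityManager entityManager;
}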
And, considering all that was presented here, do you guys have any suggestions or warnings about the chosen course of action?
Thanks and best regards!
I'm currently using Apache UIMA to retrieve a list of occurrences of phenotype terms. However, the documentation (why do so many bioinformatics software APIs lack good documentation!) seems to point only towards the CAS debugger GUI rather than explaining how to get the annotation index back programmatically.
http://i.stack.imgur.com/giNoj.png - picture of the CAS GUI; I want to get back the annotation index shown in the bottom left.
Like I said, the docs don't really answer this (https://uima.apache.org/documentation.html), but generally I want to be able to call the process() method in the Annotator class, and for it to return the annotation index once it has found any and all occurrences.
Sorry if it's a silly question with an obvious answer; I've spent three hours going through the docs so far and haven't come any closer to finding one. If anyone has tried integrating it into a project in a similar way and can point me in the right direction, it would be much appreciated!
The process methods change the state inside the CAS. After calling ae.process(cas) or ae.process(jcas), the annotations are stored in the CAS. Just get the annotation index from the (J)CAS.
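For example (a minimal sketch, assuming ae is your AnalysisEngine and the document text is already in the CAS):
ae.process(cas);
// After process() returns, the annotations live in the CAS:
for (AnnotationFS annotation : cas.getAnnotationIndex()) {
    System.out.println(annotation.getType().getName() + ": " + annotation.getCoveredText());
}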
Apache uimaFIT might also be convenient for you as it provides various "select" methods to access annotations in the (J)CAS, e.g.:
import org.apache.uima.cas.Type;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.fit.util.CasUtil;
import org.apache.uima.fit.util.JCasUtil;

// CAS version
Type tokenType = CasUtil.getType(cas, "my.Token");
for (AnnotationFS token : CasUtil.select(cas, tokenType)) {
    ...
}

// JCas version
for (Token token : JCasUtil.select(jcas, Token.class)) {
    ...
}
More detailed information on this API can be found in the uimaFIT documentation, in particular in the sections on pipelines and on access methods.
Disclosure: I am working on Apache uimaFIT.
Grails users know that GORM, the framework's data access layer, offers AOP-style event hooks that separate this cross-cutting concern from the other layers of the software: the afterInsert, afterUpdate, beforeInsert, ... methods.
class Person {
    def afterInsert() {
        // ... will be executed after inserting a record into the Person table
    }
}
I've searched for how these methods are invoked relative to the triggering operation (instantiation/insertion): asynchronously or not, and I couldn't find the answer.
My question: if they are synchronous, will GORM break if we force those methods to be asynchronous?
UPDATE:
Indeed, we want to send mails without using a ready-made plugin, as we have our own API.
There are a great number of ways to accomplish what you are looking for, and without knowing all your requirements it's difficult to give you a solution that meets all of them. However, based on your question and the comments provided, you could use the built-in asynchronous features in Grails to accomplish this.
This is just a sketch/example of something I came up with off the top of my head.
import static grails.async.Promises.*

class Person {
    ...
    def afterUpdate() {
        def task1 = task {
            // whatever code you need to run goes here
        }
        onComplete([task1]) {
            // anything you want to run after the task completes, or nothing at all
        }
    }
    ...
}
This is just one option. Again, there are a lot of options available to you. You could send a JMS message instead and have it processed on a different machine. You could use some type of eventing system, or you could even use Spring AOP and thread pools and abstract this even further. It depends on what your requirements are, and on what your capabilities are as well.
I've done a little research on typed/generic aspects. An important property of aspects is obliviousness, so the concerns of the aspects should be orthogonal to the domain concerns. Nevertheless, there are investigations into making AspectJ type safe (StrongAspectJ) and into introducing per-type aspects using generics. One paper mentioned an implementation of the Flyweight pattern as an aspect. Now I'm wondering: are there more use cases for generic aspects?
PostSharp is weakly typed, i.e. the advices see arguments and return values as 'object'. There is some support for generic aspects in PostSharp (aspects can be generic classes), but it is not very useful, since the advices are weakly typed.
Note that behind the covers, the glue code generated by PostSharp is strongly typed. But everything is downcast to an object when exposed to aspect code.
I'm considering implementing strongly-typed advices in a future version of PostSharp, possibly with support for generic arguments. The reason would be run-time performance, because boxing value types into objects brings a considerable performance overhead. Note that generics are implemented differently in .NET than in Java, so the point may need to be discussed differently on the two platforms.
Feel free to contact me personally if you need any help for your thesis.
Auto-generating some of the boilerplate to make a class callable via RMI is another use case. That example implements some around advice for a bunch of methods.
pointcut callsToServer(Type T):
    call(public T Server.*(..)) && this(Client);

T around(Type T): callsToServer(T) {
    T obj = null;
    try {
        obj = proceed();
    } catch (java.rmi.RemoteException ex) {
        // swallow the RemoteException so the client sees a null return instead
    }
    return obj;
}
Generics allow you to say "we are going to return an object of the same type the method signature says". This is true, of course, if we just return the object. We might be able to do something similar with "after throwing" advice, but we wouldn't be able to manipulate the return value to translate a RemoteException into a null return value.
Hi, all you patient developers using Spring Data Graph. Since there is so little documentation and rather poor test coverage, it is sometimes very difficult to understand the expected behavior of the underlying framework and how it is supposed to work. Currently I have some questions related to the new fetching approach introduced in SDG 2.0. As opposed to the write/read-through behavior of SDG 1.1, in 2.0 only relations and related objects annotated with the @Fetch annotation are eagerly fetched; all others are supposed to be fetched lazily. And now my first question:
Is it possible to configure SDG so that, if the loading of an entity and the invocation of a getter on a lazy relation take place in the same transaction, the requested collection is fetched automatically? A kind of persistence context with transaction scope. Or maybe this is planned for future releases?
How can I fetch a lazy collection at once for a @RelatedTo annotation? The fetch() method on Neo4jOperations allows fetching only one entity. Do I have to iterate through the whole list and fetch an entity for each object? And what would be the best way to check whether a given object has already been fetched/initialized?
As a suggestion, I think it would be more intuitive if some kind of lazy-loading exception were thrown instead of getting an NPE when working with uninitialized objects. Moreover, the current behavior is misleading: when an object is not initialized, all member properties are null apart from the id, so equals() can return true for different uninitialized objects, which is quite a serious issue considering, for example, their use in sets.
Another issue I noticed when working with SDG 2.0.0.RC1 is the following: when I add a new object to a not-yet-fetched collection, sometimes it is properly added and persisted, and sometimes it is not. I wrote a test for this case, and it behaves non-deterministically: sometimes it fails, sometimes it succeeds. Here is the use case:
Group groupFromDb = neoTemplate.findOne(group.getId(), Group.class);
assertNotNull(groupFromDb);
assertEquals("Number of members must be equals to 1", 1, groupFromDb.getMembers().size());
User secondMember = UserMappingTest.createUser("secondMember");
groupFromDb.addMember(secondMember);
neoTemplate.save(groupFromDb);
Group groupAfterChange = neoTemplate.findOne(groupFromDb.getId(), Group.class);
assertNotNull(groupAfterChange);
assertEquals("Number of members must be equals to saved entity", groupFromDb.getMembers().size(), groupAfterChange.getMembers().size());
assertEquals("Number of members must be equals to 2", 2, groupAfterChange.getMembers().size());
This test sometimes fails on the last assert, which would mean that sometimes the member is added to the set and sometimes not. I guess the problem lies somewhere in the ManagedFieldAccessorSet, but it is difficult to say, since the failure is non-deterministic. I ran the test with Maven 2 and Maven 3 on Java 1.6.0_22 and 1.6.0_27 and always got the same result: sometimes it is OK, sometimes the test fails. The implementation of User.equals() looks as follows:
@Override
public boolean equals(final Object other) {
    if ( !(other instanceof User) ) {
        return false;
    }
    User castOther = (User) other;
    if ( castOther.getId() == this.getId() ) {
        return true;
    }
    return new EqualsBuilder().append(username, castOther.username).isEquals();
}
I also find it a bit problematic that for objects annotated with @Fetch a plain java.util.HashSet is used, which is serializable, while for lazily loaded fields a ManagedFieldAccessorSet is used, which is not serializable and causes a NotSerializableException.
Any help or advice is welcome. Thanks in advance!
I put together a quick code sample showing how to use the fetch() technique Michael describes:
http://springinpractice.com/2011/12/28/initializing-lazy-loaded-collections-with-spring-data-neo4j/
The simple mapping approach was only added to Spring Data Neo4j 2.0, so it is not as mature as the advanced AspectJ mapping. We're currently working on documenting it more extensively.
The lazy loading option was also added only recently, so your feedback is very welcome.
Right now SDN doesn't employ a proxy approach for the lazily loaded objects, so the automatic "fetch on access" is not (yet) supported. That's also why no exception is thrown when accessing non-loaded fields, and there is no means of "discovering" whether an entity was not fully loaded.
In the current snapshot there is the template.fetch() operation to fully load lazy loaded objects and collections.
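A minimal sketch of that usage, assuming the Group/User domain from the question and that the snapshot's fetch() accepts a lazily loaded collection:
Group group = template.findOne(groupId, Group.class);
// Assumed snapshot API: initialize the whole members collection in one call.
template.fetch(group.getMembers());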
We'll look into the HashSet vs. ManagedSet issue; you are correct that this is not a good solution.
For the test case: is getId() returning a Long object or a long primitive? It might be sensible to use getId().equals(castOther.getId()) here, as reference equality is not guaranteed for Number objects.
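A minimal sketch of equals() with that change applied (null-guarded, since entities that have not been persisted yet may have no id):
@Override
public boolean equals(final Object other) {
    if ( !(other instanceof User) ) {
        return false;
    }
    User castOther = (User) other;
    // Compare ids by value, not by reference; skip the shortcut when the id is null.
    if ( getId() != null && getId().equals(castOther.getId()) ) {
        return true;
    }
    return new EqualsBuilder().append(username, castOther.username).isEquals();
}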
Clearly, separation of concerns is a desirable trait in our code, and the first obvious step most people take is to separate data access from presentation. In my situation, LINQ to SQL is being used within data access objects for the data access.
My question is, where should the use of the entity object stop? To clarify, I could pass the entity objects up to the domain layer but I feel as though an entity object is more than just a data object - it's like passing a bit of the DAL up to the next layer too.
Let's say I have a UserDAL class, should it expose an entity User object to the domain when a method GetByID() is called, or should it spit out a plain data object purely for storing the data and nothing more? (seems like wasteful duplication in this case)
What have you guys done in this same situation? Is there an alternative method to this?
Hope that wasn't too vague.
Thanks a lot,
Martin.
I return IQueryable of POCOs from my DAL (which uses LINQ2SQL), so no Linq entity object ever leaves the DAL. These POCOs are returned to the service and UI layers, and are also used to pass data back into the DAL for processing. Linq handles this very well:
IQueryable<MyObjects.Product> products = from p in linqDataContext.Products
                                         select new MyObjects.Product // POCO
                                         {
                                             ProductID = p.ProductID
                                         };
return products;
For most projects, we use LINQ to SQL entities as our business objects.
The LINQ to SQL designer allows you to control the accessibility of the classes and properties that it generates, so you can restrict access to anything that would allow the consumer to violate the business rules and provide suitable public alternatives (that respect the business rules) in partial classes.
There's even an article on implementing your business logic this way on MSDN.
This saves you from writing a lot of tedious boilerplate code and you can even make your entities serialisable if you want to return them from a web service.
Whether or not you create a separate layer for the business logic really depends on the size of your project (with larger projects typically having greater variation between the business logic and data access layers).
I believe LINQ to Entities attempts to provide a one-stop solution to this conundrum by maintaining two separate models (a conceptual schema for your business logic and a storage schema for your data access).
I personally don't like my entities to spread across the layers. My DAL returns POCOs (of course, it often means extra work, but I found this much cleaner; maybe this will be simpler in the next .NET version ;-)).
The question is not so simple, and there are lots of different schools of thought on the subject (I keep asking myself the same question that you are).
Maybe you could take a look at the MVC Storefront sample app: I like the essence of the concept (especially the mapping that occurs in the data layer).
Hope this helps.
There is a similar post here; however, I see your question is more about what you should do rather than how you should do it.
In small applications I find a second POCO implementation to be wasteful; in larger applications (particularly those that implement web services) the POCO object (usually a Data Transfer Object) is useful.
If your app falls into the latter case, you may want to look at ADO.NET Data Services.
Hope that helps!
I have actually struggled with this as well. Using plain vanilla LINQ to SQL, I quickly abandoned the DBML tooling, because it bound the entities too tightly to the DAL. I was striving for a higher level of persistence ignorance, although Microsoft didn't make it very easy.
What I ended up doing was hand-writing the persistence ignorance layer, by having the DAL inherit from my POCOs. The inherited objects exposed the same properties as the POCOs they inherited from, so while inside the persistence ignorance layer I could use attributes to map to the objects. The caller could then cast the inherited object back to its base type, or have the DAL do that for them. I preferred the latter, because it lessened the amount of casting that needed to be done. Granted, this was a primarily read-only implementation, so I would have to revisit it for more complex update scenarios.
The amount of manual coding for this is rather large, because I also have to manually maintain (after coding, to begin with) the context and provider for each data source, on top of the object inheritance and mappings. If this project was being deprecated, I would definitely move to a more robust solution.
Looking forward to the Entity Framework, persistence ignorance is a commonly requested feature according to the design blogs for the EF team. In the meantime, if you decide to go the EF route, you could always look at a pre-rolled persistence ignorance tool, like the EFPocoAdapter project on MSDN, to help.
I use a custom LINQ to SQL generator, built upon one I found on the Internet, in place of the default MSLinqToSQLGenerator.
To make my upper layers independent of the LINQ objects, I create an interface to represent each one of them and then use those interfaces in the upper layers.
Example:
public interface IConcept {
    long Code { get; set; }
    string Name { get; set; }
    bool IsDefault { get; set; }
}

public partial class Concept : IConcept { }

[Table(Name="dbo.Concepts")]
public partial class Concept
{
    private long _Code;
    private string _Name;
    private bool _IsDefault;

    partial void OnCreated();

    public Concept() { OnCreated(); }

    [Column(Storage="_Code", DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true)]
    public long Code
    {
        //***
    }

    [Column(Storage="_Name", DbType="VarChar(50) NOT NULL")]
    public string Name
    {
        //***
    }

    [Column(Storage="_IsDefault", DbType="Bit NOT NULL")]
    public bool IsDefault
    {
        //***
    }
}
Of course there is much more than this, but that's the idea.
Please keep in mind that LINQ to SQL is not a forward-looking technology. It was released, it's fun to play with, but Microsoft is not taking it anywhere. I have a feeling it won't be supported forever, either. Take a look at the Entity Framework (EF) by Microsoft, which incorporates some of the LINQ to SQL goodness.