I'm using the Java 8 Date/Time API in the DTO objects sent between clients and the server. I'm using Spring Boot 1.3.2 and its default Jackson 2, and I added the com.fasterxml.jackson.datatype:jackson-datatype-jsr310 module as a dependency. Still, the Instant instances are not serialized using the InstantSerializer; they are written as a nested JSON structure with two fields, epochSecond and nano. As these two values are discoverable as getters on the Instant class, I guess the default bean serializer takes care of them.
What I would like is to have Instant serialized as longs, just as a Date would have been. I'm not doing any extra customization of the ObjectMapper, so I kind of expected this to work without much fuss.
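The nested structure comes from Instant's own accessors, which the default bean serializer picks up as properties; a quick stdlib check illustrates both what is being serialized today and the single epoch-millis long the question asks for:

```java
import java.time.Instant;

public class InstantFields {
    public static void main(String[] args) {
        Instant now = Instant.ofEpochSecond(1_455_000_000L, 123);
        // the two accessors the default bean serializer discovers as properties:
        System.out.println(now.getEpochSecond()); // 1455000000
        System.out.println(now.getNano());        // 123
        // the desired output instead: one epoch-millis long, like Date.getTime()
        System.out.println(now.toEpochMilli());   // 1455000000000
    }
}
```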
I have been working with Spring Data JPA repository in my project for some time and I know the below points:
In the repository interfaces, we can add the methods like findByCustomerNameAndPhone() (assuming customerName and phone are fields in the domain object).
Then, Spring provides the implementation by implementing the above repository interface methods at runtime (while the application is running).
I am interested in how this has been implemented, and I have looked at the Spring Data JPA source code and APIs, but I could not find answers to the questions below:
How is the repository implementation class generated at runtime & methods being implemented and injected?
Does Spring Data JPA use CGLIB or another bytecode manipulation library to implement these methods and inject them dynamically?
Could you please help with the above queries and also provide any supporting documentation?
First of all, there's no code generation going on, which means: no CGLib, no byte-code generation at all. The fundamental approach is that a JDK proxy instance is created programmatically using Spring's ProxyFactory API to back the interface and a MethodInterceptor intercepts all calls to the instance and routes the method into the appropriate places:
If the repository has been initialized with a custom implementation part (see that part of the reference documentation for details), and the method invoked is implemented in that class, the call is routed there.
If the method is a query method (see DefaultRepositoryInformation for how that is determined), the store specific query execution mechanism kicks in and executes the query determined to be executed for that method at startup. For that a resolution mechanism is in place that tries to identify explicitly declared queries in various places (using #Query on the method, JPA named queries) eventually falling back to query derivation from the method name. For the query mechanism detection, see JpaQueryLookupStrategy. The parsing logic for the query derivation can be found in PartTree. The store specific translation into an actual query can be seen e.g. in JpaQueryCreator.
If none of the above apply the method executed has to be one implemented by a store-specific repository base class (SimpleJpaRepository in case of JPA) and the call is routed into an instance of that.
The method interceptor implementing that routing logic is QueryExecutorMethodInterceptor, the high level routing logic can be found here.
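The mechanism can be illustrated with the plain-JDK equivalent of what Spring's ProxyFactory does here (a sketch of the idea only, not Spring Data's actual code; the interface and the routing result are made up):

```java
import java.lang.reflect.Proxy;

public class RepositoryProxyDemo {

    // a repository interface as the user would declare it
    interface UserRepository {
        String findByLastname(String lastname);
    }

    // Spring Data wires a MethodInterceptor into a proxy via ProxyFactory;
    // the plain java.lang.reflect.Proxy version below shows the same idea:
    // no generated bytecode, just an invocation handler routing each call.
    static UserRepository createProxy() {
        return (UserRepository) Proxy.newProxyInstance(
                UserRepository.class.getClassLoader(),
                new Class<?>[] { UserRepository.class },
                (proxy, method, methodArgs) -> {
                    // this is where the routing decision happens:
                    // custom implementation? query method? base class?
                    return "routed: " + method.getName() + "(" + methodArgs[0] + ")";
                });
    }

    public static void main(String[] args) {
        System.out.println(createProxy().findByLastname("Matthews"));
        // prints "routed: findByLastname(Matthews)"
    }
}
```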
The creation of those proxies is encapsulated into a standard Java based Factory pattern implementation. The high-level proxy creation can be found in RepositoryFactorySupport. The store-specific implementations then add the necessary infrastructure components so that for JPA you can go ahead and just write code like this:
EntityManager em = … // obtain an EntityManager
JpaRepositoryFactory factory = new JpaRepositoryFactory(em);
UserRepository repository = factory.getRepository(UserRepository.class);
The reason I mention that explicitly is that it should become clear that, in its core, nothing of that code requires a Spring container to run in the first place. It needs Spring as a library on the classpath (because we prefer to not reinvent the wheel), but is container agnostic in general.
To ease the integration with DI containers, we have of course built integration with Spring Java configuration and an XML namespace, but also a CDI extension, so that Spring Data can be used in plain CDI scenarios.
I have two Spring Boot micro-services, core and web:
The core service reacts to some event (EmployeeCreatedEvent) which is triggered by web.
The core service uses the Jackson serializer to serialize commands, queries, events and messages, whereas the web service uses the XStream serializer.
I am getting the below error in core while handling the EmployeeCreatedEvent triggered by web:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)):
expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I am using the below properties (jackson for core, default for web):
axon.serializer.general = jackson/default
axon.serializer.events = jackson/default
axon.serializer.messages = jackson/default
Can someone suggest whether it is OK to use different serializers for the same event in different services?
I agree with @Augusto here: you should make a decision about which serialization format you are going to use across all your services.
I am assuming you started with the default serializer (which is XStream and XML) and later on decided to move to Jackson (which is JSON).
In that case, there are two pieces of advice I can share with you:
You can write a custom Serializer which has both implementations and tries both of them to see which one works, for example trying XML first and falling back to JSON.
Or you can have a component which listens to all events from your EventStore, deserializes them using XStream and writes them back to another EventStore using Jackson. During the migration period this component uses two event streams (one for each EventStore), but once the migration is done your whole EventStore will be in JSON. This requires some work but is the best approach in my opinion, and will save you a lot of time and pain in the future.
You can read more about configuring two sources here.
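If you go the custom-serializer route, the "try both" detection can start by sniffing the stored payload; a minimal sketch in plain Java (this is not the Axon Serializer API, just the core idea, and the helper name is made up):

```java
import java.nio.charset.StandardCharsets;

public class FormatSniffer {

    // hypothetical helper: decide which deserializer to try first by
    // looking at the first non-whitespace character of the payload
    static String detectFormat(byte[] payload) {
        String text = new String(payload, StandardCharsets.UTF_8).trim();
        if (text.startsWith("<")) {
            return "xml";   // XStream-produced payload
        }
        return "json";      // assume Jackson otherwise
    }

    public static void main(String[] args) {
        System.out.println(detectFormat("<employee/>".getBytes(StandardCharsets.UTF_8))); // xml
        System.out.println(detectFormat("{\"id\":1}".getBytes(StandardCharsets.UTF_8)));  // json
    }
}
```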
Can we use JSON Schema Validation in the place of Java Bean Validation JSR303 for Spring Boot Rest APIs for Enterprise Applications? Which one is more efficient to validate request Payload to Spring Boot Rest APIs?
(i.e. performance wise, cross-validation wise and RegEx pattern based validation)
It is a good question and there is no definitive answer, as it depends on the application domain and remains subjective to that. At the base level, which usually covers 90% of all use cases of validating user input to a REST service, both have equivalent facilities to validate data adequately. Both support constraints which can be used to achieve the same result.
However, on one front Bean Validation stands out: its ability to define custom validators, which can validate very specific domain- or application-dependent constraints. For example, if a class has three attributes (say, A, B and C) and the required constraint is that either A occurs or B & C occur, but not both, then it is not really possible to express that constraint in JSON Schema directly; it has to be handled through the design of the schema (similarly in XML, where it is actually more complicated).
On the other hand in Bean Validation a custom validator can be written to handle this situation quite easily.
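The check such a custom validator would perform boils down to a few lines; a sketch of the cross-field rule only, not wired into the Bean Validation API:

```java
public class EitherOrConstraint {

    // the "either A, or B and C, but not both" rule from the example above
    static boolean isValid(Object a, Object b, Object c) {
        boolean hasA = a != null;
        boolean hasBandC = b != null && c != null;
        return hasA ^ hasBandC; // exactly one of the two alternatives
    }

    public static void main(String[] args) {
        System.out.println(isValid("a", null, null));  // valid: A alone
        System.out.println(isValid(null, "b", "c"));   // valid: B and C
        System.out.println(isValid("a", "b", "c"));    // invalid: both
    }
}
```

In a real application this logic would live in a class-level ConstraintValidator so the framework reports it as a regular validation error.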
However, these kinds of cases are few and far between.
Another point to consider is the integration of Bean Validation into the underlying frameworks, e.g. Spring, Jersey, RESTEasy etc.; JSON Schema validation is not yet integrated into those frameworks.
Therefore, given the tooling support, it is perhaps better to stick with Bean Validation 2.0 and leverage the underlying framework's validation capability (this is, however, purely my view).
From an application development perspective, the Java Bean Validation approach is sufficient for the business needs. From a system integration perspective, JSON Schema externalizes the business rules and provides a platform-independent interface contract. So if your system involves many subsystems, JSON Schema gives a better way to validate message payloads.
I prefer the OpenAPI Specification, which can be regarded roughly as a JSON Schema dialect, over Bean Validation 2.0 (JSR 380).
OpenAPI is the de-facto standard (correct me if I'm wrong) for describing RESTful APIs today. Tools for validating against an OpenAPI spec are available; an incomplete collection can be found here. And of course it works well with Java/Spring.
OpenAPI validates the JSON string rather than a POJO, so it can handle the following case naturally while Bean Validation in Java cannot: say I want to validate the object in the request body of a PATCH request, and the object must have a property named A, while the value of A can be null.
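That PATCH case could be expressed in an OpenAPI 3.0 schema roughly like this (a hypothetical fragment; the property name `a` is just the example above):

```yaml
# property "a" must be present in the PATCH body, but its value may be null
requestBody:
  required: true
  content:
    application/json:
      schema:
        type: object
        required: [a]
        properties:
          a:
            type: string
            nullable: true
```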
And there is more than validation you can do with an OpenAPI spec in hand. An OpenAPI schema does not only define what the data model of a RESTful API looks like; it also describes other aspects of the API (endpoints, parameters, status codes etc.) in the same file. And there are plenty of code generators to auto-generate server-side or client-side code to serve requests or retrieve responses in whatever language.
If the message payload passed from one "filter" to the next "filter" in Spring XD stream is a custom Java class instance, I suppose some kind of serialization mechanism is required if the "pipe" in between is a remote transport.
What kind of "serialization"/"transformation" is available in Spring XD for this case?
Will Java serialization work for this case? And if the custom class is serializable, will Spring XD automatically serialise/deserialize the object, or we still need to give some hints in the stream definition/module definition?
Thanks,
Simon
XD uses Kryo serialization with remote transports. Java serialization (java.io) would work in theory; however, we don't want to assume that payload types implement java.io.Serializable. Also, I personally don't see any advantage in choosing Java serialization automatically over Kryo when the payload happens to be Serializable. Java serialization is supported via Spring XD's type conversion.
You should be able to create a stream containing something like:
filter1 --outputType=application/x-java-serialized-object | filter2 --inputType=my.custom.SerializablePayloadType
In this case, the type conversion will use Java serialization before hitting the transport. The message bus will detect that the payload is a byte array and will send it directly to the next module as is. The message containing the bytes will set the content-type header to the declared outputType so that it can be deserialized using Java serialization by the inbound converter.
The above requires that the payload implements Serializable. Also custom payload types must be included in Spring XD's class path, i.e., add a jar to xd/lib on each installed container.
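Under the hood, the conversion on each side of the transport does roughly what plain java.io serialization does; a simplified sketch (not Spring XD's converter code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class JavaSerializationRoundTrip {

    // outbound side: payload -> byte[] sent over the transport with the
    // application/x-java-serialized-object content-type header
    static byte[] serialize(Serializable payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(payload);
        }
        return bos.toByteArray();
    }

    // inbound side: byte[] -> payload, driven by the declared inputType
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = serialize("my payload");
        System.out.println(deserialize(wire)); // prints "my payload"
    }
}
```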
I am asking because I have only seen java beans used with a framework like struts or JSF.
Is it possible to send and access a java bean over an AJAX request?
Servlet myServlet creates and fills a java bean instance, and sets it in the request scope. An already loaded jsp/html page uses AJAX to request data from myServlet. Can this bean be accessed in any way? After thinking for a while, I have come to accept that this cannot be done.
If it can't be done, what would be the best practice when trying to transmit data from a model (i.e. user information from a database) asynchronously to a client when using Tomcat/Servlets and JSP?
It's technically possible if you serialize the javabean to a byte array or even a base64 encoded string using the usual Java Serialization API.
But how would it ever make sense to use a proprietary format to transfer data around? How would non-Java clients (e.g. JavaScript!) ever be able to consume the serialized Java object? These days XML, JSON and even CSV are much more widely supported and accepted. Practically every self-respecting programming language has tools to easily convert between XML/JSON/CSV and the model as defined in the programming language in question. E.g. Java has the JAX-RS API to easily convert between javabeans and XML or JSON. JavaScript has (obviously) builtin support for JSON (guess what the "JS" in JSON stands for).
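As a toy illustration of the javabean-to-JSON step (in practice you would let JAX-RS or a JSON library do this; the User bean here is made up):

```java
public class BeanToJsonDemo {

    // hypothetical javabean holding user information from the database
    public static class User {
        private final String name;
        private final int age;
        public User(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    // hand-rolled conversion for illustration only; a real application
    // would use a JSON library that handles escaping and nesting
    static String toJson(User u) {
        return "{\"name\":\"" + u.getName() + "\",\"age\":" + u.getAge() + "}";
    }

    public static void main(String[] args) {
        System.out.println(toJson(new User("Alice", 30)));
        // prints {"name":"Alice","age":30}
    }
}
```

The servlet would write this string to the response with a Content-Type of application/json, and the JavaScript side can consume it directly.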
To learn and play around with the basic concept, head to this answer: How to use Servlets and Ajax?
To learn about the advantages of JAX-RS over servlet, head to this answer: Servlet vs RESTful
You can still use Struts or JSF as you would normally to construct the markup (HTML), then consume the markup via AJAX and append it to the DOM. If you are familiar with jQuery, something like jQuery('#selector').load('actionUrl.action');
But if you are looking to transmit the javabean itself, then you will have to serialize it to XML or JSON. If you are using a web framework like Struts 2 or Spring, there is likely a mechanism for doing this serialization for you. If you want to edit the bean, you will have to serialize it, edit the serialized form, and then deserialize it back to the javabean.