OpenAPI multimodule EAR deployment - open-liberty

I would like to package two openapi.yaml definition files, each with its corresponding implementation in its own WAR file, into one EAR and deploy it to Open Liberty. So far this works: when Open Liberty starts up it shows me the URL for ~/openapi/ui and the corresponding REST services ~/converter1 and ~/converter2. But in openapi/ui I can only see one service definition; the second one is not shown. Am I doing something wrong? Should my scenario work with Open Liberty?
My general use case is to have several REST services, each defined by its own OpenAPI document, grouped together as long as they belong to a common domain. Right now I can run each openapi.yaml on its own Open Liberty server, but I would like to group my REST services together onto one Open Liberty server.
Does somebody know a solution to my problem?

As you have noted, Open Liberty's MicroProfile OpenAPI support (via the mpOpenAPI-1.0 feature) only supports a single application per server.
If you want to aggregate multiple OpenAPI documents in a single server you have to use WebSphere Liberty's openapi-3.1 feature. See these docs for more info.
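A minimal server.xml sketch of that setup, as I understand it (the EAR name and the extra features shown are assumptions; check the exact feature list against the docs above):

<server description="aggregated OpenAPI for multiple WARs">
  <featureManager>
    <feature>openapi-3.1</feature>
    <feature>jaxrs-2.1</feature>
  </featureManager>
  <enterpriseApplication location="converters.ear"/>
</server>

With the openapi-3.1 feature enabled, the OpenAPI documents of the applications deployed to that server should be aggregated under a single /openapi endpoint and UI.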

Related

Dynamic Camel route configuration at deployment time: Java DSL or XML DSL?

Let me preface this with the fact that I am still very new to Apache Camel. I'm still trying to understand how it all works, and what needs to be done (and HOW to do it) to achieve a particular effect.
I am trying to develop a Spring Boot application that will use Apache Camel to handle the transmission (and possibly also receipt) of data to/from a number of possible sources and destinations. The purpose of the application is to provide a means to produce/generate network traffic, at the network application level, that will be fed into another Spring Boot application - let's call this the target. We are trying to observe and measure the effects various network loads have on the target.
We would like to be able to transmit data via a number of protocols, including: ftp, http/s, file systems (nfs), various mail protocols (smtp, pop) and data streaming protocols for voice and video. There may be other protocols added at a later time. The data itself is irrelevant, we just need to be able to transmit data via various protocols with various loads.
These applications/services will be running in a containerized environment (Docker) that will be run within our local development and test environment, as well as possibly in a cloud environment, such as AWS. We have used Docker, Ansible, Terraform and are currently working towards using Kubernetes and Istio to manage the configuration, deployment, and operation of these applications.
We need to be able to provide specific configurations of Camel routes for particular deployments.
It would appear that the preferred method to configure Camel routes is via the Java DSL rather than the XML DSL. The Camel documentation and nearly every other source of information I've found have a strong bias towards the Java DSL; examples of XML DSL route configuration are few and far between.
My initial impression is that going the Java DSL route (excuse the pun) would not work well with our need to deploy a Camel application with a specific route configuration. It seems like Java DSL route configurations have to be hardwired into the code.
We think that it will be easier to provide a specific route configuration via an XML file that can be included in a deployment, hence why I've been trying to investigate and experiment with XML DSL. Perhaps we are mistaken in this regard.
My question to the community is: Considering what I've described above, can the Java DSL approach be used to meet the requirements as I've described them? Can we use Java DSL in a way that allows for dynamic route configuration? Keep in mind we would not be attempting to change configuration during operation, just in the course of performing a deployment.
If Java DSL could be used for this purpose, it would be very much appreciated if pointers to documentation, examples, etc. could be provided.
For your use cases you could also use the XML DSL. The book below covers most aspects of Camel development with examples, and the authors describe the XML DSL equivalent for most of the Java DSL examples.
https://www.manning.com/books/camel-in-action-second-edition
In the GitHub repository below you can find the source code for all the examples listed in the book.
https://github.com/camelinaction/camelinaction2
A simple tutorial and GitHub repository for Apache Camel with Spring Boot:
https://www.baeldung.com/apache-camel-spring-boot
https://github.com/eugenp/tutorials/tree/master/spring-boot-modules/spring-boot-camel
A Maven plugin for building and deploying Spring Boot container applications into a Kubernetes cluster:
https://maven.fabric8.io/
If your company can afford some funding for your effort, the link below lists commercial offerings around Camel.
https://camel.apache.org/manual/latest/commercial-camel-offerings.html
Thanks
Madhu Gupta
Our team has a few projects which use the Java DSL for building routes. In order to make them dynamic, there are control structures that iterate over the configuration and set endpoints from it. That works for us because the routes are basically all the same, just with different sources and sinks.
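A minimal sketch of that pattern, with hypothetical names (the RouteConfig holder stands in for whatever configuration binding you use, e.g. properties bound at deployment time):

import java.util.List;
import org.apache.camel.builder.RouteBuilder;

public class ConfiguredRoutes extends RouteBuilder {

    // Hypothetical holder for one configured route; in practice bound from application.yml/properties.
    public static class RouteConfig {
        public String id;
        public String source; // e.g. "timer:load?period=1000" or "ftp://host/dir"
        public String sink;   // e.g. "http://target:8080/ingest" or "file:/data/out"
    }

    private final List<RouteConfig> configs;

    public ConfiguredRoutes(List<RouteConfig> configs) {
        this.configs = configs;
    }

    @Override
    public void configure() {
        // One route per configured source/sink pair; the endpoints are just URI strings.
        for (RouteConfig cfg : configs) {
            from(cfg.source)
                .routeId(cfg.id)
                .to(cfg.sink);
        }
    }
}

Because the endpoints are plain URI strings read from configuration, a new deployment only needs a different set of properties, not a code change.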
If you could dynamically add or change the XML DSL files in a way that doesn't involve redeploying your application, that might be a viable route to follow. One might, for example, change the camel.springboot.xml-routes property to point to a folder whose contents change as needed.
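A rough sketch of that idea, assuming the camel.springboot.xml-routes property mentioned above and a hypothetical folder /etc/camel/routes mounted into the container:

camel.springboot.xml-routes=file:/etc/camel/routes/*.xml

and a route file dropped into that folder, e.g. http-load.xml (endpoint URIs are placeholders for whatever load you want to generate):

<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="http-load">
    <from uri="timer:load?period=1000"/>
    <to uri="http://target-service:8080/ingest"/>
  </route>
</routes>

Swapping or editing the files in that folder between deployments changes the routes without touching the Java code.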

Spring Cloud Config - How do we see this being used?

I have been playing with Spring Cloud Config and like many of the ideas I see there. I would like to better understand how its creators intended it to be used, though.
Let's say that I have several services that support a larger API. Because these services are independent of each other, their source is managed in separate repositories. This allows us to version and deploy them separately from one another. Today, their properties are managed individually.
I like the idea of having a single config server provide all of the configuration information for the individual applications/services that support this larger API. Looking at the default implementation of EnvironmentRepository (which is GIT based), I would have to have a single repository with all of my application config files in it. Because they all live within the same repository, they would all be managed/versioned together in a single place.
How do I make both models work with each other? Would it be better to have a repository per application instead of one for all applications? What are your thoughts?
-Joshua
It might just be a detail of the implementation of the EnvironmentRepository. See here for some discussion on how and when that might happen.
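One concrete possibility (a sketch, assuming the Git backend; repository URLs and patterns below are placeholders): the config server can map application-name patterns to separate repositories, so each service keeps its configuration in its own repo while clients still talk to a single server.

spring.cloud.config.server.git.uri=https://git.example.com/config/default-config
spring.cloud.config.server.git.repos.service-a.pattern=service-a*
spring.cloud.config.server.git.repos.service-a.uri=https://git.example.com/config/service-a-config
spring.cloud.config.server.git.repos.service-b.pattern=service-b*
spring.cloud.config.server.git.repos.service-b.uri=https://git.example.com/config/service-b-config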

Sharing files and backbeans between different wars Java EE

I am working on a large scale system using PrimeFaces 5.0, Java EE 7, Maven 3.0.5, Netbeans 7.4 & GlassFish 4.0
I want to implement it as multiple WARs and multiple EJBs in one EAR.
The WARs share common files such as JS, CSS, XHTML, backing beans and converters.
I have achieved this using a JAR which contains these resources:
different WAR files, shared resources
I need a session-scoped bean to be shared between different WARs. I found the link below, but it is more than what I need.
http://docs.oracle.com/cd/E18686_01/coh.37/e18690/glassfish.htm#CEGBDHJB
So my questions are:
1. Is using a JAR the right approach to share what I want?
2. Where do I put JARs like PrimeFaces or OmniFaces in the project so that they use the same class loader?
3. How can I share a session-scoped bean between different WARs?
I have been working on an EAR project with similar requirements to yours. Based on our experience:
1. Sure. We have separated our WAR projects and use them as extended controllers to carry out front-end logic and pass data to the view, and they make their service calls via a JAR file called common-services.jar. Our whole service layer lives in a single JAR file. If you ask my personal opinion, though, I think it would have made a lot of sense to create a third WAR just for the services and talk RESTful with all the front-end WARs; that way the service calls could be opened to third-party users without any further work. So to sum it up: yes, it is an acceptable approach, but you should also consider packaging the services as a WAR.
2. In a parent POM above all the WARs, so that every WAR uses the same version and it is managed from a single POM (see the sketch after this answer).
3. Move all session-based operations to the third JAR/WAR discussed in question 1; it makes much more sense that way. Otherwise I suppose you will need solutions like single sign-on, but my first suggestion works like a charm for us.
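A minimal sketch of the parent POM mentioned in answer 2 (module names, coordinates and versions are illustrative only):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>app-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <module>web-admin</module>
    <module>web-public</module>
    <module>app-ear</module>
  </modules>

  <dependencyManagement>
    <dependencies>
      <!-- Every WAR inherits these versions, so PrimeFaces/OmniFaces stay in sync across modules -->
      <dependency>
        <groupId>org.primefaces</groupId>
        <artifactId>primefaces</artifactId>
        <version>5.0</version>
      </dependency>
      <dependency>
        <groupId>org.omnifaces</groupId>
        <artifactId>omnifaces</artifactId>
        <version>1.8.1</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>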

Spring boot application.properties maven multi-module projects

We are using spring boot in a multi-module project.
We have a domain access module which holds the common domain object classes and repositories, together with configuration for the data source, JPA, Hibernate, etc. These are configured using an application.properties. We put all this configuration into the common module to avoid duplicating it in the higher-level modules.
This all works fine when building the domain module, so the configurations are loaded correctly in the test units.
However, the problems start when we try to use the domain module in the higher-level modules; they have their own application.properties, which means Spring loads that file and not the domain module's application.properties. As a result the data source is not configured, because only the higher module's application.properties is loaded.
What we would like is both the domain module and higher level application properties to be loaded by Spring. But we can't see any easy way to do this.
I'm thinking this must be a common problem, and wonder if there any recommended solutions for this problem?
As we are using Spring Boot, the solution should ideally use annotations instead of applicationContext.xml.
Maybe you should only use application.properties in the top-level aggregator project?
You can always use @PropertySource in the child projects to configure them with a name that is specific to their use case (see the sketch after this answer).
Or you can use different names for each project and glue them together in the top-level project using spring.config.location (comma-separated).
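For example, a child project could load its own file with @PropertySource (the file name child-module.properties is an assumption):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource("classpath:child-module.properties")
public class ChildModuleConfig {
    // beans specific to this module go here
}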
I agree with Dave Syer. The idea of splitting an application into multiple modules is that each of those is an independent unit, in this case a jar file. Theoretically you could split each of those jar files into their own source repositories, and then use them across multiple projects. Let's say you want to reuse these domain classes in both a web and a batch application: if all the APPLICATION-level configuration is stored within each of the individual modules, it severely reduces their reusability.
IMO only the aggregating module should contain all of the configuration necessary to run as an application, everything else is simply a dependency that can be remixed and reused as necessary.
Maybe another approach could be to define specific profiles for each module and use the application.properties file just to specify which profiles are active
using the spring.profiles.include property.
domain-module
- application.properties
- application-domain.properties
app-module
- application.properties
- application-app.properties
and into the application.properties file of app-module
spring.profiles.include=domain,app
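For example, application-domain.properties in the domain module could hold the datasource and JPA settings (values are placeholders):

spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=app
spring.datasource.password=secret
spring.jpa.hibernate.ddl-auto=validate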
Another thing you can do (besides only using application.properties at the top-level as Dave Syer mentions) is to name the properties file of the domain module something like domainConfig.properties.
That way you avoid the name clash with application.properties.
domainConfig.properties would contain all the data needed for the domain module to be tested on its own. Integrating with the rest of the code can easily be done either by using multiple @PropertySource annotations (one for domainConfig.properties and one for application.properties) or by configuring a PropertySourcesPlaceholderConfigurer bean in your Java Config (check out this tutorial) that refers to all the needed property files.
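A rough sketch of that Java Config variant, using the file names from this answer:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
@PropertySource({"classpath:domainConfig.properties", "classpath:application.properties"})
public class PropertyConfig {

    // static so the placeholder configurer is registered before regular beans are created
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}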
Spring Boot 2.4 and later support spring.config.import, e.g.:
application.name=myapp
spring.config.import=developer.properties
# import from other module
spring.config.import=classpath:application-common.properties
or combined with spring.config.activate.on-profile:
spring.config.activate.on-profile=prod
spring.config.import=prod.properties
ref: https://spring.io/blog/2020/08/14/config-file-processing-in-spring-boot-2-4

xmlaccess deploy portlet with library reference

I have a problem deploying a JSR 168 portlet using xmlaccess. Deploying it and adding it to a concrete page works fine, but I would like to add a shared library reference automatically. Is that possible? I added a shared library named 'libshared' using the IBM WAS console. Can I add this reference in the input XML used by xmlaccess?
I don't think you can do this in xmlaccess. But you may try putting a reference to the library in the Manifest.MF file in the META-INF directory of your portlet's WAR file (sketched below).
Or you could just put the shared JAR file under your /shared/ext directory, or inside your wps.ear file. Mind you, either of these two solutions would share your library with the entire portal installation rather than just selected portlets.
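As a rough sketch of the Manifest.MF idea (this assumes libshared.jar sits next to the WAR inside the enclosing EAR, and note this is a plain Class-Path reference rather than a WAS shared library definition):

Manifest-Version: 1.0
Class-Path: libshared.jar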
You can deploy the application using wsadmin or similar and use that to update the classpath (i.e. for the shared library), you can then use xmlaccess to deploy the portlets and reference the previously deployed application - although I think this may only work in WebSphere Portal 6.1.
Give me a shout if you need further details.
I encountered this as well, a while ago... and researched it to the max, including spending some time chatting with IBM's support in various levels.
The XMLAccess protocol doesn't provide for such "system-level" configuration alongside Portlet application deployment; it can only be used to install, customize and uninstall Portlet applications and related artifacts.
If your deployment strategy involves deploying WAR files directly through XMLAccess, then you will have to manually add the shared-library to the application through the WAS admin console; this will have to be done manually because, when deploying WAR files through XMLAccess, an EAR with some random name is being created by WebSphere Portal to "host" your WAR file; hence you can't script the attachment of a shared library.
(alternatively, you may wish to add the shared library to the server's (WebSphere_Portal) classpath)
If your deployment strategy, instead, involves deploying Portlet applications packaged as EARs, then you're in a better position; you could automate the shared-library attachment as part of your EAR deployment process, then use XMLAccess to inform WebSphere Portal about the location, in the EAR, of your Portlet applications (which is what Michael mentioned above; it works in WebSphere Portal 6.0 as well).
Good luck.
