I've been looking for someone else doing this same thing, but haven't seen a scenario that's quite like this so I thought I'd see if anyone here has any good ideas on how to accomplish this.
My group builds and maintains an open-source neuroimaging data archive tool called XNAT. Previous versions of our application have always required users to run a builder application that took in a build.properties file and used that to initialize the database server configuration, among other things. We're trying to get down to a single installable war file that we can make available on the NeuroDebian repository. To do this, we need to be able to start the application WITHOUT any database configuration information and run through a configuration wizard, a la WordPress or Drupal installations, in which the user enters the database configuration. Finally, we need to store that configuration information SOMEWHERE and restart or re-initialize the application context so that its data source is started, Hibernate entity scans are run, all autowired or injected dependencies that require the data source or Hibernate transaction manager are resolved, services are scanned for @Transactional annotations, and so on.
I can easily see how we can use the new Spring Framework WebApplicationInitializer to detect whether the user has already set up the database configuration and initialize the app properly based on that:
If the database has not been configured, create a servlet that just supports the UI for the initialization wizard
If the database has been configured, create the regular application context
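Something along these lines is what I have in mind, a minimal sketch only: the config check, the wizard and application @Configuration classes, and the class names are all placeholders rather than our actual code.

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class ConditionalInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        if (!databaseIsConfigured()) {
            // Case 1: no database configuration yet -- register only the wizard UI,
            // backed by a context that contains no data source or Hibernate beans.
            AnnotationConfigWebApplicationContext wizardContext = new AnnotationConfigWebApplicationContext();
            wizardContext.register(SetupWizardConfig.class); // hypothetical wizard-only @Configuration class
            ServletRegistration.Dynamic wizard =
                    servletContext.addServlet("setup", new DispatcherServlet(wizardContext));
            wizard.addMapping("/setup/*");
            wizard.setLoadOnStartup(1);
        } else {
            // Case 2: database is configured -- bootstrap the full application context.
            AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
            rootContext.register(AppConfig.class); // hypothetical full @Configuration (data source, Hibernate, etc.)
            servletContext.addListener(new ContextLoaderListener(rootContext));

            ServletRegistration.Dynamic dispatcher =
                    servletContext.addServlet("dispatcher", new DispatcherServlet(rootContext));
            dispatcher.addMapping("/");
            dispatcher.setLoadOnStartup(1);
        }
    }

    private boolean databaseIsConfigured() {
        // Placeholder check -- e.g. look for a saved properties file or a system property.
        return System.getProperty("xnat.db.configured") != null;
    }
}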
The problem in the first case is: what happens once the user has completed the initialization wizard? We can store the database configuration somewhere and now we're ready to go, but... how do we get the regular application context working? Can we just take the code that we'd call in the already-configured scenario and call that? Will that initialize the application properly, with component scans and so on all being handled, or...?
The only solution we have currently is to have the user restart the server manually (it's usually Tomcat) or use the server manager application to restart just our application. That's not very aesthetically pleasing though.
My end goal here will be to write a simple test app that takes in the database credentials and then tries to initialize everything else afterwards, but I'm hoping to see if anyone's thought about this particular issue and/or tried it and has any advice on how to handle it. Any help would be greatly appreciated!
Related
Our application is built on Spring Boot; the app is packaged as a war file and run with java -jar xx.war -Dspring.profile=xxx. Generally the latest war package is served by a static web server like nginx.
Now we want to know if we can add auto-update for the application.
I have googled, and people suggested using an application server that supports hot deployment; however, we use Spring Boot as shown above.
I have thought about starting a new thread once my application starts, then checking for updates and downloading the latest package. But I have to terminate the current application to start the new one since they use the same port, and if I close the current app, the update thread will be terminated too.
So how do you handle this problem?
In my opinion that should be managed by some higher-order dev-ops level orchestration system, not by either the app or its container. The decision to replace an app should be made at the dev-ops level, not the app level.
One major advantage of Spring Boot is the inversion of the traditional model: instead of deploying the application into a web container, the web container is usually (and as best practice with Spring Boot) embedded within the app itself. Hence it is fully self-contained and, crucially, immutable. It should therefore not be the role of the app/web container to replace either part of or all of itself.
Of course you can do whatever you like, but you might find that the solution is not easy, because it is not conventional to do it this way.
I am working on a spring boot application.
I wanted to know what happens between the time the application starts running and the point at which it becomes ready for user interaction.
I tried going through the console logs but I am still unsure as to what happens when.
I believe you should elaborate your question a bit more, because you can build different types of applications using Spring Boot. In a nutshell, during startup the application will basically try to load the "beans" defined in the related context(s) and the pre-configured components, determine the active profile, load the properties files, etc. Some Spring and application events are also generated during startup.
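For example, you can hook into those events yourself to see when the application is actually ready; a small sketch (the listener class name is illustrative):

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Listens for the event Spring Boot publishes once the context has been refreshed
// and the application is ready to handle requests.
@Component
public class StartupListener implements ApplicationListener<ApplicationReadyEvent> {

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        System.out.println("Started: " + event.getSpringApplication().getMainApplicationClass());
    }
}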
A good way to understand what's going on behind the scenes is to run the application in DEBUG mode. By default, the application's log level is set to INFO.
Have a look at this link for further details:
http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#boot-features-spring-application
I hope this helps as a starting point.
WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session-closed errors and SQL exceptions, whereas the Identity Server itself is still operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon datasources and use those (they come with built-in pooling, and there's no need to worry about any configuration, etc.). You can either access these programmatically by directly calling the ndatasource component or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.ndatasource.core</artifactId>
    <version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for the above dependency.)
You can inject DataSourceService as in this code (the @scr.reference tag refers to the service you need to inject; this uses the Maven SCR plugin to parse these dependencies when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle, since it uses declarative services (and you'll have to place it in repository/components/dropins). Otherwise the dependencies won't be injected at runtime.
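A rough sketch of such a component using the maven-scr-plugin javadoc tags (the class name, component name, and reference name are illustrative; check the WSO2 samples for the exact conventions used in your version):

import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.userstore.datasource.consumer" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class DataSourceConsumer {

    private DataSourceService dataSourceService;

    // Called by the declarative services runtime when DataSourceService becomes available.
    protected void setDataSourceService(DataSourceService dataSourceService) {
        this.dataSourceService = dataSourceService;
    }

    // Called when the service goes away.
    protected void unsetDataSourceService(DataSourceService dataSourceService) {
        this.dataSourceService = null;
    }
}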
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
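If you go the JNDI route instead, the lookup would be something along these lines (the JNDI name below is an assumption; it has to match the name you give the datasource in master-datasources.xml):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupExample {

    public static DataSource lookupDataSource() throws NamingException {
        // "jdbc/CustomUserStoreDS" is an illustrative JNDI name; match it to the
        // <jndiConfig><name> element of your datasource in master-datasources.xml.
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("jdbc/CustomUserStoreDS");
    }
}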
Rajeev's answer was really insightful and helped with investigating and evaluating what I should do. But, I didn't end up using that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which, by default, called setMandatoryProperty to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load that datasource and ignore any other params set. Seeing that, I got rid of the four params listed above and forced my custom user store to only allow a DATASOURCE property, which I set in code to the default JNDI name I would always give that datasource. With that, I was able to configure the JDBC pool for this datasource with all the params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml. Now the only thing that would ever need to change is the datasource's configuration in that file.
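To illustrate, the constants class ended up looking roughly like the sketch below. This is a hedged reconstruction following the common WSO2 custom user store sample pattern, not a verbatim copy of my code; the Property constructor and the JNDI name shown are assumptions to verify against your IS version.

import java.util.ArrayList;

import org.wso2.carbon.user.api.Property;
import org.wso2.carbon.user.core.jdbc.JDBCRealmConstants;

public class CustomUserStoreConstants {

    public static final ArrayList<Property> CUSTOM_UM_MANDATORY_PROPERTIES = new ArrayList<Property>();

    static {
        // Only the datasource is mandatory now; pooling, validationQuery,
        // testOnBorrow, etc. all live in master-datasources.xml.
        setMandatoryProperty(JDBCRealmConstants.DATASOURCE, "jdbc/CustomUserStoreDS",
                "JNDI name of the datasource defined in master-datasources.xml");
    }

    private static void setMandatoryProperty(String name, String value, String description) {
        CUSTOM_UM_MANDATORY_PROPERTIES.add(new Property(name, value, description, null));
    }
}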
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to support; I can just manage it all easily in the xml file. This really has advantages for portability of configs and for IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)
I want to create a Java EE application with JSF + Spring Framework on WildFly AS. One of the hot requirements is:
Plug and Play Modules: this means that if I update my application or add a new module to it (obviously updating bean.xml, web.xml, POJO classes, jars, etc.), then the changes take effect without redeploying my *.war file and without restarting my WildFly AS.
This is a complicated requirement for a few reasons. How will you handle changes to your DB schema/entity model? How will you handle sessions which are in progress at the time of the upgrade and are actively using the 'old' code? How do you handle changes to container-managed code, i.e. code that the container only processes at deployment time, for example new EJBs?
One approach I have seen used in production to achieve some of these requirements is to do rolling updates with application versioning and full schema backwards compatibility. This is done in a clustered environment which is fronted by proxy servers that can allow active sessions using the old version of the application to continue until finished and ensure that new sessions go to servers/contexts containing the new version of the code. So you end up still deploying WARs which contain the new version of your code, and eventually undeploy the old versions when all old sessions have ended/expired. To do this you have to assume the burden in your code to fully support working against two simultaneous versions of your model when new versions introduce changes to it. This is not a trivial burden. You also have to assume the burden of the extra infrastructure to route sessions appropriately.
I know a product like JRebel will let you do hot deploys of code (even things like EJBs), with the idea being that it shortens the develop/test cycle, but I haven't seen it used outside of a development environment. Also, you would still have to deal with active sessions that were started on the old version/model.
I'd like to create a Configuration object in OSGi, but one that won't be persisted, so it won't be there when the framework is restarted. Similar to START_TRANSIENT for bundles.
Some background: I've got an OSGi (Felix) based client-side application, deployed over OBR. The configuration object I'm talking about effectively boots the application. That works fine, but sometimes the content has changed while the context was stopped. In that case, it still boots the application as OSGi revives all bundles and restores all configuration options; then I inject the correct configuration, and the application stops and restarts again.
So it does actually work, but the app starts twice, and I can't get access to the framework before it reconstructs its old state.
Any ideas?
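For context, creating such a configuration through Config Admin the standard (persistent) way looks roughly like this (the PID and properties are illustrative); it's exactly this persistence that I want to avoid:

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class AppBootConfigurer {

    // configAdmin would typically be injected (DS, Blueprint, or a service tracker).
    public void pushBootConfig(ConfigurationAdmin configAdmin) throws Exception {
        // "com.example.app.boot" is an illustrative PID.
        Configuration config = configAdmin.getConfiguration("com.example.app.boot", null);
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("content.url", "http://example.org/content"); // illustrative property
        config.update(props); // persisted by Config Admin -- this is what survives restarts
    }
}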
As BJ said there is no standard support for this in the Configuration Admin spec.
However, the Felix implementation supports two features which may help you. First, you can set the felix.cm.dir property, which configures where ConfigAdmin saves its internal state (by default somewhere under the framework storage directory). You could set this to a location that you control and then simply wipe it every time you start OSGi (you could also wipe the entire OSGi framework storage directory on every start... some people do this, but it has wider consequences than what you asked for).
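A sketch of setting that property when launching the framework programmatically (the path is illustrative; you can equally put felix.cm.dir in Felix's config.properties):

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Launcher {

    public static void main(String[] args) throws Exception {
        Map<String, String> config = new HashMap<String, String>();
        // Illustrative location; wipe this directory before each start to drop old configurations.
        config.put("felix.cm.dir", "/tmp/myapp-config");

        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(config);
        framework.start();
        // ... install/start bundles, then framework.waitForStop(0) ...
    }
}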
Second, if you need a bit more control, Felix ConfigAdmin supports customising its persistence with a PersistenceManager service. You could probably implement this and return empty/doesn't-exist for the particular pids that you want to control.
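As one possible variation on that idea, a memory-only PersistenceManager would look roughly like the sketch below. The interface is org.apache.felix.cm.PersistenceManager; how ConfigAdmin picks up a custom persistence manager (service ranking vs. a name property plus the felix.cm.pm framework property) depends on the ConfigAdmin version, so treat the registration details here as assumptions to verify.

import java.io.IOException;
import java.util.Dictionary;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.felix.cm.PersistenceManager;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Keeps configurations only in memory, so nothing survives a framework restart.
public class MemoryPersistenceManager implements PersistenceManager, BundleActivator {

    private final Map<String, Dictionary> configs = new ConcurrentHashMap<String, Dictionary>();

    public boolean exists(String pid) {
        return configs.containsKey(pid);
    }

    public Dictionary load(String pid) throws IOException {
        Dictionary dict = configs.get(pid);
        if (dict == null) {
            throw new IOException("No configuration for pid " + pid);
        }
        return dict;
    }

    public Enumeration getDictionaries() {
        return new Hashtable<String, Dictionary>(configs).elements();
    }

    public void store(String pid, Dictionary properties) {
        configs.put(pid, properties);
    }

    public void delete(String pid) {
        configs.remove(pid);
    }

    public void start(BundleContext context) {
        // Register as a service; depending on the ConfigAdmin version you may need a
        // "name" service property and/or a service ranking for it to be picked up.
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("name", "memory"); // illustrative persistence manager name
        context.registerService(PersistenceManager.class.getName(), this, props);
    }

    public void stop(BundleContext context) {
        // The service is unregistered automatically when the bundle stops.
    }
}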
The OSGi Config Admin spec does not support this. I also do not know of a non-standard means for any of the CM implementations I am familiar with.
Ok, what I did in the end was the following:
I created a special, really small 'boot' bundle, which I do not provision from OBR; instead, I install it from the classpath.
That bundle controls the configuration, and I use START_TRANSIENT the moment I really want to load that configuration.
Not exactly pretty, but it gets the job done. I do think transient configuration would make sense to have in OSGi.
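For reference, the transient start from that boot bundle is essentially this (the bundle location is illustrative):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class BootHelper {

    // Installs the configuration bundle and starts it transiently, so the framework
    // does not remember the started state across restarts.
    public static void startConfigBundle(BundleContext context) throws BundleException {
        Bundle bundle = context.installBundle(
                "file:boot/config-bundle.jar"); // illustrative location, e.g. resolved from the classpath
        bundle.start(Bundle.START_TRANSIENT);
    }
}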