We have a large enterprise application in which users navigate through registration, data entry, and finally a results page based on the processed data. Currently the navigation logic is handled in Java classes: at each step, the next page to navigate to is decided based on the data entered, the user's action, and so on.
We have decided to use Spring Web Flow to implement the navigation. However, we are not sure we can anticipate all possible scenarios beforehand and create a flow that covers every one of them.
Hence, we are looking for a way to create the flow dynamically at runtime, depending on conditions in the application. How can we accomplish that in SWF? Any help would be highly appreciated.
Under the covers, Spring Web Flow uses a flow registry (FlowDefinitionRegistryImpl) to handle the mapping of URLs to internal resources. Internally, the flow registry is (ultimately) a wrapper around a HashMap.
I believe that a flow defined at runtime is beyond the scope of Spring Web Flow out of the box.
However, given enterprise resources, you can extend FlowDefinitionRegistryImpl or implement FlowDefinitionRegistry with a custom class that behaves the way you want and allows you to change the flow "on the fly". You would need to pay attention to performance and synchronization, and define a mechanism for refreshing the underlying Map.
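A minimal sketch of that direction, assuming the Spring Web Flow 2.x registry API (registerFlowDefinition taking a FlowDefinitionHolder; signatures may differ in your version) — not production code:

    import org.springframework.webflow.definition.registry.FlowDefinitionHolder;
    import org.springframework.webflow.definition.registry.FlowDefinitionRegistryImpl;

    // A registry that can be mutated while the application runs.
    public class RuntimeFlowRegistry extends FlowDefinitionRegistryImpl {

        // Register or replace a flow definition at runtime. Synchronized as a
        // crude answer to the synchronization concern mentioned above.
        public synchronized void refreshFlow(FlowDefinitionHolder holder) {
            // Assumption: registering under an existing flow id overwrites the
            // old definition in the underlying Map.
            registerFlowDefinition(holder);
        }
    }

You would still have to decide how the FlowDefinitionHolder itself is produced from your runtime conditions, which is the harder part of the problem.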
I have an application which acts as a proxy between different systems and has no database of its own. There are a few use cases covered by the application:
Display data from a specific system or systems
Store data to a specific system or systems
The application has its own front-end and back-end (a Spring Boot and Angular stack). The back-end is responsible for getting data from, and putting data to, the external systems; the front-end communicates only with the back-end and knows nothing about the external systems. The back-end also follows a hexagonal architecture and has its own domain models.
There is now a requirement to audit the business use cases of the application. For instance, if a user goes to some feature of the application and makes changes there, those changes should be audited.
I've googled this topic, but I only found entity-based auditing like this: https://docs.spring.io/spring-data/jpa/docs/1.7.0.DATAJPA-580-SNAPSHOT/reference/html/auditing.html. For my case I would need something similar, but based on domain models rather than on entities.
Could you please recommend a direction to cover this? Specifically, which library could be used to turn the state of a domain model into audit events? I've found something like this: https://logging.apache.org/log4j-audit/latest/gettingStarted.html, but I am really not sure it is the right way to go.
I would say you can build your own auditing strategy based on events.
Let us take the example you gave: "if a user goes to some feature of the application and makes changes there, those changes should be audited".
I assume you have a service that handles these requests from a REST API or something similar. That same service would not only communicate with the external systems but would also publish an event with, say, information about the user and the performed changes (here you could rely on Redis, for example, but there are other options like RabbitMQ or even Kafka, depending on how reliable you want your auditing feature to be).
Then you would have another component of your app listening for these events and storing them in a database (I guess that is the purpose). You could even have a separate micro-service just for this purpose, depending on how complex the auditing system is meant to be.
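A minimal in-process sketch of that idea, using Spring's own application events instead of Redis/RabbitMQ/Kafka (the AuditEvent record, SomeDomainModel, and all names are illustrative):

    import java.time.Instant;
    import org.springframework.context.ApplicationEventPublisher;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;

    // Illustrative domain model and the audit event built from its state.
    record SomeDomainModel(String id, String value) {}
    record AuditEvent(String user, String action, SomeDomainModel snapshot, Instant at) {}

    @Service
    class FeatureService {
        private final ApplicationEventPublisher events;

        FeatureService(ApplicationEventPublisher events) { this.events = events; }

        public void changeFeature(String user, SomeDomainModel model) {
            // ... talk to the external systems via your hexagonal ports ...
            events.publishEvent(new AuditEvent(user, "FEATURE_CHANGED", model, Instant.now()));
        }
    }

    @Component
    class AuditListener {
        @EventListener
        public void on(AuditEvent event) {
            // Persist or forward the event here; swap the in-process publisher
            // for Redis/RabbitMQ/Kafka if audit records must survive crashes.
        }
    }

Because the events are built from your domain models, the audit trail stays on the domain side of the hexagon rather than leaking persistence entities.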
If you want something more "magical" and automated you can take a look at the Spring Boot Data Audit code to see how it is implemented, but you might end up building an over-engineered solution.
Using the Mendix Business Modeler to build web applications is fundamentally different from developing web applications using technologies like Java/Spring/JSF. But I'm going to try to compare the two for the sake of this question:
In a Java/Spring based application, I can integrate my application with the 3rd party product Ehcache to cache data at the method level. For example, I can configure Ehcache to store the return value for a given method (with a specific time-to-live). Whenever this method is called, Ehcache will automatically check if the method has been called previously with the same parameters and if there is a stored return value in the cache. If so, the method is never actually executed and the cached return value is immediately returned instead.
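For concreteness, the Java/Spring version of that pattern looks roughly like this (a sketch using Spring's cache abstraction backed by Ehcache; the cache name and service are illustrative, and the time-to-live lives in the Ehcache configuration):

    import java.math.BigDecimal;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class PriceService {

        // Spring consults the "prices" cache using the method arguments as the
        // key; on a hit the method body is skipped and the cached value returned.
        @Cacheable("prices")
        public BigDecimal lookupPrice(String productId) {
            return callExpensiveBackend(productId); // runs only on a cache miss
        }

        private BigDecimal callExpensiveBackend(String productId) {
            return BigDecimal.TEN; // placeholder for the real, slow computation
        }
    }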
I would like to have the same capabilities within Mendix, but in this case I would be caching Microflow return values. Also, I don't want to be forced to add actions all over the place explicitly telling the Microflow to check the cache. I would like to register my Microflows for caching in one centralized place, or simply flag each Microflow for being cached. In other words, this question is just as much about the concept of aspect-oriented programming (AOP) in Mendix as it is about caching: is there a way to get hooks into Microflow invocation so I can apply pre- and post-execution operations? In my opinion, the same reasons why AOP has its place and purpose in Java exist in Mendix.
The Mendix platform tries to do as much for you as possible; in this case that means the platform already has an object cache that keeps all objects that need caching.
Internally the Mendix platform uses Ehcache to do that.
However, it is not really possible to influence that cache as you normally would in Java/Spring. This is because the Mendix platform already tries to cache all objects as efficiently as possible.
Every object you create is always added to the cache. It stays in the cache until the platform detects that the specific object can no longer be accessed, either through the UI or through a microflow.
There are also API calls available that instruct the platform to retain an object in the cache regardless of its usage. But that doesn't provide the flexibility you asked for.
But specifically on your question, my initial response would be: Why would you want to cache a microflow output?
Objects are already cached in memory, and the browser client only refreshes the cache when instructed. Any objects that you are using will be cached.
Also, looking at most of the microflows that we use, I don't think it is likely that I would want to cache the output instead of re-running them. Given the way the majority of our microflows are designed, it is likely that most of them can return a slightly different output every time you execute them.
There are many listener classes you can subscribe to in the Mendix platform that allow you to trigger something in addition to the default action. But that would require some detailed knowledge of the current behavior.
For example you can override the login action, but if you don't perform all the correct validations you could make the login process less secure.
We are developing a web application (let's call it an image bank) for which we have identified the following needs:
The application caters to customers, each of which consists of a set of users.
A new customer can be created dynamically, and a customer manages its users
Customers have different feature sets which can be changed dynamically
Customers can develop their own features and have them deployed.
The application is homogeneous and has a current version, but version lifting of customers can still be handled individually.
The application should be managed as a whole and customers share the resources which should be easy to scale.
Question: Should we build this on a standard OSGi framework or would we be better off using one of the emerging application frameworks (Virgo, Aries or the upcoming OSGi standard)?
More background and some initial thoughts:
We're building a web-app which we envision will soon have hundreds of customers (companies) with hundreds of users each (employees), otherwise why bother ;). We want to make it modular hence OSGi. In the future customers themselves might develop and plugin components to their application so we need customer isolation. We also might want different customers to get different feature sets.
What's the "correct" way to provide different service implementations to different clients of an application when different clients share the same bundles?
We could use the app-server approach (we've looked at Virgo) and load each bundle once for each customer into their own "app". However, it doesn't feel like embracing OSGi. We're not hosting a multitude of applications; 99% of the services will share the same implementation for all customers. Also, we want to manage (configure, monitor, etc.) the application as one.
Each service could be registered (properly configured) once for each customer along with some "customer-token" property. It's a bit messy and would have to be handled with an extender pattern or perhaps a ManagedServiceFactory? Also, before registering a service for customer A, one would need to acquire the A-version of each of its dependencies.
The "current" customer will be known to each request and can be bound to the thread. It's a bit of a mess having to supply a customer-token each time you search for a service. It makes it hard to use component frameworks like blueprint. To get around the problem we could use service hooks to proxy each registered service type and let the proxy dispatch to the right instance according to current customer (thread).
Beginning our whole OSGi experience by implementing the workaround (hack?) above really feels like an indication we're on the wrong path. So what should we do? Go back to Virgo? Try something similar to what's outlined above? Something completely different?!
ps. Thanks for reading all the way down here! ;)
There are a couple of aspects to a solution:
First of all, you need to find a way to configure the different customers you have. Building a solution on top of ConfigurationAdmin makes sense here, because then you can leverage the existing OSGi standard as much as possible. The reason you might want to build something on top is that ConfigurationAdmin allows you to configure each individual service, but you might want to add a layer on top so you can more conveniently configure your whole application (the assembly of bundles) in one go. Such a configuration can then be translated into the individual configurations of the services.
Adding a property to services that have customer specific implementations makes a lot of sense. You can set them up using a ManagedServiceFactory, and the property makes it easy to lookup the service for the right customer using a filter. You can even define a fallback scenario where you either look for a customer specific service, or a generic one (because not all services will probably be customer specific). Since you need to explicitly add such filters to your dependencies, I'd recommend taking an existing dependency management solution and extending it for your specific use case so dependencies automatically add the right customer specific filters without you having to specify that by hand. I realize I might have to go into more detail here, just let me know...
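A rough sketch of that combination — a ManagedServiceFactory that registers one implementation per customer configuration, tagged with a "customer" property (the ReportService interface and all names are made up for the example):

    import java.util.Dictionary;
    import java.util.Hashtable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;
    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedServiceFactory;

    interface ReportService { /* illustrative service interface */ }

    class CustomerReportService implements ReportService {
        CustomerReportService(String customer) { /* customer specific setup */ }
    }

    public class CustomerReportFactory implements ManagedServiceFactory {

        private final BundleContext context;
        private final Map<String, ServiceRegistration<ReportService>> registrations =
                new ConcurrentHashMap<>();

        public CustomerReportFactory(BundleContext context) { this.context = context; }

        @Override
        public String getName() { return "example.customer.report.factory"; }

        @Override
        public void updated(String pid, Dictionary<String, ?> config)
                throws ConfigurationException {
            String customer = (String) config.get("customer");
            if (customer == null) throw new ConfigurationException("customer", "missing");
            deleted(pid); // drop any previous registration for this configuration
            Hashtable<String, Object> props = new Hashtable<>();
            props.put("customer", customer); // enables filters like (customer=acme)
            registrations.put(pid, context.registerService(
                    ReportService.class, new CustomerReportService(customer), props));
        }

        @Override
        public void deleted(String pid) {
            ServiceRegistration<ReportService> reg = registrations.remove(pid);
            if (reg != null) reg.unregister();
        }
    }

A consumer would then look up the right instance with a filter such as (customer=acme), falling back to something like (!(customer=*)) for the generic implementation.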
The next question then is how to keep track of the customer "context" within your application. Traditionally there are only a few options here, with a thread-local context being the most used one. Binding threads to customers does tend to limit you in terms of implementation options though, as in general it probably means you have to prohibit developers from creating threads themselves, and it's hard to off-load certain tasks to pools of worker threads. It gets even worse if you ever decide to use Remote Services, as that means you will completely lose the context.
So, for passing on the customer identification from one component to another, I personally prefer a solution where:
As soon as the request comes in (for example in your HTTP servlet) somehow determine the customer ID.
Explicitly pass on that ID down the chain of service dependencies (a minimal sketch follows this list).
Only use solutions like thread locals within the borders of a single bundle - if, for example, you're using a third-party library inside your bundle that needs one to keep track of the customer.
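Here is that explicit approach in miniature (servlet, service, and resolution strategy are all illustrative):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    interface ImageService {
        String listImages(String customerId);
    }

    public class ImageBankServlet extends HttpServlet {

        private ImageService imageService; // wired up by your DI framework

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Determine the customer once, at the edge of the application...
            String customerId = resolveCustomer(req);
            // ...and pass it explicitly through every downstream service call.
            resp.getWriter().print(imageService.listImages(customerId));
        }

        private String resolveCustomer(HttpServletRequest req) {
            return req.getServerName(); // e.g. one (sub)domain per customer
        }
    }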
I've been thinking about this same issue (I think) for some time now, and would like your opinions on the following analogy.
Consider a series of web application where you provide access control using a single sign-on (SSO) infrastructure. The user authenticates once using the SSO-server, and - when a request comes in - the target web application asks the SSO server whether the user is (still) authenticated and determines itself if the user is authorized. The authorization information might also be provided by the SSO server as well.
Now think of your application bundles as mini-applications. Although they're not web applications, would it still not make sense to have some sort of SSO bundle using SSO techniques to do authentication and to provide authorization information? Every application bundle would have to be developed or configured to use the SSO bundle to validate the authentication (SSO token), and validate authorization by asking the SSO bundle if the user is allowed to access this application bundle.
The SSO bundle maintains some sort of session repository, and also provides user properties, e.g. information to identify the data repository (of some sort) of this user. This way you also wouldn't pass through a (meaningful) "customer service token", but rather a cryptic SSO token that is supplied and managed by the SSO bundle.
Please note that Virgo is an OSGi container based on Equinox, so if you don't want to use some Virgo-specific feature, you don't have to. However, you'll get lots of benefits if you do use Virgo, even for a basic OSGi application. It sounds, though, like you want web support, which comes out of the box with Virgo web server and will save you the trouble of cobbling it together yourself.
Full disclosure: I lead the Virgo project.
I am planning a new application and have been experimenting with GWT as a possible frontend. The design question I am facing is this.
Should I use
Option A: GWT-RPC and build the app quickly
Option B: Build a REST backend using Spring MVC 3.0 with all the great @Controller, @Service, @Repository annotations, and build a client-side library to talk to the backend using the GWT overlay features and the GWT RequestBuilder?
I am interested in the pros and cons, and in people's experiences with this type of design.
Ask yourself the question: "Will I need to reuse the server-side interface with a non-GWT front-end?"
If the answer is "no, I'll just have a GWT client": You can use GWT-RPC, and take advantage of the fact that you can use your Java objects both on the server and the client-side. This can also make the communication a bit more efficient, at least when used with <inherits name="com.google.gwt.user.RemoteServiceObfuscateTypeNames" />, which shortens the type names to small numeric values. You'll also get the advantage of better error handling (using Exceptions), type safety, etc.
If the answer is "yes, I'll make my service accessible for multiple kinds of front-ends": You can use REST with JSON (or XML), which can also be understood by non-GWT clients. In addition to switching clients, this would also allow you to switch to a different server implementation (maybe non-Java) in the future more easily. The disadvantage is, that you'll probably have to write wrappers (JavaScript Overlay Types) or transformation code on the GWT client side to build nice Java objects from the JSON objects. You'll have to be especially careful when you deploy a new version of the service, which brings us back to the lack of type safety.
The third option of course would be to build both. I'd choose this option, if the public REST interface should be different from the GWT-RPC interface anyway - maybe providing just a subset of easy to use services.
You can do both if you also use the RestyGWT project. It will make calling REST-based JSON resources as easy as using GWT-RPC. Plus, you can typically reuse the same request/response DTOs from the server side on the client side.
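A hedged sketch of what that looks like with RestyGWT (JAX-RS style annotations on an asynchronous interface; the resource path and Order DTO are illustrative):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import org.fusesource.restygwt.client.MethodCallback;
    import org.fusesource.restygwt.client.RestService;

    class Order {
        public long id;
        public String status;
    }

    public interface OrderRestService extends RestService {

        // RestyGWT generates the JSON (de)serialization and HTTP plumbing, so
        // the call site feels much like GWT-RPC.
        @GET
        @Path("/orders/{id}")
        void getOrder(@PathParam("id") long id, MethodCallback<Order> callback);
    }

On the client you would obtain an instance via GWT.create(OrderRestService.class) and invoke it with a MethodCallback, much like an async GWT-RPC interface.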
We ran into the same issue when we created the Spiffy UI Framework. We chose REST and I would never go back. I'd even say GWT-RPC is a GWT Anti-pattern.
REST is a good idea even if you never intend to expose your REST endpoints. Creating a REST API will make your UI faster, your API better, and your entire application more maintainable.
I would say build a REST backend. In my last project we started with GWT-RPC for the first few months because we wanted fast bootstrapping. Later on, when we needed the REST API, the refactoring was so expensive that we ended up with two backend APIs (REST and RPC).
If you build a proper REST backend, and a deserialization infrastructure on the client side (to transform the JSON/XML into GWT Java objects), then the benefit of RPC is almost nothing.
Another sometimes-forgotten advantage of the REST approach is that it's more natural to the browser running the client. RPC is a proprietary protocol in which all requests use POST. You can benefit from client-side caching when reading resources in the standard way.
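To illustrate the caching point (a sketch using Spring's CacheControl builder, which arrived in Spring 4.2, i.e. later than the 3.0 version mentioned in the question; paths and types are illustrative):

    import java.util.concurrent.TimeUnit;
    import org.springframework.http.CacheControl;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class ImageController {

        // A plain GET with a Cache-Control header: browsers and proxies may
        // cache it, which they will not do for GWT-RPC's POST requests.
        @GetMapping("/images/{id}")
        public ResponseEntity<byte[]> image(@PathVariable long id) {
            return ResponseEntity.ok()
                    .cacheControl(CacheControl.maxAge(10, TimeUnit.MINUTES))
                    .body(loadImage(id));
        }

        private byte[] loadImage(long id) {
            return new byte[0]; // placeholder for the real lookup
        }
    }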
Answering ams' comments:
Regarding the RPC protocol, the last time I "sniffed" it using Firebug it didn't look like JSON, so I don't know about that. Though even if it is JSON-based, it still uses only the HTTP POST method to communicate with the server, so my point about caching is still valid: the browser won't cache POST requests.
Regarding the retrospective and what could have been done better: writing the RPC service in a resource-oriented architecture could later lead to easier porting to REST. Remember that in REST one usually exposes resources with the basic CRUD operations; if you focus on that approach when writing the RPC service, then you should be fine.
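As a sketch of what "resource oriented" can mean for a GWT-RPC service (illustrative names; each method lines up with a future REST verb and route):

    import com.google.gwt.user.client.rpc.IsSerializable;
    import com.google.gwt.user.client.rpc.RemoteService;

    class Order implements IsSerializable {
        public long id;
        public String status;
    }

    public interface OrderService extends RemoteService {
        Order read(long id);       // future GET    /orders/{id}
        long  create(Order order); // future POST   /orders
        void  update(Order order); // future PUT    /orders/{id}
        void  delete(long id);     // future DELETE /orders/{id}
    }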
The REST architectural style promotes inspectable messages (which aids debugging and security), API evolution, multiple platforms, simple interfaces, failure recovery, high scalability, and (optionally) extensible systems via code on demand. It trades per-interaction performance for overall network efficiency. It reduces the server's control over consistent application behavior.
The "RPC style" (as we speak of it in opposition to REST) promotes platform uniformity, interface variability, code generation (and thereby the ability to pretend the network doesn't exist, but see the Fallacies), and customized interactions. It trades overall network efficiency for high per-interaction performance. It increases the server's control over consistent application behavior.
If your application desires the former qualities, use the REST style. If it desires the latter, use the RPC style.
If you're planning on using Hibernate/JPA on the server side and sending the resulting POJOs with relational data in them to the client (i.e. an Employee object with a collection of Phones), definitely go with the REST implementation.
I started my GWT project a month ago using GWT RPC. All was well until I tried to serialize an object from the underlying db with a One-To-Many relationship in it. And got the dreaded:
    com.google.gwt.user.client.rpc.SerializationException: Type 'org.hibernate.collection.PersistentList' was not included in the set of types which can be serialized by this SerializationPolicy
If you encounter this and want to stay with GWT RPC you will have to use something like:
GWT Request Factory (www.gwtproject.org/doc/latest/DevGuideRequestFactory.html) - which forces you to write 3+ classes/interfaces per POJO you want to share with the client. OUCH!
Gilead (sourceforge.net/projects/gilead/) - which appears to be a dead project.
I'm now using RestyGWT. The switch was fairly painless and my POJOs serialize without issue.
I would say that this depends on the scope of your total application. If your backend should be used by other clients, needs to be extendable etc. then create a separate module using REST. If the backend is to be used by only this client, then go for the GWT-RPC solution.
I like the idea of Spring Web Flow - particularly the way the flow definition abstracts the higher-level web flow from components in the Spring Bean Container.
The Flow definition format seems to include everything one needs in a Web Flow - views, actions, transitions, subflows, outcomes etc.
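For reference, a minimal flow definition in the SWF 2.x XML format (state ids, views, and the service expression are illustrative):

    <flow xmlns="http://www.springframework.org/schema/webflow"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.springframework.org/schema/webflow
              http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

        <view-state id="enterDetails" view="details">
            <transition on="submit" to="review"/>
        </view-state>

        <view-state id="review" view="review">
            <transition on="confirm" to="saveDetails"/>
            <transition on="revise" to="enterDetails"/>
        </view-state>

        <action-state id="saveDetails">
            <evaluate expression="detailsService.save(flowScope.details)"/>
            <transition to="finished"/>
        </action-state>

        <end-state id="finished"/>
    </flow>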
Do you think the Spring WebFlow Flow Definition format would be suitable for externalising a Web Flow for a non Spring framework? Something that does not use Spring, or perhaps even Java, for its underlying components.
Bear in mind, I am thinking of only page flow in particular, not general workflow or BPEL type stuff
State machines (like Spring Web Flow) have been used to describe web-app flows since the first web apps were built. So yes, it's OK. Why isn't everybody doing it? I think that when using state machines to describe web flows, there is a tendency for the formalism to take over a little too much. What starts off as a good idea ends up being more of a pain. Ajax and multiple concurrent active states on a given page make it even worse.
The biggest strength of SWF, in my opinion, is that it centralizes flow (navigation) in a single place and makes it explicit, easy to read, easy to manipulate, etc. It is well suited to more complex navigation flows where you might go back and forth between pages, and to wizard-like or step-by-step UIs. It also has some advanced reuse features like subflows and flow inheritance.
The concepts of view state and action state mimic user / web-app interaction well: a transition to an action occurs after the user has created an event, then the next view is presented to the user and the machine sits in a state waiting for the next user event.
It is important to note that these flows occur on the server side. Everything seems to be moving to the client side these days, and even there a state machine can have its role. Flex, for example, has a concept of states and transitions; these help the programmer manage complex UIs with lots of controls. States can hide or display controls and much more.
So, I’d say this can be a very neat paradigm for modeling flows and user interactions.