Suitable design pattern for Dynamic Runtime Configurations - spring-boot

I am working on a Spring Boot application in Java. I have a case where I need to pick up some runtime configuration (which can change dynamically, without any redeployment or restart of the app) and use it in all the classes further down the line. It basically stores all the plug-and-play configurations I want my app to support.
I have tried the listener pattern, but it doesn't seem to be the best option, since I don't want the config to be listened to by just a few classes; rather, I want it to flow through the whole codebase.
Is there an existing design pattern or a technique that is a standard for such activity?
Kindly suggest.

In general you need to store the application configuration externally and then listen for changes to it. How the change is notified depends on the runtime environment. For example, cloud platforms may have separate decoupled notification systems, a desktop application may want to implement a file change listener, and so on.
This pattern is generally called the Runtime Reconfiguration pattern. In conclusion, there is no magic that will apply the changes throughout your code; you will need to listen for the changes and adjust the runtime behavior based on them.
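For example, here is a minimal sketch of that idea for a Spring Boot app, assuming the configuration lives in an external properties file that is watched with java.nio.file.WatchService (the file path and keys are placeholders, not a prescribed layout). One bean holds the current snapshot and every other class simply injects it:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

// Sketch only: one bean owns the dynamic configuration and reloads it when the
// external file changes; the rest of the code injects DynamicConfig and reads
// values on demand instead of registering listeners everywhere.
@Component
public class DynamicConfig implements InitializingBean {

    private static final Path CONFIG_FILE = Paths.get("/etc/myapp/runtime.properties"); // assumed location

    private final AtomicReference<Properties> current = new AtomicReference<>(new Properties());

    public String get(String key, String defaultValue) {
        return current.get().getProperty(key, defaultValue);
    }

    @Override
    public void afterPropertiesSet() throws IOException {
        reload();
        Thread watcherThread = new Thread(this::watchForChanges, "config-watcher");
        watcherThread.setDaemon(true);
        watcherThread.start();
    }

    private void reload() throws IOException {
        Properties fresh = new Properties();
        try (InputStream in = Files.newInputStream(CONFIG_FILE)) {
            fresh.load(in);
        }
        current.set(fresh); // atomic swap, so readers always see a consistent snapshot
    }

    private void watchForChanges() {
        try (WatchService watchService = FileSystems.getDefault().newWatchService()) {
            CONFIG_FILE.getParent().register(watchService, StandardWatchEventKinds.ENTRY_MODIFY);
            while (!Thread.currentThread().isInterrupted()) {
                WatchKey key = watchService.take(); // blocks until something in the directory changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    if (CONFIG_FILE.getFileName().equals(event.context())) {
                        reload();
                    }
                }
                key.reset();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (IOException e) {
            // in a real application: log and decide whether to retry
        }
    }
}

If you are on Spring Cloud, @RefreshScope together with Spring Cloud Config gives you a similar effect without hand-rolled file watching, but the sketch above sticks to plain Spring Boot.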

Related

Karate WebSocket how to listen to multiple messages within one session?

For our integration tests we have a scenario where we want to listen for a set number of messages, predefined by the environment we use. I've seen that it's possible to listen to multiple messages by opening up a new connection, but that doesn't allow for much flexibility.
Have you read the docs? As far as I know, if you define a "handler" function, you can use the same connection for multiple messages and choose when you want to stop: https://github.com/intuit/karate#websocket
Also see: https://stackoverflow.com/a/67870765/143475
But if you have a very specific need or custom logic, maybe the best thing is to write a small piece of Java "glue" code, and you will get all the flexibility that you want. You may be able to re-use Karate's Java API such as com.intuit.karate.http.WebSocketClient - but this is not documented, and it is potentially an area where you can research / contribute code.
Here is a good example: https://twitter.com/KarateDSL/status/1417023536082812935 of the flexibility that the Java-interop approach provides.
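For what it's worth, the Java glue does not have to use Karate's internal client at all; a sketch using the JDK's built-in java.net.http.WebSocket (Java 11+) that collects a fixed number of messages over one connection could look like the following. The class name, the assumption of unfragmented text frames, and the header-free handshake are illustrative only:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical glue class a Karate feature could call via Java interop, e.g.:
//   * def msgs = Java.type('WsCollector').collect(url, 5, 30)
// It opens one connection and blocks until the expected number of messages arrives.
public class WsCollector implements WebSocket.Listener {

    private final List<String> messages = new ArrayList<>();
    private final CountDownLatch done;

    private WsCollector(int expectedMessages) {
        this.done = new CountDownLatch(expectedMessages);
    }

    @Override
    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
        // Sketch assumption: each text frame is a complete message (last == true).
        synchronized (messages) {
            messages.add(data.toString());
        }
        done.countDown();
        webSocket.request(1); // ask for the next message
        return null;
    }

    public static List<String> collect(String url, int expectedMessages, int timeoutSeconds) throws Exception {
        WsCollector collector = new WsCollector(expectedMessages);
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create(url), collector)
                .join();
        collector.done.await(timeoutSeconds, TimeUnit.SECONDS);
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
        return collector.messages;
    }
}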

How do I change an attribute for a policy group without having to re-compile all of the policyfiles?

I am trying to shift from environment files in Chef to using Policyfiles. https://docs.chef.io/policy.html. I really like the concept, especially since you can include one policyfile in another, but I am trying to understand how to do a simple attribute change.
For instance, say I want to change a globally used attribute such as an error message for a problem that is happening right now ("The system will be down for 10 minutes. Thanks for your patience"), or turn off some A/B testing with an attribute working as a flag. From what I can tell, the only way I can do this is to change the attribute in the policyfile and then create a new version of the policyfile.
And if the policyfile is included in many other policyfiles, as in the case of a base policyfile, then I have a lot of work to do for a simple change.
default['production']['maintenance_message'] = 'We will be down for the next 15 minutes!'
default['production']['start_new_feature'] = true
How do I make a simple change to an attribute that affects an entire policy group? Is there a simple way to change an attribute, or do I have to move all my environment properties to a data bag??
OK, I used Chef Support and got an answer: Nope.
This is their response:
"You've called out one of the main reasons we recommend that people use something like Jenkins pipelines to deliver cookbooks to their infrastructure. All that work can be kicked off by a build system recognizing a change in a dependency and initiating new builds for all the downstream consumer jobs.
For what it's worth, I don't really like putting application configurations like that maintenance message example you used in configuration management, I think something like Habitat is a better system for that kind of rapid-change configuration delivery, although you could also go down the route of storing application configuration like that in a system like Hashicorp Vault, Consul, or etcd, and ensuring that whatever apps need to ingest those changes are able to do so without configuration management fighting with the key-value config store.
If that was just an example to illustrate things, ignore the previous comment and refer only to my recommendation to use pipelines to deliver cookbooks, attributes, etc to your infrastructure (and I would generally recommend against data bags these days, but that's mostly a preference thing)."
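As a purely illustrative aside on that last suggestion: if the application reads such values from a key-value store at runtime, nothing in the cookbook has to change when the value does. A hedged sketch against Consul's HTTP KV API (the key path and local agent address are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical example: the app fetches the maintenance message from Consul's KV store
// at runtime, so no policyfile rebuild is needed when the value changes.
public class MaintenanceMessage {

    private static final String CONSUL_URL =
            "http://localhost:8500/v1/kv/production/maintenance_message?raw"; // assumed key and agent address

    public static String fetch() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(CONSUL_URL)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200 ? response.body() : "";
    }
}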

How to combine plugins for the same resource?

I am working on the logical architecture of a project that receives some info from users and processes it. One of the requirements is to expose an interface for external developers to add further functionality. So far I have proposed an MVC 2-tier architecture, where the View and Controller run on the user's machine, and the Model is hosted on an application server and invoked remotely. The requirement on further functionality suggests to me that I should use a plugin pattern.
Additional steps selected by the user might be executed when processing the information, so I wanted to model them as plugins that will already exist when the application is released. This means that these plugins would affect the same resource (the processing flow), and I am uncertain about how to deal with this when both plugins are enabled.
Since I am not as familiar with the plugin pattern as with other patterns, the reading I did before asking made me try something similar to the Abstract Factory pattern. The problem is that, when two or more plugins are enabled, I would need multiple inheritance. I also thought of the Builder pattern to model the processing steps separately, but then an order among plugins would have to be defined, and this would affect the independence of plugin developers.
If I understand correctly, you want to be able to extend the same variation point with multiple independent plugins. If so, the pipes and filters pattern is an appropriate mechanism.
With this approach, plugins represent filters and you can design a plugin container that loads and then chains them. If no plugin is loaded for a given variation point, then you either short-circuit the variation point or provide some form of default filter.
Also, giving the plugins a mechanism to specify their position in the filter chain will be helpful, so think about that when designing the plugin-interface.
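A minimal sketch of that approach, assuming plugins are discovered with java.util.ServiceLoader and declare their position with an order value (all type names here are invented for illustration):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.ServiceLoader;

// Each plugin is a filter in the processing pipeline.
interface ProcessingFilter {
    int order();                          // position hint in the chain
    UserInfo process(UserInfo input);     // transform and pass along
}

// Placeholder for whatever the application actually processes.
class UserInfo {
    String payload;
    UserInfo(String payload) { this.payload = payload; }
}

// The container loads every enabled plugin, sorts by the declared order,
// and runs the data through the resulting chain. With no plugins installed,
// the input simply passes through unchanged (the "default filter" case).
class PluginContainer {
    private final List<ProcessingFilter> chain = new ArrayList<>();

    PluginContainer() {
        ServiceLoader.load(ProcessingFilter.class).forEach(chain::add);
        chain.sort(Comparator.comparingInt(ProcessingFilter::order));
    }

    UserInfo run(UserInfo input) {
        UserInfo current = input;
        for (ProcessingFilter filter : chain) {
            current = filter.process(current);
        }
        return current;
    }
}

If two plugins declare the same order, the container can fall back to a stable tie-breaker such as the plugin class name, so independent plugin developers never need to coordinate with each other.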

How to provision OSGi services per client

We are developing a web-application (lets call it an image bank) for which we have identified the following needs:
The application caters to customers, each of which consists of a set of users.
A new customer can be created dynamically, and a customer manages its users
Customers have different feature sets which can be changed dynamically
Customers can develop their own features and have them deployed.
The application is homogeneous and has a current version, but upgrading individual customers to a new version can still be handled separately.
The application should be managed as a whole and customers share the resources which should be easy to scale.
Question: Should we build this on a standard OSGi framework, or would we be better off using one of the emerging application frameworks (Virgo, Aries or the upcoming OSGi standard)?
More background and some initial thoughts:
We're building a web app which we envision will soon have hundreds of customers (companies) with hundreds of users each (employees), otherwise why bother ;). We want to make it modular, hence OSGi. In the future customers themselves might develop and plug in components to their application, so we need customer isolation. We also might want different customers to get different feature sets.
What's the "correct" way to provide different service implementations to different clients of an application when different clients share the same bundles?
We could use the app-server approach (we've looked at Virgo) and load each bundle once for each customer into their own "app". However it doesn't feel like embracing OSGi. We're not hosting a multitude of applications, 99% of the services will share the same impl. for all customers. Also we want to manage (configure, monitor etc.) the application as one.
Each service could be registered (properly configured) once for each customer along with some "customer-token" property. It's a bit messy and would have to be handled with an extender pattern or perhaps a ManagedServiceFactory? Also, before registering a service for customer A, one will need to acquire the A-version of each of its dependencies.
The "current" customer will be known to each request and can be bound to the thread. It's a bit of a mess having to supply a customer-token each time you search for a service. It makes it hard to use component frameworks like blueprint. To get around the problem we could use service hooks to proxy each registered service type and let the proxy dispatch to the right instance according to current customer (thread).
Beginning our whole OSGi experience by implementing the workaround (hack?) above really feels like an indication we're on the wrong path. So what should we do? Go back to Virgo? Try something similar to what's outlined above? Something completely different?!
ps. Thanks for reading all the way down here! ;)
There are a couple of aspects to a solution:
First of all, you need to find a way to configure the different customers you have. Building a solution on top of ConfigurationAdmin makes sense here, because then you can leverage the existing OSGi standard as much as possible. The reason you might want to build something on top is that ConfigurationAdmin allows you to configure each individual service, but you might want to add a layer on top so you can more conveniently configure your whole application (the assembly of bundles) in one go. Such a configuration can then be translated into the individual configurations of the services.
Adding a property to services that have customer specific implementations makes a lot of sense. You can set them up using a ManagedServiceFactory, and the property makes it easy to lookup the service for the right customer using a filter. You can even define a fallback scenario where you either look for a customer specific service, or a generic one (because not all services will probably be customer specific). Since you need to explicitly add such filters to your dependencies, I'd recommend taking an existing dependency management solution and extending it for your specific use case so dependencies automatically add the right customer specific filters without you having to specify that by hand. I realize I might have to go into more detail here, just let me know...
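To make the property-plus-filter idea concrete (a sketch only; the customer property name, the service locator class, and the fallback filter are my assumptions, not an OSGi convention):

import java.util.Collection;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

// Illustrative only: look up the implementation registered for a given customer,
// falling back to a generic implementation registered without the property.
public class CustomerServiceLocator {

    private final BundleContext context;

    public CustomerServiceLocator(BundleContext context) {
        this.context = context;
    }

    public <S> S lookup(Class<S> type, String customerId) throws InvalidSyntaxException {
        // Try the customer-specific registration first...
        Collection<ServiceReference<S>> refs =
                context.getServiceReferences(type, "(customer=" + customerId + ")");
        if (refs.isEmpty()) {
            // ...then fall back to services registered without a customer property.
            refs = context.getServiceReferences(type, "(!(customer=*))");
        }
        return refs.isEmpty() ? null : context.getService(refs.iterator().next());
    }
}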
The next question then is how to keep track of the customer "context" within your application. Traditionally there are only a few options here, with a thread-local context being the most used one. Binding threads to customers does tend to limit you in terms of implementation options though, as in general it probably means you have to prohibit developers from creating threads themselves, and it's hard to off-load certain tasks to pools of worker threads. It gets even worse if you ever decide to use Remote Services, as that means you will completely lose the context.
So, for passing on the customer identification from one component to another, I personally prefer a solution where (see the sketch after this list):
As soon as the request comes in (for example in your HTTP servlet) somehow determine the customer ID.
Explicitly pass on that ID down the chain of service dependencies.
Only use solutions like the use of thread locals within the borders of a single bundle, if for example you're using a third party library inside your bundle that needs this to keep track of the customer.
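A rough sketch of those three points, with the servlet class, header name and service interface invented purely for illustration:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch: the customer ID is resolved once at the edge and then
// handed to services as an explicit argument instead of a thread-local.
public class ImageBankServlet extends HttpServlet {

    private final ImageService imageService; // injected by your component framework

    public ImageBankServlet(ImageService imageService) {
        this.imageService = imageService;
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Hypothetical: the ID could also come from the host name or the session.
        String customerId = req.getHeader("X-Customer-Id");
        resp.getWriter().println(imageService.listImages(customerId));
    }
}

// Every service method takes the customer explicitly, so the context survives
// worker threads and remote calls without any hidden state.
interface ImageService {
    String listImages(String customerId);
}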
I've been thinking about this same issue (I think) for some time now, and would like your opinions on the following analogy.
Consider a series of web applications where you provide access control using a single sign-on (SSO) infrastructure. The user authenticates once using the SSO server, and - when a request comes in - the target web application asks the SSO server whether the user is (still) authenticated and determines itself whether the user is authorized. The authorization information might be provided by the SSO server as well.
Now think of your application bundles as mini-applications. Although they're not web applications, would it still not make sense to have some sort of SSO bundle using SSO techniques to do authentication and to provide authorization information? Every application bundle would have to be developed or configured to use the SSO bundle to validate the authentication (SSO token), and validate authorization by asking the SSO bundle if the user is allowed to access this application bundle.
The SSO bundle maintains some sort of session repository, and also provides user properties, e.g. information to identify the data repository (of some sort) of this user. This way you also wouldn't pass through a (meaningful) "customer service token", but rather a cryptic SSO token that is supplied and managed by the SSO bundle.
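To make the analogy a bit more concrete, the contract that application bundles would code against might look roughly like this (all names invented; just a sketch of the idea, not an existing API):

import java.util.Map;

// Hypothetical contract: application bundles never interpret the token themselves,
// they just hand it to the SSO bundle.
public interface SsoService {

    // Returns true if the token still maps to a live session.
    boolean isAuthenticated(String ssoToken);

    // Returns true if the session's user may access the given application bundle.
    boolean isAuthorized(String ssoToken, String bundleSymbolicName);

    // User properties kept in the session repository, e.g. which data repository to use.
    Map<String, String> userProperties(String ssoToken);
}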
Please note that Virgo is an OSGi container based on Equinox, so if you don't want to use any Virgo-specific features, you don't have to. However, you'll get lots of benefits if you do use Virgo, even for a basic OSGi application. It sounds, though, like you want web support, which comes out of the box with Virgo Web Server and will save you the trouble of cobbling it together yourself.
Full disclosure: I lead the Virgo project.

What parts of application you prefer to be externalized as configuration and why?

What parts of your application are not coded?
I think one of the most obvious examples would be DB credentials - it's considered bad to have them hard-coded. And in most situations it is easy to decide whether you want something to be externalized or coded. For me the rules are simple. Some part of the application should be externalized if:
it can and should be changed by non-developer, but not so often to be included in application settings defined in UI (DB credentials, service URLs, etc)
it does not require a programming language and seems unnatural when coded (localization)
Do you have anything to add?
This is a little related to another question about Spring configuration.
Spring configuration seems like a less obvious example to me, because in my practice it is never modified by anyone except the developer. And the road of externalizing can take you far, all the way to the entire project being "configured" rather than coded - so where do you stop?
So please post here some examples from your experience, when you got a benefit from having something configured rather than coded - like dependency injection configuration in Spring, etc.
And if you use Spring - how often is the configuration changed without recompiling?
Anything that needs to differ between different deployments of your application. That is, anything specific to the environment.
Examples include:
Database connection strings
URLs for web or WCF services
Logging configuration
Any information your application uses that is "data" and that could change depending on where it is installed. Things like:
smtp mail server used to send e-mails
Database connect strings
Paths to file locations / folders used by the app
FTP servers & connect info
Active Directory servers used for authentication
Any links displayed in the application to external information sources
Warning limit values
I've even put in the RegEx filters used to limit the allowable characters for data entry fields.
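As a small illustration of that last point, such values can sit in an ordinary properties file and be loaded at startup; the file name, key and pattern below are made up:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import java.util.regex.Pattern;

// Hypothetical example: the allowed-characters filter for a data entry field
// is read from a config file rather than hard-coded.
public class AppSettings {

    public static void main(String[] args) throws Exception {
        Properties settings = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("app.properties"))) {
            settings.load(in);
        }
        // e.g. app.properties contains: name.filter=^[A-Za-z .'-]+$
        Pattern nameFilter = Pattern.compile(settings.getProperty("name.filter", ".*"));
        System.out.println(nameFilter.matcher("O'Brien").matches()); // prints true with the example pattern
    }
}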
Besides the obvious changing stuff (paths, servers, ports, and so on), some people argue that you should be able to easily change whatever might reasonably change. For instance, say you have a generic engine which operates on the business logic (a rule engine).
You would then define the rules in a "configuration file", which ends up being no less than programming in a DSL instead of in the general-purpose language. The benefits are that it's closer to the domain, so it's easier and more maintainable, and that you can easily change things that would otherwise demand a new build.
The main argument behind this is that things you assumed would never change always end up changing nonetheless, so you better be prepared.
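A toy sketch of that idea, with a made-up one-line rule syntax ("field > value -> action") interpreted at runtime, so changing a threshold or adding a rule means editing a text file rather than producing a new build:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy rule engine: the rules are data, not code. Example rules.txt line:
//   amount > 1000 -> FLAG_FOR_REVIEW
public class RuleEngine {

    public static List<String> evaluate(Map<String, Double> facts, List<String> rules) {
        return rules.stream()
                .map(String::trim)
                .filter(r -> !r.isEmpty() && !r.startsWith("#"))
                .filter(r -> {
                    String[] condition = r.split("->")[0].trim().split("\\s+"); // e.g. ["amount", ">", "1000"]
                    double actual = facts.getOrDefault(condition[0], 0.0);
                    double limit = Double.parseDouble(condition[2]);
                    return ">".equals(condition[1]) ? actual > limit : actual < limit;
                })
                .map(r -> r.split("->")[1].trim()) // the action to trigger
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws Exception {
        List<String> rules = Files.readAllLines(Paths.get("rules.txt"));
        System.out.println(evaluate(Map.of("amount", 2500.0), rules)); // prints [FLAG_FOR_REVIEW] for the example rule
    }
}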
paths and server names/addresses come to mind..
I agree with your two conditions, which is why I:
Rarely include a config file as part of a Windows or Windows Mobile application (web apps yes).
If I did include a config file meant to be tweaked by end users, it certainly wouldn't be XML.
Employee emails/names since employees can come and go... (you should typically try to keep them out of an application though)
Configuration files should include:
deployment details
DB credentials
file paths
host names
anything that is used in many places but that may change
contact email addresses
options that aren't in the GUI
The last one is a bit open-ended, but very important. I've found it very useful to foresee variables that the client may, in the future, want to change. If changes are infrequent, I or they can edit the config file. If it becomes a frequent thing, it's trivial to add the option to the GUI, which isn't hardcoded.
I would also add encryption keys (which themselves should be encrypted)...
Basically the rule of thumb is: information the application needs BEFORE its regular, functional operation, data that it MUST have on hand (i.e. local and not networked).
Note that this data should not be changing dynamically or be large in volume; otherwise it should be in the database.
With Spring apps I actually distinguish between two types of configuration:
Items externalized into property files which are "deploy time" concerns or environment-specific: server IPs/addresses, file system locations, etc.
Spring XML configuration which can do lots of things, like indicate the overall application structure, apply behavior via AOP, etc.
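A small sketch of the first kind, assuming annotation-driven Spring and an external file (the path and keys are placeholders; in plain Spring you may also need a PropertySourcesPlaceholderConfigurer bean for the ${...} placeholders to resolve):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

// Deploy-time concerns come from an external file; the wiring itself stays in Spring config.
@Configuration
@PropertySource("file:/etc/myapp/deploy.properties") // assumed path
public class DeploymentSettings {

    @Value("${server.ip}")            // hypothetical keys
    private String serverIp;

    @Value("${export.directory}")
    private String exportDirectory;

    public String serverIp() { return serverIp; }
    public String exportDirectory() { return exportDirectory; }
}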
I use Spring to wire all the beans in a J2SE application that has no GUI (a transactional switch). That way it's very easy for me to have different configurations in each deployment (we have this thing running in different countries), without having to code anything different.
Another thing I like to have is to manage all the SQL statements separately from the code, when I use plain JDBC (or Spring JDBC). Like in a properties file or XML or something, sometimes even as String properties in the beans that will use the statement (when there is only one bean that will use the statement, such as a DAO).
I am going to use Spring JDBC or vanilla JDBC for data persistence. Here we have decided to externalize all the SQL from the Java code, so it is more manageable in terms of SQL query tuning and optimization, and we don't need to disturb the Java code.
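One hedged way to do that with Spring JDBC: keep the statements in a properties file on the classpath and hand them to a JdbcTemplate-based DAO. The file name, key and query below are invented for illustration:

import java.io.InputStream;
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Queries live in sql.properties, e.g.:
//   user.findNameById=SELECT name FROM users WHERE id = ?
// Tuning a query then means editing the file, not the Java code.
public class UserDao {

    private final JdbcTemplate jdbc;
    private final Properties sql = new Properties();

    public UserDao(DataSource dataSource) throws Exception {
        this.jdbc = new JdbcTemplate(dataSource);
        try (InputStream in = UserDao.class.getResourceAsStream("/sql.properties")) {
            sql.load(in);
        }
    }

    public String findNameById(long id) {
        return jdbc.queryForObject(sql.getProperty("user.findNameById"), String.class, id);
    }
}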
