Is there a Container-only distribution for Spring XD?

When running Spring XD in Distributed Mode, there will be many JVMs running the XD Container.
Is there a "slimmed down" version of the XD distribution that includes only the bins/libs that are necessary to run the container?

There currently isn't. But I'm not 100% sure the container-only version would be that slim (it's the container that needs to know about the many frameworks supported). So if we ever hard-separate the two, I'd bet that the admin would benefit.
If this is really a concern for you, I'd invite you to create a JIRA issue to track this (ideally motivating the concern). We always aim for better modularity, but this hasn't been at the top of our list for now.

Related

Camunda: architecture and decisions for a high-performance, dynamic multi-tenant app

I'm in charge of building a complex system for my company, and after some research decided that Camunda fits most of my requirements. But some of my requirements are not common, and after reading the user guide I realized there are many ways of doing the same thing, so I hope this question will clarify my thoughts and also serve as a base question for everyone else looking to build something similar.
First of all, I'm planning to build a specific app on top of Camunda BPM. It will use workflow and BPM, but not necessarily all the stuff BPM/Camunda provides. This means it is not in my plans to use most of the web apps that come bundled with Camunda (tasks, modeler...), at least not for end users. And to make things more complicated, it must support multiple tenants... dynamically.
So, I will try to specify all of my requirements and then hopefully someone with more experience than me could explain which is the best architecture/solution to make this work.
Here we go:
Single App built on top of Camunda BPM
High-performance
Workload (10k new process instances/day after a few months).
Users (starting with 1k, expected to be ~ 50k).
Multiple tenants (starting with 10, expected to be ~ 1k)
Tenants dynamically managed (creation, deploy of process definitions)
It will be deployed on a cluster
PostgreSQL
WildFly 8.1 preferably
After some research, these are my thoughts:
One Process Application
One Process Engine per tenant
Multi-tenancy data isolation: schema or table level.
Clustering (2 nodes) at first for high availability, adding more nodes when the number of tenants and the workload start to rise.
Doubts
Should I let Camunda manage my users/groups, or is it better to manage this in my app? In that case, can I tell Camunda “User X completed Task Y”, even if Camunda does not know about the existence of user X?
What about dynamic multi-tenancy? Is it possible to create tenants on the fly and have those tenants persist over time, even after restarting the application server? What about re-deployment of processes after restarting?
At which point should I think about partitioning engines across nodes? It’s hard to figure out how I’m going to do this with dynamic multi-tenancy, but moreover... Is this the right way to deal with a high workload and a growing number of tenants?
With my setup of just one process application, should I take care of something else in a cluster environment?
I'm not ruling out using only one tenant and one process engine and handling everything related to tenants logically within my app, but I understand that this can be very (VERY!) cumbersome.
All answers are welcome, hopefully we'll achieve a good approach to this problem.
1. Should I let Camunda manage my users/groups, or is it better to manage this in my app? In that case, can I tell Camunda “User X completed Task Y”, even if Camunda does not know about the existence of user X?
Yes, you can have your app manage the users and tell Camunda that a task was completed by a user whom Camunda doesn't know about. In the same way, you can make Camunda assign tasks to users it doesn't know at all. This is done by implementing the org.camunda.bpm.engine.impl.identity.ReadOnlyIdentityProvider interface and letting the engine configuration know about your implementation.
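As a minimal sketch (the task id and user id below are hypothetical, and the engine is assumed to be bootstrapped from a camunda.cfg.xml on the classpath), completing a task on behalf of a user that only your application knows about can look like this:

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.TaskService;

public class CompleteTaskAsExternalUser {
    public static void main(String[] args) {
        // Assumes a camunda.cfg.xml on the classpath; adapt to however you bootstrap the engine.
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        TaskService taskService = engine.getTaskService();

        String taskId = "someTaskId";                 // hypothetical task id
        String externalUserId = "user-from-my-app";   // not present in Camunda's identity tables

        // The engine does not validate the assignee against its identity service here,
        // so any user id managed by your own application is accepted.
        taskService.setAssignee(taskId, externalUserId);
        taskService.complete(taskId);
    }
}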
PS: If you don't need all the applications that come with Camunda, I would even suggest embedding the Camunda engine in your app. It can be done easily, and they have good documentation for their Java APIs.
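A rough sketch of such an embedded bootstrap, assuming PostgreSQL and placeholder JDBC settings (adapt the URL and credentials to your environment):

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class EmbeddedEngineBootstrap {
    public static void main(String[] args) {
        // Standalone (embedded) engine; the JDBC settings below are placeholders.
        ProcessEngine engine = ProcessEngineConfiguration
                .createStandaloneProcessEngineConfiguration()
                .setJdbcDriver("org.postgresql.Driver")
                .setJdbcUrl("jdbc:postgresql://localhost:5432/camunda")
                .setJdbcUsername("camunda")
                .setJdbcPassword("camunda")
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                .setJobExecutorActivate(true)
                .buildProcessEngine();

        System.out.println("Engine started: " + engine.getName());
    }
}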
2. What about dynamic multi-tenancy? Is it possible to create tenants on the fly and have those tenants persist over time, even after restarting the application server? What about re-deployment of processes after restarting?
Yes, it's possible to add tenants dynamically. When restarting the engine or your application, you can either redeploy or just use the already-deployed processes. Even when you redeploy a process, you can have Camunda create a new version only if the process has actually changed; see the enableDuplicateFiltering property of the DeploymentBuilder.
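A small sketch of such a deployment (the deployment name and resource path are made up for illustration):

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.repository.Deployment;

public class RedeployWithDuplicateFiltering {
    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();

        // With duplicate filtering enabled, a new version is created only if the
        // resources actually changed compared to what is already deployed.
        Deployment deployment = engine.getRepositoryService()
                .createDeployment()
                .name("tenant-42-processes")                   // hypothetical deployment name
                .enableDuplicateFiltering(true)
                .addClasspathResource("processes/order.bpmn")  // hypothetical resource
                .deploy();

        System.out.println("Deployment id: " + deployment.getId());
    }
}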
3. At which point should I think about partitioning engines across nodes? It’s hard to figure out how I’m going to do this with dynamic multi-tenancy, but moreover... Is this the right way to deal with a high workload and a growing number of tenants?
In my experience it is possible. You need to keep track of various parameters here, like memory, the number of requests being served, the number of open connections available, etc., and then add or remove nodes accordingly. With AWS this is much easier, as some of the tools for dynamically scaling nodes in and out are already available. That said, I have done this only with Camunda as an embedded engine in the application(s).

ServiceMix and clustering

First of all, I've read this post and it partially answered my question, but here's my dilemma: I'd like to install ServiceMix on two different machines and have them work in failover, meaning that if one instance dies for whatever reason the other one takes over, and if I have to install a third instance of ServiceMix it would be easy to do so.
What I'm planning on installing and using is basically: Camel (with the Jetty extension), ActiveMQ, Karaf, hawt.io and webconsole.
So basically what I want is the same bundles in both OSGi containers and the same configuration for both instances; when I change something on one, it gets propagated to the second.
Any idea how I could get that done? Thank you in advance.
You'll have to experiment a bit, but I think it is an achievable task.
First of all, for the propagation part you'll need the Apache Karaf Cellar clustering solution; it will help you propagate all your changes throughout your cluster group. Second, you'll need to configure a failover mechanism as described in the documentation. For this you most likely need to switch to container-level locking. The crucial part is to make sure none of your own bundles are stopped while the Karaf Cellar ones are already running. You might need to tweak the start levels for your own apps and the default start level a bit.
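For the failover part, a sketch of what the lock settings in etc/system.properties might look like (the lock directory, delay and lock level below are illustrative; check the exact property names against the Karaf version your ServiceMix ships with):

# Master/slave failover via a shared lock (etc/system.properties)
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.SimpleFileLock
# The lock directory must be on storage both nodes can reach
karaf.lock.dir=/shared/karaf-lock
# Milliseconds between lock attempts on the standby node
karaf.lock.delay=10000
# Container-level locking: bundles above this start level only start on the node holding the lock
karaf.lock.level=50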

Suitable frameworks for an ERP-like application

I want to create a production management system to be used by a small manufacturing firm. The system will allow documenting the different stages in the manufacturing of equipment. The requirements are as follows:
1. Non-browser-based interface. Need something Swing or AWT based. While I understand the convenience of implementing a browser-based solution, the business owner insists on a non-browser interface.
2. Accessed from multiple systems. These systems will allow CRUD operations on the central system (thin client?).
3. The application will not have more than 3 concurrent users.
I need some advice regarding what would be a good path for this kind of application. Currently, I'm thinking of using Griffon with RMI. However, I don't have much development experience. I've read a bit about Apache River (Jini) too. Would it be a good idea to use Griffon with RMI?
Please provide some advice. Thanks.
EDIT: after some reading, I've decided to use more mainstream frameworks, so Griffon is not an option. How about Jini (Apache River) or OSGi (Apache Felix)?
Hmm, how is it that a project which recently moved out of the incubation phase is considered mainstream versus a project that's been used in production for more than 3 years now? Anyway, Apache River gives you access to Jini technology and nothing more, meaning you can't achieve item #1 on your list with River alone. River may use RMI for accessing remote resources; however, you can use RMI directly, or try out DRMI, Kryonet, Hessian/Burlap, Spring's HTTP Invoker, Protocol Buffers, Avro/Thrift, REST, SOAP, ZMQ and many more.
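To make the "use RMI directly" option concrete, here's a minimal plain-RMI sketch; the EquipmentService interface, the registry port and the lookup name are made up for illustration:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface for the production-management CRUD operations.
interface EquipmentService extends Remote {
    String findEquipment(String id) throws RemoteException;
}

class EquipmentServiceImpl implements EquipmentService {
    @Override
    public String findEquipment(String id) throws RemoteException {
        return "equipment " + id; // a real implementation would query the central database
    }
}

public class EquipmentServer {
    public static void main(String[] args) throws Exception {
        // Export the service and register it so thin Swing/AWT clients can call it remotely.
        EquipmentService stub =
                (EquipmentService) UnicastRemoteObject.exportObject(new EquipmentServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("EquipmentService", stub);
        System.out.println("EquipmentService bound; clients can now look it up.");
        // A client would do:
        // EquipmentService svc = (EquipmentService)
        //         LocateRegistry.getRegistry("server-host", 1099).lookup("EquipmentService");
    }
}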
Even if you choose one of these options and/or River, you still have to define the following things:
application structure (file structure and runtime behavior)
build setup
dependency management
testing profiles
packaging
deployment strategies
These things and more are what Griffon brings to the table. As you may have noticed, the framework allows you to build up applications by adding plugins, reducing the amount of time you must allot to hunting down dependencies, setting up the bootstrap mechanism and getting things done. On the subject of remoting technologies, have a look at the different options Griffon has to offer: http://artifacts.griffon-framework.org/tags/plugin/remoting
Even more, you can also combine OpenDolphin (http://open-dolphin.org/dolphin_website/Home.html) with Griffon. There's even an example application in the OpenDolphin repository showing a full client-server application (built with Griffon, Grails and OpenDolphin): https://github.com/canoo/open-dolphin/tree/master/dolphin-griffon-crud
With what seems to be your current understanding of the problem, I would not recommend OSGi, especially for a small manufacturing firm (possible maintenance issues, depending on the "personnel").
The main reason why I wouldn't advocate Jini or OSGi in your case is because of what you said:
However, I don't have much development experience.
Jini (Apache River) is a viable option as long as you fully understand the concepts of LookupService, service registrations, etc. There's tons of RMI going on here, with possible firewall implications...
OSGi is not difficult, but you may have issues deciding how to structure your applications, as well as interacting with services, etc.
Try to stick to the simplest approach that you can handle for the implementation (Flexible design in mind): Make it work and then improve it.
There are simple web services options such as Spring Remoting (over HTTP/HTTPS, for example), unless Spring introduces too many concepts and headaches for your app.

Should cluster support be at the application or framework level?

Let's say you're starting a new web project that requires the website to run on an MVC framework on Mono. A couple of major requirements are that it has to scale easily, be stable, and work with multiple servers that may or may not be in the same place or even on the same local network.
The first thing I thought of was a sort of cluster communication between servers. Each server would act as a node and be its own standalone application and would query other nodes in a known list for session information and things like that.
But one major design question I have is: should this functionality be built into the supporting framework, or should the application handle the synchronization of the data?
Or am I just way off and this would never work?
Normally, clustering belongs in some kind of middleware layer, thus at your framework level. However, it can also be implemented at the application level.
It depends on your exact use case: whether you want load balancing, scalability, etc.

What type of business/technology use might justify the high cost of a Websphere license?

In a large corporation I worked in previously, a manager purchased a $50,000+ Websphere production license despite the fact that he only needed a container to run a couple of servlets on a small intranet system.
Assuming we agree that this was overkill, to say the least, and that a free servlet runner like Tomcat would probably have sufficed, what type of business/technology use might justify the high cost of an application server such as Websphere? I'm thinking that integration projects are the most likely candidate - i.e. in environments in which multiple legacy systems need to be glued together using Java/Websphere as a bridge or wrapper. Any other good cases?
I don't necessarily agree with the following, but there is a fairly strong argument there...
<devil's advocate>
The key thing to remember when looking at licensing is that it isn't necessarily the technology you're paying for, but rather having someone to point the finger at at 2am when it's all gone belly up.
There are a bunch of companies who have built their business model on this approach (e.g. Sun, leaving aside the obvious debate on whether that actually worked or not, RedHat, et al).
The benefits that IBM can provide with their products don't really come down to technology per se (as you said, you can get something that'll do the same job for a much better price); it's more about the business processes around their products, if you're in an environment where you need predictable uptime, scalability, etc. (banking, for example).
IBM's products are reasonably well tested (one of the reasons they're usually a couple of releases behind the bleeding edge elsewhere). You know that the things you're getting will be pretty robust, will integrate well with other big-business systems (both legacy systems, as you said, and other business systems like Siebel, Oracle, SAP, etc.), not to mention the one-stop-shop support for integration with other IBM products (if you've drunk the full IBM Kool-Aid).
You also know that where there are issues with what's being delivered, it's relatively transparent and there will be documented workarounds available for things you're likely to bump into.
</devil's advocate>
If you have smart enough people, you don't necessarily need the support that folk like IBM can offer (take the RedHat example - people can still go and download Linux for free and run their business on it). But at 2am you're on your own - you can't ring up Linux (or one of the Tomcat committers) and get them to tell you what you're doing wrong and help you fix it.
If you continue to look around, you'll find that many decisions like this aren't as based on technical considerations as you'd think they should be. I, like most other experienced practitioners, would choose one of the other stacks like Tomcat or JBoss. This isn't because they don't have licensing costs; it's because developers can build the best product in the shortest amount of time with them compared to the other J2EE products.
As to why IBM and other J2EE vendors still have as big a market share as they do, it's because of thought patterns like "having a throat to choke" and "you can't get fired for buying IBM". Neither of which contains much technical merit, but that's because most of the time the people making these decisions aren't technically current enough to understand the real factors, and don't have or don't trust the people that are qualified to make the decision.
This question is a little too fine-grained to give a short technical answer, since there are so many complicated aspects to building a successful product in the context of your situation. A couple general "Containers For Dummies" guidelines though:
Use either Tomcat or JBoss, move on, and focus on writing a good application. I see a strong up-vote for Glassfish, but I'd caution that it might not have the critical mass you'd be comfortable with. You can use one of the other vendor products and still succeed; they'll just weigh you down more.
When in doubt, listen to Rod Johnson. He and his company SpringSource are paving the way for the evolution of Java today. He's doing for Java containers today what Josh Bloch did for Java 1.2 (anyone use the Collections framework???)
Remember that Websphere is not just an overengineered Servlet container - it's an overengineered J2EE container. Hence, things like EJBs that are also supported in J2EE are present within WebSphere - so if an application does need them, they are available. Of course, why one would need WebSphere instead of a generic J2EE container is beyond me - unless they need overengineered feature Y that is due to be released in Milestone X of competing freely available product Z.
To be precise, Websphere is not a product. It's a product line. As stated by Martin, service is a crucial part of why people purchase IBM Websphere products (and, btw, a big chunk of IBM's revenue).
Websphere consists not only of the J2EE stack (aka Websphere Application Server). It has many components/products built on top of it such as the Websphere Process Choreographer (workflow engine), Websphere Portal, Websphere Business Monitor and other useful components to run a business.
Websphere is a terrible product. Unless you need all the other IBM products that integrate with it and nothing else, there's no sensible reason to buy it. If all you want is a servlet container, grab Tomcat or Jetty.
They are infinitely faster and won't give your developers headaches. Websphere is a royal pain in the arse. Things that take seconds in Tomcat, e.g. deploying a small WAR, literally require minutes and a thousand clicks in Websphere.
In the end, Websphere only gets sold to big corporate types where the manager knows little about technology, doesn't care about wasting^h^h^h^h investing the company's money, and because of the old adage: nobody got fired for buying IBM.
Many people don't use EJBs and stick to Hibernate and servlets. If you do that in the beginning, there's no reason why your WARs won't work in the future when you decide to move to Websphere because it "integrates with something". Then again, you really should be sure that you are aware of what other products you might need in the future and use that in your decision-making process.
I'm sure many other Java types, when forced to work with Websphere with just servlets, often develop in Tomcat and then deploy at the very end in WS.
