I have read that Zeebe (https://zeebe.io/) from Camunda was created specifically for microservice orchestration. I know there is some difference related to performance.
My question is: "Can I achieve the same thing with Camunda that I can with Zeebe?"
I noticed that Camunda enterprise provides many features that are not in the free version or in Zeebe, like BPMN deployment and the history of previous workflows. I want those features for microservice orchestration. My guess is that I will not get them if I use Zeebe.
These are multiple questions in one.
Yes, you can do microservice orchestration with Camunda BPM. As you correctly pointed out, the difference is in the architecture of the workflow engine itself, not in the use cases you can leverage it for.
Yes, there is a Camunda BPM enterprise edition that has features that are not in the community edition, see: https://camunda.com/enterprise/
Zeebe will provide the same capabilities eventually. Given that it is relatively young, it does not yet have all the features of Camunda BPM in this area. But to relate to your example: Operate can show historic instances, although it is also not free for commercial use. Zeebe is also provided as a managed service: https://camunda.com/products/cloud/
We're working on a project and we want to use a feature-toggling tool like FF4j or Togglz, but we have real performance constraints; we really need the tool with the lowest execution overhead. I've looked a little at FF4j and Togglz, but I don't know which is best for this, or maybe you know of some other tools.
Context of the project: it's a Netflix-style microservices architecture, so we have Eureka, Ribbon, Zuul and the microservices.
Otherwise, if you have another solution, maybe developing a sidecar, please give me some ideas.
Thank you in advance :)
Disclaimer: I created FF4j, so I won't give you an answer about performance comparisons. I will provide architecture design principles.
Microservices mean a distributed architecture, so you will have to store the state of your features in a common persistence store (a database).
The cost of a feature-toggle framework won't be the time to evaluate the feature-state predicate (it is a simple condition); it will be the time to access the data in the persistence store.
FF4j provides support for both REDIS and CONSUL:
Redis seems a good candidate, as it is very fast for put/get and is distributed.
Consul is also a good idea in a distributed microservice architecture: it provides a key-value store.
Eureka may do the same, I don't know; ff4j does not have a store for it yet.
If you have to store your features in a slower database, such as a SQL one, then you might consider caching. FF4j provides a cache proxy to handle such use cases.
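For reference, a minimal sketch of what a feature check looks like with ff4j (the feature name "new-billing-flow" is made up, and the in-memory store is used only for brevity; in a microservice setup you would plug in a shared store such as Redis or Consul and optionally wrap it with the cache proxy):

    import org.ff4j.FF4j;
    import org.ff4j.core.Feature;

    public class FeatureToggleExample {
        public static void main(String[] args) {
            // In-memory store for brevity; swap in a shared store (Redis, Consul, JDBC)
            // and optionally wrap it with the cache proxy for production use.
            FF4j ff4j = new FF4j();
            ff4j.createFeature(new Feature("new-billing-flow", true));

            // The predicate evaluation itself is trivial; the real cost is the store access behind it.
            if (ff4j.check("new-billing-flow")) {
                System.out.println("new code path");
            } else {
                System.out.println("legacy code path");
            }
        }
    }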
Other considerations:
Put the administration console only in a back-end application, not in each microservice (security + performance overhead).
A feature-toggle framework can do more, such as configuration management and monitoring.
You may want to have a look at this 15min talk exactly on that subject. LIVE DEMO starting at 7:10
and the related GitHub repository for a sample with Spring Cloud.
I am going to teach myself some Java EE by making a simple web portal where people can generate their own invoices (a PDF library is needed). I'm not asking about any code, but can you give advice (examples) on which technologies I can make use of in the process? I have decided to use Spring MVC as the framework plus Java/Kotlin as the language. Some database + server + email + maybe some microservices are needed, but which should they be? Thank you!
If you are trying to implement microservices, I prefer Spring Boot, which has an embedded Tomcat along with additional services; for the database you can use the open-source MySQL.
If you are also planning to do UI work and are new to it, prefer basic HTML, CSS and Bootstrap.
If I were in your place, here are the choices I would make. They are all based on my past four complete end-to-end web application projects.
Spring Boot
Create your microservices using Spring Boot. As it has a built-in Tomcat, it is easy to deploy in any environment: a local laptop, an on-premise server or a cloud server.
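A minimal sketch of what such a service looks like (the class and endpoint names are only illustrative):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class InvoiceServiceApplication {

        public static void main(String[] args) {
            // Starts the embedded Tomcat; the same jar runs on a laptop, an on-premise box or in the cloud.
            SpringApplication.run(InvoiceServiceApplication.class, args);
        }

        @GetMapping("/health")
        public String health() {
            return "OK";
        }
    }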
JPA with Hibernate
If you are looking for something free, you can choose MySQL, as it has strong community support:
almost all the issues you are going to face will already have been asked and answered on Stack Overflow or somewhere else on the internet. Another thing is that, since you chose JPA, you can switch to any database easily.
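A rough sketch of what an entity and repository could look like for this invoice portal (the Invoice fields and the query method are made up; JPA and Spring Data handle the SQL regardless of the database behind it):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import org.springframework.data.jpa.repository.JpaRepository;

    // Hypothetical invoice entity; JPA maps it to a table in whichever database you configure.
    @Entity
    public class Invoice {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        private String customerName;
        private double amount;

        // getters and setters omitted for brevity
    }

    // Spring Data JPA derives the query from the method name; no SQL needs to be written.
    interface InvoiceRepository extends JpaRepository<Invoice, Long> {
        List<Invoice> findByCustomerName(String customerName);
    }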
React
As of now it is the simplest and one of the fastest UI frameworks. It also has strong user support; you can find answers to almost all the questions you will have on the internet.
Apart from all this, you can extend any of these technologies. Happy coding!
You may want to consider using Jaspersoft for generating your PDF files:
https://www.jaspersoft.com/reporting-software
https://community.jaspersoft.com/wiki/introduction-jaspersoft-studio
There may undoubtedly be other solutions out there, but this is the one I'm most used to.
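For orientation, a minimal sketch of the JasperReports API for producing a PDF (the template file name and parameter are made up; the .jrxml itself would be designed in Jaspersoft Studio):

    import java.util.HashMap;
    import java.util.Map;
    import net.sf.jasperreports.engine.JREmptyDataSource;
    import net.sf.jasperreports.engine.JasperCompileManager;
    import net.sf.jasperreports.engine.JasperExportManager;
    import net.sf.jasperreports.engine.JasperFillManager;
    import net.sf.jasperreports.engine.JasperPrint;
    import net.sf.jasperreports.engine.JasperReport;

    public class InvoicePdfExample {
        public static void main(String[] args) throws Exception {
            // Compile the report template (.jrxml) designed in Jaspersoft Studio.
            JasperReport report = JasperCompileManager.compileReport("invoice.jrxml");

            // Parameters referenced inside the template.
            Map<String, Object> params = new HashMap<>();
            params.put("customerName", "ACME Corp");

            // Fill with data; an empty data source here, but a JDBC connection or a bean collection is typical.
            JasperPrint print = JasperFillManager.fillReport(report, params, new JREmptyDataSource());

            // Export the filled report to a PDF file.
            JasperExportManager.exportReportToPdfFile(print, "invoice.pdf");
        }
    }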
We have a setup of three different Java EE servers, all communicating with both JGroups and RMI. We are heavily unit testing our code and the whole team is totally in favor of TDD, but we are facing problems when it comes to integration testing our servers.
Especially our custom fail-over/reconnect/termination-detection "algorithms" need some automated testing, because we often see them break and we currently always fix them by trial-and-error testing.
We are using the following libraries/frameworks: Tomcat, Maven, Spring 3, RMI, JGroups
Any ideas, suggestions, links and resources are welcome!
Interesting that nobody answered this question since 2011. Maybe there wasn't anything to recommend?
If you are looking into integration testing only, it's much easier. You can write your usual JUnit/TestNG tests and use Arquillian to take care of the container (lifecycle, deployments, configuration, etc.). You can run all the components (tests, containers, deployments) on a single node, bind them to different IPs or ports, and let JGroups do all the cluster communication as usual.
http://arquillian.org/
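A rough sketch of what such a test could look like (FailoverDetector is a made-up stand-in for one of your components; Arquillian deploys the ShrinkWrap archive into the container it manages and runs the test inside it):

    import javax.inject.Inject;
    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.WebArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertTrue;

    // Hypothetical component under test, included here only to keep the sketch self-contained.
    class FailoverDetector {
        boolean isHealthy() {
            return true;
        }
    }

    @RunWith(Arquillian.class)
    public class FailoverDetectorIT {

        @Deployment
        public static WebArchive createDeployment() {
            // ShrinkWrap builds the archive Arquillian deploys into the managed container.
            return ShrinkWrap.create(WebArchive.class, "failover-test.war")
                    .addClass(FailoverDetector.class)
                    .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        }

        @Inject
        private FailoverDetector detector;

        @Test
        public void detectsHealthyCluster() {
            // Runs inside the deployed container, so the real JGroups/RMI wiring is in place.
            assertTrue(detector.isHealthy());
        }
    }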
Moreover, there is now a whole book available about integration testing in Java called 'Continuous Enterprise Development in Java'.
http://www.amazon.com/Continuous-Enterprise-Development-Andrew-Rubinger/dp/1449328296
The situation is IMO much worse when it comes to system testing. I am going to name just one project here: SmartFrog, which is a 'powerful and flexible Java-based software framework for configuring, deploying and managing distributed software systems'. The learning curve is terrible, though.
http://www.smartfrog.org/
I have been trying to understand a bit more about the wider picture of OSGi without reading through the entire specification. As with so many things, the introduction to what OSGi actually is was probably written by someone who had been working on it for a decade and perhaps wasn't best placed to put themselves in the mindset of someone who knows nothing about it :-)
Looking at Felix's example DictionaryService, I don't really understand what is going on. Is OSGi a distinct instance of a JVM into which you load bundles which can then find each other?
Obviously it is not just this because other answers on StackOverflow are explicit that OSGi can solve the dependency problem of a distributed system containing modules deployed within distinct JVMs (plus the FAQ keeps talking about networks).
In this latter case, how does a component running in one JVM interact with another component in a separate JVM? Can the two components "use" each other as if they were running within the same JVM (i.e. via local method calls), and how does OSGi manage the marshalling of data across a network (do you have to use Serializable for example)?
Or does the component author have to use some other distinct mechanism (either provided by OSGi or written themselves) for communication between remote components?
Any help much appreciated!
Yes, OSGi only deals with bundles and services running on the same VM. However, one should note that it is a distinct feature of OSGi that it facilitates running multiple applications (in a controlled way and sharing common modules) on the same JVM at all.
When it comes to accessing services outside the client's JVM, there is currently no standardized solution. Paremus Infiniflow and the derived open-source project Newton use an SCA approach. The upcoming 4.2 release of the OSGi specs will address one side of the problem, namely how to use generic distribution software in such a way that it can bring remote services into the client's JVM.
As somebody mentioned R-OSGi, this approach also deals with the other side of the problem, namely how to manage dependencies between distributed OSGi frameworks: R-OSGi is not generic distribution software but explicitly deals with the lifecycle issues and dependency management of OSGi bundles.
As far as I know, OSGi does not solve this problem out of the box. There are OSGi-bundles, for example Remote OSGi, which allow the programmer to distribute services across a network.
Not yet; I think it's being worked on for the next release.
But some companies have already implemented distributed OSGi. One I'm aware of is Paremus' Infiniflow (http://www.paremus.com/products/products.html). At LinkedIn they are also working on this. More info here: Building LinkedIn's next-gen architecture with OSGi, and here: Matt Raible: Building LinkedIn's next-gen architecture.
Here's a summary of the changes for OSGi 4.2: Some thoughts on the OSGi R4.2 draft. There's a section on RFC 119 dealing with distributed OSGi.
AFAIK, bundles run in the same JVM but are not loaded using the same class loader (that is why you can use two different versions of the same bundle at the same time).
To interact with components in another JVM, you must use a network protocol such as RMI.
The OSGi alliance is working on a standard for distributed OSGi:
http://www.osgi.org/download/osgi-4.2-early-draft2.pdf
There even is an early Apache implementation of this new standard:
http://cxf.apache.org/distributed-osgi.html
@Patriarch24
The accepted answer to this question would seem to indicate otherwise (unless I'm misreading it). Also, taken from the FAQ:
The OSGi Service Platform provides the functions to change the composition dynamically on the device of a variety of networks, without requiring a restart
(Emphasis my own). Although in the same FAQ it describes OSGi as in-VM.
Why am I so confused about this? Why is such a basic question about a decade-old technology not clear?
The original problem OSGi addressed was more related to the distribution of code (and then the configuration of bundles) than to the distribution of execution.
People looking at distributed components are rather looking towards SCA.
The "introduction" link is not really an intro, it is a FAQ entry. For more information, see http://www.osgi.org/About/WhatIsOSGi Not hard to find I would think.
Anyway, OSGi is an in-VM SOA. That is, the OSGi Framework is about what happens inside the VM: it provides a framework for structuring your application inside the VM so you can build it to a large extent from components. So the core has nothing to do with distribution; it is completely oblivious of who implements the services and just provides a mechanism for modules to meet each other in a loosely coupled way.
That said, the µService model reifies the joints between the modules, and it turns out that you can build support on top of the framework that provides distribution to the other components. In the latest releases we specified some mechanisms that standardize this in the core and provide a special service, Remote Service Admin, that can manage a distributed topology.
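To make the in-VM service model concrete, here is a minimal sketch of one bundle registering a service and another consuming it (GreetingService and both activators are made up; the consumer's call is a plain local method call, with no marshalling involved):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    // Hypothetical service contract shared between two bundles in the same VM.
    interface GreetingService {
        String greet(String name);
    }

    // Provider bundle: publishes an implementation in the framework's service registry.
    class ProviderActivator implements BundleActivator {
        public void start(BundleContext context) {
            GreetingService impl = name -> "Hello, " + name;
            context.registerService(GreetingService.class, impl, null);
        }

        public void stop(BundleContext context) {
            // The registration is cleaned up automatically when the bundle stops.
        }
    }

    // Consumer bundle: looks the service up and invokes it like any local object.
    class ConsumerActivator implements BundleActivator {
        public void start(BundleContext context) {
            ServiceReference<GreetingService> ref = context.getServiceReference(GreetingService.class);
            if (ref != null) {
                System.out.println(context.getService(ref).greet("OSGi"));
            }
        }

        public void stop(BundleContext context) {
        }
    }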
If you are looking for a distributed, OSGi-centric cloud runtime, then the Paremus Service Fabric (https://docs.paremus.com/display/SF16/Introduction) provides these capabilities.
One or more systems, each consisting of a number of OSGi assemblies (Blueprint or Declarative Services), can be dynamically deployed and maintained across a population of OSGi runtime frameworks (Knopflerfish, Felix or Equinox).
A lightweight RSA remoting framework is provided which offers service discovery by default using DDS (a seriously good middleware messaging technology), though ZooKeeper and other approaches can be used. Currently supported remoting protocols include RMI and Avro.
Does anybody have experience with the Spring Integration project as an embedded ESB?
I'm highly interested in use cases such as:
Reading files from a directory on a schedule
Getting data from a JDBC data source
Modularity and the possibility to start/stop/redeploy a module on the fly (e.g. one module can scan a directory on a schedule, another can query a JDBC data source, etc.)
Repeat/retry policy
UPDATE:
I found answers to all my questions except "Getting data from a JDBC data source". Is it technically possible?
Remember, "ESB" is just a marketing term designed to sell more expensive software, it's not a magic bullet. You need to consider the specific jobs you need your software to do, and pick accordingly. If Spring Integration seems to fit the bill, I wouldn't be too concerned if it doesn't look much like an uber-expensive server installation.
The Spring Integration JDBC adapters are available in 2.0, and we just released GA last week. Here's the relevant section from the reference manual: http://static.springsource.org/spring-integration/docs/latest-ga/reference/htmlsingle/#jdbc
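For illustration, a rough sketch of the inbound side using the JDBC polling adapter programmatically (the table, columns and queries are made up; in practice you would normally wire this up in the XML configuration and attach a poller):

    import javax.sql.DataSource;
    import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

    public class JdbcPollingSketch {

        public static void main(String[] args) {
            DataSource dataSource = obtainDataSource();

            // Select unprocessed rows; each poll produces a message whose payload is a list of row maps.
            JdbcPollingChannelAdapter adapter =
                    new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM orders WHERE processed = 0");

            // Mark the selected rows as processed so they are not picked up again.
            adapter.setUpdateSql("UPDATE orders SET processed = 1 WHERE id IN (:id)");

            // One poll; returns null when the query produces no rows.
            Object message = adapter.receive();
            System.out.println(message);
        }

        private static DataSource obtainDataSource() {
            // Placeholder: return your configured DataSource (e.g. from a connection pool).
            throw new UnsupportedOperationException("configure a DataSource here");
        }
    }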
This link describes the FileSucker with Spring Integration. Read up on your Enterprise Integration Patterns for more info, I think.
I kind of think you need to do a bit more investigation yourself, or try a couple of your use cases. Then we can discuss what's good and bad.
JDBC Adapters appear to be a work in progress.
Even if there is no specific adapter available, remember that Spring Integration is a thin wrapper around POJOs. You'll be able to access JDBC in any component e.g. your service activators.
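As a sketch of that idea, here is a hypothetical service activator that reaches into JDBC through a plain JdbcTemplate (the bean, table and column names are invented for the example):

    import java.util.List;
    import java.util.Map;
    import javax.sql.DataSource;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.jdbc.core.JdbcTemplate;

    // An ordinary POJO referenced from a service-activator definition; the JDBC access is plain Spring code.
    public class OrderLookupActivator {

        private final JdbcTemplate jdbcTemplate;

        public OrderLookupActivator(DataSource dataSource) {
            this.jdbcTemplate = new JdbcTemplate(dataSource);
        }

        @ServiceActivator
        public List<Map<String, Object>> handle(Long orderId) {
            // Invoked with the message payload; returns rows that become the reply payload.
            return jdbcTemplate.queryForList("SELECT * FROM orders WHERE id = ?", orderId);
        }
    }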
See here for a solution based on a polling inbound channel adapter too.