I'm trying to find the best way to execute rules for a batch-execution use case.
The batch processes more than 10,000 rows and needs to load a lot of data into memory. The rules should be applied to every row. I have seen that there are several ways to execute rules:
directly from the KIE Server REST API, after deploying a container from the Workbench, for example.
Or I could use a KIE scanner to load a new container and execute the rules from within the application, without making an HTTP call.
I am worried about the performance of the KIE Server REST API; could that be a problem?
When should we execute rules through the REST API versus embedding them in a Java application using the KIE scanner and Maven? Drools has so many ways to execute rules.
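For context, the embedded variant I have in mind looks roughly like the sketch below; the GAV coordinates, the Row type, and the polling interval are placeholders, not my real setup:

```java
import java.util.List;

import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class BatchRuleRunner {

    // Placeholder for whatever a batch row looks like
    public static class Row { /* fields used by the rules */ }

    public void runBatch(List<Row> rows) {
        KieServices ks = KieServices.Factory.get();

        // GAV of the kjar deployed from the Workbench - placeholder coordinates
        ReleaseId releaseId = ks.newReleaseId("com.example", "batch-rules", "LATEST");
        KieContainer container = ks.newKieContainer(releaseId);

        // The scanner polls the Maven repo and hot-swaps new rule versions into the container
        KieScanner scanner = ks.newKieScanner(container);
        scanner.start(60_000L); // poll every 60 seconds

        KieSession session = container.newKieSession();
        try {
            for (Row row : rows) {
                session.insert(row);
            }
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}
```

The idea would be to keep one long-lived KieContainer, let the scanner pick up new rule releases, and open a session per batch run.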
I am new to Spring Boot and have come across the following situation.
I have 10 different property files based on the various logical modules of a monolith application (db.properties, jms.properties, etc.) and 7 environments (pre, sit1, sit2, uat1, uat2, prod, dr). The idea behind having different property files is that we can reuse them with almost no change whenever we move to a microservice-based approach.
One approach says: use various Spring application names,
like spring.application.name=db,jms,a,b .....
That way we end up with 10 × 7 = 70 files under the same folder (in order to make it profile-driven), like jms.properties, jms-dev.properties, jms-uat.properties, and so on for all the logical modules.
Is there any better approach to host the files using config server?
We have a monolith application and we plan to continue the same for the time being.
I am struggling to build such a facility using Spring Cloud Config Server; any help would be appreciated.
I am working on a sample application right now using Spring Boot, Spring Data JPA, and Spring Data Elasticsearch. I want to be able to run the unit tests as part of a pipeline build, but they require Elasticsearch to be running to work as the service makes calls to said ES server. SQL works fine because I am using an in-memory H2 instance.
I have implemented some code to attempt to launch ES as an "embedded" server. The embedded server works just fine, but as far as I can tell it is started AFTER the context loads - most importantly, after ElasticSearchConfiguration does its thing.
I think I need to refactor the code out of AbstractElasticsearchTest into a separate class that can run before ElasticSearchConfiguration generates the client/template, but I am not sure how to do that, nor how to Google for the process.
Is there some mechanism in Spring Boot that could be used to start the embedded servers prior to running any of the configurations? Or is there some way I could enhance ElasticSearchConfiguration to do it prior to creating the client/template, but only when running the unit tests?
Edit:
So, just to be a little more specific: what I am looking for is a way to either run ES 5 in "embedded" mode OR mock out enough of the Spring Data ES code so that it works on the CI server. The code linked above is currently mixing unit tests with integration tests, I know, since it makes calls to a physical ES server. That's what I am trying to correct: I should be able to stub/mock enough of the underlying Spring Data code to make the unit test think it's talking to the real deal. I can then change the tests that check whether the documents made it to ES, and tests of things like type-ahead searches, into integration tests so they do not run when CI or Sonar runs.
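For the mocking route, what I have in mind is roughly the sketch below; DocumentService and Document are placeholders for my actual service and entity, and the stubbing would depend on which template calls the service really makes:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class DocumentServiceTest {

    // Replaces the auto-configured ElasticsearchTemplate with a Mockito mock,
    // so no ES server needs to be running while the unit test executes
    @MockBean
    private ElasticsearchTemplate elasticsearchTemplate;

    @Autowired
    private DocumentService documentService; // hypothetical service under test

    @Test
    public void savesWithoutTouchingElasticsearch() {
        // Mockito.when(...) stubs would go here for whatever template methods
        // the service actually calls (index, queryForList, etc.)
        documentService.save(new Document("42", "a title")); // hypothetical call
    }
}
```

If the service goes through a Spring Data repository instead of the template, the same @MockBean trick can be applied to the repository interface.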
Ok, so for those who might come back here in the future, this commit shows the changes I made to get ES running as "embedded".
The nuts and bolts of it: start the node as "local" and physically return node.client(). Then, in the Spring bean method that produces the client, check whether "embedded" is turned on; if so, start the node and return its (local) Client, and if not, just build the client as normal.
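In bean form the idea is roughly the following; the elasticsearch.embedded property is just a name I made up for the toggle, and the exact Settings keys vary between ES versions (on 5.x you may additionally need to pass plugins to the Node constructor):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ElasticSearchConfiguration {

    // hypothetical toggle, e.g. set to true only in the test profile
    @Value("${elasticsearch.embedded:false}")
    private boolean embedded;

    @Bean
    public Client elasticsearchClient() throws Exception {
        if (embedded) {
            // "local" node: no network transport, data written under target/
            Settings settings = Settings.builder()
                    .put("path.home", "target/es")
                    .put("transport.type", "local")
                    .put("http.enabled", false)
                    .build();
            Node node = new Node(settings).start();
            return node.client();
        }
        // otherwise build the client pointing at the real cluster, as before
        return buildRemoteClient();
    }

    private Client buildRemoteClient() {
        // the existing "normal" client construction goes here
        throw new UnsupportedOperationException("placeholder");
    }
}
```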
We have Spring Batch applications with triggers defined in each app.
Each Batch application runs tens of similar jobs with different parameters and is able to do that with 1400 MiB per app.
We use Spring Batch Admin, which was deprecated years ago, to launch individual jobs and to get a brief overview of what is going on in the jobs. The migration guide recommends replacing Spring Batch Admin with Spring Cloud DataFlow.
The Spring Cloud DataFlow docs talk about grabbing the jar from a Maven repo and running it with some parameters. I don't like the idea of waiting 20 seconds for the application download and 2 minutes for the application launch, plus all the security/certificates/firewall issues (how can I download a proprietary jar across intranets?).
I'd like to register existing applications in Spring Cloud DataFlow via IP/port, pass job definitions to the Spring Batch applications, and monitor executions (including the ability to stop a job). Is Spring Cloud DataFlow usable for that?
A few things to unpack here; here's an attempt at it.
Spring Cloud DataFlow docs says about grabbing jar from Maven repo and running it with some parameters. I don't like idea to wait 20 sec for application downloading, 2 min to application launching and all that security/certificates/firewall issues
Yes, there's an app resolution process. However, once it has been downloaded, the app is reused from the local Maven cache.
As for the 2-minute bootstrapping window, that comes down to Boot, the number of configuration objects, and of course your business logic; perhaps in your case all of that adds up to 2 minutes.
how can I download proprietary jar across intranets?
There's an option to resolve artifacts from a Maven artifact repository hosted behind the firewall, through proxies - we have users on this model for proprietary JARs.
Each Batch application runs tens of similar jobs with different parameters and is able to do that with 1400 MiB per app.
You may want to consider the Composed Task feature. It not only provides the ability to launch child Tasks as Directed Acyclic Graphs, it also allows transitions based on exit codes at each node, to further split and branch into more Tasks. All of this, of course, is automatically recorded at each execution level for further tracking and monitoring from the SCDF Dashboard.
I'd like to register existing applications in Spring Cloud DataFlow via IP/port and pass job definitions to Spring Batch applications and monitor executions (including ability to stop job).
As long as the batch jobs are wrapped as Spring Cloud Task apps, yes, you'd be able to register them in SCDF and use them in the DSL, or drag and drop them onto the visual canvas to create coherent data pipelines. We have a few "batch job as task" samples here and here.
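For reference, the wrapping itself is small. A rough sketch, assuming your existing job configuration stays as it is and the only addition is the @EnableTask annotation from spring-cloud-task:

```java
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;

// @EnableTask records the start, end, and exit code of every run in the
// Task repository, which is what SCDF uses to track and monitor executions.
@EnableTask
@EnableBatchProcessing
@SpringBootApplication
public class BatchJobTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchJobTaskApplication.class, args);
    }
}
```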
I recently started working on creating jacoco reports for maven projects including unit and integration tests and they seem to work out correctly.
Now I have encountered a different scenario which I am not sure how to approach.
I have one workspace that contains the integration test cases - application A - but the source code does not live in the same workspace/code base. The source code that actually runs when these integration test scripts are invoked is in a different workspace/code base - application B (the tests call it through REST API calls against localhost URLs; the JBoss server is started for application B so that the localhost context is up).
The aim is to invoke these integration tests from application A, which in turn exercises the source code in application B, and to generate a jacoco report of the code coverage for application B.
I am not actually sure how to achieve this.
Can someone provide some input?
Thanks.
If I understand you correctly, you actually have 2 different processes in your scenario:
The "client" process that runs the integration tests and for which jacoco can be easily applied, but it's not what you need
The "server" process that runs the actual JBoss server and executes the actual code.
The client process contacts the server via HTTP.
In this case, I'm afraid jacoco won't be able to provide coverage for you if you're running the tests from Maven/Gradle, because jacoco only instruments bytecode on the JVM it runs in. So you have to be "creative" here :)
I'll list here some possible approaches
Disclaimer: I haven't tried them myself (I haven't worked with JBoss/Java EE), but maybe you'll be able to at least borrow some ideas
The first approach would be running the tests together with the application somehow, like it's done for example in Spring tests (I'm not sure whether JBoss provides similar capabilities).
The idea is simple:
You run the integration test, it runs JBoss "embedded" in the same JVM, and you can inject beans / EJB session beans into the test (like autowiring with Spring).
The advantage of this method is that you'll be able to just use the jacoco maven plugin and it will instrument everything for you.
I don't know how easy achieving this architecture will be technically; I know that recent JBoss versions support an embedded mode, so maybe you'll find this link to be a useful foundation.
Another direction is to take a look at the Arquillian project. They have a jacoco extension that will probably help, but I've never tried it.
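To give a rough idea of the shape such a test takes (again, untested on my side; GreetingService is just a stand-in for one of your server-side beans):

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    // ShrinkWrap builds a micro-deployment that Arquillian deploys to the
    // container, so the test and the code under test share one JVM
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class) // hypothetical bean under test
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private GreetingService greetingService;

    @Test
    public void greets() {
        Assert.assertEquals("Hello", greetingService.greet());
    }
}
```

The point being that the test and the code under test end up in the same JVM, which is the JVM jacoco needs to instrument.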
And the last approach I can think of is running the JBoss server with the jacoco agent directly, instead of relying on the build system to run jacoco for you.
The idea here is to stream the coverage results of the server code to some file / TCP endpoint. So you run JBoss with -javaagent:[yourpath/]jacocoagent.jar and it streams the results wherever you need them. After the tests you gather these results and prepare a report. You can find here more information about this approach.
I have got a use case to implement. It's basically a workflow kind of use case. Below are the requirements:
Extract and import data from an external db to an internal db
Transform this imported data into different formats and supply it to multiple external systems, invoking some script there. The external interfaces are SFTP, SOAP, JDBC, and Python over CORBA. There are around 14 external systems, each using one of these interfaces.
Interface transactions are executed in around 15 steps, with the ability to run some steps in parallel
These steps should be configurable, i.e., a particular flow may execute 10 of these 15 steps while another flow executes all 15
Should have the ability to restart each step individually or restart from a particular step
There are some steps that are manual, and completion of a manual step should trigger the next step
The volume of data is not that large: around 400k records in total, but the process runs for around 30k records at a time. Development time is short, and we are looking for a lightweight solution that is easy to learn and implement.
We are looking for Spring based or Spring integratable solutions.
The solutions we considered are
For workflow:
Activiti, Spring Batch
For interfaces:
Spring Integration
My question is
Can Spring Batch be considered for managing a workflow kind of use case? I don't think it's a best-fit use case for Spring Batch, but since it's simple and easy to implement I looked into its scope. We considered implementing each interface interaction as a step in a batch job and, inside the tasklet, using Spring Integration for the external interfaces. The few issues, as far as I understand, are:
a) Dynamic step configuration can be done with Java configuration (something like the sketch below), but how flexible is that, and is it recommended?
b) Manual step processing is not possible in Spring Batch.
Is there any workaround for this? Are there any other issues or performance impacts with this approach?
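To make (a) concrete, the sketch below is the kind of thing I mean; the list of enabled steps is a placeholder for whatever ends up driving the flow definition, and each tasklet would wrap the Spring Integration call-out for that interface:

```java
import java.util.Arrays;
import java.util.List;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.job.builder.SimpleJobBuilder;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class DynamicFlowConfiguration {

    // Hypothetical: the subset of the 15 steps a particular flow should run,
    // e.g. read from configuration or a flow-definition table (assumed non-empty)
    private final List<String> enabledSteps = Arrays.asList("import", "transform", "export");

    @Bean
    public Job configurableJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
        SimpleJobBuilder builder = jobs.get("configurableJob")
                .start(buildStep(steps, enabledSteps.get(0)));
        for (String name : enabledSteps.subList(1, enabledSteps.size())) {
            builder = builder.next(buildStep(steps, name));
        }
        return builder.build();
    }

    private Step buildStep(StepBuilderFactory steps, String name) {
        // Each step would delegate to the Spring Integration flow for that interface
        return steps.get(name)
                .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                .build();
    }
}
```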
Activiti seems to be a solution. Can you please provide some feedback on Activiti with Spring and Spring Integration for this use case, and on how easy it is to implement? And what about support for Activiti?
Can Activiti workflows be restarted from a particular task? Can a task be rolled back?
Welcoming any suggestions!
1) For managing workflows, Activiti would be a great choice. They have created a really good process engine that should meet your needs for delegating your tasks as well as calling your custom logic. Moreover, it integrates closely with the Spring Framework, so wiring in your logic would be easy (a rough sketch of that wiring is below, after point 3).
2) I've addressed this in the first point.
3) No, you will have to create a new workflow for that. And yes, a task can be rolled back.
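To illustrate point 1: a minimal sketch of how a workflow step delegates to your Spring logic (the bean name and process variable are made up). A service task in the BPMN definition points at this bean via activiti:delegateExpression, and Activiti calls execute() when the flow reaches that step:

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Component;

// Referenced from the BPMN service task, e.g.
//   <serviceTask id="importData" activiti:delegateExpression="${importDataDelegate}"/>
@Component("importDataDelegate")
public class ImportDataDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // call your custom logic / Spring Integration gateway here
        execution.setVariable("importDone", true);
    }
}
```

Manual steps would typically be modelled as user tasks, which simply wait until someone completes them through the TaskService, after which the flow continues to the next node.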