OptaPlanner with AWS Lambda

I am using OptaPlanner to solve a scheduling problem. I want to invoke the scheduling code from AWS Lambda (I know that Lambda's max execution time is 5 minutes, and that's okay for this application).
To achieve this I have built a Maven project with two modules:
module-1: scheduling optimization code
module-2: AWS Lambda handler (calls the scheduling code from module-1)
When I run my tests in IntelliJ IDEA for module-1 (which has the OptaPlanner code), it runs fine.
When I invoke the Lambda function, I get the following exception:
java.lang.ExceptionInInitializerError
at org.kie.api.internal.utils.ServiceRegistry.getInstance(ServiceRegistry.java:27)
...
Caused by: java.lang.RuntimeException: Child services [org.kie.api.internal.assembler.KieAssemblers] have no parent
at org.kie.api.internal.utils.ServiceDiscoveryImpl.buildMap(ServiceDiscoveryImpl.java:191)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.getServices(ServiceDiscoveryImpl.java:97)
...
I have included the following dependency in my Maven file: org.optaplanner:optaplanner-core:7.7.0.Final
I have also checked that the jar file contains drools-core, kie-api, kie-internal, and drools-compiler. Does anyone know what might be the issue?

Sounds like a bug in Drools when running in a restricted environment such as AWS Lambda. Please create a JIRA and link it here.

I was getting the same error attempting to run a fat jar containing an example OptaPlanner project. A little debugging revealed that the problem was that the services map was empty when ServiceDiscoveryImpl::buildMap was invoked: the build kept only the first META-INF/kie.conf it encountered, so the services declared in the other kie.conf files were missing. Naturally your tests work properly, because there the class path contains all of the dependencies (that is, several distinct META-INF/kie.conf files) rather than the assembly you were attempting to execute on the Lambda.
Concatenating those files instead (using an appropriate merge strategy in the assembly, as sketched below) fixes the problem, and appears appropriate given how they are loaded by ServiceDiscoveryImpl. The updated JAR runs properly as an AWS Lambda.
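For reference, a minimal sketch of such a merge strategy using the maven-shade-plugin's AppendingTransformer (the question's actual build files are not shown, so treat the wiring as an assumption):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- concatenate every META-INF/kie.conf instead of keeping only the first one -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>META-INF/kie.conf</resource>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
With Gradle's Shadow plugin, mergeServiceFiles() only covers META-INF/services, so kie.conf would need an explicit append('META-INF/kie.conf') in the shadowJar block.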
Note: I was using the default scoreDrl from the v7.12.0.Final Cloud Balancing example.

Related

Jenkins build, hudson.FilePath is missing

While building a Jenkins pipeline for a Spring Boot application, I noticed that the build fails with the following message: Required context class hudson.FilePath is missing. Searching the internet, I find examples of JSON configurations to be provided to the Jenkins service. My doubt concerns the web interface (https://prnt.sc/6SIZKYRmfYGI), which allows me to create pipelines, but I don't know where to fill in the required file path. Could anyone give me directions?

Quarkus test fails after adding qute dependency

I was working on a Quarkus AWS Lambda service that generates some HTML reports and uploads them to S3. After implementation, I found that my lambda tests started failing with a 404.
Analysing further, I found that the quarkus-resteasy-qute dependency that I added was the culprit. As soon as I add this dependency to my project, my lambda handler test case starts failing with a 404. My plan was to use qute for some templating of the reports generated by the lambda.
I have replicated the issue in a starter project here. I don't have any clue what exactly in this dependency is causing the failure.
Really appreciate any help or pointers to debug it further.
quarkus-resteasy-qute will NOT work with a regular lambda. It does work with quarkus-amazon-lambda-http (or -rest), which fronts the Lambda with an API Gateway on AWS.
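As a sketch of the change (assuming the starter project currently pulls in the plain lambda extension; artifact IDs are the standard Quarkus ones), the pom.xml would swap:
<!-- the plain event-based extension, which has no HTTP layer for RESTEasy/Qute -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-amazon-lambda</artifactId>
</dependency>
<!-- ...for the HTTP variant, so RESTEasy (and Qute) endpoints are served via API Gateway -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-amazon-lambda-http</artifactId>
</dependency>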
Reference: https://github.com/quarkusio/quarkus/issues/24312

Beam pipeline not moving in Google Dataflow while running ok on direct runner

I have a Beam pipeline that runs well locally with the DirectRunner. However, when switching to the DataflowRunner, the job starts and I can see the flow chart in the Google Dataflow web UI, but the job does not run; it just hangs there until I stop it. I am using Beam 2.10. I can see autoscaling adjusting CPUs, and there are no exceptions in the log.
I think this has something to do with the way I create the jar file. I am using the Shadow plugin to build the fat jar in Gradle; the main reason for using ShadowJar is mergeServiceFiles(). Without mergeServiceFiles(), the job fails with an exception like No FileSystem found for gs.
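For context, the Shadow plugin part of the build looks roughly like this (a sketch; the actual build file is longer):
shadowJar {
    // merge META-INF/services files from all dependencies; without this,
    // Beam cannot discover the gs:// FileSystem implementation at runtime
    mergeServiceFiles()
}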
So I copied the word count example from the Google Dataflow template repo and packaged it as a jar file. It shows the same thing: the job starts but does not move. The code was touched with minimal changes for the service account credential; instead of its original PipelineOptions, I extended GcsOptions for the credential.
I tried Beam 2.12 and 2.10.
Digging around, I found the full log by clicking on the Stackdriver link in the upper right corner of the log panel, and found the following:
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND bound slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details. at org.slf4j.impl.Log4jLoggerFactory.<clinit>(Log4jLoggerFactory.java:54) ....
Then there is a
java failed with exit status 1
log entry a few rows below the log4j error. Basically the Java program has already stopped, but the Dataflow UI still shows it as running on the flow chart.
Use the Gradle build script to exclude slf4j-log4j12 from the offending dependency:
compile ('org.apache.hadoop:hadoop-mapreduce-client-core:3.2.0') { exclude group: 'org.slf4j', module: 'slf4j-log4j12' }
and from any other dependencies that contain slf4j-log4j12, and the job starts moving.
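If several dependencies drag in the conflicting binding, a configuration-wide exclusion (standard Gradle, shown here as a sketch) saves listing each module:
configurations.all {
    // drop the slf4j-to-log4j binding everywhere; log4j-over-slf4j stays on the class path
    exclude group: 'org.slf4j', module: 'slf4j-log4j12'
}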

AWS Flow Jar creation with Maven + Java 1.8

Has anyone been able to compile an application with Java 1.8 + AWS Flow + Maven?
I have an established Java application created with Java 1.8; it uses the AWS libraries and the AWS Flow Framework. I'm now looking to automate the build of the product, and I opted to use Maven. Until this point the project was exported manually from Eclipse.
I have reached a point where I can build a jar which contains our generated workflow classes (external clients + factories) along with what I understand to be the aspect classes (xxxxx$1.class, xxxxx$2.class).
The end goal is to get the weaving to happen at compile time.
However, when running the Maven-built jar, the workflows do not work as expected. The application completely ignores the @Asynchronous annotation, which results in a 'not ready' state. As a result it cancels scheduling the activity we wish to execute.
I have created a simple application with a single workflow and activity to show the issues I'm experiencing. This version works when exported via Eclipse, but produces the error shown below when built via the POM.
Start with message: With Comp
Created Workers
Added implentations
Nov 28, 2016 12:14:11 PM com.amazonaws.services.simpleworkflow.flow.worker.GenericWorker start
INFO: start: GenericWorkflowWorker[super=GenericWorkflowWorker[service=com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient@163e4e87, domain=Experimental, taskListToPoll=TEST, identity=3174@ip-10-0-1-141, backoffInitialInterval=100, backoffMaximumInterval=60000, backoffCoefficient=2.0], workflowDefinitionFactoryFactory=com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinitionFactoryFactory@56de5251]
Nov 28, 2016 12:14:12 PM com.amazonaws.services.simpleworkflow.flow.worker.GenericWorker start
INFO: start: GenericActivityWorker [super=GenericActivityWorker[service=com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient@4c60d6e9, domain=Experimental, taskListToPoll=TEST, identity=3174@ip-10-0-1-141, backoffInitialInterval=100, backoffMaximumInterval=60000, backoffCoefficient=2.0], taskExecutorThreadPoolSize=100]
Start workers
Now Sleep
Sleep Done
Make Call
DECIDER 1
DECIDER 2
DECIDER DOING CATCH
java.lang.IllegalStateException: not ready
at com.amazonaws.services.simpleworkflow.flow.core.Settable.get(Settable.java:91)
at com.amazonaws.services.simpleworkflow.flow.core.Functor.get(Functor.java:35)
at root.DeciderWFMethods.printMessage(DeciderWFMethods.java:79)
at root.DeciderWFMethods.access$100(DeciderWFMethods.java:6)
at root.DeciderWFMethods$1.doTry(DeciderWFMethods.java:54)
at --- continuation ---.(:0)
at root.DeciderWFMethods.workflowExecute(DeciderWFMethods.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.invokeMethod(POJOWorkflowDefinition.java:150)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.access$1(POJOWorkflowDefinition.java:148)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition$1.doTry(POJOWorkflowDefinition.java:76)
at --- continuation ---.(:0)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.execute(POJOWorkflowDefinition.java:66)
at com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider$WorkflowExecuteAsyncScope.doAsync(AsyncDecider.java:70)
DECIDER DOING FINALLY
Having compared the contents of the jars generated by the Eclipse and Maven builds, nothing is obviously different to me.
I have searched the net for something useful but only really found examples for Java 1.6/1.7, nothing for 1.8.
At this point I should mention that I'm new to Maven, but I believe this is more likely to be an AspectJ configuration / AWS build tools issue than a Maven problem.
Build & Run
The sample application is run on an EC2 instance, using EC2 IAM roles, against a workflow domain called 'Experimental'.
It accepts a string which the activity upper-cases; the decider should then print the message from the activity.
To build:
mvn clean
mvn package
Then run the compiled jar:
java -jar Test.jar "a test message"
GitHub Link
https://github.com/jwhitefield-hark/aws-flow-maven
Any advice would be greatly appreciated.
We have managed to resolve this issue with help from the kind folk at the AWS forums.
Our problem was twofold.
We had compiler arguments set to -proc:none; this prevented the build from completing correctly.
Also, within our aspectj-maven-plugin we had bound the execution to the process-sources phase, which appears to be the crux of our problem: it prevented a good build from being created and also hid the errors being generated as a result of including our compiler arguments.
As a side note, within the aspectj-maven-plugin we had set the source/target to 1.6; this is not required. We tried it because it appeared Eclipse may have been using those settings. Either way, these properties seem to have no effect.
We also changed the aspect library from aws-java-sdk-swf-libraries to aws-swf-build-tools to keep it up to date.
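Putting that together, a sketch of the resulting plugin configuration (the version and compliance level are illustrative; the exact POM lives in the repository linked above):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>aspectj-maven-plugin</artifactId>
  <version>1.11</version>
  <configuration>
    <!-- weave with Java 8 compliance -->
    <complianceLevel>1.8</complianceLevel>
    <aspectLibraries>
      <!-- the library carrying the SWF aspects (e.g. @Asynchronous) -->
      <aspectLibrary>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-swf-build-tools</artifactId>
      </aspectLibrary>
    </aspectLibraries>
  </configuration>
  <executions>
    <execution>
      <!-- default phase binding: no process-sources override, no -proc:none argument -->
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>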
https://forums.aws.amazon.com/thread.jspa?threadID=243838&tstart=0

Gradle - Compile submodules in parallel

I have a project with two sub modules.
Client - A UI based on Google's web developer kit
Server - A spring boot based server.
Now, in my Gradle config (build file) on the server, I'm creating a jar file from the client and then including it on the server through the snippet below. Lastly, I create a .war file based on the server config.
dependencies {
compile project(':client')
}
The architecture is similar to Spring Boot's proposed ways of resource handling.
Now, when I run the Gradle build, because the server depends on the client, the server compilation doesn't start until the client compilation and tests are done.
I feel that I'm not making use of Gradle's parallelism with this way of compiling client and server.
Is there a way to compile and run test cases in parallel, and then create a .war file only when both submodules' tasks are complete? How do I access the configurations of the client and server modules and then create a new war file in the rootProject?
You can try adding the --parallel flag to your Gradle command, although this is still an incubating feature. I noticed a significant improvement in build time when running the Gradle daemon, so you can try that as well.
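For example (both are standard Gradle options):
gradle build --parallel
or, to enable them persistently, in gradle.properties:
org.gradle.parallel=true
org.gradle.daemon=true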
No, this level of parallelism is not currently available. I think the team is slowly working towards general parallel task execution, as described in their spec. That should allow for the kind of behaviour you're requesting.
That said, you can run tests in parallel if they're independent, via the maxParallelForks and forkEvery options. MrHaki gives a short how-to for this on his blog. Note that this only applies to a single Test task instance.
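A sketch of those options in the module's build.gradle (the values are illustrative):
test {
    // fork multiple JVMs so independent test classes run concurrently
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    // optionally recycle each forked JVM after 50 test classes
    forkEvery = 50
}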
