Netflix Conductor WorkflowStatusListener - microservices

I'm using Netflix Conductor over its REST API. I'm able to create a workflow and run it, but I would like to know how to use the workflowStatusListener feature.
I'm running Conductor on my localhost with Docker, and I saw that the server is a simple jar, possibly a Spring Boot app. So how do I pass my own jar with my listener or simple tasks in this scenario?

I found out how to deploy it using the Docker image:
I copied the /app folder out of my Docker container, changed the startup.sh script, and mounted my local folder in its place.
I copied my jar into /app/libs.
java -cp "libs/*" com.netflix.conductor.bootstrap.Main $config_file $log4j_file
(Note the quoted "libs/*": an unquoted libs/*.jar is glob-expanded by the shell into several space-separated paths, which -cp does not accept.)
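To avoid editing files inside the container, the same layout can be mounted at startup. A minimal sketch, assuming the stock Conductor server image (the image name and local path are illustrative):
# Mount the locally modified /app folder (startup.sh adjusted, your
# listener jar copied into app/libs) over the image's /app directory:
docker run -d -p 8080:8080 -v $(pwd)/app:/app conductor:server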

Related

Spring Boot app image not working on AWS ECS

I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully using the command
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to a Docker repository.
Created a Fargate cluster and task definition.
Created a task in the running state based on the above definition.
Now when I try to access the endpoint it's giving 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (another repo) and it ran fine with the same security group.
Now, I believe that something is wrong with the Spring Boot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.
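One detail worth double-checking, going only by the snippets above: the Dockerfile EXPOSEs port 80, while Spring Boot listens on 8080 by default and the local docker run maps 8080:8080. A quick sanity check (image name and endpoint taken from the question):
# Confirm which port the app actually listens on:
docker run -p 8080:8080 -t chandreshv85/demo
curl http://localhost:8080/employee
# In the ECS task definition, containerPort must match that port (8080),
# and the security group must allow inbound traffic to it.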

log4j2 logs for Spring boot application inside Docker container

I have a Spring Boot application which uses log4j2 to write logs. When I deploy this application to a Docker container, I want the logs to be written to a file at a specified location outside the container. How can I do this? I tried providing the path of the log folder via an environment variable on startup, but it had no effect: no logs are being written. Please help.
You have to mount a directory on the host file system inside the container. A simple web search will yield many results on how to do this, such as How to mount a host directory in a Docker container.
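A minimal sketch, assuming the application writes its log4j2 output to /var/log/myapp inside the container (both paths and the image name here are hypothetical):
# Bind-mount a host directory over the container's log directory so the
# log files land on the host, outside the container:
docker run -d -v /host/path/logs:/var/log/myapp my-springboot-app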

Deploying Spring Boot microservices with K8s in production

I'm new to K8s setup and I wanted to know the best way to deploy the services in production. Below are a few ways I could think of; can you guide me in the right direction?
1) Deploy each *.war file into an Apache Tomcat Docker container, and use the service discovery mechanism of K8s.
2) Run each application normally using "java -jar *.war" in Pods and expose their ports using port binding.
Thanks.
The canonical way to deploy applications to Kubernetes is as follows:
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
I would suggest using the embedded Tomcat server in the Spring Boot .jar file to deploy your microservices. Below is the answer of @weibeld, which I also follow to deploy my Spring Boot apps.
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
You can use Jib to easily build a distroless image. The container image can be built using the Jib Maven plugin:
mvn compile jib:build -Djib.to.image=MY_REGISTRY_IMAGE:MY_TAG -Djib.to.auth.username=USER -Djib.to.auth.password=PASSWORD
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Generate your deployment .yml file skeleton and adjust the deployment parameters as you need in the file:
kubectl create deployment my-springboot-app --image MY_REGISTRY_IMAGE:MY_TAG --dry-run=client -o yaml > my-springboot-app-deployment.yml
(On kubectl versions older than 1.18, use the bare --dry-run flag instead of --dry-run=client.)
Create the deployment:
kubectl apply -f my-springboot-app-deployment.yml
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
kubectl expose deployment my-springboot-app --port=8080 --target-port=8080 --dry-run=client -o yaml > my-springboot-app-service.yml
kubectl apply -f my-springboot-app-service.yml
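To verify the result, a few standard kubectl checks (resource names taken from the commands above):
# Wait for the Deployment to finish rolling out:
kubectl rollout status deployment/my-springboot-app
# Confirm the Service exists and note its cluster IP and port:
kubectl get service my-springboot-app
# Forward the Service port locally, then curl http://localhost:8080:
kubectl port-forward service/my-springboot-app 8080:8080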

How can I deal with a Spring Cloud application deployed in Docker reading and writing shared files?

I deployed a Spring Cloud application in Docker via a Dockerfile. The application reads picture files. In my local development environment I can access the picture files, but when I deploy the application in Docker it fails: the application cannot find the files, which live on the host machine. How do I deal with this problem?
I have tried copying the host's picture files to the Docker volume path, but it does not work.
The picture file paths configured in my application yml file are:
originImgPath: /tmp/tcps/klanalyze/originImg
captchaImgPath: /tmp/tcps/klanalyze/captchaImg
The pictures are saved on the host at:
/tmp/tcps/klanalyze/originImg
/tmp/tcps/klanalyze/captchaImg
My Dockerfile for packaging is:
FROM jdk-8u191:20190321
MAINTAINER beigai_liyang
VOLUME /tmp
ADD target/klanalyze-gateway-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
EXPOSE 8888
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]
My code is like this.
//read yml settings
@Autowired
private GatewayProperty gatewayProperty;

public void loadPicture() {
    ……
    //load file
    File file = new File(gatewayProperty.getOriginImgPath());
    ……
}
My docker version is 17.12.1-ce.
My spring cloud version is Finchley.SR1.
My Spring boot version is 2.0.3.RELEASE.
My host computer is cent-os 7.
It seems you need to use -v to mount your host directory onto the container's /tmp volume:
docker run -v /tmp:/tmp image_name
Two concerns:
It is not clear what you mean by "copy the host computer picture file to docker volume path".
Please paste your docker run command here, so we can see how you run the container.
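Spelled out with the paths from the question, the mount would look like this (a sketch; image_name stands in for the actual image):
# Mount the host picture directories at the exact paths the application
# yml expects inside the container:
docker run -d \
  -v /tmp/tcps/klanalyze/originImg:/tmp/tcps/klanalyze/originImg \
  -v /tmp/tcps/klanalyze/captchaImg:/tmp/tcps/klanalyze/captchaImg \
  image_name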

Jar not found error while trying to deploy SCDF stream

I registered the sink first as follows:
app register --name mysink --type sink --uri file:///Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar
Then I created a stream
stream create --definition “:myKafkaTopic > mysink" --name myStreamName --deploy
I got the error
Command failed org.springframework.cloud.dataflow.rest.client.DataFlowClientException: File
/Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar must exist
While the jar exists!!
I've followed the Maven local repository mounting approach using Docker Compose; hope this helps:
Maven:
mvn clean install
Set up your environment variables (PowerShell syntax):
$Env:DATAFLOW_VERSION="2.5.1.RELEASE"
$Env:SKIPPER_VERSION="2.4.1.RELEASE"
$Env:HOST_MOUNT_PATH="C:\Users\yourUserName\.m2"
$Env:DOCKER_MOUNT_PATH="/root/.m2/"
Restart/start the containers:
docker-compose down
docker-compose up
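On Linux/macOS the equivalent setup in bash would be (assuming the stock SCDF docker-compose.yml, which reads these variables):
# Same versions and mount paths as above, in bash syntax:
export DATAFLOW_VERSION=2.5.1.RELEASE
export SKIPPER_VERSION=2.4.1.RELEASE
export HOST_MOUNT_PATH=$HOME/.m2
export DOCKER_MOUNT_PATH=/root/.m2/
docker-compose down && docker-compose up -d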
Register your apps:
app register --type sink --name mysink --uri maven://groupId:artifactId:version
Register Doc
File permission is one thing - please double check as advised.
A few other ideas:
1) Run app info sink:mysink. If the JAR is actually available, it should return a list of Boot/whitelisted properties of the application.
2) Run the jar standalone. Make sure it actually starts via java -jar.....
3) The stream definition appears to include a special character (“:myKafkaTopic > mysink" instead of ":myKafkaTopic > mysink" - notice the “ character); it would fail in the shell, but it looks like you were able to deploy it anyway. A full stacktrace would help. A corrected definition is shown below.
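For reference, point 3 with plain ASCII quotes (same stream as in the question):
stream create --definition ":myKafkaTopic > mysink" --name myStreamName --deploy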
We just had the same error as described above. We had mounted the folder with the jar files into the Skipper container only. The solution was to mount the jars into the Data Flow Server Docker container as well: Skipper deploys the jar, but the Data Flow Server registers it.
