Spring Boot app image not working on AWS ECS

I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully with the command:
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to a Docker repository.
Created a Fargate cluster and a task definition.
Created a task in RUNNING state based on the above definition.
Now when I try to access the endpoint, it returns a 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (from another repo), and it ran fine with the same security group.
Now I believe that something is wrong with the Spring Boot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.
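One detail worth double-checking (an observation from the snippets above, not a confirmed fix): the Dockerfile EXPOSEs port 80, but the successful local run maps 8080:8080, which means the app actually listens on Spring Boot's default port 8080. If the ECS task definition's container port is set to 80 to match the EXPOSE line, traffic will never reach the app. A consistent setup might look like:

FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
# Spring Boot listens on 8080 by default; expose the same port
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]

with containerPort set to 8080 in the task definition's port mappings.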

Related

Accessing persistent H2 DB in docker container

I'm deploying a Spring Boot application and I want to use a persistent DB. So, in the application.properties file, I have:
spring.datasource.url=jdbc:h2:file:/home/ubuntu/db;AUTO_SERVER=TRUE;
Now this works as long as I start the application without a container. Next, I build a Docker image and try to run the application. The Dockerfile looks like:
FROM maven:3-jdk-11 AS maven
ARG BUILD=target/build.jar
COPY ${BUILD} build.jar
EXPOSE 8080
USER spring:spring
ENTRYPOINT ["java","-jar","/build.jar"]
Now this doesn't work when I try to start it, because it searches for /home/ubuntu/db inside the container, which does not exist. Is there a way to make the app inside the docker container access the host folder /home/ubuntu/db? Thanks for the response.
The missing part is telling Docker, when running the container, to mount /home/ubuntu/db from the host into the container.
You do that like this:
docker run -v <folder_on_host>:<folder_in_container> <image-name>
with your example:
docker run -v /home/ubuntu/db:/home/ubuntu/db <image-name>
More info in the Docker docs: https://docs.docker.com/get-started/06_bind_mounts/
Just in case it is helpful to anyone else, the full command to be used is:
docker run -v /home/ubuntu/db:/home/ubuntu/db --privileged -p $HOST_PORT:$CONTAINER_PORT <image-name>
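In case it helps, the same bind mount can also be declared in a docker-compose.yml. This is a minimal sketch in which the service name and image name are placeholders, not taken from the question:

services:
  app:
    image: <image-name>
    ports:
      - "$HOST_PORT:$CONTAINER_PORT"
    volumes:
      - /home/ubuntu/db:/home/ubuntu/db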

Docker Compose Kong with Deck installed

I'm looking into using Deck for Kong to perform synchronized migrations. However, I can't seem to find a way to install the Deck CLI into my Kong container using docker-compose.
Is there any guide/documentation I can follow to perform such an installation?
I think this can be resolved with the kong/deck Docker container if you follow the steps below:
1. Write a Dockerfile.deck based on the kong/deck image.
2. Write a wait script for Kong's admin port (8001). kong/deck is just a container for the CLI, and its default ENTRYPOINT is the deck command, so reset the ENTRYPOINT if you want to run a wait script.
3. Copy kong.yaml from your local directory into a container directory.
4. Add a deck service to docker-compose.yml with a build configuration pointing at the Dockerfile from step 1.
5. Run docker-compose up after the build.
This will work if you want to apply the kong.yaml at deployment time.
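A minimal sketch of those steps (the file names, the wait loop, and the sync flags are my assumptions, not from an official guide; older decK releases use deck sync while newer ones use deck gateway sync, and the kong/deck image must ship a shell for the script to run):

# Dockerfile.deck
FROM kong/deck:latest
COPY kong.yaml /kong.yaml
COPY sync.sh /sync.sh
# reset the default ENTRYPOINT (the deck command) so the wait script can run
ENTRYPOINT []
CMD ["sh", "/sync.sh"]

# sync.sh -- wait until Kong's admin API answers, then apply kong.yaml
until deck ping --kong-addr http://kong:8001; do sleep 2; done
deck sync -s /kong.yaml --kong-addr http://kong:8001

# docker-compose.yml excerpt
services:
  deck:
    build:
      context: .
      dockerfile: Dockerfile.deck
    depends_on:
      - kong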

Netflix Conductor WorkflowStatusListener

I'm using Netflix Conductor over Rest API. I'm able to create a workflow and run it but I would like to know how to use the workflowStatusListener feature.
I'm running Conductor on my localhost with Docker, and I saw that the server is a simple jar, possibly a Spring Boot app. So, how do I pass my own jar with my listener or simple tasks in this scenario?
I found out how to deploy it using the Docker image.
I copied the /app folder from my Docker container, changed the startup.sh script, and mounted my local folder in its place.
I copied my jar into /app/libs and changed the startup command to put those libs on the classpath:
java -cp "libs/*" com.netflix.conductor.bootstrap.Main $config_file $log4j_file
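For reference, the listener itself is just a class on that classpath implementing Conductor's WorkflowStatusListener interface. A minimal sketch (package and method names follow Conductor 2.x and may differ in other versions; the workflow definition must also set workflowStatusListenerEnabled to true):

import com.netflix.conductor.common.run.Workflow;
import com.netflix.conductor.core.execution.WorkflowStatusListener;

public class LoggingStatusListener implements WorkflowStatusListener {

    @Override
    public void onWorkflowCompleted(Workflow workflow) {
        // invoked when a workflow reaches the COMPLETED terminal state
        System.out.println("Completed: " + workflow.getWorkflowId());
    }

    @Override
    public void onWorkflowTerminated(Workflow workflow) {
        // invoked when a workflow is terminated
        System.out.println("Terminated: " + workflow.getWorkflowId());
    }
}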

How can a Spring Cloud application deployed in Docker read and write shared files?

I deployed a Spring Cloud application in Docker using a Dockerfile. The application needs to access picture files. In my local development environment I can access the picture files, but when the application runs in Docker it fails: it cannot find the files, which live on the host machine. How do I deal with this problem?
I have tried copying the pictures from the host into the Docker volume path, but that did not work.
The picture file paths in my application yml file look like this:
originImgPath: /tmp/tcps/klanalyze/originImg
captchaImgPath: /tmp/tcps/klanalyze/captchaImg
The pictures saved on the host are at these paths:
/tmp/tcps/klanalyze/originImg
/tmp/tcps/klanalyze/captchaImg
My Dockerfile for packaging is like this:
FROM jdk-8u191:20190321
MAINTAINER beigai_liyang
VOLUME /tmp
ADD target/klanalyze-gateway-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
EXPOSE 8888
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]
My code is like this.
//read yml settings
@Autowired
private GatewayProperty gatewayProperty;

public void loadPicture() {
    ……
    //load file
    File file = new File(gatewayProperty.getOriginImgPath());
    ……
}
My docker version is 17.12.1-ce.
My spring cloud version is Finchley.SR1.
My Spring boot version is 2.0.3.RELEASE.
My host computer is cent-os 7.
It seems you need to use -v to mount your host's /tmp directory into the container:
docker run -v /tmp:/tmp image_name
Concerns:
It's not clear what you mean by "copy the host computer picture file to docker volume path".
Please paste your docker run command here, so we can see how you run the container.
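Given the paths in the question, a narrower mount of just the image directories should also work (the image name is a placeholder; the port comes from the Dockerfile's EXPOSE 8888):

docker run -v /tmp/tcps/klanalyze:/tmp/tcps/klanalyze -p 8888:8888 <image-name>

Mounting only the needed subtree avoids exposing the host's entire /tmp to the container.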

Where to set env variables for local Spring Cloud Dataflow?

For development, I'm using the local Spring Cloud Dataflow server on my Mac, though we plan to deploy to a Kubernetes cluster for integration testing and production. The SCDF docs say you can use environment variables to configure various things, like database configuration. I'd like my registered app to use these env variables, but it doesn't seem to be able to see them.

That is, I start the SCDF server by running its jar from a terminal window, which can see a set of environment variables. I then configure a stream using some Spring Cloud stream starter apps and one custom Spring Boot app. I have the custom app logging System.getenv(), and it's not showing the env variables I need. I set them in my ~/.bashrc file, which I also source from ~/.bash_profile. That works for my terminal windows and most other things that need the environment, but not here. Where should I be defining them?
To the points in the first answer and the comments: they sound good, but nothing works for me. I have an SQS source that gets its connection via:
return AmazonSQSAsyncClientBuilder.standard()
        .withRegion(Regions.US_WEST_2.getName())
        .build();
When I deploy to a Minikube environment, I edit the sqs app's deployment and set the AWS credentials in the env section. Then it works. For a local deployment, I've now tried:
stream deploy --name greg1 --properties "deployer.sqs.AWS_ACCESS_KEY_ID=<id>,deployer.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "deployer.sqs.aws_access_key_id=<id>,deployer.sqs.aws_secret_access_key=<secret>"
stream deploy --name greg1 --properties "app.sqs.AWS_ACCESS_KEY_ID=<id>,app.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "app.sqs.aws_access_key_id=<id>,app.sqs.aws_secret_access_key=<secret>"
All fail with the error message I get when the credentials are wrong: "The specified queue does not exist for this wsdl version." I've read the links, and I don't really see anything else to try. Where am I going wrong?
You can pass environment variables to the apps that are deployed via SCDF using application properties or deployment properties. Check the docs for a description of each type.
For example:
dataflow:> stream deploy --name ticktock --properties "deployer.time.local.javaOpts=-Xmx2048m -Dtest=foo"
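For the AWS credentials specifically, one option worth trying (an assumption based on the AWS SDK's default credentials chain, not something from the SCDF docs): AmazonSQSAsyncClientBuilder.standard() also picks up the JVM system properties aws.accessKeyId and aws.secretKey, so they can be passed through the local deployer's javaOpts:

dataflow:> stream deploy --name greg1 --properties "deployer.sqs.local.javaOpts=-Daws.accessKeyId=<id> -Daws.secretKey=<secret>"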
