GCP Cloud Run Spring App - How to specify flyway volume - spring-boot

In my spring app, I externalized flyway migration
spring.flyway.locations=filesystem:/var/migration
When using docker run I specify
docker run --name papp -p 9000:2000 -v "C:...\db\migration":/var/migration
How can I specify this in Cloud Run under Variables and Secrets?
What values do I need to supply in place of -v "C:...\db\migration":/var/migration?
(I created a bucket and uploaded the files to Cloud Storage, assuming that is where they should live.)

The form you are looking at is described as:
Mount each secret as a volume, which makes the secret available to the container as files.
Unless you can explain in what sense /var/migration would be a "secret", this is likely the wrong approach: that form mounts secrets, not arbitrary volumes, so you cannot reproduce docker run -v with it.
There are, however, genuine secrets for Flyway, which you could mount e.g. as a flyway.properties file:
flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schema1,schema2
And generally all the other configuration parameters, as well:
https://flywaydb.org/documentation/configuration/parameters/
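If the migration scripts themselves are not secrets, a Cloud Storage bucket mounted as a volume is closer to what docker run -v does; newer Cloud Run revisions support this. A hedged sketch, where SERVICE, IMAGE, BUCKET, and the secret name flyway-props are placeholders (verify the flags against current gcloud docs):

```shell
# Mount a Cloud Storage bucket at /var/migration (placeholders in caps):
gcloud run deploy SERVICE \
  --image IMAGE \
  --add-volume name=migrations,type=cloud-storage,bucket=BUCKET \
  --add-volume-mount volume=migrations,mount-path=/var/migration

# Alternatively, mount a Secret Manager secret as a flyway.properties file
# (flyway-props is a hypothetical secret name):
gcloud run services update SERVICE \
  --update-secrets=/var/flyway/flyway.properties=flyway-props:latest
```

With the bucket mounted, spring.flyway.locations=filesystem:/var/migration should resolve inside the container just as it does with the local bind mount.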

Related

Springboot app image not working on aws ecs

I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully with the command
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to a Docker repository
Created a Fargate cluster and task definition
Created a task in the running state based on the above definition
Now when I try to access the endpoint it's giving 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (another repo) and it ran fine with the same security group.
Now, I believe that something is wrong with the springboot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.
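Since another image works with the same security group, one thing worth checking is a port mismatch: the Dockerfile EXPOSEs 80, but Spring Boot listens on 8080 by default, which is why -p 8080:8080 worked locally. If the task definition (or its target group) routes traffic to a port the app is not listening on, requests can fail. A sketch of the relevant task-definition fragment, assuming the app really listens on 8080:

```json
{
  "containerDefinitions": [
    {
      "name": "demo",
      "image": "chandreshv85/demo",
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ]
    }
  ]
}
```

Note that EXPOSE is only documentation; what matters is the port the app binds and the port the task definition maps.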

Cannot access environment variables in Docker environment

I am trying to dockerize my Spring Boot project and run it on an EC2 instance.
In application.properties I have the following lines:
spring.datasource.url=${SPRING_DATASOURCE_URL}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
and I am reading SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME and SPRING_DATASOURCE_PASSWORD from my environment.
I have the following Dockerfile:
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-DSPRING_DATASOURCE_URL=${DATASOURCE_URL}", \
"-DSPRING_DATASOURCE_USERNAME=${DATASOURCE_USERNAME}", \
"-DSPRING_DATASOURCE_PASSWORD=${DATASOURCE_PASSWORD}", "/app.jar"]
EXPOSE 8080
When I try to run the following command,
sudo docker run -p 80:8080 <image-repo> --env DATASOURCE_URL=<url> --env DATASOURCE_USERNAME=<username> --env DATASOURCE_PASSWORD=<password>
my application crashes because the environment variables it expects are missing.
I have also tried docker-compose, as suggested in the "Spring boot app started in docker can not access environment variables" question, but that became much harder for me to debug.
TL;DR: I want information hiding in my Spring Boot application, using environment variables to keep my database credentials out of the code. However, my dockerized program does not see the environment variables I set.
If you know another way to achieve this kind of information hiding, I would be happy to hear it. Thanks.
You need to put docker run options like --env before the image name. If they come after the image name, Docker interprets them as the command to run; with the specific setup you have here, you'll see them as the arguments to your main() function.
sudo docker run \
--env SPRING_DATASOURCE_URL=... \
image-name
# additional arguments here visible to main() function
I've spelled out SPRING_DATASOURCE_URL here. Spring already knows how to map environment variables to system properties. This means you can delete the lines from your application.properties file and the java -D options, so long as you do spell out the full Spring property name in the environment variable name.
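Concretely, the Dockerfile can drop the -D options entirely (a sketch based on the question's Dockerfile; the placeholders stay as placeholders):

```dockerfile
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

and the run command supplies the full Spring property names as environment variables, before the image name:

```shell
sudo docker run -p 80:8080 \
  --env SPRING_DATASOURCE_URL=<url> \
  --env SPRING_DATASOURCE_USERNAME=<username> \
  --env SPRING_DATASOURCE_PASSWORD=<password> \
  <image-repo>
```

Spring's relaxed binding maps SPRING_DATASOURCE_URL to spring.datasource.url automatically, so the placeholder lines in application.properties are no longer needed either.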

How can I natively load docker secrets mounted as a volume in spring boot

How can my spring boot application running inside a container access docker secrets mounted as a volume inside that same container?
Commonly suggested methods I DO NOT want to use:
Echo the secret into an environment variable - this is insecure.
Pass the secret in as a command line argument - this is messy with multiple secrets and hurts the local dev experience
Manually read the secret from a file by implementing my own property loader - Spring must have a native way to do this
Spring Boot 2.4 introduced this feature, volume-mounted config trees, via the spring.config.import property.
To read docker secrets mounted as a volume from the default location set:
spring.config.import=configtree:/run/secrets/
The property name will be derived from the secret filename and the value from the file contents.
This can then be accessed like any other spring property.
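The naming rule is easy to see outside of Spring. A minimal sketch, using /tmp/secrets-demo as a stand-in for /run/secrets and hypothetical secret names db.user and db.password:

```shell
# Simulate the directory layout Spring's configtree import reads
# (real Docker secrets land in /run/secrets).
mkdir -p /tmp/secrets-demo
printf 'databaseUser' > /tmp/secrets-demo/db.user
printf 'databasePassword' > /tmp/secrets-demo/db.password

# Spring derives the property name from the file name and the value
# from the file contents, e.g. the file db.user becomes property "db.user".
for f in /tmp/secrets-demo/*; do
  echo "$(basename "$f") = $(cat "$f")"
done
```

With that layout, @Value("${db.user}") or constructor binding picks the values up like any other property.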

How can I give a Spring Cloud application deployed in Docker read and write access to shared files?

I deployed a Spring Cloud application in Docker using a Dockerfile. The application reads picture files. In my local development environment it can access the picture files, but when it runs in Docker it fails: the application cannot find the files, which live on the host machine. How can I solve this?
I have tried copying the host machine's picture files into the Docker volume path, but that did not work.
The picture file paths configured in my application yml file are:
originImgPath: /tmp/tcps/klanalyze/originImg
captchaImgPath: /tmp/tcps/klanalyze/captchaImg
The pictures are saved on the host machine under:
/tmp/tcps/klanalyze/originImg
/tmp/tcps/klanalyze/captchaImg
My Dockerfile is:
FROM jdk-8u191:20190321
MAINTAINER beigai_liyang
VOLUME /tmp
ADD target/klanalyze-gateway-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
EXPOSE 8888
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]
My code is like this.
//read yml settings
@Autowired
private GatewayProperty gatewayProperty;

public void loadPicture() {
    ……
    //load file
    File file = new File(gatewayProperty.getOriginImgPath());
    ……
}
My docker version is 17.12.1-ce.
My spring cloud version is Finchley.SR1.
My Spring boot version is 2.0.3.RELEASE.
My host computer is cent-os 7.
It seems you need to use -v to mount your host directory onto the container's /tmp path:
docker run -v /tmp:/tmp image_name
Concerns:
It is not clear what "copy the host computer picture file to docker volume path" means.
Paste your docker run command here, so we can see how you run the container.
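If the run flags get unwieldy, the same mount can live in a Compose file. A sketch (the service name is made up), mounting only the needed subtree rather than all of /tmp:

```yaml
services:
  klanalyze-gateway:
    image: image_name
    ports:
      - "8888:8888"
    volumes:
      - /tmp/tcps/klanalyze:/tmp/tcps/klanalyze
```

Mounting the narrower path keeps the container's own /tmp (which the Dockerfile declares as a VOLUME) from being shadowed by the host's entire /tmp.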

Where to set env variables for local Spring Cloud Dataflow?

For development, I'm using the local Spring Cloud Dataflow server on my Mac, though we plan to deploy to a Kubernetes cluster for integration testing and production. The SCDF docs say you can use environment variables to configure various things, like database configuration. I'd like my registered app to use these env variables, but they don't seem to be able to see them. That is, I start the SCDF server by running its jar from a terminal window, which can see a set of environment variables. I then configure a stream using some Spring Cloud stream starter apps and one custom Spring Boot app. I have the custom app logging System.getenv() and it's not showing the env variables I need. I set them in my ~/.bashrc file, which I also source from ~/.bash_profile. That works for my terminal windows and most other things that need environment, but not here. Where should I be defining them?
To the points in the first answer and comments: they sound good, but nothing works for me. I have an SQS source that gets its connection via:
return AmazonSQSAsyncClientBuilder.standard()
        .withRegion(Regions.US_WEST_2.getName())
        .build();
When I deploy to a Minikube environment, I edit the sqs app's deployment and set the AWS credentials in the env section. Then it works. For a local deployment, I've now tried:
stream deploy --name greg1 --properties "deployer.sqs.AWS_ACCESS_KEY_ID=<id>,deployer.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "deployer.sqs.aws_access_key_id=<id>,deployer.sqs.aws_secret_access_key=<secret>"
stream deploy --name greg1 --properties "app.sqs.AWS_ACCESS_KEY_ID=<id>,app.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "app.sqs.aws_access_key_id=<id>,app.sqs.aws_secret_access_key=<secret>"
All fail with the error message I get when the credentials are wrong: "The specified queue does not exist for this wsdl version." I've read the links and don't see anything else to try. Where am I going wrong?
You can pass environment variables to the apps that are deployed via SCDF using application properties or deployment properties. Check the docs for a description of each type.
For example:
dataflow:> stream deploy --name ticktock --properties "deployer.time.local.javaOpts=-Xmx2048m -Dtest=foo"
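For the SQS credentials specifically, note that app.* properties set Spring properties on the app, not raw environment variables, so the names must be properties the app actually binds. If the SQS source is built on Spring Cloud AWS, its credential properties are a more likely fit than the env-var spellings tried above. A sketch, assuming Spring Cloud AWS property names (verify against the source app's documentation; the placeholders stay as placeholders):

```shell
dataflow:> stream deploy --name greg1 --properties "app.sqs.cloud.aws.credentials.accessKey=<id>,app.sqs.cloud.aws.credentials.secretKey=<secret>,app.sqs.cloud.aws.region.static=us-west-2"
```

Thanks to Spring's relaxed binding, exporting the equivalent environment variables in the shell that launches the local SCDF server's deployed apps can also work, but deployment properties keep the configuration per-stream.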
