I don't understand how to register an app. I followed a lot of guides and they use this example to explain it:
dataflow:>app register --name fileIngest --type task --uri file:///path/to/target/ingest-X.X.X.jar
My jar is in "C:\Temp" but if i set the uri: file:///Temp/myjar-0.0.1-SNAPSHOT.jar
i have this error:
java.lang.IllegalArgumentException: File /Temp/myjar-0.0.1-SNAPSHOT.jar must exist
Can someone explain how to run a local batch job with Spring Cloud Data Flow running locally?
I figured out how to do it. In docker-compose.yml I set the path for both the skipper-server and dataflow-server services like this:
image: springcloud/spring-cloud-dataflow-server:${DATAFLOW_VERSION:?DATAFLOW_VERSION is not set!}
container_name: dataflow-server
volumes:
  - 'C:/Temp:/root/apps'
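and the same volume entry under the skipper-server service (a sketch of the analogous block; the image line mirrors the official docker-compose.yml):
image: springcloud/spring-cloud-skipper-server:${SKIPPER_VERSION:?SKIPPER_VERSION is not set!}
container_name: skipper-server
volumes:
  - 'C:/Temp:/root/apps'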
"Then the right way to register the app is: "
app register --name 'mybatch' --type task --uri file:///root/apps/myjar-0.0.1-SNAPSHOT.jar
What you tried is intended for a Unix box; on Windows, you have to point to the file with a different URI pattern.
Perhaps try this:
app register --name fileIngest --type task --uri file:/C:/Temp/myjar-0.0.1-SNAPSHOT.jar
I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully using the command
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to the Docker repository
Created a Fargate cluster and task definition
Created a task in the running state based on the above definition
Now when I try to access the endpoint it's giving 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (another repo) and it ran fine with the same security group.
Now I believe that something is wrong with the Spring Boot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.
I am trying to dockerize my Spring Boot project and run it on an EC2 instance.
In application.properties I have the following lines:
spring.datasource.url=${SPRING_DATASOURCE_URL}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
and I am reading SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME and SPRING_DATASOURCE_PASSWORD from my environment.
I have the following Dockerfile:
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-DSPRING_DATASOURCE_URL=${DATASOURCE_URL}", \
"-DSPRING_DATASOURCE_USERNAME=${DATASOURCE_USERNAME}", \
"-DSPRING_DATASOURCE_PASSWORD=${DATASOURCE_PASSWORD}", "/app.jar"]
EXPOSE 8080
When I try to run the following command,
sudo docker run -p 80:8080 <image-repo> --env DATASOURCE_URL=<url> --env DATASOURCE_USERNAME=<username> --env DATASOURCE_PASSWORD=<password>
my application crashes because the environment variables don't exist inside the container.
Also,
I have tried using docker-compose as mentioned in the "Spring boot app started in docker can not access environment variables" link. It became much more complicated for me to debug.
TL;DR: I want to achieve information hiding in my Spring Boot application. I want to use environment variables to hide my database credentials, but my dockerized program does not have the environment variables I expect it to.
If you have any approaches other than this, I would also be happy to hear other ways I can achieve information hiding. Thanks.
You need to put options like docker run --env before the image name. If they're after the image name, Docker interprets them as the command to run; with the specific setup you have here, you'll see them as the arguments to your main() function.
sudo docker run \
  --env SPRING_DATASOURCE_URL=... \
  image-name
# additional arguments here are visible to the main() function
I've spelled out SPRING_DATASOURCE_URL here. Spring already knows how to map environment variables to system properties. This means you can delete the lines from your application.properties file and the java -D options, so long as you do spell out the full Spring property name in the environment variable name.
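Putting that together, a minimal sketch of the simplified setup (same image layout as in your question, just without the -D flags):
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# No -D options needed: Spring's relaxed binding maps SPRING_DATASOURCE_URL to spring.datasource.url
ENTRYPOINT ["java", "-jar", "/app.jar"]
EXPOSE 8080
and then run it with the options before the image name:
sudo docker run -p 80:8080 \
  --env SPRING_DATASOURCE_URL=<url> \
  --env SPRING_DATASOURCE_USERNAME=<username> \
  --env SPRING_DATASOURCE_PASSWORD=<password> \
  <image-repo>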
In my Spring app, I externalized the Flyway migration location:
spring.flyway.locations=filesystem:/var/migration
When using docker run I specify
docker run --name papp -p 9000:2000 -v "C:...\db\migration":/var/migration
How can I specify this in Cloud Run >> Variables and Secrets?
What values do I need to supply for -v "C:...\db\migration":/var/migration?
(I created a bucket and uploaded the files to Cloud Storage, assuming the files should be there.)
The form you have there is described as:
Mount each secret as a volume, which makes the secret available to the container as files.
Unless you can explain in what way /var/migration would be a "secret", this is likely the wrong approach. One likely cannot mount that volume as a secret; locally you would just mount it with docker run -v.
But there are secrets for Flyway, which you could mount, e.g., as a flyway.properties file:
flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schema1,schema2
And generally all the other configuration parameters, as well:
https://flywaydb.org/documentation/configuration/parameters/
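If you do go that route, Cloud Run can mount a Secret Manager secret as a file; a sketch (my-service and flyway-props are hypothetical names, the mount path is an assumption):
gcloud run services update my-service \
  --update-secrets=/etc/flyway/flyway.properties=flyway-props:latest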
In my use case I would like to have two jar files in the container. In a typical Docker image, I see an entry point that starts the jar file. In my case, I will not know which program should be started until the container is used in the K8s services. In my example, I have one jar file that applies the DDLs, and the second jar file is my application. I want K8s to deploy my DDL application first and, upon completion, deploy my Spring Boot application (from a different jar but the same container) next. Therefore I cannot give an entry point for my container; rather, I need to run the specific jar file using a command and arguments from my YAML file. In all the examples I have come across, I see an entry point being used to start the Java process.
The difference from the post referred to here is: I want the container to have two jar files, and when I load the container through K8s, I want to decide which program to run from the command line. One option I am exploring is a parameterized shell script, so I can pass the jar name as a parameter and the shell will run java -jar on it. I will update here once I find something.
Solution update:
Add the two jars in the Dockerfile and have a shell script that takes the jar name as a parameter. Use the sample below to invoke the right jar file from the K8s YAML file; a sketch of the script itself follows the YAML.
spec:
  containers:
    - image: URL
      imagePullPolicy: Always
      name: image-name
      command: ["/bin/sh"]
      args: ["-c", "/home/md/javaCommand.sh jarName.jar"]
      ports:
        - containerPort: 8080
          name: http
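The referenced javaCommand.sh can be as small as this (a sketch; the jar directory and the lack of error handling are assumptions):
#!/bin/sh
# Hypothetical /home/md/javaCommand.sh: start whichever jar is passed as the first argument
exec java -jar "/home/md/$1"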
A Docker image doesn't have to run a Java jar when starting; it just has to run something.
You can simply make this something a bash script that makes these decisions and starts the jar you like.
Try adding the prerequisites as init containers when deploying to Kubernetes, and place your application in the regular container; the DDL container will then be initialized first, and the application container runs once it completes.
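A minimal sketch of that pattern (image and jar names are placeholders):
spec:
  initContainers:
    - name: ddl
      image: URL
      command: ["java", "-jar", "/home/md/ddl.jar"]   # must exit successfully first
  containers:
    - name: app
      image: URL
      command: ["java", "-jar", "/home/md/app.jar"]   # started once the init container completes
      ports:
        - containerPort: 8080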
I registered the sink first as follows:
app register --name mysink --type sink --uri file:///Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar
Then I created a stream
stream create --definition “:myKafkaTopic > mysink" --name myStreamName --deploy
I got the error:
Command failed org.springframework.cloud.dataflow.rest.client.DataFlowClientException: File
/Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar must exist
Yet the jar exists!
I've followed the Maven local-repository mounting approach using docker-compose; hope this helps:
Build with Maven:
mvn clean install
Set up your environment variables (PowerShell):
$Env:DATAFLOW_VERSION="2.5.1.RELEASE"
$Env:SKIPPER_VERSION="2.4.1.RELEASE"
$Env:HOST_MOUNT_PATH="C:\Users\yourUserName\.m2"
$Env:DOCKER_MOUNT_PATH="/root/.m2/"
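These two variables are consumed by the SCDF docker-compose.yml, which mounts the host path into the containers roughly like this (a sketch, not the exact file):
volumes:
  - ${HOST_MOUNT_PATH}:${DOCKER_MOUNT_PATH}
so your local .m2 repository becomes visible inside the dataflow and skipper containers as /root/.m2/.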
Restart/start the containers:
docker-compose down
docker-compose up
Register your apps:
app register --type sink --name mysink --uri maven://groupId:artifactId:version
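For example, for the kafkaStreamDemo jar built above (com.example is a hypothetical groupId; use your project's actual coordinates):
app register --type sink --name mysink --uri maven://com.example:kafkaStreamDemo:0.0.1-SNAPSHOT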
Register Doc
File permission is one thing - please double check as advised.
A few other ideas:
1) Run app info sink:mysink. If the jar is actually available, it should return a list of Boot/whitelisted properties of the application.
2) Run the Jar standalone. Make sure it actually starts via java -jar.....
3) The stream definition appears to include a special character (“:myKafkaTopic > mysink" instead of ":myKafkaTopic > mysink" - notice the “ character); it would fail in the shell, but it looks like you were able to deploy it anyway. A full stack trace would help; see the corrected definition below.
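With plain quotes the definition would be:
stream create --definition ":myKafkaTopic > mysink" --name myStreamName --deploy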
We just had the same error as described above.
We had mounted the folder with the jar files into the Skipper container only.
The solution was to mount the jars into the Data Flow server Docker container as well:
Skipper deploys the app, but the Data Flow server registers it.