For development, I'm using the local Spring Cloud Data Flow server on my Mac, though we plan to deploy to a Kubernetes cluster for integration testing and production. The SCDF docs say you can use environment variables to configure various things, like database configuration. I'd like my registered app to use these env variables, but it doesn't seem to see them. That is, I start the SCDF server by running its jar from a terminal window, which can see a set of environment variables. I then configure a stream using some Spring Cloud Stream starter apps and one custom Spring Boot app. I have the custom app logging System.getenv(), and it's not showing the env variables I need. I set them in my ~/.bashrc file, which I also source from ~/.bash_profile. That works for my terminal windows and most other things that need the environment, but not here. Where should I be defining them?
To the points in the first answer and comments, they sound good, but nothing works for me. I have an SQS Source that gets its connection via:
return AmazonSQSAsyncClientBuilder.standard()
        .withRegion(Regions.US_WEST_2.getName())
        .build();
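(For context: built this way, the client falls back to the AWS SDK's default credentials chain, which reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the process environment. A minimal sketch that makes that dependency explicit, assuming the AWS SDK for Java v1 is on the classpath:)

import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClientBuilder;

public class SqsClientFactory {
    public AmazonSQSAsync sqsClient() {
        return AmazonSQSAsyncClientBuilder.standard()
                // throws at credential lookup if the two env variables are unset,
                // instead of silently falling through the rest of the default chain
                .withCredentials(new EnvironmentVariableCredentialsProvider())
                .withRegion(Regions.US_WEST_2)
                .build();
    }
}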
When I deploy to a Minikube environment, I edit the sqs app's deployment and set the AWS credentials in the env section. Then it works. For a local deployment, I've now tried:
stream deploy --name greg1 --properties "deployer.sqs.AWS_ACCESS_KEY_ID=<id>,deployer.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "deployer.sqs.aws_access_key_id=<id>,deployer.sqs.aws_secret_access_key=<secret>"
stream deploy --name greg1 --properties "app.sqs.AWS_ACCESS_KEY_ID=<id>,app.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "app.sqs.aws_access_key_id=<id>,app.sqs.aws_secret_access_key=<secret>"
All fail with the error message I get when credentials are wrong: "The specified queue does not exist for this wsdl version." I've read the links and don't really see anything else to try. Where am I going wrong?
You can pass environment variables to the apps that are deployed via SCDF using application properties or deployment properties. Check the docs for a description of each type.
For example:
dataflow:> stream deploy --name ticktock --properties "deployer.time.local.javaOpts=-Xmx2048m -Dtest=foo"
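If the SQS source resolves its credentials through Spring Cloud AWS (an assumption about that starter app, not something stated in the question), the keys can also be supplied as application properties rather than OS environment variables, e.g.:

dataflow:> stream deploy --name greg1 --properties "app.sqs.cloud.aws.credentials.accessKey=<id>,app.sqs.cloud.aws.credentials.secretKey=<secret>"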
Related
So, I am learning about microservices (I am a beginner) and I'm facing an issue. I've gone through the Spring Cloud Config and Docker docs hoping to find a solution, but I didn't find one.
I have an app with 3 microservices (Spring Boot) and 1 config server (Spring Cloud Config). I'm using a private GitHub repository for storing the config files, and this is my application.properties file for the config server:
spring.profiles.active=git
spring.cloud.config.server.git.uri=https://github.com/username/microservices-config.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.default-label=master
spring.cloud.config.server.git.username=${GIT_USERNAME}
spring.cloud.config.server.git.password=${GIT_ACCESS_TOKEN}
I have a Dockerfile based on which I have created a Docker image for the config server (with no problems). I created a docker-compose.yml that I use to create and run the containers, but it fails because of an exception in my cloud config app. The exception is:
org.eclipse.jgit.api.errors.TransportException: https://github.com/username/microservices-config.git: not authorized
Which basically means that my environment variables GIT_USERNAME and GIT_ACCESS_TOKEN (which I set in IntelliJ's "Edit Configurations" and use in application.properties) are not available for the config server to use in a container.
The question is: do I need to somehow add those environment variables to the .jar, to the Docker image, or to the Docker container? I'm not sure how to make them available for the config server to use in a container.
Any help or explanation is welcomed :)
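For what it's worth, the usual way to hand such variables to a container is the environment section of docker-compose.yml; a minimal sketch (the service name config-server and image name are hypothetical):

version: "3"
services:
  config-server:
    image: my-config-server:latest   # placeholder image name
    environment:
      # pass the values through from the shell that runs docker-compose
      - GIT_USERNAME=${GIT_USERNAME}
      - GIT_ACCESS_TOKEN=${GIT_ACCESS_TOKEN}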
I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully using the command
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to a Docker repository
Created a Fargate cluster and task definition
Created a task in a running state based on the above definition.
Now when I try to access the endpoint it's giving 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (another repo) and it ran fine with the same security group.
Now, I believe that something is wrong with the Spring Boot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.
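One thing worth double-checking (an observation about the posted Dockerfile, not a confirmed diagnosis): the image EXPOSEs port 80, while a Spring Boot app listens on 8080 by default, which is also the port the local docker run mapped. A sketch with the ports aligned:

# Sketch: EXPOSE the port Spring Boot actually listens on (8080 by default).
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]

The Fargate task definition's container port would then also need to reference 8080 for the endpoint to be reachable.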
I'm new to k8s setup, and I wanted to know the best way to deploy the services in production. Below are a few ways I could think of; can you guide me in the right direction?
1) Deploy each *.war file into an Apache Tomcat Docker container, and use the service discovery mechanism of k8s.
2) Run each application normally using "java -jar *.war" in pods and expose their ports using port binding.
Thanks.
The canonical way to deploy applications to Kubernetes is as follows:
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
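A minimal sketch of the Deployment and Service from the last two steps (the app name, image name, and port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app       # routes to the Pods labeled above
  ports:
  - port: 8080
    targetPort: 8080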
I would suggest using the embedded Tomcat server in a Spring Boot .jar file to deploy your microservices. Below is the answer of @weibeld, which I also use to deploy my Spring Boot apps.
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
You can use Jib to easily build a distroless image. The container image can be built using the Jib Maven plugin.
mvn compile jib:build -Djib.to.image=MY_REGISTRY_IMAGE:MY_TAG -Djib.to.auth.username=USER -Djib.to.auth.password=PASSWORD
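For the jib:build goal above to resolve, the Jib plugin needs to be declared in the project's pom.xml; a minimal excerpt (the version shown is just an example):

<!-- pom.xml excerpt: registers the Jib Maven plugin -->
<build>
  <plugins>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>2.5.2</version>
    </plugin>
  </plugins>
</build>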
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Create your deployment .yml file and adjust the deployment parameters in it as needed.
kubectl create deployment my-springboot-app --image MY_REGISTRY_IMAGE:MY_TAG --dry-run -o yaml > my-springboot-app-deployment.yml
Create the deployment:
kubectl apply -f my-springboot-app-deployment.yml
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
kubectl expose deployment my-springboot-app --port=8080 --target-port=8080 --dry-run -o yaml > my-springboot-app-service.yml
kubectl apply -f my-springboot-app-service.yml
I registered the sink first as follows:
app register --name mysink --type sink --uri file:///Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar
Then I created a stream
stream create --definition “:myKafkaTopic > mysink" --name myStreamName --deploy
I got the error
Command failed org.springframework.cloud.dataflow.rest.client.DataFlowClientException: File
/Users/swatikaushik/Downloads/kafkaStreamDemo/target/kafkaStreamDemo-0.0.1-SNAPSHOT.jar must exist
While the jar exists!!
I've followed the Maven local repository mounting approach, using Docker Compose; hope this helps:
Maven:
mvn clean install
Set up your environment variables:
$Env:DATAFLOW_VERSION="2.5.1.RELEASE"
$Env:SKIPPER_VERSION="2.4.1.RELEASE"
$Env:HOST_MOUNT_PATH="C:\Users\yourUserName\.m2"
$Env:DOCKER_MOUNT_PATH="/root/.m2/"
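(The lines above are PowerShell syntax; on macOS/Linux the equivalents would be:)

export DATAFLOW_VERSION=2.5.1.RELEASE
export SKIPPER_VERSION=2.4.1.RELEASE
export HOST_MOUNT_PATH=~/.m2
export DOCKER_MOUNT_PATH=/root/.m2/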
Restart/start the containers:
docker-compose down
docker-compose up
Register your apps:
app register --type sink --name mysink --uri maven://groupId:artifactId:version
(See the docs on registering apps.)
File permission is one thing - please double check as advised.
A few other ideas:
1) Run app info sink:mysink. If the JAR is actually available, it should return a list of Boot/whitelisted properties of the application.
2) Run the JAR standalone. Make sure it actually starts via java -jar.....
3) The stream definition appears to include a special character (“:myKafkaTopic > mysink" instead of ":myKafkaTopic > mysink" - notice the “ character); it would fail in the Shell, but it looks like you were able to deploy it. A full stack trace would help.
We just had the same error as described above.
We had mounted the folder with the jar files into the Skipper container.
The solution was that we had to mount the jars into the Data Flow server's Docker container as well.
Skipper deploys the jar, but the Data Flow server registers it.
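A sketch of what that double mount can look like in docker-compose.yml, reusing the HOST_MOUNT_PATH/DOCKER_MOUNT_PATH variables from the earlier answer (the service names are assumptions and should match your compose file):

services:
  dataflow-server:
    volumes:
      # the same Maven repository mount the Skipper container already has
      - ${HOST_MOUNT_PATH:-~/.m2}:${DOCKER_MOUNT_PATH:-/root/.m2}
  skipper-server:
    volumes:
      - ${HOST_MOUNT_PATH:-~/.m2}:${DOCKER_MOUNT_PATH:-/root/.m2}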
Is it possible to programmatically set a different profile for every instance of a Spring Boot application deployed in Cloud Foundry, using, for example, ConfigurableEnvironment and the Cloud Foundry instance index?
I would suggest that you look into using tasks.
https://docs.cloudfoundry.org/devguide/using-tasks.html
Here's roughly how this would work.
Run cf push to deploy your application to CF. If you do not actually have an application to run, that is OK. You just need to push the app and start it once so that it stages and creates a droplet. After that, you can run cf stop to shutdown the instance (note: cf push --no-start won't work, because the app needs to stage at least once).
Run cf run-task <app> <command>. This is where you kick off your batch jobs. The <command> argument is going to be the full command to run your batch job. In this, you can include an argument to indicate the profiles that should be used. Ex: --spring.profiles.active=dev,hsqldb.
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-profiles.html
You need to use the full or relative path to the java executable because the Java buildpack does not put it onto the path. If you wanted to run a task that printed the version of the JVM, you'd use this command: '.java-buildpack/open_jdk_jre/bin/java -version'.
Ex: cf run-task <app> '.java-buildpack/open_jdk_jre/bin/java -version'
See this SO post, though, for drawbacks of hardcoding the path to the Java executable in your command. My suggestion would be to take the command that's listed when you run cf push and modify it to your needs.
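Putting the pieces together, a profile-selecting task launch might look like the sketch below; the app name, the -cp . org.springframework.boot.loader.JarLauncher entry point (which assumes the droplet was staged from an exploded Spring Boot jar), and the profile names are all assumptions for illustration:

cf run-task my-batch-app '.java-buildpack/open_jdk_jre/bin/java -cp . org.springframework.boot.loader.JarLauncher --spring.profiles.active=dev,hsqldb'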