I have a Docker container running a single Spring Cloud Data Flow application. Up until Data Flow version 2.7.2, I was able to pass all the database URL/username/password properties to the application by defining them as environment variables in the container.
The application has application.yml defined to fetch the properties from variables, like this:
appname:
  datasource:
    url: ${APP_DB_URL:jdbc:sqlserver://127.0.0.1:1433}
    password: ${APP_DB_PASSWORD:D3faultP4ss!}
    username: ${APP_DB_USER:sa}
As said previously, these variables could be set simply by defining them as environment variables in the container; starting from Data Flow 2.8 upwards this method no longer seems to work.
The setup is configured to automatically register the application with Spring Cloud Data Flow after startup, with the following logic:
1st, delete the old task definition:
curl "http://localhost:9393/tasks/definitions/${APP_NAME}?cleanup=false" -o /dev/null -s -w "%{http_code}" -X DELETE
2nd, unregister all old applications:
curl "http://localhost:9393/apps" -o /dev/null -s -w "%{http_code}" -X DELETE
3rd, register the application:
curl "http://localhost:9393/apps/task/${APP_NAME}/${APP_VERSION}" -o /dev/null -s -w "%{http_code}" -X POST -d "uri=file%3A%2F%2Fapp%2F${APP_JAR}&force=true"
4th, register the task definition:
curl "http://localhost:9393/tasks/definitions" -o /dev/null -s -w "%{http_code}" -X POST -d "name=${APP_NAME}&definition=${APP_NAME}"
APP_NAME, APP_VERSION and APP_JAR are environment variables put into the container in the build phase.
Then, on a daily schedule, the task is launched with the following call:
curl "http://localhost:9393/tasks/executions" -o /dev/null -s -w "%{http_code}" -X POST -d "name=${APP_NAME}&properties=deployer.${APP_NAME}.local.workingDirectoriesRoot%3D%2Fapp%2Flogs&arguments=--date%3D$(date --date='yesterday' +\%Y-\%m-\%d)+--range%3D${range}+--dataSource%3DApplication+SourceProperty"
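As an aside, the backslash-escaped date expression in that call simply expands to yesterday's date in ISO form; with GNU date:

```shell
# GNU date: yesterday's date formatted as YYYY-MM-DD,
# the value passed in the --date argument above
yesterday=$(date --date='yesterday' +%Y-%m-%d)
echo "$yesterday"
```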
As said, this worked until version 2.7.2; now, when trying to update to 2.8 or newer, the application connects to the default database URL defined in application.yml instead. How can I override these properties using environment variables?
And obviously, if there is a better way to do the auto-registration of the app, all tips are appreciated.
Spring Cloud Data Flow 2.7.x used Spring Boot 2.3
Spring Cloud Data Flow 2.8.x used Spring Boot 2.4
There are changes to property names, and the datasource initialisation will override your configuration unless you disable autoconfiguration of datasources.
We suggest you move to the latest Spring Cloud Data Flow and update your applications to Spring Boot 2.7.x.
Then you provide properties like:
app.<appname>.spring.datasource.url etc.
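For example, the daily launch call from the question could carry the override in its properties field. A sketch, assuming SCDF 2.8+; the task name, JDBC URL and username below are placeholders:

```shell
# Placeholders; substitute your real task name and JDBC URL
APP_NAME="mytask"
DB_URL="jdbc:sqlserver://db.example.com:1433"
# Per-app property keys use the app.<appname>. prefix
PROPS="app.${APP_NAME}.spring.datasource.url=${DB_URL},app.${APP_NAME}.spring.datasource.username=sa"
# --data-urlencode takes care of escaping '=' and ':' inside the values
curl "http://localhost:9393/tasks/executions" -o /dev/null -s -w "%{http_code}" -X POST \
  --data-urlencode "name=${APP_NAME}" \
  --data-urlencode "properties=${PROPS}"
```

The password would be passed the same way via app.${APP_NAME}.spring.datasource.password.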
We have a problem setting up AWS SigV4 authentication and connecting to an AWS AMP workspace via Docker images.
TAG: grafana/grafana:7.4.5
The main problem is that the SigV4 configuration screen does not appear in the UI.
Installing grafana 7.4.5 locally via the standalone Linux binaries works: just by setting the environment variables
export AWS_SDK_LOAD_CONFIG=true
export GF_AUTH_SIGV4_AUTH_ENABLED=true
the configuration screen appears.
Connecting to AMP and querying data via the corresponding IAM instance role works flawlessly.
Doing the same in the Docker image via ENV variables does NOT work.
When using grafana/grafana:sigv4-web-identity it works, but it seems to me that this is just a "test image".
How do I configure the default Grafana image to enable SigV4 authentication?
It works for me:
$ docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
-e "AWS_SDK_LOAD_CONFIG=true" \
grafana/grafana:7.4.5
You didn't provide a minimal reproducible example, so it's hard to say what the problem is in your case.
Use variable GF_AWS_SDK_LOAD_CONFIG instead of AWS_SDK_LOAD_CONFIG.
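Putting the two answers together, the stock image would then be started like this (untested sketch; same image tag as in the question):

```shell
docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
  -e "GF_AWS_SDK_LOAD_CONFIG=true" \
  grafana/grafana:7.4.5
```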
I have created a Docker image for my Spring Boot app "payalbnsl/shoppingapp".
First I start the container for MongoDB:
docker run -d -p 27017:27017 --name mongodb --network n1 -e MONGO_INITDB_ROOT_USERNAME=mongodb -e MONGO_INITDB_ROOT_PASSWORD=mongodb -e MONGO_INITDB_DATABASE=levent -e MONGO_USERNAME=mongodb -e MONGO_PASSWORD=mongodb mongo --auth
Then I run the image:
docker run -d --network n1 -p 9000:9000 payalbnsl/shoppingapp:1
When I try to access the URL http://13.233.154.209:9000, it redirects to http://13.233.154.209:9000/products, like it should as coded, but I keep getting a 404 error and no page is displayed.
The logs are exactly the same when running locally and when running on EC2 using Docker, but locally I can see the web page.
While running on EC2 I only get the 404 page: though I just entered http://13.233.154.209:9000, it redirects to http://13.233.154.209:9000/products in production, but I don't see any web page.
I have opened up port 9000 for the EC2 instance and enabled the firewall on this port.
The application has successfully connected to Mongo; both containers are on the bridge network n1. Even if I run the application locally using Docker it gives the same 404; on any instance, I am getting a 404.
I cannot understand why there is a 404. I am using JSP for the frontend.
Also, for me curl -i --unix-socket /var/run/docker.sock http://localhost/containers/json works, while curl -i --unix-socket /var/run/docker.sock http://containers/json gives a 404. Not sure what that means exactly.
Any help would be really appreciated!
Thanks,
Payal
It was a Spring Boot JSP problem. I had placed the JSPs in the webapp folder, and when I created an executable jar file the JSPs were not copied into it. Hence I created a war file and redeployed, and it worked. Resolved!
I am using Kubernetes to deploy and trace data from an application using Zipkin. I am facing an issue replacing MySQL with Elasticsearch, since I am not able to get the idea. On the command line the replacement is done with STORAGE_TYPE="elasticsearch", but how can that be done through Kubernetes? I am able to run the container from the Docker image, but is there any way to set this through a Deployment?
You can define all the needed parameters via ENV options.
Here is a command for running Zipkin in Docker:
docker run -d -p 9411:9411 -e STORAGE_TYPE=elasticsearch -e ES_HOSTS=http://172.17.0.3:9200 -e ES_USERNAME=elastic -e ES_PASSWORD=changeme openzipkin/zipkin
All these parameters can be defined in a Deployment (see "Expose Pod Information to Containers Through Environment Variables" in the Kubernetes docs).
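A minimal Deployment sketch along those lines; the Elasticsearch host and credentials are placeholders mirroring the docker run command above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
        - name: zipkin
          image: openzipkin/zipkin
          ports:
            - containerPort: 9411
          env:                     # same parameters as the -e flags above
            - name: STORAGE_TYPE
              value: elasticsearch
            - name: ES_HOSTS
              value: http://elasticsearch:9200
            - name: ES_USERNAME
              value: elastic
            - name: ES_PASSWORD
              value: changeme
```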
I have a simple Spring Boot application, a Spring Cloud Config Server, that is going to run on OpenShift 3.1. The problem is that when running on this platform, the application steadily increases its memory usage until it uses up the configured container memory limit (512MB), eventually crashing and making OpenShift restart it.
We configure this application on OpenShift with a Dockerfile. Deploying it directly in a plain Docker container, the application behaves normally: I load-tested it with JMeter and its memory consumption stays at 256MB, no matter the load.
Could this be an OpenShift bug? Is there any solution for this?
Dockerfile:
FROM java:8
RUN curl -H 'Cache-Control: no-cache' -f "http://${APPLICATION-URL}" -o ./app.jar
EXPOSE 8888
ENTRYPOINT java -jar ./app.jar
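This is usually not an OpenShift bug: a Java 8 JVM does not read the container's cgroup memory limit by default, so it sizes its heap from the host's total memory and grows past the 512MB cap. One approach is to cap the JVM explicitly in the ENTRYPOINT; a sketch, where the 256m/128m figures are assumptions based on the JMeter result above:

```dockerfile
FROM java:8
RUN curl -H 'Cache-Control: no-cache' -f "http://${APPLICATION-URL}" -o ./app.jar
EXPOSE 8888
# Cap heap and metaspace so the process stays under the 512MB container limit
ENTRYPOINT java -Xmx256m -XX:MaxMetaspaceSize=128m -jar ./app.jar
```

Alternatively, on Java 8u131+ the flags -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap make the JVM respect the cgroup limit instead of a fixed -Xmx.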
A Spring Boot service running at localhost:7090/someurl accepts parameters Var1 and Var2 in POST requests. How can I test this from the CentOS 7 terminal?
When I type localhost:7090/someurl?Var1=something&Var2=anotherthing into the web browser, the logs show that an unsupported GET request was made. How do I mimic a POST request to test this service? What do I type into the CentOS 7 terminal?
If it is a normal bash shell, you can always use curl (you may need to install it), for example:
curl -X POST -d "Var1=VALUE1&Var2=VALUE2" localhost:7090/someurl