I'm looking to start Spring XD in distributed mode (more specifically, deploying it with BOSH). How does the admin component communicate with the module containers?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output to either console, and no output to the logs.
If you are using the xd-container script, then redis.properties is expected to be under "XD_HOME/config", where XD_HOME points to the base directory containing the bin, config, lib & modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
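For example, a minimal redis.properties pointing the container at the box where Redis actually runs might look like this (key names as in the default file shipped with XD; the host value is a placeholder):

redis.hostname={redis box IP}
redis.port=6379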
Related
We have max-tests set to 1 so that each test case should be getting a brand new container. However, we are seeing that used containers are not being destroyed consistently.
So my question is: where can I see any errors from Zalenium itself?
I see there are options for Logback config, but what is the default location for any log file?
This is a plain vanilla install on Ubuntu.
I'm assuming the Zalenium container creates logs inside that container; I'm just asking where the default location for these is.
Thanks
I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.
Said yaml file is mounted as a k8s secret file at etc/config/springconfig.yaml
While my Spring Boot app is running, I can still get a shell and view the yaml file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
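A minimal sketch of that approach in Java (the path comes from the question; note that Kubernetes often mounts secret volumes read-only, in which case the delete will fail):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SecretLoader {
    // Read the key once at startup, then remove the mounted file.
    public static String loadAndDelete() throws java.io.IOException {
        Path secretPath = Paths.get("/etc/config/springconfig.yaml");
        String key = new String(Files.readAllBytes(secretPath), StandardCharsets.UTF_8);
        Files.deleteIfExists(secretPath); // throws on a read-only mount
        return key;
    }
}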
OR,
You can set those properties via --env-file, and your app would then read them from the environment. But if someone can still log in to that container, they can read environment variables too.
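For example (the file name and variable are hypothetical; Spring Boot's relaxed binding maps ENCRYPTION_KEY to the encryption.key property):

# secrets.env contains a line like: ENCRYPTION_KEY=changeme
docker run --env-file ./secrets.env my-spring-app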
OR,
Set those properties as JVM system properties using -D, rather than in the system environment; Spring can read properties from the JVM environment too.
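For example (the property name is hypothetical):

java -Dencryption.key=changeme -jar app.jar

Spring resolves JVM system properties through its standard Environment, so @Value("${encryption.key}") would pick this up.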
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to the worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone who only has the ability to patch pods/deployments and so on can potentially read all the secrets in the namespace. There is no way to escape that.
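To illustrate, a sketch of the kind of pod that makes this possible (all names here are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /stolen/* && sleep 3600"]
    volumeMounts:
    - name: stolen
      mountPath: /stolen
  volumes:
  - name: stolen
    secret:
      secretName: springconfig   # any secret in the namespace can be mounted like this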
For me that's the biggest security flaw in Kubernetes, and it's why you must be very careful about granting access to create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out the ability to create pods.
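A sketch of a namespace-scoped Role along those lines, granting read access to pods and configmaps but deliberately not to secrets (names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: limited-reader
  namespace: myapp
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]   # note: no "secrets"
  verbs: ["get", "list", "watch"]     # note: no "create" or "patch"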
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to kill the container immediately, so the secret can never be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent direct Docker daemon access.
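For reference, Falco's stock ruleset already contains a rule for this case; condensed, it looks roughly like the following (treat the details as approximate):

- rule: Terminal shell in container
  desc: A shell was spawned by a program in a container with an attached terminal
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: A shell was spawned in a container (user=%user.name container=%container.info)
  priority: NOTICE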
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can then unset that variable, rendering the secret inaccessible from then on.
I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments, each with a JBoss server deployed.
I currently have a log file in the web server environment that captures errors, and I want the application server to write to the same file as well.
Please suggest an approach.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, your file will get corrupted unless you use file locking. Even with locking, your throughput will decrease significantly, and locking isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
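For example, both servers could ship their log events to a single aggregator (e.g. Logstash) over a socket using Log4j 2's Socket appender; a minimal log4j2.xml sketch (host and port are placeholders):

<Configuration status="warn">
  <Appenders>
    <!-- Sends each log event to the central aggregator over TCP -->
    <Socket name="Remote" host="log-aggregator.example.com" port="4560">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Remote"/>
    </Root>
  </Loggers>
</Configuration>

With this in place on both servers, the aggregator owns the single output file, so no cross-process locking is needed.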
I'm using spring-boot and spring-boot-yarn to submit yarn applications to a cluster.
My use-case is close to the one described in this tutorial https://github.com/spring-guides/gs-yarn-basic.
The only difference is that my 'client' is supposed to be a web application and submit the yarn jobs when web requests are made.
The problem I have is that web requests to the 'client' web-application provide parameters I need to pass down to the yarn job.
In the above tutorial, parameters are passed as command line arguments to the appmaster / container, specified in application.yml. In my case this approach does not work, since I have a different set of parameters for each yarn job.
Is there a way to pass dynamic parameters to yarn jobs without hard-coding them in application.yml?
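For reference, my client currently submits the application much like the tutorial's does, just triggered from a controller. A rough sketch (the YarnClient bean is what spring-yarn-boot auto-configures; the controller and parameter names are my own):

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.yarn.client.YarnClient;

@RestController
public class SubmitController {

    @Autowired
    private YarnClient yarnClient; // auto-configured by spring-yarn-boot

    @RequestMapping("/jobs")
    public String submit(@RequestParam("input") String input) {
        // Submits the app exactly as configured in application.yml --
        // there is no obvious hook here for passing "input" to the container.
        ApplicationId applicationId = yarnClient.submitApplication();
        return applicationId.toString();
    }
}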
The original idea was to prevent "rogue" users or applications from passing properties that would then automatically end up in command-line options, potentially doing harm within a Hadoop cluster.
It's worth checking my answer to Spring Boot Yarn - Passing Command line arguments to see if it is what you want.
Having said that, you are not the first person to ask this or to "complain" that it is too difficult or unclear how to do it. We're going to make this much easier in future releases, mostly because it just seems to be what users want to do.
I'm trying to deploy my Spring Boot based application to a CloudControl container.
I've added the mysqls.free add-on and configured it through my application.properties:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.max-active=1
spring.datasource.max-idle=1
spring.datasource.min-idle=1
spring.datasource.initial-size=1
spring.datasource.url=jdbc:mysql://${MYSQLS_HOSTNAME}:${MYSQLS_PORT}/${MYSQLS_DATABASE}
spring.datasource.username=${MYSQLS_USERNAME}
spring.datasource.password=${MYSQLS_PASSWORD}
On my local development system, everything works perfectly fine, but on the CloudControl container, the app won't start.
I added the stack trace here. I've been trying to solve the problem for days, but I'm not able to solve it on my own.
Spring apps consume a lot of memory, and the mysqls.free add-on allows only a limited number of parallel connections. However, your stack trace doesn't show either of these problems. It's hard to solve this issue without more context, such as logs or environment settings.
The following commands may help:
cctrlapp app_name/default log error # shows startup log
cctrlapp app_name/default addon.creds # shows DB credentials
I've uploaded some spring-boot example code at https://github.com/cloudControl/spring-boot-example-app which I've tested on cloudControl today.
Please take a look at the configuration there. If you want to deploy it, make sure your container has a memory size >= 768 MB.
cctrlapp app_name/default deploy --memory 768MB
If you still have issues, please contact cloudControl support to help you.