I'm trying to combine suggestions on how to use SSL with OpenShift:
https://blog.openshift.com/openshift-demo-part-13-using-ssl/
with those on how to use SSL with MQ:
Spring Configuration for JMS (Websphere MQ - SSL, Tomcat, JNDI, Non IBM JRE)
So I managed to modify my Spring Boot Camel app to move from a connection via an SVRCONN MQ channel without SSL to one that uses SSL,
by adding the SSLCipherSuite property to the com.ibm.mq.jms.MQConnectionFactory bean, and by adding these VM arguments to the Run Configuration
(as described in the second linked document):
-Djavax.net.ssl.trustStore=C:\path-to-keystore\key.jks
-Djavax.net.ssl.trustStorePassword=topsecret
-Djavax.net.ssl.keyStore=C:\path-to-keystore\key.jks
-Djavax.net.ssl.keyStorePassword=topsecret
-Dcom.ibm.mq.cfg.useIBMCipherMappings=false
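For context, the bean change amounts to roughly the following sketch; the host, port, queue manager, channel and cipher suite values are placeholders, not my real configuration:

import javax.jms.JMSException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

@Configuration
public class MqSslConfig {

    @Bean
    public MQConnectionFactory mqConnectionFactory() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setHostName("mq.example.com");   // placeholder host
        cf.setPort(1414);                   // placeholder port
        cf.setQueueManager("QM1");          // placeholder queue manager
        cf.setChannel("SSL.SVRCONN");       // placeholder SVRCONN channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // must match the CipherSpec configured on the server connection channel
        cf.setSSLCipherSuite("TLS_RSA_WITH_AES_128_CBC_SHA256");
        return cf;
    }
}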
And it works fine locally on the embedded Tomcat server. However, I need to deploy it to OpenShift, so my first impulse was
to add the same VM arguments to the ones I already use for OpenShift deployment, that is, these:
-Dkubernetes.master=
-Dkubernetes.namespace=
-Dkubernetes.auth.basic.username=
-Dkubernetes.auth.basic.password=
-Dkubernetes.trust.certificates=
-Dfabric8.mode=openshift
but it obviously doesn't work, for example because the path to the keystore is not the same inside the container. So I investigated a bit,
and learned that I have to use secrets, which can be defined via the CLI ("oc secrets new" command) or via the OpenShift console,
but I don't understand how exactly to proceed.
Do I have to add parameters to the image, or environment variables to the deployment config, or something else?
Somehow it has to reference the defined secret, and does it have to be named by changing each dot in the property name to an underscore?
So, for example, if I issue:
oc secrets new my-key-jks key.jks
then I have to "Add Value from Config Map or Secret":
JAVAX_NET_SSL_TRUSTSTORE my-key-jks key.jks
and "Add Value":
COM_IBM_MQ_CFG_USEIBMCIPHERMAPPINGS false?
I tried that, but it doesn't work. I added the values to the deployment config, so that I get this excerpt:
"spec": {
"containers": [
{
"env": [
{
"name": "KUBERNETES_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
},
{
"name": "JAVAX_NET_SSL_TRUSTSTORE",
"valueFrom": {
"secretKeyRef": {
"key": "key.jks",
"name": "my-key-jks"
}
}
},
when I do:
oc get dc app_name -o json
I have also created a service account, assigned it as an admin of the project, and allowed it to use the newly created secret. I did this through the OpenShift console, so I don't have the oc CLI commands at hand right now.
This is also somewhat relevant (but it doesn't help me much):
https://github.com/openshift/openshift-docs/issues/699
After a build, the pod's status becomes "Crash Loop Back-off", and "The logs are no longer available or could not be loaded." Without SSL, the same app runs fine on OpenShift.
IMHO you are misinterpreting some of the settings you specify here.
1.
The VM arguments after "-Dkubernetes.master=" that you specify here are, I assume, meant to be given to the fabric8 Maven plugin which you use for deployment. Right?
The parameters about authentication/certificates there are ONLY for the communication with Kubernetes and NOT intended for giving keystore data to your application. So I think they are unrelated.
Instead you need to ensure that inside your container your app gets started with the same parameters that you use for local execution. Of course, you then have to change the parameter values to wherever the respective data is available inside your container.
2.
Secrets are a tool to add sensitive data to your deployment that you don't want baked into your application image. So, for example, your keystores and the keystore passwords qualify to be injected via a secret.
An alternative to providing secret data as environment variables, as you tried, is to mount the secret into the filesystem, which makes its data available as files. As your application needs the JKS as a file, you could do the following.
In the web console, on your deployment, use the link "Add config files" under the section "Volumes".
Select the secret "my-key-jks" created before as the "source".
Specify a path where the secret should be mounted inside your container, for example "/secret". Then click "Add".
Your JKS will then be available inside your container under the path "/secret/key.jks", so your application's parameter can point to this path.
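For reference, a rough CLI equivalent of those console steps; the volume name is arbitrary, and JAVA_OPTIONS is an assumption that holds for the fabric8/Java S2I base images (adjust to however your image passes JVM arguments):

# mount the secret as files under /secret
oc set volume dc/app_name --add --name=mq-keystore \
    --type=secret --secret-name=my-key-jks --mount-path=/secret

# point the JVM at the mounted keystore
oc set env dc/app_name JAVA_OPTIONS="-Djavax.net.ssl.trustStore=/secret/key.jks \
    -Djavax.net.ssl.trustStorePassword=topsecret \
    -Djavax.net.ssl.keyStore=/secret/key.jks \
    -Djavax.net.ssl.keyStorePassword=topsecret \
    -Dcom.ibm.mq.cfg.useIBMCipherMappings=false"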
Related
We are using minio to store release files.
We are using the Go CDK library to convert s3:// URLs to HTTP.
The problem is that when I try to execute a release I get this error: NoCredentialProviders: no valid providers in chain. Deprecated.
This is the URL we are using: "s3://integration-test-bucket?endpoint=minio.9001&region=us-west-2". Is there any way to pass credentials in the URL itself? In this case it would not be sensitive data, as we are running it locally.
Note: I'm using a docker-compose YML with the default environment for minio_access_key and minio_secret_key (minioadmin & minioadmin).
I tried several kinds of query parameters in the URL to pass credentials. The goal is to not touch the Go CDK library itself, but to pass credentials through the URL, pass dummy credentials, or avoid the credentials check.
You can provide the following environment variables to the service/container that tries to connect to minio:
AWS_ACCESS_KEY_ID=${MINIO_USER}
AWS_SECRET_ACCESS_KEY=${MINIO_PASSWORD}
AWS_REGION=${MINIO_REGION_NAME}
The library should pick them up during container startup and use them when executing requests.
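In the compose file that could look roughly like the excerpt below; the service name and image are placeholders, and the values are the minio defaults mentioned in the question:

# docker-compose.yml excerpt (service name and image are hypothetical)
services:
  release-service:
    image: example/release-service
    environment:
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: minioadmin
      AWS_REGION: us-west-2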
I have a Spring Boot application using embedded Keycloak.
What I am looking for is a way to load the Keycloak server from it, make changes to the configuration, add users, and then export this new version of the Keycloak setup.
This question got an answer on how to do a partial export, but I can't find anything in the documentation of the Keycloak Admin REST API on how to do a full export.
With the standalone Keycloak server I would be able to simply use the CLI and type
-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=/tmp/keycloak-dump.json
But this is the embedded version.
This is most likely trivial, since I know for a fact that newly created users are stored somewhere.
I added a user, and restarting the application doesn't remove it, so Keycloak persists it somehow. But the JSON files I use for the Keycloak server and realm setup haven't changed.
So, with no access to a CLI without a standalone server, and no REST endpoint for a full export: how do I load the server, make some changes, and generate a new JSON file via export that I can simply put into my Spring app instead?
You can make a full export with the following command (if the Spring Boot app runs in a Docker container):
[podman | docker] exec -it <pod_name> /opt/jboss/keycloak/bin/standalone.sh
    -Djboss.socket.binding.port-offset=<integer_value>    # Docker: an offset of at least 100 is recommended
    -Dkeycloak.migration.action=[export | import]
    -Dkeycloak.migration.provider=[singleFile | dir]
    -Dkeycloak.migration.dir=<DIR TO EXPORT TO>           # only if .migration.provider=dir
    -Dkeycloak.migration.realmName=<REALM_NAME_TO_EXPORT>
    -Dkeycloak.migration.usersExportStrategy=[DIFFERENT_FILES | SKIP | REALM_FILE | SAME_FILE]
    -Dkeycloak.migration.usersPerFile=<integer_value>     # only if .usersExportStrategy=DIFFERENT_FILES
    -Dkeycloak.migration.file=<FILE TO EXPORT TO>
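As a concrete example, a single-file export of one realm could look like this (the container name, realm name and output path are placeholders):

docker exec -it keycloak /opt/jboss/keycloak/bin/standalone.sh \
    -Djboss.socket.binding.port-offset=100 \
    -Dkeycloak.migration.action=export \
    -Dkeycloak.migration.provider=singleFile \
    -Dkeycloak.migration.realmName=myrealm \
    -Dkeycloak.migration.usersExportStrategy=REALM_FILE \
    -Dkeycloak.migration.file=/tmp/keycloak-dump.json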
I am creating an open source keycloak example with documentation; you can see a full guide about import/export in my company's GitHub.
We currently set the path of a properties file, which contains the secret/access key, in the Credentials File property of AWSCredentialsProviderControllerService. The issue is that we have to change this path between prod and non-prod each time we run the NiFi workflow. We are trying to come up with a setup where the Credentials File path needs no configuration change, so that the access/secret key is read correctly regardless of prod or non-prod. Since the Credentials File property doesn't support NiFi Expression Language, we tried to make use of the Access Key/Secret Key properties with ${ENV:equalsIgnoreCase("prod"):ifElse(${ACCESS_PROD},${ACCESS_NONPROD})}. The issue we are facing is that we are not able to store these access/secret keys in the registry, hence we are unable to implement this change. Currently we use one properties file for non-prod NiFi and a second properties file for prod; in this setup we need to manually change the Credentials File path when switching between prod and non-prod. Is there any way to read the access/secret key in NiFi regardless of the environment, so that this works seamlessly without changing the path of the credentials file?
The processor that uses the AWSCredentialsProviderControllerService does not support parameters or variables, but the service's "Credentials File" property does support Parameter Context entries; make use of this for your solution.
Example:
Trigger something --> RouteOnAttribute --> if prod (run ExecuteStreamCommand and change the Parameter Context value to point to the prod credentials file), else if dev (run ExecuteStreamCommand and change the Parameter Context value to point to the dev credentials file) --> then run your AWS processor.
You can use the toolkit client to edit the parameter context, or even the nipyapi Python module. It will not be fast, though.
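For instance, updating the parameter from a script with the NiFi Toolkit CLI could look roughly like this; the context id, parameter name and file path are placeholders, not values from your flow:

# change the credentials-file parameter to the prod file
./bin/cli.sh nifi set-param \
    -u https://nifi.example.com:8443 \
    -pcid <parameter-context-id> \
    -pn credential_file_path \
    -pv /opt/nifi/conf/prod-credentials.properties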
I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a k8s secret file at /etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still sh into the container and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, given that your app doesn't need to read from it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to that container, they can read the environment variables too.
OR,
Set those properties in the JVM rather than the system environment, by using -D. Spring can read properties from the JVM environment too.
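A minimal sketch of the first option (delete after startup) in Spring Boot; the path is the one from the question, and note the assumption that the mount is writable, whereas Kubernetes secret volumes are typically mounted read-only:

import java.nio.file.Files;
import java.nio.file.Paths;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class SecretFileCleaner {

    // Deletes the mounted secret file once the application is fully started,
    // assuming nothing needs to re-read it afterwards.
    @EventListener(ApplicationReadyEvent.class)
    public void deleteSecretFile() throws Exception {
        Files.deleteIfExists(Paths.get("/etc/config/springconfig.yaml"));
    }
}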
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to the worker nodes and no one can use the Docker daemon directly, there is still a possibility to read the secret.
If anyone in the namespace has access to create pods (which comes with the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod that mounts the secret and simply read it. Even someone who only has the ability to patch pods/deployments and so on can potentially read all secrets in the namespace. There is no way to escape that.
For me, that's the biggest security flaw in Kubernetes. And that's why you must be very careful about granting access to create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid giving out the pod-creation capability.
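In the spirit of that advice, a namespace Role that grants only read access and deliberately leaves out secrets might look roughly like this (the name, namespace and resource list are illustrative):

# read-only Role with no access to secrets and no pod-creation rights
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-viewer
  namespace: my-app
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "deployments"]   # "secrets" intentionally omitted
    verbs: ["get", "list", "watch"]                  # no "create", no "patch"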
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container, so that reading the secret cannot occur; Kubernetes will then restart the container to avoid service interruption.
Note that you must still forbid access to the node itself to avoid Docker daemon access.
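To illustrate, a Falco rule along the lines of the stock "Terminal shell in container" rule looks roughly like this; the macros and fields used here are standard Falco, but treat this as a sketch rather than a drop-in rule:

- rule: Terminal shell in container
  desc: A shell was spawned with an attached terminal inside a container
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: "Shell spawned in a container (user=%user.name container=%container.id image=%container.image.repository)"
  priority: WARNING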
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can then unset that variable, rendering the secret inaccessible from then on.
I am trying to use the Google Cloud Natural Language API.
I already have a running Google Cloud account.
I enabled the Cloud Natural Language API service, generated service account keys, and downloaded them locally.
I am using Google's default sample program:
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;

// the client is AutoCloseable, so close it when done
try (LanguageServiceClient language = LanguageServiceClient.create()) {
    // The text to analyze
    String text = "My stay at this hotel was not so good";
    Document doc = Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build();
    // Detects the sentiment of the text
    Sentiment sentiment = language.analyzeSentiment(doc).getDocumentSentiment();
    System.out.printf("Text: %s%n", text);
    System.out.printf("Sentiment: %s, %s%n", sentiment.getScore(), sentiment.getMagnitude());
}
I am using Eclipse as my IDE on a Mac.
When I run the application I get this error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I even added GOOGLE_APPLICATION_CREDENTIALS as an export in the Terminal, and printenv shows the path like this:
GOOGLE_APPLICATION_CREDENTIALS=/Users/temp/Downloads/Sentiment-0e556940c1d8.json
Still it wasn't working. With some trial and error I found out that in Eclipse we can configure the run:
there I added the environment variable, and after that the program works fine.
Now my problem is that I am using that code inside a J2EE project, and that EAR file is deployed to Wildfly.
I am again getting the same error, and I don't know where to set the environment variable in Wildfly.
Finally I found a way to set up GOOGLE_APPLICATION_CREDENTIALS as an environment variable inside Wildfly.
If you are running the server through Eclipse:
Open the Wildfly server settings by double-clicking your server inside the Servers tab.
Click "Open Launch Configuration".
Move to the "Environment" tab and add the new variable as a key-value pair,
e.g.
GOOGLE_APPLICATION_CREDENTIALS /Users/temp/Downloads/Sentiment-0e556940c1d8.json
If you are running the server using the terminal:
By default, Wildfly looks for additional settings inside the standalone.conf file.
Just open wildfly/bin/standalone.conf and add the following line (the export is needed so the variable reaches the JVM process):
export GOOGLE_APPLICATION_CREDENTIALS=/Users/temp/Downloads/Sentiment-0e556940c1d8.json
That's it. You are good to go.