We are using MinIO to store file releases.
We are using the Go CDK library to access S3-compatible storage over HTTP.
The problem is that when I try to execute a release I get this error: **NoCredentialProviders: no valid providers in chain. Deprecated.**
This is the URL we are using: "s3://integration-test-bucket?endpoint=minio:9001&region=us-west-2". Is there any way to pass credentials in the URL itself? In this case it is not sensitive data, since we are only running it locally.
Note: I'm using a docker-compose.yml with the default MinIO environment values for MINIO_ACCESS_KEY and MINIO_SECRET_KEY (minioadmin / minioadmin).
I tried several kinds of query parameters in the URL to pass credentials. The goal is not to touch the Go CDK library itself, but to pass credentials through the URL, or to pass dummy credentials / skip the credential check altogether.
You can provide the following environment variables to the service/container that is trying to connect to MinIO:
AWS_ACCESS_KEY_ID=${MINIO_USER}
AWS_SECRET_ACCESS_KEY=${MINIO_PASSWORD}
AWS_REGION=${MINIO_REGION_NAME}
The library should pick them up when the container starts and use them when executing requests.
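For example, in docker-compose.yml this could look roughly like the sketch below (the releases service name and image are made up for illustration; under the hood, the AWS SDK that the Go CDK uses reads the standard AWS_* variables through its default credential chain):

services:
  minio:
    image: minio/minio
    command: server /data
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
  releases:
    image: my-release-service            # illustrative image name
    environment:
      AWS_ACCESS_KEY_ID: minioadmin      # must match MINIO_ACCESS_KEY
      AWS_SECRET_ACCESS_KEY: minioadmin  # must match MINIO_SECRET_KEY
      AWS_REGION: us-west-2
    depends_on:
      - minio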
I have generated a service account key file which contains an ID for the key as well as the (rather lengthy) key itself.
I'd like to store this in GitLab CI/CD, preferably automatically, where the process itself can do one of two things (neither of which I have fully conceptualized):
1. Pass the CI/CD variable in as another nested variable that the host machine will keep around in subsequent workflows, so that the key, stored as a string in a $variable, can be passed in as the value of this argument:
-D fs.gs.auth.service.account.private.key="${variable}"
2. (In CI/CD prep) check for the existence of a file and, if it is missing, create it and (somehow) inject the value of the CI/CD variable into it, then safely call it again from subsequent scripts and use the output of that file as the value of the same argument above (a hadoop command):
-D fs.gs.auth.service.account.private.key="${/path/to/file?}"
The above of course does not work. (Invalid arguments: Invalid PKCS8 data)
EDIT: I should also add that this GitLab instance is not associated with Google Cloud, and the project I am building is attempting to use Google services remotely via service accounts.
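Roughly what I have in mind for the second option, as a sketch (the job name, the GCP_SA_PRIVATE_KEY variable, the key path and the gs:// bucket are placeholders I made up):

# .gitlab-ci.yml (sketch)
hadoop-job:
  before_script:
    # write the CI/CD variable to a file if it does not already exist
    - test -f /tmp/gcp_key.pem || printf '%s' "$GCP_SA_PRIVATE_KEY" > /tmp/gcp_key.pem
  script:
    # read the file back when building the hadoop command
    - hadoop fs -D fs.gs.auth.service.account.private.key="$(cat /tmp/gcp_key.pem)" -ls gs://my-bucket/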
I have a Spring Boot application using an embedded Keycloak server.
What I am looking for is a way to load the Keycloak server from it, make changes to the configuration, add users, and then export this new version of the Keycloak setup.
This question got an answer on how to do a partial export, but I can't find anything in the Keycloak Admin REST API documentation about how to do a full export.
With the standalone Keycloak server I would be able to simply use the CLI and type
-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=/tmp/keycloak-dump.json
But this is the embedded version.
This is most likely trivial since I know for a fact that newly created users have to be stored somewhere.
I added a user, and restarting the application doesn't remove it, so Keycloak persists it somehow. But the JSON files I use for the Keycloak server and realm setup haven't changed.
So, with no CLI access without a standalone server and no REST endpoint for a full export, how do I load the server, make some changes, and generate a new JSON export that I can simply put into my Spring app instead?
You can make a full export with the following command (if your Spring Boot app runs inside a Docker container):
[podman | docker] exec -it <pod_name> /opt/jboss/keycloak/bin/standalone.sh
  -Djboss.socket.binding.port-offset=<integer_value>   (an offset of at least 100 is recommended when running in Docker)
  -Dkeycloak.migration.action=[export | import]
  -Dkeycloak.migration.provider=[singleFile | dir]
  -Dkeycloak.migration.dir=<DIR_TO_EXPORT_TO>   (only if .migration.provider=dir)
  -Dkeycloak.migration.realmName=<REALM_NAME_TO_EXPORT>
  -Dkeycloak.migration.usersExportStrategy=[DIFFERENT_FILES | SKIP | REALM_FILE | SAME_FILE]
  -Dkeycloak.migration.usersPerFile=<integer_value>   (only if .usersExportStrategy=DIFFERENT_FILES)
  -Dkeycloak.migration.file=<FILE_TO_EXPORT_TO>
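For instance, a full single-file export of one realm could look like this (container name, realm name and output path are placeholders):

docker exec -it keycloak /opt/jboss/keycloak/bin/standalone.sh \
  -Djboss.socket.binding.port-offset=100 \
  -Dkeycloak.migration.action=export \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.realmName=my-realm \
  -Dkeycloak.migration.usersExportStrategy=REALM_FILE \
  -Dkeycloak.migration.file=/tmp/keycloak-dump.json

You can then copy the dump out of the container with docker cp.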
I am creating an open source keycloak example with documentation; you can see a full guide about import/export in my company's GitHub.
I have a CodePipeline set up with GitHub as the source. I am trying, without success, to pass a single secret parameter (in this case a Stripe secret key, currently defined in an .env file; explanation below) to a specific Lambda during the Deployment stage of the pipeline's execution.
The Deployment stage in my case is basically a CodeBuild project that runs the deployment.sh script:
#! /bin/bash
npm install -g serverless@1.60.4
serverless deploy --stage $env -v -r eu-central-1
Explanation:
I've tried doing this with serverless-dotenv-plugin, which serves the purpose when the deployment is done locally, but when it's done through CodePipeline it fails at Lambda execution time, for the following reason:
Since CodePipeline's source is GitHub (the .env file is not committed), the pipeline is triggered whenever a change is committed to the repository. By the time it reaches the deployment stage, all node modules are installed (serverless-dotenv-plugin among them), and when the serverless deploy --stage $env -v -r eu-central-1 command executes, serverless-dotenv-plugin searches for the .env file in which my secret is stored. It won't find it, since there is no .env file outside my "local" scope, so the Lambda that requires this secret throws an error when it is triggered.
So my question is, is it possible to do it with dotenv/serverless-dotenv-plugin, or should that approach be discarded? Should I maybe use SSM Parameter Store or Secrets Manager? If yes, could someone explain how? :)
So, upon further investigation of this topic I think I have the solution.
SSM Parameter Store vs. Secrets Manager is an entirely different topic, but for my purpose SSM Parameter Store is what I chose to go with for this problem. Basically, it can be done in two ways.
1. Use AWS Parameter Store
Simply add a secret in your AWS Parameter Store console, then reference the value in your serverless.yml as a Lambda environment variable. The Serverless Framework fetches the value from Parameter Store at deploy time.
provider:
  environment:
    stripeSecretKey: ${ssm:stripeSecretKey}
Finally, you can reference it in your code just as before:
const stripe = Stripe(process.env.stripeSecretKey);
PROS: This can be used along with a local .env file for both local and remote usage while keeping your Lambda code the same, i.e. process.env.stripeSecretKey.
CONS: Since the secrets are decrypted and then set as Lambda environment variables at deploy time, anyone who opens your Lambda console can see the secret values in plain text (which is a real security concern).
That brings me to the second way of doing this, which I find more secure and which I ultimately chose:
2. Store in AWS Parameter Store, and decrypt at runtime
To avoid exposing the secrets in plain text in your AWS Lambda Console, you can decrypt them at runtime instead. Here is how:
Add the secrets in your AWS Parameter Store Console just as in the above step.
Change your Lambda code to call the Parameter Store directly and decrypt the value at runtime:
const stripePackage = require('stripe');
const aws = require('aws-sdk');
const ssm = new aws.SSM();

exports.handler = async (event) => {
  // fetch and decrypt the secret from Parameter Store at runtime
  const stripeSecretKey = await ssm.getParameter(
    {Name: 'stripeSecretKey', WithDecryption: true}
  ).promise();
  const stripe = stripePackage(stripeSecretKey.Parameter.Value);
  // ... use the stripe client to handle the event
};
(Small tip: ssm.getParameter(...).promise() returns a Promise, so the Lambda handler has to be async and use the await keyword, as shown above.)
PROS: Your secrets are not exposed in plain text at any point.
CONS: Your Lambda code does get a bit more complicated, and there is an added latency since it needs to fetch the value from the store. (But considering it's only one parameter and it's free, it's a good trade-off I guess)
To conclude, for all of this to work you will need to tweak your Lambda's policy so that it can access Systems Manager and the secret stored in Parameter Store; any permission errors are easy to inspect through CloudWatch.
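As a rough sketch, that tweak could look like this in serverless.yml, assuming Serverless Framework 1.x and the stripeSecretKey parameter from above (the region and account ID are placeholders):

provider:
  name: aws
  iamRoleStatements:
    # allow the functions to read (and decrypt) the parameter at runtime
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: arn:aws:ssm:eu-central-1:123456789012:parameter/stripeSecretKey

If the parameter is a SecureString encrypted with a customer managed KMS key, you may also need to allow kms:Decrypt on that key.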
Hopefully this helps someone out, happy coding :)
I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a Kubernetes secret at /etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still shell into the container with "docker exec -it 123456 sh" and view the YAML file. How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file and have your app read them from the environment instead. But if there is still a possibility of someone logging in to the container, they can read environment variables too.
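For example (the file and image names are made up):

# app.env contains ENCRYPTION_KEY=<value>
docker run --env-file ./app.env my-spring-app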
OR,
Set those properties as JVM system properties using -D rather than in the system environment; Spring can read properties from the JVM environment too.
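For example (the property name encryption.key is illustrative):

java -Dencryption.key=<value> -jar app.jar

Spring can then resolve it like any other property, e.g. with @Value("${encryption.key}").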
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create Deployments/StatefulSets/DaemonSets/Jobs/CronJobs and so on), they can easily create a pod that mounts the secret and simply read it. Even someone who can only patch pods/Deployments can potentially read every secret in the namespace. There is no way to escape that.
For me, that's the biggest security flaw in Kubernetes, and it's why you must be very careful about granting permission to create and patch pods/Deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out the ability to create pods.
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container so the secret can never be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must still forbid access to the node itself to prevent direct access to the Docker daemon.
You can try mounting the secret as an environment variable. Once your application has grabbed the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
I need a Mongo client with a user that has only read access for all databases.
I also need the Ruby Mongo client to be created without the password hardcoded.
Any suggestions?
Just use the standard Ruby Mongo driver: https://github.com/mongodb/mongo-ruby-driver
Here is the API documentation, in which you'll find details on authentication (essentially it requires passing user and password keys in the initialization options): http://api.mongodb.com/ruby/2.5.0/Mongo/Client.html
--
Also I need the Ruby Mongo client to be created without the password hardcoded
You can always keep the authentication details in your app's config,
OR
pass them in as environment variables when starting the process (they will be available through the ENV hash). An example of how this is used:
RAILS_ENV=development rails s - here RAILS_ENV is an environment variable accessible in your app as ENV['RAILS_ENV'].
If you decide to do it this way, you can keep authentication outside your app, on the machine on which you're running it.
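A minimal sketch of what that can look like with the Ruby driver, assuming the read-only user's credentials are exposed as MONGO_USER and MONGO_PASSWORD environment variables (the variable names, host and auth database are placeholders):

require 'mongo'

# credentials come from the environment instead of being hardcoded
client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'admin',
  user:     ENV['MONGO_USER'],
  password: ENV['MONGO_PASSWORD']
)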