Spring + Vault Secret Engine Database Connection

I've stumbled across a problem with Spring and the Vault database secrets engine. I have a database connection set up, for which a lease can be generated successfully via the Vault CLI (vault read database/creds/my-role).
Now, I have a Spring service deployed via Terraform and a Vault policy document:
data "vault_policy_document" "this" {
rule {
path = "/database/creds/my-role"
capabilities = ["read"]
description = "Allow service to write and read into/from it's vault database path"
}
}
The service logs:
{"type":"application","service":"identcheck-service","timestamp":"2022-02-17 11:54:29,822","level":"WARN","message":"[RequestedSecret [path='database/creds/my-role', mode=RENEW]] Lease │
│ [leaseId='database/creds/my-role/ej2iuCzhcO8gToGjX3mFeOgg', leaseDuration=PT5M, renewable=true] Status 403 Forbidden: 1 error occurred:\n\t* permission denied\n\n; nested exception is o │
│ rg.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden: \"{\"errors\":[\"1 error occurred:\\n\\t* permission denied\\n\\n\"]}<EOL>\"","thread":"main","process_id":"8","logger":"org.s │
│ pringframework.vault.core.lease.SecretLeaseEventPublisher$LoggingErrorListener","class":"org.springframework.boot.logging.DeferredLog","method":"logTo"}
If, however, I change the path to "*" and grant all capabilities, it works just fine. I'd really like to restrict the service's capabilities to the specific DB path.
Any ideas what could cause this?
Thanks in advance
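
One thing worth double-checking, though this is only a sketch and not a confirmed fix: Vault policy paths are matched without a leading slash, and a lease RENEW goes through the sys backend rather than the creds path itself, so a read-only rule on database/creds/my-role alone may not be enough. The policy name identcheck-service and the token placeholder below are hypothetical.

# hypothetical scoped policy, written via the CLI for brevity
vault policy write identcheck-service - <<'EOF'
# no leading slash - policy paths are matched without it
path "database/creds/my-role" {
  capabilities = ["read"]
}

# lease renewal is an update against the sys backend, not the creds path
path "sys/leases/renew" {
  capabilities = ["update"]
}
EOF

# check what the service's token is actually allowed to do on the creds path
vault token capabilities <service-token> database/creds/my-role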

Related

Unable to bootstrap nomad cluster with multi-region setup "Error bootstrapping: Unexpected response code: 500 (No path to region)"

I am trying to set up Nomad ACLs across a multi-region, multi-datacenter cluster.
In the server stanza I added the below on all server nodes:
server {
  enabled              = true
  bootstrap_expect     = 2
  encrypt              = "XXX-same-on-all-servers-XXX"
  authoritative_region = "HOME-DC"
  server_join {
    retry_join = ["server1", "server2", "server3"]
  }
}
acl {
  enabled = true
}
After I restart all the servers, tailing the logs shows:
2021-02-01T11:38:04.156Z [WARN] nomad.rpc: no path found to region: region=HOME-DC
2021-02-01T11:38:04.157Z [ERROR] nomad: failed to fetch namespaces from authoritative region: error="No path to region"
And this is what I get if I run:
nomad acl bootstrap -address=$NOMAD_ADDR
Error bootstrapping: Unexpected response code: 500 (No path to region)
The docs say to set the replication_token value in the acl stanza, but I am not clear on how to do that. Does it have to be generated somehow, like the encrypt key? If yes, then how? Reference
Authoritative region: authoritative_region is not required and should be removed. After removing it, nomad acl bootstrap will succeed.
Non-authoritative regions:
authoritative_region is always required.
replication_token is required too; it can be the authoritative region's bootstrap token, or another token created with fewer capabilities.
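To make that concrete, here is a rough sketch of the two sides (the file path and the token placeholder are assumptions, not from the answer above):

# Authoritative region (HOME-DC): remove authoritative_region from its own
# server stanza, restart, then bootstrap once and keep the returned Secret ID.
nomad acl bootstrap -address="$NOMAD_ADDR"

# Non-authoritative regions: keep authoritative_region pointing at HOME-DC and
# add a replication_token (the bootstrap token works; a scoped token does too).
cat > /etc/nomad.d/acl-replication.hcl <<'EOF'
server {
  authoritative_region = "HOME-DC"
}

acl {
  enabled           = true
  replication_token = "<bootstrap-or-scoped-token>"
}
EOF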

How to load an AzureML model in an Azure Databricks compute?

I am trying to run a DatabricksStep. I have used ServicePrincipalAuthentication to authenticate the run:
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

appId = dbutils.secrets.get(<secret-scope-name>, <client-id>)
tenant = dbutils.secrets.get(<secret-scope-name>, <directory-id>)
clientSecret = dbutils.secrets.get(<secret-scope-name>, <client-secret>)
subscription_id = dbutils.secrets.get(<secret-scope-name>, <subscription-id>)
resource_group = <aml-rgp-name>
workspace_name = <aml-ws-name>

svc_pr = ServicePrincipalAuthentication(
    tenant_id=tenant,
    service_principal_id=appId,
    service_principal_password=clientSecret)

ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
    auth=svc_pr
)
The authentication is successful since running the following block of code gives the desired output:
subscription_id = ws.subscription_id
resource_group = ws.resource_group
workspace_name = ws.name
workspace_region = ws.location
print(subscription_id, resource_group, workspace_name, workspace_region, sep='\n')
However, the following block of code gives an error:
import joblib
from azureml.core.model import Model

model_name = <registered-model-name>
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
This is giving an error:
UserErrorException:
Message:
Operation returned an invalid status code 'Forbidden'. The possible reason could be:
1. You are not authorized to access this resource, or directory listing denied.
2. you may not login your azure service, or use other subscription, you can check your
default account by running azure cli commend:
'az account list -o table'.
3. You have multiple objects/login session opened, please close all session and try again.
InnerException None
ErrorResponse
{
"error": {
"message": "\nOperation returned an invalid status code 'Forbidden'. The possible reason could be:\n1. You are not authorized to access this resource, or directory listing denied.\n2. you may not login your azure service, or use other subscription, you can check your\ndefault account by running azure cli commend:\n'az account list -o table'.\n3. You have multiple objects/login session opened, please close all session and try again.\n ",
"code": "UserError"
}
}
The error is Forbidden Error even though I have authenticated using ServicePrincipalAuthentication.
How to resolve this error to run inference using an AML registered model in ADB?
The Databricks workspace needs to be in the same subscription as your AML workspace.
This notebook demonstrates the use of DatabricksStep in an Azure Machine Learning pipeline.
Here is the reference for the Model class register method.
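A quick way to sanity-check the subscription point above, roughly following the hint in the error text itself (a sketch; the az ml command assumes the ml CLI extension is installed, and the angle-bracket values mirror the placeholders from the question):

# log in as the same service principal the notebook uses and see where it lands
az login --service-principal -u "$APP_ID" -p "$CLIENT_SECRET" --tenant "$TENANT_ID"
az account list -o table

# make the AML workspace's subscription the active one, then confirm the
# registered model is visible to this identity
az account set --subscription "<aml-subscription-id>"
az ml model list --workspace-name <aml-ws-name> --resource-group <aml-rgp-name> -o table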

CDK deploy cannot read the temporary S3 bucket that holds the Lambda code

When I deploy a Lambda's code using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
});
The cdk and @aws-cdk version is 1.73.0, but I also tried 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file.
The downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
\_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
CDK_DEFAULT_REGION: 'us-west-2',
CDK_DEFAULT_ACCOUNT: '94646XXXXX',
CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '7.0.0',
CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also sets a temporary token).
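For anyone debugging a similar AccessDenied, two quick checks along those lines (a sketch only; the bucket name comes from the error message, and the asset key is a placeholder for the truncated hash):

# which identity is cdk deploy actually running as?
aws sts get-caller-identity

# can that identity read the CDK bootstrap staging bucket directly?
aws s3api get-object \
  --bucket cdktoolkit-stagingbucket-19kn1ypcmzq2q \
  --key "assets/<asset-hash>.zip" \
  /tmp/asset-check.zip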

Token authentication not working when Hashicorp vault is sealed

I'm working on a sample application where I want to connect to the Hashicorp vault to get the DB credentials. Below is the bootstrap.yml of my application.
spring:
  application:
    name: phonebook
  cloud:
    config:
      uri: http://localhost:8888/
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.5bXvCP90f4GlQMKrupuQwH7C
  profiles:
    active:
      - local,test
The application builds properly when the Vault server is unsealed; Maven fetches the database username from Vault without issues. When I run the build after sealing the Vault, it fails with the error below.
org.springframework.vault.VaultException: Status 503 Service Unavailable [secret/application]: error performing token check: Vault is sealed; nested exception is org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [{"errors":["error performing token check: Vault is sealed"]}
How can I resolve this? I want Maven to get the DB username and password during the build without any issues from Vault, even when it is sealed.
It's a benefit of Vault that it is not simple static storage; on any change in the environment, you need to perform some actions to keep the system stable and workable.
Advice: create a script (or scripts) to automate the process.
Example: I have a multi-service system, and some of my services use Vault to get their configuration.
init.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault operator unseal <token1>
vault operator unseal <token2>
vault operator unseal <token3>
vault login <main token>
vault secrets enable -path=<path>/ -description="secrets for My projects" kv
vault auth enable approle
vault policy write application-policy-dev ./application-policy-DEV.hcl
application.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault login <main token>
vault delete <secret>/<app_path>
vault delete sys/policy/<app>-policy
vault delete auth/approle/role/<app>-role
vault kv put <secret>/<app_path> - < <(yq m ./application.yaml)
vault policy write <app>-policy ./<app>-policy.hcl
vault write auth/approle/role/<app>-role token_policies="application-policy"
role_id=$(vault read auth/approle/role/<app>-role/role-id -format="json" | jq -r '.data.role_id')
secret_id=$(vault write auth/approle/role/<app>-role/secret-id -format="json" | jq -r '.data.secret_id')
token=$(vault write auth/approle/login role_id="${role_id}" secret_id=${secret_id} -format="json" | jq -r '.auth.client_token')
echo 'Token:' ${token}
where <app> is the name of your application, application.yaml is the file with the configuration, and <app>-policy.hcl is the file with the policy.
Of course, all these files should not be public; they are only for Vault administration.
On any change in the environment, or when a Vault period terminates, just run init.sh. To get a token for the application, run application.sh. Also, if you need to change a configuration parameter, change it in application.yaml, run application.sh, and use the resulting token.
Script result (for one of my services):
Key                  Value
---                  -----
token                *****
token_accessor       *****
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Success! Data deleted (if it existed) at: <secret>/<app>
Success! Data deleted (if it existed) at: sys/policy/<app>-policy
Success! Data deleted (if it existed) at: auth/approle/role/<app>-role
Success! Data written to: <secret>/<app>
Success! Uploaded policy: <app>-policy
Success! Data written to: auth/approle/role/<app>-role
Token: s.dn2o5b7tvxHLMWint1DvxPRJ
Process finished with exit code 0
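
On the build side, a small guard can also make the sealed state explicit instead of letting Maven fail halfway with a 503. This is only a sketch: VAULT_ADDR and the Maven goal are assumptions, and it relies on jq being installed.

#!/bin/bash
export VAULT_ADDR="http://localhost:8200"

# vault status reports the seal state; refuse to build while Vault is sealed
if [ "$(vault status -format=json | jq -r '.sealed')" = "true" ]; then
  echo "Vault at $VAULT_ADDR is sealed - run 'vault operator unseal' first" >&2
  exit 1
fi

mvn clean verify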

Google Cloud Monitoring Ruby client permission issue

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id
descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)
result = client.create_metric_descriptor project_name, descriptor
The error I get is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7:Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)".
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id
# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up a different credential for Cloud Monitoring?
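
One thing worth checking before anything else, sketched below with gcloud: which credentials the Ruby client is actually picking up, and whether that service account has a Monitoring write role on the project. roles/monitoring.metricWriter is the usual suspect for monitoring.metricDescriptors.create, but treat it as an assumption rather than a confirmed fix; <sa-email> and $PROJECT_ID are placeholders.

# which account is active, and what does GOOGLE_APPLICATION_CREDENTIALS point to?
gcloud auth list
echo "$GOOGLE_APPLICATION_CREDENTIALS"

# list the roles currently bound to the service account on the project
gcloud projects get-iam-policy "$PROJECT_ID" \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:serviceAccount:<sa-email>"

# if no Monitoring write role shows up, grant one
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:<sa-email>" \
  --role="roles/monitoring.metricWriter"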
