Go storage client not able to access GCP bucket

I have a Golang service that exposes an API through which we upload a CSV to a GCP bucket. On my local host, I set the environment variable GOOGLE_APPLICATION_CREDENTIALS to point to the file path of the service account JSON. But when deploying to an actual GCP instance, I get the error below while trying to access this API. Ideally, the service should talk to the GCP metadata server, fetch the credentials, and then store them in a JSON file. So there are two problems here:
The service is not querying the metadata server to get the credentials.
If the file is present (I created it manually), it cannot be accessed due to permission issues.
Any help would be appreciated.
Error while initializing storage Client:dialing: google: error getting credentials using well-known file (/root/.config/gcloud/application_default_credentials.json): open /root/.config/gcloud/application_default_credentials.json: permission denied
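For context, here is a minimal sketch of the usual client initialization that relies on Application Default Credentials (ADC); this is the path that produced the error above, because ADC found the well-known gcloud file before it ever reached the metadata server:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()

    // With no explicit options, the client resolves credentials via ADC:
    // GOOGLE_APPLICATION_CREDENTIALS, then the well-known gcloud file,
    // then the GCE metadata server.
    client, err := storage.NewClient(ctx)
    if err != nil {
        log.Fatalf("Error while initializing storage Client: %v", err)
    }
    defer client.Close()
}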

Finally, after long debugging and searching the web, I found that there is already an open issue for the Go storage client: https://github.com/golang/oauth2/issues/337. I had to make a few changes in the code using this method: https://pkg.go.dev/golang.org/x/oauth2/google#ComputeTokenSource, which basically fetches the token explicitly from the metadata server before making the subsequent cloud API calls.
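A minimal sketch of that workaround (the scope, bucket, and object names below are placeholders): google.ComputeTokenSource builds a token source backed by the metadata server, which is then passed explicitly to the storage client.

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()

    // Fetch tokens directly from the GCE metadata server instead of
    // relying on the well-known credentials file.
    ts := google.ComputeTokenSource("", "https://www.googleapis.com/auth/devstorage.read_write")

    client, err := storage.NewClient(ctx, option.WithTokenSource(ts))
    if err != nil {
        log.Fatalf("Error while initializing storage Client: %v", err)
    }
    defer client.Close()

    // Upload a CSV to the bucket (names are placeholders).
    w := client.Bucket("my-bucket").Object("upload.csv").NewWriter(ctx)
    if _, err := w.Write([]byte("col1,col2\n")); err != nil {
        log.Fatalf("write failed: %v", err)
    }
    if err := w.Close(); err != nil {
        log.Fatalf("close failed: %v", err)
    }
}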

Related

Databricks Azure Blob Storage access

I am trying to access files stored in Azure blob storage and have followed the documentation linked below:
https://docs.databricks.com/external-data/azure-storage.html
I was successful in mounting the Azure Blob Storage on DBFS, but it seems that method is not recommended anymore. So, I tried to set up direct access via URI using SAS authentication.
spark.conf.set("fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage-account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage-account>.dfs.core.windows.net", "<token>")
Now when I try to access any file using:
spark.read.load("abfss://<container-name>#<storage-account-name>.dfs.core.windows.net/<path-to-data>")
I get the following error:
Operation failed: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.", 403, HEAD,
I am able to mount the storage account using the same SAS token but this is not working.
What needs to be changed for this to work?
If you are using Blob Storage, then you have to use wasbs and not abfss. I tried using the same code as yours with my SAS token and got the same error with my blob storage.
spark.conf.set("fs.azure.account.auth.type.<storage_account>.dfs.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage_account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage_account>.dfs.core.windows.net", "<token>")
df = spark.read.load("abfss://<container>@<storage_account>.dfs.core.windows.net/input/sample1.csv")
When I used the following modified code, I was able to successfully read the data.
spark.conf.set("fs.azure.account.auth.type.<storage_account>.blob.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage_account>.blob.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage_account>.blob.core.windows.net", "<token>")
df = spark.read.format("csv").load("wasbs://<container>@<storage_account>.blob.core.windows.net/input/sample1.csv")
UPDATE:
To access files from Azure Blob Storage when the firewall is configured to allow access only from selected networks, you need to configure a VNet for the Databricks workspace.
Now add the same virtual network to your storage account as well.
I have also selected service endpoints and subnet delegation for that subnet.
Now when I run the same code again using the file path as wasbs://<container>@<storage_account>.blob.core.windows.net/<path>, the file is read successfully.

How to override DefaultAWSCredentialsProviderChain with our own implementation of a credentials provider using an assumed role

I am trying to use Spring Config Server cross-account, as I am deploying the config server in Kubernetes with an AWS (S3) backend.
But due to DefaultAWSCredentialsProviderChain I am unable to connect to the S3 bucket and get a 403 error.
According to the logs, WebIdentityTokenCredentialsProvider in DefaultAWSCredentialsProviderChain tries to get credentials and receives a 403 error.
But when I try to connect with my AWS S3 client using STSAssumeRoleSessionCredentialsProvider, it connects.
Is there any way I can provide STSAssumeRoleSessionCredentialsProvider instead of DefaultAWSCredentialsProviderChain?

Data Factory Blob Storage Linked Service with Managed Identity: The remote server returned an error: (403)

I have created a linked service connection to a storage account using a managed identity, and it successfully validates, but when I try to use the linked service on a dataset I get an error:
A storage operation failed with the following error 'The remote server returned an error: (403)
The error is displayed when I attempt to browse the blob to set the file path.
The managed identity for the data factory has been assigned the Contributor role.
The blob container is set to private access.
Anyone know how I make this work?
Turns out I was using the wrong role. Just needed to add a role assignment for Storage Blob Data Contributor.

Unable to execute OData calls using the S/4HANA SDK in a Cloud Foundry environment with OAuth2SAMLBearerAssertion authentication

I'm trying to connect to an S/4HANA system using the S/4HANA SDK. While executing calls via the .execute() method in the Cloud Foundry environment, I see the error logs below:
Caused by: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get authentication headers. Destination service returned error: Missing private and public key for subaccount ******-****-****-***-*******.
Note: I've already configured trust between the subaccount and the S/4HANA system and created the respective communication and business users. The authentication method used in the destination is OAuth2SAMLBearerAssertion. The call executes fine in both the local and Cloud Foundry environments with basic authentication.
Can someone please suggest what is wrong here?
As correctly pointed out by @Dennis H, there was a problem in the trust configuration between my subaccount and the S/4HANA system. The configuration was wrong in my case in two ways:
-> The certificate I downloaded for trust was from this URL:
https://.authentication.eu10.hana.ondemand.com/saml/metadata
This is incorrect; we need to get the certificate from the "Download Trust" button on the Destinations tab at the subaccount level.
-> The provider name was incorrect in the communication system.
We are developing a side-by-side extension app and deploying it to CF. Our app is trying to connect to an S/4HANA Cloud system using OAuth2SAMLBearerAssertion, but we are facing issues while doing so. We are getting the error below in the logs. Please note that we are able to connect to S/4HANA Cloud using basic auth.
com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to access the configuration of destination
Our destination parameters are shown in the attached screenshot.
Thank you.

How do I connect to Google Cloud Datastore in golang from a GCE VM?

When I try running these example functions to connect to Cloud Datastore, I get a 401 Invalid Credentials error.
I'm running the go code from a VM within a Google Cloud Project. I have enabled the Datastore API and generated a JSON key which is loaded by the example code.
This question is very similar and even mentions the same repo, but does not use the same authentication shown in the examples, and was related to a 403 Unauthorized error.
For some reason the Datastore documentation does not mention Go outside the context of App Engine.
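A minimal sketch of connecting from a GCE VM with the cloud.google.com/go/datastore client, assuming the Datastore API is enabled and the VM's service account has Datastore permissions (the project ID, kind, and field below are placeholders); on GCE the client can obtain credentials from the metadata server, so no JSON key file is required:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/datastore"
)

type Task struct {
    Description string
}

func main() {
    ctx := context.Background()

    // On a GCE VM, NewClient uses Application Default Credentials, which
    // resolve to the VM's service account via the metadata server.
    client, err := datastore.NewClient(ctx, "my-project-id") // placeholder project ID
    if err != nil {
        log.Fatalf("datastore.NewClient: %v", err)
    }
    defer client.Close()

    // Put a sample entity (kind and fields are placeholders).
    key := datastore.IncompleteKey("Task", nil)
    if _, err := client.Put(ctx, key, &Task{Description: "hello"}); err != nil {
        log.Fatalf("Put: %v", err)
    }
}

Note that if the VM was created with restricted access scopes, it may also need the Datastore (or cloud-platform) scope before tokens issued by the metadata server are accepted.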

Resources