How to use DefaultAzureCredential in both local and hosted environments (Azure and on-premises) to access Azure Key Vault? - .NET 5

We have a web API (.NET 5) which accesses some secrets from Azure Key Vault.
On my local machine for development, since I am the owner of the newly created vault, my account has access privileges to the Key Vault.
Hence I selected my account through VS --> Tools --> Options --> Azure Service Authentication --> Account Selection --> "myemail#.com"
I have the below code to fetch secrets from Key Vault and access them through configuration, just like we access appsettings values.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            var appSettings = config.Build();
            var credentialOptions = new DefaultAzureCredentialOptions();
            var credential = new DefaultAzureCredential(credentialOptions);
            config.AddAzureKeyVault(new Uri(appSettings["Url:KeyVault"]), credential);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
We access the secret value like _configuration["secret"] in the service and controller layers.
My queries are:
1. If I deploy this code to an on-premises server, how will it work (the dev environment is an on-premises server)?
2. If I deploy this web API to Azure, how do I use the AD app identity to access the Key Vault without any code change? We have an AD app registered which has read access to this particular vault.
I want the code to work seamlessly for both local and Azure.

DefaultAzureCredential is the new, unified way to connect to and retrieve tokens from Azure Active Directory, and can be used with any resource that needs them.
DefaultAzureCredential acquires the token based on the environment the application is running in.
The following credential types, if enabled, will be tried in order: EnvironmentCredential, ManagedIdentityCredential, SharedTokenCacheCredential, InteractiveBrowserCredential.
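If you need to narrow that chain (for example, to avoid browser prompts on a server), DefaultAzureCredentialOptions exposes Exclude* flags for each credential type. A minimal sketch, not something the question's code requires:

using Azure.Identity;

// Skip credential types that should never be tried in this environment
var options = new DefaultAzureCredentialOptions
{
    ExcludeInteractiveBrowserCredential = true,  // no browser available on a server
    ExcludeSharedTokenCacheCredential = true     // ignore the local developer token cache
};
var credential = new DefaultAzureCredential(options);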
If I deploy this code to an on-premises server, how will it work (dev env is an on-premises server)?
When executing this on a development machine (on-premises server), you first need to configure the environment by setting the variables AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_CLIENT_SECRET to the appropriate values for your service principal (the app registered in Azure AD).
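With those three variables set, the EnvironmentCredential step of the chain authenticates as the service principal and no code change is needed. For illustration only, this is roughly what that step amounts to (a sketch, not code you need to add):

using Azure.Identity;

// What EnvironmentCredential effectively does when the three variables are present
var tenantId     = Environment.GetEnvironmentVariable("AZURE_TENANT_ID");
var clientId     = Environment.GetEnvironmentVariable("AZURE_CLIENT_ID");
var clientSecret = Environment.GetEnvironmentVariable("AZURE_CLIENT_SECRET");

var explicitCredential = new ClientSecretCredential(tenantId, clientId, clientSecret);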
If I deploy this web app to Azure, how do I use the AD app identity to access the Key Vault without any code change? We have an AD app registered which has read access to this vault.
You can enable a system-assigned managed identity for your web app. Add an access policy for this identity in your Azure Key Vault so it can read secrets. Then, without making any changes to your code, your web app will be able to read the Key Vault secrets.
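Because DefaultAzureCredential falls through to ManagedIdentityCredential when a managed identity is available, the exact same credential code keeps working in Azure. A minimal sketch of reading a secret directly; the vault URL and secret name are placeholders:

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// The same DefaultAzureCredential resolves to ManagedIdentityCredential in Azure
// and to your developer or service-principal credential elsewhere; no code change.
var client = new SecretClient(new Uri("https://myvault.vault.azure.net/"), new DefaultAzureCredential());
KeyVaultSecret secret = await client.GetSecretAsync("secret");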

Related

Why can't I connect to my azure blob storage account using a managed identity?

I have a Python 3.8 application deployed on a Kubernetes cluster on Azure that has to access a blob storage container in an account in a different resource group. I'm using a managed identity to authenticate and query the container:
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient, ContainerClient

creds = ManagedIdentityCredential()
url_template = task_config["ACCOUNT_ADDRESS_TEMPLATE"]
account_name = task_config["BLOB_STORAGE_ACCOUNT"]
account_url = url_template.replace("*", account_name)
blob_service_client = BlobServiceClient(account_url=account_url, credential=creds)
if container not in [c.name for c in blob_service_client.list_containers()]:
    raise BlobStorageContainerDoesNotExistError(
        f"Container {container} does not exist"
    )
self.client: ContainerClient = blob_service_client.get_container_client(
    container=container
)
I have verified that the managed identity has been assigned the Storage Blob Data Contributor role in the storage account, and also at the level of the resource group. I have verified that the token generated when instantiating the ManagedIdentityCredential() object references the right managed identity, and I have whitelisted the outbound IP (and every other possible IP just in case) of my python application. Nevertheless, I keep getting this error when attempting to list the containers in the account:
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'This request is not authorized to perform this operation.'
Could anyone point me in the right direction?
Specs:
azure-identity = "1.5"
azure-storage-blob = "12.8.1"
python = "3.8"
Platform: Linux Docker containers running on a Kubernetes cluster deployed on Azure.
I have tested this in my environment.
It seems your Storage Account is configured to allow access from selected networks only.
Please make sure to allow access from your AKS VMSS virtual network.
Then you can use the below Python script to list the blob containers in the Storage Account:
from azure.storage.blob import BlobServiceClient
from azure.identity import ManagedIdentityCredential

creds = ManagedIdentityCredential()
blob_service_client = BlobServiceClient(account_url="https://StorageAccountName.blob.core.windows.net/", credential=creds)
containers = blob_service_client.list_containers()
for container in containers:
    print(container.name)

Azure Blob: How to grant a mobile app limited access to user-specific data

I have a mobile app (Xamarin Android & iOS) that connects to a website (ASP.NET MVC). Some of the content for the mobile app (files & images) comes from an Azure Blob store that currently has public read access enabled.
I am building an authentication module for the app (OAuth, with username/password). Is it possible to somehow build authentication into my Azure Blob account as well, so that a user would only have access to their specific files? I know that I could use the website as an intermediary (i.e. the user authenticates and connects to the website, the website connects to Azure, retrieves the data and returns it to the app), but this adds an extra step of lag compared to connecting to Azure Blob directly.
I see that Azure Blob supports shared access signature (SAS) tokens. Is it possible to generate a SAS token just for the subset of files relevant to that user? I imagine the workflow would be:
mobile app authenticates to the website API
website generates and returns a SAS token for blob access
mobile app connects to Azure Blob directly using the SAS token.
Would that even be a good idea? Any other suggestions?
From what I understand of your scenario, you could use either Azure AD or SAS for authentication/authorization to Blob storage. The key will be to organize your users' data by container, so that you can restrict access to that container. This type of design will align best with how authorization is handled in Azure Storage today.
So for example, you would create a container for user1's data, another container for user2's data, and so on.
If you are already using Azure AD to authenticate and authorize your users for your application, then you may be able to simply assign an RBAC role that is scoped to the user's container for each user. For example, you can assign the Storage Blob Data Contributor role to user1 for container1, then do the same for user2 on container2. See Use the Azure portal to assign an Azure role for access to blob and queue data for information about how to do this in the Azure portal; you can also use PowerShell or Azure CLI.
Note that an RBAC role cannot be scoped to an individual blob, but only at the container level or above.
If you determine that you need to use SAS, you can create a SAS for each user that is restricted to their container. If your users are already authenticating/authorizing to your application with Azure AD, then you probably don't need to use SAS. The SAS would be useful in the case where you need to grant access to a user that is not otherwise authenticated.
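If you go the SAS route, your website API can mint a short-lived, container-scoped SAS after authenticating the user and return it to the mobile app. A minimal sketch using Azure.Storage.Blobs; the account name, container name, and key source are placeholders for illustration:

using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Client authorized with the account key; only the server ever sees this key
var accountKey = Environment.GetEnvironmentVariable("STORAGE_ACCOUNT_KEY");
var containerClient = new BlobContainerClient(
    new Uri("https://mystorageaccount.blob.core.windows.net/user1-files"),
    new StorageSharedKeyCredential("mystorageaccount", accountKey));

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = containerClient.Name,
    Resource = "c",                                // "c" => the SAS covers the whole container
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)  // keep the token short-lived
};
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);

// Return this URI to the mobile app; it grants read/list access to user1's container only
Uri sasUri = containerClient.GenerateSasUri(sasBuilder);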

Forge configuration What to Put in Server Providers=>Amazon=>Key/Secret

I created an Ubuntu server on Amazon AWS.
Then I registered for Forge, and now trying to configure it.
I selected source control to be Bitbucket.
I selected Amazon in the Server Provider section, but now I am not sure what to put in Key and Secret.
I found the answer to this question:
We need to create an IAM user and opt for an API access key and secret.
Also remember to give this user at least FullEC2Admin access before initiating the process to create and provision the server via Forge.

How to grant EC2 access to SQS

The docs are very confusing to me. I have read through the SQS access docs. But what really throws me is this page: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-sqs.html
You can provide your credential profile like in the preceding example,
specify your access keys directly (via key and secret), or you can
choose to omit any credential information if you are using AWS
Identity and Access Management (IAM) roles for EC2 instances or
credentials sourced from the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables.
1) Regarding what I have bolded (using IAM roles for EC2 instances), how is that possible? I cannot find any steps for granting EC2 instances access to SQS using IAM roles. This is very confusing.
2) Where would the aforementioned environment variables be placed? And where would you get the key and secret from?
Can someone help clarify?
There are several ways that applications can discover AWS credentials. Any software using the AWS SDK automatically looks in these locations. This includes the AWS Command-Line Interface (CLI), which is a python app that uses the AWS SDK.
Your bold words refer to #3, below:
1. Environment Variables
The SDK will look for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. This is a great way to provide credentials because there is no danger of accidentally committing a credentials file to GitHub or other repositories. On Windows, use the System control panel to set the variables. On Mac/Linux, just export the variables from the shell.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the environment variables.
2. Local Credentials File
The SDK will look in local configuration files, such as:
~/.aws/credentials
C:\users\awsuser\.aws\credentials
These files are great for storing user-specific credentials and can actually store multiple profiles, each with their own credentials. This is useful for switching between different environments such as Dev and Test.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the configuration file.
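With the AWS SDK for .NET, a specific profile from that credentials file can be loaded by name. A minimal sketch; the profile name "dev" is just an example:

using Amazon;
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;
using Amazon.SQS;

// Look up the "dev" profile in ~/.aws/credentials (or the SDK store on Windows)
var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("dev", out AWSCredentials credentials))
{
    var sqs = new AmazonSQSClient(credentials, RegionEndpoint.USEast1);
    // ... use the client as usual
}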
3. IAM Roles on an Amazon EC2 instance
An IAM role can be associated with an Amazon EC2 instance at launch time. Temporary credentials will then automatically be provided by the instance metadata service at the URL:
http://instance-data/latest/meta-data/iam/security-credentials/<role-name>/
This will return meta-data that contains AWS credentials, for example:
{
  "Code" : "Success",
  "LastUpdated" : "2015-08-27T05:09:23Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAI5OXLTT3D5NCV5MS",
  "SecretAccessKey" : "sGoHyFaVLIsjm4WszUXJfyS1TVN6bAIWIrcFrRlt",
  "Token" : "AQoDYXdzED4a4AP79/SbIPdV5N8k....lZwERog07b6rgU=",
  "Expiration" : "2015-08-27T11:11:50Z"
}
These credentials inherit the permissions of the IAM role that was assigned when the instance was launched. They automatically rotate every 6 hours (note the Expiration in this example, approximately 6 hours after the LastUpdated time).
Applications that use the AWS SDK will automatically look at this URL to retrieve security credentials. Of course, they will only be available when running on an Amazon EC2 instance.
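So for the original question: attach an IAM role that allows the required SQS actions when launching the instance, and the SDK needs no explicit keys at all. A minimal sketch with the AWS SDK for .NET; the queue URL and region are placeholders:

using Amazon;
using Amazon.SQS;
using Amazon.SQS.Model;

// No keys, profile, or environment variables supplied: on an EC2 instance launched
// with an IAM role, the SDK resolves temporary credentials from instance metadata.
var sqs = new AmazonSQSClient(RegionEndpoint.USEast1);

var response = await sqs.SendMessageAsync(new SendMessageRequest
{
    QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MessageBody = "hello from EC2"
});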
Credentials Provider Chain
Each particular AWS SDK (e.g. Java, .NET, PHP) may look for credentials in different locations. For further details, refer to the appropriate documentation, e.g.:
Providing AWS Credentials in the AWS SDK for Java
Providing AWS Credentials in the AWS SDK for .Net
Providing AWS Credentials in the AWS SDK for PHP

Amazon STS as Token Vending Machine: Is User Session Management a valid Usecase?

Recently I read this article:
http://aws.amazon.com/articles/SDKs/Android/4611615499399490
Now my question is...
Can Amazon STS (Security Token Service) be used as a Token Vending Machine to manage user sessions for clients of a web server (as opposed to clients of AWS services)?
Assume I have a web application, and this web application has registered users who are authenticated with login credentials. Now I wish to issue a session token to these authenticated users.
1. User -> Web App -> User Login Page
2. User gives Credentials -> Web App -> Issues a Session Token (with expiry policy)
3. User presents the Session Token -> Web App Resources (non-AWS resources proxied by the Web App)
Can I use Amazon's Security Token Service independently for the above use case? Or is Amazon STS only usable for access to Amazon services?
The reasons I wish to use Amazon STS are:
- I don't have to worry about session token management
- It is proven and scalable
Please help. I am a little confused about this.
STS will provide temporary credentials (access key, secret key and token) for AWS Services only and should not be used for application authentication (or session management). But you could store those credentials in your session for AWS API access from your app.
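For context, here is a minimal sketch of what STS actually hands back (temporary AWS credentials, not an application session token), using the AWS SDK for .NET; the duration is an example value:

using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

// GetSessionToken returns temporary AWS credentials for calling AWS APIs only
var sts = new AmazonSecurityTokenServiceClient();
GetSessionTokenResponse token = await sts.GetSessionTokenAsync(new GetSessionTokenRequest
{
    DurationSeconds = 3600
});

// token.Credentials exposes AccessKeyId, SecretAccessKey, SessionToken and Expiration;
// you could stash these in your web session for later AWS API calls, but they are not
// a substitute for your application's own session/authentication tokens.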

Resources