I am currently facing an issue with MassTransit passing the default role credentials. My application code is deployed into an EKS container, and that container is attached to an IAM role.
The IAM role has full access to the SQS service. Using this configuration without MassTransit, I am able to push messages to the queue using the default credential option.
Can you please guide me on how to configure MassTransit in an EKS container with a specific role?
While @chris-patterson's answer does work for a node IAM role, the question asks about a container-specific role.
It is possible to do this in EKS by attaching an IAM role to a service account, then configuring the pod with that service account and the desired container image. This page describes something similar to what I did: https://dzone.com/articles/how-to-use-aws-iam-role-on-aws-eks-pods
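If you use eksctl, creating the service account and attaching the role can be done in one step. A sketch, where the cluster name, namespace, service account name, and policy ARN are all placeholders:

eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace my-namespace \
  --name my-service-account \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess \
  --approve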
Then in MassTransit you can assume the service account role as follows:
if (EC2InstanceMetadata.Region is { } ec2InstanceRegion)
{
    // EKS projects the web identity token into the pod at this path;
    // exchange it for temporary credentials for the service account role.
    var creds = new AssumeRoleWithWebIdentityCredentials(
        "/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
        ARN_OF_SERVICE_ACCOUNT_ROLE,
        null);
    sqsConfig.Host(ec2InstanceRegion.SystemName, awsConfig => awsConfig.Credentials(creds));
}
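In case it helps, sqsConfig above is the Amazon SQS bus factory configurator; a minimal sketch of where that snippet sits, assuming MassTransit v7-style container registration:

services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, sqsConfig) =>
    {
        // the region/credentials configuration from the snippet above goes here
    });
});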
You'll need to install the AWSSDK.SecurityToken NuGet package. And in case you're wondering, the path string comes from the AWS docs: https://docs.aws.amazon.com/eks/latest/userguide/pod-configuration.html
By the way, while developing locally I've been using LocalStack and I was initially stuck on how to configure that too. So for good measure, here's my local config:
const int localStackPort = 4566;
var serviceUrl = $"http://localhost:{localStackPort}";
sqsConfig.Host(
    new UriBuilder("amazonsqs", "localhost", localStackPort).Uri,
    awsConfig =>
    {
        // MassTransit uses both SNS and SQS, so point both clients at LocalStack
        awsConfig.Config(
            new AmazonSimpleNotificationServiceConfig { ServiceURL = serviceUrl });
        awsConfig.Config(new AmazonSQSConfig { ServiceURL = serviceUrl });
        awsConfig.Credentials(new AnonymousAWSCredentials());
    });
You might be able to use the instance credentials to configure the host:
cfg.Host("us-east-2", h =>
{
    h.Credentials(new InstanceProfileAWSCredentials());
});
I'm trying to create an API Gateway and integration using Terraform. I don't know how to link a custom authorizer to Authorization via Terraform.
I tried "x-amazon-apigateway-authtype" : "custom" and multiple aws docs. Kindly help
In addition to having an aws_apigatewayv2_authorizer resource, you also have to configure the authorizer on the aws_apigatewayv2_route resource.
For example:
resource "aws_apigatewayv2_route" "connect_route" {
api_id = aws_apigatewayv2_api.apigw.id
route_key = "$connect"
target = "integrations/${aws_apigatewayv2_integration.lambda-integration.id}"
authorization_type = "CUSTOM"
authorizer_id = aws_apigatewayv2_authorizer.authorizer.id
}
aws_apigatewayv2_authorizer adds the authorizer to the API Gateway, and aws_apigatewayv2_route sets it as the active authorizer for the route you set up.
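For completeness, here's a sketch of the matching authorizer resource; the names (apigw, the authorizer Lambda) are assumptions based on the route snippet above, and a $connect route implies a WebSocket API, so the type is REQUEST:

resource "aws_apigatewayv2_authorizer" "authorizer" {
  api_id           = aws_apigatewayv2_api.apigw.id
  authorizer_type  = "REQUEST"
  authorizer_uri   = aws_lambda_function.authorizer_lambda.invoke_arn
  identity_sources = ["route.request.header.Authorization"]
  name             = "custom-authorizer"
}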
I was able to create a key vault, add a secret, and display it on the screen by following this tutorial on YouTube. The only problem is that it only works when I deploy to Azure, and so far, all the code assumes that I want to deploy to Azure.
I found this response to a Stack Overflow question that explains how to do it in VS Code. The problem is that the code is different from mine, probably because the question was asked in 2019 while I'm using .NET 5.0. Here's my code. It was created by:
Going to Connected Services
Adding a service
Selecting Key Vault and following the wizard
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            var keyVaultEndpoint = new Uri(Environment.GetEnvironmentVariable("VaultUri"));
            config.AddAzureKeyVault(
                keyVaultEndpoint,
                new DefaultAzureCredential());
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
Each time I run it locally, I get the following exception.
{"error":{"code":"Forbidden","message":"Access denied to first party service.
Caller: name=from-infra;tid=f8cdef31-a31e-4b4a-93e4-5f571e91255a;
appid=872cd9fa-d31f-45e0-9eab-6e460a02d1f1;
...
"innererror":{"code":"AccessDenied"}}}
I've run the following command.
az keyvault set-policy --name 'myKeyvault' --object-id 872cd9fa-d31f-45e0-9eab-6e460a02d1f1 --secret-permissions get
A corresponding line was added to the key vault's Access Policies table.
Yet, when I tried to run the application locally, I still got the same error. Is there a step I am missing?
Thanks for helping
I used to fetch an Azure Key Vault secret via the sample below; I added an access policy for the user in my tenant that is also used to sign in to Visual Studio. This may help...
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

namespace key_vault_test
{
    class Program
    {
        static async Task Main(string[] args)
        {
            const string secretName = "test0120";
            var kvUri = "https://fortest0120.vault.azure.net/";
            var client = new SecretClient(new Uri(kvUri), new DefaultAzureCredential());
            var secret = await client.GetSecretAsync(secretName);
            Console.WriteLine($"Your secret is '{secret.Value.Value}'.");
        }
    }
}
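One thing worth checking in the question: the --object-id passed to az keyvault set-policy (872cd9fa-...) is the appid shown in the error message, not your own user's object id. Granting the account you use to sign in to Visual Studio may fix local runs; a sketch, reusing the vault name from the question with a placeholder UPN:

az keyvault set-policy --name 'myKeyvault' --upn 'you@yourtenant.onmicrosoft.com' --secret-permissions get list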
In my opinion, there's also another choice for obtaining secrets: the Key Vault REST API. What you need to do is create an Azure AD app and add an API permission for Key Vault. This API only has delegated permissions, so you can only use a user flow (auth code or ROPC) to generate the access token. Here you need to add an access policy for the application you registered and for the users (if there are many users, a group is preferred: add the users to a group and grant the group access).
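A minimal sketch of that REST approach, assuming you have already obtained an access token for the registered app (scope https://vault.azure.net/.default) and reusing the vault and secret names from the sample above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class KeyVaultRestSample
{
    static async Task Main()
    {
        var accessToken = "<token from the auth code or ROPC flow>"; // placeholder
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // GET {vaultBaseUrl}/secrets/{name}?api-version=7.2 returns the secret as JSON
        var json = await http.GetStringAsync(
            "https://fortest0120.vault.azure.net/secrets/test0120?api-version=7.2");
        Console.WriteLine(json);
    }
}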
I'm building a React app using API Gateway, Lambda and Cognito (basically starting from the https://serverless-stack.com tutorial). I would like to set up fine-grained access control to my DynamoDB (i.e. through IAM policies that restrict access to DynamoDB tables based upon the logged-in user, like https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_dynamodb_rows.html).
AFAIK, a Lambda function assumes a service role, as defined in the serverless.yml file, which has in itself nothing to do with the IAM policy attached to the logged-in Cognito user. I know that using an IAM authorizer, I can get info on the logged-in user.
My question: is it possible to have the Lambda make AWS calls on behalf of the given Cognito user, thus honoring the IAM policies attached to that user? (A bit similar to how the serverless-stack tutorial interacts with S3.)
All suggestions welcome.
You can explicitly specify to any AWS client library which credentials to use in order to sign requests (by default they are taken from the runtime environment):
import { DocumentClient } from 'aws-sdk/clients/dynamodb';

const client = new DocumentClient({
  credentials: ...
});
Those security credentials are obtained via STS. There are various scenarios for getting hold of the user's identity in order to obtain credentials, but usually you would either call assumeRole, if you have the ARN of a role, or assumeRoleWithWebIdentity, if there is an actual user that completed an OpenID Connect flow:
import { Credentials, STS } from 'aws-sdk';

const sts = new STS();
const stsResponse = await sts.assumeRole({
  RoleArn: 'can-be-cognito-group-arn',
  RoleSessionName: 'on-behalf-of-user', // required by AssumeRole
}).promise();
// or
// const stsResponse = await sts.assumeRoleWithWebIdentity({ WebIdentityToken: 'open-id-token' }).promise();
const credentials = new Credentials(
  stsResponse.Credentials.AccessKeyId,
  stsResponse.Credentials.SecretAccessKey,
  stsResponse.Credentials.SessionToken);
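The temporary credentials then plug into the DynamoDB client from the first snippet, so each call is signed as the assumed role and the user's fine-grained policy applies; a sketch with a hypothetical table and key:

import { DocumentClient } from 'aws-sdk/clients/dynamodb';

// Requests are signed with the assumed-role credentials, so per-user
// conditions like dynamodb:LeadingKeys are enforced by IAM.
const client = new DocumentClient({ credentials });

const result = await client.query({
  TableName: 'GameScores', // hypothetical table name
  KeyConditionExpression: 'UserId = :uid',
  ExpressionAttributeValues: { ':uid': 'cognito-identity-id' },
}).promise();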
I am trying to create a namespace on a K8s cluster on Azure using the fabric8 Java client. Here is the code:
@Before
public void setUpK8sClient() {
    apiServer = "";
    config = new ConfigBuilder().withMasterUrl(apiServer).withUsername("user").withPassword("pass").build();
    client = new DefaultKubernetesClient(config);
    System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, "true");
}

@Test
public void getClientVersion() {
    System.out.println("Client version " + client.getApiVersion());
}

@Test
public void createNamespace() {
    Namespace myns = client.namespaces().createNew()
            .withNewMetadata()
                .withName("myns")
                .addToLabels("a", "label")
            .endMetadata()
            .done();
    System.out.println("Namespace version " + myns.getStatus());
}
This gives me the following error:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://...api/v1/namespaces. Message: Unauthorized! Token may have expired! Please log-in again. Unauthorized
What did I miss?
Since you are working on Azure, I guess you could follow the instructions to configure kubectl and then use the token from the kubeconfig file to access the cluster from the fabric8 client.
That token is probably an admin token, so you can also create new credentials (user/password) if you want to limit what the fabric8 client could do. API requests are tied to either a normal user or a service account, or are treated as anonymous requests.
Normal users are assumed to be managed by an outside, independent service (private keys, third parties like Google Accounts, even a file with a list of usernames and passwords). Kubernetes does not have objects which represent normal user accounts.
Service accounts are users managed by the Kubernetes API, bound to specific namespaces. Service accounts are tied to a set of credentials stored as Secrets. To manually create a service account, simply use the kubectl create serviceaccount ACCOUNT_NAME command. This creates a service account in the current namespace and an associated secret that holds the public CA of the API server and a signed JSON Web Token (JWT).
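As a sketch of the token-based variant from the first paragraph (the master URL and token are placeholders you'd copy from your kubeconfig, or from a service account's secret):

import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class TokenAuthExample {
    public static void main(String[] args) {
        // Token taken from kubeconfig (users[].user.token) or a service account secret
        Config config = new ConfigBuilder()
                .withMasterUrl("https://<your-apiserver>")
                .withOauthToken("<token>")
                .build();
        try (KubernetesClient client = new DefaultKubernetesClient(config)) {
            System.out.println("API version: " + client.getApiVersion());
        }
    }
}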
We have an MVC app that connects to the Exchange server. We used to connect to an on-premises server using this code to create the service:
if (string.IsNullOrEmpty(Current.UserPassword))
{
    throw new UnauthorizedAccessException("Exchange access requires Authentication by Password");
}

return new ExchangeService
{
    Credentials = new NetworkCredential(Current.User.LoginName, Current.UserPassword),
    Url = new Uri(ConfigurationManager.AppSettings["ExchangeServiceUrl"]),
};
This worked fine, but now our IT department is migrating the Exchange server to the cloud, and some users are on the cloud server while others are on premises. So I changed the code to this:
if (string.IsNullOrEmpty(Current.UserPassword))
{
    throw new UnauthorizedAccessException("Exchange access requires Authentication by Password");
}

var user = ConfigurationManager.AppSettings["ExchangeUser"];
var password = ConfigurationManager.AppSettings["ExchangePassword"];
var exchangeService = new ExchangeService(ExchangeVersion.Exchange2010_SP2)
{
    Credentials = new NetworkCredential(user, password),
};
exchangeService.AutodiscoverUrl(Current.EmaiLuser + "@calamos.com", RedirectionCallback);
exchangeService.Credentials = new NetworkCredential(Current.EmaiLuser + "@calamos.com", Current.UserPassword);
return exchangeService;
I am using a service account to do the autodiscovery (for some reason it doesn't work with a regular account) and then I am changing the credentials of the service to the user that logs in, so he can access the inbox. The problem is that, randomly, the server returns "The request failed. The remote server returned an error: (401) Unauthorized."
I asked the IT department to check the Exchange logs, but there is nothing there about this error, so I don't know how to fix it...
So by cloud do you mean Office365?
I am using a service account to do the autodiscovery ( for some reason it doesn't work with a regular account)
For the users in the cloud, you need to ensure the requests are sent to the cloud servers. Maybe enable tracing (https://msdn.microsoft.com/en-us/library/office/dd633676(v=exchg.80).aspx) and then have a look at where the failed requests are being routed. From what you are saying, your Autodiscover is always going to point to your internal servers, which is why the request will fail for the cloud-based users. You need a way of identifying the users that are in the cloud, and I would suggest you then just use the single Office365 endpoint (i.e. you don't need Autodiscover for that): https://outlook.office365.com/EWS/Exchange.asmx
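For the cloud users that would look something like the sketch below, reusing the credential pattern from the question; the endpoint is the fixed Office365 EWS URL mentioned above:

// Sketch: skip Autodiscover for mailboxes already migrated to Office365
// and talk to the fixed EWS endpoint directly.
var exchangeService = new ExchangeService(ExchangeVersion.Exchange2010_SP2)
{
    Credentials = new NetworkCredential(Current.EmaiLuser + "@calamos.com", Current.UserPassword),
    Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx"),
};
return exchangeService;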