How to configure MinIO to allow anonymous users to download objects without being able to list buckets or objects

We have a MinIO server. Until now anonymous users were not able to do anything.
Now we want to allow them to download an object when they know the path.
e.g. https://minio.example.com/minio/download/image-bucket/cf1c42ad182849308c790d98dd89638f.png
I read that neither the mc command line tool nor the web UI is able to do this, but I didn't find out how to achieve it without those tools either.
What I did is create a new policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::images-live/*"
      ],
      "Sid": ""
    }
  ]
}
And I added it to the MinIO server with mc admin policy add minio getonly-policy policy-test.json.
Now I'm supposed to attach this to a user. How can I attach it to an anonymous user?

You can use:
mc policy set download play/test
Access permission for `play/test` is set to `download`
This will allow anonymous users to download objects without listing the bucket. If you want to customize the policy, use the mc policy set-json command.
An anonymous download then works with a plain HTTP GET, for example:
curl https://play.minio.io:9000/test/issue
Ubuntu 18.04.2 LTS \n \l
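To apply the custom JSON policy from the question rather than the canned download policy, here is a minimal sketch using mc policy set-json (the argument order is an assumption and varies between mc releases; newer ones spell this mc anonymous set-json):
# Apply the custom anonymous-access policy to the bucket
# (alias `minio` and bucket `images-live` are taken from the question).
mc policy set-json policy-test.json minio/images-live
# Verify: an anonymous GET on a known object path succeeds...
curl -fsS -o image.png https://minio.example.com/images-live/cf1c42ad182849308c790d98dd89638f.png
# ...while listing the bucket is denied, since s3:ListBucket is not granted.
curl -fsS https://minio.example.com/images-live/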

Related

Is the CloudFormation step necessary when we deploy the app to AWS Lambda, and if yes, what are the exact permissions required for it?

"arn:aws:iam::123456789:user/demo is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:ap-south-1:987654321:stack/demo-test-dev/* because no identity-based policy allows the cloudformation:DescribeStacks action."
When I try to upload the app it gives me this error, so can somebody help me out?
Note: I have an IAM user account with limited permissions.
You don't have enough permissions to be able to deploy the stack.
Yes, you need the permissions around CloudFormation, and to deploy resources in it you will need those resources' specific permissions too; see the documentation.
Here the error tells you that you need the cloudformation:DescribeStacks permission to continue.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudformation:DescribeStacks",
      "Resource": "arn:aws:cloudformation:ap-south-1:987654321:stack/demo-test-dev/*"
    }
  ]
}
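If you administer the IAM user from the CLI, here's a hedged sketch of attaching the statement above as an inline policy (the user name demo is taken from the error message; the file name cfn-describe.json is a placeholder):
# Attach the policy document above as an inline policy on the IAM user.
aws iam put-user-policy \
  --user-name demo \
  --policy-name AllowCfnDescribeStacks \
  --policy-document file://cfn-describe.json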

Serverless Framework - What permissions do I need to use AWS SSM Parameter Store?

I'm opening this question because there seems to be no documentation on this, so I would like to provide the answer after much time wasted in trial and error.
As background, the Serverless Framework allows loading both plaintext & SecureString values from AWS SSM Parameter Store.
What permissions are needed to access & load these SSM Parameter Store values when performing serverless deploy?
In general, accessing & decrypting AWS SSM parameter store values requires these 3 permissions:
ssm:DescribeParameters
ssm:GetParameters
kms:Decrypt
Here's a real-world example that only allows access to SSM parameters relating to my Lambda functions (distinguished by following a common naming convention/pattern). It works under the following circumstances:
SecureString values are encrypted with the default AWS SSM encryption key.
All parameters use the following naming convention:
a. /${app-name-or-app-namespace}/serverless/${lambda-function-name}/then/whatever/else/you/want
b. ${lambda-function-name} must begin with sls-
So let's say I have an app called myCoolApp, and a Lambda function called sls-myCoolLambdaFunction. Perhaps I want to save database config values such as username and password.
I would have two SSM parameters created:
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password (SecureString)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeParameters"
      ],
      "Resource": [
        "arn:aws:ssm:${region-or-wildcard}:${aws-account-id-or-wildcard}:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": [
        "arn:aws:ssm:${region-or-wildcard}:${aws-account-id-or-wildcard}:parameter/myCoolApp/serverless/sls-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:*:${aws-account-id}:alias/aws/ssm"
      ]
    }
  ]
}
Then in my serverless.yml file, I might reference these two SSM values as function-level environment variables like so:
environment:
  DATABASE_USERNAME: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username}
  DATABASE_PASSWORD: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password~true}
Or, even better, if I want to be super dynamic for situations where I have different config values depending on the stage, I can set the environment variables like so:
environment:
  DATABASE_USERNAME: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/${self:provider.stage}/database/username}
  DATABASE_PASSWORD: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/${self:provider.stage}/database/password~true}
With this above example, if I had two stages - dev & prod, perhaps I would create the following SSM parameters:
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password (SecureString)
/myCoolApp/serverless/sls-myCoolLambdaFunction/prod/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/prod/database/password (SecureString)
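For reference, a sketch of creating the dev pair of parameters with the AWS CLI (the values are placeholders):
# Plaintext parameter
aws ssm put-parameter \
  --name "/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username" \
  --type String \
  --value "dbuser"
# SecureString parameter, encrypted with the default AWS-managed SSM key
aws ssm put-parameter \
  --name "/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password" \
  --type SecureString \
  --value "s3cret"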
I suggest using the AWS SDK to get SSM parameters in code instead of saving them in an environment file (i.e. .env); it's more secure that way. You need to give the role you use a permission with action ssm:GetParameter and a resource pointing to the parameter in the SSM Parameter Store. I use the Serverless Framework for deployment. Below is what I have in serverless.yml, assuming parameter names with the pattern "{stage}-myproject-*" (e.g. dev-myproject-username, qa-myproject-password):
custom:
  myStage: ${opt:stage}
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${self:custom.myStage}
  region: us-east-1
  myAccountId: <aws account id>
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:${self:provider.region}:${self:provider.myAccountId}:parameter/${self:provider.stage}-myproject-*"
Two useful resources are listed below:
where to save credentials?
Serverless Framework IAM docs
In case you are using CodeBuild in a CI/CD pipeline, don't forget to add the SSM authorization policies to the CodeBuild service role. (When we are talking about SSM we have to differentiate between Secrets Manager and Parameter Store.)

Execution failed due to configuration error: Invalid permissions on Lambda function

I am building a serverless application using AWS Lambda and API Gateway via Visual Studio. I am working in C#, and using the Serverless Application Model (SAM) to deploy my API. I build the code in Visual Studio, then deploy via publish to Lambda. This works, except that every time I do a new build and try to execute an API call, I get this error:
Execution failed due to configuration error: Invalid permissions on Lambda function
Doing some research, I found this fix mentioned elsewhere (to be done via the AWS Console):
Fix: went to API Gateway > API name > Resources > Resource name > Method > Integration Request > Lambda Function and reselected my existing function, before "saving" it with the little checkmark.
Now this works for me, but it breaks the automation of using the serverless.template (JSON) to build out my API. Does anyone know how to fix this within the serverless.template file, so that I don't need to take action in the console? Here's a sample of one of my methods from the serverless.template file:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "An AWS Serverless Application.",
  "Resources": {
    "Get": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "VpcConfig": {
          "SecurityGroupIds": ["sg-111a1476"],
          "SubnetIds": ["subnet-3029a769", "subnet-5ec0b928"]
        },
        "Handler": "AWSServerlessInSiteDataGw::AWSServerlessInSiteDataGw.Functions::Get",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": ["AWSLambdaBasicExecutionRole", "AWSLambdaVPCAccessExecutionRole", "AmazonSSMFullAccess"],
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/",
              "Method": "GET"
            }
          }
        }
      }
    }
  }
}
You may have an issue in your permission config; that's why API Gateway couldn't call your Lambda. Try explicitly adding an invoke permission on your Lambda, with API Gateway as the principal, to your template.yaml file. Here's a sample:
ConfigLambdaPermission:
  Type: "AWS::Lambda::Permission"
  DependsOn:
    - MyApiName
    - MyLambdaFunctionName
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref MyLambdaFunctionName
    Principal: apigateway.amazonaws.com
Here's the issue that was reported in the SAM GitHub repo for complete reference, and here is an example hello-SAM project.
If you would like to add the permission via the AWS CLI for testing things out, you may want to use aws lambda add-permission. Please visit the official documentation for more details.
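A sketch of such a call (the function name, statement ID, and source ARN are placeholders):
# Allow API Gateway to invoke the function.
aws lambda add-permission \
  --function-name MyLambdaFunctionName \
  --statement-id apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abc123defg/*/GET/"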
I had a similar issue: I deleted and then re-installed a Lambda function. My API Gateway was still pointing at the old one, so I had to go into API Gateway and change my resource methods, altering the Integration Request setting to point to the new one (it may look like it's pointing to the correct one, but in my case it wasn't).
I was having the same issue, but I was deploying through Terraform. After a suggestion from another user, I reselected my Lambda function in the Integration part of API Gateway, and then checked what had changed in my Lambda permissions. It turns out I needed a "*" where I was putting the stage name in the source_arn section of the API Gateway trigger in my Lambda resource. I'm not sure how SAM compares to Terraform, but perhaps you can change the stage name, or just try the troubleshooting technique I used.
My SO posting: AWS API Gateway and Lambda function deployed through terraform -- Execution failed due to configuration error: Invalid permissions on Lambda function
Same error, and the solution was simple: clearing and applying the "Lambda Function" mapping again in the integration setting of the API Gateway.
My mapping looks like this: MyFunction-894AR653OJX:test, where "test" is the alias pointing to the right version of my Lambda.
The problem was caused by removing the alias "test" on the Lambda and recreating it on another version (after publishing). It seems that the API Gateway internally still links to the 'old' alias instance.
You would expect the match to be done purely on name...
Bonus: via the AWS Console you cannot move that alias, but you can via the AWS CLI, using the following command:
aws lambda --profile <YOUR_PROFILE> update-alias --function-name <FUNCTION_NAME> --name <ALIAS_NAME> --function-version <VERSION_NUMBER>
I had the same issue. I changed the integration to mock first, i.e. switched the integration type away from Lambda, deployed once, and then set the integration type back to Lambda. It worked flawlessly thereafter.
I hope it helps.
Facing the same issue, I figured out that the problem was that API Gateway was not able to invoke the Lambda function, as I couldn't see any CloudWatch logs for the Lambda function.
So firstly I went to the API Gateway console and, under the Integration Request, gave the full ARN for the Lambda function, and it started working.
Secondly, through CloudFormation:
x-amazon-apigateway-integration:
  credentials:
    Fn::Sub: "${ApiGatewayLambdaRole.Arn}"
  type: "aws"
  uri:
    Fn::Sub: "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${lambda_function.Arn}/invocations"
I had the same problem, so I deleted and re-created the stack, and it worked.
Looks like "Execution failed due to configuration error: Invalid permissions on Lambda function" is a catch-all for multiple things :D
I deployed a stack with CloudFormation templates and hit this issue.
I was using the stage name in the SourceArn for the AWS::Lambda::Permission segment.
When I changed that to a *, AWS was a bit more explicit about the cause, which in my case happened to be an invalid Handler reference (I was using Java; the handler had moved package) in the AWS::Lambda::Function section.
Also, when I hit my API GW I got this message:
{
  "message": "Internal server error"
}
It was only when I was in the console and sent the payload through as a test for the resource that I got the permissions error.
When I check the CloudWatch logs for the API GW configured that way, they do indeed mention the true cause, even when the stage name is explicit:
Lambda execution failed with status 200 due to customer function error: No public method named ...
In my case, I got the error because the Lambda function had been renamed. Double-check your configuration just in case.
Technically, the error message was correct: there was no function, and therefore no permissions. A more helpful message would, of course, have been useful.
I had a similar problem and was using Terraform. It needed the policy with the "POST" in it; for some reason the /*/ (wildcard) policy didn't work.
Here are the policy and the example Terraform I used to solve the issue.
Many thanks to all above.
Here is what my Lambda function policy JSON looked like, followed by the Terraform:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "AllowAPIGatewayInvoke",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:999999999999:function:MY-APP",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:execute-api:us-east-1:999999999999:d85kyq3jx3/test/*/MY-APP"
        }
      }
    },
    {
      "Sid": "e841fc76-c755-43b5-bd2c-53edf052cb3e",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:999999999999:function:MY-APP",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:execute-api:us-east-1:999999999999:d85kyq3jx3/*/POST/MY-APP"
        }
      }
    }
  ]
}
And add it in Terraform like this:
//************************************************
// Allows you to read in the ARN and parse out needed info, like region and account.
//************************************************
data "aws_arn" "api_gw_deployment_arn" {
  arn = aws_api_gateway_deployment.MY-APP_deployment.execution_arn
}

//************************************************
// Add this in to support API GW testing in the AWS Console.
//************************************************
resource "aws_lambda_permission" "apigw-post" {
  statement_id = "AllowAPIGatewayInvokePOST"
  action       = "lambda:InvokeFunction"
  //function_name = aws_lambda_function.lambda-MY-APP.arn
  function_name = module.lambda.function_name
  principal     = "apigateway.amazonaws.com"
  // e.g. "arn:aws:execute-api:us-east-1:473097069755:708lig5xuc/dev/POST1/cloudability-church-ws"
  source_arn = "arn:aws:execute-api:${data.aws_arn.api_gw_deployment_arn.region}:${data.aws_arn.api_gw_deployment_arn.account}:${aws_api_gateway_deployment.MY-APP_deployment.rest_api_id}/*/POST/${var.api_gateway_root_path}"
}
The documentation for AWS Lambda resource permissions shows 3 levels of access you can filter or wildcard, /*/*/*, which is documented as $stage/$method/$path. However, their example and most examples online only use 2 levels, and I was bashing my head against the wall using 3, only to get Access Denied. I changed down to 2 levels and Lambda then created the trigger. Hopefully this will save someone from throwing their computer against the wall.
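To illustrate the difference (the region, account, and API ID below are placeholders):
# 3-level form ($stage/$method/$path), which gave Access Denied here:
#   arn:aws:execute-api:us-east-1:123456789012:abc123defg/dev/POST/mypath
# 2-level form, which created the trigger successfully:
#   arn:aws:execute-api:us-east-1:123456789012:abc123defg/*/POST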
In my case I used a Lambda path that doesn't start with a '/', like Path: "example/path" in my template.yaml.
As a result, AWS generated an incorrect permission for this Lambda:
{
  "ArnLike": {
    "AWS:SourceArn": "arn:aws:execute-api:{Region}:{AccountId}:{ApiId}/*/GETexample/path/*"
  }
}
So I fixed it by adding a '/' to my Lambda path in the template.

Maintaining OAuth keys between Elastic Beanstalk deployments

I have a Laravel application running in an AWS Elastic Beanstalk environment. I use Laravel Passport to handle authentication.
Every time I run eb deploy the keys are deleted, since they are not part of the version-controlled files (they are included in .gitignore). Thus, I have to manually run php artisan passport:keys on the EC2 instance to generate the keys. But this makes all users need to log in again, because the old tokens are now invalid, since it's a new key pair.
What is the best practice to provide a consistent oauth-public and oauth-private key for my configuration?
I am thinking of including the keys in the repository, but I believe this is not recommended.
Another way is to generate the keys once, upload them to S3, and have a post-deployment script retrieve them from S3.
Is there any better way?
I managed to solve this yesterday, with S3.
Create a private S3 bucket, where you store your sensitive files (oauth-private.key, etc.).
In your .ebextensions directory, you have to create a .config file where you define a Resource (see https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-files, authentication section) looking like this:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["<BUCKET-NAME>"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
This assumes A) your S3 bucket is called <BUCKET-NAME> and B) the IAM instance profile in your Elastic Beanstalk environment is called aws-elasticbeanstalk-ec2-role.
Now you have to add the files to a location on the instance where you can access them; you're free to choose where. In your .config file, insert the following:
files:
  "/etc/keys/oauth-private.key":
    mode: "000755"
    owner: webapp
    group: webapp
    authentication: "S3Auth" # Notice, this is the same name as specified in the Resources section
    source: "https://<BUCKET-NAME>.s3-<REGION>.amazonaws.com/<PATH-TO-THE-FILE-IN-THE-BUCKET>"
Now for this to work, you still need to grant access to the IAM instance profile (aws-elasticbeanstalk-ec2-role), so you need to edit your bucket's policy, like this:
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ID>:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET-NAME>/*"
      ]
    }
  ]
}
You can find the ARN of the IAM instance profile by going to the IAM Dashboard > Roles > aws-elasticbeanstalk-ec2-role and copying the Role ARN.
In your Laravel application, you then have to use Passport::loadKeysFrom('/etc/keys').

How to pass environment variables when programmatically starting a new Amazon EC2 from image?

I am using the AWS Java API RunInstances() to start a new EC2 instance from my custom AMI image. How do I pass environment variables, such as a database URL or AWS credentials, to the new EC2 instance?
http://alestic.com/2009/06/ec2-user-data-scripts explains how to do this with user data. For gotchas about using Java, see AmazonEC2 launch with userdata.
Note that I've seen mention that this doesn't work on Windows, only Unix.
[Update] More data on setting environment variables here: https://forums.aws.amazon.com/message.jspa?messageID=139744
[After much testing] For me, echoing the environment variables into /etc/environment works best, like this:
reservation = connection.run_instances(
    image_id = image_id,
    key_name = keypair,
    instance_type = 'm1.small',
    security_groups = ['default'],
    user_data = '''#!/bin/sh\necho export foozle=barzle >> /etc/environment\n''')
Then upon login:
ubuntu@ip-10-190-81-29:~$ echo $foozle
barzle
DISCLAIMER: I am not a sysadmin!
I use a secure S3 bucket, meaning a bucket that only the instance you're launching has access to. You can set up an IAM role that looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::some-secure-bucket/*"
    }
  ]
}
Then you can upload your .env file to that bucket (store it encrypted). To access it on your EC2 instance, you can use the AWS CLI tools:
sudo apt-get install -y python-pip (for the aws s3 CLI library)
sudo pip install awscli
aws s3 cp --region us-east-1 s3://some-secure-bucket/.some-dot-env-file output_file_path
You can pull this file down when the code runs, or optionally make it happen at boot by putting the aforementioned cp command in an init script located somewhere like /etc/init.d/download_credentials.sh.
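A minimal sketch of such a boot script (the script path follows the example above; the output path /var/app/.env is a placeholder):
#!/bin/sh
# /etc/init.d/download_credentials.sh
# Pull the dot-env file from the secure bucket at boot.
aws s3 cp --region us-east-1 \
  s3://some-secure-bucket/.some-dot-env-file /var/app/.env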
I think this is a really good option for downloading things that every instance using an AMI needs, like credentials. However, if you want to specify per-instance metadata, I implemented this using tags, which I think works nicely. To do this, alter the above IAM role with something more like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::some-secure-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
Then install the ec2-api-tools:
sudo sed -i.dist 's,universe$,universe multiverse,' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y ec2-api-tools
And now you should be able to get per-instance metadata through tags, such as the "Name" of your instance:
ec2-describe-tags --filter resource-id="$(ec2metadata --instance-id)" --filter "key=Name" | cut -f5
Note: I suck at bash, so I'm stripping the name in Ruby, but you could use tr to remove the newline if you're into that!
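For completeness, the tr variant mentioned above might look like this:
# Same tag lookup, with the trailing newline stripped by tr.
ec2-describe-tags --filter resource-id="$(ec2metadata --instance-id)" \
  --filter "key=Name" | cut -f5 | tr -d '\n'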
You can also use instance metadata retrieval, as explained at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html.
From the above document, the following GET request retrieves the user data of an instance if you run it from within the instance:
GET http://169.254.169.254/latest/user-data
This way, user data can be retrieved dynamically even after the instance has started and is running.
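For example, with curl from within the instance (this is the IMDSv1 style; newer instances may require an IMDSv2 session token first):
# Fetch this instance's user data from the metadata service.
curl -s http://169.254.169.254/latest/user-data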
