I want a file stored in an S3 bucket to be available only for users of a specific web application located on an EC2 instance.
This is answered here, but it leads to a deleted link...
It is also explained here, but I don't know how to implement the solution... I'm supposed to include this code somewhere:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
How do I do this?
The code you posted above is an AWS security policy, specifically an S3 bucket policy.
You need to do the following things:
Step 1: Create an IAM user from AWS Dashboard -> Users -> Create New Users.
Step 2: Create a policy for your bucket from AWS Dashboard -> Policies -> Get Started -> Create Policy.
Step 3: Attach the policy created in Step 2 to the user created in Step 1.
Now your bucket will be accessible only to the user you created, with the policies you specified.
Since your policy has the rule
"aws:Referer":["http://www.example.com/*","http://example.com/*"]
objects will be served only to requests whose Referer header matches the URLs you have specified. Note that this particular condition belongs in a bucket policy attached to the bucket itself (Permissions -> Bucket Policy), and since the Referer header can be spoofed, treat it as a deterrent rather than strong access control.
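If you prefer to apply the bucket policy from code rather than the console, here is a minimal sketch using Python and boto3 (assuming the examplebucket name from the policy above):

import json
import boto3

# The Referer-based bucket policy from the question.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow get requests originated from www.example.com and example.com",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringLike": {
                "aws:Referer": ["http://www.example.com/*", "http://example.com/*"]
            }
        }
    }]
}

s3 = boto3.client("s3")
# The policy is attached to the bucket itself, not to an IAM user.
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))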
Related
I have a Keycloak server set up. I am using the token endpoint http://localhost:8080/auth/realms/demo/protocol/openid-connect/token to authenticate a user and generate a token. This token, I understand, can be used in subsequent calls to verify that the caller is a valid user.
However, I am not sure how to use this token to authorize the user, i.e. verify whether this user has the roles required to access a resource.
I see that it is possible to configure a resource URI under the client section. But once that is done, I want to be able to read the token and verify if this user has the roles to access this resource.
Right now, this is what I am doing:
I have used spring boot here.
doSomething(String token)
{
    // 1. Get token info using: http://localhost:8080/auth/realms/demo/protocol/openid-connect/userinfo
    // 2. From this, get the roles the user has.
    // 3. Manually check the roles required for this function (right now, this is set in a simple switch statement).
    // 4. If the roles obtained in step 2 match what we get in step 3, go ahead; else return failure.
}
I want to know if step 3 above can be done in a better way. I know that you can set a resource in clients from the Keycloak console. What I was hoping is that we could replace the 4 steps above with something like:
keycloakAPIToAuthorizeToken(token,resource)
which would tell me whether this user has the roles (obtained from token) to access this resource.
Please suggest if this is doable.
Thanks in advance.
Om
There are MANY ways to do this. One of them is to use Keycloak's roles and assign those roles to users. That way, in your server/API you can check whether the user has that role and proceed with or reject the call.
For example, in my API project I have some endpoints that are exclusive to system administrators, so I have a role named SystemAdministrators (created under Roles in the Keycloak admin console).
Then, if you go to Users -> Role Mappings, you can add that role to the users you want to set as admins.
Then, any API call should include a bearer token (obtained from the Keycloak login). In your code, you can decode this JWT (how depends on the language you are using; I use Python, so it's pretty simple). It will contain a "realm_access" element and, inside it, a "roles" element, for example:
"realm_access": {
"roles": [
"offline_access",
"uma_authorization",
"SystemAdministrators"
]
},
If that element contains the role SystemAdministrators, you know the caller is a system administrator.
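In Python, for example, a minimal sketch with the PyJWT library (has_role is a hypothetical helper; skipping signature verification is for illustration only, and in production you should verify the token against the realm's public key):

import jwt  # PyJWT

def has_role(token: str, role: str) -> bool:
    # Illustration only: decode without verifying the signature.
    claims = jwt.decode(token, options={"verify_signature": False})
    roles = claims.get("realm_access", {}).get("roles", [])
    return role in roles

# Reject the call unless the bearer is a system administrator, e.g.:
# if not has_role(bearer_token, "SystemAdministrators"): deny()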
I found this to be the simplest way to do it. You can get fancier and use role attributes to determine individual options inside screens, etc. (attributes are key/value pairs, so the way I implemented this is: the key is an option name and the value is the permission level, for example "screen1": "read", "screen2": "write", and so on). For this, you would need to use the Keycloak admin API: https://www.keycloak.org/docs-api/6.0/rest-api/index.html#_roles_resource which has many endpoints that can help you.
I have a backend API written in the Laravel framework. Every client request goes through a single AWS API Gateway. This gateway only verifies the user's identity (authentication) and proxies the request to the PHP backend API. The gateway does not have a separate API for each resource like user, product, order, etc. I have two types of users in the AWS user pool: admin and normal users. Now, I want to restrict certain PHP API endpoints (edit, delete) so that non-admin users cannot call them.
One way of doing this is to maintain the user information with their roles and permissions in the database and handle the logic in the PHP code itself. While reading the AWS documentation, I saw that it can control access to the API. As I have already mentioned, I don't have a separate API Gateway for each resource, so I don't know if it is possible to control access in the gateway itself. Can somebody help me decide which approach I should use? Is maintaining RBAC logic in the PHP code the right approach, or is it overkill?
Simplest way might be to go with standard IAM roles and policies: https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html
To allow an API caller to invoke the API or refresh its caching, you must create IAM policies that permit a specified API caller to invoke the API method for which the IAM user authentication is enabled. The API developer sets the method's authorizationType property to AWS_IAM to require that the caller submit the IAM user's access keys to be authenticated. Then, you attach the policy to an IAM user representing the API caller, to an IAM group containing the user, or to an IAM role assumed by the user.
You can find more information in the official documentation, but here is an IAM policy example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Permission",
      "Action": [
        "execute-api:Execution-operation"
      ],
      "Resource": [
        "arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path"
      ]
    }
  ]
}
Note that this is a template from the documentation: Permission is a placeholder for Allow or Deny, and Execution-operation is a placeholder for an action such as execute-api:Invoke.
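Once a method's authorizationType is set to AWS_IAM, the caller must sign requests with their IAM credentials. As a rough sketch of what such a call could look like from Python (the endpoint URL and resource path are hypothetical):

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Hypothetical endpoint of a method with authorizationType AWS_IAM.
url = "https://api-id.execute-api.us-east-1.amazonaws.com/prod/orders"

# Sign the request with the caller's IAM credentials (SigV4, service "execute-api").
credentials = boto3.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(request)

# API Gateway verifies the signature and evaluates the attached IAM policy;
# if the caller is not allowed to invoke this method, it returns 403 before
# the request ever reaches the backend.
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)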
The steps that I have already done are as follows:
1) I've set up my EC2 instance already.
2) I've linked it up to Amazon CodeDeploy.
3) I've created an S3 bucket that will hold my cloud code when I push it to my instance.
4) I've created a folder that will contain my cloud code.
5) I've initialised npm within it and created an index.js file (I wrote it in Sublime Text, actually - not sure if this is correct or not?).
6) I've set things up from the command line so that index.js is the main entry point.
7) I have put the following email adapter code within it:
var server = ParseServer({
    // Enable email verification
    verifyUserEmails: true,
    // The public URL of your app.
    // This will appear in the link that is used to verify email addresses and reset passwords.
    // Set the mount path as it is in serverURL
    publicServerURL: 'etc',
    // Your app's name. This will appear in the subject and body of the emails that are sent.
    appName: 'etc',
    // The email adapter
    emailAdapter: {
        module: 'parse-server-simple-mailgun-adapter',
        options: {
            // The address that your emails come from
            fromAddress: 'parse@example.com',
            // Your domain from mailgun.com
            domain: 'example.com',
            // Your API key from mailgun.com
            apiKey: 'key-mykey',
        }
    }
});
8) I've set up an account with Mailgun; it's asking me for a domain to send the emails from, and I'm not sure what this is.
My main question is regarding the code I posted above. Is this enough to put into index.js to create an email adapter? And is uploading a file written in Sublime Text OK? How can the cloud know what the "ParseServer" class is without importing libraries? Do I have to add any more code to index.js?
Additionally, what else do I need in the cloud code package besides the index.js file? This has been such an obscure topic and there seem to be no clear guides online as to how to upload functional cloud code to Amazon EC2 instances.
Any help appreciated, cheers.
Part of your steps are correct, but you must also modify the from field and the domain. The domain (to answer your question) must be taken from your Mailgun account. You must complete some extra steps to set up your domain with your DNS provider (e.g. GoDaddy). If you don't want to do that, you can try the default sandbox domain that Mailgun provides, but it's better to use your own domain. In the from field you need to put an email address so that users who receive the message can see which address it was sent from; what I usually like to use is donotreply@<your domain>.
In my project, this is how I configure it (and it works):
"verifyUserEmails": true,
"emailAdapter": {
"module": "parse-server-simple-mailgun-adapter",
"options": {
"fromAddress": "donotreply#*******.com",
"domain": "mail.*******.com",
"apiKey": "<API_KEY_TAKEN_FROM_MAILGUN>"
}
},
Your list of domains in Mailgun can be found here: https://app.mailgun.com/app/domains (login is required, of course).
Here you can read how to verify your domain: https://documentation.mailgun.com/en/latest/user_manual.html#verifying-your-domain
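To sanity-check the domain and API key before wiring them into the adapter, you can send a test message through Mailgun's HTTP API; a minimal Python sketch (the domain, key, and addresses are placeholders):

import requests

MAILGUN_DOMAIN = "mail.example.com"   # your domain from app.mailgun.com
MAILGUN_API_KEY = "key-mykey"         # your private API key

response = requests.post(
    "https://api.mailgun.net/v3/{}/messages".format(MAILGUN_DOMAIN),
    auth=("api", MAILGUN_API_KEY),
    data={
        "from": "donotreply@" + MAILGUN_DOMAIN,
        "to": "you@example.com",
        "subject": "Mailgun test",
        "text": "If you can read this, the domain and API key work.",
    },
)
print(response.status_code, response.text)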
Hope it helps.
I am trying to use the Google Cloud Resource Manager API to test whether the authenticated user has permissions to a given project. I have read the Google Cloud Resource Manager API documentation (https://cloud.google.com/resource-manager/reference/rest/v1/projects/testIamPermissions) and have tried sending requests, all of which fail with the following error:
{ "error": { "code": 400, "message": "Request contains an invalid argument.", "status": "INVALID_ARGUMENT" } }
The POST request is:
https://cloudresourcemanager.googleapis.com/v1/projects/{projectId}:testIamPermissions
where {projectId} is a defined projectId from the Google Cloud Developer Console. I am aware that I can use the projects.list method and determine whether the given projectId is present in the list of projects for the user. Instead, I want to understand how to use the projects.testIamPermissions request and determine which permissions the user has on the project.
In order to use the Cloud Resource Manager API methods organizations.testIamPermissions or projects.testIamPermissions, you need to provide the resource you'd like to check in the URL and then the permissions you'd like to check in the body.
So, for example, if I want to test if I, the authenticated user, have access to a particular permission (ex. compute.instances.create) on a particular project (ex. my-project) then, I would POST this:
{
  "permissions": [
    "compute.instances.create"
  ]
}
to the URL:
https://cloudresourcemanager.googleapis.com/v1/projects/my-project:testIamPermissions
which would give me the following response:
{
  "permissions": [
    "compute.instances.create"
  ]
}
because I do in fact have permissions to create new instances in my-project.
However, if I did not have permission to create new instances, the response would look like:
{
}
Try it via the API Explorer on the testIamPermissions reference page linked above.
If your goal is to find the full set of permissions that the user has on the project, then you have to provide the full list of all project-level permissions in your request body, and the response will include the subset of those permissions that the user has.
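If you'd rather call this from code than the API Explorer, here is a minimal Python sketch using the google-auth library (my-project and the permission list are placeholders):

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Uses Application Default Credentials with the cloud-platform scope.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

url = ("https://cloudresourcemanager.googleapis.com/v1/"
       "projects/my-project:testIamPermissions")
body = {"permissions": ["compute.instances.create", "storage.buckets.delete"]}

# The response echoes back only the subset of permissions the caller has.
granted = session.post(url, json=body).json().get("permissions", [])
print(granted)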
Hope this helps!
IAM has a companion service called the Security Token Service (STS). It allows you to create temporary credentials through the SDK/API to access AWS resources without having to create dedicated credentials. These STS tokens have a user-defined lifetime and are destroyed after it expires. People use this service for accessing content from mobile devices such as Android/iOS apps.
But I don't know how to use this service.
Any help or support is appreciated.
Thanks
STS and IAM go hand in hand, and STS is really simple to use. Since you have not given a use case, please allow me to explain a few things before we get into coding.
Note: I code in PHP and the SDK version is 3.
The idea of STS is to create tokens which allow the bearer to perform certain actions without you (the owner or grantor) compromising your own credentials. Which STS action you use depends on what you want to do. The possible actions are listed in the STS API reference.
E.g. 1: Typically, you use AssumeRole for cross-account access or federation. Imagine that you own multiple accounts and need to access resources in each account. You could create long-term credentials in each account to access those resources. However, managing all those credentials and remembering which one can access which account can be time consuming. Instead, you can create one set of long-term credentials in one account and then use temporary security credentials to access all the other accounts by assuming roles in those accounts.
E.g. 2: Typically, you use GetSessionToken if you want to use MFA to protect programmatic calls to specific AWS APIs like Amazon EC2 StopInstances.
Let us assume you have an IAM user and you want to create many temporary credentials for that user, each credential with a time frame of 15 minutes. Then you will write the following code:
// Create an STS client via the Laravel AWS facade (AWS SDK for PHP v3).
$stsClient = \Aws\Laravel\AwsFacade::createClient('sts', array(
    'region' => 'us-east-1'
));

// Request temporary credentials valid for 15 minutes (900 seconds).
$awsTempCreds = $stsClient->getSessionToken(array(
    'DurationSeconds' => 900
));
Points to note:
Credentials that are created by IAM users are valid for the duration that you specify, from 900 seconds (15 minutes) up to a maximum of 129600 seconds (36 hours), with a default of 43200 seconds (12 hours); credentials that are created by using account credentials can range from 900 seconds (15 minutes) up to a maximum of 3600 seconds (1 hour), with a default of 1 hour.
In the above example I am getting $stsClient using the AWS facade, which is part of the Laravel framework. It is up to you how you get hold of $stsClient by passing credentials; the AWS SDK for PHP installation guide explains how to instantiate it.
Since STS is a global service, i.e. it does not require you to be in a specific region, you MUST ALWAYS set the region to us-east-1. If your region is set to anything else, you will get errors like "should be scoped to a valid region, not 'us-west-1'".
There is no limit on how many temporary credentials you can make.
These credentials will have the SAME permissions as the account/IAM user from which they are derived.
The above code returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token, plus some other information such as the expiry time.
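For comparison, here is a framework-free sketch of the same call in Python with boto3 (assuming default credentials are already configured):

import boto3

# STS is global; scope the client to us-east-1 as noted above.
sts = boto3.client("sts", region_name="us-east-1")

# Request temporary credentials valid for 15 minutes.
response = sts.get_session_token(DurationSeconds=900)
creds = response["Credentials"]

# AccessKeyId, SecretAccessKey, SessionToken and Expiration come back together.
print(creds["AccessKeyId"], creds["Expiration"])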
You can now give these temporary credentials to someone else. Let us say I gave them to my friend, who happens to use the JavaScript API. He can now write code like this:
<script>
// Temporary credentials injected server-side (from the PHP STS call above).
var accessKey = '<?php echo $credentials["AccessKeyId"] ?>';
var secretKey = '<?php echo $credentials["SecretAccessKey"] ?>';
var sessionToken = '<?php echo $credentials["SessionToken"] ?>';

AWS.config.update({
    credentials: {
        accessKeyId: accessKey,
        secretAccessKey: secretKey,
        sessionToken: sessionToken
    },
    region: 'us-east-1'
});

var bucket = new AWS.S3();

// fileChooser is an <input type="file"> element; results is a status element.
var file = fileChooser.files[0];
var params = {Bucket: 'mybucket', Key: file.name, ContentType: file.type, Body: file};

bucket.upload(params, function (err, data) {
    results.innerHTML = err ? 'ERROR!' : 'UPLOADED.';
}).on('httpUploadProgress', function (evt) {
    console.log('Progress:', evt.loaded, '/', evt.total);
});
</script>
What does this script do?
Creates a new client using temporary credentials.
Uploads a very large file (more than 100MB) to mybucket using Multipart Upload.
Similarly, you can perform operations on any AWS resource as long as your temporary credentials have permission to do so. Hope this helps.
STS is a little tough to understand (you need to put real time into reading about it). I will try (yes, try!) to explain it as simply as possible. The service is useful if you need to do things like this:
You have a bucket on S3, and say 100 users whom you would like to let upload files to it. Obviously you should not distribute your AWS key/secret to all of them. Here comes STS: you can use it to allow these users to write to a "portion" of the bucket (say, a folder named after their Google ID) for a "limited time" (say, 1 hour). You achieve this by doing the required setup (IAM, S3 policy and STS AssumeRole) and sending them a URL which they use to upload. In this case, you can also use Web Identity Federation to authenticate users via Google/Facebook/Amazon.com, so this workflow is achievable with no backend code. The Web Identity Federation Playground gives you a sense of how this works.
You have an AWS account and want somebody else (another AWS user) to help you manage it. Here again, you give the other user limited-time access to a selected portion of your AWS resources without sharing the key/secret.
Assume you have a DynamoDB setup with a row of data per app user. Here you need to ensure that a given user can write only to his own row and no others. You can use STS to set up restrictions like this; see the sketch after this list.
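As a rough illustration of the AssumeRole flow behind scenarios like these, a minimal boto3 sketch (the role ARN and session name are hypothetical; the role's attached policy is what actually scopes the temporary credentials):

import boto3

sts = boto3.client("sts", region_name="us-east-1")

# Hypothetical role whose policy only allows writing to one folder of the bucket.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/UploaderRole",
    RoleSessionName="user-google-id-12345",
    DurationSeconds=3600,  # limited-time access: 1 hour
)
creds = response["Credentials"]

# A client built from the temporary credentials can only do what the role allows.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)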
A complete example is here:
Web Identity Federation with Mobile Applications : Articles & Tutorials : Amazon Web Services : http://aws.amazon.com/articles/4617974389850313
More reading:
Creating Temporary Security Credentials for Mobile Apps Using Identity Providers - AWS Security Token Service : http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html