I'm looking to see if there's a way to restrict which AWS users can manage aliases on Lambda functions. One of the projects I'm adjacent to stages its Lambda functions with aliases, which requires us to add specialized permissions to the associated API Gateway. These permissions need to be added through the CLI. Since this isn't very intuitive, we would like to make sure that our production aliases can only be managed by specific people.
Related
I've implemented two Lambdas (let's call them A and B) behind API Gateway. Assume A is called from "outside", and B is called both from outside and from A.
I've also implemented a Lambda authorizer (token-based; Cognito) as the auth layer. Everything is working as expected.
Is there a way to bypass authorizer process for B, for calls coming from A only?
Thanks
There are multiple possibilities I have explored myself in the past for the exact same issue.
Change the calls to lambda:Invoke
Assuming you're generating some client code for your micro-services, you can create two versions of these clients:
external to call your service via HTTP API
internal to use the lambda:Invoke operation to call your micro-service directly.
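A minimal sketch of the internal-client idea in Python with boto3 (the function name `service-b` and the JSON payload contract are assumptions; the calling Lambda's role would need `lambda:InvokeFunction` permission on the target):

```python
import json


def build_invoke_request(function_name, payload):
    """Build the keyword arguments for a synchronous lambda:Invoke call."""
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",  # wait for the response
        "Payload": json.dumps(payload).encode("utf-8"),
    }


def call_internal(event, function_name="service-b"):
    """Internal client: call the target Lambda directly, skipping API
    Gateway and therefore the authorizer. Assumes boto3 is available
    (it is in the standard Lambda runtimes)."""
    import boto3

    client = boto3.client("lambda")
    resp = client.invoke(**build_invoke_request(function_name, event))
    return json.loads(resp["Payload"].read())
```

The external client keeps calling the HTTP API with a token as before; only code running inside your own services uses this path.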
Create a mirrored VPC-private API
This is probably feasible if you're deploying your infrastructure using CDK (or a similar alternative). Essentially, you keep your existing API where it is, and you create another internal version of it that does not have the authorizer. (Note that you may still want some sort of authorization process happening depending on the nature of your project.)
From this point on, you can pass the endpoint of your internal HTTP API to the Lambdas as environment variables and have them call that.
You can find more info about this here. As a perk, you should get lower latencies when talking to API Gateway, since traffic through the VPC endpoints flows only over the AWS network instead of going out to the internet and back in.
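For example, the Lambda side of this might look like the following sketch (`INTERNAL_API_ENDPOINT` is a hypothetical variable name you would set at deploy time, e.g. from CDK):

```python
import json
import os
import urllib.request


def internal_url(path):
    """Join the configured internal API base URL with a request path."""
    base = os.environ["INTERNAL_API_ENDPOINT"].rstrip("/")
    return base + "/" + path.lstrip("/")


def call_internal_api(path, payload):
    """POST to the VPC-private API; no authorizer token is needed
    because the internal API has no authorizer attached."""
    req = urllib.request.Request(
        internal_url(path),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```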
Move your workloads to ECS
This is perhaps a major change to your project, but one worth mentioning.
You can create true micro-services using ECS. You can run these services in private subnets of your VPC. In order not to have to deal with IP addresses yourself, you can explore multiple options:
have a VPC-internal Route 53 hosted zone (more on this here); see more on ECS Service Discovery here
create Network Load Balancers in the private subnets of your VPCs and pass their endpoints to your services.
I would like to restrict a users permissions, so they can't modify infrastructure without going through a process.
For example, as a requirement, a developer must go through the process of opening a PR, code review, and passing tests before it is merged. They can't push to master until that is complete. Similarly, a user should not be able to run terraform apply, despite their AWS account having significant access to read/update/delete resources.
The issue is that running terraform plan is very helpful locally, and saves a lot of time when making changes to the HCL files.
Is there a way to restrict the terraform apply step, while still being able to run terraform plan?
Because Terraform and the associated providers run entirely on the machine where Terraform CLI is installed, those components alone are not able to implement any sort of access controls: a user could, for example, simply modify Terraform CLI or one of the providers to not enforce whatever checks you'd put in place.
Instead, enforcing permissions must be done by some other system. There are two main options for this, and these two options are complementary and could be implemented together as part of a "defense in depth" strategy:
Use the access control mechanisms offered by the remote system you are interacting with. For example, if you are working with Amazon Web Services then you can write IAM policies that only permit read access to the services in question, which should then be sufficient for most plan-time operations.
Unfortunately the details about which permissions are required for each operation in AWS are often not clearly documented, so for AWS at least this approach often involves some trial-and-error. Other systems may have clearer documentation.
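As a sketch, the plan-only role can be expressed as an IAM policy document built from whatever read/describe actions your configuration turns out to need (the action names below are illustrative examples; as noted above, the real list is usually found by trial and error):

```python
import json


def plan_only_policy(read_actions):
    """Build an IAM policy document granting only the read-level
    actions that `terraform plan` needs. The caller supplies the
    action list, since it varies per project."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowPlanTimeReads",
                "Effect": "Allow",
                "Action": sorted(read_actions),
                "Resource": "*",
            }
        ],
    }


# Example action list -- adjust to the providers/resources you use.
policy_json = json.dumps(
    plan_only_policy(["ec2:Describe*", "s3:GetBucketPolicy", "s3:ListBucket"]),
    indent=2,
)
```

You would attach this policy to the developers' IAM principals, while the apply-capable credentials live only in the automation system.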
Require all Terraform usage to be done remotely via some sort of remote automation, where the automation system can then restrict which users are able to start which actions.
There are various automation products which enable restricting which actions are available to which users. HashiCorp also offers Terraform Cloud, which includes the possibility of running Terraform remotely either in an execution context provided by Terraform Cloud itself or via an agent running on your own infrastructure. You can configure Terraform Cloud to allow applying only through the version control workflow.
I am planning to use AWS to host a global website that has customers all around the world. We will have a website and an app, and we will use a serverless architecture. I will also consider multi-region DynamoDB so that users can access the database instance closest to their region.
My question is about the best design for a solution that is not locked down to one particular region; we want a borderless implementation. I am also expecting high traffic and a high number of users across different countries.
I am looking at this https://aws.amazon.com/getting-started/serverless-web-app/module-1/ but it requires me to choose a region. I almost need a router in front of this with multiple S3 buckets, but I don't know how. For example, how do users access the copy of the landing page closest to their region? How do mobile app users call Lambda functions in their region?
If you could point me to a posting or article or simply your response, I would be most grateful.
Note: I would also be interested to know whether Google Cloud Platform is an option.
thank you!
S3
Instead of setting up an S3 bucket per-region, you could set up a CloudFront distribution to serve the contents of a single bucket at all edge locations.
During the Create Distribution process, select the S3 bucket in the Origin Domain Name dropdown.
Caveat: when you update the bucket contents, you need to invalidate the CloudFront cache so that the updated contents get distributed. This isn't such a big deal.
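The invalidation step can be automated after each deploy, e.g. with boto3 (a sketch; the distribution ID is yours, and invalidating `/*` is the blunt-but-simple option):

```python
import time


def invalidation_batch(paths):
    """Build the InvalidationBatch structure expected by
    cloudfront:CreateInvalidation."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        # CallerReference must be unique per request.
        "CallerReference": str(time.time()),
    }


def invalidate(distribution_id, paths=("/*",)):
    """Ask CloudFront to drop cached copies of the given paths so the
    updated bucket contents are fetched again at the edges."""
    import boto3  # assumed available

    cf = boto3.client("cloudfront")
    return cf.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=invalidation_batch(paths),
    )
```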
API Gateway
Setting up an API Gateway gives you the choice of Edge-Optimized or Regional.
In the Edge-Optimized case, AWS automatically serves your API via the edge network, but requests are all routed back to your original API Gateway instance in its home region. This is the easy option.
In the Regional case, you would need to deploy multiple instances of your API, one per region. From there, you could do a latency-based routing setup in Route 53. This is the harder option, but more flexible.
Refer to this SO answer for more detail.
Note: you can always start developing in an Edge-Optimized configuration, and then later on redeploy to a Regional configuration.
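For the Regional option, the Route 53 side might be sketched like this (the domain names are placeholders; for a real Regional API you would more likely use alias records pointing at each API's regional domain name rather than CNAMEs):

```python
def latency_record(name, region, target, set_id=None):
    """One latency-based routing record: Route 53 answers queries with
    the record whose region has the lowest measured latency to the
    resolver."""
    return {
        "Name": name,
        "Type": "CNAME",
        "SetIdentifier": set_id or region,  # must be unique per record set
        "Region": region,
        "TTL": 60,
        "ResourceRecords": [{"Value": target}],
    }


def change_batch(records):
    """ChangeBatch structure for route53:ChangeResourceRecordSets."""
    return {
        "Changes": [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]
    }


# Usage sketch (hosted zone ID and hostnames are assumptions):
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="Z123EXAMPLE",
#     ChangeBatch=change_batch([
#         latency_record("api.example.com.", "us-east-1", "us.api.example.com"),
#         latency_record("api.example.com.", "eu-west-1", "eu.api.example.com"),
#     ]),
# )
```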
DynamoDB / Lambda
DynamoDB and Lambda are regional services, but you could deploy instances to multiple regions.
In the case of DynamoDB, you could set up cross-region replication using stream functions.
Though I have never implemented it myself, AWS provides documentation on how to set up replication.
Note: Like with Edge-Optimized API Gateway, you can start developing DynamoDB tables and Lambda functions in a single region and then later scale out to a multi-regional deployment.
Update
As noted in the comments, DynamoDB has a feature called Global Tables, which handles the cross-region replication for you. It appears to be fairly simple: create a table, and then manage its cross-region replication from the Global Tables tab (from that tab, enable streams, and then add additional regions).
For more info, here are the AWS Docs
At the time of writing, this feature is only supported in the following regions: US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Frankfurt), EU West (Ireland). I imagine when enough customers request this feature in other regions it would become available.
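The same setup can be scripted with boto3 (a sketch; the table must already exist with streams enabled in each region before it can join the global table):

```python
def replication_group(regions):
    """ReplicationGroup structure for dynamodb:CreateGlobalTable."""
    return [{"RegionName": r} for r in regions]


def create_global_table(name, regions):
    """Link identically named tables in several regions into one global
    table. Assumes boto3 and suitable credentials."""
    import boto3

    ddb = boto3.client("dynamodb")
    return ddb.create_global_table(
        GlobalTableName=name,
        ReplicationGroup=replication_group(regions),
    )
```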
Also noted, you can run Lambda@Edge functions to respond to CloudFront events.
The Lambda function can inspect the AWS_REGION environment variable at runtime and then invoke a region-appropriate service (forwarding the request details to it, e.g. an API Gateway instance). This means you could also use Lambda@Edge as an API Gateway replacement by inspecting the query string yourself (YMMV).
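A sketch of that region-aware dispatch (the region-to-endpoint map and the hostnames are hypothetical placeholders):

```python
import os

# Hypothetical mapping from the region where this copy of the function
# is executing to the nearest regional API endpoint.
NEAREST_API = {
    "us-east-1": "https://api-us.example.com",
    "eu-west-1": "https://api-eu.example.com",
}
DEFAULT_API = "https://api-us.example.com"


def pick_regional_endpoint(region=None):
    """Choose the API endpoint closest to where the function runs,
    falling back to a default for regions not in the map."""
    region = region or os.environ.get("AWS_REGION", "")
    return NEAREST_API.get(region, DEFAULT_API)
```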
The ec2-describe-instances command is not very helpful in distinguishing the instances.
Are there command line tools that give a better overview?
Perhaps somewhat like http://github.com/newbamboo/manec2 but with support for different regions etc.
Amazon has recently added a feature to 'tag' your EC2 instances.
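With tags in place, a short script gives a much friendlier overview than ec2-describe-instances. This sketch uses modern boto3 rather than the old command-line tools (the `Name` tag key is a common convention, not a requirement):

```python
def summarize_instances(reservations):
    """Flatten describe_instances output into (id, Name tag, state) rows."""
    rows = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            rows.append(
                (inst["InstanceId"], tags.get("Name", "-"), inst["State"]["Name"])
            )
    return rows


def list_named_instances():
    """Fetch and summarize all instances in the current region.
    Assumes boto3 and configured credentials."""
    import boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances()
    return summarize_instances(resp["Reservations"])
```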
You can use security groups to identify your instances. See http://www.shlomoswidler.com/2009/06/tagging-ec2-instances-using-security_30.html
You can add an empty security group to each instance, so that the instances can be identified and allowed to communicate with each other.
I'm using http://github.com/cocoy/mr.awsome to provision and name instances via security groups. This way I can identify each instance whether I'm using Elasticfox, the AWS console, or mr.awsome itself.
See also: http://www.capsunlock.net/2010/05/five-easy-steps-to-tag-ec2-instance-using-mr-awsome.html
Cheers,
Rodney
http://www.capsunlock.net
I think Rodney's method of using security groups is the most comprehensive, but if you just want a way to assign a free-form tag to your instances, you can do that by setting up a free account at RightScale and using it to launch your instances. Warning: you'll have to put up with occasional emails from RightScale asking if you want to use their paid services.
This isn't a command-line solution, and it's far from perfect, but my company currently maintains a shared Word document in a Dropbox folder that maintains the role/name -> instance id mapping for all of our active instances.
Using a Word document also allows us to keep track of some more information that is nice to have available at a glance:
The "environment" the instance is a part of.
The RDP shortcut to the instance for simple access.
The AMI the instance was launched from.
Any additional attached volumes.
The region the instance is in.
etc.
I know there are tools to manage your EC2 environment. I currently use the Eclipse plugin and the iPhone app iAWSManager. What I'm looking for is a management service that allows you to create multiple users with roles and privileges. I have clients that sign up for EC2 but need help setting up and managing everything. At the very least, they should be able to set up multiple logins so they can monitor who is doing what on the account (rather than sharing a single login). Better still would be the ability to assign privileges governing who can create and launch an instance, and who can create and assign (or just assign) Elastic IPs and EBS volumes to instances, etc.
Since enterprises are supposed to be using EC2 how do they manage this well? How do they create audit trails of activity?
RightScale, Ylastic, and enStratus support roles and privileges. However, they are not free...
I'll add Scalr to the list; it is cloud management software like RightScale (disclaimer: I work there). We released our permissions feature last January. It allows you to create different teams and environments and assign them privileges on a granular basis, meaning you can grant different permissions to different people. You can learn more in this blog post.
Scalr is available as a hosted service which includes support. If you are looking for a free solution, you can download the source code, which is released under the Apache 2 license, and install it yourself.
As mentioned earlier, RightScale and enStratus are two other alternatives.