I would like to restrict a user's permissions, so they can't modify infrastructure without going through a process.
For example, as a requirement, a developer must open a PR, have it code-reviewed, and have the tests pass before it can be merged; they can't push to master until that is complete. Similarly, a user should not be able to run terraform apply, even though their AWS account has significant permissions to access/update/delete resources.
The issue is that running terraform plan is very helpful locally, and saves a lot of time when making changes to the HCL files.
Is there a way to restrict the terraform apply step, while still being able to run terraform plan?
Because Terraform and the associated providers run entirely on the machine where Terraform CLI is installed, those components alone are not able to implement any sort of access controls: a user could, for example, simply modify Terraform CLI or one of the providers to not enforce whatever checks you'd put in place.
Instead, enforcing permissions must be done by some other system. There are two main options for this, and these two options are complementary and could be implemented together as part of a "defense in depth" strategy:
Use the access control mechanisms offered by the remote system you are interacting with. For example, if you are working with Amazon Web Services then you can write IAM policies that only permit read access to the services in question, which should then be sufficient for most plan-time operations.
Unfortunately, the details of which permissions each AWS operation requires are often not clearly documented, so for AWS at least this approach usually involves some trial and error; other systems may have clearer documentation. (A sketch of such a read-only policy follows after these two options.)
Require all Terraform usage to go through some sort of remote automation, where the automation system can restrict which users are able to start which actions.
There are various automation products which enable restricting which actions are available to which users. HashiCorp also offers Terraform Cloud, which includes the possibility of running Terraform remotely either in an execution context provided by Terraform Cloud itself or via an agent running on your own infrastructure. You can configure Terraform Cloud to allow applying only through the version control workflow.
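As a rough sketch of the first option, the following boto3 snippet creates a read-only IAM policy covering a few plan-time actions. The policy name and the list of services/actions are only illustrative; the exact set you need depends on which providers and resources your configuration touches.

```python
import json
import boto3

# Minimal sketch: a read-only policy intended to cover plan-time operations.
# The action list below is illustrative only; the permissions Terraform needs
# depend on which services and resources your configuration manages.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPlanTimeReads",
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "s3:Get*",
                "s3:List*",
                "iam:Get*",
                "iam:List*",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="terraform-plan-readonly",  # illustrative name
    PolicyDocument=json.dumps(read_only_policy),
)
```

Attached to the group your developers use, a policy along these lines should be enough for terraform plan to refresh state, while terraform apply would fail as soon as it attempts a write.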
Related
Within a DevSecOps CI/CD pipeline, one of the best practices is to automatically discover and apply patches to vulnerable software prior to deployment.
Is it possible to check a CVE database, find patches, and then deploy? I want to build this capability into my pipeline.
The environments applicable to the above are AWS and Azure.
Can you provide examples of tools I could use to achieve the above?
• Automatically discover and apply patches to vulnerable open-source software prior to deployment.
A tool like Trivy may be your answer.
Trivy inspects the contents of an image to determine which packages it includes, scans those packages for known vulnerabilities, and can send the findings to AWS Security Hub.
If the scan does not uncover vulnerabilities, the Docker images are pushed to Amazon Elastic Container Registry for deployment.
To use the Security Hub integration, you must have Security Hub enabled in the AWS Region where you deploy this solution.
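For illustration, a pipeline step that gates the image push on the scan result might look roughly like this (the image name and severity threshold are placeholders; check which flags your Trivy version supports):

```python
import subprocess
import sys

# Placeholder image reference; substitute your own ECR repository and tag.
IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"

# Ask Trivy to exit non-zero when HIGH or CRITICAL findings exist, so the
# pipeline fails before the image is ever pushed.
scan = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)
if scan.returncode != 0:
    sys.exit("Vulnerabilities found; not pushing the image to ECR.")

# Only push the image if the scan came back clean.
subprocess.run(["docker", "push", IMAGE], check=True)
```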
There are many other open-source solutions out there. Trivy is just one of them.
I'm actively looking around to see if there's a way to restrict which AWS users can manage aliases on Lambda functions. One of the projects I'm adjacent to stages their Lambda functions with aliases, which requires us to add specialized permissions to the associated API Gateway. These permissions need to be added through the CLI. Since this isn't very intuitive, we would like to make sure that our production aliases can only be managed by specific people.
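(For reference, the specialized permission we add through the CLI is roughly equivalent to the following boto3 call; all names and ARNs below are placeholders.)

```python
import boto3

lambda_client = boto3.client("lambda")

# Grant API Gateway permission to invoke the "prod" alias of a function.
# Function name, alias, statement id, and source ARN are all placeholders.
lambda_client.add_permission(
    FunctionName="my-function",
    Qualifier="prod",  # the alias being targeted
    StatementId="apigateway-prod-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/GET/items",
)
```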
I'm hoping to move an application to AWS.
I would like to use Auto Scaling so that not all my EC2 instances are running when application usage is quiet.
My problem is.....
I have one service account used for all communication between the various components of the application and the servers in that environment.
We have a security exception within my company which allows us to use the service account to perform its actions on each individual server.
Every time we introduce a new server to the environment, we have to request that the security team update our exception list to allow the new server in as well.
There is no automatic method for doing this. We have to submit a request to the security team asking for the new server to be added to the exception.
So while Auto Scaling would be perfect, how can it work in this case if, each time a server is added, the security team needs to be notified so they can add the new server to the exception list?
Thanks
You can get notifications when your autoscale group scales either up or down. SNS can send a variety of things, including SMS (text) messages to a cell phone.
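A minimal sketch of wiring that up with boto3 (the group and topic names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Publish to an SNS topic whenever the group launches or terminates an
# instance; the topic can then fan out to SMS, email, and so on.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-app-asg",                          # placeholder
    TopicARN="arn:aws:sns:us-east-1:123456789012:asg-events",   # placeholder
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)
```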
While this would work, it is incredibly manual. The goal of an autoscale group is to let the environment expand and contract without human intervention. I personally would not implement this because, depending on the availability of your security team, they may be a bottleneck to scaling up. If for some reason they miss the scale-up event that signals them to do something, then you've got orphan machines that you're paying for that are doing nothing.
Additionally, there are ways to script the provisioning of a new machine, so perhaps there is a way to add what you want automatically. AWS calls this user data; you can learn a bit more about it from the AWS EC2 docs.
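For illustration only, here is roughly what attaching user data to a launch configuration looks like with boto3; the registration script is hypothetical and stands in for whatever your security process actually requires:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical first-boot script: replace the curl call with whatever
# registration or configuration step your security process requires.
user_data = """#!/bin/bash
curl -X POST https://security.example.internal/register -d "host=$(hostname)"
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-app-lc",  # placeholder
    ImageId="ami-0123456789abcdef0",      # placeholder
    InstanceType="t3.micro",
    UserData=user_data,
)
```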
But ultimately I'd really take a step back and look at your architecture. If you can't script the machine provisioning then autoscaling is not very worthwhile - it's just plain "have devops add another machine if needed and hope they remember to take it down when it's not needed".
Is Salt suited for PaaS?
Let's say I'd like to provision a PaaS compute service, such as Amazon BeanStalk, Azure Cloud Service (web role / worker role), or even a Heroku Dyno, as part of a SaltStack state (perhaps alongside a VM or a database). Each of these services exposes an API, and some provide an SDK, meaning that it should technically be possible for the master to provision the PaaS using a (Python) script.
Of course, SaltStack is primarily written for IaaS. However, is the above use case common/possible for SaltStack?
Short answer: If it has an API, Salt can talk to it.
Long answer:
There are currently no built in execution modules or states for provisioning Amazon Beanstalk, Azure Cloud Service*, or Heroku. That said, there's no reason there could not be. See, for example, the suite of boto_* execution modules and states (search for "boto_*" on http://docs.saltstack.com/en/latest/). Such state modules could be used in your state SLSs and execution modules could be called from a custom runner.
*I'm not personally familiar with the Azure platform or salt-cloud, but salt-cloud does support Azure.
Most PaaS services expose APIs with support in multiple languages. Using Python, for example, you can write custom modules that call those APIs and invoke the modules from Salt states as required.
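As a small, hypothetical illustration of such a custom execution module (the module name, function, and use of boto3 are assumptions, not an existing Salt module):

```python
# _modules/beanstalk.py -- a hypothetical custom execution module, synced out
# with saltutil.sync_modules.
import boto3


def create_environment(app_name, env_name, stack):
    """Create an Elastic Beanstalk environment via the AWS API.

    Callable from the CLI or from a state, e.g.:
        salt '*' beanstalk.create_environment myapp myapp-prod <solution-stack>
    """
    client = boto3.client("elasticbeanstalk")
    response = client.create_environment(
        ApplicationName=app_name,
        EnvironmentName=env_name,
        SolutionStackName=stack,
    )
    return response["EnvironmentId"]
```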
I know there are tools to manage your EC2 environment. I currently use the Eclipse Plugin and the iPhone app iAWSManager. What I'm looking for is a management service that allows you to create multiple users with roles and privileges. I have clients that sign up for EC2 but need help setting up and managing everything. At the very least, they should be able to set up multiple logins so they can monitor who is doing what on the account (rather than sharing a single login). Better still would be the ability to assign privileges for who can create and launch an instance, and create and assign (or just assign) Elastic IPs/EBS volumes to instances, etc.
Since enterprises are supposed to be using EC2, how do they manage this well? How do they create audit trails of activity?
RightScale, YLastic, and enStratus support roles and privileges. However, they are not free...
I'll add Scalr to the list, which is cloud management software like RightScale (disclaimer: I work there). We released our permissions feature last January. It allows you to create different teams and environments and assign them privileges on a granular basis, which means you can grant different permissions to different people. You can learn more in this blog post.
Scalr is available as a hosted service which includes support. If you are looking for a free solution, you can download the source code, which is released under the Apache 2 license, and install it yourself.
As mentioned earlier, RightScale and enStratus are two other alternatives.