We maintain dozens of developer accounts on AWS, and for maintenance purposes it would be great to have a common set of scripts available in every CloudShell environment.
It is possible to upload files to a CloudShell environment manually using the Actions->Upload File feature in the web console, but that is not feasible for managing dozens of environments.
Is there an Ansible module or another way to upload files to CloudShell? Probably via an S3 bucket, but we're missing the last mile into the CloudShell environment.
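To illustrate the S3 route we have in mind, a rough sketch of what that last mile could look like is below; the bucket name, prefix, and target directory are placeholders, not an established setup:

```sh
# Hypothetical: run inside CloudShell (for example appended to ~/.bashrc, since
# CloudShell persists the home directory per region) to pull shared scripts
# from a central S3 bucket into a local tools directory.
mkdir -p ~/tools
aws s3 sync s3://shared-admin-scripts/cloudshell/ ~/tools/
chmod +x ~/tools/*.sh 2>/dev/null
export PATH="$HOME/tools:$PATH"
```

This still leaves open how to get the snippet into each user's CloudShell in the first place, which is the part the question is really about.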
Good Day.
I am trying to place restrictions on deploying to specific environments from the CLI for specific members of my team.
I am happy for them to deploy to our Dev environment locally, but I would like to restrict them from doing so for our Staging and Production environments.
I have read the documentation and tried to apply these restrictions through the CLI and the dashboard, but it seems as though I can only restrict deployments across all environments for a single user.
At the moment, this is simply not possible.
Vapor (laravel/vapor-core v2.0) only allows you to apply permissions per user across all projects and environments, according to a discussion I had with one of the support technicians. The permissions you can edit are under the Team Settings menu on the Vapor Dashboard, and they apply to a specific user across all projects and their respective environments.
The Ansible playbook I'm running via AWS CodeBuild only deploys to the same account. Instead of using a separate build for each account, I'd like to use a single build and manage multi-account deployment via the Ansible inventory. How can I set up the Ansible static inventory to add YAML files for every other AWS account or environment it will be deploying to? That is, the inventory classifies those accounts into dev, stg and prod environments.
I know a bit about how this should be structured: create a YAML file in the inventory folder named after the account, and create a corresponding file in the group_vars subfolder without the .yml extension. But I do not know the details of the file contents. Can you please explain this to me?
On the CodeBuild side, environment variables provide a few account names, the environment, and the role that should be assumed in those accounts to deploy. My question is: how should the inventory structure and file contents be set up for this to work?
If you want to act on resources in a different account, the general idea in AWS is to "assume" a role in that account and run API calls as normal. I see that Ansible has a module, sts_assume_role, which helps to assume a role. I found the following blog article that may give you some pointers. Whether you run the ansible command on your laptop or in CodeBuild, the idea is the same:
http://www.drivenbydevops.io/aws-ansible-and-assumed-roles/
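To make that concrete, here is a minimal sketch of how a static inventory and the assume-role step could fit together; the group names, variable names (account_role_arn, env_name), paths, and ARNs are assumptions for illustration, not a prescribed layout:

```yaml
# inventory/dev.yml (hypothetical static inventory file for the dev account):
#   dev:
#     hosts:
#       localhost:
#         ansible_connection: local
#
# inventory/group_vars/dev (hypothetical group variables for the dev group):
#   account_role_arn: arn:aws:iam::111111111111:role/deploy-role
#   env_name: dev
#
# playbook.yml -- assume the per-account role, then call AWS with the temporary
# credentials. Requires boto3/botocore plus the amazon.aws and community.aws collections.
- hosts: all
  gather_facts: false
  tasks:
    - name: Assume the deployment role for this account
      community.aws.sts_assume_role:        # may resolve via amazon.aws in newer collections
        role_arn: "{{ account_role_arn }}"  # defined in group_vars per environment
        role_session_name: "deploy-{{ env_name }}"
      register: assumed

    - name: Example AWS call using the temporary credentials
      amazon.aws.aws_caller_info:
        access_key: "{{ assumed.sts_creds.access_key }}"
        secret_key: "{{ assumed.sts_creds.secret_key }}"
        session_token: "{{ assumed.sts_creds.session_token }}"
```

Running something like `ansible-playbook -i inventory/dev.yml playbook.yml` (or pointing -i at the whole inventory directory and limiting by group) would then deploy to whichever account the group variables describe.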
I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up the stage. This value can be passed as a parameter when executing sls commands.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo where only very specific people have access to it, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)}, and then in your config folder you create the prod.json file, but pointing to the real one in the new repo you created (see the sketch after these considerations). Note: this would make your project harder to maintain.
Considering you don't want your developers to execute your production environment locally, you can use the global variable that serverless-offline sets to block the execution. You could also simply ask them not to do so.
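To illustrate the stage and per-stage config file idea, here is a minimal serverless.yml sketch; the service name, runtime, file names, and config keys are placeholders rather than anything from the question:

```yaml
# serverless.yml -- hypothetical minimal example of per-stage configuration files
service: my-service             # placeholder service name

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}    # stage comes from `sls deploy --stage <name>`, defaults to dev

custom:
  # Loads config/dev.json, config/test.json, or config/production.json depending on stage.
  # production.json could live in (or be pulled from) a separate, access-restricted repo.
  settings: ${file(./config/${opt:stage, 'dev'}.json)}

functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: ${self:custom.settings.tableName}  # example key read from the stage config file
```

With a layout like this, `sls deploy --stage production` would pick up production.json, while developers working locally fall back to dev.json by default.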
Here is what should be a good practice and solution based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access. When your developers try to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
If the answer does not suffice, could you please clarify which type of resource(s) is sensitive in your Serverless project? I am taking for granted it is the DB, as that is the most common scenario.
I have been trying to find a solution for this but I need to ask you all. Do you know if there is a Windows desktop application out there which would put (real-time sync) objects from a local folder into a predefined AWS S3 bucket? This could work just one way: upload from local to S3.
Setting it up
Install the AWS CLI https://aws.amazon.com/cli/ for Windows.
Through the AWS website/console, create an IAM user with a strict policy that allows access only to the required S3 bucket (a sketch of such a policy appears after these steps).
Run aws configure in PowerShell or cmd and set up the region, access key and secret key for the IAM user that you created.
Test whether your setup is correct by running aws s3 ls in the command line and verify that you see a list of your account's S3 buckets.
If not, you probably configured the IAM permissions incorrectly; you might also need the ListBuckets permission (the s3:ListAllMyBuckets action) on all of S3.
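Here is a rough sketch of such a bucket-scoped policy, attached to the IAM user via the CLI; the user name, policy name, and bucket name are placeholders (quoting shown for a POSIX shell, so adjust it for PowerShell/cmd):

```sh
# Hypothetical names -- grants list/read/write/delete on a single bucket only.
aws iam put-user-policy \
  --user-name s3-sync-user \
  --policy-name s3-sync-single-bucket \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::mybucket"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
        "Resource": "arn:aws:s3:::mybucket/*"
      }
    ]
  }'
# Note: `aws s3 ls` with no arguments additionally needs s3:ListAllMyBuckets on Resource "*".
```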
How to sync examples
aws s3 sync path/to/yourfolder s3://mybucket/
aws s3 sync path/to/yourfolder s3://mybucket/images/
aws s3 sync path/to/yourfolder s3://mybucket/images/ --delete (the --delete flag deletes files on S3 that are no longer available on your local path)
Not sure what this has to do with Electron, but you could set up a trigger in your application to invoke these commands. For example, in atom.io or VS Code, you could bind this to saving a document with Ctrl+S.
If you are building an application using Electron then you should consider using the AWS JavaScript SDK instead of the AWS CLI, but that is a whole different story.
And lastly, back up your files somewhere else before trying potentially destructive commands such as sync, until you get a feel for how they work.
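If an editor hook feels too fragile, one alternative (my suggestion, not something the asker mentioned) is to run the one-way sync on a schedule with the Windows Task Scheduler; the task name, local path, and bucket are placeholders:

```bat
:: Hypothetical: run the upload-only sync every 5 minutes via Windows Task Scheduler (cmd).
schtasks /Create /SC MINUTE /MO 5 /TN "S3FolderSync" /TR "aws s3 sync C:\path\to\yourfolder s3://mybucket/"
```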
I am working on a Laravel project which is hosted on Amazon AWS. We are using the AWS Auto Scaling service, so new instances can be added or removed on the fly, and we are also using AWS CodeDeploy, so whenever a new instance is created it pulls the code from GitHub. We do not include the environment variable file in git, so a new instance will not have the environment variable file and the application will not be able to run. I also do not want to include the environment variable file in git, as that is not recommended. Even if I ignore best practices here and add the env file to git, there is still a problem: I have different branches with different env files, so when I merge the code it replaces the env file as well. So what are the best practices or solutions for this case?
FYI: we are not using Elastic Beanstalk. I am aware that Elastic Beanstalk has an option on the EB dashboard to add environment variables and the path where the env file will be created when a new instance launches, but we are using the Auto Scaling service, and according to my findings AWS does not provide such functionality for Auto Scaling.
There are several options to configure the env vars for an application.
Place the env files on S3, and on boot, in the user data/launch configuration for that environment's Auto Scaling group, pull down the config file for that environment (see the sketch after this list). To lock it down, allow the role for an environment access only to that environment's bucket.
Store the env vars in DynamoDB per environment, and on boot look them up in user data and set them as env vars (unless they contain secrets/connection strings, etc.; there is no way to store encrypted secrets in DDB).
Use Consul https://www.consul.io/
KV Store: Applications can make use of Consul's hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use.
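For the first option above (env file on S3 pulled at boot), a minimal user data sketch might look like the following; the bucket name, object key, application path, and file ownership are placeholders, and the instance role would need s3:GetObject on just that prefix:

```sh
#!/bin/bash
# Hypothetical launch template / launch configuration user data.
# Bucket, key and destination path are placeholders -- a Laravel app usually
# expects the file at the project root as .env.
ENVIRONMENT=production   # or bake a different value into each environment's launch config
aws s3 cp "s3://my-app-config/${ENVIRONMENT}/.env" /var/www/my-app/.env
chown www-data:www-data /var/www/my-app/.env
chmod 600 /var/www/my-app/.env
```

Since the question already uses CodeDeploy, the same copy command could instead run from a CodeDeploy lifecycle hook (for example AfterInstall in appspec.yml), which keeps it tied to the deployment rather than to the launch configuration.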