Restrict CLI deployment per environment in Laravel Vapor - laravel

Good Day.
I am trying to place restrictions on deploying to specific environments from the CLI for specific members of my team.
I am happy for them to deploy to our Dev environment locally, but I would like to restrict them from deploying to our Staging and Production environments.
I have read the documentation and tried to apply these restrictions through both the CLI and the dashboard, but it seems I can only restrict deployments across all environments for a given user.

At the moment, this is simply not possible.
Vapor (laravel/vapor-core v2.0) only allows you to apply permissions per user across all projects and environments, as confirmed in a discussion I had with one of the support technicians. The permissions you can edit live under the Team Settings menu on the Vapor Dashboard, and they apply to a specific user for all projects and their respective environments.

Related

Laravel App on AWS Elastic Beanstalk: How can I use a different database (on the same RDS) according to subdomain?

We are running a B2B business and we have a Laravel (v8) app running on AWS.
Before, I created different environments for different customer companies.
One subdomain for each customer company, each running in a different environment, following the configuration described here:
AWS: Pointing sub-domains to different elastic beanstalk environments
In the beginning it seemed perfect, but managing git branches, merges and then deployments on several machines proved too difficult to maintain and, furthermore, too expensive.
So, I decided to point all the subdomains to the same environment for production.
BUT, ALSO, I don't want the data of the customer companies to get mixed together; I want to keep them separate, even if on the same database server.
SO, FINALLY, I want my Laravel app to use a different database (on the same RDS instance) according to the subdomain.
1. Does it make sense?
2. How can I do it?
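Regarding question 2, one common approach in Laravel (a minimal sketch only; the middleware name, the mysql connection name and the tenant_ database prefix are assumptions, not from the question) is to switch the default connection's database in a middleware based on the request's subdomain:

    <?php

    namespace App\Http\Middleware;

    use Closure;
    use Illuminate\Http\Request;
    use Illuminate\Support\Facades\Config;
    use Illuminate\Support\Facades\DB;

    // Hypothetical middleware: pick a database on the same RDS instance from
    // the request subdomain (e.g. "acme.example.com" -> "tenant_acme").
    class UseTenantDatabase
    {
        public function handle(Request $request, Closure $next)
        {
            $subdomain = explode('.', $request->getHost())[0];

            // Only the database name changes; host and credentials stay shared.
            Config::set('database.connections.mysql.database', 'tenant_' . $subdomain);

            // Drop any connection already opened with the previous database name.
            DB::purge('mysql');

            return $next($request);
        }
    }

The middleware would be registered on the routes served by the customer subdomains; whether this layout makes sense for you (question 1) mostly comes down to how much isolation you need versus the overhead of migrating and backing up one database per customer.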

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up a stage. This value can be passed as a parameter when executing sls commands.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo that only very specific people have access to, and then reference it in your serverless.yml, for example:
custom: ${file(./config/${opt:stage, 'dev'}.json)} and then in your config folder you create the prod.json file, pointing to the real one in the new repo you created. Note: this would make your project harder to maintain.
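Spelled out a little more (the service name and runtime below are assumptions, not from the question), that reference sits in serverless.yml roughly like this:

    # serverless.yml (rough sketch)
    service: my-service
    provider:
      name: aws
      runtime: nodejs14.x
      stage: ${opt:stage, 'dev'}   # stage comes from the CLI, e.g. sls deploy -s prod

    # The whole custom section is loaded from a per-stage JSON file;
    # config/prod.json would just point at (or be pulled from) the restricted repo.
    custom: ${file(./config/${opt:stage, 'dev'}.json)}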
Since you don't want your developers to run your production environment locally, you can use the environment variable that serverless-offline sets (IS_OFFLINE) to block the execution. You could also simply ask them not to do so.
Here is what should be a good practice and solution for your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access. When a developer tries to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
If this answer does not suffice, could you please clarify which type of resource(s) are sensitive across your Serverless project? I am taking for granted it is the DB, as it is the most common scenario.

How can I have separate APIs for staging and production environments on Heroku?

I was just checking on how pipelines work in Heroku. I want the staging and production apps to be the same except that they should access different API endpoints.
How could I achieve that?
Heroku encourages getting configuration from the environment:
A single app always runs in multiple environments, including at least on your development machine and in production on Heroku. An open-source app might be deployed to hundreds of different environments.
Although these environments might all run the same code, they usually have environment-specific configurations. For example, an app’s staging and production environments might use different Amazon S3 buckets, meaning they also need different credentials for those buckets.
An app’s environment-specific configuration should be stored in environment variables (not in the app’s source code). This lets you modify each environment’s configuration in isolation, and prevents secure credentials from being stored in version control. Learn more about storing config in the environment.
On a traditional host or when working locally, you often set environment variables in your .bashrc file. On Heroku, you use config vars.
In this instance you might use an environment variable called API_BASE that gets set to the base URL of your staging API on your staging instance and to the base URL of your production API in production.
Exactly how you read those values depends on the technology you're using, but if you look for "environment variables" in your language's documentation you should be able to get started.
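For instance, assuming a PHP app (the question doesn't name a stack, so this is only an illustration), you would set the config var once per app, e.g. heroku config:set API_BASE=https://api-staging.example.com on the staging app, and then read it at runtime:

    <?php

    // API_BASE is a Heroku config var; the staging and production apps get
    // different values, while the code stays identical.
    $apiBase = getenv('API_BASE') ?: 'http://localhost:3000'; // local fallback (assumption)

    // '/v1/status' is a made-up endpoint used only for illustration.
    $response = file_get_contents(rtrim($apiBase, '/') . '/v1/status');

    echo $response, PHP_EOL;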

What are the possible capabilities of IAM in AWS?

One of my clients wants to understand the IAM feature before migrating a business application to the Amazon cloud.
I have figured out two use cases which we can recommend to our client; these are:
Resource-Level Permissions for EC2
• Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
• Control which users can terminate which instances (see the sketch after this list).
• Restricting a user's access to a single EC2 instance (currently not supported by Amazon APIs).
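As a sketch of the "which users can terminate which instances" point (the region, account ID, instance ID and policy name are placeholders), you could create a policy that only allows terminating one specific instance and attach it to the restricted users, for example with the AWS SDK for PHP:

    <?php

    require 'vendor/autoload.php';

    use Aws\Iam\IamClient;

    $iam = new IamClient([
        'region'  => 'us-east-1',  // assumed region
        'version' => 'latest',
    ]);

    // Policy allowing ec2:TerminateInstances only on one specific instance.
    $policyDocument = json_encode([
        'Version'   => '2012-10-17',
        'Statement' => [[
            'Effect'   => 'Allow',
            'Action'   => ['ec2:TerminateInstances'],
            'Resource' => ['arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0'],
        ]],
    ]);

    $iam->createPolicy([
        'PolicyName'     => 'TerminateSingleInstance', // hypothetical name
        'PolicyDocument' => $policyDocument,
    ]);

The resulting policy would then be attached to the relevant users or to a group via the IAM console, CLI or SDK.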
IAM Roles for Amazon EC2 resources
Command Line Usage
• Unix/Linux/Windows - Use the AWS Command Line Interface, which is a unified tool to manage AWS services. We can use the Command Line Interface from an EC2 instance launched with IAM role support without specifying credentials explicitly.
Programmatic Usage
• Use the appropriate AWS SDK for your language of choice. Configure it without specifying the credentials.
I would like to know other capabilities of IAM which we can recommend to our client and other use cases which you can recommend to us. Please let us know if any further explanation is required.
Any prompt response will be highly appreciated.
Thanks in advance
This is a very useful feature of AWS!
User Management - If you are a large team, you will have to give different users (developers, testers, deployment) different types of permissions, with access levels like S3 read-only, DynamoDB full-access, etc.
Manage Users : http://aws.amazon.com/iam/details/manage-users/
Not keeping credentials in code. If you use IAM roles, you can specify that a given EC2 instance should run under a given role. This will help you achieve things like "a cluster with access only to S3, not the DB".
IAM Roles for Amazon EC2 - Amazon Elastic Compute Cloud : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
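As a small illustration of that point (the region and bucket name are assumptions), with the AWS SDK for PHP you create the client without any credentials; on an EC2 instance launched with an IAM role, the SDK's default credential chain picks up temporary credentials from the instance profile automatically:

    <?php

    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    // No key/secret here: on an instance with an IAM role, the SDK falls
    // back to the instance profile for credentials.
    $s3 = new S3Client([
        'region'  => 'us-east-1',  // assumed region
        'version' => 'latest',
    ]);

    // 'my-bucket' is a placeholder bucket the role is allowed to read.
    $result = $s3->listObjectsV2(['Bucket' => 'my-bucket']);

    foreach ($result['Contents'] ?? [] as $object) {
        echo $object['Key'], PHP_EOL;
    }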
Handling release staging. This is a benefit of roles. You move apps through dev, QA, staging and prod; I usually keep different accounts for this. In this case, if you configure the EC2 instances to run on roles, the stage difference can be handled without code changes. Just move the build from one account to another, and it works with no risk!
Lots of other benefits:
Product Details : http://aws.amazon.com/iam/details/

Setting up a collaborative environment for web application development

My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a location to test development code on, and finally a way to push tagged code out to a staging server. What I'm thinking is svn/redmine for the code repo; each user has an account on a central development machine to allow for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into svn and later tagged and pushed out to the staging server. Yeah? Thoughts, comments or recommendations?
*Also, in a dev environment what is the best way to handle databases? Is it wise to pull from the production database? Also should each developer have his/her own db or work off a master db?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on Stack Overflow and flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this, as https://bitbucket.org/ offers unlimited data and private/public repos, so you can keep your codebase there. http://github.com is also powerful and the de facto most popular version-control-oriented tool out there, although it comes at a small price.
So your master branches live in your version control, and your devs will check out from there and commit to it as well.
Your deployment tools will deploy to your live and staging environments from your master branch.
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE so that everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well be the developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain to sync, so you had better have a script for it that syncs from LIVE to the other environments and prevents syncing from other environments to LIVE. This limitation also means that all configuration and content should be added on LIVE only and only then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (as we are talking Magento CE; Magento EE has migrations built in).
For deployment I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
It's also a good idea to target the following everyday tasks:
The client needs to reset the stage environment for their tests.
Project managers, developers or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for a specific test only), as should deleting the test.
Third-party devs might need access to a specific test or dev environment (this is common with Magento, as on average there are at least 10 external extensions installed in every Magento store).
