I created a CloudFormation stack based on the Drupal template.
That created an EC2 instance which I can see, but the template setup also asked for a MySQL root password.
So I expected to see a corresponding entry in "AWS Management Console -> Amazon RDS", but I don't.
Can someone explain where this MySQL database is, and why it isn't under RDS?
Presumably you selected Drupal Content Management System in the AWS CloudFormation Create New Stack wizard?
Depending on your region, this template resolves to e.g. https://s3.amazonaws.com/cloudformation-samples-eu-west-1/Drupal_Simple.template, which is accessible via the respective regional endpoint in turn, e.g. https://s3-eu-west-1.amazonaws.com/cloudformation-samples-eu-west-1/Drupal_Simple.template.
The name already implies that this is a simple approach only, and if you examine the template, you'll see that it indeed just Install[s] Apache Web Server, MySQL, PHP and Drupal in a single Amazon EC2 instance. Accordingly, the MySQL database is created by the MySQL server running on the very same EC2 instance you are already seeing, and it can be accessed there as usual, e.g. via SSH and the MySQL command-line tools, or via the Drupal database connection string targeting localhost:3306.
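For instance, checking the database from your workstation might look like this (the key file and public DNS name are placeholders for your own values):

ssh -i your-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
# on the instance, log in with the root password you entered in the wizard
mysql -u root -p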
What you are looking for is also available, though, within the collection of AWS CloudFormation Sample Templates, where you can find the following three variations:
Single EC2 Instance with local MySQL database (pretty similar to the 'Drupal_Simple.template')
Single EC2 Instance web server with Amazon RDS database instance (likely what you expected)
Highly Available Web Server with Multi-AZ Amazon RDS database instance and using S3 for storing file content (advanced version of what you expected)
I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple of things in my application.properties. I attempted to use environment variables to configure application.properties, but I hit a wall when trying to set those environment variables from CodeDeploy.
This is my application.properties:
# Spring resolves ${VAR:default} placeholders against environment
# variables, falling back to the value after the colon when unset.
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto=create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType=ALL
spring.data.dynamodb.entity2ddl.readCapacity=10
spring.data.dynamodb.entity2ddl.writeCapacity=1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered getting CloudFormation to set the environment variables, but couldn't find how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work and isn't the solution I'm looking for, since I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding either CodeDeploy or CloudFormation.
In general, it is bad practice to include access and secret keys in any sort of file or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
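A minimal sketch of such a policy (the region, account ID and table name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyAppTable"
    }
  ]
}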
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
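With the AWS SDK for Java (v1), for example, this means building the client without passing explicit credentials, so the default provider chain (which includes the instance profile) is used; a minimal sketch:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoClientFactory {
    public static AmazonDynamoDB create() {
        // Resolves credentials and region via the default chain:
        // env vars -> ~/.aws/ config -> EC2 instance profile metadata.
        return AmazonDynamoDBClientBuilder.standard().build();
    }
}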
To set up your development machine with these properties in a way the AWS SDK can retrieve them without explicitly putting them in properties files, run the aws configure command of the AWS CLI, which sets up your ~/.aws/ folder with the region and credentials to use on your dev machine.
I'm having a hard time deploying a Laravel app for test purposes on AWS Elastic Beanstalk.
I followed all the sources I could find on the web, including the AWS documentation.
Creating an Elastic Beanstalk environment and uploading an application is straightforward as long as I do not include .ebextensions and the .yaml file in it.
Based on Maximilian's tutorial, I created an init.config file inside .ebextensions with these contents:
container_commands:
  01initdb:
    command: "php artisan migrate"
The environment ends up in a degraded state as the update finishes, and I get the following logs:
[2018-11-20T23:14:08.485Z] INFO [7969] : Command processor returning results:
{"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"(TRUNCATED)...y exists\")\n/var/app/ondeck/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458\n\n2 PDOStatement::execute()\n/var/app/ondeck/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458\n\nPlease use the argument -v to see more details. \ncontainer_command 01initdb in .ebextensions/init.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI","returncode":1,"events":[]}],"truncated":"true"}
I have been trying different .config files from other instructional resources, but none of them seems to work.
I'm running:
Laravel Framework 5.7.5
EB Platform uses PHP 7.2 running on 64bit Amazon Linux/2.8.4
RDS uses MySQL 5.6.40
I really do not know what is going on and would appreciate any suggestions.
I finally found my way out. Here is some documentation for anyone who hits the same issue.
What I was trying to do...
My main objective was to test a Laravel 5.7 application on a live AWS Elastic Beanstalk (EB) server. I was also in need of a way to visualize data using phpMyAdmin, a tool that fits my needs. This is a very simple CRUD app, just for learning the basics of both technologies.
What I did (worked)
I followed the normal workflow of creating an EB application, mainly using the web console:
Name the application.
Choose PHP as the platform.
Start off with a base application (do not upload code yet).
Hit Configure more options.
In the Security card, select your key pair and save. (You'll need this for SSH'ing into your server.)
In the Database card, create an RDS instance. Select whatever options fit your needs and set a username/password.
Create the environment.
After a while, you should have all the resources created by EB (EC2 and RDS instances, security group, EIP, buckets, etc.) in the app environment.
Preparing your Laravel application is a straightforward process. You must not forget to change config/database.php to read the server's environment variables. My approach was to define them at the start of the file.
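When EB provisions an RDS instance for the environment, it exposes the connection settings to the instance as RDS_* environment variables; a sketch of the relevant mysql connection in config/database.php:

// config/database.php, inside the 'connections' array
'mysql' => [
    'driver'   => 'mysql',
    'host'     => env('RDS_HOSTNAME', '127.0.0.1'),
    'port'     => env('RDS_PORT', '3306'),
    'database' => env('RDS_DB_NAME', 'laravel'),
    'username' => env('RDS_USERNAME', 'root'),
    'password' => env('RDS_PASSWORD', ''),
],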
The main source of trouble is configuring your server instance to include all the software and configuration your app needs. This is done by including a .yaml file inside the .ebextensions folder, which should reside in the root directory of your Laravel application. It's also a good idea to check your YAML syntax before submitting another app version to EB. For my needs, I used this script, which basically installs phpMyAdmin as I deploy a new version. For this particular startup script, the environment variables $PMA_VER, $PMA_USERNAME and $PMA_PASSWORD must be defined for phpMyAdmin to work. You can create more environment variables in the Software tab of your EB configuration page. Read the docs.
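Alternatively, such variables can be declared in the .ebextensions file itself via the option_settings section (the values below are placeholders):

option_settings:
  aws:elasticbeanstalk:application:environment:
    PMA_VER: "4.8.3"
    PMA_USERNAME: "pma_admin"
    PMA_PASSWORD: "change-me"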
Another detail that might cause issues when running commands at startup via the YAML script (specifically migrations) is the combination of Laravel and MySQL versions. For example, I am using Laravel 5.7, and the default MySQL version option in the EB RDS creation wizard is something like 5.6.x. This combination throws errors of the type:
Illuminate\Database\QueryException : SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `users` add unique `users_email_unique`(`email`))
If this is your scenario, you have probably already googled it and found that adding the line Schema::defaultStringLength(191); to the boot method of your app/Providers/AppServiceProvider.php file will do the trick.
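For reference, the relevant part of that file:

// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Schema;

public function boot()
{
    // Keep default index keys within MySQL 5.6's 767-byte limit
    // (191 chars x 4 bytes for utf8mb4 = 764 bytes)
    Schema::defaultStringLength(191);
}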
You can then do a typical migration by passing this script:
container_commands:
  01_drop_tables:
    command: "php artisan migrate:fresh"
  02_initdb:
    command: "php artisan migrate"
This will drop existing tables, avoiding conflicts, and create new ones based on your code. You can read more logs from your server by SSH'ing in and checking the contents of /var/log/eb-activity.log.
One of my clients wants to understand IAM feature before migrating business application to Amazon cloud.
I have figured out two use cases which we can recommend to our client:
Resource-Level Permissions for EC2
• Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
• Control which users can terminate which instances (see the policy sketch below this list).
• Restrict a user's access to a single EC2 instance (currently not supported by Amazon's APIs).
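As an illustration of the terminate-instance point, a policy sketch scoped to a single instance (the region, account ID and instance ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }
  ]
}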
IAM Roles for Amazon EC2 resources
Command Line Usage
• Unix/Linux/Windows - Use the AWS Command Line Interface, a unified tool for managing AWS services. On an EC2 instance launched with IAM role support, the CLI can be used without specifying credentials explicitly.
Programmatic Usage
• Use the appropriate AWS SDK for your language of choice, configured without explicitly specifying credentials.
I would like to know other capabilities of IAM which we can recommend to our client and other use cases which you can recommend to us. Please let us know if any further explanation is required.
Any prompt response will be highly appreciated.
Thanks in advance
This is a very useful feature of AWS!
User Management - If you have a large team, you will have to give different users (developers, testers, deployment) different types of permissions, with access levels like S3 read-only, DynamoDB full access, etc.
Manage Users : http://aws.amazon.com/iam/details/manage-users/
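For example, attaching the AWS-managed S3 read-only policy to a user is a one-liner with the CLI (the user name is a placeholder):

aws iam attach-user-policy \
    --user-name some-developer \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess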
Not keeping credentials in code - If you use IAM roles, you can specify that, say, an EC2 instance should run under a given role. This helps you achieve things like "a cluster with access only to S3, not the DB".
IAM Roles for Amazon EC2 - Amazon Elastic Compute Cloud : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Handle release staging - This is a benefit of the role. You move apps through dev, QA, staging and prod; I usually keep different accounts for this. If you configure the EC2 instances to run on roles, then the stage differences can be handled without code changes. Just move the build from one account to another, and it works with no risk!
Lots of other benefits; see the product details:
Product Details : http://aws.amazon.com/iam/details/
THE MISSION:
I have a development environment running on an Amazon AWS EC2 virtual server which I want to have tested by third parties.
THE PROBLEM:
I do NOT trust the companies who will test it not to sabotage the environment and/or steal code. Therefore, I don't want them to know URLs or permanent IPs, or even to access the web pages, which they could eventually find with a crawler.
My environment includes web applications and socket servers. I do NOT want to expose the web applications; I want to give access only to the socket servers.
THE CONCEPT:
I have opted to use a secondary, impermanent Elastic IP pointing to the environment. This IP will be destroyed after 1 or 2 days, once basic tests have run. Subject to change (depending on suggestions from this thread).
THE QUESTION:
Can I create a secondary Elastic IP instance that allows access only to ports 5000-5100? If so, how?
THE ALTERNATIVE: In case this is not the most efficient procedure, what alternative would you propose?
MY SOLUTIONS: I followed the FAQ Launching Instance From Backup:
1. Create a snapshot.
2. Create an image from the snapshot (Snapshots menu -> Create Image).
3. Instances -> Launch Instance.
4. Choose the image created from the snapshot as your root volume.
5. Edit the security groups (open the port range for the sockets only, no web; see the CLI sketch after this list).
6. Delete all web code from this instance.
7. After 2 days, delete the instance.
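The security-group step (item 5) can also be done from the CLI; a sketch that opens only the socket port range (the security group ID and tester CIDR are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5000-5100 \
    --cidr 203.0.113.0/24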
I then followed Create Image From Instance:
1. Select (exclusively) the running instance you wish to mirror.
2. Right-click on the selected instance.
3. Choose Create Image from the dropdown.
Steps 4 to 7: same as above.
This second solution seems to be more stable (especially regarding status checks and connectivity issues).
Any better solutions? Thanks!
I have an AWS account with 14 instances, and I'm using Scalr. I added the API reference details and it showed up; at that time the instance count was pretty low. As I kept adding new instances, it accepted a few and rejected the rest. Now I have a newly created AWS instance which is not getting loaded into Scalr.
Any ideas?
Instances that you create using AWS will not show up in Scalr.
Instead, you create Farms (in Scalr) through the use of custom and/or pre-configured Scalr Roles. When you launch those farms/roles, it will launch the required instances in AWS. It's like a wrapper around AWS that provides extra features, but it will only ever know about instances that have been launched from a Scalr role.
It is possible to import an existing server into Scalr although it involves installing the scalarizr software onto that server and opening some ports. Full details can be found here. Once complete, you'll have a new role that you can add to a farm and then launch.