I hope this is a simple question.
Currently I have an Apache2 webserver on Ubuntu with multiple websites.
The basic structure of the Apache is
/etc
    /apache2
        /sites-available --> the .conf files for the websites
        /sites-enabled --> the enabled .conf file links for the websites
/var
    /www
        /html
            /sites-admin --> the location of the websites' code
My task is to create an auto-scaling group that adjusts with the load.
My thought is to mount an EFS file system under /var/www/html/efs_mount and store the websites' code there.
However, this creates two issues:
this approach does not accommodate adding websites, as I will have to update the AMI and launch template, and run an instance refresh, every time I add a website
when adding a website configuration to /etc/apache2/sites-available, we have to run a2ensite website.conf to enable it. As in issue #1, this requires an update to the AMI and launch template, as well as an instance refresh
Is there a way to work around this issue?
I know there's an option to use CodeDeploy with the in-place or replace approach. Are there any other options?
Thanks
Igal
This was resolved by mounting EFS under /mnt/efs/fs1, setting the Apache document root to /mnt/efs/fs1/www/html, and configuring Bitbucket Pipelines to trigger AWS CodeDeploy to deploy to that directory and, as part of the deployment script, run sudo systemctl reload apache2.
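For completeness, a minimal sketch of what that setup could look like on each instance. The file system ID, region and vhost file are placeholders, not anything from the original answer, and the final reload is what the CodeDeploy deployment script would run after copying new code onto the EFS share:

#!/bin/bash
# Mount the shared EFS file system so every instance in the ASG serves the same code.
# fs-12345678 and us-east-1 are placeholders for the real file system ID and region.
sudo mkdir -p /mnt/efs/fs1
echo 'fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs/fs1 nfs4 nfsvers=4.1,hard,timeo=600,retrans=2,noresvport 0 0' | sudo tee -a /etc/fstab
sudo mount -a

# Point Apache's document root at the shared code instead of /var/www/html
# (the <Directory> block in apache2.conf may need the same change).
sudo sed -i 's|/var/www/html|/mnt/efs/fs1/www/html|g' /etc/apache2/sites-available/000-default.conf
sudo systemctl reload apache2

# After that, the CodeDeploy hook script only needs to reload Apache once the pipeline
# has copied the new code into /mnt/efs/fs1/www/html:
# sudo systemctl reload apache2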
I have a recently deployed kolla-ansible stable/victoria environment with several services I wanted to try but no longer need (designate, octavia, etc.). What is the "right" way to remove these services? I have attempted:
kolla-ansible -i multinode reconfigure --tags <services>
kolla-ansible -i multinode reconfigure --tags common,haproxy,<services>
kolla-ansible -i multinode deploy --tags <services>
In each case I'm left with still-running containers, leftover configuration artifacts (/etc/kolla/.*.conf) and haproxy config files.
I know it's been a while since you posted this question, but I recently had the same problem and haven't found documentation about this anywhere.
The reason reconfigure and deploy don't do anything even if you set enable_<service> to no is that the Ansible playbooks only run tasks for a given service if its corresponding enable flag is true. If you look at the output of your commands run with --tags, you'll see that Ansible isn't really doing anything with regard to your disabled service.
Since Kolla-Ansible deploys everything in containers, I've found most services can simply be removed by doing the following (a rough shell sketch follows the list):
Stop and delete all the containers running the service to be removed
Delete those containers' volumes
Remove the configuration and log files (under /etc/kolla and /var/log/kolla respectively)
Remove databases used by the service you're deleting
Remove the HAProxy config files for each service you're removing
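A rough sketch of those steps for one service, using designate as an example. The container and volume name patterns, config paths and the database name are assumptions you should verify against your own deployment before running anything:

service=designate
# 1. Stop and delete the service's containers on each host that runs them
docker ps -a --format '{{.Names}}' | grep "^${service}" | xargs -r docker rm -f
# 2. Delete those containers' named volumes (check docker volume ls first)
docker volume ls --format '{{.Name}}' | grep "^${service}" | xargs -r docker volume rm
# 3. Remove the configuration and log files
sudo rm -rf /etc/kolla/${service}* /var/log/kolla/${service}
# 4. Drop the service's database (credentials come from /etc/kolla/passwords.yml)
# mysql -h <database_vip> -u root -p -e "DROP DATABASE ${service};"
# 5. On the controllers, remove the service's HAProxy config and restart haproxy
sudo rm -f /etc/kolla/haproxy/services.d/${service}.cfg
docker restart haproxy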
I know this is perhaps not in the spirit of automating OpenStack management with Ansible, but I've done this a few times without too many problems. I would avoid removing core services like Keystone, Neutron, Nova, MariaDB or RabbitMQ though, because if you're doing that you're destroying your entire OpenStack deployment anyway.
You can run the cleanup-host and cleanup-containers scripts on the hosts running your containers, but those remove everything related to Kolla-Ansible; if you only want to remove a specific service, you could modify those scripts. I'm aware that certain services like Nova, Neutron, Open vSwitch and Zun also reconfigure the host's networking, but I haven't found a reliable way to revert those changes, and cleanup-host/cleanup-containers don't address them either. If you stop and delete the openvswitch containers, Open vSwitch's interfaces go away on the next host reboot, which may be a viable method for you too. Remember that Kolla-Ansible loads the openvswitch kernel module persistently, so that's something else you may want to remove as well.
I was also struggling with this scenario recently and I've only found these:
https://bugs.launchpad.net/kolla-ansible/+bug/1874044
https://review.opendev.org/c/openstack/kolla-ansible/+/504592
Unfortunately, it seems work started on this some time ago, but not much progress has been made yet.
I am running Chef 13+ on Ubuntu on AWS in local mode via EC2 user data. I have a common role which installs/configures many common things for the organization.
Chef in local mode will create a nodes directory in the repo checkout. It then creates a private-IP.json file (named after the instance's private IP) that's used as a cache.
Everything is fine: I image the instance to an AMI and add it to the launch configuration for Auto Scaling.
However, under Auto Scaling I have to remove that private-IP.json file because each new instance gets a new private IP, thereby effectively deleting all the cache and work done before imaging.
One approach I have in mind is to rename the file and use some sed magic to replace IPs and hostnames, but I am thinking there must be a better, more Chef-based approach?
You would generally set the run list via the initial JSON (-j) or directly via -r, for both chef-solo and local mode.
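For example, a minimal local-mode invocation (the node name, JSON file and run list item are placeholders). Pinning the node name with -N means Chef writes nodes/webserver.json instead of a file named after the private IP, so the cached node data survives re-imaging and Auto Scaling:

# Local mode (-z) with a fixed node name (-N), initial attributes (-j) and run list (-r)
chef-client -z -N webserver -j first-boot.json -r 'role[common]'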
I have created an environment with Elastic Beanstalk with an EC2 instance with PHP installed: my files are in /var/www/html.
First I enabled auto scaling / load balancing, but when auto scaling triggered, it created another instance and terminated the old one. Then I realized the new one was not a clone of the old one: I lost all my config and my files, even though I had attached an SSD root volume in my EB config.
I tried again and created an AMI image which I included in my EB config (in Custom AMI ID). This time my config stays, but my /var/www/html folder is emptied and replaced by the default index.html files.
1. Is this supposed to happen? I thought auto scaling created a clone of the instance?
So I decided to disable auto scaling / load balancing and work in single-instance mode. But even when I just reboot my EC2 instance, the config is preserved, yet my whole /var/www/html folder is emptied again and only the default files are inside.
2. Why? There is an EBS volume attached to my instance (EB did that automatically), so this should not happen, if I understand correctly how it works.
Maybe it is the same issue for both but I really don't get why my files are deleted.
Thanks a lot for your help!
Romain
Auto Scaling uses an AMI to launch new instances, and AMIs are no more than snapshots of EC2 instances at a certain point in time. Because of this, every time Auto Scaling launches a new instance, any differences between the AMI and the current desired state must be applied at boot time, before the instance receives new traffic.
Elastic Beanstalk provides tools to manage application deployments integrated into the Auto Scaling dynamics and also to manage instance configuration. Sometimes these configurations become too complex to achieve during bootstrap using the EB tools, and that is when custom AMIs come in handy.
If you SSH into an autoscaled instance and start manually performing actions outside the Elastic Beanstalk toolstack's scope, all of those changes will be lost in the next Auto Scaling event unless you save an updated AMI from your instance and apply it to your Auto Scaling group.
I've been trying to get to grips with Amazon's AWS services for a client. As evidenced by the very n00bish question(s) I'm about to ask, I'm having a little trouble wrapping my head around some very basic things:
a) I've played around with a few instances and managed to get LAMP working just fine, but the problem I'm having is that the code I place in /var/www doesn't seem to be shared across those machines. What do I have to do to achieve this? I was thinking of a shared EBS volume and changing Apache's document root?
b) Furthermore what is the best way to upload code and assets to an EBS/S3 volume? Should I setup an instance to handle FTP to the aforementioned shared volume?
c) Finally I have a basic plan for the setup that I wanted to run by someone that actually knows what they are talking about:
DNS pointing to Load Balancer (AWS Elastic Beanstalk)
Load Balancer managing multiple AWS EC2 instances.
EC2 instances sharing code from a single EBS store.
An RDS instance to handle database queries.
Cloud Front to serve assets directly to the user.
Thanks,
Rich.
Edit: my solution, for anyone that comes across this on Google.
Please note that my setup is not finished yet, and the bash scripts I'm providing in this explanation are probably not very good; even though I'm very comfortable with the command line, I have no experience of scripting in bash. However, it should at least show you how my setup works in theory.
All AMIs are Ubuntu Maverick i386 from Alestic.
I have two AMI Snapshots:
Master
Users
git - Very limited access; runs git-shell so it can't be used for an interactive SSH session, but it hosts a git repository which can be pushed to or pulled from.
ubuntu - Default SSH account, used to administer the server and deploy code.
Services
Simple git repository hosting via ssh.
Apache and PHP, databases are hosted on Amazon RDS
Slave
Services
Apache and PHP, databases are hosted on Amazon RDS
Right now (this will change) this is how I deploy code to my servers:
Merge changes to master branch on local machine.
Stop all slave instances.
Use Git to push the master branch to the master server.
Log in as the ubuntu user via SSH on the master server and run a script which does the following:
Exports (git archive) the code from the local repository to a folder.
Compresses the folder and uploads a backup of the code to S3 with a timestamp attached to the file name.
Replaces the code in /var/www/ with the folder's contents and gives appropriate permissions.
Removes the exported folder from the home directory but leaves the compressed file containing the latest code intact.
Start all slave instances. On startup they run a script that does the following:
Apache does not start until the script triggers it.
Uses scp (secure copy) to copy the latest compressed code from the master to /tmp/www.
Extracts the code, replaces /var/www/, and gives appropriate permissions.
Starts Apache.
I would provide code examples, but they are very incomplete and I need more time. I also want all my assets (css/js/img) to be automatically pushed to S3 so they can be distributed to clients via CloudFront.
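For what it's worth, here is a rough, untested sketch of the two scripts described above. The repository path, S3 bucket and master host name are placeholders, and it assumes git, the S3 tooling and key-based SSH between master and slaves are already set up:

# deploy.sh - run as the ubuntu user on the master after a git push
STAMP=$(date +%Y%m%d%H%M%S)
mkdir -p ~/export
# Export (git archive) the code from the repository to a folder
git --git-dir=/home/git/site.git archive master | tar -x -C ~/export
# Compress the folder and upload a timestamped backup to S3
tar -czf ~/site-latest.tar.gz -C ~/export .
aws s3 cp ~/site-latest.tar.gz s3://my-backup-bucket/site-${STAMP}.tar.gz
# Replace the code in /var/www/ and give appropriate permissions
sudo rsync -a --delete ~/export/ /var/www/
sudo chown -R www-data:www-data /var/www/
rm -rf ~/export    # keep site-latest.tar.gz for the slaves to fetch

# slave-boot.sh - run at slave startup, before Apache is started
scp ubuntu@master.example.com:site-latest.tar.gz /tmp/www.tar.gz
sudo rm -rf /var/www/*
sudo tar -xzf /tmp/www.tar.gz -C /var/www/
sudo chown -R www-data:www-data /var/www/
sudo service apache2 start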
EBS is like a hard drive you can attach to one instance, basically a 1:1 mapping. S3 is the only shared storage offering in AWS; otherwise you will need to set up an NFS server or similar.
What you can do is put all your PHP files on S3 and then sync them down to a new instance when you start it.
I would recommend bundling a custom AMI with everything you need installed (Apache, PHP, etc.) and setting up a cron job to sync the PHP files from S3 to your document root. Your workflow would be: upload files to S3, let the server's cron job sync the files down.
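For example, a possible cron entry on each instance (the bucket name and schedule are placeholders, and it assumes the instance has S3 credentials and the AWS CLI or an equivalent sync tool installed):

# Sync the PHP code from S3 into the document root every 5 minutes
*/5 * * * * aws s3 sync s3://my-app-code /var/www/html --delete >> /var/log/s3-code-sync.log 2>&1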
The rest of your setup seems pretty standard.
When autoscaling my EC2 instances for an application, what is the best way to keep every instance in sync?
For example, there are custom settings and application files like the ones below...
Apache httpd.conf
php.ini
PHP source for my application
To get my autoscaling working, all of these must be configured the same on each EC2 instance, and I want to know the best practice for keeping these elements in sync.
You could use a private AMI which contains scripts that install software or check out the code from SVN, etc. The second possibility is to use a deployment framework like Chef or Puppet.
The way this works with Amazon EC2 is that you can pass user-data to each instance -- generally a script of some sort to run commands, e.g. for bootstrapping. As far as I can see CreateLaunchConfiguration allows you to define that as well.
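As an illustration, a minimal user-data sketch you could attach to the launch configuration; the package list, repository URL and paths are placeholders, not anything from the original answer:

#!/bin/bash
# Runs once at first boot on each instance launched by the Auto Scaling group
apt-get update -y
apt-get install -y apache2 php libapache2-mod-php subversion
# Pull the shared configuration and application code from version control
svn export --force https://svn.example.com/myapp/trunk/conf/apache2.conf /etc/apache2/apache2.conf
svn export --force https://svn.example.com/myapp/trunk/conf/php.ini /etc/php/7.4/apache2/php.ini   # adjust to your PHP version
svn export --force https://svn.example.com/myapp/trunk/src /var/www/html
service apache2 restart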
If running this yourself is too much of an obstacle, I'd recommend a service like:
scalarium
rightscale
scalr (also open source)
They all offer some form of scaling.
HTH