Simple bash script working on launch of EC2 instance in default VPC but not when launching into custom VPC

If I launch an EC2 instance (t2, Linux) with the user data script below into the default VPC, everything works: I can HTTP to my instance and see my HTML page. If I launch an EC2 instance of the exact same type and settings into my own custom VPC, it does not work. By "does not work" I mean that the /var/www/html directory is not even created. Being the newbie that I am, I can't see what the difference is. It's as if instances launched into the default VPC get some kind of permissions that I haven't set up in my custom VPC, but I don't see where that would be, since NACLs, security groups, internet gateways, route tables, subnets, etc. wouldn't have anything to do with this, I'd have imagined.
#!/bin/bash
# Install and start Apache (note: yum needs outbound internet access to fetch packages)
yum -y install httpd
systemctl start httpd
systemctl enable httpd
# Create the web root and drop in a test page
mkdir /var/www/html
cd /var/www/html
echo "<html><h1>This is a test</h1></html>" > index.html
chmod 755 index.html
I have also tried:
#!/bin/bash
mkdir /test
cd /test
echo "<html><h1>This is a tester</h1></html>" > index.html
chmod 755 index.html
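(For anyone debugging the same thing: one way to see whether the script ran at all is to check cloud-init's logs on the instance; these paths are standard on Amazon Linux.)
# On the instance, after boot:
sudo cat /var/log/cloud-init-output.log   # stdout/stderr of the user data script
sudo cat /var/log/cloud-init.log          # cloud-init's own run log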

Related

Can Cloudinit be used to automate complex configuration such as UFW and Apache

Cloud-init can handle basic configuration like creating users and groups, installing packages, mounting storage, and more (see Cloud Config Examples). But can it handle more complex tasks like the below, and if so, how? A minimal working example would be appreciated.
# KNOWN: Replicating the below user creation with sudo privileges and a home
# directory is possible through cloud-init
sudo adduser johnny
sudo usermod -aG sudo johnny
# KNOWN: Replicating the below public/private key creation is possible through
# cloud-init
ssh johnny@10.0.0.1 "ssh-keygen -t rsa"
# UNKNOWN: Is it possible to update the firewall rules in cloud-init, or should
# one simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo ufw enable
sudo ufw allow http
sudo ufw allow https"
# UNKNOWN: Is it possible to deploy LetsEncrypt certificates, or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
sudo service apache2 restart
sudo certbot --apache"
# UNKNOWN: Is it possible to clone and install git repositories, or should one
# simply SSH in afterwards like so
ssh johnny@10.0.0.1 "
GIT_NAME=johnny
GIT_EMAIL=johnny.rico@citizen.federation
git config --global user.name $GIT_NAME
git config --global user.email $GIT_EMAIL
git clone git@github.com:Federation/clandathu.git
cd clandathu/install
make --kill-em-all
sudo make install"
If you're referring specifically to cloud-config, then none of the unknowns you have listed have dedicated modules. However, you can run arbitrary shell commands via the runcmd module, or specify a plain shell script as your user data instead of a cloud-config; it just has to start with #! rather than #cloud-config. If you want both a cloud-config and a custom shell script, you can build a MIME multi-part archive with a cloud-init helper command.
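For example, a minimal sketch of a cloud-config that performs the UFW step via runcmd (assuming an Ubuntu image with ufw preinstalled; runcmd runs as root, so no sudo is needed):
#cloud-config
# Sketch only: run the firewall commands on first boot via the runcmd module
runcmd:
  - ufw allow http
  - ufw allow https
  - ufw --force enable
The --force flag skips ufw's interactive confirmation prompt, which would otherwise hang a non-interactive boot.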

Laravel all routes except '/' return 404 on AWS EC2

I'm trying to run a Laravel project on AWS EC2. It was working fine until I uploaded a new version to deploy. Now all routes return a 404 error except for '/', although all the routes exist. httpd.conf in /etc/httpd/conf contains this:
<Directory "/var/www">
AllowOverride All
# Allow open access:
Require all granted
</Directory>
I always execute these commands after deploying a new version
sudo chown -R ec2-user /var/app/current
sudo chmod -R 755 /var/app/current
I tried "sudo a2enmod rewrite" but I get "sudo: a2enmod: command not found"
Any solution?
Replacing the instance and re-deploying the app solved the problem.
After modifying /etc/httpd/conf/httpd.conf, you also need to restart httpd using sudo service httpd restart.
Refer to Laravel ReST API URL 404 not found on AWS EC2 in Apache + mySQL environment - The request URL was not found on this server
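As an aside, a2enmod is a Debian/Ubuntu tool, which is why it is not found on an Amazon Linux (httpd) instance. A short sketch of the equivalent checks there, assuming the stock httpd layout:
# Confirm mod_rewrite is loaded (it is enabled via config files, not a2enmod)
sudo httpd -M | grep rewrite
# Apply any change made to /etc/httpd/conf/httpd.conf
sudo service httpd restart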

Running a Bash Script (on Docker Container B) from Docker Container A

I have two Docker Containers configured through a Docker Compose file.
Docker Container A - (teamcity-agent)
Docker Container B - (build-tool)
Both start up fine. But as part of the build process in TeamCity, I would like the Agent (Container A) to run a bash script which lives on Container B (only B can run this script).
I tried to set this up using the SSH build step in Team City, but I get connection refused.
Further reading into it shows that SSH isn't enabled in containers and that I shouldn't really be trying to SSH into a container.
So how can I get Container A to run the script on Container B and see the output of the script on A?
What is the best practice for this?
The only way without modifying the application itself is through SSH. It is simply not true that you cannot SSH into a container; I SSH into a database container to run database exports inside it.
First, be sure openssh-server is installed on B. Then you must set up a passwordless (key-based) connection between A and B.
Also be sure to link your containers in the docker-compose file so you won't need to expose the SSH port.
Snippet to add to the Dockerfile for container B:
# Install the SSH server and authorize container A's public key
RUN apt-get update && apt-get install -q -y openssh-server
ADD id_rsa.pub /home/ubuntu/.ssh/authorized_keys
RUN chown -R ubuntu:ubuntu /home/ubuntu/.ssh ; \
    chmod 700 /home/ubuntu/.ssh ; \
    chmod 600 /home/ubuntu/.ssh/authorized_keys
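With that in place, A can run the script on B over the compose link and see its output locally; a sketch, assuming build-tool is B's service name in the docker-compose file and /opt/scripts/build.sh is a hypothetical path to your script:
# From container A: run the script on B; stdout/stderr stream back to A
ssh -o StrictHostKeyChecking=no ubuntu@build-tool bash /opt/scripts/build.sh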
You can also run the script from outside the containers entirely, using docker exec in a crontab on the host, but I think you are not looking for that extreme a solution.

Not able to access Kibana running in a Docker container on port 5601

I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running a container from the image created from this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana on port 5601 from my host machine; the browser page says ERR_CONNECTION_REFUSED.
I am able to access port 5000, though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent image devdb/kibana uses a script to start Kibana and Elasticsearch when the container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, Elasticsearch and Kibana are never started. That's why there is no response on their respective network ports.
The image you inherit from is itself based on phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn to the list of services to start. Basically, you have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
# runit expects the service to stay in the foreground; exec replaces the shell
# so that signals reach gunicorn directly
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
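One assumption worth checking: runit only starts run scripts that are executable, so if the file is not already executable in your build context you may also need:
RUN chmod +x /etc/service/gunicorn/run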

install java6 and tomcat7 on Amazon EC2

Ubuntu 10.10 is running on Amazon EC2.
I installed Java using
sudo apt-get install openjdk-6-jdk
(more about openjdk6: https://launchpad.net/ubuntu/maverick/+package/openjdk-6-jdk)
I did the following to install Tomcat 7:
wget -c http://apache.petsads.us/tomcat/tomcat-7/v7.0.27/bin/apache-tomcat-7.0.27.tar.gz
sudo tar xvfz apache-tomcat-7.0.27.tar.gz -C /var
Then I see a folder called apache-tomcat-7.0.27 under /var.
Go to /var/apache-tomcat-7.0.27/bin and run:
sudo bash startup.sh
It looks like Tomcat starts successfully:
ubuntu@ip-XX-XXX-XX-XXX:/var/apache-tomcat-7.0.27/bin$ sudo bash startup.sh
Using CATALINA_BASE: /var/apache-tomcat-7.0.27
Using CATALINA_HOME: /var/apache-tomcat-7.0.27
Using CATALINA_TMPDIR: /var/apache-tomcat-7.0.27/temp
Using JRE_HOME: /usr
Using CLASSPATH: /var/apache-tomcat-7.0.27/bin/bootstrap.jar:/var/apache-tomcat-7.0.27/bin/tomcat-juli.jar
I did a test by doing:
sudo fuser -v -n tcp 8080
then I got this result (it looks like Tomcat is up and running):
                     USER        PID ACCESS COMMAND
8080/tcp:            root       1234 F....  java
But if I type the address of my server into a browser, I can't see the default Tomcat page...
Am I missing anything? I am open to any advice.
I followed some of the steps (not all of them) in http://www.excelsior-usa.com/articles/tomcat-amazon-ec2-java-stack.html#tomcat
The solution to this problem:
The instance is not owned by me.
I asked my friend to change the rule for port 8080 in the firewall configuration via his AWS Management Console.
Then it worked.
Without knowing exactly what your setup is, my first guess is that you need to open port 8080 in the security group for that instance. Go to Security Groups and either open it to 0.0.0.0/0 or to your specific IP (this depends on your security requirements for the server).
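For reference, the same change can be made with the AWS CLI; a sketch in which the security group ID is a placeholder for your instance's actual group:
# Allow inbound TCP 8080 (narrow the CIDR to your own IP if security requires it)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 \
    --cidr 0.0.0.0/0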
