Supervisor won't automatically start when Ubuntu Server is booted - Laravel

I'm working with beanstalkd and supervisord for my Laravel project on a Homestead VM. Every time I vagrant up the Homestead VM, supervisord does not start. I have to run the command below manually for it to work:
sudo service supervisor start
The version I'm running is 3.0b2-1. I have also installed rcconf to check which services are started automatically at boot time, and supervisor is checked there as well.
Another thing I tried was using crontab to start the service. Below is the crontab entry I wrote:
@reboot root /usr/bin/supervisord -c /etc/supervisor/supervisord.conf
* * * * * php /home/vagrant/projects/llpm/artisan schedule:run --env=local >> /dev/null 2>&1
It still won't start automatically at reboot. Does anyone have a solution?

I've found the answer here.
Somehow it's caused by Vagrant. What I did was add the line below to Homestead/scripts/homestead.rb:
config.vm.provision "shell", inline: "service supervisor restart", run: "always"
After vagrant up, supervisor is booted up as well.
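A quick way to confirm the fix took effect (assuming the standard Homestead setup):
vagrant up
vagrant ssh -c "sudo service supervisor status"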

I would use:
supervisord -c '/etc/supervisord.conf'
in place of:
/usr/bin/supervisord -c /etc/supervisor/supervisord.conf
since the latter points to a config file that supervisor does not use (/etc/supervisor/supervisord.conf).
I hope it helps.
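If in doubt about which config file the service actually reads, the packaged init script is a quick place to check (assuming the Debian/Ubuntu supervisor package):
grep supervisord.conf /etc/init.d/supervisor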

Related

Start SSH daemon in Laravel Sail

I'm using Laravel Sail and have published the Dockerfile (so I can edit it manually). The reason is that I need to run an OpenSSH server on the container itself for a program I'm using.
I am able to install OpenSSH by adding && apt-get install -y openssh-server to the packages section, and I configure it with the following:
RUN mkdir /var/run/sshd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
EXPOSE 22
The issue, though, is that it does not start when the container starts.
According to this answer, you can add an ENTRYPOINT as follows:
ENTRYPOINT service ssh restart && bash
However, you can only have one ENTRYPOINT, and Laravel Sail has this by default:
ENTRYPOINT ["start-container"]
I have tried the following:
adding /usr/sbin/sshd to the start-container script - it didn't start
adding CMD ["/usr/sbin/sshd", "-D"] to the Dockerfile as per this answer - it gave a permission denied error when starting the container
As the PHP container uses supervisor, you can add another program to its config. Reference documentation for the config format can be found here.
Example
ssh.ini:
[program:sshd]
command=/usr/sbin/sshd -D
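A slightly fuller program definition might look like this; autostart, autorestart, and redirect_stderr are standard supervisor options, and the log path is an assumption:
[program:sshd]
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/sshd.log
Place the file where the container's supervisord includes configs; in Sail's php image that is typically /etc/supervisor/conf.d/ (an assumption worth verifying against your published Dockerfile).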

Auto boot MailHog on Ubuntu 20.04

I installed MailHog in my staging environment by following these steps:
sudo apt-get -y install golang-go
go get github.com/mailhog/MailHog
In order to manually start the service I do:
cd ~/go/bin
./MailHog
Since I'm using Laravel, I already have supervisor running for workers.
I'm wondering if there is a way to add a new .conf file in order to start MailHog.
I tried to follow how Laravel workers are started, but so far no luck:
[program:mailhog]
process_name=%(program_name)s_%(process_num)02d
command=~/go/bin/MailHog
user=ubuntu
stdout_logfile=/var/www/api/storage/logs/mailhog.log
I get mailhog:mailhog_00: ERROR (no such file) when I try to start supervisor.
I need a way to auto-boot MailHog, whether through supervisor or a service.
I'd really appreciate it if you can provide the "recipe" for starting MailHog from supervisor or by using a service.
I figured out how the complete installation/setup should look:
Downloading & installation
sudo apt-get -y install golang-go
go get github.com/mailhog/MailHog
Copy MailHog to the bin directory
sudo cp ~/go/bin/MailHog /usr/local/bin/MailHog
Create MailHog service
sudo tee /etc/systemd/system/mailhog.service <<EOL
[Unit]
Description=MailHog
After=network.target

[Service]
User=ubuntu
ExecStart=/usr/bin/env /usr/local/bin/MailHog

[Install]
WantedBy=multi-user.target
EOL
Note: Change the User=ubuntu to your username.
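Reload systemd so it picks up the new unit file:
sudo systemctl daemon-reload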
Check that the service is loaded successfully:
sudo systemctl status mailhog
Output
mailhog.service - MailHog
Loaded: loaded (/etc/systemd/system/mailhog.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Enable the service so it starts at boot:
sudo systemctl enable mailhog
Reboot system and visit http://yourdomain.com:8025/
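If you'd rather keep everything under supervisor instead, the ERROR (no such file) from the original attempt is most likely because supervisor does not expand ~ in command=, so an absolute path is needed. A minimal sketch (the log location is an assumption):
[program:mailhog]
command=/usr/local/bin/MailHog
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/var/log/mailhog.log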
You don't need supervisor; you can use Linux systemd to create a startup application.
systemd is the standard system and service manager in modern Linux. It is responsible for executing and managing programs during Linux startup.
Before that, add MailHog's directory to your PATH so you can call it by name alone:
export PATH=$PATH:/home/YOUR-USERNAME/go/bin
sudo systemctl enable mailhog
Or, if you are using a desktop environment, you can follow this:
https://askubuntu.com/questions/48321/how-do-i-start-applications-automatically-on-login

Kubernetes Installation with Vagrant & CoreOS and insecure Docker registry

I have followed the steps at https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html to launch a multi-node Kubernetes cluster using Vagrant and CoreOS.
But I could not find a way to set an insecure Docker registry for that environment.
To be more specific, when I run
kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080
on this setup, it tries to pull the image as if from a secure registry. But it is an insecure one.
I appreciate any suggestions.
This is how I solved the issue for now. I will add to this answer later if I can automate it in the Vagrantfile.
cd ./coreos-kubernetes/multi-node/vagrant
vagrant ssh w1 (and repeat these steps for w2, w3, etc.)
cd /etc/systemd/system/docker.service.d
sudo vi 50-insecure-registry.conf
Add the lines below to this file:
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
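As an alternative to creating the file by hand, newer systemd versions can generate the same drop-in for you:
sudo systemctl edit docker
This opens an editor on an override file under /etc/systemd/system/docker.service.d/.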
After adding this file, we need to restart the Docker service on this worker:
sudo systemctl stop docker
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
Now, docker pull should work on this worker:
docker pull <your-registry-host>:5000/api4docker
Let's try to deploy our application on the Kubernetes cluster one more time.
Log out from the workers and come back to your host.
$ kubectl run api4docker --image=<your-registry-host>:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"
When you get the pods, you should see the status Running:
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
api4docker-2839975483-9muv5   1/1     Running   0          8s
api4docker-2839975483-lbiny   1/1     Running   0          8s

Not able to access Kibana running in a Docker container on port 5601

I have built a Docker image with the following Dockerfile.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running the container from the image created by this Dockerfile as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana on port 5601 from my host machine. My browser says ERR_CONNECTION_REFUSED.
I am able to access port 5000 though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent image devdb/kibana uses a script to start Kibana and Elasticsearch when the Docker container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, Elasticsearch and Kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of running multiple processes in Docker containers. I recommend following the instructions in their README file to learn how to add your gunicorn to the list of services to start. Basically, you have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
# Run gunicorn in the foreground so the runit supervisor can manage it
cd /deploy/app
exec /usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
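One caveat worth noting: runit only executes run scripts that are marked executable, so you may also need a line such as:
RUN chmod +x /etc/service/gunicorn/run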

install java6 and tomcat7 on Amazon EC2

Ubuntu 10.10 is running on Amazon EC2.
I installed Java using:
sudo apt-get install openjdk-6-jdk
(more about openjdk6 https://launchpad.net/ubuntu/maverick/+package/openjdk-6-jdk)
Then I did the following to install Tomcat 7:
wget -c http://apache.petsads.us/tomcat/tomcat-7/v7.0.27/bin/apache-tomcat-7.0.27.tar.gz
sudo tar xvfz apache-tomcat-7.0.27.tar.gz -C /var
Then I see a folder called apache-tomcat-7.0.27 under /var.
Go to /var/apache-tomcat-7.0.27/bin and run:
sudo bash startup.sh
It looks like Tomcat starts successfully:
ubuntu@ip-XX-XXX-XX-XXX:/var/apache-tomcat-7.0.27/bin$ sudo bash startup.sh
Using CATALINA_BASE: /var/apache-tomcat-7.0.27
Using CATALINA_HOME: /var/apache-tomcat-7.0.27
Using CATALINA_TMPDIR: /var/apache-tomcat-7.0.27/temp
Using JRE_HOME: /usr
Using CLASSPATH: /var/apache-tomcat-7.0.27/bin/bootstrap.jar:/var/apache-tomcat-7.0.27/bin/tomcat-juli.jar
I did a test by doing:
sudo fuser -v -n tcp 8080
Then I got this result (it looks like Tomcat is up and running):
                     USER   PID   ACCESS  COMMAND
8080/tcp:            root   1234  F....   java
But if I type my server's address in the browser, I can't see the default Tomcat page.
Am I missing anything? I am open to any advice.
I followed some of the steps (not all of them) in http://www.excelsior-usa.com/articles/tomcat-amazon-ec2-java-stack.html#tomcat
The solution to this problem:
The instance is not owned by me.
I asked my friend to change the rule for port 8080 in the firewall configuration via his AWS Management Console.
Then it worked.
Without knowing exactly what your setup is, my first guess is that you need to open port 8080 in the security group for that instance. Go to Security Groups and either open it to 0.0.0.0/0 or to your specific IP (this depends on your security requirements for the server).
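If you prefer the command line over the console, the AWS CLI equivalent looks roughly like this (the group name is a placeholder):
aws ec2 authorize-security-group-ingress --group-name my-sg --protocol tcp --port 8080 --cidr 0.0.0.0/0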
