Prevent Gunicorn 30-second timeout using AJAX on Django

I configured my Django app with Gunicorn (in a virtual env) and Lighttpd.
When I process a lot of data, a 500 error occurs, due to Gunicorn's default timeout of 30 seconds.
I tried to use AJAX, thinking it would solve this issue, but the only (wrong) way I found to work around it is to add a --timeout of 300 seconds to my custom systemd service:
[Unit]
Description=django daemon
After=network.target
[Service]
User=root
ExecStart=/root/startDjango.sh
[Install]
WantedBy=default.target
with startDjango.sh:
#!/bin/bash
/opt/django_apps/venv36/bin/gunicorn --log-file=/opt/django_apps/mydjangoapp/logs/gunicorn.log --bind XXX.XXX.XXX.XXX:YYYY --chdir /opt/django_apps/mydjangoapp mydjangoapp.wsgi --timeout 300
Is there a way to solve this issue without increasing the Gunicorn timeout?

Related

Run Airflow HA Scheduler as systemd services

I want to run 2 Airflow schedulers, for which I created a systemd template service file ending with @.service.
Now if I try to run the service like
sudo systemctl start airflow-scheduler@{1..2}
Only one of the schedulers manages to run, while the other one runs into an error which says
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
My service file looks like this:
[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=myuser
Group=myuser
Type=simple
ExecStart=/usr/local/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
The problem was with the mysql-connector-python driver. I used mysqldb instead in the Airflow config file and it works fine now.
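For illustration, the change boils down to the driver prefix of the SQLAlchemy connection URL in airflow.cfg (host, credentials and database name below are placeholders):
[core]
# before, with the mysql-connector-python driver that triggered the NOWAIT lock error:
# sql_alchemy_conn = mysql+mysqlconnector://airflow:airflow@localhost:3306/airflow
# after, with the MySQLdb/mysqlclient driver:
sql_alchemy_conn = mysql+mysqldb://airflow:airflow@localhost:3306/airflow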

systemd service not being triggered by its timer unit

Here is my service unit definition
[Unit]
Description=My Service
[Service]
ExecStart=/bin/bash -lc /usr/local/bin//myservice
# ExecStop=/bin/kill -15 $MAINPID
EnvironmentFile=/etc/myservice/config
User=myuser
Group=mygroup
and its timer unit file
[Unit]
Description=Timer for myservice
[Timer]
Unit=myservice.service
OnCalendar=*-*-* 10:33:00
[Install]
WantedBy=timers.target
I have tentatively set OnCalendar to *-*-* 10:33:00 (followed by a sudo systemctl daemon-reload), but as I was watching my machine, I didn't see the service fire. I had also set it for 5 AM, but this morning I saw no evidence of execution.
When I perform a manual sudo systemctl start myservice it works as expected.
What might be preventing the service from executing according to its timer schedule?
You did not start the timer.
sudo systemctl start myservice.timer
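For completeness, a short sketch (same unit name as above) for making the timer persistent and verifying that it is actually scheduled:
sudo systemctl enable myservice.timer    # also start the timer automatically at boot
systemctl list-timers myservice.timer    # the NEXT/LEFT columns show when it will fire
systemctl status myservice.timer         # confirms the timer unit is loaded and active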

Instruct systemd to execute a unit only after another unit completes successfully

Friends.
I use cloud-config to install and configure a DCOS cluster.
Normally the "agentinstall.service" unit takes 5 minutes to complete.
Is it possible to instruct systemd to execute "agentconfigure.service" ONLY AFTER "agentinstall.service" has completed?
#cloud-config
coreos:
  units:
    - name: "agentinstall.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_setup
        After=network.target
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/tmp
        ExecStartPre=/bin/curl -o /tmp/dcos_install.sh http://bootstapnode-0.dev.myztro.internal:9090/dcos_install.sh
        ExecStartPre=/bin/chmod 755 dcos_install.sh
        ExecStart=/bin/bash dcos_install.sh slave
        [Install]
        WantedBy=multi-user.target
    - name: "agentconfigure.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_config
        After=agentinstall.service
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/opt/mesosphere/etc/
        ExecStartPre=/bin/echo "MESOS_ATTRIBUTES=cluster:uploader" >> /opt/mesosphere/etc/mesos-slave-common
        ExecStartPre=/bin/rm -f /var/lib/mesos/slave/meta/slaves/latest
        ExecStart=/usr/bin/systemctl restart dcos-mesos-slave
        [Install]
        WantedBy=multi-user.target
Thank you.
This is very much possible with systemd, using the After/Before keywords.
You can use something like the following: in agentconfigure.service, add the directive
After=agentinstall.service
Note, however, that After= only orders startup: with the default Type=simple, a unit counts as "started" as soon as its process has been launched, not when it has finished.
Since agentinstall.service takes 5 minutes to complete, you also have to set Type=notify in agentinstall.service and call sd_notify() from your application once its processing is done.
Based on that notification, systemd will then start the next service, i.e. agentconfigure.service.
See the systemd documentation on unit ordering and the sd_notify() man page for details.
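As a side note, not part of the answer above: when ExecStart is a run-to-completion script such as dcos_install.sh, an alternative to patching the application with sd_notify() is to declare the install unit Type=oneshot with RemainAfterExit=yes. systemd then only considers the unit started once the script has finished, so an After=/Requires= pair in the second unit effectively waits for completion. A minimal sketch reusing the unit names from the question:
# agentinstall.service (fragment): with Type=oneshot, "started" means the script finished
[Service]
Type=oneshot
RemainAfterExit=yes
User=root
WorkingDirectory=/tmp
ExecStart=/bin/bash dcos_install.sh slave
# agentconfigure.service (fragment): ordered after, and requiring, the install unit
[Unit]
After=agentinstall.service
Requires=agentinstall.service
With this layout, a failing dcos_install.sh marks agentinstall.service as failed, and Requires= then keeps agentconfigure.service from starting.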

Kubernetes Installation with Vagrant & CoreOS and insecure Docker registry

I have followed the steps at https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html to launch a multi-node Kubernetes cluster using Vagrant and CoreOS.
But I could not find a way to set an insecure Docker registry for that environment.
To be more specific, when I run
kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080
on this setup, it tries to pull the image as if the registry were secure, but it is an insecure one.
I appreciate any suggestions.
This is how I solved the issue for now. I will add more later if I can automate it in the Vagrantfile.
cd ./coreos-kubernetes/multi-node/vagrant
vagrant ssh w1 (and repeat these steps for w2, w3, etc.)
cd /etc/systemd/system/docker.service.d
sudo vi 50-insecure-registry.conf
Add the following lines to this file:
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
After adding this file, we need to reload systemd and restart the Docker service on this worker:
sudo systemctl stop docker
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
Now, docker pull should work on this worker:
docker pull <your-registry-host>:5000/api4docker
Let's try to deploy our application on the Kubernetes cluster one more time.
Log out from the workers and come back to your host.
$ kubectl run api4docker --image=<your-registry-host>:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"
When you get the pods, you should see their status as Running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api4docker-2839975483-9muv5 1/1 Running 0 8s
api4docker-2839975483-lbiny 1/1 Running 0 8s
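Until this is automated in the Vagrantfile, a rough host-side sketch of applying the same drop-in to every worker (worker names and the registry value are placeholders; reuse whatever value you put in the drop-in above):
#!/bin/bash
# run from ./coreos-kubernetes/multi-node/vagrant on the host
for w in w1 w2 w3; do
  vagrant ssh "$w" -c "
    sudo mkdir -p /etc/systemd/system/docker.service.d
    echo '[Service]' | sudo tee /etc/systemd/system/docker.service.d/50-insecure-registry.conf
    echo 'Environment=DOCKER_OPTS=--insecure-registry=\"<your-registry-host>/24\"' | sudo tee -a /etc/systemd/system/docker.service.d/50-insecure-registry.conf
    sudo systemctl daemon-reload
    sudo systemctl restart docker
  "
done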

Configure uwsgi and nginx using Docker

I have configured uwsgi and nginx separately for a Python production server, following this link. Each works fine on its own: uwsgi alone works, and nginx alone works. My problem is that I am planning to use Docker for this setup, and I am not able to run both uwsgi and nginx simultaneously, even though I am using a bash file. Below are the relevant parts of my configuration.
Dockerfile :
#python setup
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s mysite.conf /etc/nginx/sites-enabled/
EXPOSE 80
CMD ["/bin/bash", "start.sh"]
mysite.conf
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket
}
server {
    listen 80;
    server_name aa.bb.cc.dd; # ip address of the server
    charset utf-8;
    # max upload size
    client_max_body_size 75M; # adjust to taste
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file
    }
}
start.sh :
service nginx status
uwsgi --socket :8001 --module server.wsgi
service nginx restart
service nginx status # ------- > doesn't get executed :(
Can someone help me set this up using a bash script?
Your start.sh script risks ending immediately after executing those commands.
That would terminate the container right after starting it.
You would need at least to make sure the nginx start command does not exit right away.
The official nginx image uses:
nginx -g 'daemon off;'
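For example, a hypothetical revision of start.sh along those lines; since the Dockerfile above already appends "daemon off;" to nginx.conf, plain nginx stays in the foreground here:
#!/bin/bash
# sketch: run uwsgi in the background, keep nginx in the foreground so the
# script, and therefore the container, does not exit
uwsgi --socket :8001 --module server.wsgi &
exec nginx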
Another approach would be to keep your script as is, but use a supervisor as the CMD, declaring your script (or the individual processes) in /etc/supervisor/conf.d/supervisord.conf.
That way, you don't expose yourself to the "PID 1 zombie reaping issue": stopping your container will wait for both processes to terminate before exiting.
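A rough sketch of such a supervisord.conf, here declaring the two processes directly rather than wrapping start.sh (program names and options are assumptions):
[supervisord]
nodaemon=true
[program:uwsgi]
command=uwsgi --socket :8001 --module server.wsgi
autorestart=true
[program:nginx]
; "daemon off;" is already appended to nginx.conf by the Dockerfile, so nginx stays in the foreground
command=nginx
autorestart=true
The Dockerfile CMD then becomes an invocation of supervisord in no-daemon mode (for example supervisord -n); the exact path and flags depend on how supervisor is installed in the image.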
I think there is a very basic but important alternative worth pointing out.
Your initial scenario was:
Production environment.
Both uwsgi and nginx working fine alone.
TCP socket for uwsgi <=> nginx communication.
I don't think you should go with some complicated trick to run both processes in the same container.
You should simply run uwsgi and nginx in separate containers.
That way you achieve:
Functional Isolation: If you want to replace Nginx with Apache, you don't need to modify/rebuild/redeploy your uwsgi container.
Resource Isolation: You can limit memory, CPU and IO separately for nginx and uwsgi.
Loose Coupling: If you want, you could even deploy the containers on separate machines (you just need to make your upstream server URI configurable).
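For illustration, a minimal docker-compose sketch of that layout (service names, the build context and the volume path are assumptions); the upstream in mysite.conf would then point at uwsgi:8001 instead of 127.0.0.1:8001:
version: "3"
services:
  uwsgi:
    build: ./app    # hypothetical build context containing the Django project and uwsgi
    command: uwsgi --socket :8001 --module server.wsgi
    expose:
      - "8001"
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./mysite.conf:/etc/nginx/conf.d/mysite.conf:ro
    depends_on:
      - uwsgi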
